Back in 2019, Google proudly announced that it had achieved what quantum computing researchers had been striving toward for years: proof that the exotic technique can surpass traditional computing. But that demonstration of "quantum supremacy" is now disputed by researchers who claim to have beaten Google on a relatively conventional supercomputer.
To be clear, no one is saying Google lied or misrepresented its work — the painstaking and groundbreaking research that led to the 2019 quantum supremacy announcement is still extremely important. But if this new paper holds up, the contest between classical and quantum computing is still anyone's game.
You can read the full story of how Google turned quantum theory into reality in the original article, but here is the very short version: quantum computers like Sycamore are not yet better than classical computers at much of anything, with one possible exception — simulating a quantum computer.
That sounds like a cop-out, but the point of quantum supremacy is to demonstrate a method's viability by finding at least one highly specific, even strange task that it can perform better than the fastest supercomputer, because doing so gives quantum computing a foothold from which to expand that library of tasks. Perhaps eventually every task will run faster on quantum hardware, but for Google's purposes in 2019 one task was enough, and the company showed in great detail how and why it was done.
Now, a Chinese Academy of Sciences team led by Pan Zhang has published a paper describing a new technique for simulating a quantum computer (specifically, certain noise patterns in one) that appears to take a tiny fraction of the time the 2019 work estimated a classical calculation would need.
Not being a quantum computing expert or a professor of statistical physics, I can only give a general sense of the method Zhang and his colleagues used. They cast the problem as a large 3D network of tensors, with Sycamore's 53 qubits represented by a grid of nodes extruded 20 times over to stand for the 20 cycles the Sycamore gates went through in the simulated process. The mathematical relationships among these tensors (each a set of interrelated vectors) were then computed using a cluster of 512 GPUs.
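For the curious, the core idea above — treating a circuit as a network of small tensors and "contracting" them along shared indices — can be sketched in a few lines. This is a toy illustration, not the authors' actual code: the tensor names, shapes, and the single contraction are all invented for the example, and the real difficulty in such papers lies in choosing a good contraction order for millions of tensors.

```python
# Toy sketch of tensor-network contraction (illustrative only).
# Each qubit state and gate is a small tensor; simulating the circuit
# amounts to summing over the indices the tensors share.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tensors: two "qubit" tensors sharing a bond index,
# plus a two-qubit "gate" tensor. All shapes are illustrative.
qubit_a = rng.normal(size=(2, 2))        # indices: (a, b)
qubit_b = rng.normal(size=(2, 2))        # indices: (b, c)
gate = rng.normal(size=(2, 2, 2, 2))     # indices: (a, c, i, j)

# Contract the network: sum over the shared indices a, b, c,
# leaving the open output indices i, j.
result = np.einsum("ab,bc,acij->ij", qubit_a, qubit_b, gate)

print(result.shape)  # a 2x2 tensor of output amplitudes
```

At the scale of the Sycamore simulation, the same operation is distributed across hundreds of GPUs, and the order in which the pairwise contractions happen dominates the total cost.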
The original Google paper estimated that running a simulation of this scale on the most powerful supercomputer then available (Summit, at Oak Ridge National Laboratory) would take about 10,000 years — although, to be clear, that was the estimate for 54 qubits performing 25 cycles. Fifty-three qubits doing 20 cycles is significantly less complex, but the team estimated it would still take on the order of a few years.
Zhang's group claims to have done it in 15 hours. And with access to a proper supercomputer like Summit, they say, it could be done in a few seconds — faster than Sycamore. Their paper will be published in the journal Physical Review Letters; you can read it here (PDF).
These results have yet to be fully verified and replicated by those who understand such things, but there is no reason to suspect error or fraud. Google has even conceded that the baton may pass back and forth several times before supremacy is firmly established, since quantum computers are incredibly difficult to build and program while classical machines and their software are constantly improving. (Others in the quantum world were skeptical of Google's claims from the start, though some of those critics are direct competitors.)
As University of Maryland quantum computing researcher Dominic Hangleiter told Science, this is in no way a black eye for Google or a knockout blow for quantum computing in general: "The Google experiment did what it was supposed to do and started this race."
Google may well hit back with new claims of its own — it hasn't been standing still either — and I've reached out to the company for comment. But the fact that the race is even competitive is good news for everyone involved; it's an exciting area of computing, and work like Google's and Zhang's keeps raising the bar for everyone.
Credit: techcrunch.com