Math may have caught up with Google’s quantum-supremacy claims

In 2019, word filtered out that a quantum computer built by Google had performed calculations that the company claimed would be effectively impossible to replicate on supercomputing hardware. That claim turned out to be not entirely accurate: Google had neglected to consider the storage attached to supercomputers, and factoring it in cut the quantum computer's lead to a matter of days.

Still, adding a handful of additional qubits would have re-established the quantum computer's vast edge. Recently, however, a draft manuscript posted on arXiv pointed out an important caveat: Google's claims relied on comparisons with one very specific approach to performing the computation on standard hardware. There are other ways to do the calculations, and the paper argues that one of them would let a supercomputer actually pull ahead of its quantum competitor.

more than one road to random


The calculations done by Google were specifically designed to be hard to simulate on a normal computer. The company set the 54 qubits of its Sycamore processor to a random state, then let quantum interference among neighboring qubits influence how the system evolved over time. After a short interval, the hardware repeatedly measured the state of the qubits. Each individual measurement produced a string of random bits, making Sycamore a very expensive random-number generator. But if enough measurements are made, patterns resulting from the quantum interference become apparent.
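To make random-circuit sampling concrete, here is a toy sketch in Python with NumPy (entirely our own construction, far smaller than Sycamore, and not Google's code): it applies layers of random single-qubit rotations and entangling CZ gates to a few qubits, then prints the resulting output distribution, which interference makes visibly non-uniform.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                     # toy system; Sycamore has dozens of qubits
dim = 2 ** n

# Start in |0...0>.
state = np.zeros(dim, dtype=complex)
state[0] = 1.0

def random_su2(rng):
    # Roughly Haar-random 2x2 unitary via QR of a complex Gaussian.
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_1q(state, gate, q):
    # Reshape so qubit q is its own axis, then apply the 2x2 gate.
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=1)
    return np.moveaxis(psi, 0, q).reshape(dim)

cz = np.diag([1, 1, 1, -1]).reshape(2, 2, 2, 2)

def apply_cz(state, q1, q2):
    # Two-qubit controlled-Z gate on qubits q1, q2.
    psi = np.moveaxis(state.reshape([2] * n), (q1, q2), (0, 1))
    psi = np.tensordot(cz, psi, axes=2)
    return np.moveaxis(psi, (0, 1), (q1, q2)).reshape(dim)

for _ in range(6):                      # six circuit layers
    for q in range(n):
        state = apply_1q(state, random_su2(rng), q)
    for q in range(0, n - 1, 2):        # entangle neighbors
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
print(probs.round(3))
```

A uniform random-number generator would assign every bitstring the same probability (here, 1/16); the lopsided "speckle" in these probabilities is the kind of pattern that enough repeated measurements would reveal.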

Because the rules of this quantum interference are understood, it's also possible to calculate the patterns we should see in the random numbers generated by Sycamore. But doing those calculations is very computationally expensive, and the cost grows with each additional qubit in the system. Google estimated that performing them on the world's most advanced supercomputers would take an unrealistically long time.
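A back-of-envelope illustration of why the cost balloons (our own arithmetic, not Google's estimate): a brute-force simulation keeps one complex amplitude per basis state, and the number of basis states doubles with every qubit added.

```python
# A brute-force state-vector simulation stores 2**n complex amplitudes
# for n qubits, at 16 bytes each (complex128). Memory doubles with
# every added qubit, so the requirement goes from trivial to absurd
# well before Sycamore's scale.
for n in (20, 40, 54):
    mem_bytes = (2 ** n) * 16
    print(f"{n} qubits: {mem_bytes:.3e} bytes")
```

At 54 qubits that is roughly 288 petabytes of amplitudes, which is why the storage attached to the largest supercomputers ended up mattering so much in this debate.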


One flaw in this argument, pointed out early on, was that Google had neglected the storage attached to the world's largest supercomputers, which would significantly cut into Sycamore's lead. But the basic point remained: the computations were too hard for classical computing hardware.

The new manuscript focuses on a key part of that phrase: these computations. Google chose a very specific way of computing the expected behavior of its processor, but there are other ways of doing equivalent calculations. In the intervening time, some alternatives have been explored that perform better. Now, Feng Pan, Keyang Chen, and Pan Zhang are describing a specific method that allows a GPU-based cluster to produce an equivalent output in just 15 hours. Run on a leading supercomputer, they estimate, it would outperform the Sycamore quantum processor.

some light-on-math details

There are a number of ways to look at what Pan, Chen, and Zhang accomplished. We'll try three of them, getting progressively deeper into the math as we go.

The simplest way to look at this is by reference to the output Sycamore produced. Individual measurements of the qubits in the Sycamore processor produced strings of what appeared to be random ones and zeros, but patterns became apparent once enough measurements were run from a single starting state of the processor. If you set up a classical calculation that replicates Sycamore's physics, you'll get the same level of randomness, with similar patterns.

What the new paper describes is a way of trading away some fidelity in the calculated reconstruction of the processor's behavior in exchange for much faster computation. In other words, the new calculations don't exactly replicate Sycamore's behavior, but they still produce output with similar patterns and inherent randomness, and they can be completed far more quickly.

That's one way of understanding it. A second involves considering how the initial state of the Sycamore processor evolves into its state at the point of measurement. There are many possible paths from one to the other and, being a quantum system, it explores them all. Getting a fully accurate model of Sycamore's output therefore means tracing every path, which is extremely computationally intensive. Pan, Chen, and Zhang found a way of limiting the paths that are traced, which simplifies the calculations while still producing equivalent output.

Those of you who want to avoid math should now skip ahead to the next section heading. The actual computational method describes the interactions of Sycamore's qubits as a three-dimensional tensor network, in which the tensors capture the relationships among the qubits' properties. The algorithm then simplifies the network by severing some of its connections, which the researchers describe as the equivalent of drilling holes through the three-dimensional network.

Each hole you drill cuts the calculated fidelity in half. But it also makes the fidelity fully adjustable: you can ensure a sufficiently faithful recapitulation of Sycamore's behavior simply by limiting the number of holes you drill. Where those holes went within the network was dictated by the physical structure of the Sycamore chip itself.
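As a toy illustration of the hole-drilling idea (a deliberately tiny stand-in of our own, not the paper's three-dimensional network or actual algorithm), the sketch below contracts a three-tensor network exactly, then again with one dimension-2 bond cut down to a single slice, and compares the fidelity of the two resulting states.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_t(*shape):
    # Random complex tensor standing in for a circuit tensor.
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

# A tiny three-tensor chain producing a 3-qubit state.
# a, c, e are physical (qubit) legs; b, d are bonds of dimension 2.
A = rand_t(2, 2)       # A[a, b]
B = rand_t(2, 2, 2)    # B[b, c, d]
C = rand_t(2, 2)       # C[d, e]

# Exact contraction sums over every value of every bond.
exact = np.einsum('ab,bcd,de->ace', A, B, C).reshape(8)

# "Drilling a hole": sever bond d by keeping only its d=0 slice,
# loosely analogous to what the paper does to a far larger network.
sliced = np.einsum('ab,bc,e->ace', A, B[:, :, 0], C[0, :]).reshape(8)

def normalize(v):
    return v / np.linalg.norm(v)

fidelity = abs(np.vdot(normalize(sliced), normalize(exact))) ** 2
print(f"fidelity after one hole: {fidelity:.3f}")
```

For random tensors, cutting a dimension-2 bond discards about half the amplitude on average, which is the sense in which each hole drilled halves the fidelity of the reconstruction.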

The resulting contracted tensor network was relatively easy to model, although the researchers had to break the job into sub-tasks that could fit on the system they were working with. They then used their algorithm to model the behavior of small networks of Sycamore's qubits and showed that it produced results accurate to within the expected fidelity range.

