
Google's Willow and the Future of Quantum Computing

Philip Ball

Google’s latest quantum-computing chip, called Willow, is stirring up excitement in a field that has seemed rather quiet in recent months. For all the big claims that have been made about quantum computing revolutionizing information technologies and AI, the successes of the past decade or two in making prototype working devices have given way to a phase of laborious engineering to get them to the point where they can do anything useful. Willow, unveiled in a paper in Nature on 9 December, has passed a performance threshold showing that quantum computers can indeed be scaled up without being overwhelmed by technical difficulties. The result “cracks a key challenge that the field has pursued for almost 30 years,” says Hartmut Neven, a leader of the project and founder of Google Quantum AI in Mountain View, California.

 

Many researchers in the field seem to accept such bold claims. Chao-Yang Lu, a leading figure in quantum information technology at the University of Science and Technology of China in Shanghai, told Nature that the work is “a truly remarkable technological breakthrough.” And computer scientist Scott Aaronson of the University of Texas at Austin, a reliably hard-headed commentator on quantum computing, has called it “a real milestone for the field.” So what’s the big deal?

 

It's all about correcting errors. All computers incur random errors because, at the scale of the tiny components that encode information as “bits” of binary code (1s and 0s), the world is a noisy place. In conventional computers, each bit is stored in transistors that act as switches: either “on” (1) or “off” (0). Every now and then, a switch might get thrown by some random influence, such as electrical noise in the circuits, turning a 1 into a 0, say. That could scupper a calculation, especially if uncorrected errors accumulate and multiply as the computation proceeds. Typically, computer circuits are protected against such errors by storing each bit in multiple copies. With three copies, it can be assumed that if at least two of them are the same then that’s the correct value, since it’s unlikely that random errors will strike twice in the same place.
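
For illustration, here is a minimal sketch of that majority-vote idea in Python (the function names are hypothetical, and real memory chips do this in hardware rather than software): three copies of a bit are stored, noise occasionally flips one of them, and reading the bit back takes the majority.

```python
# A minimal sketch of classical triple-redundancy error correction:
# store each bit three times and take a majority vote when reading it back.
import random

def store_with_redundancy(bit):
    """Keep three copies of the bit."""
    return [bit, bit, bit]

def flip_randomly(copies, error_rate=0.01):
    """Simulate noise: each copy flips independently with some small probability."""
    return [b ^ 1 if random.random() < error_rate else b for b in copies]

def read_with_majority_vote(copies):
    """Recover the stored value, assuming at most one copy was corrupted."""
    return 1 if sum(copies) >= 2 else 0

copies = flip_randomly(store_with_redundancy(1))
print(read_with_majority_vote(copies))  # almost always prints 1
```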

 

Quantum computers store information in a different way: in components (quantum bits, or qubits) that are governed by the rules of quantum physics. As well as being in a state that will give either a 1 or a 0 when measured, a qubit can be placed in a “superposition,” in which a measurement could give either a 1 or a 0, each with a specific probability. In effect, a single qubit has not just two binary states available to it but an infinite number, each with a different “mixture” of 1 and 0. Because of that massive expansion of possibilities, a quantum computer can carry out certain complicated calculations using just a few qubits in a fraction of the time that it would take a classical computer to do the same task using millions of ordinary bits.
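
As a rough illustration (a pen-and-paper sketch, not how real quantum hardware is programmed), a qubit’s state can be written as two complex “amplitudes” whose squared magnitudes give the probabilities of measuring 0 or 1:

```python
# A rough sketch of a single-qubit state as a pair of complex amplitudes.
# The squared magnitudes give the probabilities of measuring 0 or 1,
# and they must sum to 1. Illustrative only -- not real quantum hardware.
import numpy as np

# An equal superposition of 0 and 1.
amplitude_0 = 1 / np.sqrt(2)
amplitude_1 = 1 / np.sqrt(2)

prob_0 = abs(amplitude_0) ** 2
prob_1 = abs(amplitude_1) ** 2
print(round(prob_0, 3), round(prob_1, 3))  # 0.5 0.5 -- either outcome equally likely

# Any normalised mixture is allowed, which is why a single qubit has
# infinitely many possible states rather than just two.
theta = 0.3
print(round(abs(np.cos(theta)) ** 2 + abs(np.sin(theta)) ** 2, 10))  # 1.0
```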

 

This is why quantum computers are advertised as being much more powerful than conventional ones. But there’s an important proviso: this speed-up doesn’t apply to every type of calculation. There are certain kinds of difficult computational task that can be handled easily and efficiently on quantum computers, especially ones that a classical machine can only tackle by plodding through each possible outcome in turn. Factorising large numbers is like this, and one of the first quantum algorithms, Peter Shor’s factoring algorithm, devised back when quantum computers were just a theoretical possibility, can be used for that. But other computational tasks, such as handling graphics, don’t offer any obvious “quantum advantage.” So quantum computers won’t necessarily transform all of computing, and it’s not obvious that you’d benefit from quantum chips in your laptop.

 

All the same, there is a lot that quantum computers can potentially do far better than classical computers, for example in some areas of finance, simulations of complex materials, and searching large data banks. There is also intense interest in finding quantum algorithms that can be used for artificial intelligence – witness Neven’s role at Google. So there’s surely a huge potential market for these machines, which is why companies like IBM, Microsoft, and Google are investing heavily in the area. From the early days in the 2010s, when working quantum computers had just a handful of qubits, we’re now at the stage where some quantum chips have over a thousand: IBM’s Condor chip has 1,121. The more qubits, potentially the greater the processing power – although it’s not just about numbers of qubits but also about their quality, as we’ll see.

 

And in any case, I say “potentially” here because of that problem of errors. In what could seem a stroke of irony on nature’s part, the advantages in speed and processing power offered by using quantum rules are offset by the fact that those very same rules prohibit the correction of errors in the simple way used for classical computers: by keeping multiple copies of a bit. A quantum computation involves placing all the qubits into a kind of collective superposition called an entangled state, in which the state of each qubit depends on those of all the others. This is a very delicate situation to sustain, especially as ever more qubits are entangled together. It’s partly why the qubits have to be kept very cold: most of the bulk of today’s quantum computers is the cryogenics. Even then, qubits typically stay entangled for only a fraction of a second, so a computation has to be performed quickly.
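
To see what that interdependence means, the following toy calculation (plain NumPy, not a quantum-computing library) builds the simplest entangled state of two qubits and shows that their measurement outcomes become perfectly correlated: you only ever get 00 or 11, never 01 or 10.

```python
# A toy two-qubit entanglement demo in plain NumPy: a Hadamard gate on the
# first qubit followed by a CNOT leaves the pair in a "Bell state" whose
# measurement outcomes are perfectly correlated.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips qubit 2 if qubit 1 is 1

state = np.array([1.0, 0.0, 0.0, 0.0])         # both qubits start as 0
state = CNOT @ np.kron(H, I) @ state           # entangle them

probabilities = np.abs(state) ** 2
for outcome, p in zip(["00", "01", "10", "11"], probabilities):
    print(outcome, round(p, 3))                # 00 and 11 each 0.5; 01 and 10 never occur
```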

 

The catch is that, while a qubit is in an entangled state, it’s not possible to know whether it’s a 1 or a 0 or some mixture. And a quantum rule called the no-cloning theorem says that it’s impossible to make a copy of an unknown quantum state. So qubit states during a computation can’t be duplicated, even in principle, to make the calculation robust against random errors.
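
For readers who want to see why, here is the textbook one-step argument (nothing specific to Willow), written out in LaTeX notation: a universal copying operation would have to be linear, and linearity is incompatible with copying superpositions.

```latex
% Textbook sketch of the no-cloning argument (not specific to Willow).
% Suppose a single operation U could copy any unknown qubit state:
\[
  U\bigl(|\psi\rangle \otimes |0\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle .
\]
% Quantum operations are linear, so for |\psi\rangle = (|0\rangle + |1\rangle)/\sqrt{2}
% linearity forces
\[
  U\Bigl(\tfrac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr)\otimes|0\rangle\Bigr)
  = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr),
\]
% whereas a genuine copy of that superposition would instead be
\[
  |\psi\rangle\otimes|\psi\rangle
  = \tfrac{1}{2}\bigl(|00\rangle + |01\rangle + |10\rangle + |11\rangle\bigr).
\]
% The two results differ, so no such universal copier can exist.
```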

 

Ways to get around this problem, to enable quantum error correction, have been a major focus of the field since even before the first true prototype devices were created. The first algorithm for quantum error correction was devised back in 1995, and it involved spreading the information (the 1 or 0, you might say) across several of the qubit devices (the “physical qubits”) so that they act as a single “logical qubit” that can’t easily be corrupted by noise in the system.
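
To make the idea concrete, here is a toy simulation of the simplest such scheme, the three-qubit bit-flip code (a building block of the nine-qubit code proposed in 1995, and a much simpler cousin of the “surface code” that Willow runs). Everything below is illustrative, simulated in plain NumPy rather than run on quantum hardware; in particular, the parity checks that real devices perform with extra “ancilla” qubits are here simply read off from the simulated state.

```python
# Toy simulation of the three-qubit bit-flip code: one logical qubit spread
# across three physical qubits, so a single accidental flip can be found
# and undone without disturbing the encoded information.
import numpy as np

def x_on(qubit, n=3):
    """Bit-flip (X) gate acting on one of n qubits."""
    X, I = np.array([[0, 1], [1, 0]]), np.eye(2)
    ops = [X if k == qubit else I for k in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# Encode the logical state a|000> + b|111> (an arbitrary superposition).
a, b = 0.6, 0.8
state = np.zeros(8)
state[0b000], state[0b111] = a, b

# Noise: a bit-flip hits one physical qubit.
state = x_on(1) @ state

# Syndrome extraction: the parities of qubit pairs (0,1) and (1,2) reveal
# which qubit flipped, without revealing the amplitudes a and b. Here we
# read them from the state's support; hardware measures them with ancillas.
basis = next(i for i, amp in enumerate(state) if abs(amp) > 1e-9)
bits = [(basis >> (2 - k)) & 1 for k in range(3)]
syndrome = (bits[0] ^ bits[1], bits[1] ^ bits[2])
flipped = {(1, 1): 1, (1, 0): 0, (0, 1): 2}.get(syndrome)

if flipped is not None:
    state = x_on(flipped) @ state   # undo the error

print(state[0b000], state[0b111])   # back to 0.6 and 0.8
```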

 

The idea works in theory, but it’s only practical if not too many physical qubits are needed to make a single logical qubit. Some early estimates suggested that maybe tens or even hundreds of thousands of physical qubits would be required for each logical qubit, meaning that quantum circuits would be massively unwieldy. It now seems that you can get a pretty reliable logical qubit with far fewer physical qubits than that. But the bigger problem is that, since each physical qubit is error-prone, grouping ever more of them together into a logical qubit could just add more sources of error: the error rate could grow faster than the error suppression you get from spreading out the information.

 

So here, finally, is what’s special about Willow: it passes the threshold at which the error rate gets smaller, rather than larger, as more physical qubits are used to make a logical qubit. As the Google researchers increased the size of the square grid of qubits used to encode the information from 3×3 to 5×5 to 7×7 (Willow’s chip hosts 105 qubits in all), the error rate fell by half each time. This means that Willow surpasses the break-even point: in theory the error rate for a logical qubit could be made as small as you like, provided that you make it from enough physical qubits. If the chip fabrication technology permits, in principle one could make immense quantum circuits capable of carrying out much more complicated quantum computations than have been feasible to date, with the kind of accuracy thought to be required for widespread commercial applications. The Google team says their current (ambitious) goal is to build a machine with a million qubits, all of them fully error-corrected.
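
A back-of-the-envelope calculation shows why being below the threshold matters so much. Assume, purely for illustration (the starting error rate is a made-up figure, not one from the Nature paper), that the logical error rate is cut in half with each step up in grid size, as reported for Willow:

```python
# Back-of-the-envelope illustration of below-threshold scaling.
# Assumption: the logical error rate halves with each step up in grid size,
# starting from a made-up 0.3% per error-correction cycle for the 3x3 grid.
error_rate = 0.003          # hypothetical starting point for the 3x3 grid
suppression_factor = 2.0    # roughly the halving reported for Willow

for size in range(3, 16, 2):
    print(f"{size}x{size} grid: logical error ~ {error_rate:.6f} per cycle")
    error_rate /= suppression_factor
# The error shrinks exponentially as the grid grows, so in principle it can
# be made as small as an application demands -- at the cost of devoting ever
# more physical qubits to each logical qubit.
```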

 

In a demonstration of Willow’s capabilities, the Google team have used their device to carry out a calculation that took it about five minutes but would, the researchers estimate, have occupied the world’s largest supercomputer for around ten trillion trillion years: vastly longer than the current age of the universe. It’s a neat demonstration of “quantum advantage” (the earlier term “quantum supremacy” has fallen out of favour), but not really much more so than previous examples of how much quantum computers can speed things along, given the right task. What that feat most assuredly doesn’t do is what Neven claims: “lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse”. That notion stems from the Many Worlds interpretation of quantum mechanics, favoured by some quantum physicists (including David Deutsch, one of the pioneers in the theory of quantum computation) but dismissed by others. The idea here is that quantum computers are faster because, in effect, each of the calculations they perform is carried out in a parallel universe. Willow’s success adds absolutely nothing to that debate; arguably Neven is here forgetting that the whole point of quantum mechanics is that we shouldn’t try to imagine it in classical terms, say as quintillions of classical computers running in parallel.

 

To be clear, this latest advance doesn’t come out of the blue: Aaronson says “for anyone who’s been following experimental quantum computing these past five years… there’s no particular shock here.” Researchers at Google, IBM and elsewhere have been showing a steady reduction in error rates, and corresponding improvements in accuracy, in quantum computers over recent years. Neither does the advance clear the path to putting a quantum computer in every company that thinks it could benefit from one. There’s still painstaking engineering work to be done to make the basic hardware better: in particular, to give physical qubits lower intrinsic error rates and to allow them to remain in a “coherent” entangled state for longer times. And there’s plenty still to be done on the software side too, developing algorithms that can take advantage of quantum speed-up for a wider range of problems.

 

There are a lot of parallels here with nuclear fusion, another technology garlanded with utopian promise that is renewed with every headline about a new “breakthrough,” while in reality being constrained by huge challenges that involve dull, technical but utterly necessary improvements in engineering. All the same, I’d put money on quantum computing becoming commercially viable and useful way before fusion does—and indeed, some think that the concomitant boost to computational resources could help solve some of the problems that fusion faces. Quantum computing has fallen victim to some absurd hype, but nonetheless it could turn out to be very handy indeed.

 

Philip Ball is a scientist, writer, and a former editor at the journal Nature. He has won numerous awards and has published more than twenty-five books, most recently How Life Works: A User’s Guide to the New Biology; The Book of Minds: How to Understand Ourselves and Other Beings, From Animals to Aliens; and The Modern Myths: Adventures in the Machinery of the Popular Imagination. He writes on science for many magazines and journals internationally and is the Marginalia Review of Books' contributing editor for Science. Follow @philipcball.bsky.social
