
The Computers of Our Wildest Dreams


One of the first electronic, programmable computers in the world is remembered today mostly by its nickname: Colossus. The fact that this moniker evokes one of the seven wonders of the ancient world is fitting both physically and conceptually. Colossus, which filled an entire room and included dinner-plate-sized pulleys that had to be loaded with tape, was built in World War II to help crack Nazi codes. Ten versions of the mammoth computer would decrypt tens of millions of characters of German messages before the war ended.

Colossus was a marvel at a time when “computers” still referred to people—women, usually—rather than machines. And it is practically unrecognizable by today's computing standards, made up of thousands of vacuum tubes that contained glowing hot filaments. The machine was programmable, but not based on stored memory. Operators used switches and plugs to modify wires when they wanted to run different programs. Colossus was a beast, and a capricious one at that.

In the early days of computing, this was to be expected. Vacuum tubes worked in computers, but they didn’t always work very well. They took up tons of space, overheated, and burned out. The switch to transistor technology in the 1960s was revolutionary for this reason. It was the transistor that led to the creation of the integrated circuit. And it was the steady growth of transistors per unit area—doubling every two years or so for three decades—that came to be known as Moore’s Law. The switch from tubes to transistors represented a turning point in computing that, despite the huge strides since, has had no parallel until now.

We are at an analogous crossroads today, a moment in which seemingly incremental and highly technical changes to computing architecture could usher in a new way of thinking about what a computer is. This particular inflection point comes as quantum computing crosses a threshold from the theoretical to the physical.

Quantum computing promises processing speeds and heft that seem unimaginable by today’s standards. A working quantum computer—linked up to surveillance technology, let's say—might be able to identify a single individual in real time by combing through a database of billions of faces. Such a computer might also be able to simulate a complex chemical reaction, or crack the toughest encryption tools in existence. (There’s an entire field of study dedicated to post-quantum cryptography, which is based on writing algorithms that could withstand an attack by a quantum computer. People still aren't sure whether such security is even possible, which means quantum computing could wreak havoc on global financial systems, governments, and other institutions.)

It’s often said that a working quantum computer would take days to solve a problem that would take a classical computer millions of years to sort through. Now, theoretical ideas about the development of such machines—long relegated to the realm of mathematical formulas—are being turned into computer chips.

“As we started making these better controlled, engineered systems that do the physics as written down in the textbook, we start to engage more theorists and people who are more interested in these systems actually existing,” said Jerry Chow, the manager of the experimental quantum computing group at IBM. “It's definitely exciting because we're starting to really make systems which are of interest in terms of not only potential applications but also underlying physics.”

IBM announced in April that it had figured out a critical kind of error detection by building a square lattice of four superconducting qubits—units of quantum information—on a chip roughly one-quarter-inch square. The advances the company announced represent a key step toward actually building a large-scale quantum computer, Chow told me, because they demonstrate a physical structure that could be scaled up while keeping its quantum properties intact—one of the core challenges in quantum computing. “It's basically a primitive for this scalable architecture,” Chow said. “The idea is to continue to grow this lattice to reach the point where you can encode a perfect qubit—a perfect, logical qubit in a sea of these faulty physical qubits.”
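The scheme is easier to see in miniature. Below is a minimal Python sketch (an illustration of the parity-check idea, not IBM's published protocol): an entangled two-qubit state is a +1 eigenstate of both the XX and ZZ stabilizer checks, and a bit-flip or phase-flip error flips the sign of one check or the other, revealing which kind of error occurred. IBM's hardware measures these parities with dedicated syndrome qubits; the sketch simply computes the stabilizer expectation values directly.

```python
import numpy as np

# Single-qubit identity and Pauli operators.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Code state: the Bell state (|00> + |11>) / sqrt(2), which is a
# +1 eigenstate of both two-qubit stabilizers XX and ZZ.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)

XX = np.kron(X, X)
ZZ = np.kron(Z, Z)

def syndrome(state):
    # Expectation value of each stabilizer: +1 means the check
    # passes, -1 means it fires and an error has been detected.
    xx = round(np.real(state.conj() @ XX @ state))
    zz = round(np.real(state.conj() @ ZZ @ state))
    return xx, zz

print("no error:       ", syndrome(bell))                    # (1, 1)
print("bit flip (X):   ", syndrome(np.kron(X, I2) @ bell))   # (1, -1): ZZ fires
print("phase flip (Z): ", syndrome(np.kron(Z, I2) @ bell))   # (-1, 1): XX fires
```

Scaling this idea out to a two-dimensional grid of such checks is exactly the "sea of faulty physical qubits" Chow describes.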

The error-detection component is critical to advances in quantum computing. As Chow and his colleagues wrote of their findings in Nature Communications, qubits are “susceptible to a much larger spectrum of errors” than classical bits.

“So any way to speed this up with a protocol that can deal with errors simultaneously is likely to be a significant improvement,” said Steve Rolston, the co-director of the Joint Quantum Institute at the University of Maryland. “Almost all of the qubits in a real quantum computer are going to be there for error detection. It seems kind of crazy but it could be the case that 99 percent of the qubits that are there in a quantum computer are there for error detection and correction.”
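Rolston's estimate is easy to sanity-check with back-of-envelope arithmetic. Assuming a surface-code-style scheme in which one logical qubit costs on the order of 2 * d^2 physical qubits at code distance d (a common textbook scaling, not a figure from the article), the overhead quickly dwarfs the computation itself:

```python
# Rough error-correction overhead, assuming one logical qubit needs
# about 2 * d**2 physical qubits at code distance d (a textbook-style
# surface-code estimate; real architectures vary).
for d in (3, 9, 17, 25):
    physical = 2 * d ** 2
    overhead = 1 - 1 / physical
    print(f"distance {d:2d}: ~{physical:4d} physical qubits per logical "
          f"qubit ({overhead:.1%} devoted to error detection and correction)")
```

At distance 17, more than 99.8 percent of the qubits would exist only to police the rest, in line with Rolston's "kind of crazy" figure.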

The race to build a large-scale working quantum computer has intensified in recent years—and recent months, in particular. In 2013, Google bought what it says is a quantum computer from D-Wave, a Canadian company that has also sold its machine to the defense contractor Lockheed Martin. (Google is also letting NASA use the D-Wave system as part of a public-private partnership.) In March of this year, Google said it had built a nine-qubit device that successfully detected one (but not both) of the key kinds of errors typical in quantum computing. After IBM's April announcement, D-Wave said in June that it had broken the 1,000-qubit barrier, a processing milestone that it said would allow “significantly more complex computational problems to be solved than was possible on any previous quantum computer.”

D-Wave has a somewhat controversial history, with critics saying its claims about what its computers can do are often overstated. And yet there's no question that much has happened in the two decades since Shor's algorithm, named for the mathematician Peter Shor, first showed how a quantum computer could solve a problem—factoring large numbers—far beyond the practical reach of classical machines. “Peter Shor came up with his algorithm in 1994,” Rolston told me. “It's been a long time now, a surprisingly long time in some ways. If you look at what's really happened in those last 20 years, mainly what people have been doing is really trying to perfect qubits and interactions with one or a handful of qubits—keeping the idea of scalability in the back of their minds. There's no point in me making a perfect qubit if I can't make hundreds, but there's also no point in designing a hundred if I can't get one or two to behave properly.”
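Shor's insight was to reduce factoring to period finding: pick a random base a, find the order r of a modulo N (the smallest r with a^r congruent to 1 mod N), and, when r is even and a^(r/2) is not congruent to -1, the quantity gcd(a^(r/2) - 1, N) reveals a factor. Here is a classical Python sketch of that reduction; the brute-force find_order step is the part a quantum computer performs exponentially faster, which is why this version only works for toy numbers.

```python
import math
import random

def find_order(a, n):
    # Smallest r > 0 with a**r = 1 (mod n), by brute force. This is
    # the step Shor's algorithm speeds up exponentially on quantum
    # hardware; the rest is efficient classical arithmetic.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    # Classical skeleton of Shor's reduction from factoring to order
    # finding; practical only for small n because find_order is slow.
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                  # lucky draw: a shares a factor with n
        r = find_order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)
            if y != n - 1:            # i.e. a^(r/2) is not -1 mod n
                f = math.gcd(y - 1, n)
                if 1 < f < n:
                    return f

print(shor_factor(15))    # 3 or 5
print(shor_factor(3233))  # 53 or 61, since 3233 = 53 * 61
```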

Up until about five years ago, most quantum computing work was still being done at the single-qubit level. That's rapidly changing. “The real challenge,” Chow, of IBM, said, “is how we're going to controllably put more and more of these together so we can still control what we need to but the quantum information can be protected. People say we're basically somewhere between the vacuum tube and transistor. We're still in the early days.”

This article was originally published at http://www.theatlantic.com/technology/archive/2015/07/quantum-computer-race/397181/