
Google, Microsoft vie for UCSB top minds with quantum initiatives

Friday, September 12th, 2014


Google has selected a UC Santa Barbara lab for a new quantum-computing initiative just a short distance from where Microsoft has an established outpost, setting up a battle among tech giants on the South Coast.

On Sept. 2, Google’s Quantum Artificial Intelligence Lab said that it was launching a new hardware push to design and build quantum information processors based on superconducting electronics. The company said it chose the lab of Professor John Martinis.

The Google group will be a short distance from Microsoft’s Station Q, an effort that has been operating for nearly a decade and is headed by renowned mathematician Michael Freedman. It is housed in the California NanoSystems Institute building on campus.

The two teams take different approaches toward the same goal: stable, reliable quantum computers. Quantum computers use unique properties of very tiny particles to become millions of times more powerful than conventional computers at solving certain kinds of problems. The Martinis group believes it is within striking distance of building a usable machine for some applications. Station Q, by contrast, is pursuing an approach that is at an earlier stage but that it believes could lead to more flexible and scalable quantum computers.

At present, the field is a fast-developing science rather than a proven technology, with commercial viability still in the offing. But that hasn't stopped Google and Microsoft, which regularly trade places as the third- and fourth-most-valuable companies on earth, from taking an interest.

1, 0 or both

Quantum computers differ from their classical counterparts by using what's called a quantum bit, or qubit. Where an ordinary bit is either a 1 or a 0 at any given time, a qubit can be both a 1 and a 0 at the same time, a phenomenon called superposition. The result is that the power of a quantum computer grows exponentially as qubits are added. Two qubits can perform four operations, three qubits can perform eight, four qubits can perform 16, and so on.
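
That doubling is easy to see in a rough classical sketch: an n-qubit register is described by 2^n amplitudes, one for each basis state it can occupy at once. The snippet below is a simplified illustration of that counting, not a model of real hardware.

```python
import numpy as np

# Rough classical illustration: an n-qubit register is described by 2**n
# complex amplitudes, one per basis state, which is why capacity doubles
# with every qubit added.
def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition over all basis states."""
    dim = 2 ** n_qubits                    # number of basis states
    return np.full(dim, 1 / np.sqrt(dim))  # equal amplitude on each state

for n in (2, 3, 4):
    print(f"{n} qubits -> {len(uniform_superposition(n))} basis states in superposition")
# 2 qubits -> 4, 3 qubits -> 8, 4 qubits -> 16
```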

Huge computing horsepower could change fields like cryptography. Some encryption programs today work on the principle that if you multiply two very large prime numbers together, factoring the product back down to its prime components takes a really long time, even for a bank of classical supercomputers.
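
A toy example makes the asymmetry concrete: multiplying two primes is instant, while undoing it by brute-force trial division already takes noticeable work for a 12-digit number, and real encryption keys use primes hundreds of digits long. The primes below are arbitrary illustrative choices.

```python
import math
import time

def trial_factor(n):
    """Naive trial division: return the smallest factor pair of n."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return n, 1  # n itself is prime

# Two arbitrary primes for illustration; real encryption moduli are
# hundreds of digits long, which puts this brute-force search out of reach.
p, q = 104729, 1299709
start = time.perf_counter()
print(trial_factor(p * q))                        # recovers (104729, 1299709)
print(f"{time.perf_counter() - start:.4f} s to factor a 12-digit number")
```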

But the more practical applications for companies such as Google are optimization problems. For example: Let's say you want to plan a trip among several cities and optimize along different variables, such as the cheapest route, the fewest stops or the fastest way. That's a problem quantum computers could solve faster than classical computers.
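
The brute-force version of that problem is easy to write down and quickly becomes unmanageable, since the number of possible orderings grows factorially with the number of cities. The sketch below uses made-up costs between hypothetical airports purely for illustration.

```python
from itertools import permutations

# Made-up ticket prices between hypothetical airports, for illustration only.
costs = {
    ("SBA", "SFO"): 120, ("SBA", "LAX"): 90, ("SBA", "PHX"): 150,
    ("SFO", "LAX"): 100, ("SFO", "PHX"): 180, ("LAX", "PHX"): 110,
}

def leg_cost(a, b):
    return costs.get((a, b)) or costs[(b, a)]

def cheapest_route(start, others):
    """Check every ordering of the remaining cities; the work grows factorially."""
    best_total, best_route = None, None
    for order in permutations(others):
        route = (start,) + order
        total = sum(leg_cost(a, b) for a, b in zip(route, route[1:]))
        if best_total is None or total < best_total:
            best_total, best_route = total, route
    return best_total, best_route

print(cheapest_route("SBA", ["SFO", "LAX", "PHX"]))
```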

The challenges, however, are immense. It’s hard to hold qubits in a state of quantum superposition. The slightest bump from stray vibrations, heat or electromagnetic interference can cause a particle to fall out of superposition. And when one qubit falls, it can take the rest of the qubits down with it, causing the entire program to collapse like a house of cards. Scientists have been working to improve the so-called coherence time of quantum computers — that is, the length of time that the particles stay in superposition and can perform calculations.

“About 10 or 15 years ago, when people were starting to look at this seriously, we were at a 10-nanosecond coherence time,” Martinis told the Business Times. “We’re now at 30, 50 or in some cases 100 microseconds.” It might not sound like much, but it’s a 1,000-fold improvement, the equivalent of going from a million dollars to a billion.
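
A quick check of the arithmetic behind those figures, using only the numbers quoted above:

```python
# The coherence-time figures quoted above, put on a common scale.
early_ns = 10                  # "a 10-nanosecond coherence time"
today_us = [30, 50, 100]       # "30, 50 or in some cases 100 microseconds"

for t in today_us:
    improvement = t * 1_000 / early_ns   # microseconds -> nanoseconds
    print(f"{t} microseconds vs. 10 nanoseconds: {improvement:,.0f}x longer")
# 3,000x, 5,000x and 10,000x -- at least the 1,000-fold gain described above
```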

The other primary challenge is error correction. Because the systems are so delicate, error rates can be high — high enough that even with the advantages of quantum physics, practical machines wouldn’t outperform classical machines, where error rates are very low and correction technology is well developed.

Google’s efforts

Google teamed up with NASA to buy a machine from D-Wave, a privately held Canadian firm that claims it is selling the world’s first quantum computers. Martinis was among a number of scientists who expressed skepticism earlier this year that the D-Wave computer was actually leveraging quantum properties to speed up computations.

But examining the D-Wave results led to the Google partnership. D-Wave uses a process called quantum annealing. Annealing translates a problem into a landscape of peaks and valleys, and uses a property called quantum tunneling to drill through the hills to find the lowest valley. The approach limits the device to solving certain kinds of optimization problems rather than serving as a general-purpose computer, but it could also speed up progress toward a commercial machine. Martinis was intrigued by what might be possible if the annealing approach in the D-Wave machine were combined with his own group's advances in error correction and coherence time.
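
For intuition, the classical cousin of that idea, simulated annealing, wanders a bumpy cost landscape and accepts occasional uphill moves so it can escape shallow valleys; quantum annealing aims for something analogous using tunneling in hardware rather than thermal noise in software. The sketch below is that classical analogy only, over an arbitrary made-up landscape, and is not a model of the D-Wave process.

```python
import math
import random

def landscape(x):
    """An arbitrary bumpy cost function with several valleys of different depths."""
    return 0.1 * x ** 2 + 3 * math.sin(2 * x)

def anneal(steps=20_000, temperature=5.0, cooling=0.9995):
    x = random.uniform(-10, 10)
    best_x, best_cost = x, landscape(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        delta = landscape(candidate) - landscape(x)
        # Always accept downhill moves; accept uphill moves with a probability
        # that shrinks as the temperature cools, so the search can escape
        # shallow valleys early on and settle into a deep one later.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
            if landscape(x) < best_cost:
                best_x, best_cost = x, landscape(x)
        temperature *= cooling
    return best_x, best_cost

print(anneal())   # lands near the deepest valley on most runs
```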

“There are some indications they’re not going to get a quantum speed up, and there are some indications they are. It’s still kind of an open question, but it’s definitely an interesting question,” Martinis said. “Looking at that, we decided it would be really interesting to start another hardware approach, looking at the quantum annealer but basing it on our fabrication technology, where we have qubits with very long memory times.”

Martinis will retain a position with UCSB but also become a Google employee, as will some members of his research group. The main benefit of teaming up with Google is longer-term stability: with students and post-doctoral researchers constantly moving through the academic program, it was becoming hard to retain expertise just as the goal of a working quantum computer drew closer.

“We’re getting to such a high level of complexity now that when people leave, it’s just really hard to keep going with the experiments you want to do,” Martinis said.

Microsoft’s topology

Back in 2005, Microsoft established Station Q on the UCSB campus. Located within the California NanoSystems Institute, it collaborates with researchers around the world to work on an approach called topological quantum computing.

Many current qubits use superconducting materials to shuttle electrons around and trap them in logic gates where they exhibit quantum behavior. The systems are so delicate that several qubits in an array might be busy running error correction for every qubit that's performing computation, a sort of software-based approach to error correction.
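
The flavor of that redundancy shows up in the simplest classical error-correcting scheme, a repetition code with majority voting; the quantum codes actually used are far more elaborate, so this is only an analogy for the overhead involved.

```python
import random
from collections import Counter

def encode(bit, copies=5):
    """Store one logical bit in several physical copies."""
    return [bit] * copies

def noisy(bits, flip_probability=0.1):
    """Flip each physical bit with some probability, mimicking stray noise."""
    return [b ^ 1 if random.random() < flip_probability else b for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit as long as flips stay rare."""
    return Counter(bits).most_common(1)[0][0]

logical = 1
received = noisy(encode(logical))
print(received, "->", decode(received))
```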

The topological theory instead aims to build completely different hardware to solve the coherence and error problems. It would crisscross the world lines of two-dimensional quasiparticles to form braids in three-dimensional space-time. It’s the kind of thing that’s a little dizzying even to physicists, and no one is completely certain whether the necessary quasiparticles indeed exist.

But if the theory is correct, these braids could be bumped and jostled without losing their shape and falling out of superposition, making them much more robust than electron-derived qubits. “If you try to build a large-scale quantum information processor through superconducting qubits without topological protection, you have to expend a tremendous amount of overhead — perhaps a factor of 10,000 to 1 — in wrapping these qubits in code so that individual qubits can fail without losing important information,” Freedman said.

Freedman likened the difference between superconducting qubits and potential topological qubits to the difference between vacuum tubes and transistors. Early computers used tubes and took up entire rooms; solid-state transistors are what shrank them to pocket size.

“When you start trying to put together a computer with vacuum tubes, you have this problem that some of them are burned out at any moment. You have to keep replacing them and working around them, and the program can collapse,” he said. “Computers began with [vacuum tubes], but they didn’t take off. The full utility wasn’t realized until they became transistor- and ultimately integrated-circuit-based.”
