
Thread: Google D-Wave quantum computer

  1. #1

    Google D-Wave quantum computer

    Although we’ll have to wait to discover what innovations Google’s secret “X lab” holds for the future, the company does have at least one bleeding-edge venture it wants to unveil to the public right now. In partnership with NASA, Google has embarked on a project to use a 512-qubit D-Wave Two quantum computer to tackle some of our biggest computational problems.

    And because visuals often help explain complex concepts, Google has posted a short documentary explaining what quantum computing is, and how it plans to use the technology to make new discoveries.
    First announced back in May, Google’s partnership with NASA will allow the two teams — along with several universities — to share the powerful computer, so they can research several different areas.
    Traditional digital computing – which drives the computer you’re using now – operates using a binary approach: at the most basic level, each bit of data must be unambiguously in either its 1 or its 0 state.
    Quantum computing differs from digital computing in that it gives researchers the power to put bits of data into their 0 and 1 states simultaneously, rather than cycling through calculations in a linear fashion as is common with traditional digital computers. The potential applications for quantum computing could advance our understanding of weather patterns, theoretical physics and even artificial intelligence.
    This new computing dynamic not only gives scientists the ability to determine a variety of possible solutions simultaneously, but it also saves energy and speeds up the computing process by many orders of magnitude.
    For Google in particular, quantum computing could help it make advances in search, neural networking and artificial intelligence. These advances will ultimately help the company achieve its goal of being able to tell you what you want before you even ask it.
    Of course, this very general description of quantum computing is still a bit too much for some of us — still making our way through the tangle of traditional computing — to fully grasp. That’s why Google’s new video is a welcome primer on the topic, offering a set of very simple and entertaining explanations of the technology.
    While the NASA partnership presents exciting possibilities for harnessing the new technology, Google has been looking at quantum computing for some time. As early as 2009, Google discussed its research work with quantum-computing company D-Wave, which was related to studying machine-learning with quantum algorithms.
    At the time, Google’s Hartmut Neven wrote, “Over the past three years, a team at Google has studied how problems such as recognizing an object in an image or learning to make an optimal decision based on example data can be made amenable to solution by quantum algorithms … These algorithms promise to find higher-quality solutions for optimization problems than obtainable with classical solvers.”
    The field of quantum computing is still so new that even researchers aren’t exactly sure what they’ll learn over the next few years, which makes the project all the more exciting.
    Posting a message to Google+, the Google Quantum A.I. Lab team wrote, “We’re still in the early, early days, but we think quantum computing can help solve some of the world’s most challenging computer-science problems.
    "We’re particularly interested in how quantum computing can advance machine learning, which can then be applied to virtually any field: from finding the cure for a disease to understanding changes in our climate.”

    Last edited by airdog07; November 5th, 2013 at 04:24 PM.

  2. #2

    Re: Google D-Wave quantum computer

    Quantum Computing Disentangled: A Look behind the D-Wave Buzz

    By Jesse Dunietz | August 27, 2013


    The word “quantum” imbues any topic with instant mystique. Unfortunately, it often doubles as a “keep out” sign – a signal that an impenetrable quagmire of math and physics awaits anyone foolish enough to peer behind the label. “Quantum computing” is no exception: an air of inscrutable mystery surrounded the recent flurry of Internet stories about quantum computers. These machines exploit the mysterious phenomena of quantum mechanics – the physics of the ultra-tiny – to solve problems at speeds that leave traditional computers in the dust.

    The latest buzz surrounded the announcement of a new lab where the Universities Space Research Association, Google, and NASA Ames will operate a quantum computer manufactured by D-Wave, a Canadian computer company. The announcement triggered a wave of breathless stories about the near-magical capabilities of such computers. All this coverage has only made quantum computing seem more impenetrable: what makes a computer “quantum?” Why do we want a quantum computer? And has D-Wave actually built one that lives up to the excitement? To address these questions, we’ll need to start with some background on how computers represent information.

    Classical Information vs. Quantum Information

    Few people can claim to have invented a new scientific field. Fewer still can claim to have invented several. Possibly only Claude Shannon could claim to have done so by the age of 32.

    Shannon first became an academic celebrity when, in his 1937 master’s thesis, he laid the mathematical foundations for digital circuits, essentially inventing the digital computer. After a few years’ hiatus, he returned to the spotlight in 1948 with a paper modestly titled “A Mathematical Theory of Communication.” In it, Shannon described a model of communication that still defines how computer scientists think about information today.

    Shannon was struggling to describe mathematically the concept of sending a message – a piece of information transmitted from Point A to Point B – without reference to any particular transmission technology. One of his chief insights was that any piece of information can be encoded in a simple scheme with just two symbols: the humble 0 and 1. Each 0 and 1 Shannon called a bit – a fundamental unit of information.
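    Shannon’s two-symbol scheme is easy to demonstrate: any text message can be mechanically reduced to 0’s and 1’s. A minimal sketch in Python (the 8-bits-per-character ASCII encoding is just one illustrative choice, not anything specific to Shannon’s paper):

```python
def to_bits(message):
    """Encode a text message using only Shannon's two symbols, 0 and 1.

    Here each character becomes 8 bits via its ASCII code; any other
    agreed-upon two-symbol scheme would work just as well.
    """
    return "".join(format(byte, "08b") for byte in message.encode("ascii"))

print(to_bits("A"))  # 'A' is ASCII 65, i.e. the bit pattern 01000001
```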

    To this day, the bit is the currency of information inside every computer you own. A traditional computer needs each bit to be unambiguously either a 0 or a 1 at every step in a calculation. An enormous amount of engineering effort goes into enforcing this. In fact, the time it takes for wavering electrical signals to settle down to a clear 0 or 1 is largely what determines the speed of a chip.

    Of course, a bit on its own, limited to just two possible messages, isn’t terribly useful. Bits are mainly interesting in crowds. If we have two bits, each can store either a 0 or a 1, giving four possible configurations the pair can store (00, 01, 10, and 11). Each configuration, or state, can be interpreted as a different message (say, the numbers 0, 1, 2, and 3), so we can now represent four distinct messages. The number of possible states – and thus the amount of information we can store – continues to double with each additional bit.
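    The doubling is easy to check by brute force. A quick sketch (the function names are mine, not Shannon’s):

```python
from itertools import product

def states(n):
    """All distinct configurations of an n-bit register."""
    return ["".join(bits) for bits in product("01", repeat=n)]

# Two bits give four states; each extra bit doubles the count.
print(states(2))       # ['00', '01', '10', '11']
print(len(states(8)))  # 256 -- one byte can hold 256 distinct messages
```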

    Ever since that paper, the bit has driven the development of computer technology. A few decades later, though, several physicists and engineers began to ponder more exotic ways of representing information. In particular, they looked to quantum mechanics: could the peculiar physics that apply at tiny scales change the game of storing information?

    Enter the qubit, the quantum mechanical analogue of the bit. Originally a mere theoretical abstraction, the qubit is a considerably stranger beast than its classical cousin. Like a bit, a qubit is a unit of information. Also like a bit, each qubit can be in one of two physical states, which we’ll again call 0 and 1. But there the similarities end: due to the vagaries of quantum mechanics, a qubit can exist in multiple states simultaneously. Much like Schrödinger’s famous cat, any given qubit can hold a 0 and a 1 at the same time. It can be, say, 25% 0 and 75% 1. Or it can be split 50/50, or 63.7/36.3. The possibilities are endless.
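    In the standard formalism, that “25% 0 and 75% 1” split is captured by a pair of amplitudes whose squares give the percentages. A toy sketch (real quantum states use complex amplitudes; positive reals suffice for this illustration):

```python
import math

def qubit(p_zero):
    """Amplitudes (a0, a1) for a qubit that reads 0 with probability p_zero.

    The squares of the amplitudes recover the measurement probabilities,
    so they must sum to 1.
    """
    return math.sqrt(p_zero), math.sqrt(1 - p_zero)

a0, a1 = qubit(0.25)  # the 25%/75% qubit from the text
print(round(a0 ** 2, 2), round(a1 ** 2, 2))  # squares give back 0.25 and 0.75
```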

    If this seems counterintuitive to the point of impossibility, you can take solace in knowing that the great physicist Richard Feynman thought so too. He once famously remarked, “I think I can safely say that nobody understands quantum mechanics.” Unfortunately, it only gets weirder from here.

    A qubit’s schizophrenic condition, called a superposition, persists until the qubit is measured – that is, until it interacts with another object. It then chooses a single state – a 0 or a 1 – and sticks with it. Each state’s probability of being chosen is governed by the aforementioned percentages. So if we measure 100 independent qubits, each of which is initially 30% 0 and 70% 1, about 30% of them will come up as 0’s and 70% will come up as 1’s.
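    The measurement statistics are easy to mimic with ordinary random numbers. A sketch of the 30%/70% example (a classical simulation of the outcome frequencies only, not of the quantum state itself):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def measure(p_one):
    """Collapse one qubit: returns 1 with probability p_one, else 0."""
    return 1 if random.random() < p_one else 0

# 100 independent qubits, each 30% 0 and 70% 1.
outcomes = [measure(0.70) for _ in range(100)]
print(sum(outcomes))  # roughly 70 of the 100 come up as 1
```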

    The complexity grows as you add more qubits. With 2 qubits, as with 2 bits, you can store any number from 0 to 3. But unlike the classical pair of bits, our qubit duo can also store any mixture of these four numbers.
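    Concretely, a two-qubit state is four amplitudes, one per configuration, and the pair can lean toward the four numbers in any proportion. A sketch with an arbitrary mixture (the percentages here are made up purely for illustration):

```python
import math

# One amplitude for each two-bit configuration 00, 01, 10, 11 --
# an illustrative mixture that is 10% the number 0, 20% 1, 30% 2, 40% 3.
state = [math.sqrt(p) for p in (0.1, 0.2, 0.3, 0.4)]

total = sum(a * a for a in state)
print(round(total, 10))  # the probabilities sum to 1, as they must
```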

    Strangely, those percentages aren’t built into the qubit – they depend on how you measure it. Say you measure a beam of qubit-carrying photons at one angle and find they’re split 50/50. If you’d measured the same beam at another angle, you’d have gotten only 0’s. And even if a particular qubit has settled on being a 1 when measured at one angle, it may switch to being a 0 if measured at a different angle. Qubits, like bits, ultimately give you a yes-or-no answer, but which one you get isn’t fixed until you ask, and even then it depends on how you ask the question, and even then the answer will change if you subsequently ask the question differently. It’s like looking at one of those signs where the image changes depending on where you see it from – except the sign occasionally decides at random to show you the view from the other side.
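    For polarized photons, the angle dependence even has a simple formula: the chance the photon agrees with the analyzer is the squared cosine of the angle between them (Malus’s law). A sketch:

```python
import math

def p_match(photon_deg, analyzer_deg):
    """Probability a photon polarized at photon_deg passes an analyzer
    set at analyzer_deg (Malus's law: cos^2 of the angle between them)."""
    delta = math.radians(photon_deg - analyzer_deg)
    return math.cos(delta) ** 2

print(p_match(0, 0))             # 1.0 -- asked along its own angle: always yes
print(round(p_match(0, 45), 2))  # 0.5 -- 45 degrees off: a 50/50 split
print(round(p_match(0, 90), 2))  # 0.0 -- 90 degrees off: always no
```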

    As if all this weren’t enough, there’s yet another wrinkle: those percentages aren’t strictly percentages. If one rainstorm has a 40% chance of hitting Pittsburgh today and another shows up with a 20% chance, the probability of rain in Pittsburgh must go up (in this case, to 52%). In quantum mechanics, though, the things we’ve been calling probabilities can be negative – they can cancel out. (Technically, they’re complex numbers, not probabilities.) If a qubit is 40% 1, you can add 20% more 1 such that the final probability of getting a 1 is about 3.5%.
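    The two rules of combination can be put side by side. A worked sketch of both calculations from the paragraph above (the “add amplitudes, then square” step is the quantum rule; the signs are chosen so the contributions cancel):

```python
import math

# Classical probabilities only accumulate: two independent rainstorms
# at 40% and 20% give a 52% chance of rain.
p_rain = 1 - (1 - 0.40) * (1 - 0.20)
print(round(p_rain, 2))  # 0.52

# Quantum "probabilities" are squared amplitudes, and amplitudes can be
# negative: a sqrt(0.40) amplitude for 1, plus an opposing sqrt(0.20)
# contribution, mostly cancels instead of accumulating.
amplitude = math.sqrt(0.40) - math.sqrt(0.20)
print(round(amplitude ** 2, 3))  # about 0.034, i.e. roughly 3.5%
```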

    Every aspect of these behaviors is deeply unintuitive, but perfectly in line with what quantum mechanics dictates for certain physical systems. The physics behind this gets rather involved, so let’s just take it as a given that qubits exist. Why should we care? What do you do with a qubit?

    Well, you build a quantum computer, of course – a machine that manipulates qubits to perform calculations. Much of the media hype has suggested that quantum computers are faster than classical computers because they calculate all answers simultaneously. In a certain sense, that’s true: qubits can store a superposition, and if you perform a calculation on the superposition, you’ll get a new superposition of all the possible results. But at the end of the day, you still have to ask the qubits what answer they hold. At that moment of truth, they won’t give you a list of answers – they’ll choose just one at random from the many they’re storing. Fast but random is hardly better than slow but correct.

    What’s the secret, then? How could messy, unpredictable qubits possibly be an improvement over good old-fashioned bits?

    From Quantum Information to Quantum Algorithms

    Last month, I struck out to the back of my new apartment complex on a bold and daring mission: to do my laundry. Squashing my laundry basket precariously against the wall, I paused for an uncomfortably long time in front of the laundry room, fumbling with key after key to find the one that would open the door. For my small keychain, this was just a minor inconvenience. But what if I had 100 keys? Or 1,000? At some point, trying all the possibilities becomes impractical.

    The same problem arises for computers. Word processors, for instance, are often asked to search thousands of lines for a specific word. Thousands of lines don’t faze a computer, but searching trillions is likely impractical.

    Such intractable searches were on the mind of Lov Grover in 1995. Grover was a researcher at Bell Labs, a powerhouse of innovations in computing technology. Inspired by the lab’s recent breakthroughs in quantum computing, Grover realized that this new technique offered a way out of scanning every word. A quantum computer will indeed give a random answer from the possibilities its qubits contain, but the probability of each answer need not be the same. The trick, then, is to rig the game such that the correct answer is more likely than the others.

    To see how, let’s return to the keys. Let’s imagine that our thousand keys are strung out on a long rod, neatly lined up. Let’s also imagine that the key we seek is made of iron, which is magnetic, whereas the others are all brass, which is not. We can’t afford to test one key at a time – but what if we could act on all of them at once?

    Let’s stand our key-laden rod on its end, line up a bar magnet with it, and wave the magnet in a circle around the rod. While we’re doing that, we’ll jiggle the rod to let the keys shift. The iron key will swivel around the rod to follow the magnet, making it slightly easier to pick out. Still can’t tell which one it is? No problem; we’ll just wave the magnet again to make the key stick out farther! The rest of the keys, meanwhile, will shift randomly as we jiggle. After a few repetitions, the uninteresting keys will end up approximately where they started, and the correct one will stick out like an albino squirrel on asphalt.

    The same approach drives Grover’s method for searching for a word. First, we set up a bunch of qubits (collectively, a register) to store a superposition of all possible locations where the word we want could be (1 for first in the list, 2 for second, etc.). Any one of these could be the correct answer, just as any key on the rod could be correct. We then follow a recipe from Grover for tweaking the entire superposition (without measuring it), much as we waved the magnet around the entire rod at once. Grover’s recipe adds probability to both the right and wrong answers, just as our jiggle-and-wave operation shifts all of the keys slightly.

    The cleverness lies in one operation – one gate, in the parlance – of the several that make up the tweak. Given any single number, this gate would make the number’s “probability” negative if it’s the correct answer, or leave it unchanged if it’s not (compare: given any individual key, the magnet will attract or ignore it). When you operate on a superposition, though, you get back a superposition of all the possible results. When we apply this gate to our superposition, then, it will act differently on the different elements of the superposition. This allows Grover’s tweak to treat right and wrong answers differently: for the wrong ones, the additional probability cancels with what was already there, just as the random motions of each non-iron key average out to nothing. The net result is that we boost the probability of the correct answer. After a few boosts, we can measure the qubits, confident that we’ll probably get the right answer.

    Grover’s algorithm perfectly exemplifies the strategy of quantum computation. The trick is to operate on the entire superposition simultaneously, gaming the probabilities so that the right answer comes out on top. Although you may need to repeat the operation many times, some computations still become exponentially faster than on a classical machine.
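    The boost-the-right-answer game can be simulated directly on a small state vector. A minimal sketch of Grover’s iteration over 16 possible “key positions” (a plain-Python simulation of the amplitudes, not how a real quantum device is programmed):

```python
import math

N = 16        # size of the search space (think: 16 keys on the rod)
target = 11   # index of the "iron key" we want to find

# Start in a uniform superposition: every location equally likely.
amps = [1 / math.sqrt(N)] * N

def grover_step(amps, target):
    """One Grover iteration: the oracle gate flips the sign of the
    target's amplitude, then every amplitude is reflected about the
    mean -- the wave-the-magnet-and-jiggle move from the key analogy."""
    amps = amps[:]
    amps[target] = -amps[target]         # oracle: mark the right answer
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]  # reflect every amplitude about the mean

# About (pi/4) * sqrt(N) boosts maximize the target's probability.
for _ in range(round(math.pi / 4 * math.sqrt(N))):
    amps = grover_step(amps, target)

print(round(amps[target] ** 2, 3))  # about 0.96 -- up from the initial 1/16
```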

    Much of the excitement about quantum computing stems from one application in particular: factoring numbers. Factoring 21 into 3 times 7 is easy, but for decades it was assumed that finding the factors of large numbers takes too long to be practical. So strong was this assumption that most encryption technologies, including those underlying e-commerce, are powered by it. But Peter Shor, a mathematician at MIT, wasn’t so sure. His doubt proved well-placed: in 1994, Shor electrified the world of computer science with a fast quantum algorithm for factoring large numbers. Shor’s algorithm could shatter the world of Internet security.
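    For contrast, the classical baseline that Shor’s algorithm upends is essentially key-by-key trial division: the work grows with the square root of the number, which is exponential in its digit count. A sketch of that naive method (this is the slow classical approach, not Shor’s):

```python
def smallest_factor(n):
    """Find the smallest nontrivial factor of n by trial division.

    The loop runs up to sqrt(n) -- harmless for 21, hopeless for the
    hundreds-of-digits numbers used in encryption.
    """
    candidate = 2
    while candidate * candidate <= n:
        if n % candidate == 0:
            return candidate
        candidate += 1
    return n  # no factor found: n itself is prime

print(smallest_factor(21))  # 3, so 21 = 3 x 7
```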

    The devil, as usual, is in the details. Theoretically, a quantum computer could provide massive speedups for factoring, searching databases, biochemistry simulations, and other challenging problems. Practically, however, everyone who tried to build such a computer – including D-Wave – kept running into technical barriers. It is to those barriers that we turn next.

    A Tangle of Qubits

    There’s a fly in the ointment that we’ve overlooked. We’ve been assuming that when you measure a register of qubits, all those qubits decide together what answer to reveal. In reality, each qubit could make that decision independently. If that happens, their answer will be as informative as an image where every pixel is from a different search result. Similarly, tweaks to the superposition need to modify probabilities of entire numbers, not just the individual 0’s and 1’s that comprise the numbers.

    The solution to this problem is entanglement. “Entanglement” describes a particular kind of superposition in a quantum register – one where the individual qubits’ states are bound up with each other. An entangled qubit still behaves probabilistically, but measuring it influences the choices of all the qubits entangled with it.
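    The “measuring one influences the others” behavior can at least be mimicked statistically. A sketch of sampling a maximally entangled pair: each qubit’s answer alone looks like a coin flip, yet the two always agree (again, a classical simulation of the outcome statistics only):

```python
import random

random.seed(1)  # fixed seed for a repeatable run

def measure_entangled_pair():
    """Sample one measurement of two qubits entangled so their answers
    always match: the shared outcome is random, the agreement is not."""
    outcome = random.randint(0, 1)
    return outcome, outcome

pairs = [measure_entangled_pair() for _ in range(1000)]
print(all(a == b for a, b in pairs))  # True: the pair is perfectly correlated
print(sum(a for a, _ in pairs))       # close to 500: each qubit alone is random
```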

    Entanglement is the Achilles’ heel of quantum computing, both in its scarcity and in its abundance. Prodding a large herd of qubits into entanglement gets tricky, so any practical quantum computer must contend with insufficient entanglement. On the other hand, it’s all too easy for qubits to get entangled with objects outside the qubit register. When this happens, they usually stop paying attention to each other, tying their fates to the external environment instead. This phenomenon, called decoherence, is utterly crippling to a quantum computer. Reams of research are devoted to counteracting or delaying it, but thus far the problem has stubbornly resisted these efforts.

    A Maverick Company

    And now, at last, we arrive at D-Wave. When it was first founded in 1999, the computer company started exploring the same types of quantum computers as most researchers. But within four years, faced with the insurmountable problem of decoherence, founder Geordie Rose ditched the goal of a gate-based quantum computer. Instead, Rose decided to shoot for a less well-studied type of device, known as an adiabatic quantum computer (AQC).

    Like its gate-based counterpart, an AQC predisposes its qubits to yield the correct answer to a problem. Instead of repeatedly boosting the correct answer in a superposition, though, an AQC tries to avoid superpositions altogether. The qubits start off unambiguously holding the right answer to a trivial question – a non-superposition. They are then repeatedly measured, but each measurement asks the qubits a slightly different question. On each step, the question asked is brought a little closer to the question of interest, until the final measurement actually asks the question we’re trying to answer.

    Normally, changing the question would be like changing the angle you’re measuring a photon-qubit at: it would make the final measurement a measurement of a superposition, so the answers would become unpredictable. If we change the question slowly enough, though, then when the second measurement pulls the qubits out of the slight superposition introduced by the change in the question, the most likely result is the correct answer to the second question. Repeatedly changing the question in this manner ultimately leaves the qubits holding the correct answer to the final question.
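    The slow-change trick can be illustrated with the polarization qubits from earlier. Rotate the analyzer 90 degrees in one jump and the old answer tells you nothing; rotate it in many small steps, measuring along the way, and the qubit tracks the question almost perfectly. A sketch (this quantum-Zeno-style calculation is an analogy for the adiabatic idea, not D-Wave’s actual protocol):

```python
import math

def p_track(total_deg, steps):
    """Probability a polarization qubit follows the analyzer through
    `steps` equal rotations totaling total_deg degrees, passing every
    intermediate measurement (cos^2 survival chance per step)."""
    per_step = math.radians(total_deg / steps)
    return math.cos(per_step) ** (2 * steps)

print(round(p_track(90, 1), 3))   # 0.0   -- one abrupt 90-degree jump
print(round(p_track(90, 90), 3))  # ~0.97 -- ninety gentle 1-degree steps
```

The slower the question changes (more, smaller steps), the closer the success probability gets to 1.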

    D-Wave’s AQC design is more resilient to decoherence. Because its goal is to avoid entering a superposition, it doesn’t need to protect a delicate qubit state so carefully from outside interference. One thing that does not come with the bargain is an exemption from entanglement, which is still required for an AQC to impose on the qubits the constraints of the question to be answered.

    Since its about-face, D-Wave has become a lightning rod for controversy. The criticism has largely concerned the company’s public claims. In 2007, D-Wave hosted a flashy public demonstration of its first prototype, a 16-qubit chip dubbed Orion. Orion seemed to be miles ahead of any other quantum computing technology, but experts in the field promptly dismissed the demonstration as an uninformative PR stunt. That didn’t fluster D-Wave, which in 2011 went on to announce the 128-qubit D-Wave One. Academics remained largely unimpressed, most notably Scott Aaronson, who appointed himself “Chief D-Wave Skeptic.” Nonetheless, Lockheed Martin found the company’s case compelling enough to purchase the machine.

    The past few months have seen yet another round of back-and-forth. D-Wave announced a more powerful processor, the D-Wave Two, with 512 qubits; researchers disputed its value. In May, NASA and Google announced a new “Quantum Artificial Intelligence Lab” that would operate a D-Wave Two; researchers remained skeptical. Just a few weeks ago, news organizations gushed about new research papers suggesting the D-Wave chip is the real deal; some researchers dismissed those, too.

    Between the media clamor and the general impenetrability of quantum computing, it’s difficult to separate the facts from the posturing, misunderstandings, and wishful thinking. With the background we’ve covered on quantum computing, though, we’re now in a good position to give it a try.

    What’s a D-Wave Worth?

    Ultimately, D-Wave’s offerings raise two main questions: do they provide any advantages over their classical competition, and are they really quantum computers? These questions overlap, but they’re not identical. If D-Wave’s chips do speed up some computations, then whether they’re quantum may not matter practically. On the other hand, if they really are quantum computers, then they represent a significant advance, regardless of whether they’re ready for prime time.

    Within the past few months, one research paper has come out addressing each question. The papers’ answers, according to many press articles, were “yes” and “yes”: D-Wave’s computer is demonstrably an AQC, and is sometimes thousands of times faster than standard computers. That sounds like quite the win for D-Wave! Some digging, however, reveals a more nuanced story.

    Let’s first tackle the question of speedup. In May, Amherst College professor Catherine McGeoch published the first hard data on the speed of a D-Wave chip. Her team compared the D-Wave Two against several standard software packages on problems of the type the D-Wave attempts to solve. (The chip is not a fully general AQC – it is designed to solve only problems like Sudoku puzzles, where you need to find the best solution for a set of constraints.) The D-Wave did indeed find correct solutions to some problems 3600 times faster than the best of the software packages. This was widely cited as evidence that D-Wave had a system that was, in the words of Rose, “better at something than any other option available.”

    But the story doesn’t end there. A few weeks later, other researchers squeezed more speed out of the off-the-shelf software, pushing it to just 14 times slower than the D-Wave. Some experts, meanwhile, complained that the deck was stacked: the software packages are designed to solve a wider variety of problems than the D-Wave chip. It’s no surprise that they couldn’t compete with a processor tailored to the problems being tested; 14 times slower is pretty impressive under the circumstances! In fact, even before McGeoch’s paper was published, Matthias Troyer, a physics professor at ETH Zürich, had beaten D-Wave with an equally specialized program running on a conventional laptop. Based on the data published so far, then, it seems premature to claim that D-Wave technology currently offers serious speedup. McGeoch herself, in a comment on Aaronson’s blog, wrote, “Our tests were never meant to be used to compare platform speeds, and it is wrong to use our data to support arguments either way.”

    Arguably, though, speedup is the less interesting question. Even if a D-Wave chip does beat other technologies, the advantage will only last if it’s also genuinely quantum. Otherwise, D-Wave is basically selling an overpriced desktop computer: all classical computers are roughly equivalent, so anything within the reach of a non-quantum D-Wave could eventually be implemented on a desktop.

    On to the essential issue, then: has D-Wave built an AQC? Critics claimed for years that simpler phenomena than quantum mechanics could explain the D-Wave chips’ behavior. In particular, they alleged that the company had offered no evidence of entanglement in their qubits – a necessary component even for an AQC. That challenge was countered in April by a University of Southern California team led by Sergio Boixo. Boixo’s team compared the behavior of a D-Wave chip against two different simulations: one of the quantum phenomenon D-Wave claims is happening, and the other of the simpler non-quantum possibility. Like the quantum simulation, the D-Wave chip found most problems either extremely difficult or extremely easy. The classical version, meanwhile, had a much more even distribution of difficulty. As even Chief Skeptic Aaronson acknowledges, this is a strong indication that the chip includes quantum entanglement. This makes D-Wave the first to generate entanglement among such a large number of qubits, a significant scientific achievement.

    This brings us, of course, to our final question: should everyone be rushing to buy a D-Wave? If you want to solve a tricky problem tomorrow, the evidence published to date suggests that you’re probably better off just buying a laptop and hiring Matthias Troyer. Rumors of D-Wave’s speedup success have, unfortunately, been greatly exaggerated.

    But one exaggerated claim does not mean we should dismiss the technology. For one thing, one experiment that failed to demonstrate speedup need not be the final word. More importantly, though, D-Wave genuinely does appear to have built a system of entangled qubits, so the possibility for a computing revolution is there. Perhaps the best way to think of the D-Wave machine is as an experiment, on the part of both the company itself and its customers. The device could still turn out to be a dud, but if the D-Wave team is able to prove its worth – as evidence is slowly accumulating that they will be – the technological benefits could be immeasurable.

    Quantum computing is a field with enormous challenges, but also enormous promise. While practically useful quantum computers may still be more aspiration than reality, it will be fascinating to watch as attempts to bridge that gap, both in academia and in industry, continue to unfold.

    The views expressed in this article are those of the author alone, and do not necessarily reflect the views of his company.
