The announcement claims:

> Every additional qubit doubles the search space of the processor. At 1000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which dwarfs the 2^512 possibilities available to the 512-qubit D-Wave Two.
Since we still aren't able to factor any large numbers with it, those 2^1000 possibilities don't really work the way they say they do. I'm guessing there are many caveats behind that description.
I would appreciate any explanation from an expert.
The D-Wave computer is called a quantum computer because it uses some quantum properties when running a simulated-annealing kind of algorithm (a well-known classical algorithm for finding sub-optimal solutions to combinatorial problems, many of them NP-complete). But it is NOT a universal quantum computer on which one could run Shor's or Grover's algorithms. In order to do that, you need to maintain a quantum system of entangled qubits, something that is extremely difficult to achieve due to quantum decoherence, etc.
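For intuition, here is a minimal sketch of classical simulated annealing on an Ising-style objective, the kind of discrete optimization D-Wave's hardware targets. The couplings and cooling schedule are invented for illustration:

```python
# Minimal classical simulated annealing on a toy Ising objective.
# The couplings J and the cooling schedule are made up for illustration.
import math
import random

def energy(spins, J):
    # Ising energy: sum of J[i][j] * s_i * s_j over the coupled pairs.
    return sum(J[i][j] * spins[i] * spins[j] for i in J for j in J[i])

def anneal(J, n, steps=10000, t0=2.0):
    spins = [random.choice([-1, 1]) for _ in range(n)]
    e = energy(spins, J)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling schedule
        i = random.randrange(n)
        spins[i] *= -1                       # propose a single spin flip
        e_new = energy(spins, J)
        # Accept downhill moves always, uphill with Boltzmann probability.
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new
        else:
            spins[i] *= -1                   # reject: undo the flip
    return spins, e

J = {0: {1: 1.0}, 1: {2: -1.0}}              # toy 3-spin instance
print(anneal(J, 3))                          # e.g. ([-1, 1, 1], -2.0)
```

Quantum annealing replaces the thermal fluctuations above with quantum tunneling; whether that buys a speedup on D-Wave's hardware is exactly what's in dispute.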
So, saying that "the new processor considers 2^1000 possibilities simultaneously" is basically a ton of BS. Even a real quantum computer cannot do that effectively.
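To make that concrete: an n-qubit register can hold a uniform superposition over all 2^n basis states, but measuring it returns a single random outcome. In symbols:

```latex
|\psi\rangle = \frac{1}{\sqrt{2^{n}}} \sum_{x=0}^{2^{n}-1} |x\rangle
\quad \xrightarrow{\text{measure}} \quad
\Pr[\text{outcome } x] = \frac{1}{2^{n}}
```

Useful quantum algorithms have to arrange interference so that wrong answers cancel and right answers reinforce, which is why the "tries everything at once" picture is misleading.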
Nevertheless, I think D-Wave is doing great work and is definitely taking steps towards a real QC.
Well, I will mention one relevant problem that could be annihilated with this: ad placement. Google's AdWords is all based on an auction mechanism, at the end of which you have to find the best allocation of ads to bidders (to slots) - which has an evident combinatorial nature; to find the optimum you would have to try all possible combinations. This is typically solved in approximate ways. It may not be a "conventional quantum computer" but it's pretty useful anyway :) I can't wait to toy with one.
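For anyone curious, here's a toy brute-force version of that allocation problem; the values are invented and this is not Google's actual mechanism. One caveat to the parent: the plain one-to-one assignment below is actually solvable in polynomial time (Hungarian algorithm); it's the extra constraints and externalities in real auctions that make the general problem hard.

```python
# Toy ad-slot allocation by exhaustive search: n! candidate assignments.
# The value matrix is invented for illustration.
from itertools import permutations

def best_allocation(value):
    # value[a][s] = expected revenue of showing ad a in slot s
    n = len(value)
    best, best_rev = None, float("-inf")
    for perm in permutations(range(n)):       # perm[a] = slot given to ad a
        rev = sum(value[a][s] for a, s in enumerate(perm))
        if rev > best_rev:
            best, best_rev = perm, rev
    return best, best_rev

value = [[3, 1, 2],
         [2, 4, 1],
         [5, 2, 3]]                           # 3 ads x 3 slots
print(best_allocation(value))                 # -> ((2, 1, 0), 11)
```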
The statement is also a common fallacy that Scott Aaronson has addressed many times. While some exponential speedups are possible, there are no indications that they are possible in general for problems in NP, and even if they were, we know that they would have to exploit the structure of the problem and not merely "consider possibilities simultaneously".
"a D-Wave computation is both quantum and analog, with nothing resembling a discrete instruction or basic operation that can be counted"
So it's not really a quantum computer at all in the sense that the term is usually understood nowadays. It's an analog computer. And if you believe Scott Aaronson (and I do) it's a classical analog computer.
These results seem a bit misleading - they're comparing their multi-million-dollar system to a classical optimizer running single-threaded on a 3-year-old processor (Intel Xeon E5-2670), and then saying "look how much faster we are than a classical optimizer!"
How does this compare to a giant cluster of new Xeons with faster interconnects?
""CPLEX is a general-purpose, off-the-shelf exact optimization package. Of course an exact solver can’t compete against quantum annealing—or for that matter, against classical annealing or other classical heuristics! """
This is BS. It's really easy to shoot down a new technology in its second iteration by comparing it with the gazillionth iteration of super-duper-optimised older technology. So what if CPLEX is this and that; it's still running on a platform that has had billions of dollars poured into optimizing and improving it. What do you think a QC will be capable of once it has had a similar investment in R&D? The truth is, classical computers are reaching their zenith. Those who stubbornly denounce new technologies based on such a truly horrific comparison as the one we have here are doomed to be clutching their 8-core beast of yesteryear, sulking as the part of the world that is willing to adapt and change moves past them at a rapid pace.

I can promise you there were many similar arguments when radio, TV broadcasting, computers, and LCDs first came about. Sure, people can argue against a new technology based on its underlying fundamental principles, but saying a new technology is stupid because it's not as good (yet!) as the current technology is the worst argument in human history. By that logic, Albert Einstein was the most stupid person who ever lived - because look, as a 3-year-old he was useless. The argument forgets that a 3-year-old Einstein is a much different person than a 23-year-old Einstein. A 23-year-old quantum computer will be a much different thing than a 3-year-old quantum computer!
Read the article. Aaronson is in fact arguing against a new technology based on the fundamental principles of the technology.
D-Wave's machine is not a quantum computer; it isn't even a quantum annealing computer. It (probably, but not certainly) exploits a very particular effect to optimize a very particular problem. It cannot run Shor's or Grover's algorithms, and it never will be able to.
Not to mention the continual energy requirements of cooling a system to near absolute zero, in addition to the fixed cost of the system. Although there are almost certainly some specialized niches where it would make sense.
For it to make sense to buy one, it would have to actually be a better all-around solution to a problem. Say this machine costs $10 million (the same price as one of their earlier machines), and is the size and power consumption of many racks of equipment. Do you spend $10 million on this, or $10 million on 4 racks of Xeon servers with GPUs?
Their machine is probably hilariously worse than the commodity solution for nearly all real-world problems.
If we're talking about the D-Wave in particular then no, I don't think there's anybody who's going to prefer them for real work as they're not competitive. I'm pretty sure most people are buying them as experimental platforms to play with an emerging technology and nobody expects them to actually be practical.
If we're talking about quantum computers in general, then there's several places where a hypothetical ideal quantum computer with competitive performance would have an advantage. The typical example is in cryptography, where Shor's integer factorization algorithm ( https://en.wikipedia.org/wiki/Shor%27s_algorithm ) would likely be fast enough to break modern public key cryptography in reasonable (i.e., polynomial) time.
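For the curious, the quantum speedup in Shor's algorithm lives entirely in one step, finding the order of a random a modulo N; the rest is classical number theory. A small sketch, with the order found by brute force (the exponentially expensive part a quantum computer replaces):

```python
# Classical skeleton of Shor's algorithm. Only find_order would run on
# a quantum computer; here it's brute force, which is exponential.
from math import gcd

def find_order(a, N):
    # Smallest r > 0 with a^r = 1 (mod N); requires gcd(a, N) == 1.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    g = gcd(a, N)
    if g != 1:
        return g                  # lucky: a already shares a factor with N
    r = find_order(a, N)
    if r % 2 == 1:
        return None               # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root: retry with another a
    return gcd(y - 1, N)          # a nontrivial factor of N

print(shor_classical_part(15, 7))  # -> 3 (7 has order 4 mod 15)
```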
My understanding is that quantum computers are very good at simulating quantum physics. Which, granted, probably doesn't have many commercial applications.
"Normal" quantum computers are good at simulating quantum physics, or would be if we could build big enough ones. D-Wave's adiabatic allegedly-quantum computer is not, so far as anyone knows.
The only thing (so far as I am aware) that anyone knows how to do with D-Wave's machine is to use it to find approximate solutions to certain optimization problems. The last I heard, it did so slower than a standard-issue laptop running (non-quantum!) software designed to find approximate solutions to the same optimization problem that D-Wave's underlying hardware models.
That was before the release of the latest D-Wave machine. I don't think it was ever clear whether D-Wave's device scales better than a conventional computer when trying to solve larger problems. Perhaps it does, in which case their new $10M machine may outperform commodity laptops.
[EDITED to add: see http://www.archduke.org/stuff/d-wave-comment-on-comparison-w... and the other pages linked therefrom for some comparisons between D-Wave's reported performance and that of some heuristic optimization software running on a commodity laptop. Disclosure: the author is a friend of mine.]
AFAIK this kind of problem is easily parallelized, and it should be possible to accelerate it using GPGPU... So they may not be able to beat a well-tuned implementation on a desktop workstation with a few GPUs costing less than $10K.
I think the most significant gain made here is not speed but power consumption. On http://www.dwavesys.com/d-wave-two-system they compare a supercomputer using almost 2 gW while the 2X uses 27 kW when you factor in "the fridge". I don't know if that's an apples-to-apples comparison; hopefully the supercomputer is set up to solve comparable problems. If I'm understanding this properly, they have a great product on their hands and a great many units to sell.
> On http://www.dwavesys.com/d-wave-two-system they compare a supercomputer using almost 2 gW while 2X uses 27 kW when you factor in "the fridge".
Megawatts, not gigawatts. A gigawatt is a thousand times larger than a megawatt, and a million times larger than a kilowatt. A two gigawatt computer would consume the entire output of a large coal power station, such as https://en.wikipedia.org/wiki/Homer_City_Generating_Station
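Assuming the intended figure was 2 MW, the arithmetic works out as:

```latex
1\,\text{GW} = 10^{3}\,\text{MW} = 10^{6}\,\text{kW},
\qquad
\frac{2\,\text{MW}}{27\,\text{kW}} = \frac{2000}{27} \approx 74
```

So the claimed power advantage would be roughly 74x, not the ~74,000x that "2 gW" would imply.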
According to Aaronson and others, the D-Wave architecture has not demonstrated any behavior that can't be explained using classical physics, so just making the device bigger shouldn't change it into a quantum device with a quantum speed up. That said -- who knows, maybe the architecture is different enough from everything else that it could be a better classical annealer than standard processors (though it's only ~1000 bits, so it probably can't outperform any conventional processor sold today with billions of bits).
> According to Aaronson and others, the D-Wave architecture has not demonstrated any behavior that can't be explained using classical physics
That's not Scott's definitive opinion anymore. For example, from [1]:
> Now, I’d say, D-Wave finally has cleared the evidence-for-entanglement bar—and, while they’re not the first to do so with superconducting qubits, they’re certainly the first to do so with so many superconducting qubits. [caveats about D-wave over-hyping and not having a demonstrated speedup yet]
What language is used to program for this processor? I wouldn't think you'd take the same approach programming a quantum computer as you would a conventional computer. Is there a quantum computer simulator? What does the development workflow look like? How do you debug? Boy I have a lot of questions!
There is one button - "on". There is no program. The program is baked into the machine itself. The input data is carefully prepared and fed into the system. The machine then runs and spits out the output. That's it.
You should think of it more as a piece of lab equipment rather than a computer.
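To give a flavor of what "carefully prepared input" means: problems are typically posed as a QUBO, a matrix Q such that the machine searches for the bit vector x minimizing x^T Q x. The max-cut encoding below is standard; the brute-force solver just stands in for what the annealer does physically, and none of this is D-Wave's actual toolchain.

```python
# Encoding max-cut as a QUBO, the input format quantum annealers take.
# The brute-force solver stands in for the annealing hardware.
from itertools import product

def max_cut_qubo(edges):
    # For each edge (i, j), reward placing i and j on opposite sides:
    # minimizing x^T Q x then maximizes the number of cut edges.
    Q = {}
    for i, j in edges:
        Q[(i, i)] = Q.get((i, i), 0) - 1
        Q[(j, j)] = Q.get((j, j), 0) - 1
        Q[(i, j)] = Q.get((i, j), 0) + 2
    return Q

def brute_force(Q, n):
    def cost(x):
        return sum(v * x[i] * x[j] for (i, j), v in Q.items())
    return min(product([0, 1], repeat=n), key=cost)

Q = max_cut_qubo([(0, 1), (1, 2), (2, 0), (0, 3)])  # toy 4-node graph
print(brute_force(Q, 4))  # -> (0, 0, 1, 1): a 3-edge cut (one of several optima)
```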
Adiabatic quantum computers are polynomially equivalent to what is usually referred to as a quantum computer, so no surprise there. D-Wave's machine is neither, nor is it faster than classical computers. At best it is somewhat interesting from an engineering point of view.