Physicists around the world are engaged in a number of very large-scale research programs (for example, QC Australia) hoping to create a “quantum computer” – a device that exploits the unusual and often counterintuitive laws of quantum mechanics (the framework that governs how “small <1>” things behave) to perform computations and calculations in fundamentally new and sometimes very powerful ways.
There is a compelling reason why the quantum computer is such an important target for research: the fast pace of classical computer development is destined to hit a brick wall in the coming decades, as transistors shrink to the size at which quantum mechanics becomes a dominant factor in how electrons move through the chip. At that point, the steady progress described by Moore's Law will probably cease, as it becomes impossible to cram components any closer together on silicon wafers. To continue increasing computational power, it becomes necessary to exploit new models of computation. These new models are already showing some dramatic results.
One now well-established method is “probabilistic” or “Monte Carlo” computing. The idea is to use conventional computers coupled with effective random-number generators to speed up calculations, at the cost of some accuracy. For example, rather than calculating the volume of a complicated object directly, a computer might pick a large number of random points in a space surrounding the object and count what fraction of those points lie within the object. As the number of points increases, so does the precision of the estimate.
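The volume-estimation idea above can be sketched in a few lines of Python. The choice of target object (a unit sphere) and the function name are illustrative, not taken from the original:

```python
import random

def monte_carlo_sphere_volume(n_points, seed=0):
    """Estimate the volume of a unit sphere by sampling random points
    in the enclosing cube [-1, 1]^3 and counting the fraction that
    land inside the sphere."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_points):
        x, y, z = rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y + z * z <= 1.0:
            inside += 1
    cube_volume = 8.0  # the enclosing cube has side length 2
    return cube_volume * inside / n_points

# The exact volume is 4/3 * pi, about 4.19; the estimate tightens
# as n_points grows.
estimate = monte_carlo_sphere_volume(100_000)
```

The accuracy traded away is statistical: the error shrinks only as the square root of the number of samples, which is exactly the accuracy-for-speed bargain described above.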
A newer paradigm is quantum information. Rather than manipulating bits (1’s and 0’s, usually represented by voltages in electronic components), the quantum computer processes the quantum version of a bit – a qubit. Qubits are represented by systems that have two distinct states, |1> and |0> (the brackets are just a conventional way of denoting a quantum state), such as an electron within an atom – zero might be the ground state and one the excited state. Unlike bits, however, qubits possess the uniquely quantum-mechanical property known as superposition: a qubit can be in the one and zero states simultaneously <2>. This allows a quantum computer to perform several feats that a classical computer cannot.
For instance, if you wanted to know something about the results of a calculation for several different inputs, you could create an input state that is the superposition of all the inputs, process this state through a quantum computer, and your output would be the superposition of all the answers. Unfortunately, this is where we run into a problem: when we try to read out this superposition, we collapse it randomly down to just one of the answers. However, for specific problems there are ways to move the information you are interested in around within the state so that it becomes directly accessible. This, in principle, allows results to be determined with one computation rather than several, which can be a huge benefit if each individual calculation is computationally expensive.
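The readout problem described above can be illustrated with a minimal sketch, assuming we model a single qubit as a pair of real amplitudes in plain Python (the `measure` helper is an illustrative name, not a standard API):

```python
import random

def measure(amplitudes, rng):
    """Measure a qubit with real amplitudes (a0, a1) for |0> and |1>.
    The outcome is 1 with probability a1**2, and the state collapses:
    any further measurement must repeat the same result."""
    a0, a1 = amplitudes
    outcome = 1 if rng.random() < a1 ** 2 else 0
    collapsed = (0.0, 1.0) if outcome == 1 else (1.0, 0.0)
    return outcome, collapsed

rng = random.Random(1)
# An equal superposition (|0> + |1>)/sqrt(2): both outcomes equally likely.
superposition = (2 ** -0.5, 2 ** -0.5)
outcome, state = measure(superposition, rng)
# Re-measuring the collapsed state always repeats the first outcome.
repeat, _ = measure(state, rng)
assert repeat == outcome
```

Before measurement the state genuinely holds both answers; after measurement only one survives, which is why quantum algorithms must rearrange the state so the desired answer is the one that comes out.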
Although the field is still young, there are already several examples of problems (such as finding the prime factors of large numbers; see: Factoring Large Numbers) that are believed to be extraordinarily difficult to solve using conventional algorithms, but which can potentially be solved much faster on a quantum computer. For example, a quantum algorithm (Shor's algorithm) can find the prime factors of a large number in very much less time than the fastest published <3> classical algorithm.
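To see why factoring is a promising target, consider the most naive classical approach, trial division, sketched below. Its running time grows roughly with the square root of the number, i.e. exponentially in the number of digits. (This is purely illustrative; serious classical factoring algorithms such as the number field sieve are far more sophisticated, though still super-polynomial.)

```python
def trial_division(n):
    """Factor n by trying every candidate divisor up to sqrt(n).
    The loop runs on the order of sqrt(n) times, so the cost grows
    exponentially with the number of digits in n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))  # → [3, 5]
```

A 300-digit number defeats this approach utterly, whereas Shor's algorithm would factor it in a number of steps polynomial in the digit count.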
There are a number of factors standing in the way of a successful quantum computer, but probably the most significant is decoherence. When you measure a superposition state, it collapses, destroying the interesting and powerful quantumness of the state. But when the environment interacts with a qubit, it implicitly measures the state too! If the environment collapses a superposition between the steps of a quantum computation, the result will not be as expected, ruining the calculation. To avoid this problem, physicists are simultaneously trying to find systems that interact minimally with their environment and to develop algorithms that correct the errors decoherence creates. These error-correcting routines can then be spliced into other algorithms to make them more robust in the face of decoherence. Despite all this effort, it is still unclear whether decoherence makes quantum computation fundamentally impossible or simply rather difficult.
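The way an environmental "measurement" between computation steps ruins a result can be shown in a small simulation. This is a toy model (a qubit as a pair of real amplitudes, with the standard Hadamard gate H), assumed here purely for illustration: applying H twice should return |0> with certainty, but a collapse in between destroys that interference.

```python
import random

# Toy model: a qubit is a pair of real amplitudes [a0, a1], and the
# Hadamard gate H turns |0> into an equal superposition (and back).
H = [[2 ** -0.5, 2 ** -0.5],
     [2 ** -0.5, -(2 ** -0.5)]]

def apply(gate, state):
    """Apply a 2x2 gate matrix to a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def environment_measure(state, rng):
    """The environment 'measures' the qubit, collapsing it to |0> or |1>."""
    return [0.0, 1.0] if rng.random() < state[1] ** 2 else [1.0, 0.0]

rng = random.Random(0)

# Ideal run: applying H twice returns |0> with certainty, because
# the amplitudes interfere.
state = apply(H, apply(H, [1.0, 0.0]))  # [1.0, 0.0] up to rounding

# Decohered run: a collapse between the two gates destroys the
# superposition, so the final readout becomes a 50/50 coin flip.
noisy = apply(H, environment_measure(apply(H, [1.0, 0.0]), rng))
```

The ideal run gives the right answer every time; the decohered run gives a useless random answer half the time, which is exactly why shielding qubits from their environment matters so much.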
<1> The definition of “small” in this sense is deliberately rather vague – in most cases, something the size of an electron or atom needs quantum mechanics to be described accurately, whereas the parabolic trajectory of a juggling ball does not. Unless you need to talk about the color of the balls – then you’ll need to look at the quantum-mechanical properties of the material on their surface.
<2> There are limits to superposition – a measurement of a superposition state will reduce it to one of the two basis states. For example, the state |1> will always be measured to be in the “1” state, while the state (|1>+|0>), when measured, will have a 50% probability of being found in the “1” state. Moreover, once it is measured, all further measurements will give the same result. The technical jargon for this is “collapsing the wavefunction”.

<3> As factoring is so important to cryptography, it is likely that intelligence agencies have developed, in secret, methods to find prime factors that they are understandably unwilling to share. By way of precedent, it has been revealed that British intelligence developed and used an RSA-style public-key encryption system several years before its public “discovery”.