‘t Hooft and quantum computation

23/08/2012


Gerard ‘t Hooft is one of the greatest living physicists and one of the main contributors to the Standard Model. He was awarded the Nobel Prize in Physics in 1999. I had the opportunity to meet him in Piombino (Italy) at a conference in 2006, where he talked about his view on the foundations of quantum mechanics. He is trying to understand the layer behind quantum mechanics, and this question has been a source of discussions here, where he has tried to find a fair audience to defend his view. Physics StackExchange, unlike MathOverflow for mathematicians, has not reached the critical mass at which the stars of the community take the time to contribute, as most of the mathematical community, Fields medalists included, does on the latter. The reason lies in the different attitudes of these communities, which can make life hard for a Nobel laureate, while the mathematicians’ approach appears polite and often very helpful. This could also be seen on the occasion of the recent passing of the great mathematician William Thurston (see here).

‘t Hooft’s thesis received an answer from Peter Shor, who is one of the masters of quantum computation. What makes the matter interesting is that two authoritative people discussed foundations. Shor made clear, as you can read, that, out of three possibilities, a failure of quantum computation could give strong support to ‘t Hooft’s view, and this is the case ‘t Hooft chose. Today, there is no large-scale quantum computer, notwithstanding large efforts by research and industry, and a large-scale quantum computer is what we need to turn this idea into a meaningful device. The main reason is that, as you enlarge your device, environment-driven decoherence turns it into a classical system and it loses its computational capabilities. But, in principle, there is nothing else to prevent a large-scale quantum computer from working. So, if you are able to remove external disturbances well enough, your toy will turn into a powerful computational machine.

People working in this research area rely heavily on the idea that, in principle, there is nothing preventing a quantum device from becoming a large-scale computing device. ‘t Hooft contends that this is not true and that, at some stage, a classical computer will always outperform the quantum computer. Today, this has not been contradicted by experiment, as we do not have a large-scale quantum computer yet.

‘t Hooft’s idea has support from mathematical theorems. I tried to point this out in the comments below Shor’s answer and I received the standard reply:

and how does your interpretation of this theorem permit the existence of (say) superconductors in large condensed matter systems? How about four-dimensional self-correcting topological memory?

This is a refrain you will always hear: when some mathematical theorem seems to contradict someone’s pet theory, theoretical physicists immediately become skeptical about mathematics. Of course, as everybody can see, most many-body quantum systems turn into classical systems, and this is a lesson we learn from decoherence, but a few many-body systems do not. So, rather than thinking that these few are the peculiar ones, we prefer to think that a mathematical theorem is attacking our pet theory and, rather than making a minimal effort to understand it, we attack with standard arguments, as if this could turn a mathematical truth into a false statement that does not apply to our case.

There are a couple of mathematical theorems that support the view that increasing the number of elements in a quantum system makes it unstable, so that it turns into a classical system. The first is here and the second is this. These theorems are about quantum mechanics, their authors use the laws of quantum mechanics consistently, and they are very well-known, if not famous, mathematical physicists. The consequence of these theorems is that, as the number of components of a quantum system increases, in almost all cases the system turns into a classical one, and this becomes a principle that impedes the general working of a large-scale quantum computer. These theorems strongly support ‘t Hooft’s idea. Apparently, they clash with the urban legend about superconductors and all that. Of course, this is not true, and Shor’s fears can easily be driven away (even if I was punished for this). What makes a system unstable with respect to the number of components can make it, in some cases, unstable with respect to quantum perturbations that can be amplified to a macroscopic level. So, these theorems are just saying that in almost all cases a system will be turned into a classical one, but there exists a non-empty set of exceptions, providing a class of macroscopic quantum systems, that can usefully contain a quantum computer. This set is not as large as one could hope, but there is no reason to despair whatsoever.
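To make the “almost all cases” intuition concrete, here is a minimal numerical sketch in the spirit of the standard Zurek-type toy model, not a reproduction of the theorems cited above: a single system qubit is coupled to N environment qubits, and the off-diagonal element of its reduced density matrix shrinks as N grows. The random coupling angles and every parameter below are illustrative assumptions.

# Toy model: one qubit in (|0> + |1>)/sqrt(2) coupled to N environment qubits.
# Each environment qubit is rotated by a random angle only when the system is
# in |1>, so tracing out the environment leaves the residual coherence
# |rho_01| = 1/2 * prod_k |cos(theta_k)|.

import numpy as np

rng = np.random.default_rng(42)

def residual_coherence(n_env: int) -> float:
    """Magnitude of the off-diagonal element of the reduced density matrix
    after coupling the system qubit to n_env environment qubits."""
    thetas = rng.uniform(0.0, np.pi, size=n_env)  # random coupling angles (assumed)
    # each environment qubit contributes an overlap <E_k^0|E_k^1> = cos(theta_k)
    return 0.5 * np.prod(np.abs(np.cos(thetas)))

for n in (1, 5, 10, 50, 100):
    print(f"N = {n:4d}  ->  |rho_01| ~ {residual_coherence(n):.3e}")

For generic couplings the product of overlaps vanishes roughly exponentially in N, which is the toy-model counterpart of the statement that almost every many-component system ends up behaving classically, while finely tuned couplings (angles close to 0 or \pi) give the small set of exceptions mentioned above.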

Laws of physics are just conceived to generate a classical world.

M. Hartmann, G. Mahler & O. Hess, “Gaussian quantum fluctuations in interacting many particle systems,” Lett. Math. Phys. 68, 103–112 (2004). arXiv: math-ph/0312045v2

Elliott H. Lieb & Barry Simon, “Thomas-Fermi Theory Revisited,” Phys. Rev. Lett. 31, 681–683 (1973). DOI: 10.1103/PhysRevLett.31.681


The question of the arrow of time

31/08/2009

A recent paper by Lorenzo Maccone in Physical Review Letters (see here) has produced some fuss. He tries to solve the question of the arrow of time from a quantum standpoint. Lorenzo is currently a visiting researcher at MIT and, together with Vittorio Giovannetti and Seth Lloyd, he has produced several important works in the area of quantum mechanics and its foundations. I had the luck to meet him, together with Vittorio, at a conference in Gargnano on Lake Garda. So, it is no surprise to see this paper of his attempting to solve one of the outstanding problems of physics.

The question of the arrow of time is still open. Indeed, one might think that Boltzmann’s H-theorem closed this question definitively, but this is false. This theorem was the starting point of a question yet to be settled. Boltzmann presented a first version of his theorem that showed one of the most beautiful laws in physics: the relation between entropy and probability. This proof was criticized by Loschmidt (see here) and the criticism was sound. Indeed, Boltzmann had to modify his proof by introducing the so-called Stosszahlansatz, or molecular chaos hypothesis, thereby introducing time asymmetry by hand. Of course, we know for certain that this theorem is true, and so the hypothesis of molecular chaos must be true as well. The question of the arrow of time will therefore be solved only when we know where molecular chaos comes from. This means that we need a mechanism, a quantum one, to explain Boltzmann’s hypothesis. It is important to emphasize that, to this day, no proof of the H-theorem exists that removes such an assumption.
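For reference, here is the textbook content of the theorem in modern notation, sketched with the standard definitions. Boltzmann’s functional is

H(t) = \int f(\mathbf{x},\mathbf{v},t)\,\ln f(\mathbf{x},\mathbf{v},t)\,d^3x\,d^3v,

and the Stosszahlansatz is the assumption that colliding pairs are uncorrelated,

f_2(\mathbf{x},\mathbf{v}_1,\mathbf{v}_2,t) = f(\mathbf{x},\mathbf{v}_1,t)\,f(\mathbf{x},\mathbf{v}_2,t).

Under this assumption the Boltzmann equation yields

\frac{dH}{dt} \le 0,

so that -H behaves like an entropy; the time asymmetry enters precisely through the factorization above, which is what a deeper, quantum mechanism should explain.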

Quantum mechanics could be the answer to this situation, but only if we knew how reality forms. An important role in this direction could be played by environmental decoherence and by how it relates to the question of the collapse. A collapse immediately grants asymmetry in time, and here one has to cope with many-body physics with a very large number of components. In this respect there exists a beautiful theorem by Elliott Lieb and Barry Simon, two of the most prominent living mathematical physicists, that says:

The Thomas-Fermi model is the limit of quantum theory when the number of particles goes to infinity.

For a more precise statement you can look at Reviews of Modern Physics, page 620ff. The Thomas-Fermi model is a semiclassical model, and so this fundamental theorem can be restated simply as saying that the limit of a very large number of particles in quantum mechanics is the classical world. In some way, there exists a large class of Hamiltonians in quantum mechanics that are not stable with respect to such a particle limit and lose quantum coherence. We know for certain that there exist other situations where quantum coherence is kept to a large extent in many-body systems. This means that there exist situations where quantum fluctuations are not damped out as the number of particles increases. But the very existence of the effect implied by the Lieb and Simon theorem means that quantum mechanics has an internal mechanism producing time asymmetry. This, together with environmental decoherence (e.g. the box containing a gas is classical, and so on), should grant a full understanding of the situation at hand.
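To give the flavor of the precise statement, here is a rough paraphrase for the atomic case, with unit conventions and the exact hypotheses left to the Lieb and Simon papers. The Thomas-Fermi energy functional for an atom of nuclear charge Z is

\mathcal{E}_{TF}[\rho] = c_{TF}\int \rho(\mathbf{x})^{5/3}\,d^3x - Z\int \frac{\rho(\mathbf{x})}{|\mathbf{x}|}\,d^3x + \frac{1}{2}\iint \frac{\rho(\mathbf{x})\,\rho(\mathbf{y})}{|\mathbf{x}-\mathbf{y}|}\,d^3x\,d^3y,

with c_{TF} a fixed constant, and the theorem states that the true quantum ground-state energy E^{Q}(Z) approaches the Thomas-Fermi minimum E^{TF}(Z) as the number of particles grows,

\frac{E^{Q}(Z)}{E^{TF}(Z)} \longrightarrow 1 \qquad \text{as } Z \to \infty,

i.e. the semiclassical functional becomes exact when the number of particles goes to infinity.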

Finally, we can say that Maccone’s attempt, being along this line of thought, is a genuine way to understand the origin of time asymmetry from quantum mechanics. I hope his ideas will meet with luck.

Update: At Cosmic Variance you will find an interesting post and a discussion worth reading, involving Sean Carroll, Lorenzo Maccone and others, on the questions opened by Lorenzo’s paper.


Environmental decoherence or not?

23/07/2008

One of the most relevant open questions currently under study in physics is how a classical world emerges from the laws of quantum mechanics. This can be summed up in a standard philosophical question: “How does reality form?”. We have learnt from standard quantum mechanics courses that one just takes the mathematical limit \hbar\rightarrow 0 and the classical limit emerges from quantum mechanics. Indeed, as always, things are not that simple: in Nature \hbar is never zero, and this means that all the objects we see should be in a quantum mechanical state. So, why do we never observe weird behaviors in our everyday life? What is it that cuts out most of the Hilbert-space states to maintain a systematic classical behavior in macroscopic objects? Stated more carefully, the question can be put as: “Where is the classical-quantum border, if any?”. Indeed, if we were able to draw such a border we could buy the whole Copenhagen interpretation and be happy.
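One standard way to see what the textbook limit means, sketched here in the usual path-integral notation, is the stationary-phase argument. The propagator is

\langle x_f,t_f|x_i,t_i\rangle = \int \mathcal{D}x(t)\, e^{\frac{i}{\hbar}S[x(t)]},

and as \hbar\rightarrow 0 the rapidly oscillating phase cancels all contributions except those from paths with \delta S = 0, that is, the classical trajectories. But this is a formal limit: since \hbar is fixed and nonzero in Nature, something else must suppress the non-classical contributions for macroscopic objects.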

A proposal that is meeting increasing agreement is environmental decoherence. In this case one assumes that an external agent does the job, erasing all the quantum behavior of a system and leaving only a superselected set of pointer states that grants a classical behavior. From a mathematical standpoint one can see the interference terms disappear, but one cannot say in what exact state the system is left, so the measurement problem remains in some way unsolved. It should be said that the starting point of this approach was a pioneering work by Caldeira and Leggett, who first studied quantum dissipation. To have an idea of how environmental decoherence should work, Einstein’s question “Do you really believe that the moon is not there when we do not look at it?” is answered through the external effect of solar radiation and, maybe, the cosmic microwave background radiation, which should act as a constant localizing agent, even if I have never seen such an interpretation in the current literature. It is clear that trying to explain a physical effect by a third, undefined agent is somewhat questionable, and it sometimes happens that one reads in the literature applications of this idea without a real understanding of what such an agent should be. This happens typically in cosmology, where the emergence of classicality cannot be easily understood. How unsatisfactory such an approach may be can be seen from the relevant conclusion that it gives strong support to a multiverse interpretation of quantum mechanics. Of course, this can be a welcome conclusion for string theorists.
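To see in formulas what “interference terms disappear” means, the Caldeira-Leggett master equation in the high-temperature limit contains, besides the Hamiltonian and friction terms, a decoherence term of the schematic form (a textbook sketch; the precise constants depend on the model)

\frac{\partial\rho}{\partial t}\bigg|_{dec} = -\Lambda\,[x,[x,\rho]], \qquad \Lambda \sim \frac{2m\gamma k_B T}{\hbar^2},

which in the position representation damps the off-diagonal elements of the reduced density matrix as

\rho(x,x',t) \simeq \rho(x,x',0)\,e^{-\Lambda (x-x')^2 t},

so superpositions separated by a macroscopic distance |x-x'| lose coherence extremely fast, while the diagonal, classical-looking part is left untouched and the problem of selecting one definite outcome remains.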

Today a preprint by Steven Weinstein of the Perimeter Institute appeared on arXiv, proving a theorem that shows that environmental decoherence cannot be effective in producing classical states from generic quantum states. Indeed, all applications of environmental decoherence in the literature consider well-built toy models that, by construction, produce classicality; this behavior is not generic, in agreement with the theorem proved by Weinstein. So, the question is still there, unanswered: “Where is the classical-quantum border, if any?”. We have already seen here that the thermodynamic limit, that is the border at infinity, is an answer, but the question requires a deep experimental investigation. A hint of this was seen in interference experiments with large molecules by Zeilinger’s group. This group has not produced any new results on this since 2003, but their latest paper showed some kind of blurry behavior with larger molecules. This effect has not been confirmed, and one cannot say whether it was just a problem with the apparatus.

The question is still there, and we can state it as: “How does reality form?”.

