Intrinsic decoherence observed again!

25/05/2013


Decoherence is the effect that makes a quantum system behave classically. The best known mechanism of this kind is environmental: the interaction of an open quantum system with its surroundings causes the loss of quantum coherence. This effect is well established experimentally and must be considered acquired knowledge. On the other side, it is a legitimate scientific question to ask whether a closed quantum system can ever display classical behavior for some other reason. I have already put forward my take on this in this blog (see here). This week in Physical Review Letters (see here and here), a paper appeared showing how intrinsic decoherence comes out in an experimental setup of two coupled kicked rotors. Kicked rotors are the epitome of studies on classical chaos and the corresponding quantum behavior. It is known that, classically, such a system displays diffusion above a certain threshold, first computed by Boris Chirikov. The corresponding quantum system instead localizes when its classical counterpart is chaotic. This is the hallmark of properly quantum behavior, which refrains from the chaos proper to classical nonlinear systems: the Schrödinger equation is linear and the superposition principle applies. In 1988, S. Adachi, M. Toda, and K. Ikeda showed the beautiful result that two such coupled systems lose quantum coherence (see here). The paper by Bryce Gadway, Jeremy Reeves, Ludwig Krinner, and Dominik Schneble (see here) is an experimental proof that the original theoretical insight is correct, and we have again evidence that environmental decoherence is not the whole story. An interesting account is given here. This paper is really striking and opens the door to a new class of experiments in which closed quantum systems, possibly with many degrees of freedom, will be studied to give a full understanding of the quantum-classical transition.
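The classical side of the story is easy to see numerically. Below is a minimal sketch (function names and parameters are my own, purely illustrative) of the Chirikov standard map for an ensemble of classical kicked rotors: below the Chirikov threshold (K ≈ 0.97) the momentum stays bounded, while well above it the momentum variance grows roughly linearly in the number of kicks, i.e. diffusion. It is exactly this diffusion that the quantum system suppresses by dynamical localization.

```python
import numpy as np

def standard_map_diffusion(K, n_rotors=2000, n_kicks=500, seed=0):
    """Iterate the Chirikov standard map
         p -> p + K*sin(theta),  theta -> theta + p  (mod 2*pi)
    for an ensemble of rotors and return <p^2> after each kick."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_rotors)
    p = np.zeros(n_rotors)
    var = np.empty(n_kicks)
    for t in range(n_kicks):
        p = p + K * np.sin(theta)          # kick
        theta = (theta + p) % (2.0 * np.pi)  # free rotation
        var[t] = np.mean(p ** 2)
    return var

# Below the Chirikov threshold transport is bounded; well above it
# <p^2> grows diffusively, D ~ K^2/2 in the quasilinear estimate.
var_low = standard_map_diffusion(K=0.5)
var_high = standard_map_diffusion(K=5.0)
```

Running this, `var_high` keeps growing with the kick number while `var_low` stays of order one, the classical signature of the chaotic regime discussed above.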

Bryce Gadway, Jeremy Reeves, Ludwig Krinner, & Dominik Schneble (2013). Evidence for a Quantum-to-Classical Transition in a Pair of Coupled Quantum Rotors. Phys. Rev. Lett. 110, 190401. arXiv: 1203.3177v2

Adachi, S., Toda, M., & Ikeda, K. (1988). Quantum-Classical Correspondence in Many-Dimensional Quantum Chaos. Physical Review Letters, 61 (6), 659-661. DOI: 10.1103/PhysRevLett.61.659


‘t Hooft and quantum computation

23/08/2012


Gerard ‘t Hooft is one of the greatest living physicists and one of the main contributors to the Standard Model. He was awarded the Nobel prize in physics in 1999. I had the opportunity to meet him in Piombino (Italy) at a conference in 2006, where he talked about his view on the foundations of quantum mechanics. He is trying to understand the layer behind quantum mechanics, and this question has been a source of discussion here, where he tried to find a fair audience to defend his view. Physics StackExchange, differently from MathOverflow for mathematicians, has not reached the critical mass at which most of the community, Fields medalists included in the latter case, takes the time to contribute. The reason lies in the different approaches of these communities, which can make life hard even for a Nobel laureate, while the mathematicians’ approach appears polite and often very helpful. This could also be seen at the recent passing away of the great mathematician William Thurston (see here).

‘t Hooft’s thesis received an answer from Peter Shor, one of the masters of quantum computation. What makes the matter interesting is that two authoritative people discussed foundations. Shor made clear, as you can read, that out of three possibilities, a failure of quantum computation would give strong support to ‘t Hooft’s view, and this is the case ‘t Hooft chose. Today there is no large-scale quantum computer, notwithstanding large efforts by research and industry. A large-scale quantum computer is what we need to turn this idea into a meaningful device. The main reason is that, as you enlarge your device, environment-driven decoherence turns it into a classical system, which loses its computational capabilities. But, in principle, there is nothing else to prevent a large-scale quantum computer from working. So, if you are able to remove external disturbances, your toy will turn into a powerful computational machine.

People working in this research area rely heavily on the idea that, in principle, there is nothing preventing a quantum device from becoming a large-scale computing device. ‘t Hooft contends that this is not true and that, at some stage, a classical computer will always outperform the quantum computer. So far, experiment has not contradicted this, as we do not yet have a large-scale quantum computer.

‘t Hooft’s idea finds support in mathematical theorems. I tried to point this out in the comments below Shor’s answer and I received the standard answer:

and how does your interpretation of this theorem permit the existence of (say) superconductors in large condensed matter systems? How about four-dimensional self-correcting topological memory?

This is a refrain you will hear whenever some mathematical theorem seems to contradict someone’s pet theory: theoretical physicists immediately become skeptical about mathematics. Of course, as everybody can see, most many-body quantum systems turn into classical systems, and this is a lesson we learn from decoherence, but a few many-body systems do not. Rather than regard these few as peculiar, people prefer to think that a mathematical theorem is attacking their pet theory and, rather than make a minimal effort to understand it, attack it with standard arguments, as if this could turn a mathematical truth into a false statement that does not apply to their case.

There are a couple of mathematical theorems supporting the view that increasing the number of elements in a quantum system makes it unstable, so that it turns into a classical system. The first is here and the second is this. These theorems are about quantum mechanics, the authors use the laws of quantum mechanics consistently, and they are very well-known, if not famous, mathematical physicists. The consequence of these theorems is that, as the number of components of a quantum system increases, in almost all cases the system turns into a classical one, making this a matter of principle impeding the general working of a large-scale quantum computer. These theorems strongly support ‘t Hooft’s idea. Apparently, they clash with the urban legend about superconductors and all that. Of course, this is not true, and Shor’s fears can easily be driven away (even if I was punished for this). What makes a system unstable with respect to the number of components can make it, in some cases, unstable with respect to quantum perturbations that can be amplified to a macroscopic level. So, these theorems are just saying that in almost all cases a system will turn into a classical one, but there exists a non-empty set, providing a class of macroscopic quantum systems, that can usefully contain a quantum computer. This set is not as large as one could hope, but there is no reason to despair whatsoever.

Laws of physics are just conceived to generate a classical world.

M. Hartmann, G. Mahler, & O. Hess (2004). Gaussian quantum fluctuations in interacting many particle systems. Lett. Math. Phys. 68, 103-112. arXiv: math-ph/0312045v2

Elliott H. Lieb, & Barry Simon (1973). Thomas-Fermi Theory Revisited. Phys. Rev. Lett. 31, 681-683. DOI: 10.1103/PhysRevLett.31.681


Intrinsic decoherence is a scientific truth

01/10/2009

I would like to speak well of an initiative that helped me find out that my view of decoherence, intrinsic decoherence, is indeed a scientific truth. Periodically, the Journal Club of Condensed Matter Physics presents an interesting selection of published papers in the area of condensed matter. This on-line journal was started at Bell Labs and, thanks to its distinguished editorial board, contains a selection of very interesting works. This month, the first listed paper is a striking one that appeared in Physical Review Letters. It is an experimental paper, which means that the effect was indeed observed and measured. You can find this paper here, but a subscription is needed to read it in full.

Let me summarize what I am claiming about this matter (see also here and here). A theorem due to Lieb and Simon says that, when the number of particles of a quantum system with Coulomb interactions is taken to infinity, the Thomas-Fermi model is recovered. The Thomas-Fermi model is a semiclassical model, so the quantum system loses coherence and starts to behave classically. Please note that this is a mathematical theorem. On the same ground, a beautiful theorem due to Hartmann, Mahler and Hess (see here) shows that the decay of coherence is Gaussian when the same limit of the number of particles going to infinity is taken. Both theorems, taken together, give a definite scenario of what happens, intrinsically, to the quantum coherence of an isolated system. Can this be seen experimentally?
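A toy dephasing model conveys the flavor of the Gaussian decay, though it is in no way the Hartmann-Mahler-Hess proof; the function names and parameters below are purely illustrative. If an off-diagonal element of the density matrix picks up a phase that is the sum of N random frequencies, the sum is Gaussian-distributed, and the ensemble-averaged coherence decays as exp(-N σ² t²/2): a Gaussian decay whose rate grows with the number of particles.

```python
import numpy as np

def coherence(n_particles, times, n_samples=20000, sigma=1.0, seed=1):
    """Ensemble-averaged coherence |<exp(i*Omega*t)>|, where Omega is
    the sum of n_particles independent random frequencies.  The sum is
    Gaussian with variance n_particles*sigma**2, so the average decays
    as exp(-n_particles * sigma**2 * t**2 / 2)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, sigma, (n_samples, n_particles)).sum(axis=1)
    # times x samples grid of phase factors, averaged over the ensemble
    return np.abs(np.exp(1j * np.outer(times, omega)).mean(axis=1))

times = np.linspace(0.0, 0.2, 50)
c10 = coherence(10, times)    # slow Gaussian decay
c100 = coherence(100, times)  # much faster decay: rate grows with N
```

The larger system dephases faster even though nothing "environmental" was added: the many internal degrees of freedom do the job by themselves, which is the qualitative point of the theorems above.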

As I have already said, more than ten years ago Horacio Pastawski and his group (check two papers by him here) proved, with NMR experiments, the very existence of this effect. They met a lot of difficulty getting their paper published; it never was, and you can find it here. This group produces a lot of very good physics, and this result too was sound, as testified by a later confirmation due to Dieter Suter and Hans Georg Krojanski that appeared in Physical Review Letters. So far it seemed that pieces of a big jigsaw were lying around and nobody had noticed how they fit together. Rather, researchers tried, in one way or another, to insert them into known frameworks. But this is completely new physics!

On August 8th of last year, a paper appeared in Physical Review Letters that confirmed all this. This paper is the one I cited at the start of this post and is due to A. P. D. Love, D. N. Krizhanovskii, D. M. Whittaker, R. Bouchekioua, D. Sanvitto, S. Al Rizeiqi, R. Bradley, M. S. Skolnick, P. R. Eastham, R. André, and Le Si Dang. I cite all of them because they did a great job and deserve to be named. The physics relies on the behavior of polaritons. These are quasi-particles and, being bosons, they can form a Bose-Einstein condensate. But observing such a condensate and understanding its decay is not an easy task; rather, it is a true challenge for an experimentalist. The authors above accomplished this task and proved that number fluctuations are involved in the process, that the decay is Gaussian and, all in all, that the effect is purely intrinsic. The true signature of this effect is the dependence of the Gaussian decay on the number of particles, and this is clearly seen by these authors.

All of this shows clearly that two effects are at work in producing the world we observe: an intrinsic effect that appears for a large number of interacting particles and a decay of quantum coherence produced by the interaction with the environment. For the particular case of cosmological perturbations, it is the intrinsic mechanism that induces a classical behavior (see here for an alternative view).


Ballentine and the decoherence program

10/03/2009

A problem I have treated in this blog is the question of the quantum-classical transition. This question is hotly debated by people working in quantum optics, quantum computation, and wherever the foundations of quantum mechanics enter. Today this problem appears far from settled and is a heavy burden left to us by the fathers of quantum mechanics. One thing that has been acquired is environmental decoherence: fighting this effect is a problem experimentalists face in their everyday activity. But we know that this cannot be the whole story.

Some time ago Wojciech Hubert Zurek, one of the main contributors to environmental decoherence, claimed that Hyperion, a moon of Saturn, behaves classically in its motion only because of environmental decoherence; otherwise we would observe a macroscopic quantum object smeared over its orbit, as happens to electrons in an atom. Of course, some people contested these conclusions and came out with a sound explanation of the classicality of Hyperion’s motion without the need for environmental decoherence. One of these authors is Leslie Ballentine. I think a lot of people have read his beautiful book on quantum mechanics. Ballentine and Nathan Wiebe wrote a paper (see here), published in Physical Review A (see here), where they soundly proved that Hyperion behaves classically without recourse to any kind of external agent. In some way they gave a hint of an intrinsic emergence of classicality for macroscopic objects (“for all practical purposes,” as John Bell taught us). This means that classicality may be an emergent property of quantum objects.

Of course, defenders of environmental decoherence tried to attack Ballentine and Wiebe’s view (see here). Ballentine’s answer is here. It gives a lucid view of the present criticisms of environmental decoherence, which, I would like to recall, is a true observed effect, while claiming an intrinsic decoherence effect for isolated quantum systems. The last word has not been said yet. Future experiments will tell.

Most supporters of environmental decoherence share part of Ballentine’s views, as they are well aware of the limitations of this approach, which captures only part of the truth. I think that some new, deep understanding of how reality forms is in view here. Some subtleties are implied, and this can explain the difficulties researchers have met so far.


A significant progress in large molecule interferometry

16/09/2008

We have written several posts in this blog about the question of decoherence and the thermodynamic limit. One of the crucial ways to understand whether a body made of a large number of molecules can become classical is this ingenious kind of interferometry with large molecules. The idea was realized by Anton Zeilinger and his group at the University of Vienna, initially using molecules of fullerene. The results were striking, as they were able to prove the wave nature of these large molecules. The next step is to use heavier bodies in such an experiment. With their device, a Talbot-Lau interferometer, they were able to see wave behavior for fluorofullerene, but in this case they obtained a visibility lower than expected (see here and here). They were unable to say whether this was a genuine new effect or rather a limitation of the experimental apparatus. Further analysis was needed but, above all, the apparatus needed significant improvement to manage heavier molecules and so make sure that any observed effect is genuine and not an artifact of the device. Anyhow, I show here this picture, which is really striking.

From this figure it is blatantly evident that the expected curve is not in perfect agreement with the measured points for fluorofullerene. But, as already said, the experimenters could not make any claim about this, as the differences could be due to the interferometer.
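For concreteness, the "visibility" quoted in these experiments is the modulation depth of the sinusoidal interference fringes, (max - min)/(max + min). A minimal sketch of how one would extract it from a position scan, using synthetic data with a made-up grating period (all names and numbers are illustrative, not the experimenters' actual analysis):

```python
import numpy as np

def fringe_visibility(x, counts, period):
    """Least-squares fit of counts = a0 + a1*cos(2*pi*x/period)
    + a2*sin(2*pi*x/period); return sqrt(a1^2 + a2^2)/a0."""
    c = np.cos(2.0 * np.pi * x / period)
    s = np.sin(2.0 * np.pi * x / period)
    A = np.column_stack([np.ones_like(x), c, s])
    a0, a1, a2 = np.linalg.lstsq(A, counts, rcond=None)[0]
    return np.hypot(a1, a2) / a0

# Synthetic scan over two fringe periods with 40% modulation depth
x = np.linspace(0.0, 2.0, 80)                       # detector position
counts = 100.0 * (1.0 + 0.4 * np.cos(2.0 * np.pi * x / 1.0))
v = fringe_visibility(x, counts, period=1.0)        # recovers ~0.4
```

A lower-than-expected value of this number is precisely what was seen for fluorofullerene, and the open question was whether the reduction came from the molecule or from the apparatus.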

Since then, all these activities have passed into the hands of Markus Arndt, who worked with Zeilinger on these experiments. Arndt is a full professor at the University of Vienna and has taken on the far-from-easy task of improving the apparatus to perform experiments with larger molecules. Recently Arndt and his group published a paper in Nature (see here) where they showed a really significant improvement in the apparatus that should allow interferometry with very large molecules. We note the complexity of this enterprise, which required several years to achieve. So now expectations are high to see interferometry with some unexpected kinds of molecules, and the chance that Arndt and his group will make a breakthrough is surely high.

