Unpublishable

31/12/2015

I tried in different ways to get this paper through the community via standard channels. As far as I can tell, this paper is unpublishable. By this I mean that journals do not even send it to referees to start a normal review process, or people otherwise try to stop it from becoming known. The topic is always the same: a reformulation of quantum mechanics using stochastic processes, but using noncommutative geometry this time. I apologize to the community if this unacceptable approach has bothered people around the world, but this is the fate of some ideas. Of course, if somebody has the courage and the willingness to publish it, let me know and I will appreciate the attempt with infinite gratitude.

Now, back to sane QCD.

Happy new year!


Quantum gravity

27/12/2015


Quantum gravity appears today as the Holy Grail of physics. It is so far detached from any possible experimental result, and yet it receives a lot of attention from truly remarkable people anyway. In some sense, if a physicist would like to know in her lifetime whether her speculations are worth a Nobel prize, she had better work elsewhere. Anyhow, we are curious people and we would like to know how the machinery of space-time works, because an engineering of space-time would let our civilization make a significant leap forward.

A fine account of the current theoretical proposals was recently presented by Ethan Siegel in his blog. It is interesting to notice that the two most prominent proposals, string theory and loop quantum gravity, share the same difficulty: they are not able to recover the low-energy limit. For string theory this is a severe drawback, as here people ask for a fully unified theory of all the interactions. Loop quantum gravity is more limited in scope, and so one can hope to fix the problem in the near future. But of all the proposals Siegel considers, he is missing the most promising one: non-commutative geometry. This mathematical idea is due to Alain Connes and earned him a Fields medal. So far, this is the only mathematical framework from which one can rederive the full Standard Model, with all its particle content, properly coupled to Einstein’s general relativity. This formulation works with a classical gravitational field, and so one can ask where quantized gravity could come out. Indeed, quite recently Connes, Chamseddine and Mukhanov (see here and here) were able to show that, in the context of non-commutative geometry, a Riemannian manifold turns out to be quantized in unit volumes of two kinds of spheres. The reason why there are two kinds of unit volumes is the need for a charge conjugation operator, and this implies that these volumes yield the units (1,i) in the spectrum. This provides the foundations for a future quantum gravity that is fully consistent from the start: the reason is that non-commutative geometry generates renormalizable theories!

The reason for my interest in non-commutative geometry arises exactly from this. Two years ago, Alfonso Farina, Matteo Sedehi and I published a paper on the possibility that a complex stochastic process is at the foundations of quantum mechanics (see here and here). We described such a process as the square root of a Brownian motion, and so a Bernoulli process appeared, producing the factor 1 or i depending on the sign of the steps of the Brownian motion. This seemed to hint at some deep understanding of space-time. Indeed, the work by Connes, Chamseddine and Mukhanov provides that understanding: what appeared as a square-root process of a Brownian motion is today just the motion of a particle on a non-commutative manifold. Here one simply has a combination of a Clifford algebra, that of Dirac’s matrices, a Wiener process, and the Bernoulli process representing the scattering between these randomly distributed quantized volumes. Quantum mechanics is so fundamental that its derivation from a geometrical structure, with some mathematics from stochastic processes added, makes a case for non-commutative geometry as a serious proposal for quantum gravity.
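As a toy illustration of this Bernoulli factor, here is a minimal Matlab sketch of my own (a cartoon under my assumptions, not the construction in the papers above): generate Wiener increments and attach the factor 1 or i according to their sign.

% Toy sketch: Bernoulli factor 1 or i from the signs of Wiener increments
nstep = 100000;                  % number of time steps
dt = 1/nstep;                    % step size on the interval [0,1]
dW = sqrt(dt)*randn(1,nstep);    % Wiener increments
b = (dW >= 0) + 1i*(dW < 0);     % Bernoulli factor: 1 on positive steps, i on negative ones
dX = b.*sqrt(abs(dW));           % square root of the increments, step by step

The factor b takes the values 1 and i with probability 1/2 each, mirroring the two units (1,i) appearing in the volume spectrum.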

I hope to give an account of this deep connection in the near future. This appears to be a rather exciting new avenue to pursue.

Ali H. Chamseddine, Alain Connes, & Viatcheslav Mukhanov (2014). Quanta of Geometry: Noncommutative Aspects. Phys. Rev. Lett. 114 (2015) 091302. arXiv: 1409.2471v4

Ali H. Chamseddine, Alain Connes, & Viatcheslav Mukhanov (2014). Geometry and the Quantum: Basics. JHEP 12 (2014) 098. arXiv: 1411.0977v1

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics. Signal, Image and Video Processing, 8 (1), 27-37. DOI: 10.1007/s11760-013-0473-y


DICE 2014

21/09/2014


I have spent this week in Castiglioncello participating in the DICE 2014 conference. This conference is organized every two years, with the main effort due to Thomas Elze.

Castello Pasquini at Castiglioncello (DICE 2014)

I participated in the 2006 edition, where I gave a talk about decoherence and the thermodynamic limit (see here and here). This is one of the main conferences where foundational questions can be discussed with the participation of some of the major physicists. This year there were five keynote lectures from famous researchers. The opening lecture was given by Tom Kibble, one of the founding fathers of the Higgs mechanism. I met him at the registration desk and had the luck of a handshake and a few words with him. His lecture was a recollection of the epic of the Standard Model. The second notable lecturer was Mario Rasetti. Rasetti is working on the question of big data, that is, the huge amount of information currently exchanged on the web, which has the property of being difficult to manage, and not only as a matter of quantity. What Rasetti and his group showed is that topological field theory yields striking results when applied to such a case. An application to NMRI of the brain exemplified this in a blatant manner.

On the third day there were the lectures by Avshalom Elitzur and Alain Connes, the Fields medallist. Elitzur is widely known for the concept of weak measurement, which is a key idea of quantum optics. Connes presented his recent introduction of the quanta of geometry, which should make loop quantum gravity researchers happy. You can find the main concepts here. Connes explained how the question of the mass of the Higgs got fixed, and said that, since his proposal for the geometry of the Standard Model, he has been able to overcome all the setbacks that appeared along the way; this was just another one. From my side, his approach appears really interesting, as the Brownian motion I introduced in quantum mechanics could be understood through the quanta of volume that Connes and his collaborators uncovered.

Gerard ‘t Hooft talked on Thursday. The question he addressed was about cellular automata and quantum mechanics (see here). For several years ‘t Hooft has been looking for a classical substrate to quantum mechanics, and this was also the point of other speakers at the conference. Indeed, he has had some clashes with people working on quantum computation, since ‘t Hooft, following his views, is somewhat sceptical about it. I intervened on this question based on the theorem of Lieb and Simon, generally overlooked in such discussions, defending ‘t Hooft’s ideas and so generating some fuss (see here and the discussion I have had with Peter Shor and Aram Harrow). Indeed, we finally stipulated that some configurations can evade the Lieb and Simon theorem, granting quantum behaviour at a macroscopic level.

My talk at DICE 2014 was given the same day as ‘t Hooft’s (he was there listening). I was able to prove the existence of fractional powers of Brownian motion and presented new results on the derivation of the Dirac equation from a stochastic process.

The conference was excellent and I really enjoyed it. I have to thank the organizers for the beautiful atmosphere and the really pleasant stay, with a full immersion in wonderful science. All the speakers gave stimulating and enjoyable talks. For my part, I will keep on working on foundational questions and look forward to the next edition.

Marco Frasca (2006). Thermodynamic Limit and Decoherence: Rigorous Results. Journal of Physics: Conference Series 67 (2007) 012026. arXiv: quant-ph/0611024v1

Ali H. Chamseddine, Alain Connes, & Viatcheslav Mukhanov (2014). Quanta of Geometry. arXiv: 1409.2471v3

Gerard ‘t Hooft (2014). The Cellular Automaton Interpretation of Quantum Mechanics. A View on the Quantum Nature of our Universe, Compulsory or Impossible? arXiv: 1405.1548v2


Evidence of the square root of Brownian motion

06/03/2014


A mathematical proof of the existence of a stochastic process involving fractional exponents seemed out of the question after some mathematicians claimed it cannot exist. That claim is strongly linked to the current definition and may undergo revision if nature does not agree with it. Stochastic processes are very easy to simulate on a computer: very few lines of code can decide whether something works or not. Alfonso Farina, Matteo Sedehi and I introduced the idea that the square root of a Wiener process yields the Schrödinger equation (see here or download a preprint here). This implies that one has to attach a meaning to the equation

dX=(dW)^\frac{1}{2}.

In a paper that appeared today on arXiv (see here) we finally provided this proof: we were right. The idea is to solve such an equation by numerical methods, and these methods are themselves a proof of existence. We used the Euler-Maruyama method, the simplest one, and we compared the results as shown in the following figure.

a) Original Brownian motion. b) Same but squaring the formula for the square root. c) Formula of the square root taken as a stochastic equation. d) Same from the stochastic equation in this post.

There is no way to distinguish them from one another, and the original Brownian motion is completely recovered by taking the square of the square-root process computed in three different ways. Each one of these fully supports the conclusions we drew in our published paper. You can find the code that produces this figure in our arXiv paper. It is obtained by a Monte Carlo simulation with 10000 independent paths. You can play with it, changing the parameters as you like.
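If you do not want to run the full Monte Carlo, here is a minimal sketch of the basic check (my simplified version, not the script from the arXiv paper): take the square root of the Wiener increments pointwise, as the Euler-Maruyama scheme does, and verify that squaring the increments gives back the original Brownian path.

% Minimal check: squaring the square-root process recovers the Brownian motion
nstep = 10000;                % number of steps
dt = 1/nstep;                 % step size on [0,1]
dW = sqrt(dt)*randn(1,nstep); % Wiener increments
B = cumsum(dW);               % original Brownian motion
dX = dW.^(1/2);               % square-root process: a factor i appears on negative steps
Brec = real(cumsum(dX.^2));   % square the increments and sum them up again
max(abs(B - Brec))            % of the order of machine precision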

This paper has an important consequence: our current mathematical understanding of stochastic processes should be properly extended to account for our results. As a by-product, we have shown how, using Pauli matrices, this idea can be generalized to include spin, introducing a new class of stochastic processes in a Clifford algebra.

In conclusion, we would like to recall that, no matter what your mathematical definition may be, a stochastic process is always a well-defined entity on numerical grounds. Tests can be easily performed, as we proved here.

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics. Signal, Image and Video Processing, 8 (1), 27-37. DOI: 10.1007/s11760-013-0473-y

Marco Frasca, & Alfonso Farina (2014). Numerical proof of existence of fractional Wiener processes. arXiv: 1403.1075v1


Nature already patched it

09/02/2014


Dennis Overbye is one of the best science writers around. Recently, he wrote a beautiful piece on the odd behavior of non-converging series like 1+2+3+4+\ldots and so on to infinity (see here). The article contains a wonderful video

where it is shown why 1+2+3+4+\ldots=-1/12, and that this happens only when the series is taken all the way to infinity. You can also see a 21-minute video on the same subject from the same authors.

This is really odd, as we are summing up all positive terms and in the end we get a negative result. This is a question that already bothered Euler and is generally fixed with the Riemann zeta function. Now, if you talk with a mathematician, you will be warned that such a series is not convergent, and indeed the partial sums become ever larger as the sum is performed. So, this series should generally be discarded when you meet it in your computations in physics or engineering. We know that things do not stay this way, as nature already patched it. The reason is exactly this: infinity does not exist in nature, and whenever one is met, nature has already fixed it, whatever a mathematician might say. Of course, smarter mathematicians are well aware of this, as you can read from Terry Tao’s blog. Indeed, Terry Tao is one of the smartest living mathematicians. One of his latest successes is to have found a problem in Otelbaev’s presumed proof of the existence of solutions to the Navier-Stokes equations, a well-known millennium problem (see the accepted answer and comments here).
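For completeness, the standard regularization is textbook material: the Riemann zeta function

\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s},\qquad {\rm Re}(s)>1,

admits an analytic continuation to the whole complex plane except s=1, and evaluating the continuation at s=-1 gives

1+2+3+4+\ldots=\zeta(-1)=-\frac{1}{12}.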

This idea is well-known to physicists, and when an infinity is met we have invented a series of techniques to remove it in the way nature has chosen. This can be seen from the striking agreement between computed and measured quantities in some quantum field theories, not least the Standard Model. E.g., the gyromagnetic ratio of the electron agrees with the measured quantity to about one part in a trillion (see here). Such perfection in the computations was never seen before in physics and belongs to the great revolution completed by Feynman, Schwinger, Tomonaga and Dyson, which we have inherited in the Standard Model, the latest and greatest revolution seen so far in particle physics. We just hope that the LHC will uncover the next one at the restart of operations. It is possible, again, that nature will have found further ways to patch infinities, and one of these could be 1+2+3+4+\ldots=-1/12.

So, we recall one of the greatest principles of physics: nature patches infinities, and the techniques it uses to do so generally disgust mathematicians. I think that diverging series should be taught in undergraduate courses, maybe using the standard textbook by Hardy (see here). These are not just pathologies in an otherwise wonderful world; rather, these are the ways nature has chosen to behave!

The reason for me to write about this matter is linked to a beautiful work I did with my colleagues Alfonso Farina and Matteo Sedehi on the way the Tartaglia-Pascal triangle generalizes in quantum mechanics. We arrived at the conclusion that quantum mechanics arises as the square root of a Brownian motion. We have a paper published on this matter (see here, or you can see the latest draft). Of course, the idea of extracting the square root of a Wiener process is something that disgusted mathematicians, most notably Didier Piau, who claimed that an infinity goes around. Of course, if I have a sequence of random numbers, these are finite and I can take their square root at will. Indeed, this is what one sees working with Matlab, which easily recovers our formula for this process. So, what happens to the infinity found by Piau? Nothing: nature already patched it.

So, we learned a beautiful lesson from nature: The only way to know her choices is to ask her.

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics. Signal, Image and Video Processing, 8 (1), 27-37. DOI: 10.1007/s11760-013-0473-y


Intrinsic decoherence observed again!

25/05/2013


Decoherence is the effect that causes a quantum system to behave classically. The best known effect of this kind is due to the environment: the interaction of an open quantum system with its surroundings is the reason for the loss of quantum coherence. This effect is experimentally well proven and must be considered acquired knowledge. On the other side, it is a legitimate scientific question to ask whether a closed quantum system ever displays classical behavior for some reason. I have already put forward my take on this in this blog (see here). This week, in Physical Review Letters (see here and here), a paper appeared showing how intrinsic decoherence comes out in an experimental setup of two coupled kicked rotors. Kicked rotors are the epitome of studies on classical chaos and the corresponding quantum behavior. It is known that, classically, such a system displays diffusion above a certain threshold, first computed by Boris Chirikov. The corresponding quantum system instead localizes when its classical counterpart is chaotic. This is the hallmark of proper quantum behavior, which refrains from the chaos typical of classical nonlinear systems; the main reason is that the Schrödinger equation is linear and the superposition principle applies. In 1988, S. Adachi, M. Toda, and K. Ikeda showed a really beautiful result: two such coupled systems lose quantum coherence (see here). The paper by Bryce Gadway, Jeremy Reeves, Ludwig Krinner, and Dominik Schneble (see here) is an experimental proof that the original theoretical result is a correct insight, and we have again a proof that environmental decoherence is not the whole story. An interesting account is given here. This paper is really striking and opens the door to a new class of experiments where closed quantum systems, possibly with a lot of subsystems involved, will be studied to give a full understanding of the quantum-classical transition.
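For reference, and taking the standard textbook form rather than the specific setup of the experiment, the classical kicked rotor reduces to the Chirikov standard map

p_{n+1}=p_n+K\sin\theta_n,\qquad \theta_{n+1}=\theta_n+p_{n+1},

which displays chaos and momentum diffusion for a kicking strength K above a threshold of order one, while the corresponding quantum rotor localizes instead.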

Bryce Gadway, Jeremy Reeves, Ludwig Krinner, & Dominik Schneble (2012). Evidence for a Quantum-to-Classical Transition in a Pair of Coupled Quantum Rotors. Phys. Rev. Lett. 110, 190401 (2013). arXiv: 1203.3177v2

Adachi, S., Toda, M., & Ikeda, K. (1988). Quantum-Classical Correspondence in Many-Dimensional Quantum Chaos. Physical Review Letters, 61 (6), 659-661. DOI: 10.1103/PhysRevLett.61.659


Tartaglia-Pascal triangle and quantum mechanics

26/04/2013


The paper I wrote with Alfonso Farina and Matteo Sedehi about the link between the Tartaglia-Pascal triangle and quantum mechanics is now online (see here). This paper contains, as a statement, my theorem providing a connection between the square root of a Wiener process and the Schrödinger equation, which aroused a lot of interest and much criticism from some mathematicians (see here). So, it is worthwhile to tell how all this came about.

In fall 2011, Alfonso Farina called me with an open problem, after he and his colleagues had published a paper in Signal, Image and Video Processing, a journal from Springer, where it was shown how the Tartaglia-Pascal triangle is deeply connected with diffusion and the Fourier equation. The connection comes out through the binomial coefficients, the elements of the Tartaglia-Pascal triangle, which in some limit give a Gaussian; and this Gaussian, in the continuum, is the solution of the Fourier equation of heat diffusion. This entails a deep connection with stochastic processes. Stochastic processes, for most people working in the area of radar and sensors, are essential for understanding how these devices measure, through filtering theory. But, in the historical perspective in which Farina & al. put their paper, they were not able to get a proper connection for the Schrödinger equation, notwithstanding that they recognized a deep formal analogy with the Fourier equation. This was the open question: how to connect the Tartaglia-Pascal triangle and the Schrödinger equation?
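In formulas, this is the classical de Moivre-Laplace limit: for large n the binomial distribution approaches a Gaussian,

2^{-n}\binom{n}{k}\approx\sqrt{\frac{2}{\pi n}}\,e^{-\frac{(2k-n)^2}{2n}},

which, in the continuum, becomes the heat kernel solving the Fourier equation of diffusion.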

People working in quantum physics are aware of the difficulties researchers have met in linking stochastic processes a la Wiener and quantum mechanics. Indeed, skepticism is the prevailing feeling among all of us about this matter. So, the question Alfonso put to me was not that easy. But the paper by Alfonso & al. also contains a possible answer: just start from the discrete and then go back to the continuum. So, the analog of the heat equation is the Schrödinger equation for a free particle and its kernel, and indeed the evolution of a Gaussian wave-packet can be worked out on the discrete level and gives back the binomial coefficients. What you get in this way are the square roots of binomial coefficients. So, the link with the Tartaglia-Pascal triangle is rather subtle in quantum mechanics and enters through a square root, reminiscent of Dirac’s work and his greatest achievement, the Dirac equation. This answered Alfonso’s question, and in a way that was somewhat unexpected.
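The formal analogy at work here can be seen by putting the two kernels side by side (standard textbook expressions):

K_{\rm heat}(x,t)=\frac{1}{\sqrt{4\pi Dt}}\,e^{-\frac{x^2}{4Dt}},\qquad K_{\rm Schr}(x,t)=\sqrt{\frac{m}{2\pi i\hbar t}}\,e^{\frac{imx^2}{2\hbar t}}.

The Schrödinger kernel is the heat kernel continued to imaginary time, and on the discrete level its Gaussian structure is what turns binomial coefficients into their square roots.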

Then, I thought that this connection could be deeper than what we had found. I tried to modify Itō calculus to consider fractional powers of a Wiener process. I posted my paper on arXiv and performed both experimental and numerical computations. All this confirms my theorem that the square root of a Wiener process has the Schrödinger equation as its diffusion equation. You can easily take the square root of a natural noise (I did it) or compute this on your preferred math software. It is just interesting that mathematicians never decided to cope with this and still claim that all this evidence does not exist, basing their claims on a theory that can be easily amended.

We have just thrown a seed into the earth. This is our main work. And we feel sure that very good fruits will come out of it. Thank you very much, Alfonso and Matteo!

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics. Signal, Image and Video Processing. DOI: 10.1007/s11760-013-0473-y

Marco Frasca (2012). Quantum mechanics is the square root of a stochastic process. arXiv: 1201.5091v2

Farina, A., Giompapa, S., Graziano, A., Liburdi, A., Ravanelli, M., & Zirilli, F. (2011). Tartaglia-Pascal’s triangle: a historical perspective with applications. Signal, Image and Video Processing, 7 (1), 173-188. DOI: 10.1007/s11760-011-0228-6


Fooling with mathematicians

28/02/2013


I am still working with stochastic processes and, as my readers know, I have proposed a new view of quantum mechanics assuming that a meaning can be attached to the square root of a Wiener process (see here and here). I was able to generate it through a numerical code. A square root of a number can always be taken, irrespective of any deep and beautiful mathematical analysis. The reason is that this is something really new that deserves a different approach, much in the same way it happened with Dirac’s delta, which initially met with skepticism from the mathematical community (it simply did not make sense with the knowledge of the time). Here I give you some Matlab code if you want to try it by yourselves:

nstep = 500000;                         % number of steps
dt = 50;                                % total time
t = 0:dt/nstep:dt;                      % time grid
B = normrnd(0,sqrt(dt/nstep),1,nstep);  % Gaussian increments of the Wiener process
dB = cumsum(B);                         % the Brownian path itself
% Square root of the Brownian motion (complex where the path is negative)
dB05 = (dB).^(1/2);

Nothing can prevent you from taking the square root of a number such as a Brownian displacement, and so all this has a perfectly sound meaning numerically. The point is just to understand how to give this a full mathematical meaning. The wrong approach in this case is just to throw it all away, claiming that it does not exist. This is exactly the behavior I met from Didier Piau. Of course, Didier is a good mathematician, but he simply refuses to accept the possibility that such concepts can have any meaning at all, based on what has been codified so far in the area of stochastic processes. This notwithstanding the fact that they can be easily computed on your personal computer at home.

But this saga is not over yet. This time I was trying to compute the cube root of a Wiener process, and I posted this at Mathematics Stackexchange. I put this question with the simple idea in mind of considering a stochastic process with a random mean, and I did not realize that I was provoking a small crisis again. This time the question is the existence of the process {\rm sign}(dW). Didier Piau immediately wrote down that it does not exist. Again, I give here the Matlab code that computes it very easily:

nstep = 500000;                         % number of steps
dt = 50;                                % total time
t = 0:dt/nstep:dt;                      % time grid
B = normrnd(0,sqrt(dt/nstep),1,nstep);  % Gaussian increments of the Wiener process
dB = cumsum(B);                         % the Brownian path itself
% Sign and absolute value of the Wiener process
dS = sign(dB);                          % +1 or -1 according to the sign of the path
dA = dB./dS;                            % absolute value, recovered as dB/sign(dB)

Didier Piau and a colleague of his just complained about the way Matlab performs the sign operation. My view is that it is all legal, as Matlab takes + or - depending on the sign of the displacement, something that can be done by hand and that does not imply anything exotic. What is exotic here is the strong opposition this evidence meets, notwithstanding that it is easily understandable by everybody and, of course, easily computable on a desktop computer. The expected distribution for the signs of Brownian displacements is a Bernoulli with p=1/2. Here is the histogram from the above code.

Histogram of {\rm sign}(dW)

This has mean 0 and variance 1, as it should for N=\pm 1 and p=\frac{1}{2}, and this can be verified with some Monte Carlo runs. It is in agreement with what I discussed here at Mathematics Stackexchange, as a displacement in a Brownian motion is a physical increment or decrement of the moving particle and has a sign that can be managed statistically. My attempt to compare all this to the case of Dirac’s delta was met with a complaint of overstatement, as the delta was really useful while my approach is not (but when Dirac put forward his idea, it was just airy-fairy for the time). Of course, a reformulation of quantum mechanics would be a rather formidable support to all this, but this mathematician does not seem to realize it.
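As a quick sanity check, here are a few lines of my own that can be appended to the code above:

% Sanity check: the signs should follow a Bernoulli distribution with p = 1/2
mean(dS)          % close to 0
var(dS)           % close to 1
hist(dS,[-1 1])   % two bars of nearly equal height at -1 and +1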

So, in the end, I am somewhat surprised by the behavior of the community in the face of novelties. I can understand skepticism, it belongs to our profession, but when facing new concepts whose existence can be easily checked numerically, I would prefer a more constructive attitude of trying to understand, rather than an immediate dismissal. It appears as if the history of science never taught us anything, leaving us with a boring repetition of stereotyped reactions to something that would instead be worth further consideration. Meanwhile, I hope my readers will enjoy playing around with these new computations using some exotic mathematical operations on a stochastic process.

Marco Frasca (2012). Quantum mechanics is the square root of a stochastic process. arXiv: 1201.5091v2


A first paper on square root of a Brownian motion and quantum mechanics gets published!

20/11/2012


Following my series of posts on the link between the square root of a stochastic process and quantum mechanics (see here, here, here, here, here), a link that I proved to exist both theoretically and experimentally, I am pleased to let you know that the first paper of my collaboration with Alfonso Farina and Matteo Sedehi has finally been accepted in Signal, Image and Video Processing. This paper contains the proof of what I named the “Farina-Frasca-Sedehi proposition” in my paper, which claims that for a well localized free particle there exists a map between the wave function and the square root of binomial coefficients. This finally links the Pascal-Tartaglia triangle, given through binomial coefficients, to quantum mechanics, and closes a question originally opened by Farina and collaborators in the same journal (see here). My theorem about the square root of a stochastic process also appears in this article, but without a proof.
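Schematically, and this is my paraphrase rather than the exact statement and normalization of the paper, the proposition says that on the discrete level the modulus of the wave function of a well localized free particle follows the square root of the binomial distribution,

|\psi_k(n)|\propto\sqrt{2^{-n}\binom{n}{k}}.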

Marco Frasca (2012). Quantum mechanics is the square root of a stochastic process. arXiv: 1201.5091v2

Farina, A., Giompapa, S., Graziano, A., Liburdi, A., Ravanelli, M., & Zirilli, F. (2011). Tartaglia-Pascal’s triangle: a historical perspective with applications. Signal, Image and Video Processing. DOI: 10.1007/s11760-011-0228-6


‘t Hooft and quantum computation

23/08/2012


Gerard ‘t Hooft is one of the greatest living physicists, one of the main contributors to the Standard Model. He was awarded the Nobel Prize in Physics in 1999. I had the opportunity to meet him in Piombino (Italy) at a conference in 2006, where he talked about his view on the foundations of quantum mechanics. He is trying to understand the layer behind quantum mechanics, and this question has been a source of discussions here, where he tried to find a fair audience to defend his view. Physics StackExchange, differently from MathOverflow for mathematicians, has not reached the critical mass where the stars of the community take time to contribute, whereas on the latter most of the mathematics community, Fields medalists included, does. The reason lies in the different approaches of the two communities, which can make life hard for a Nobel laureate, while the mathematicians’ approach appears polite and often very helpful. This could also be seen at the recent passing of the great mathematician William Thurston (see here).

‘t Hooft’s thesis received an answer from Peter Shor, who is one of the masters of quantum computation. What makes the matter interesting is that two authoritative people discussed foundations. Shor made clear, as you can read, that, out of three possibilities, a failure of quantum computation could give strong support to ‘t Hooft’s view, and this is the case ‘t Hooft chose. Today, there is no large-scale quantum computer, notwithstanding large efforts by research and industry. A large-scale quantum computer is what we need to turn this idea into a meaningful device. The main reason is that, as you enlarge your device, environment-driven decoherence turns it into a classical system, and it loses its computational capabilities. But, in principle, there is nothing else to prevent a large-scale quantum computer from working. So, if you are able to remove external disturbances, your toy will be turned into a powerful computational machine.

People working in this research area rely heavily on the idea that, in principle, there is nothing preventing a quantum device from becoming a large-scale computing device. ‘t Hooft contends that this is not true and that, at some stage, a classical computer will always outperform the quantum computer. Today, this has not been contradicted by experiment, as we do not have a large-scale quantum computer yet.

‘t Hooft’s idea has support from mathematical theorems. I tried to point this out in the comments below Shor’s answer, and I received the standard answer:

and how does your interpretation of this theorem permit the existence of (say) superconductors in large condensed matter systems? How about four-dimensional self-correcting topological memory?

This is a refrain you will always hear: when some mathematical theorem seems to contradict someone’s pet theory, theoretical physicists immediately become skeptical about the mathematics. Of course, as everybody can see, most many-body quantum systems turn into classical systems, and this is a lesson we learn from decoherence, but a few many-body systems do not. So, rather than thinking that the latter are peculiar, we prefer to think that a mathematical theorem is attacking our pet theory and, rather than making some minimal effort to understand, we attack with standard arguments, as if this could change a mathematical truth into some false statement that does not apply to our case.

There are a couple of mathematical theorems that support the view that increasing the number of elements in a quantum system makes it unstable, so that it turns into a classical system. The first is here and the second is this one. These theorems are about quantum mechanics, the authors use the laws of quantum mechanics consistently, and they are very well-known, if not famous, mathematical physicists. The consequence of these theorems is that, on increasing the number of components of a quantum system, in almost all cases the system will turn into a classical one, making this a principle that impedes a general working of a large-scale quantum computer. These theorems strongly support ‘t Hooft’s idea. Apparently, they clash with the urban legend about superconductors and all that. Of course, this is not true, and Shor’s fears can easily be driven away (even if I was punished for this). What makes a system unstable with respect to the number of components can also make it, in some cases, unstable with respect to quantum perturbations that can be amplified to a macroscopic level. So, these theorems are just saying that in almost all cases a system will be turned into a classical one, but there exists a non-empty set, providing a class of macroscopic quantum systems, that can usefully contain a quantum computer. This set is not as large as one could hope, but there is no reason to despair whatsoever.

The laws of physics are just conceived to generate a classical world.

M. Hartmann, G. Mahler, & O. Hess (2003). Gaussian quantum fluctuations in interacting many particle systems. Lett. Math. Phys. 68, 103-112 (2004). arXiv: math-ph/0312045v2

Elliott H. Lieb, & Barry Simon (1973). Thomas-Fermi Theory Revisited. Phys. Rev. Lett. 31, 681-683 (1973). DOI: 10.1103/PhysRevLett.31.681

