## Yang-Mills theory paper gets published!

30/12/2016

Exact solutions of quantum field theories are very rare and normally refer to toy models and pathological cases. Quite recently, I put on arXiv a pair of papers presenting exact solutions of both the Higgs sector of the Standard Model and the Yang-Mills theory made just of gluons. The former appeared a few months ago (see here), while the latter was accepted for publication a few days ago (see here). I have updated the latter just today, and the accepted version will appear on arXiv on 2 January next year.

What does it mean to solve a quantum field theory exactly? A quantum field theory is exactly solved when we know all its correlation functions. From them, thanks to the LSZ reduction formula, we are able, in principle, to compute any observable, be it a cross section or a decay rate. The shortest way to the correlation functions is the Dyson-Schwinger equations. These equations form a hierarchy, with each equation depending on higher-order correlators, and so they are generally very difficult to solve. They have been widely used in studies of Yang-Mills theory, either with some truncation scheme or through numerical methods. Their exact solutions are generally not known and are expected to be too difficult to find.

The problem can be faced when some solutions to the classical equations of motion of a theory are known. In this way there is a possibility to treat the Dyson-Schwinger set. Anyhow, before entering into their treatment, it should be emphasized that in the literature the Dyson-Schwinger equations were handled in just one way: using their integral form and expressing all the correlation functions in momentum space. It was an original view by Carl Bender that opened up the way (see here). The idea is to write the Dyson-Schwinger equations in their differential form in coordinate space. So, when you have exact solutions of the classical theory, a possibility opens up to treat the quantum case as well!
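As an illustration of the differential approach just described (this is my sketch; see the Bender, Milton and Savage paper cited below for the full construction), for a massless quartic scalar field with classical equation of motion $\partial^2\phi+\lambda\phi^3=0$, the first equation of the Dyson-Schwinger hierarchy in differential form reads

```latex
\partial^2 G_1(x)
+ \lambda\left[G_1^3(x) + 3\,G_2(x,x)\,G_1(x) + G_3(x,x,x)\right] = 0,
```

where $G_1=\langle\phi\rangle$ and $G_2$, $G_3$ are the higher connected correlators. Each equation of the hierarchy couples to the next one, which is why an exact classical solution for $G_1$ is such a powerful entry point.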

This shows unequivocally that a Yang-Mills theory can display a mass gap and an infinite spectrum of excitations. Of course, if nature has chosen the particular ground state depicted by such classical solutions, we will have hit the jackpot. This is a possibility, but the proof is strongly tied to what is going on in the Higgs sector of the Standard Model, which I solved exactly, though without other interacting matter. If the decay rates of the Higgs particle agree with our computations, we will be on the right track for Yang-Mills theory as well. Nature tends to repeat working mechanisms.

Marco Frasca (2015). A theorem on the Higgs sector of the Standard Model Eur. Phys. J. Plus (2016) 131: 199 arXiv: 1504.02299v3

Marco Frasca (2015). Quantum Yang-Mills field theory arXiv arXiv: 1509.05292v1

Carl M. Bender, Kimball A. Milton, & Van M. Savage (1999). Solution of Schwinger-Dyson Equations for ${\cal PT}$-Symmetric Quantum Field Theory Phys.Rev.D62:085001,2000 arXiv: hep-th/9907045v1

## Unpublishable

31/12/2015

I tried in different ways to get this paper through the community via standard channels. As far as I can tell, this paper is unpublishable. By this I mean that journals do not even send it to referees to start a normal review process, or else people try to stop it from becoming known. The argument is always the same: a reformulation of quantum mechanics using stochastic processes, but this time using noncommutative geometry. I apologize to the community if this unacceptable approach has bothered people around the world, but this is the fate of some ideas. Of course, if somebody has the courage and the will to publish it, let me know and I will appreciate the attempt with infinite gratitude.

Now, back to sane QCD.

Happy new year!

## Quantum gravity

27/12/2015

Quantum gravity appears today as the Holy Grail of physics. It is thus far detached from any possible experimental result, but it nonetheless draws a lot of attention from truly remarkable people. In some sense, if a physicist would like to know in her lifetime whether her speculations are worth a Nobel prize, she had better work elsewhere. Anyhow, we are curious people, and we would like to know how the machinery of space-time works, because an engineering of space-time would let our civilization make a significant leap forward.

A fine account of the current theoretical proposals was recently presented by Ethan Siegel on his blog. It is interesting to notice that the two most prominent proposals, string theory and loop quantum gravity, share the same difficulty: they are not able to recover the low-energy limit. For string theory this is a severe drawback, as here people ask for a fully unified theory of all the interactions. Loop quantum gravity is more limited in scope, and so one can hope to fix the problem in the near future. But of all the proposals Siegel considers, he is missing the most promising one: non-commutative geometry. This mathematical idea is due to Alain Connes and earned him a Fields medal. So far, this is the only mathematical framework from which one can rederive the full Standard Model, with all its particle content, properly coupled to Einstein's general relativity. This formulation works with a classical gravitational field, and so one can ask where quantized gravity could come from. Indeed, quite recently, Connes, Chamseddine and Mukhanov (see here and here) were able to show that, in the context of non-commutative geometry, a Riemannian manifold turns out to be quantized in unit volumes of two kinds of spheres. The reason there are two kinds of unit volumes is the need for a charge conjugation operator, which implies that these volumes yield the units $(1,i)$ in the spectrum. This provides the foundations for a future quantum gravity that is fully consistent from the start: the reason is that non-commutative geometry generates renormalizable theories!

My interest in non-commutative geometry arises exactly from this. Two years ago, Alfonso Farina, Matteo Sedehi and I obtained a publication about the possibility that a complex stochastic process lies at the foundations of quantum mechanics (see here and here). We described such a process as the square root of a Brownian motion, and so a Bernoulli process appeared, producing the factor 1 or i depending on the sign of the steps of the Brownian motion. This seemed to hint at some deep understanding of space-time. Indeed, the work by Connes, Chamseddine and Mukhanov provides that understanding: what appeared like a square-root process of a Brownian motion is just the motion of a particle on a non-commutative manifold. Here one simply has a combination of a Clifford algebra (that of Dirac's matrices), a Wiener process, and the Bernoulli process representing the scattering between these randomly distributed quantized volumes. Quantum mechanics is so fundamental that its derivation from a geometrical structure, with the addition of some mathematics from stochastic processes, makes a case for non-commutative geometry as a serious proposal for quantum gravity.

I hope to give an account of this deep connection in the near future. This appears to be a rather exciting new avenue to pursue.

Ali H. Chamseddine, Alain Connes, & Viatcheslav Mukhanov (2014). Quanta of Geometry: Noncommutative Aspects Phys. Rev. Lett. 114 (2015) 9, 091302 arXiv: 1409.2471v4

Ali H. Chamseddine, Alain Connes, & Viatcheslav Mukhanov (2014). Geometry and the Quantum: Basics JHEP 12 (2014) 098 arXiv: 1411.0977v1

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics Signal, Image and Video Processing, 8 (1), 27-37 DOI: 10.1007/s11760-013-0473-y

## Is Higgs alone?

14/03/2015

I am back after the announcement by CERN of the restart of the LHC. In May this year we will also have the first collisions. This is great news and we hope for the best, and the best here is just the breaking of the Standard Model.

The Higgs in the title is not Professor Higgs but rather the particle carrying his name. The question is a recurring one since the first hints of its existence appeared at the LHC. The point I would like to make is that the equations of the theory are always solved perturbatively, even though exact solutions exist that provide a mass even if the theory is massless or has a mass term with the wrong sign (Higgs model). All you need is a finite self-interaction term in the equation. So, you will have a hard time recovering such exact solutions with perturbation techniques, and one keeps on living in ignorance. If you would like to see the technicalities involved, just take a cursory look at Dispersive Wiki.

What is the point? The matter is rather simple. The classical theory has exact massive solutions for the potential in the form $V(\phi)=a\phi^2+b\phi^4$, and this is a general result implying that a self-interacting scalar field always gets a mass (see here and here). Are we entitled to ignore this? Of course not. But today exact solutions have lost their charm, and we get along without them.
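To be concrete, for the massless quartic case ($a=0$, $b=\lambda/4$), the kind of exact solution referred to here can be written as follows (following the two papers cited below; my notation here may differ slightly from theirs):

```latex
\partial^2\phi+\lambda\phi^3=0,\qquad
\phi(x)=\mu\left(\frac{2}{\lambda}\right)^{\frac{1}{4}}
\operatorname{sn}\!\left(p\cdot x+\theta,\;i\right),\qquad
p^2=\mu^2\sqrt{\frac{\lambda}{2}},
```

with $\operatorname{sn}$ a Jacobi elliptic function and $\mu$, $\theta$ integration constants: a massless field that nonetheless propagates with a massive dispersion relation.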

For the quantum field theory side, what can we say? The theory can be quantized starting from these solutions, and I have shown that in this way these massive particles have higher excited states. These are not bound states (maybe they could be correctly interpreted in string theory or in a proper technicolor formulation after bosonization) but rather internal degrees of freedom. It is always the same Higgs particle, but with the capability to live in higher excited states. These states are very difficult to observe, because higher excited states are also highly suppressed and even harder to see. In the first LHC run they could not be seen for sure. In a sense, it is like Higgs is alone but with the capability to get fatter and present himself in an infinite number of different ways. The same holds exactly for the formulation of the scalar field as originally proposed by Higgs, Englert, Brout, Kibble, Guralnik and Hagen. We just note that this formulation has the advantage of being exactly what one knows from second-order phase transitions, used by Anderson in his non-relativistic proposal of this same mechanism. The existence of these states appears inescapable, whatever your best choice for the quartic potential of the scalar field.

It is interesting to note that this is also true for Yang-Mills field theory. The classical equations of this theory display similar solutions that are massive (see here), and however you develop your quantum field theory with such solutions, the mass gap is there. The theory entails the existence of massive excitations exactly as the scalar field does. This has been seen in lattice computations (see here). Can we ignore them? Of course not, but exact solutions are not our preferred choice, as said above, even though we will have a hard time recovering them with perturbation theory. Better to wait.

Marco Frasca (2009). Exact solutions of classical scalar field equations J.Nonlin.Math.Phys.18:291-297,2011 arXiv: 0907.4053v2

Marco Frasca (2013). Scalar field theory in the strong self-interaction limit Eur. Phys. J. C (2014) 74:2929 arXiv: 1306.6530v5

Marco Frasca (2014). Exact solutions for classical Yang-Mills fields arXiv arXiv: 1409.2351v2

Biagio Lucini, & Marco Panero (2012). SU(N) gauge theories at large N Physics Reports 526 (2013) 93-163 arXiv: 1210.4997v2

## What is going on at NASA?

09/01/2015

As a physicist I have always been interested in experiments that can corroborate theoretical findings. Many of these become important applications for everyday life or change forever the course of the history of mankind. With this in view, I am currently following with great interest the efforts of the NASA group headed by Harold White. This work has caused an uproar on the web and in the media, as it came to envision the possibility of realizing a warp drive, in the way Alcubierre devised it, putting the stars within reach shortly. As is well known, the Alcubierre drive implies exotic matter, something that is not at hand in either small or large quantities. On the other hand, it was indirectly observed in the Casimir effect, a beautiful application of quantum field theory to real life. So, it is rather natural to link the warp drive with exotic matter. It should be emphasized that nobody on Earth has ever produced it in any way, and it is not available at your nearest grocery store. The experiment carried out by Harold White and his group is realized with an interference device using lasers on an optical table. The idea is to observe a modification of space-time, a minuscule one, that would modify the paths of the laser beams. This would be comparable to the realization of the Chicago pile by Enrico Fermi, which was the starting point for the Manhattan project. I would like to emphasize that such a laboratory small-scale manipulation of space-time would be a huge breakthrough in physics and would open up the way to a new kind of engineering, that of space-time. So, our hopes for a warp drive would be totally fulfilled.

There is an eager desire to obtain any possible information about the progress of White's work but, of course, there are a couple of hurdles. The first one is that a scientist needs to be certain before claiming a result, and we know very well why from some blatant examples in recent years. Extraordinary claims require extraordinary evidence. Last but not least, Harold White is employed at NASA, and some restrictions could be imposed by the organization he works for. So, a video that appeared quite recently, in which White claims that the effect is there but further work is needed for confirmation, is really interesting. If you have an hour of spare time, this video is worth seeing.

This video is interesting per se, because Harold White is talking to his colleagues at NASA. But the interesting part happens during question time, when a colleague of White's asks him "where is the exotic matter?":

and here something interesting happens. White seems to avoid the question and admits that they talked about it before in the office. What is more interesting is what White's colleague says next, unveiling some of the machinery behind the experiment. The colleague suggests that the experiment could be operating in some strong coupling regime that makes the magic happen without any exotic matter. White denies this and disagrees. We know that he is using strong electromagnetic fields in the interference zone. Indeed, the behaviour of space-time under a strong perturbation was studied for cosmological aims by Belinski, Khalatnikov and Lifshitz, the BKL trio. This scenario was confirmed by numerical studies by David Garfinkle (see here). I was able to derive it analytically by analysing the behaviour of the Einstein equations under a strong perturbation (see here). So, the chance to study such effects in a laboratory would be really striking and would mean an incredible breakthrough for people working in general relativity and related fields. What the exchange between White and his colleague implies is that this could already be at hand, and without exotic matter. All the growing concerns about the work at NASA would then not apply, and a different kind of analysis would be needed. In particular, the Alcubierre drive should be devised in a different way. As a physicist, I am eager to learn more about this and to know the real answer, from the horse's mouth, to the question "where is the exotic matter?".

Miguel Alcubierre (2000). The warp drive: hyper-fast travel within general relativity Class.Quant.Grav.11:L73-L77,1994 arXiv: gr-qc/0009013v1

David Garfinkle (2003). Numerical simulations of generic singuarities Phys.Rev.Lett. 93 (2004) 161101 arXiv: gr-qc/0312117v4

Marco Frasca (2005). Strong coupling expansion for general relativity Int.J.Mod.Phys.D15:1373-1386,2006 arXiv: hep-th/0508246v3

## Standard Model at the horizon

08/12/2014

Hawking radiation is one of the most famous effects where quantum field theory combines successfully with general relativity. Since 1975, when Stephen Hawking uncovered it, this result has received enormous consideration and has been derived in many different ways. The idea is that, very near the horizon of a black hole, a pair of particles can be produced, one of which falls into the hole while the other escapes to infinity and is seen as emitted radiation. The overall effect is to drain energy from the hole, as the pair is formed at its expense, and its ultimate fate is to evaporate. The distribution of this radiation is practically thermal, and a temperature and an entropy can be attached to the black hole. The entropy is proportional to the area of the black hole computed at the horizon, as also postulated by Jacob Bekenstein, and so it can only increase. Thermodynamics applies to black holes as well. Since then, the quest to understand the microscopic origin of such an entropy has produced a huge literature, with notable understanding coming from string theory and loop quantum gravity.

In all the derivations of this effect, people generally assume that the particles are free, and there are very good reasons to do so. In this way the theory is easier to manage, and quantum field theory on curved spaces yields definite results. The wave equation is separable and exactly solvable (see here and here). For a scalar field, if you had a self-interaction term you would immediately be in trouble. Notwithstanding this, in the '80s Unruh and Leahy, considering the simplified case of two dimensions and the Schwarzschild geometry, uncovered a peculiar effect: at the horizon of the black hole, the interaction appears to be switched off (see here). This means that the original derivation by Hawking for free particles has indeed a general meaning but, and this is the worst conclusion, all particles become non-interacting and massless at the horizon when one considers the Standard Model! Cooper will have very bad times crossing Gargantua's horizon.

Turning back from science fiction to reality, this problem stood forgotten for all this time, and nobody studied this fact much. The reason is that the vacuum in a curved space-time is not trivial, as first noted by Hawking, and all the more so when particles interact. Simply put, people have increasing difficulties managing a theory that is already complicated in its simplest form. Algebraic quantum field theory provides a rigorous approach to this (e.g. see here). These authors consider an interacting theory with a $\varphi^3$ term but do perturbation theory (small self-interaction), probably hiding the Unruh-Leahy effect in this way.

The situation can change radically if one has exact solutions. A $\varphi^4$ classical theory can indeed be solved exactly, and one can make it manageable (see here). A full quantum field theory can be developed in the strong self-interaction limit (see here), and so the Unruh-Leahy effect can be accounted for. I did so, and then obtained the same conclusion for the Kerr black hole (the one of Interstellar) in four dimensions (see here). This can have devastating implications for the Standard Model of particle physics. The reason is that, if the Higgs field is switched off at the horizon, all the particles will lose their masses and electroweak symmetry will be recovered. Besides, further analysis will be necessary for Yang-Mills fields as well, and I suspect that in this case too the same conclusion has to hold. So, the Unruh-Leahy effect seems to be on the same footing as, and of comparable importance to, Hawking radiation. A deep understanding of it would be needed, starting from quantum gravity. It is a kind of holy grail: the switch-off of all couplings.

Further analysis is needed to get a confirmation of it. But now, I am somewhat more scared to cross a horizon.

V. B. Bezerra, H. S. Vieira, & André A. Costa (2013). The Klein-Gordon equation in the spacetime of a charged and rotating black hole Class. Quantum Grav. 31 (2014) 045003 arXiv: 1312.4823v1

H. S. Vieira, V. B. Bezerra, & C. R. Muniz (2014). Exact solutions of the Klein-Gordon equation in the Kerr-Newman background and Hawking radiation Annals of Physics 350 (2014) 14-28 arXiv: 1401.5397v4

Leahy, D., & Unruh, W. (1983). Effects of a λΦ4 interaction on black-hole evaporation in two dimensions Physical Review D, 28 (4), 694-702 DOI: 10.1103/PhysRevD.28.694

Giovanni Collini, Valter Moretti, & Nicola Pinamonti (2013). Tunnelling black-hole radiation with $φ^3$ self-interaction: one-loop computation for Rindler Killing horizons Lett. Math. Phys. 104 (2014) 217-232 arXiv: 1302.5253v4

Marco Frasca (2009). Exact solutions of classical scalar field equations J.Nonlin.Math.Phys.18:291-297,2011 arXiv: 0907.4053v2

Marco Frasca (2013). Scalar field theory in the strong self-interaction limit Eur. Phys. J. C (2014) 74:2929 arXiv: 1306.6530v5

Marco Frasca (2014). Hawking radiation and interacting fields arXiv arXiv: 1412.1955v1

## Evidence of the square root of Brownian motion

06/03/2014

A mathematical proof of the existence of a stochastic process involving fractional exponents seemed out of the question after some mathematicians claimed it cannot exist. This claim is strongly linked to the current definition and may undergo revision if nature does not agree with it. Stochastic processes are very easy to simulate on a computer. Very few lines of code can decide if something works or not. Alfonso Farina, Matteo Sedehi and I have introduced the idea that the square root of a Wiener process yields the Schroedinger equation (see here or download a preprint here). This implies that one has to attach a meaning to the equation

$dX=(dW)^\frac{1}{2}.$

In a paper that appeared today on arXiv (see here) we finally provided this proof: we were right. The idea is to solve such an equation by numerical methods. These methods are themselves a proof of existence. We used the Euler-Maruyama method, the simplest one, and we compared the results as shown in the following figure

a) Original Brownian motion. b) Same but squaring the formula for the square root. c) Formula of the square root taken as a stochastic equation. d) Same from the stochastic equation in this post.

There is no way to distinguish them from one another, and the original Brownian motion is completely recovered by taking the square of the square-root process computed in three different ways. Each of these computations fully supports the conclusions we drew in our published paper. You can find the code to reproduce this figure in our arXiv paper. It is obtained by a Monte Carlo simulation with 10000 independent paths. You can play with it, changing the parameters as you like.
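The published check was done in Matlab; as a rough cross-check of the same idea, here is a minimal Python sketch (my own illustration, not the code from the arXiv paper). It takes the square root of each Brownian increment, which is complex where the increment is negative, so the units 1 and i appear selected by the sign of each step; squaring each term recovers the original increments.

```python
import random
import math
import cmath

random.seed(42)
nstep, T = 100_000, 50.0
dt = T / nstep

# Gaussian increments of a standard Wiener process: dW ~ N(0, dt)
dW = [random.gauss(0.0, math.sqrt(dt)) for _ in range(nstep)]

# Step of dX = (dW)^(1/2): complex-valued where dW < 0, so the factor
# (1, i) appears, decided by the sign of each increment (a Bernoulli process)
dX = [cmath.sqrt(w) for w in dW]

# Squaring each step recovers the original Brownian increments
err = max(abs(x * x - w) for x, w in zip(dX, dW))
print(err)
```

The maximum recovery error is at the level of floating-point round-off, in line with panel (b) of the figure above.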

This paper has an important consequence: our current mathematical understanding of stochastic processes should be properly extended to account for our results. As a by-product, we have shown how, using Pauli matrices, this idea can be generalized to include spin, introducing a new class of stochastic processes in a Clifford algebra.

In conclusion, we would like to point out that, whatever your mathematical definition may be, a stochastic process is always a well-defined entity on numerical grounds. Tests can be easily performed, as we proved here.

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics Signal, Image and Video Processing, 8 (1), 27-37 DOI: 10.1007/s11760-013-0473-y

Marco Frasca, & Alfonso Farina (2014). Numerical proof of existence of fractional Wiener processes arXiv arXiv: 1403.1075v1

09/02/2014

Dennis Overbye is one of the best science writers around. Recently, he wrote a beautiful piece on the odd behavior of non-converging series like $1+2+3+4+\ldots$ and so on to infinity (see here). The article contains a wonderful video, this one

where it is shown why $1+2+3+4+\ldots=-1/12$, and this happens only when the series is summed all the way to infinity. You can also see a 21-minute video on the same argument from these authors

This is really odd, as we are summing up all positive terms and in the end get a negative result. This question already bothered Euler and is generally settled with the Riemann zeta function. Now, if you talk with a mathematician, you will be warned that such a series is not convergent, and indeed the partial sums become ever larger as the summation proceeds. So, this series should generally be discarded when you meet it in your computations in physics or engineering. We know that things do not stay this way, as nature already patched it. The reason is exactly this: infinity does not exist in nature, and whenever one is met, nature has already fixed it, whatever a mathematician might say. Of course, smarter mathematicians are well aware of this, as you can read on Terry Tao's blog. Indeed, Terry Tao is one of the smartest living mathematicians. One of his latest successes was finding a problem in Otelbaev's presumed proof of the existence of solutions to the Navier-Stokes equations, a well-known millennium problem (see the accepted answer and comments here).
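The manipulation shown in the video can be summarized compactly, with the rigorous assignment coming from the analytic continuation of the Riemann zeta function:

```latex
S = 1+2+3+4+\cdots,\qquad
S_2 = 1-2+3-4+\cdots = \tfrac{1}{4}\ \ (\text{Abel summation}),
```
```latex
S - S_2 = 4+8+12+\cdots = 4S
\;\Longrightarrow\; -3S = \tfrac{1}{4}
\;\Longrightarrow\; S = -\tfrac{1}{12},
```

consistent with $\zeta(s)=\sum_{n\ge 1} n^{-s}$ (convergent for $\mathrm{Re}\,s>1$) analytically continued to $\zeta(-1)=-\tfrac{1}{12}$.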

This idea is well known to physicists, and when an infinity is met we have invented a series of techniques to remove it in the way nature has chosen. This can be seen from the striking agreement between computed and measured quantities in some quantum field theories, not least the Standard Model. E.g., the gyromagnetic ratio of the electron agrees to one part in a trillion with the measured quantity (see here). Such perfection in computations was never seen before in physics and belongs to the great revolution completed by Feynman, Schwinger, Tomonaga and Dyson, which we have inherited in the Standard Model, the latest and greatest revolution seen so far in particle physics. We just hope that the LHC will uncover the next one at the restart of operations. It is possible again that nature will have found further ways to patch infinities, and one of these could be $1+2+3+4+\ldots=-1/12$.

So, we recall one of the greatest principles of physics: nature patches infinities and uses techniques to do it that generally disgust mathematicians. I think that diverging series should be taught in undergraduate courses, maybe using the standard textbook by Hardy (see here). These are not just pathologies in an otherwise wonderful world; rather, these are the ways nature has chosen to behave!

My reason for writing about this matter is linked to a beautiful work I did with my colleagues Alfonso Farina and Matteo Sedehi on the way the Tartaglia-Pascal triangle generalizes in quantum mechanics. We arrived at the conclusion that quantum mechanics arises as the square root of a Brownian motion. We have a paper published on this matter (see here, or you can see the latest draft). Of course, the idea of extracting the square root of a Wiener process is something that disgusted mathematicians, most notably Didier Piau, who claimed that an infinity lurks there. Of course, if I have a sequence of random numbers, these are finite, and I can take their square root at will. Indeed, this is what one sees working with Matlab, which easily recovers our formula for this process. So, what happens to the infinity found by Piau? Nothing: nature already patched it.

So, we learned a beautiful lesson from nature: The only way to know her choices is to ask her.

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics Signal, Image and Video Processing, 8 (1), 27-37 DOI: 10.1007/s11760-013-0473-y

## Ending and consequences of Terry Tao’s criticism

21/09/2013

Summer days are gone and I am back to work. I thought that Terry Tao's criticism of my work was finally settled, and his intervention was indeed a good one. Of course, people just remember the criticism but not how the question has evolved since then (it was 2009!). Terry's point was that the mapping given here between the scalar field solutions and the Yang-Mills field in the classical limit cannot be exact, as it is not guaranteed that the mapped solutions represent an extremum of the Yang-Mills functional. In this way, the conclusions given in the paper, being based on this proof, are not warranted. The problem can be traced back to the gauge invariance of the Yang-Mills theory, which is explicitly broken in this case.

Terry Tao, in a private communication, asked me to provide a paper, published in a refereed journal, that fixed the problem. In that case the question would have been settled one way or the other. E.g., a result completely disproving the mapping would also have been good, disproving my published paper as well.

This matter is rather curious because, if you fix the gauge to be the Lorenz (Landau) gauge, the mapping is exact. But the possible gauge choices are infinite, and so there seem to be infinitely many cases where the mapping theorem appears to fail. The lucky circumstance is that lattice computations are generally performed in the Landau gauge, and when you do quantum field theory a gauge must be chosen. So, is the mapping theorem really false, or can one amend it to fix everything?

In order to clarify this situation, I decided to solve the classical equations of the Yang-Mills theory perturbatively in the strong coupling limit. Please note that today I am the only one in the world able to perform such a computation, having invented from scratch the techniques to do perturbation theory when a perturbation is taken to go to infinity (sorry, no AdS/CFT here, but I can surely support it). You will note that this is the opposite limit to standard perturbation theory, where one looks for a parameter that goes to zero. I succeeded in doing so and put a paper on arXiv (see here) that was finally published that same year, 2009.

The theorem changed in this way:

The mapping exists in the asymptotic limit of the coupling running to infinity (leading order), with the notable exception of the Lorenz (Landau) gauge, where it is exact.
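Schematically, the mapping in question can be written as follows (this is a sketch, not the precise statement; see the cited papers for the full formulation and the gauge-dependent corrections):

```latex
A_\mu^a(x) = \eta_\mu^a\,\phi(x) + O\!\left(\frac{1}{\sqrt{Ng^2}}\right),
```

where the $\eta_\mu^a$ are constants and $\phi$ solves the massless quartic scalar field equation with coupling $Ng^2$; in the Lorenz (Landau) gauge the correction terms vanish and the identification is exact.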

So, I sighed with relief. The reason was that the conclusions of my paper on propagators were correct, but they hold asymptotically in the limit of a strong coupling. This is just what one needs in the infrared limit, where Yang-Mills theory becomes strongly coupled, and this is the main reason to solve it on the lattice. I cited my work on Tao's site, Dispersive Wiki, to which I am a contributor. Terry Tao declared the question definitively settled, with the mapping theorem holding asymptotically (see here).

In the end, we were both right. Tao's criticism was deeply helpful, while my conclusions on the propagators were correct. Indeed, my gluon propagator agrees perfectly well, in the infrared limit, with the data from the largest lattice used in computations so far (see here)

As generally happens in these cases, the only thing people remember is the original criticism by a great mathematician (and Terry is one) that invalidated my work (see here for a question on Physics Stackexchange). As you can see from the tens of papers I have published since then, my work stands, and stands very well. Maybe it would be time to ask the author.

Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6

Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4

Attilio Cucchieri, & Tereza Mendes (2007). What’s up with IR gluon and ghost propagators in Landau gauge? A puzzling answer from huge lattices PoS LAT2007:297,2007 arXiv: 0710.0412v1

## Fooling with mathematicians

28/02/2013

I am still working with stochastic processes and, as my readers know, I have proposed a new view of quantum mechanics, assuming that a meaning can be attached to the square root of a Wiener process (see here and here). I was able to generate it through a numerical code. A square root of a number can always be taken, irrespective of any deep and beautiful mathematical analysis. The reason is that this is something really new and deserves a different approach, much in the same way it happened to the Dirac delta, which initially met with skepticism from the mathematical community (simply, it did not make sense with the knowledge of the time). Here I give you some Matlab code if you want to try it yourselves:

```matlab
nstep = 500000;
dt = 50;
t = dt/nstep:dt/nstep:dt;               % time grid, one point per increment
B = normrnd(0,sqrt(dt/nstep),1,nstep);  % Gaussian increments
dB = cumsum(B);                         % Brownian path
% Square root of the Brownian motion (complex where the path is negative)
dB05 = dB.^(1/2);
```

Nothing prevents you from taking the square root of a number such as a Brownian displacement, so all this has a perfectly sound meaning numerically. The point is just to understand how to give it a full mathematical meaning. The wrong approach is to throw it all away, claiming that it does not exist. This is exactly the behavior I met from Didier Piau. Of course, Didier is a good mathematician, but he simply refuses to accept that such concepts can have any meaning at all, based on what has so far been codified in the area of stochastic processes, notwithstanding that they can easily be computed on a personal computer at home.
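For readers without Matlab, here is a minimal Python/NumPy sketch of the same computation (my translation, not part of the original code). The one subtlety it makes explicit is that where the path is negative the square root comes out complex, so the array is cast to complex first:

```python
import numpy as np

rng = np.random.default_rng(0)
nstep = 500000
dt = 50.0

# Brownian path: cumulative sum of Gaussian increments with variance dt/nstep
B = rng.normal(0.0, np.sqrt(dt / nstep), nstep)
dB = np.cumsum(B)

# Square root of the Brownian motion; casting to complex makes negative
# values of the path give i*sqrt(|x|) instead of NaN
dB05 = np.sqrt(dB.astype(complex))

# Sanity check: squaring recovers the original path
assert np.allclose(dB05**2, dB)
```

Squaring the result recovers the path, which is all the numerical statement amounts to.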

But this saga is not over yet. This time I was trying to compute the cubic root of a Wiener process, and I posted the question at Mathematics Stack Exchange. I asked it with the simple idea in mind of considering a stochastic process with a random mean, and I did not realize that I was provoking a small crisis again. This time the question is the existence of the process ${\rm sign}(dW)$. Didier Piau immediately wrote that it does not exist. Again, here is the Matlab code that computes it very easily:

```matlab
nstep = 500000;
dt = 50;
t = dt/nstep:dt/nstep:dt;               % time grid, one point per increment
B = normrnd(0,sqrt(dt/nstep),1,nstep);  % Gaussian increments
dB = cumsum(B);                         % Brownian path
% Sign and absolute value of the Wiener process
dS = sign(dB);
dA = dB./dS;                            % same as abs(dB)
```

Didier Piau and a colleague of his just complained about the way Matlab performs the sign operation. My view is that it is all legitimate, as Matlab takes + or - depending on the sign of the displacement, something that can be done by hand and that does not imply anything exotic. What is exotic here is the strong opposition this evidence meets, notwithstanding that it is easily understandable by everybody and, of course, easily computable on a desktop computer. The expected distribution for the signs of Brownian displacements is a Bernoulli with p=1/2. Here is the histogram from the above code.

The signs have mean 0 and variance 1, as they should for $N=\pm 1$ and $p=\frac{1}{2}$, and this can be verified after a few Monte Carlo runs. This is in agreement with what I discussed here at Mathematics Stack Exchange, as a displacement in a Brownian motion is a physical increment or decrement of the moving particle and has a sign that can be managed statistically. My attempt to compare all this to the case of the Dirac delta was met with a complaint of overstatement, as the delta was really useful and my approach is not (yet when Dirac put forward his idea, it was just as airy-fairy for the time). Of course, a reformulation of quantum mechanics would be rather formidable support for all this, but this mathematician does not seem to realize it.
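The Monte Carlo check just mentioned is easy to script. Here is a Python/NumPy sketch (again my translation, operating on the signs of the Gaussian increments, i.e. the displacements) that estimates the mean and variance of the signs:

```python
import numpy as np

rng = np.random.default_rng(1)
nstep = 500000
dt = 50.0

# Signs of the Brownian displacements: each is +1 or -1 with probability 1/2
B = rng.normal(0.0, np.sqrt(dt / nstep), nstep)
signs = np.sign(B)

# For a +-1 Bernoulli(1/2) variable the mean is 0 and the variance is 1
print(signs.mean(), signs.var())  # close to 0 and 1 respectively
```

With 500000 samples the standard error of the mean is about 0.0014, so the estimates land very close to the theoretical values.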

So, in the end, I am somewhat surprised by the behavior of the community toward novelties. I can understand skepticism, as it belongs to our profession, but when facing new concepts that can easily be checked to exist numerically, I would prefer a more constructive attitude of trying to understand rather than immediate dismissal. It seems the history of science has taught us nothing, leaving us with a boring repetition of stereotyped reactions to something that would instead be worth further consideration. Meanwhile, I hope my readers will enjoy playing around with these computations, using some exotic mathematical operations on a stochastic process.

Marco Frasca (2012). Quantum mechanics is the square root of a stochastic process. arXiv: 1201.5091v2