Evidence of the square root of Brownian motion

06/03/2014


A mathematical proof of existence of a stochastic process involving fractional exponents seemed out of the question after some mathematicians claimed such a process cannot exist. That claim is strongly tied to the current definition and may undergo revision if nature does not agree with it. Stochastic processes are very easy to simulate on a computer: a few lines of code can decide whether something works or not. Alfonso Farina, Matteo Sedehi and I introduced the idea that the square root of a Wiener process yields the Schrödinger equation (see here or download a preprint here). This implies that one has to attach a meaning to the equation

dX=(dW)^\frac{1}{2}.

In a paper that appeared today on arXiv (see here) we have finally provided this proof: we were right. The idea is to solve such an equation by numerical methods. These methods are themselves a proof of existence. We used the Euler-Maruyama method, the simplest one, and compared the results as shown in the following figure.

a) Original Brownian motion. b) Same but squaring the formula for the square root. c) Formula of the square root taken as a stochastic equation. d) Same from the stochastic equation in this post.

There is no way to distinguish them from one another: the original Brownian motion is completely recovered by taking the square of the square-root process computed in three different ways. Each of them fully supports the conclusions we drew in our published paper. You can find the code that produces this figure in our arXiv paper. It is obtained by a Monte Carlo simulation with 10000 independent paths, and you can play with it, changing the parameters as you like; a minimal baseline for such experiments is sketched below.
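For readers who want to try this at home, here is a minimal sketch of the kind of Euler-Maruyama/Monte Carlo setup involved (in Python rather than the Matlab of the paper). It only generates the baseline ensemble of Wiener paths and one naive reading of the square root; the three recipes actually compared in the figure are spelled out, with the original code, in the arXiv paper.

    import numpy as np

    # Baseline Monte Carlo ensemble: 10000 independent Wiener paths built with
    # the Euler-Maruyama scheme (panel a of the figure above).
    rng = np.random.default_rng(0)
    n_paths, n_steps, dt = 10000, 1000, 1e-3
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Wiener increments
    W = np.cumsum(dW, axis=1)                                    # Brownian paths

    # One naive reading of dX = (dW)^(1/2): accumulate the principal complex
    # square root of each increment. The recipes used in the paper are more
    # refined than this; treat it only as a starting point for experiments.
    X = np.cumsum(np.sqrt(dW.astype(complex)), axis=1)

    # Sanity check on the baseline: Var[W(T)] should be close to T = n_steps*dt.
    print(W[:, -1].var(), n_steps * dt)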

This paper has an important consequence: our current mathematical understanding of stochastic processes should be properly extended to account for these results. As a by-product, we have shown how, using Pauli matrices, the idea can be generalized to include spin, introducing a new class of stochastic processes in a Clifford algebra.
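As a small aside on the algebraic setting, the sketch below (my own illustration, not taken from the paper) just writes down the Pauli matrices and checks the Clifford-algebra relation \sigma_i\sigma_j+\sigma_j\sigma_i=2\delta_{ij}I that underlies the spin-carrying generalization.

    import numpy as np

    # Pauli matrices.
    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]], dtype=complex),
             np.array([[1, 0], [0, -1]], dtype=complex)]

    # Check the Clifford relation: sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij I.
    for i in range(3):
        for j in range(3):
            anti = sigma[i] @ sigma[j] + sigma[j] @ sigma[i]
            assert np.allclose(anti, 2 * (i == j) * np.eye(2))
    print("Clifford algebra relation verified")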

In conclusion, we would like to point out that, whatever your mathematical definition may be, a stochastic process is always a well-defined entity on numerical grounds. Tests can easily be performed, as we proved here.

Farina, A., Frasca, M., & Sedehi, M. (2013). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics Signal, Image and Video Processing, 8 (1), 27-37 DOI: 10.1007/s11760-013-0473-y

Marco Frasca, & Alfonso Farina (2014). Numerical proof of existence of fractional Wiener processes arXiv arXiv: 1403.1075v1


Nailing down the Yang-Mills problem

22/02/2014

Millennium problems represent a major challenge for physicists and mathematicians. So far, the only one that has been solved is the Poincaré conjecture (now a theorem), by Grisha Perelman. For people working on strong interactions and quantum chromodynamics, the most interesting of these problems is the Yang-Mills mass gap and existence problem. A solution to it would imply a lot of consequences in physics, one of the most important being a deep understanding of the confinement of quarks inside hadrons. So far there seems to be no solution, but things do not stand exactly that way. A significant number of researchers have performed lattice computations of the propagators of the theory over the full range of energy, from infrared to ultraviolet, providing us with a deep understanding of what is going on here (see the Yang-Mills article on Wikipedia). The propagators to be considered are those for the gluon and the ghost. There has been a significant effort from theoretical physicists in the last twenty years to answer this question. It is not widely known in the community, but it should be, because the work of these people could be the starting point for a great innovation in physics. In these days, a paper by Axel Maas on arXiv gives a careful account of the state of these lattice computations (see here). Axel has been an important contributor to this research area, and the current understanding of the behavior of Yang-Mills theory in two dimensions owes a lot to him. In this paper, Axel presents his computations on large volumes for Yang-Mills theory on the lattice in 2, 3 and 4 dimensions in the SU(2) case. These computations are generally performed in the Landau gauge (propagators are gauge-dependent quantities), which is the most favorable for them. In four dimensions the lattice is (6\ fm)^4, not the largest but surely enough for the aims of the paper. Of course, no surprise comes out with respect to what people have found since 2007. The scenario is well settled and is this:

  1. The gluon propagator in 3 and 4 dimensions does not go to zero with momenta but stays finite. In 3 dimensions it has a maximum in the infrared and reaches its finite value at 0 from below. No such maximum is seen in 4 dimensions. In 2 dimensions the gluon propagator goes to zero with momenta.
  2. The ghost propagator behaves like that of a free massless particle as the momenta are lowered. This is the dominant behavior in 3 and 4 dimensions. In 2 dimensions the ghost propagator is enhanced and goes to infinity faster than in 3 and 4 dimensions.
  3. The running coupling in 3 and 4 dimensions goes to zero as the momenta go to zero, reaches a maximum at intermediate energies and falls off asymptotically as momenta go to infinity (asymptotic freedom); see the schematic summary right after this list.
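Schematically, in four dimensions, with the running coupling defined as usual in these studies from the ghost-gluon vertex, this scenario reads

D_{\rm gluon}(p^2)\to{\rm const}>0,\qquad D_{\rm ghost}(p^2)\sim\frac{1}{p^2},\qquad \alpha(p^2)\propto p^6\,D_{\rm ghost}^2(p^2)\,D_{\rm gluon}(p^2)\to 0\qquad (p\to 0),

while in 2 dimensions the gluon propagator itself vanishes at zero momentum and the ghost is further enhanced.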

Here follows the figure for the gluon propagator

and for the running coupling


Some people are concerned about the running coupling. There is a recurring prejudice in Yang-Mills theory, without any theoretical or experimental support, that the theory should be non-trivial in the infrared. So, the running coupling should not go to zero as momenta are lowered but rather reach a finite non-zero value. Of course, a pure Yang-Mills theory does not exist in nature and it is very difficult to get an understanding here. But in 2 and 3 dimensions the point is that the gluon propagator is very similar to a free one, the ghost propagator is certainly a free one, and then, applying the duck test (if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck), the theory is really trivial also in the infrared limit. Currently, there are two people in the world who have recognized a duck here: Axel Weber (see here and here), using the renormalization group, and me (see here, here and here). Now, claiming to see a duck where everyone else insists on a dinosaur does not make you the most popular guy in the neighborhood. But so it goes.

These lattice computations are an important cornerstone in the search for the behavior of a Yang-Mills theory. Whoever aims to present his pet theory to the world as a solution of the Millennium problem must comply with these results, showing that his theory is able to reproduce them. Otherwise what he has is just rubbish.

What also comes into sight is the proof of existence of the theory. Having two trivial fixed points, the theory is Gaussian in these limits, exactly as for the scalar field theory. A Gaussian theory is the simplest example we know of a quantum field theory that is proven to exist. Could one recover the missing part between the two trivial fixed points, as also happens for the scalar theory? In the end, it is possible that a Yang-Mills theory is just the vectorial counterpart of the well-known scalar field, the workhorse of all scholars in quantum field theory.
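To make "Gaussian" concrete: a Gaussian (free) theory is one whose generating functional is fixed entirely by the propagator \Delta, schematically (Euclidean conventions)

Z[J]=\exp\left(\frac{1}{2}\int d^4x\, d^4y\, J(x)\Delta(x-y)J(y)\right),

so that all correlation functions follow from the two-point function by Wick's theorem; a theory reaching such fixed points at both ends of the energy range is trivial there in exactly this sense.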

Axel Maas (2014). Some more details of minimal-Landau-gauge Yang-Mills propagators arXiv arXiv: 1402.5050v1

Axel Weber (2012). Epsilon expansion for infrared Yang-Mills theory in Landau gauge Phys. Rev. D 85, 125005 arXiv: 1112.1157v2

Axel Weber (2012). The infrared fixed point of Landau gauge Yang-Mills theory arXiv arXiv: 1211.1473v1

Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6

Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical
Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4

Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3


Nature already patched it

09/02/2014


Dennis Overbye is one of the best science writers around. Recently, he wrote a beautiful piece on the odd behavior of non-converging series like 1+2+3+4+\ldots and so on to infinity (see here). The article features a wonderful video where it is shown why 1+2+3+4+\ldots=-1/12, and why this happens only when the sum is taken all the way to infinity. You can also watch a 21-minute video on the same argument from the same authors.

This is really odd, as we are summing only positive terms and in the end get a negative result. This is a question that already bothered Euler and is generally handled with the Riemann zeta function. Now, if you talk with a mathematician, you will be warned that such a series does not converge; indeed, the partial sums grow larger and larger as the sum proceeds. So, this series should generally be discarded when you meet it in your computations in physics or engineering. We know that things do not stand this way, as nature has already patched it. The reason is exactly this: infinity does not exist in nature, and whenever one is met, nature has already fixed it, whatever a mathematician might say. Of course, smarter mathematicians are well aware of this, as you can read on Terry Tao's blog. Indeed, Terry Tao is one of the smartest living mathematicians. One of his latest successes is to have found a problem in Otelbaev's presumed proof of the existence of solutions to the Navier-Stokes equations, a well-known Millennium problem (see the accepted answer and comments here).
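A quick way to see the "patched" value on a computer is to regulate the sum with a smooth cutoff, in the spirit of the discussion on Tao's blog: the divergence is isolated in a power of the cutoff and the finite remainder is -1/12. A minimal sketch of my own, with an exponential regulator:

    import numpy as np

    # Regulate 1+2+3+... with a smooth exponential cutoff exp(-eps*n).
    # The regulated sum behaves as 1/eps^2 - 1/12 + O(eps^2): the divergence
    # sits entirely in the 1/eps^2 term and the finite part is -1/12.
    eps = 1e-3
    n = np.arange(1, int(50 / eps))
    regulated = np.sum(n * np.exp(-eps * n))

    print(regulated - 1 / eps**2)   # close to -1/12 = -0.0833...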

This idea is well known to physicists, and when an infinity is met we have invented a series of techniques to remove it in the way nature has chosen. This can be seen from the striking agreement between computed and measured quantities in some quantum field theories, not least the Standard Model. E.g., the gyromagnetic ratio of the electron agrees with the measured value to about one part in a trillion (see here). Such precision had never been seen before in physics and belongs to the great revolution completed by Feynman, Schwinger, Tomonaga and Dyson that we have inherited in the Standard Model, the latest and greatest revolution seen so far in particle physics. We just hope that the LHC will uncover the next one at the restart of operations. It is possible that nature has found further ways to patch infinities, and one of these could be 1+2+3+4+\ldots=-1/12.

So, we recall one of the greatest principles of physics: nature patches infinities, and uses techniques to do it that generally disgust mathematicians. I think diverging series should be taught in undergraduate courses, maybe using the standard textbook by Hardy (see here). These are not just pathologies in an otherwise wonderful world; rather, they are the ways nature has chosen to behave!

The reason for me to write about this matter is linked to a beautiful work I did with my colleagues Alfonso Farina and Matteo Sedehi on the way the Tartaglia-Pascal triangle generalizes in quantum mechanics. We arrived at the conclusion that quantum mechanics arises as the square root of a Brownian motion. We got a paper published on this matter (see here, or you can see the latest draft). Of course, the idea of extracting the square root of a Wiener process is something that disgusted mathematicians, notably Didier Piau, who claimed that an infinity lurks somewhere. Of course, if I have a sequence of random numbers, they are finite and I can take their square root at will. Indeed, this is what one sees working with Matlab, which easily recovers our formula for this process. So, what happens to the infinity found by Piau? Nothing: nature has already patched it.

So, we learned a beautiful lesson from nature: The only way to know her choices is to ask her.

A. Farina, M. Frasca, & M. Sedehi (2014). Solving Schrödinger equation via Tartaglia/Pascal triangle: a possible link between stochastic processing and quantum mechanics Signal, Image and Video Processing, 8 (1), 27-37 DOI: 10.1007/s11760-013-0473-y


Back to work

02/02/2014


I would like to have much more time to write on my blog. Indeed, time is something I do not often have, and the connection is not as good as I would like in the places where I spend most of it. So, I take this moment to give an update on what I have seen around in these days.

The LHC has found no evidence of dark matter so far (see here). Dark matter appears ever more difficult to see, and theory is not able to help the search. This is also one of our major avenues to go beyond the Standard Model. On the other side, the ASACUSA experiment at CERN produced the first beam of antihydrogen atoms (see here; this article is free to read). We expect no relevant news about the very nature of the Higgs until the LHC restarts in 2015. It must be said that the data collected so far tell us that this particle behaves very nearly like the one postulated by Weinberg in 1967.

In these days there has been some fuss about the realization in the laboratory of a Dirac magnetic monopole (see here). Although this is a really beautiful experiment, nobody has seen a magnetic monopole so far. It is a simulation performed with another physical system: a BEC. This is a successful technology that will permit an even better understanding of physical systems that are difficult to observe. Studies are ongoing to realize a simulation of Hawking radiation in such a system. Even so, I have read in social networks and in the news that a magnetic monopole was seen in the laboratory. Of course, this is not true.

The question of black holes is always at the top of the list of the main problems in physics, especially when a master of physics comes out with a new point of view. So, a lot of fuss arose from this article in Nature about a new idea from Stephen Hawking, which he published in a paper on arXiv (see here). Beyond the resounding title, Hawking is just proposing a way to avoid the concept of firewalls that was at the center of a hot debate in the last months. Again we see a journalist not doing a good job but rather generating a lot of noise, and noise can hide a signal very well.

Finally, we hope for a better year in science communication. The start has been somewhat disappointing.

Kuroda N, Ulmer S, Murtagh DJ, Van Gorp S, Nagata Y, Diermaier M, Federmann S, Leali M, Malbrunot C, Mascagna V, Massiczek O, Michishio K, Mizutani T, Mohri A, Nagahama H, Ohtsuka M, Radics B, Sakurai S, Sauerzopf C, Suzuki K, Tajima M, Torii HA, Venturelli L, Wünschek B, Zmeskal J, Zurlo N, Higaki H, Kanai Y, Lodi Rizzini E, Nagashima Y, Matsuda Y, Widmann E, & Yamazaki Y (2014). A source of antihydrogen for in-flight hyperfine spectroscopy. Nature communications, 5 PMID: 24448273

M. W. Ray,, E. Ruokokoski,, S. Kandel,, M. Möttönen,, & D. S. Hall (2014). Observation of Dirac monopoles in a synthetic magnetic field Nature, 505, 657-660 DOI: 10.1038/nature12954

Zeeya Merali (2014). Stephen Hawking: ‘There are no black holes’ Nature DOI: 10.1038/nature.2014.14583

S. W. Hawking (2014). Information Preservation and Weather Forecasting for Black Holes arXiv arXiv: 1401.5761v1


End of the year quote

31/12/2013

“An error does not become truth by reason of multiplied propagation, nor does truth become error because nobody sees it. Truth stands, even if there be no public support. It is self sustained.”

(Mahatma Gandhi, Young India 1924-1926 (1927), p. 1285.)


That strange behavior of supersymmetry…

07/12/2013


I am a careful reader of the scientific literature and an avid searcher for material already published in peer-reviewed journals. Of course, arXiv is essential to accomplish this task and to satisfy my need for reading. In these days, I am working on Dyson-Schwinger equations. I wrote a paper on this (see here) a few years ago, but that work is in strong need of revision. Maybe one of these days I will take up the challenge. Googling around, looking for the Dyson-Schwinger equations applied to the well-known supersymmetric model due to Wess and Zumino, I uncovered a very exciting track of research that uses Dyson-Schwinger equations to produce exact results in quantum field theory. The paper I found is authored by Marc Bellon, Gustavo Lozano and Fidel Schaposnik and can be found here. These authors derive the Dyson-Schwinger equations for the Wess-Zumino model at one loop and manage to compute the self-energies of the fields involved: a scalar, a fermion and an auxiliary bosonic field. They obtain three different equations, one for the self-energy of each field. Self-energies are essential in quantum field theory, as they introduce corrections to the masses in a propagator and so enter the physical content of an object that is not itself an observable.

Now, if you are in a symmetric theory like the Wess-Zumino model, such a symmetry, if unbroken, will give equal masses to all the components of the multiplet entering the theory. This means that if you start with the assumption that all the self-energies are equal, you are making a consistent approximation. This is just what Bellon, Lozano and Schaposnik did: they assumed from the start that all the self-energies are equal in the Dyson-Schwinger equations they obtain and carried on with their computations. This choice leaves an open question: what if I choose different self-energies from the start? Will the Dyson-Schwinger equations drive the solution toward the symmetric one? (A toy illustration of this consistency argument is sketched below.)
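To illustrate the consistency argument only, here is a toy permutation-symmetric iteration of my own, not the Bellon-Lozano-Schaposnik equations: if the update rule treats the three self-energies symmetrically, an equal start stays equal at every step. What an unequal start does is precisely the question asked above, and for the real equations it is answered by the figures below.

    import numpy as np

    def toy_update(sigma, g):
        # A made-up fixed-point map, symmetric under any relabeling of the
        # three components: each "self-energy" sees the sum of all of them.
        return g / (1.0 + sigma.sum() + sigma)

    g = 10.0
    equal_start = np.array([0.1, 0.1, 0.1])
    unequal_start = np.array([0.1, 0.2, 0.3])
    for _ in range(200):
        equal_start = toy_update(equal_start, g)
        unequal_start = toy_update(unequal_start, g)

    print("equal start   ->", equal_start)    # stays exactly symmetric
    print("unequal start ->", unequal_start)  # see where this toy ends up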

This question is really interesting, as the model considered is not exactly the one that Witten analysed in his famous 1982 paper on supersymmetry breaking (you can download it here). A supersymmetric model generates non-linear terms and could be amenable to spontaneous symmetry breaking, provided the Witten index has the proper value. The question I asked is strongly related to the idea of supersymmetry breaking at the bootstrap: supersymmetry being responsible for its own breaking.

So, I managed to numerically solve the Dyson-Schwinger equations for the Wess-Zumino model, as given by Bellon, Lozano and Schaposnik, and presented the results in a paper (see here). If you solve them assuming from the start that all the self-energies are equal, you get the following figure for the coupling running from 0.25 to 100 (weak to strong):

All equal self-energies for the Wess-Zumino model

It does not matter how you modify the parameters in the Dyson-Schwinger equations: choosing the self-energies all equal from the start keeps them equal forever. This is a consistent choice and this solution exists. But now, try choosing all different self-energies. You will get the following figure for the same couplings:

Not all equal self-energies for the Wess-Zumino model

This is really nice. You see that solutions with all different self-energies also exist, and supersymmetry may be broken in this model. This kind of solution was missed by the authors. What one sees here is that supersymmetry is preserved for small couplings, even though we started with all different self-energies, but is broken as the coupling becomes stronger. This result is really striking and unexpected. It is in agreement with the results presented here.

I hope to extend this analysis to more mundane theories, to analyse behaviours that are currently discussed in the literature but never checked for. For these aims there are very powerful tools developed for Mathematica by Markus Huber, Jens Braun and Mario Mitter to derive and numerically solve Dyson-Schwinger equations: DoFun and CrasyDSE (thanks to Markus Huber for his help). I suggest playing with them for numerical explorations.

Marc Bellon, Gustavo S. Lozano, & Fidel A. Schaposnik (2007). Higher loop renormalization of a supersymmetric field theory Phys.Lett.B650:293-297,2007 arXiv: hep-th/0703185v1

Edward Witten (1982). Constraints on Supersymmetry Breaking Nuclear Physics B, 202, 253-316 DOI: 10.1016/0550-3213(82)90071-2

Marco Frasca (2013). Numerical study of the Dyson-Schwinger equations for the Wess-Zumino
model arXiv arXiv: 1311.7376v1

Marco Frasca (2012). Chiral Wess-Zumino model and breaking of supersymmetry arXiv arXiv: 1211.1039v1

Markus Q. Huber, & Jens Braun (2011). Algorithmic derivation of functional renormalization group equations and
Dyson-Schwinger equations Computer Physics Communications, 183 (6), 1290-1320 arXiv: 1102.5307v2

Markus Q. Huber, & Mario Mitter (2011). CrasyDSE: A framework for solving Dyson-Schwinger equations arXiv arXiv: 1112.5622v2


Living dangerously

05/11/2013


Today I read an interesting article in the New York Times by Dennis Overbye (see here). Of course, for researchers, a discovery that does not open new puzzles is not really a discovery but just the end of the story. The content of the article is intriguing and is related to the question of the stability of our universe. This matter has already been discussed in blogs (e.g. see here) and is linked to a paper by Giuseppe Degrassi, Stefano Di Vita, Joan Elias-Miró, José R. Espinosa, Gian F. Giudice, Gino Isidori and Alessandro Strumia (see here) with its now famous picture

Stability and Higgs

Our universe, with its inhabitants, lives in that small square at the border between stability and meta-stability. So, it does not take much to "live dangerously", as the authors say: just a better measurement of the mass of the top quark could throw us there, and this is within reach at the restart of the LHC. Anyhow, their estimate of the tunneling time is really reassuring, as the required time is larger than any reasonable cosmological age. Our universe, given the data coming from the LHC, seems to live in a metastable state. This is further confirmed in a more recent paper by the same authors (see here). This means that the discovery of the Higgs boson with the given mass does not appear satisfactory from a theoretical standpoint and, with new physics missing, we are left with open questions that naturalness and supersymmetry would have properly addressed. The picture of a light Higgs boson at 125 GeV, within the framework of the Higgs mechanism (recently rewarded with a richly deserved Nobel prize to Englert and Higgs) and an extensive use of weak perturbation theory, is looking weary.

The question to be answered is: is there any point in this logical chain where we can intervene to put the matter on a proper track? Or is this the situation, with the Standard Model holding all the way up to the Planck energy?

In all this there is a curious question that arises when you work with a conformal Standard Model. In this case, there is no mass term in the Higgs potential; rather, the potential gets modified by quantum corrections (the Coleman-Weinberg mechanism) and a non-zero vacuum expectation value comes out. But one has to ensure that higher-order quantum corrections do not spoil conformal invariance. This happens if one uses dimensional regularization rather than other regularization schemes. It guarantees that no quadratic correction arises and the Higgs boson is "natural". This is a rather strange situation. Dimensional regularization works. It was invented by 't Hooft and Veltman and widely used by Wilson and others in their successful application of the renormalization group to phase transitions. So, why does it seem to behave differently (better!) in this situation? To decide, we would need a measurement of the Higgs potential, which is presently out of the question.
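Schematically, and only to fix ideas (the coefficient c depends on conventions and field content), the Coleman-Weinberg mechanism replaces the missing mass term with a loop-generated logarithm,

V_{\rm eff}(\phi)\ \simeq\ \frac{\lambda}{4}\phi^4+c\,\lambda^2\phi^4\left(\ln\frac{\phi^2}{\mu^2}-{\rm const}\right),

whose minimization selects a non-zero \langle\phi\rangle and trades the dimensionless coupling for a scale (dimensional transmutation), while dimensional regularization keeps quadratic corrections from appearing.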

But there is a more fundamental point than "naturalness", about which a hot debate is going on. With the pioneering work of Nambu and Goldstone we learned a fundamental lesson: the laws of physics are highly symmetric, but nature greatly enjoys hiding these symmetries. A lot of effort by very smart people was required to uncover them, being so well hidden (do you remember the lesson from Lorentz invariance?). In the Standard Model there is a notable exception: conformal invariance appears to be broken by hand by the Higgs potential. Why? Conformal invariance is really fundamental, as all two-dimensional theories enjoy it. A typical conformal theory is string theory, and we can build all our supersymmetric models with such a property, broken afterwards by whatever mechanism. Any conceivable more fundamental theory has conformal invariance, and we would like it to be there also in the low-energy limit, with a proper mechanism to break it. But not by hand.

Finally, we observe that all our theories seem to be really lucky: the coupling is always small and we can work out perturbation theory. Even strong interactions become weakly interacting at high energies. In their papers, Gian Giudice et al. are able to show that the self-coupling of the Higgs potential decreases at higher energies, so they can satisfactorily apply perturbation theory. Indeed, they show that there is an energy at which this coupling vanishes and is about to change sign. As they work at high energies, their potential contains just a quartic term. My question here is rather peculiar: what if there exist exact solutions at finite (non-zero) quartic coupling that go like an inverse power of the coupling? We would not be able to recover them with perturbation theory, but nature could have settled there. So, we would need to do perturbation theory around them to get the physics right. I have given some of these here and here, but one cannot exclude that others exist. This also means that the mechanism of symmetry breaking can hide some surprises and the matter may not be completely settled. Ever heard of breaking a symmetry by a zero mode?
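For definiteness, the exact classical solutions alluded to here (see the last two references below) for the massless quartic scalar field, \Box\phi+\lambda\phi^3=0, take the form

\phi(x)=\mu\left(\frac{2}{\lambda}\right)^{\frac{1}{4}}{\rm sn}\left(p\cdot x+\theta,\,i\right),\qquad p^2=\mu^2\sqrt{\frac{\lambda}{2}},

with sn a Jacobi elliptic function and \mu, \theta integration constants: the amplitude scales as \lambda^{-1/4}, an inverse power of the coupling, and is therefore invisible to any expansion around \lambda=0.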

So, maybe it is not our universe that is on the verge of living dangerously; rather, some of our views need a revision or a better understanding. Only then will the next step be easier to unveil. Let me bet on supersymmetry again.


Giuseppe Degrassi, Stefano Di Vita, Joan Elias-Miró, José R. Espinosa, Gian F. Giudice, Gino Isidori, & Alessandro Strumia (2012). Higgs mass and vacuum stability in the Standard Model at NNLO JHEP August 2012, 2012:98 arXiv: 1205.6497v2

Dario Buttazzo, Giuseppe Degrassi, Pier Paolo Giardino, Gian F. Giudice, Filippo Sala, Alberto Salvio, & Alessandro Strumia (2013). Investigating the near-criticality of the Higgs boson arXiv arXiv: 1307.3536v1

Marco Frasca (2009). Exact solutions of classical scalar field equations J.Nonlin.Math.Phys.18:291-297,2011 arXiv: 0907.4053v2

Marco Frasca (2013). Exact solutions and zero modes in scalar field theory arXiv arXiv: 1310.6630v1

