The Saga of Landau-Gauge Propagators: A Short History

28/01/2011

I have never discussed in much depth the history of the question of Yang-Mills propagators in the Landau gauge, even if I have often expressed a clear-cut position. So let me offer a kind of disclaimer: I do not mean to slight anyone's work, but my results agree so well with lattice computations that my point of view cannot be much different. A recent paper on arXiv by Attilio Cucchieri and Tereza Mendes, together with an email exchange with Attilio, gave me the idea of putting down these lines to give my audience a sense of what is at stake and why no peace treaty has yet been signed by the people working in this area of research.

Firstly, I would like to give an idea of why this part of our scientific community is pursuing the task of identifying the gluon and ghost propagators of a pure Yang-Mills theory. The most obvious reason is to understand confinement. The idea that confinement is coded into these propagators dates back to the works of Vladimir Gribov, taken to their natural extension by Daniel Zwanziger. Daniel, whom I have had the luck to meet and hear in Ghent last year, did a great job in this direction and proved that, for Yang-Mills theory to be confining, the gluon propagator must go to zero as the momentum goes to zero. This scenario, arising from the contributions of these authors, was named the Gribov-Zwanziger scenario. It implies that positivity is maximally violated by the propagator, and the real-space propagator should be seen to cross the time axis. I would like to emphasize that the propagator is a gauge-dependent quantity, even if the spectrum one could obtain from it is not, and here we aim to talk about the Landau-gauge propagators of both the gluon and the ghost fields, which are generally easier to manage both on the lattice and theoretically.

The next, equally relevant, reason to get these propagators is to understand how Yang-Mills theory behaves at lower energies, as we know its behavior at higher ones quite well, and whether a mass gap indeed forms. This could have an impact on a lot of activities in high-energy and nuclear physics. At accelerator facilities one needs an exact idea of the background arising from QCD, and this is not yet well under control. We have seen a clear example with the charge asymmetry seen by CDF at the Tevatron at 3.4 sigma. We cannot yet be sure this is new physics, and if a mass gap exists at lower energies, what happens to such massive particles going to higher energies? So, my conclusion here is that we cannot live forever ignoring the low-energy behavior of QCD, as its complete understanding could have an impact at unexpectedly large scales.

After the work of Zwanziger, people were motivated to get an explicit form of these propagators. Two approaches were clearly at hand. The first is the use of large computer facilities to solve the theory on the lattice. The other is to attack the problem theoretically through a non-perturbative hierarchy of equations: the Dyson-Schwinger equations. The first technique has the drawback that increasing resources are needed to approach volumes meaningful for a proper understanding of the theory. At the start of the nineties, the computing facilities available today were just a dream. On the side of the Dyson-Schwinger equations, the problem is mathematically very easy to state but very difficult to solve: how to truncate the hierarchy to get the proper results at lower energies? In 1997 an important paper by Lorenz von Smekal, Andreas Hauck and Reinhard Alkofer appeared in Physical Review Letters (see here). The authors claimed to have found a proper truncation of the Dyson-Schwinger hierarchy, showing that the gluon propagator should go to zero at lower momenta while the ghost propagator should diverge faster than in the free-particle case. Also, the running coupling should reach a finite non-zero fixed point in the same limit. The so-called “scaling solution” was born. The importance of this paper lies in the fact that it strongly supports the Gribov-Zwanziger scenario, so the theoretical results of those authors appeared vindicated! The idea of the scaling solution, and the school built on it by von Smekal and Alkofer with a lot of students afterwards, is one of the main threads of the history we are telling here.

After this paper, a lot of others followed in high-impact journals, and the whole community working on Landau-gauge propagators was genuinely convinced that this was indeed the right behavior of the propagators. The common wisdom that there was an infrared fixed point was also supported by these results. Indeed, people doing lattice computations seemed to confirm these findings, even if the gluon propagator was never seen to converge toward zero. But this was said to depend just on the small volumes they used, with the inherent limitations of the computer facilities of that time. Anyhow, data fits seemed to agree quite well with the scaling solution. At this point, by the beginning of the new century, the scaling solution had become a paradigm for the whole community computing propagators both theoretically and numerically.

At the beginning of the new century things started to change. People started to solve the Dyson-Schwinger equations on computers, and the results did not appear to agree with the scaling solution. The advantage of solving the Dyson-Schwinger equations numerically is that the volume limitations plaguing lattice computations are absent here. It is worthwhile to cite a couple of papers (here and here) that had their culmination in a work by Joannis Papavassiliou, Daniele Binosi and Arlene Aguilar (see here). It should be noted that the second of these three papers went unpublished and the first met severe difficulties in getting published. Indeed, it should be remembered that a paradigm had already formed, while these papers contained completely opposite results. What was found was really shocking: the gluon propagator was seen to reach a finite non-zero value, and the ghost propagator was diverging like that of a free particle! This could be called the discovery of the “decoupling solution”, but that is not completely true. Such a solution had been obtained by John Cornwall back in the eighties and had been waiting for confirmation since then (see here). I would like to emphasize that Joannis Papavassiliou worked on this with Cornwall, and that work merged in some way with the numerical solution of the Dyson-Schwinger equations. The really striking part of these results is that the gluon acquires mass dynamically, with a mechanism akin to the one Schwinger devised long ago, and this is an essential starting point to understand confinement.
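
To make the two alternatives concrete, here is a minimal numerical sketch of my own, not taken from any of the cited papers, comparing the infrared behavior of the gluon propagator in the two solutions. The exponent kappa = 0.595 is the value quoted in the scaling-solution literature, while the mass m = 0.6 GeV is just an illustrative figure.

```python
import numpy as np

kappa = 0.595   # scaling-solution infrared exponent (literature value)
m = 0.6         # illustrative effective gluon mass in GeV

def gluon_scaling(p2):
    """Scaling solution: D(p^2) ~ (p^2)^(2*kappa - 1) -> 0 as p^2 -> 0."""
    return p2 ** (2 * kappa - 1)

def gluon_decoupling(p2):
    """Decoupling solution: D(p^2) = 1/(p^2 + m^2) -> 1/m^2, a finite value."""
    return 1.0 / (p2 + m ** 2)

for p2 in [1.0, 0.1, 0.01, 0.001]:  # momenta squared in GeV^2, deep infrared
    print(f"p^2 = {p2:6.3f} GeV^2   scaling: {gluon_scaling(p2):7.3f}   "
          f"decoupling: {gluon_decoupling(p2):7.3f}")
```

Whatever the exact numbers, the difference in the limit is stark: the first form vanishes as the momentum goes to zero, while the second saturates at 1/m^2, which is exactly the massive behavior we are about to discuss.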

In parallel to this line of research, computers improved, and more powerful ones became available over these years to attack the problem on the lattice. Increasing volumes did not seem to change the situation. The scaling solution appeared more and more distant from the numerical results. The crushing event happened at the Regensburg conference, Lattice 2007. Large-volume computations were finally available. Three contributions appeared, by a Brazilian group (Attilio Cucchieri and Tereza Mendes), a Russian-German group (I. Bogolubsky, E.-M. Ilgenfritz, M. Müller-Preussker and A. Sternbeck) and a Portuguese-German group (O. Oliveira, P. J. Silva, E.-M. Ilgenfritz, A. Sternbeck). Attilio and Tereza considered volumes as huge as 27 fm on a side! There was no doubt that the solution with massive gluons, the one contradicting the initial paradigm introduced by Alkofer, von Smekal and Hauck, was the one seen on the lattice at large volumes. The propagator was not going to zero but to a finite non-zero value, reaching a plateau at lower momenta. Also on the lattice the gluon appeared to acquire a mass, at least in four dimensions.

These results were shocking, but the question is not settled yet and I would have a lot more to say. The people supporting the scaling solution are still there, alive and kicking, and the propagator war is still on. Nobody wants to stand their armies down and surrender. This means that I will have a lot to write yet, and the reasons to keep this blog alive are several. For the moment I hope to have kept your attention alive and, if you have something to add or to clarify, we are open to comments.

von Smekal, L., Hauck, A., & Alkofer, R. (1997). Infrared Behavior of Gluon and Ghost Propagators in Landau Gauge QCD Physical Review Letters, 79 (19), 3591-3594 DOI: 10.1103/PhysRevLett.79.3591

Aguilar, A., & Natale, A. (2004). A dynamical gluon mass solution in a coupled system of the Schwinger-Dyson equations Journal of High Energy Physics, 2004 (08), 57-57 DOI: 10.1088/1126-6708/2004/08/057

Ph. Boucaud, J. P. Leroy, A. Le Yaouanc, A. Y. Lokhov, J. Micheli, O. Pene, J. Rodriguez-Quintero, & C. Roiesnel (2005). The Infrared Behaviour of the Pure Yang-Mills Green Functions arxiv arXiv: hep-ph/0507104v4

Aguilar, A., Binosi, D., & Papavassiliou, J. (2008). Gluon and ghost propagators in the Landau gauge: Deriving lattice results from Schwinger-Dyson equations Physical Review D, 78 (2) DOI: 10.1103/PhysRevD.78.025010

Cornwall, J. (1982). Dynamical mass generation in continuum quantum chromodynamics Physical Review D, 26 (6), 1453-1478 DOI: 10.1103/PhysRevD.26.1453

Attilio Cucchieri, & Tereza Mendes (2007). What’s up with IR gluon and ghost propagators in Landau gauge? A puzzling answer from huge lattices PoS LAT2007:297, 2007 arXiv: 0710.0412v1

I. L. Bogolubsky, E. -M. Ilgenfritz, M. Müller-Preussker, & A. Sternbeck (2007). The Landau gauge gluon and ghost propagators in 4D SU(3) gluodynamics in large lattice volumes PoS LAT2007:290, 2007 arXiv: 0710.1968v2

O. Oliveira, P. J. Silva, E. -M. Ilgenfritz, & A. Sternbeck (2007). The gluon propagator from large asymmetric lattices PoS LAT2007:323, 2007 arXiv: 0710.1424v1


Quote of the day

27/01/2011

“Freedom begins with an act of defiance!”

Defiance (2008)


The Saga of Landau-Gauge Propagators

26/01/2011

Today again arXiv holds some interesting papers, but the one that most caught my attention is this one by Attilio Cucchieri and Tereza Mendes. The title alone, “The Saga of Landau-Gauge Propagators: Gathering New Ammo”, is a program, and I would like to emphasize what the new ammo is:

We recently installed at IFSC–USP a new machine with 18 CPUs Intel quadcore Xeon 2.40GHz (with InfiniBand network and a total of 216 GB of memory) and 8 NVIDIA Tesla S1070 boards (500 Series), each with 960 cores and 16 GB of memory. The peak performance of the 8 Tesla boards is estimated in about 2.8 Tflops in double precision and 33 Tflops in single precision.

This makes my PC look like a small toy for children! As my readers know, Attilio and Tereza have given fundamental contributions to our current understanding of the propagators of Yang-Mills theory through lattice computations, and personally I think these works are already history. They rightly declare in this paper that

…and the existence of a gluon mass is now firmly established.

and show how all the attempts made by supporters of the scaling solution to save it are doomed. In this paper they perform a 2d computation on a lattice of 2560^2 points, showing that in this case one indeed observes the scaling solution. I have already explained here why, from my approach, this is bad news for the supporters of the scaling solution. Indeed, this is further confirmation of the fact that the scaling solution is just an artifact of wrong approximations to the hierarchy of Dyson-Schwinger equations.

The reason why Attilio and Tereza are forced to use such wartime jargon is that the people supporting the scaling solution have not surrendered yet and keep looking for further justifications against ever stronger evidence that this solution is not seen in lattice computations in three and four dimensions. But they are just grasping at straws.

My results completely confirm the findings of Attilio and Tereza and of the other people in this line of research who were able to uncover the so-called “decoupling solution” or, more correctly, the solution displaying a massive gluon. Besides, there is now general convergence on what the scenario looks like. I hope this new ammo from Attilio and Tereza will be useful to move on from the present stalemate and to learn more and more about the behavior of Yang-Mills theory.

Attilio Cucchieri, & Tereza Mendes (2011). The Saga of Landau-Gauge Propagators: Gathering New Ammo arxiv arXiv: 1101.4779v1


The Tevatron affair and the “fat” gluon

25/01/2011

The Tevatron is again at the forefront of the blogosphere, mostly thanks to Jester and Lubos. The top quark seems to be the main suspect to put an end to the dominance of the Standard Model in particle physics. Indeed, years and years of confirmations cannot last forever, and somewhere some odd behavior must appear. But this is again an effect at 3.4 sigma, so it could all turn out to be a fluke and the Standard Model would escape its end once again. But in the comment area of the post on Lubos' blog a person pointed out my proposal for a “fat” gluon. “Fat” here just stands for massive, and now I will explain this idea and its possible problems.

The starting point is the spectrum of Yang-Mills theory that I have obtained recently (see here and here). I have shown that, at very low energies, the gluon field has a propagator proportional to

G(p)=\sum_{n=0}^\infty(2n+1)\frac{\pi^2}{K^2(i)}\frac{(-1)^{n+1}e^{-(n+\frac{1}{2})\pi}}{1+e^{-(2n+1)\pi}}\frac{1}{p^2-m_n^2+i\epsilon}

with the spectrum given by

m_n=\left(n+\frac{1}{2}\right)\frac{\pi}{K(i)}\sqrt{\sigma}

where \sigma is the string tension, about (440\ MeV)^2. If we go beyond the leading order of such a strong-coupling expansion, one finds that the masses run with momenta. This has been confirmed on the lattice quite recently by Orlando Oliveira and Pedro Bicudo (see here). The interesting point about such a spectrum is that it is not bounded from above and, in principle, one could take n large enough to reach TeV energies. These glueballs are very fat indeed and could explain CDF's results, should these be confirmed by them, by their colleagues at D0 and at the LHC.
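
For concreteness, here is a minimal numerical sketch of this spectrum, assuming scipy is available; the weights are read off from the residues of the propagator above, dropping the alternating sign.

```python
import numpy as np
from scipy.special import ellipk

K_i = ellipk(-1.0)      # complete elliptic integral K(i): parameter m = k^2 = -1
sqrt_sigma = 0.44       # square root of the string tension in GeV

for n in range(4):
    m_n = (n + 0.5) * np.pi / K_i * sqrt_sigma        # glueball mass in GeV
    # residue weight of the n-th pole, without the (-1)^(n+1) sign
    B_n = ((2 * n + 1) * np.pi**2 / K_i**2
           * np.exp(-(n + 0.5) * np.pi) / (1 + np.exp(-(2 * n + 1) * np.pi)))
    print(f"n = {n}:  m_n = {m_n:5.3f} GeV   weight = {B_n:.3e}")
```

This gives m_0 of about 0.53 GeV, with the higher levels equally spaced by about 1.05 GeV, while the weights fall by roughly an order of magnitude per level: the suppression problem discussed below.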

It should be emphasized that these excitations of the glue field have spin zero and so will produce t-tbar pairs in a singlet state, possibly explaining the charge asymmetry through the production rate of such very massive glueballs.

A problem can be seen immediately from the form of the propagator: each contribution to the sum becomes exponentially smaller as n increases. This has a physical meaning, as the same factors appear in the decay constants of these highly massive gluons (see here). Decay constants are fundamental in the computation of cross sections, and if they are very near zero, so could be the corresponding cross sections. But Oliveira and Bicudo also showed that these terms in the propagator depend on the momenta too, evading the problem at higher energies. Besides, I am working from the low-energy part of the theory and assuming that such a spectrum will not change too much at the high energies where asymptotic freedom sets in and gluons seem to behave like massless particles. But we know from the classical theory that a small self-interaction in the equations is enough to generate masses for the field, and massless gluons are an artifact of the very high energies we work at. For very massive excitations this cannot possibly apply. The message I would like to convey with this analysis is that if we do not know the right low-energy behavior of QCD, we could miss important physics at high energies too. We cannot live forever assuming we can forget about the behavior of Yang-Mills theory in the infrared, especially if the mass spectrum is not bounded from above.

Finally, my humble personal conviction, also because I like the idea behind the Randall-Sundrum scenario, is that KK gluons are a more acceptable explanation, should these CDF results prove not to be flukes. The main reason to believe this is that we would obtain, for the first time in the history of mankind, a proof of the existence of other dimensions, and it would be an epochal moment indeed. And all this leaving aside what it would imply for me to be right…

Frasca, M. (2008). Infrared gluon and ghost propagators Physics Letters B, 670 (1), 73-77 DOI: 10.1016/j.physletb.2008.10.022

Frasca, M. (2009). Mapping a Massless Scalar Field Theory on a Yang–Mills Theory: Classical Case Modern Physics Letters A, 24 (30) DOI: 10.1142/S021773230903165X

P. Bicudo, & O. Oliveira (2010). Gluon Mass in Landau Gauge QCD arxiv arXiv: 1010.1975v1

Frasca, M. (2010). Glueball spectrum and hadronic processes in low-energy QCD Nuclear Physics B – Proceedings Supplements, 207-208, 196-199 DOI: 10.1016/j.nuclphysbps.2010.10.051


Gian Giudice and Lisa Randall in Rome

24/01/2011

As usual, this year too there has been the Festival delle Scienze (Festival of the Sciences) in Rome. It lasted all last week and ended this Sunday. This is the chance to hear from leading scientists about the status of forefront research. This year's theme was “The End of the World – Instructions for Use”. Two leading theoretical physicists were present at different events: Lisa Randall and Gian Francesco Giudice. I did not have the chance to listen to Lisa Randall, but some of what she said came out in the Italian newspapers. Lisa declared that KK particles are spies of other dimensions and that these are within the reach of the LHC. I think readers of the blogosphere already know what we are talking about. Indeed, KK stands for Kaluza-Klein, and these particles generally arise as an effect of the compactification of the dimensions beyond the four we experience every day. Lisa has written a paper, in collaboration with Ben Lillie and Lian-Tao Wang, providing an expected mass for a KK particle arising as an excitation of gluons at the LHC. Some hints in this direction appeared with the measurement of the charge asymmetry at the Tevatron. I would like to recall that the Randall-Sundrum scenario for explaining the hierarchy problem between interactions is one of the most successful devised so far, due to the real cleverness of the idea. I regret having missed the opportunity to listen to Lisa, as she was present on Friday evening and I had very little time.

Of course, on Saturday I had much more time to spend, so I had the opportunity to hear Gian Francesco Giudice of the CERN Theoretical Division. His talk was scheduled for Saturday evening at 19:00. The talk was addressed to a non-specialist public, so it was also a good opportunity to take my thirteen-year-old boy to listen. The title was “Black holes, accelerators and the end of the World”. I think you have already heard, on Tommaso Dorigo's blog, of the fine book Gian Giudice wrote recently. The talk was in line with the content of the book, trying to make ordinary people aware of the endeavors we physicists are pursuing with such an enormous enterprise. What makes me hope for the better was to see a really crowded room: saturation was promptly reached and the talk started ten minutes ahead of schedule.

Gian started the talk by discussing, with a lot of irony, the question of the LHC and the end of the World. He cited Nostradamus, the Apocalypse of St. John, and the date when construction of the LHC started, which sums up to a worrisome 666, the number of the devil. But he pointed out how a fine report to which he contributed shows that no black hole could possibly form and swallow the Earth and its neighborhood. The idea is that cosmic rays have already produced even larger energies than the LHC, and the corresponding collisions, without ever producing such an effect. In this way, the probability of an unknown event can be evaluated and the event itself ruled out.

He then showed the extraordinary numbers of the LHC, which prompted a former NASA engineer who participated in the Apollo project to say that the latter was just child's play compared with the machine being assembled.

Gian clarified that the mass arising from the Higgs field is not the same as the one seen at a macroscopic level. Indeed, the latter is due to other reasons, explained again and again in this blog, and this gives another strong motivation to understand the behavior of Yang-Mills theory at low energies. To explain the Higgs field, Gian used the example of a fish moving in water. The fish cannot tell there is a medium, but if excitations like waves are perceived, these are evidence in that sense. So, in the same way, we need the LHC to produce such excitations of the Higgs field and prove its existence. A small boy, claiming to be a physics amateur well aware of quantum mechanics, asked if such a “Higgs fluid” could slow down particles, as happens in normal fluids. Of course, we are talking of different things, as relative motion is perceived in one case but not in the other. The Higgs vacuum is absolutely indifferent to motion, though not in the way it couples to different particles.

A question that naturally arose was whether the fact that we have such a fixed space-time stage does not imply a resurrection of Newton's absolute space. Gian explained, with the example of general relativity, that this idea is well dead and buried.

An important point Gian made was to note how a simple field can provide the seeds for fluctuations in an otherwise homogeneous space-time, producing the galaxies and the large-scale structures we observe in the Universe today. So, the LHC is an essential starting point to understand our Universe and to answer fundamental questions by observing the particles that are the excitations of this kind of field. Indeed, he said that for several years now fields like cosmology, astrophysics and particle physics have been inextricably entangled. He also pointed out that there are recurring ages in physics when some fields seem to produce more results than others, but this is just because research proceeds in bursts rather than continuously.

Gian presented the contents with beautiful slides and animations, always keeping the attention of the public alive. This was confirmed by the large number of questions people asked. What I found interesting were the numbers Gian gave for the “brains at work” on the LHC at CERN. He said that 4500 experimental physicists are involved, against a mere 80 theoretical physicists! But the point that appeared most exciting to me was his declaration that the Higgs sector of the Standard Model appears somehow misplaced in an otherwise very beautiful theory, and we theorists all suspect that here is hidden the new physics that the LHC will uncover for sure. Supersymmetry was in the air more and more, sometimes just whispered, but it was clear, at least to me, that this is the next actor due to appear on the scene. I strongly agree with this view, from my humble side. The fear is that the only finding of the LHC will be the Higgs boson and nothing more. This would decree the end of particle physics as conceived until now. In any case, due to the long lead times needed, feasibility studies for the next generation of accelerators are already being done today. Gian pointed out that the guarantee that the LHC will uncover something for sure lies inside the Standard Model itself, which seems to fail exactly at the energy scale where the LHC works. A fine description of the Higgs particle was also given, and this prompted several questions from the public. Indeed, it is easy to think that we are back to the ether again, but this is easily seen not to be the case, as the Higgs vacuum is invariant under Lorentz transformations. Some people in the audience seemed really well informed about the experiments at CERN, and a question arrived about heavy-ion collisions. Gian was very able in explaining the aims and the reasons why humankind should keep pursuing research like this.

The journalist Claudia Di Giorgio, of the editorial office of the Italian version of Scientific American (“Le Scienze”), was the host. She asked some recurring questions whose answers were surely helpful for the public. A nice moment was when Gian clarified the question of the name “God particle”, given to the Higgs boson by Leon Lederman, which Claudia used frequently while asking for a reaction. Indeed, Gian explained that he put the question to Lederman, who claimed that the real title of his book was “The God damn particle”, but the editor of the book did not like it and removed the word “damn”.

My son was very enthusiastic about Gian's presentation and, at the end of the talk, I took him to greet Gian. It was also my chance to shake his hand and to mention Tommaso Dorigo to him… Gian Giudice is a great example of what it means to follow the right track, and surely he was one of the right people for my son to meet.

Lillie, B., Randall, L., & Wang, L. (2007). The Bulk RS KK-gluon at the LHC Journal of High Energy Physics, 2007 (09), 74-74 DOI: 10.1088/1126-6708/2007/09/074


CUDA: Lattice QCD on a Personal Computer

21/01/2011

At the conference “The many faces of QCD” (see here, here and here) I had the opportunity to talk with people doing lattice computations at large computer facilities. They told me that this kind of activity implies the use of large computers, user queues (as these resources are generally shared) and months of computation before seeing the results. Today the situation is changing for the better, thanks to an important technological shift. Indeed, it is well known that graphics cards are built around graphics processing units (GPUs) made of many computational cores that work in parallel. Each core carries out very simple computational tasks but, thanks to the parallel architecture, very complex operations can be reduced to a set of such small tasks that the architecture executes in an exceptionally short time. This is the reason why, on a PC equipped with such an architecture, very complex video output can be obtained with exceptionally good performance.

People at Nvidia had the idea of using these cores to do plain floating-point operations, and so to use them for scientific computation. This is how CUDA (Compute Unified Device Architecture) was born. So the first Tesla cards, without graphics output but with GPUs, were produced, and the development toolkit was made freely available. Nvidia made parallel computation available to the masses. Just by mounting a graphics card with the CUDA architecture, anybody can have a desktop computer with teraflops performance!
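
To give a feeling of this programming model, here is a minimal sketch of a data-parallel kernel. I write it in Python with numba.cuda for brevity; this tool did not exist at the time and is only a stand-in for the CUDA C code discussed in this post. The idea is the same: every array element is updated by its own GPU thread.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale(arr, factor):
    i = cuda.grid(1)          # global index of this GPU thread
    if i < arr.size:          # guard against threads beyond the array end
        arr[i] *= factor

x = np.ones(1 << 20, dtype=np.float32)  # about a million numbers
d_x = cuda.to_device(x)                 # copy host -> device
threads_per_block = 256
blocks = (x.size + threads_per_block - 1) // threads_per_block
scale[blocks, threads_per_block](d_x, np.float32(0.5))  # parallel launch
print(d_x.copy_to_host()[:4])           # -> [0.5 0.5 0.5 0.5]
```

Lattice gauge codes follow the same pattern, with threads mapped to lattice sites or links.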

As soon as I became aware of the existence of CUDA, I decided to jump on this bandwagon, opening up the opportunity to do lattice QCD at home. So I upgraded my home PC with a couple of 9800 GX2 cards (2 GPUs each, with 512 MB of DDR3 RAM per GPU) having CUDA compute capability 1.1. This means these cards can do single-precision computations at about 1 Tflops each, so my PC can express a performance of 2 Tflops; but I have no double precision. I also changed my motherboard to an Nvidia 790i Ultra, which supports 3-way SLI, and upgraded the power supply to 1 kW (Cooler Master Silent Gold). I added 4 GB of DDR3 RAM and kept my CPU, an Intel Core 2 Duo E8500 running at 3.16 GHz per core. The interesting point about this configuration is that I bought the three Nvidia cards on eBay as used material at a very low cost. So I was in business for very few bucks!

Before this upgrade my machine had Windows XP Home 32-bit installed. This operating system can only address 3 GB of RAM, and 1 GB of that was used by the two graphics cards. This proved a serious drawback for the whole endeavor. In a moment I will explain what I did to overcome it.

The next important step was to obtain CUDA code for QCD. The point is that CUDA technology is spreading rapidly in the academic environment, and a lot of code is available. Initially I thought of the MILC code. CUDA code is available for it, and the people of the MILC Collaboration were very helpful, but this code is built for Linux and I was not able to get that operating system up and running on my platform. Besides, I would have needed a lot of time to make all that code work for me, and I had to give up in spite of myself. Meanwhile, a couple of papers by Pedro Bicudo and Nuno Cardoso appeared (see here and here). Pedro was a nice companion at the conference “The many faces of QCD”, where I had the opportunity to meet him. He was not aware that I had asked his student Nuno for the source code. Nuno was very kind to give me the link, and I downloaded the code. This was a sound starting point for the work on my platform. The code was written for CUDA from the start and is well optimized. Pedro told me that the optimization phase cost them a lot of work, while putting down the initial code was relatively easy. They worked on a Linux platform, so he was surprised when I told him I intended to port their code to Microsoft Windows. But this is my home PC, all my family uses it, and my attempt to install Ubuntu 64-bit was a failure that cost me the use of the Windows installation disk to remove the dual boot.

Then, during my Christmas holidays, when I had a lot of time, I started to port Pedro and Nuno's code to Windows XP Home. It was very easy: their code, entirely written in C++, needed just the insertion of a define. So, setting the path in a DOS box and using nvcc with Visual Studio 2008 (the only compiler Nvidia supports under Windows so far), I was able to get running code, but with a glitch: it would only run on my CPU. The reason was that I did not have enough memory under Windows XP 32-bit to complete the compilation of the code for the graphics cards. Indeed, the Nvidia compiler ptxas stopped with an error, and I was not able to get the code running on the graphics cards of my computer. After this partly successful step, I wrote to Pedro and Nuno informing them of my success in porting the code, at least running on the CPU, under Windows. The code was written so well that very little was needed to port it! Pedro told me that something had to be changed in my machine: mainly, more powerful graphics cards were needed. I am aware of this shortcoming, but my budget was not so good at that time. This is surely my next upgrade (a couple of GTX 580s with the Fermi architecture, which supports double precision).

Having experienced memory problems, the next step was to move to a 64-bit operating system to use all of my 4 GB of RAM. So, on another disk of my computer, I installed Windows 7 Ultimate 64-bit. In this case too, the porting of Pedro and Nuno's code was very smooth. In a DOS box I got their code up and running again, but this time on my graphics cards and not just on the CPU. As I find the time I will compute some observables of SU(2) lattice gauge theory, probing the limits of my machine. But this result is from yesterday, and I need more time to do some physics.

Pedro informed me that they are working on SU(3), and this is more difficult. Meanwhile, I have to thank him and his student Nuno very much for the very good job they did and for letting me have lattice QCD successfully working on my home computer. I hope this will be a good starting point for other people doing this kind of research.

Update: Pedro authorized me to put the link to download the code here. Here it is. Thank you again, Pedro!

Nuno Cardoso, & Pedro Bicudo (2010). Lattice SU(2) on GPU’s arxiv arXiv: 1010.1486v1

Nuno Cardoso, & Pedro Bicudo (2010). SU(2) Lattice Gauge Theory Simulations on Fermi GPUs J.Comput.Phys.230:3998-4010,2011 arXiv: 1010.4834v2


Physics of the Riemann Hypothesis

18/01/2011

In this blog I frequently discuss one of the Clay Institute's Millennium Prize problems: the mass gap and the existence of a quantum Yang-Mills theory. Sometimes I have also used Perelman's theorem, containing the Poincaré conjecture, to discuss some properties of quantum gravity, together with the Cramér-Rao statistical bound. Today on arXiv I found a beautiful review paper by Daniel Schumayer and David Hutchinson about the Riemann hypothesis, another Millennium problem, and physics (see here). This question has remained unsolved for almost 150 years now. The relevance of understanding this conjecture lies in the possibility of giving a function describing the distribution of the prime numbers.

The formulation of the Riemann hypothesis is embarrassingly simple. The Riemann zeta function is defined, for Re(s) > 1, by the very simple series

\zeta(s)=\sum_{n=1}^\infty\frac{1}{n^s}.

Extended to the whole complex plane by analytic continuation, this function has a set of trivial zeros at the negative even integers and a set of nontrivial zeros. The Riemann hypothesis claims that

All nontrivial zeros of \zeta(s) have the form \rho=\frac{1}{2}+it, with t a real number.
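
As a quick numerical illustration, a sketch assuming the mpmath library, one can check that the first few nontrivial zeros indeed sit on the critical line:

```python
from mpmath import mp, zeta, zetazero

mp.dps = 20                         # working precision in decimal digits
for n in range(1, 4):
    rho = zetazero(n)               # n-th nontrivial zero, e.g. 0.5 + 14.1347...i
    print(n, rho, abs(zeta(rho)))   # |zeta(rho)| vanishes to working precision
```

Of course, checking any finite number of zeros proves nothing: the hypothesis concerns all of them.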

This is the eighth problem in Hilbert's list, which also gave this question the name we use today. Simple as the question may seem, it has baffled mathematicians' efforts to this day. But, as happens with most mathematics, it can be found applied in Nature, and it is tempting to think of reproducing in a lab what appears to be a complicated mathematical problem and reading the answer directly from experiments. Indeed, such a road was definitely opened in 1999, when Michael Berry (the one of the phase) and Jon Keating put forward an important conjecture relating quantum systems and the Riemann hypothesis. You can find this cornerstone paper here. Since then the hunt has been on for other connections amenable to a physical treatment. Schumayer and Hutchinson give an extensive review of them in their paper. This view opens up the possibility of a solution of this fundamental question through physics. Surely, we are witnessing once again an interesting intertwining of these fundamental disciplines of science.

Daniel Schumayer, & David A. W. Hutchinson (2011). Physics of the Riemann Hypothesis arxiv arXiv: 1101.3116v1

Berry, M., & Keating, J. (1999). The Riemann Zeros and Eigenvalue Asymptotics SIAM Review, 41 (2) DOI: 10.1137/S0036144598347497


What is Science?

16/01/2011

Reading this post by Lubos about a very good site (this one), I entered the comment area and found the following declaration by him:

Science is a meritocracy where answers are determined by objective criteria, and for most of the difficult questions, only one or a few people know the right answer and the scientific method exists to isolate this special right answer…

Of course, I subscribe to this view, which is widely shared among people doing research. I would just replace the word “meritocracy” with “dictatorship of truth”. But there is an intermediate age in which the truth takes time to become acclaimed: this is the time of opinions and, before the people who first reached the goal are recognized, there is a struggle for the truth to take hold. I would like to recall here the status of quantum field theory in the sixties, when the bootstrap and similar failures appeared as a paradigm and very few brave people were doing research in the right directions, taking us to the triumph of today. In this kind of dynamics, at a first stage it is very difficult to tell, even for very well trained people, where the right track lies. In physics our luck resides in experiments. This makes things simpler when technology helps us to perform them; otherwise the times needed to decide for the best grow ever longer. So, merit as claimed by Lubos is something that sets in only at the very end of the process.

In my specific field of activity, QCD, we are in a better situation, as a lot of laboratories around the world have facilities to perform the important measurements needed to reach the goal. And the situation is even better than that, as we can use powerful computers to solve the theory. My view as a physicist is that, without a sound comparison of the spectrum of the theory with experiments, nobody can claim to have properly solved the mass gap problem. All my present effort is going in this direction, because there is nothing more exciting than having hit upon the right behavior of Nature (our mother, not the bitch…). I take this chance to recall the effort in this direction of Silvio Sorella who, with the help of other fine colleagues, is showing how his approach indeed fulfills these expectations for the glueball masses (see here). These authors give a correct idea of the right approach to follow for the problem of low-energy QCD.

Finally, I would like to emphasize the relevance of sites like the one pointed out by Lubos. This site has also been posted by Sean Carroll (see here) in his blog. I have pointers to my blog there and on the more successful MathOverflow. Unfortunately, I do not have much time to spend contributing to these sites, but they are very good places to learn about science, and the right kind of it. So, this is also my invitation to my readers to contribute to them actively.

D. Dudal, M. S. Guimaraes, & S. P. Sorella (2010). Glueball masses from an infrared moment problem and nonperturbative Landau gauge arxiv arXiv: 1010.3638v3


CERN and Fermilab have blogs!

12/01/2011

A few lines just to let you know that the most important high-energy physics laboratories around the world finally have their own blogs. I have added them to my blogroll and, for your convenience, I put the links here too:

CERN

Fermilab

So, stay tuned and enjoy!


A more prosaic explanation

09/01/2011

The aftermath of some blogosphere activity about the possible CDF finding at the Tevatron left no satisfactory explanation beyond a massive octet of gluons, already known in the literature and used by people at Fermilab. In the end, we need some exceedingly massive gluons to explain this asymmetry. If you look around on the net, you will find other explanations that go beyond the ordinary known physics of QCD. Of course, in speaking about the known physics of QCD, we leave aside what should by now be known about Yang-Mills theory and the mass gap. As far as one can tell, no generally accepted truth is known about it, otherwise all the trumpets around the world would already have sounded.

But let us make some educated guesses using our recent papers (here and here) and a theorem proved by Alexander Dynin (see here). These papers show that the spectrum of a Yang-Mills theory is discrete and that the particles have an internal spectrum bounded from below (the mass gap) but not from above. I can add to this description that there exists a set of spin-0 excitations making up the ground state of the theory and ranging up to infinite energy. So, if we suppose that the annihilation of a pair of quarks can, with a small probability, generate one of these particles with enough energy to decay into a t-tbar pair in a singlet state, we can observe an asymmetry arising from QCD alone.
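
As a back-of-the-envelope sketch, using the spectrum m_n=(n+1/2)\frac{\pi}{K(i)}\sqrt{\sigma} recalled in a previous post and taking twice the top mass (about 2 x 173 GeV, my illustrative input) as the threshold, one can estimate which excitation number n would be needed:

```python
import numpy as np
from scipy.special import ellipk

K_i = ellipk(-1.0)             # complete elliptic integral K(i), about 1.311
spacing = np.pi / K_i * 0.44   # level spacing in GeV, about 1.05 GeV
threshold = 2 * 173.0          # GeV, roughly twice the top quark mass

n_min = int(np.ceil(threshold / spacing - 0.5))   # smallest n above threshold
print(n_min, (n_min + 0.5) * spacing)             # about n = 328, m_n ~ 346 GeV
```

The exponentially small weight of such a high level in the propagator is precisely why the probability of such an event is small.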

I understand that this is a really prosaic explanation, but it is also true that we cannot live happily while ignoring what would follow from a full understanding of Yang-Mills theory, something we currently care too little about. So, before entering the framework of very exotic explanations, we should first be sure to have fully understood all the physics of the process and not to have forgotten anything.

Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6

Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4

Alexander Dynin (2009). Energy-mass spectrum of Yang-Mills bosons is infinite and discrete arxiv arXiv: 0903.4727v2