Nailing down the Yang-Mills problem

22/02/2014

Millennium problems represent a major challenge for physicists and mathematicians. So far, the only one that has been solved is the Poincaré conjecture (now a theorem), by Grisha Perelman. For people working on strong interactions and quantum chromodynamics, the most interesting of these problems is the Yang-Mills mass gap and existence problem. The solution of this problem would imply a lot of consequences in physics, and one of the most important of these is a deep understanding of the confinement of quarks inside hadrons. So far there is no solution in sight, but things do not stand exactly this way. A significant number of researchers have performed lattice computations to obtain the propagators of the theory in the full range of energy, from the infrared to the ultraviolet, providing us with a deep understanding of what is going on here (see the Yang-Mills article on Wikipedia). The propagators to be considered are those of the gluon and of the ghost. There has been a significant effort from theoretical physicists in the last twenty years to answer this question. This is not so widely known in the community, but it should be, because the work of these people could be the starting point for a great innovation in physics. In these days, a paper by Axel Maas on arXiv gives a great account of the situation of these lattice computations (see here). Axel has been an important contributor to this research area, and the current understanding of the behavior of Yang-Mills theory in two dimensions owes a lot to him. In this paper, Axel presents his computations on large volumes for Yang-Mills theory on the lattice in 2, 3 and 4 dimensions, in the SU(2) case. These computations are generally performed in the Landau gauge (propagators are gauge-dependent quantities), this being the most favorable for them. In four dimensions the lattice is (6\,{\rm fm})^4, not the largest but surely enough for the aims of the paper. Of course, no surprise comes out with respect to what people have found starting from 2007. The scenario is well settled and is this:

  1. The gluon propagator in 3 and 4 dimensions does not go to zero with momenta but stays finite. In 3 dimensions it has a maximum in the infrared, reaching its finite value at 0 from below. No such maximum is seen in 4 dimensions. In 2 dimensions the gluon propagator goes to zero with momenta.
  2. The ghost propagator behaves like that of a free massless particle as the momenta are lowered. This is the dominant behavior in 3 and 4 dimensions. In 2 dimensions the ghost propagator is enhanced, going to infinity faster than in 3 and 4 dimensions.
  3. The running coupling in 3 and 4 dimensions goes to zero as the momenta go to zero, reaches a maximum at intermediate energies and goes asymptotically to zero as momenta go to infinity (asymptotic freedom); see the schematic formulas right below this list.

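In formulas, the decoupling scenario just listed reads, schematically (D is the gluon propagator, G the ghost propagator, and the running coupling is the one defined from the ghost-gluon vertex; this is a sketch of the limits, not of the actual lattice fits):

D(p^2)\rightarrow {\rm const}\neq 0,\qquad G(p^2)\sim \frac{1}{p^2},\qquad \alpha(p^2)\propto p^6\,D(p^2)\,G^2(p^2)\rightarrow 0

as p^2\rightarrow 0 in 3 and 4 dimensions, while in 2 dimensions D(p^2)\rightarrow 0 instead.
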
Here follows the figure for the gluon propagator:

[Figure: gluon propagators]

and for the running coupling:

[Figure: running coupling]

There is some concern among people about the running coupling. There is a recurring prejudice in Yang-Mills theory, without any support, either theoretical or experimental, that the theory should be non-trivial in the infrared. So, the running coupling should not go to zero at lowering momenta but should instead reach a finite non-zero value. Of course, a pure Yang-Mills theory does not exist in nature, and it is very difficult to get an experimental understanding here. But the point is that, in 2 and 3 dimensions, the gluon propagator is very similar to a free one, the ghost propagator is certainly a free one, and then, using the duck test (if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck), the theory is really trivial also in the infrared limit. Currently, there are two people in the world who have recognized a duck here: Axel Weber (see here and here), using the renormalization group, and me (see here, here and here). Now, claiming to see a duck where all the others pretend to see a dinosaur does not make you the most popular guy in the district. But so it goes.

These lattice computations are an important cornerstone in the search for the behavior of Yang-Mills theory. Whoever aims to present to the world his pet theory as a solution for the Millennium Prize must comply with these results, showing that his theory is able to reproduce them. Otherwise, what he has is just rubbish.

What also comes into sight is the proof of existence of the theory. Having two trivial fixed points, the theory is Gaussian in these limits, exactly as the scalar field theory is. A Gaussian theory is the simplest example we know of a quantum field theory that is proven to exist. Could one recover the missing part between the two trivial fixed points, as also happens for the scalar theory? In the end, it is possible that Yang-Mills theory is just the vectorial counterpart of the well-known scalar field, the workhorse of all the scholars in quantum field theory.

Axel Maas (2014). Some more details of minimal-Landau-gauge Yang-Mills propagators arXiv arXiv: 1402.5050v1

Axel Weber (2012). Epsilon expansion for infrared Yang-Mills theory in Landau gauge Phys. Rev. D 85, 125005 arXiv: 1112.1157v2

Axel Weber (2012). The infrared fixed point of Landau gauge Yang-Mills theory arXiv arXiv: 1211.1473v1

Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6

Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4

Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3


Back to CUDA

11/02/2013

It is about two years since I wrote my last post about the CUDA technology by NVIDIA (see here). At that time I had added two new graphics cards to my PC, being on the verge of reaching 3 Tflops in single precision for lattice computations. Indeed, there was an unlucky turn of events: those cards went back to the seller, as they were not working properly, and I was completely refunded. Meantime, the motherboard also failed and the hardware was largely changed, and so I have been for a long time without the opportunity to work with CUDA and perform the intensive computations I had planned. As is well known, one can find a lot of software exploiting this excellent technology provided by NVIDIA and, during these years, it has been spreading widely, both in academia and in industry, making the life of researchers a lot easier. Personally, I am using it also at my workplace, and it is really exciting to have such computational capability at hand at a really affordable price.

Now, I am newly able to equip my personal computer at home with a powerful Tesla card. Some of these cards are currently being dismissed as they reach their end of life, due to upgrades to more modern ones, and so can be found at a really small price on bidding sites like eBay. So, I bought a Tesla M1060 for about 200 euros. As the name says, this card has not been conceived for a personal computer but rather for servers produced by some OEMs. This can also be realized when one looks at the card and sees a passive cooler: the card has the proper physical dimensions to enter a server, while the active dissipation through fans is expected to be provided by the server itself. Indeed, I added an 80 mm Enermax fan to my chassis (also an Enermax Enlobal) to make sure that the motherboard temperature does not reach too high values. My motherboard is an ASUS P8P67 Deluxe. This is a very good board, as usual for ASUS, providing three PCIe 2.0 slots and, in principle, one can add up to three video cards together. But if you have a couple of NVIDIA cards in SLI configuration, the slots work at x8; a single video card will work at x16. Of course, if you plan to work with these configurations, you will need a proper PSU. I have a Cooler Master Silent Pro Gold 1000 W and I am well beyond my needs. This is what remains from my preceding configuration and it is performing really well. I have also changed my CPU, which is now an Intel i3-2125 with two cores at 3.30 GHz and 3 MB of cache. Finally, I added 16 GB of Corsair Vengeance DDR3 RAM.

The installation of the card went really smoothly and I got it up and running in a few minutes on Windows 8 Pro 64-bit, after the installation of the proper drivers. I checked it with Matlab 2011b and with the PGI compilers, with the CUDA Toolkit 5.0 properly installed. All worked fine. I would like to spend a few words on the PGI compilers, which are produced by The Portland Group. I have got a trial license at home and tested them, while at my workplace we have a fully working license. These compilers make the production of accelerated CUDA code absolutely easy: all you need is to insert some preprocessing directives into your C or Fortran code. I have executed some performance tests and the gain is really impressive, without ever writing a single line of CUDA code. These compilers can also be easily used from Matlab to yield mex-files or S-functions, even if they are not yet supported by MathWorks (they should be!), and this too I have verified without much difficulty, both for C and for Fortran.
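
To give an idea of how little is needed, here is a minimal sketch in C of directive-based acceleration, in the OpenACC style supported by the PGI compilers (the saxpy routine and the data clauses are just illustrative, not taken from my actual code):

#include <stdlib.h>

/* y <- a*x + y: the directive asks the compiler to generate GPU code
   for the loop, copying the arrays in and out as specified */
void saxpy(int n, float a, const float *x, float *y)
{
#pragma acc kernels copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

Compiling with something like pgcc -acc is enough to have the loop offloaded to the GPU; the Fortran story is the same, with !$acc comment directives.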

Finally, I would like to give you an idea of the way I will use the CUDA technology for my aims. What I am doing right now is porting some good code for the scalar field, and I would like to use it in the limit of large self-interaction to derive the spectrum of the theory. It is well known that if you take the limit of the self-interaction going to infinity you recover the Ising model (the standard form of the lattice action showing this is written below). But I would like to see what happens at intermediate but large values, as I was not able to get any hint from the literature on this, notwithstanding that this is the workhorse of all the people doing lattice computations. What seems to matter today is to show triviality in four dimensions, a well-acquired piece of evidence. As soon as the accelerated code runs properly, I plan to share it here, as it is very easy to get good code to do lattice QCD but it is very difficult to get good code for scalar field theory. Stay tuned!
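
For reference, the Ising limit I mentioned can be read off the standard hopping-parameter form of the lattice action (normalizations vary among authors; this is just the textbook one):

S[\phi]=\sum_x\left[-2\kappa\sum_{\mu}\phi_x\phi_{x+\hat\mu}+\phi_x^2+\lambda\left(\phi_x^2-1\right)^2\right]

so that, for \lambda\rightarrow\infty, the potential freezes \phi_x=\pm 1 and one is left with the Ising model with coupling 2\kappa. The intermediate-\lambda region is precisely the one I aim to explore.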


Large-N gauge theories on the lattice

22/10/2012


Today I have found on arXiv a very nice review about large-N gauge theories on the lattice (see here). The authors, Biagio Lucini and Marco Panero, are well-known experts on lattice gauge theories, this being their main area of investigation. This review, to appear in Physics Reports, gives a nice introduction to this approach to managing non-perturbative regimes in gauge theories. This is essential to understand the behavior of QCD, both at zero and at finite temperature, and to catch the behavior of the commonly observed bound states. Besides this, the question of confinement is still an open problem. Indeed, a theoretical understanding is lacking, and lattice computations, especially in the greatly simplifying limit of a large number of colors N, as devised in the '70s by 't Hooft, can make the scenario clearer, favoring a better analysis.

What is seen is that confinement is fully preserved, as one gets an exactly linearly increasing potential in the limit of N going to infinity, and also the higher-order corrections are obtained, diminishing as N increases. They are able to estimate the string tension, obtaining (Fig. 7 in their paper):

\frac{\Lambda_{\overline{\rm MS}}}{\sqrt{\sigma}}\approx a+\frac{b}{N^2}.

This is a reference result for whoever aims to get a solution to the mass gap problem for Yang-Mills theory, as the string tension must be an output of such a solution. The interquark potential has the form

m(L)=\sigma L-\frac{\pi}{3L}+\ldots

This ansatz agrees with numerical data down to distances of 3/\sqrt{\sigma}! The 1/L correction here is the universal Lüscher term for a closed string in four dimensions. Other fundamental results these authors cite for the four-dimensional case are the glueball spectrum:

\frac{m_{0^{++}}}{\sqrt{\sigma}}=3.28(8)+\frac{2.1(1.1)}{N^2},
\frac{m_{0^{++*}}}{\sqrt{\sigma}}=5.93(17)-\frac{2.7(2.0)}{N^2},
\frac{m_{2^{++}}}{\sqrt{\sigma}}=4.78(14)+\frac{0.3(1.7)}{N^2}.

Again, these are reference values for the mass gap problem in Yang-Mills theory. As my readers know, I was able to get them out of my computations (see here). More recently, I have also obtained the higher-order corrections and the linearly rising potential (see here), with the string tension in a closed form very similar to that of the three-dimensional case. Finally, they give the critical temperature of the deconfinement transition. The result is

\frac{T_c}{\sqrt{\sigma}}=0.5949(17)+\frac{0.458(18)}{N^2}.

This result is rather interesting because the constant is about \sqrt{3/\pi^2}. Such a value was obtained initially by Norberto Scoccola and Daniel Gómez Dumm (see here) and confirmed by me (see here). That result pertains to a finite-temperature theory, and a mass gap analysis of Yang-Mills theory should recover it, but here the question is somewhat more complex. I would add to these lattice results also the studies of the propagators of a pure Yang-Mills theory in the Landau gauge, both at zero and at finite temperature. The scenario has reached a really significant level of maturity, and it is time for some of the theoretical proposals put forward so far to be compared with it. I have just cited some of these works, but the literature is becoming increasingly vast, with other really meaningful techniques besides the cited one.

As usual, I conclude this post on such a nice paper with the hope that the time has maybe come to increase the level of awareness of the community about the theoretical achievements on the question of the mass gap in quantum field theories.

Biagio Lucini, & Marco Panero (2012). SU(N) gauge theories at large N arXiv arXiv: 1210.4997v1

Marco Frasca (2008). Yang-Mills Propagators and QCD Nuclear Physics B (Proc. Suppl.) 186 (2009) 260-263 arXiv: 0807.4299v2

Marco Frasca (2011). Beyond one-gluon exchange in the infrared limit of Yang-Mills theory arXiv arXiv: 1110.2297v4

D. Gomez Dumm, & N. N. Scoccola (2004). Characteristics of the chiral phase transition in nonlocal quark models Phys.Rev. C72 (2005) 014909 arXiv: hep-ph/0410262v2

Marco Frasca (2011). Chiral symmetry in the low-energy limit of QCD at finite temperature Phys. Rev. C 84, 055208 (2011) arXiv: 1105.5274v4


Today in arXiv (2)

03/05/2011


Today I have found in the arXiv daily listing some papers that are worthwhile to talk about. The contribution by Attilio Cucchieri and Tereza Mendes at the Ghent conference "The many faces of QCD" is out (see here). They study the gluon propagator in the Landau gauge at finite temperature on a significantly large lattice. The theory is SU(2) pure Yang-Mills. As you know, the gluon propagator in the Landau gauge at finite temperature is assumed to get two contributions: a longitudinal and a transverse one. This situation is quite different from the zero-temperature case, where such a distinction does not exist. But, of course, such a conclusion can only be drawn if the propagator is not that of massive excitations, and we already know from lattice computations that the massive solutions are the supported ones. In this case we should expect that, at finite temperature, one of the components of the propagator gets suppressed and a massive gluon is seen again. Tereza and Attilio see exactly this behavior. I show you a picture extracted from their paper here:

The effect is markedly seen as the temperature is increased. The transverse propagator gets more and more suppressed, while the longitudinal propagator reaches a plateau, as in the zero-temperature case, but with the position of the plateau depending on the temperature, which makes it increase. Besides, Attilio and Tereza show how the computation of the longitudinal component is really sensitive to the lattice dimensions, and they increase them until the behavior settles to a stable one. In order to perform this computation they used their new CUDA machine (see here). This result is really beautiful, and I can anticipate that it agrees quite well with computations that Marco Ruggieri and I are performing, yet to be published. Besides, they get a massive gluon of the right value, but with a mass decreasing with temperature, as can be deduced from the movement of the plateau of the longitudinal propagator, which indeed is the one of the decoupling solution at zero temperature.
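
For reference, the decomposition mentioned above reads, in Euclidean conventions with the heat bath singling out the time direction (schematically):

D_{\mu\nu}(p)=P^T_{\mu\nu}D_T(p)+P^L_{\mu\nu}D_L(p),\qquad P^T_{ij}=\delta_{ij}-\frac{p_ip_j}{\vec p^{\,2}},\quad P^T_{00}=P^T_{0i}=0,\quad P^L_{\mu\nu}=\delta_{\mu\nu}-\frac{p_\mu p_\nu}{p^2}-P^T_{\mu\nu}

with D_T the transverse (magnetic) and D_L the longitudinal (electric) propagator, the two quantities the authors plot.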

As an aside, I would like to point out to you a couple of works on QCD at finite temperature on the lattice by the Portuguese group headed by Pedro Bicudo, with the participation of Nuno Cardoso and Marco Cardoso. I have already pointed out their fine work on the lattice, which was very helpful for the studies that I am still carrying on (you can find some links on their page). But now they have moved on to the case of finite temperature (here and here). These papers are worthwhile to read.

Finally, I would like to point out a really innovative paper by Arata Yamamoto (see here). This is again a lattice computation performed at finite temperature, with an important modification: the chiral chemical potential. This is an important concept introduced, e.g. here and here, by Kenji Fukushima, Marco Ruggieri and Raoul Gatto. There is a fundamental reason to introduce a chiral chemical potential, and this is the sign problem of lattice QCD at finite baryon density. This problem makes lattice computations meaningless unless some workaround is adopted, and the chiral chemical potential is one of these. Of course, this implies some relevant physical expectations that a lattice computation should confirm (see here). In this vein, this paper by Yamamoto is a really innovative one, facing this kind of computation on the lattice using for the first time a chiral chemical potential. Being a pioneering paper, the choice of too small volumes appears at first as a shortcoming. As we have already discussed above for the gluon propagator in pure Yang-Mills theory, the relevance of having larger volumes to recover the right physics cannot be underestimated. Moreover, the lattice spacing is 0.13 fm, corresponding to a physical energy of 1.5 GeV, which is high enough to miss the infrared region and so the range of validity of a Polyakov-Nambu-Jona-Lasinio model as currently used in the literature. So, while the track has been opened by this paper, it appears demanding to expand the lattice, at least to recover the range of validity of the infrared models and grant in this way a proper comparison with the results in the known literature. Notwithstanding these comments, the methods and the approach used by the author are a fundamental starting point for any future development.
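
For the reader's orientation, the chiral chemical potential enters the quark sector as (schematically, in continuum notation):

\mathcal{L}_{\mu_5}=\mu_5\,\bar\psi\gamma^0\gamma_5\psi=\mu_5\,n_5

coupling to the chiral density n_5=n_R-n_L. The key point is that, contrary to a baryon chemical potential, this term leaves the fermion determinant real (and positive for two degenerate flavors), and this is why it evades the sign problem and is amenable to lattice simulation.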

Attilio Cucchieri, & Tereza Mendes (2011). Electric and magnetic Landau-gauge gluon propagators in finite-temperature SU(2) gauge theory arXiv arXiv: 1105.0176v1

Nuno Cardoso, Marco Cardoso, & Pedro Bicudo (2011). Finite temperature lattice QCD with GPUs arXiv arXiv: 1104.5432v1

Pedro Bicudo, Nuno Cardoso, & Marco Cardoso (2011). The chiral crossover, static-light and light-light meson spectra, and the deconfinement crossover arXiv arXiv: 1105.0063v1

Arata Yamamoto (2011). Chiral magnetic effect in lattice QCD with chiral chemical potential arXiv arXiv: 1105.0385v1

Fukushima, K., Ruggieri, M., & Gatto, R. (2010). Chiral magnetic effect in the Polyakov–Nambu–Jona-Lasinio model Physical Review D, 81 (11) DOI: 10.1103/PhysRevD.81.114031

Fukushima, K., & Ruggieri, M. (2010). Dielectric correction to the chiral magnetic effect Physical Review D, 82 (5) DOI: 10.1103/PhysRevD.82.054001


CUDA: Upgrading to 3 Tflops

29/03/2011


When I was a graduate student I heard a lot about the wonderful performance of the Cray-1 parallel computer and the promise to explore unknown fields of knowledge with this unleashed power. This admirable machine reached a peak of 250 Mflops. Its near descendant, the Cray-2, performed at 1700 Mflops and, for scientists, this was indeed a new era in the help to attack difficult mathematical problems. But when you look at QCD, all these seem just toys for a kindergarten, and one is not even able to perform the simplest computations to extract meaningful physical results. So, physicists started to design very specialized machines in the hope of improving the situation.

Today the situation has changed dramatically. The reason is that the increasing need for computation to perform complex tasks on a video output requires extended parallel-computation capability for very simple mathematical tasks. But these mathematical tasks are all one needs to perform scientific computations. The flagship company in this area is NVIDIA, which produced CUDA for its graphics cards. This means that today one can have outperforming parallel computation on a desktop computer, and we are talking of some Teraflops of capability! All this at a very affordable cost: with a few bucks you can have on your desktop a machine performing a thousand times better than a legendary Cray machine. Now, the counterpart of a Cray-1 is a CUDA cluster breaking the barrier of the Petaflops, something people were dreaming of just a few years ago. This means that you can do complex and meaningful QCD computations in your office, whenever you like, without the need to share CPU time with anybody, pushing your machine to its best. All this with costs that are not a concern anymore.

So, with this opportunity in sight, I jumped on this bandwagon and a few months ago I upgraded my desktop computer at home into a CUDA supercomputer. The first idea was just to buy old material from eBay at very low cost, building on what was already in my machine. In 2008 the top of the GeForce NVIDIA cards was the 9800 GX2. This card comes equipped with a couple of GPUs with 128 cores each, 0.5 GB of RAM per GPU, and support for CUDA architecture 1.1; no double precision available, an option that started to be present with cards having CUDA architecture 1.3, some time later. You can find such a card on eBay for about 100-120 euros. You will also need a proper motherboard. Indeed, again in 2008, NVIDIA produced the nForce 790i Ultra, properly fitted for these aims. This board supports a 3-way SLI configuration and, as my readers know, I installed up to three 9800 GX2 cards on it. I got this board on eBay for a price similar to that of the video cards. Also, before starting this adventure, I already had a 750 W Cooler Master power supply. It took not much time to have this hardware up and running, reaching the considerable computational power of 2 Tflops in single precision, all this with hardware at least 3 years old! For the operating system I chose Windows 7 Ultimate 64-bit, after an initial failure with Linux Ubuntu 64-bit.

There is a wide choice on the web of software to run for QCD. The most widespread is surely the MILC code. This code is written for a multi-processor environment and represents the effort of several people, spanning several years of development. It is well written and rather well documented. From this code a lot of papers on lattice QCD have gone through the most relevant archival journals. Quite recently they started to port this code to CUDA GPUs, following a trend common to all of academia. Of course, for my aims, being a lone user of CUDA and having not much time for development, I faced the not too attractive perspective of trying the porting of this code to GPUs by myself. But, at the same time as I upgraded my machine, Pedro Bicudo and Nuno Cardoso published their paper on arXiv (see here) and promptly made available their code for SU(2) QCD on CUDA GPUs. You can download their up-to-date code here (if you plan to use this code, just let them know, as they are very helpful). So, I ported this code, originally written for Linux, to Windows 7 and got it up and running, obtaining the right output for lattices up to 56^4, working just in single precision as, for this hardware configuration, no double precision was available. The execution time was acceptable: a few seconds on the GPUs, and some more at the start of the program due to CPU-GPU exchanges. So, already at this stage, I am able to be productive at a professional level with lattice computations. Just a little complaint is in order here. On the web it is very easy to find good code to perform lattice QCD, but nothing can be found for the post-processing of the configurations. This code is as important as the former: without the computation of observables, one can do nothing with the configurations, or with whatever else lattice QCD yields on whatever powerful machine. So, I think it would be worthwhile to have both codes available, to get spectra, propagators and so on, starting from a standard configuration file, independently of the program that generated it. Similarly, it appears almost impossible to get lattice code for computations in scalar field theory (thanks a lot to Colin Morningstar for providing me with code for 2+1 dimensions!). This is a workhorse for people learning lattice computations, and it would be helpful, at least for pedagogical reasons, to make it available in the same way QCD code is.
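
Just to fix ideas about what such post-processing looks like, here is a minimal sketch in C of the simplest observable, the average plaquette, for SU(2) links stored as quaternions; the lattice layout, the cold-start initialization and all the names are purely illustrative, not taken from Nuno and Pedro's code:

#include <stdio.h>

#define L   8                    /* lattice side, illustrative */
#define DIM 4
#define VOL (L*L*L*L)

/* SU(2) link U = u[0] + i*(u[1],u[2],u[3]).sigma, a unit quaternion */
static double U[VOL][DIM][4];

/* c = a*b for unit quaternions (SU(2) multiplication) */
static void su2_mul(const double *a, const double *b, double *c)
{
    c[0] = a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3];
    c[1] = a[0]*b[1] + b[0]*a[1] - (a[2]*b[3] - a[3]*b[2]);
    c[2] = a[0]*b[2] + b[0]*a[2] - (a[3]*b[1] - a[1]*b[3]);
    c[3] = a[0]*b[3] + b[0]*a[3] - (a[1]*b[2] - a[2]*b[1]);
}

/* c = a^dagger, i.e. the conjugate quaternion */
static void su2_dag(const double *a, double *c)
{
    c[0] = a[0]; c[1] = -a[1]; c[2] = -a[2]; c[3] = -a[3];
}

/* site index of x shifted by one lattice step in direction mu (periodic) */
static int shift(int s, int mu)
{
    int c[DIM], i, r = 0;
    for (i = 0; i < DIM; i++) { c[i] = s % L; s /= L; }
    c[mu] = (c[mu] + 1) % L;
    for (i = DIM - 1; i >= 0; i--) r = r * L + c[i];
    return r;
}

/* average of (1/2) Re Tr U_mu(x) U_nu(x+mu) U_mu(x+nu)^+ U_nu(x)^+ */
static double avg_plaquette(void)
{
    double sum = 0.0, t1[4], t2[4], d1[4], d2[4], p[4];
    for (int x = 0; x < VOL; x++)
        for (int mu = 0; mu < DIM; mu++)
            for (int nu = mu + 1; nu < DIM; nu++) {
                su2_mul(U[x][mu], U[shift(x, mu)][nu], t1);
                su2_dag(U[shift(x, nu)][mu], d1);
                su2_dag(U[x][nu], d2);
                su2_mul(t1, d1, t2);
                su2_mul(t2, d2, p);
                sum += p[0];     /* (1/2) Re Tr is the real part */
            }
    return sum / (6.0 * VOL);    /* six planes per site */
}

int main(void)
{
    /* cold start: all links at the identity, so the plaquette must be 1 */
    for (int x = 0; x < VOL; x++)
        for (int mu = 0; mu < DIM; mu++)
            U[x][mu][0] = 1.0;
    printf("average plaquette = %f\n", avg_plaquette());
    return 0;
}

Reading an actual configuration file in place of the cold start is all that is needed to turn this into a real measurement.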

But now, leaving complaints aside, I come to the most interesting part of this post: the upgrade. In these days I made another effort to improve my machine. The idea is to improve in performance, with larger lattices and shorter execution times, while reducing overheating and noise. Besides, the hardware I was working with was so old that the architecture did not make double precision available. So, I decided to buy a couple of GeForce 580 GTX cards. This is the top of the GeForce line (the 590 GTX is a couple of 580 GTX on a single card) and yields 1.5 Tflops in single precision (the 9800 GX2 stopped at 1 Tflops in single precision). It has the Fermi architecture (CUDA 2.0) and grants double precision at a performance of at least 0.5 Tflops. But, as happens for all video cards, a model has several producers, and these producers may decide to change something in performance. After some difficulties with the dealer, I was able to get a couple of high-performance MSI N580GTX Twin Frozr II/OC at a very convenient price. With respect to the original NVIDIA card, these come overclocked, with a proprietary cooling system that grants a temperature reduced by 19°C with respect to the original card; besides, higher-quality components were used. I received these cards yesterday and immediately installed them. In a few minutes Windows 7 installed the drivers. I recompiled my executables and finally performed a successful computation on a 66^4 lattice with the latest version of Nuno and Pedro's code. Then, I checked the temperature of the cards with the NVIDIA System Monitor and saw a temperature of 60°C for each card, with the coolers working at 106%. This was at least 24°C less than my 9800 GX2 cards! Execution times on the GPUs were reduced by at least a half. This new configuration grants 3 Tflops in single precision and at least 1 Tflops in double precision. My present hardware configuration is then the nForce 790i Ultra motherboard with 8 GB of DDR3 RAM, the two MSI N580GTX Twin Frozr II/OC video cards and the 750 W Cooler Master power supply, all running under Windows 7 Ultimate 64-bit.

So far, I have not had much time to experiment with the new hardware. I hope to say more to you in the near future. Just stay tuned!

Nuno Cardoso, & Pedro Bicudo (2010). SU(2) Lattice Gauge Theory Simulations on Fermi GPUs J.Comput.Phys.230:3998-4010,2011 arXiv: 1010.4834v2


CUDA: The upgrade

16/02/2011

As promised (see here), I am here to talk again about my CUDA machine. I have done the following upgrades:

  • Added 4 GB of RAM and now I have 8 GB of DDR3 RAM clocked at 1333 MHz. This is the maximum allowed by my motherboard.
  • Added a third 9800 GX2 graphics card. This one is an XFX, while the other two that I had already installed are an EVGA and an Nvidia respectively. These three cards are not perfectly identical, as the EVGA is overclocked by the manufacturer and, for all of them, the firmware may not be the same.

At the start of the upgrade process, things were not so straightforward. Sometimes the BIOS complained at boot about the position of the cards in the three PCI Express 2.0 slots and the system did not start at all. But after I found the right combination by permuting the three cards, Windows 7 recognized all of them, the latest Nvidia drivers installed like a charm, and the Nvidia system monitor showed the physical situation of all the GPUs. Heat is a concern here, as the video cards work at about 70°C while the rest of the hardware stays at about 50°C. The box is always open and I intend to keep it so, to reduce the risk of overheating to a minimum.

The main problem arose when I tried to run my CUDA applications from a command window. I have a simple program that just enumerates the GPUs in the system, and also the program for lattice computations by Pedro Bicudo and Nuno Cardoso can check the system to identify the exact set of resources to perform its work at best. Both applications, which I recompiled on the upgraded platform, saw just a single GPU. It was impossible, at first, to get a meaningful behavior from the system. I thought this could be a hardware problem and contacted the XFX support for my motherboard. I bought my motherboard second hand, but I was able to register the product thanks to the seller, who had already done so. People at XFX were very helpful and fast in giving me an answer. The technician said to me, essentially, that the system should work, and so he gave me some advice to identify possible problems. I would like to remind you that a 9800 GX2 contains two graphics cards, and so I have six GPUs to work with. I checked the whole system again until I got the nice configuration above, with Windows 7 seeing all the cards. Just one point remained unanswered: why my CUDA applications did not see the right number of GPUs. This had been an old problem for Nvidia, overcome with a driver revision long before I tried for myself. Currently, my driver is 266.58, the latest one. The solution came out unexpectedly: it was enough to change a setting for multi-GPU use in the Performance menu of the Nvidia monitor, and I got back 5 GPUs instead of just 1. This is not six, but I fear that I cannot do better. The applications now work fine. I recompiled them all and I have successfully run the lattice computation up to a 76^4 lattice in single precision! With these numbers I am already able to perform professional work in lattice computations at home.
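
If you want to run the same kind of check on your machine, a minimal sketch of such an enumeration program looks like this (plain C against the CUDA runtime API; error handling kept to a bare minimum, and not the actual code I use):

#include <stdio.h>
#include <cuda_runtime_api.h>

int main(void)
{
    int n = 0;
    /* ask the CUDA runtime how many devices it can see */
    if (cudaGetDeviceCount(&n) != cudaSuccess) {
        fprintf(stderr, "no CUDA runtime or no devices visible\n");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %d multiprocessors, %.0f MB\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / (1024.0 * 1024.0));
    }
    return 0;
}

With the multi-GPU setting mentioned above wrongly configured, this is exactly the kind of program that reports a single device instead of five.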

Then I spent some time setting up the development environment, with the Parallel Nsight debugger and Visual Studio 2008 for 64-bit applications. So far, I have been able to generate the executable of the lattice simulation under VS 2008. My aim is to debug it, to understand why some values become zero in the output when they should not. Also, I would like to understand why the new version of the lattice simulation that Nuno sent to me does not seem to work properly on my platform. It took me some time to configure Parallel Nsight for my machine. You will need at least two graphics cards to get it running, and you have to activate PhysX, in the Nvidia Performance monitor, on the card that will not run your application. This was a simple enough task, as the online manual of the debugger is well written. Also, the enclosed examples are absolutely useful. My next weekend will be spent fine-tuning the whole matter and starting to do some work with the lattice simulation.

As I go further with this activity, I will keep you informed on my blog. If you want to start such an enterprise by yourself, feel free to get in touch with me to overcome the difficulties and hurdles you will encounter. Surely, things proved to be not as complicated as they appeared at the start.


A striking clue and some more

08/02/2011


My colleagues who participated in “The many faces of QCD” in Ghent last year keep on publishing their contributions to the proceedings. This conference produced several outstanding talks, and so it is worthwhile to tell about them here. I have already done so here, here and here, and I have spent some words on the fine paper by Oliveira, Bicudo and Silva (see here). Today I would like to tell you about an interesting line of research due to Silvio Sorella and colleagues, and about a striking clue, originating from Axel Maas (see his blog), that supports my results on scalar field theory.

Silvio is an Italian physicist who has lived and worked in Rio de Janeiro, Brazil, for a long time. I met him in Ghent, mistaking him for Daniele Binosi. Of course, I was aware of him through his works, which are an important track to follow in order to understand the situation of low-energy Yang-Mills theory. I have already cited him in my blog, both for Ghent and for the Gribov obsession. He, together with David Dudal, Marcelo Guimaraes and Nele Vandersickel (our photographer in Ghent), published on arXiv a couple of contributions (see here and here). Let me explain in a few words why I consider the work of these authors really interesting. As I have said in my short history (see here), Daniel Zwanziger made some fundamental contributions to our understanding of gauge theories. For Yang-Mills theory, he concluded that the gluon propagator should go to zero at very low energies. This conclusion is at odds with current lattice results. The reason for this, as I have already explained, lies in the way Gribov copies are managed. Silvio and his colleagues have shown in a series of papers how Gribov copies and massive gluons can indeed be reconciled by accounting for condensates. A gluon condensate can explain a massive gluon while retaining all the ideas about Gribov copies, and this means that they have also found a way to refine the ideas of Gribov and Zwanziger, making them agree with lattice computations. This is a relevant achievement and a serious competing theory for our understanding of infrared non-Abelian theories. Last but not least, in these papers they are able to show a comparison with experiments, obtaining the masses of the lightest glueballs. This is the proper approach to be followed by whoever aims to understand what is going on in quantum field theory for QCD. I will keep on following the works of these authors, as they surely represent a relevant way to reach our common goal: to catch the way Yang-Mills theory behaves.
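
To give an idea of the form their refinement takes, the tree-level gluon propagator of the refined Gribov-Zwanziger action reads, schematically (the precise combinations of condensates and of the Gribov parameter entering m^2, M^2 and \lambda^4 are spelled out in their papers):

\Delta(p^2)=\frac{p^2+M^2}{p^4+(M^2+m^2)p^2+M^2m^2+\lambda^4}

so that \Delta(0)=M^2/(M^2m^2+\lambda^4) is finite and non-zero: a massive-like gluon fully compatible with the restriction to the Gribov horizon.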

A really brilliant contribution is the one by Axel Maas. Axel is a former student of Reinhard Alkofer and of Attilio Cucchieri & Tereza Mendes. I would like to remind my readers that Axel had the brilliant idea to check Yang-Mills theory on a two-dimensional lattice, raising a lot of fuss in our community, a fuss that is still going on. On a similar line, his contribution to the Ghent conference is again a striking one. Axel thought to couple a scalar field to the gluon field and to study the corresponding behavior on the lattice. In these first computations, he did not consider too large lattices (I would suggest him to use CUDA…), limiting the analysis to 14^4, 20^3 and 26^2. Anyhow, even at these small volumes, he is able to conclude that the propagator of the scalar field becomes a massive one, deviating from the tree-level approximation. The interesting point is that he sees a mass appear also in the case of the massless scalar field, producing groundbreaking evidence for what I proved in 2006 in my PRD paper! Besides, he shows that the renormalized mass is greater than the bare mass, again in agreement with my work. But, as also stated by the author, these are only clues, due to the small volumes he uses. Anyhow, this is a clever track to be pursued, and further studies are needed. It would also be interesting to have a clear idea of whether this mass arises directly from the dynamics of the scalar field itself rather than from its interaction with the Yang-Mills field. I give below a figure for the four-dimensional case in the quenched approximation:

I am sure that this image conveys to my readers the same impression as to me: a shocking result that seems to match, at first sight, the case of the gluon propagator on the lattice (mapping theorem!). At larger volumes it would be interesting to see also the gluon propagator. I expect a lot of interesting results to come out of this approach.
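
For context, the mechanism I proved in 2006 rests on an exact solution of the classical massless quartic scalar field theory; schematically (sn being a Jacobi elliptic function and \mu, \theta integration constants):

\phi(x)=\mu\left(\frac{2}{\lambda}\right)^{\frac{1}{4}}{\rm sn}(p\cdot x+\theta,i),\qquad p^2=\mu^2\sqrt{\frac{\lambda}{2}}

that is, the massless field propagates with a massive dispersion relation: the mass arises from the self-interaction alone, which is exactly what one would like to disentangle in Axel's setup.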


Silvio P. Sorella, David Dudal, Marcelo S. Guimaraes, & Nele Vandersickel (2011). Features of the Refined Gribov-Zwanziger theory: propagators, BRST soft symmetry breaking and glueball masses arXiv arXiv: 1102.0574v1

N. Vandersickel, D. Dudal, & S.P. Sorella (2011). More evidence for a refined Gribov-Zwanziger action based on an effective potential approach arXiv arXiv: 1102.0866

Axel Maas (2011). Scalar-matter-gluon interaction arXiv arXiv: 1102.0901v1

Frasca, M. (2006). Strongly coupled quantum field theory Physical Review D, 73 (2) DOI: 10.1103/PhysRevD.73.027701


Today on arxiv

21/12/2010


I would like to write down a few lines on a paper published today on arXiv by Axel Maas (see here). The author draws an important conclusion about the propagators in Yang-Mills theories: these functions depend very little on the gauge group, once the coupling is kept fixed à la 't Hooft as C_Ag^2, C_A being a Casimir parameter of the group, equal to N for SU(N). The observed changes are just quantitative rather than qualitative, as the author states. Axel does his computations on the lattice in 2 and 3 dimensions and gives an in-depth discussion of the way the scaling solution, the one not seen on the lattice except in the two-dimensional case, is obtained, and of how the propagators are computed on the lattice. This paper opens up a new avenue for this kind of studies and, as far as I can tell, such an extended analysis with respect to different gauge groups was never performed before. Of course, in d=3 the decoupling solution is obtained instead. Axel also shows the behavior of the running coupling. I would like to remember that a decoupling solution implies a massive gluon propagator and a photon-like ghost propagator, while the running coupling is strongly suppressed in the infrared.

The conclusion given in this paper is a strong support to the work that all the people engaged with the decoupling solution are carrying on. As you can see from my work (see here), the only dependence of my propagators on the gauge group is through the 't Hooft coupling. The same conclusion holds true for other authors. It is my conviction that this paper is again an important support to most of the theoretical work done in these recent years. On his side, Axel confirms again a good nose for the choice of the research avenues to be followed.

Axel Maas (2010). On the gauge-algebra dependence of Landau-gauge Yang-Mills propagators arXiv arXiv: 1012.4284v1


The many faces of QCD (2)

10/11/2010

Back home, the conference has ended. A lot of good impressions, both from the physics side and from other aspects, such as the city and the company. On Friday I held my talk. All went fine and I was well inspired, so as to express my ideas at my best. You can find all the talks here. The pictures are here. Now it should be easier to identify me.

Disclaimer: The talks I will comment on are about results very near to my research area. The talks I will not cite are important and interesting as well, and the fact that I will not comment on them implies no judgment of merit, for good or bad. Anyhow, I will appreciate any comment from any participant in the conference aiming to discuss his/her work.

On Tuesday afternoon a session about the phases of QCD started. This field is very active and is one where some breakthroughs are expected to be seen in the near future. I have had a lot of fun getting to know Eduardo Fraga, who was here with two of his students: Leticia Palhares and Ana Mizher. I invite you to read their talks, as these people are doing really fine work. On the same afternoon I listened to the talk of Pedro Bicudo. Pedro, besides being nice company for fun, is also a very good physicist performing relevant work in the area of lattice QCD. He is a pioneer in the use of CUDA, parallel computing using graphics processors, and I intend to use his code, produced with his student Nuno Cardoso, on my machine to start doing lattice QCD at very low cost. In his talk you can see a photo of one of my graphics cards. He used lattice computations to understand the phase diagram of QCD. Quite interesting was the talk of Jan Pawlowski about the phase diagram of two-flavor QCD. He belongs to the group of people that produced the so-called scaling solution, and it is a great moment to see them recognize the very existence of the decoupling solution, the only one presently seen in lattice computations.

On Wednesday the morning session continued along the same line as the preceding day. I would like to cite the work of Marco Ruggieri because, besides being a fine drinking companion (see below), he faces an interesting problem: how does the ground state of QCD change in the presence of a strong magnetic field? Particularly interesting is to see how the phase diagram gets modified. Along the same line were the successive talks of Ana Mizher and Maxim Chernodub. Chernodub presented the claim that in this case the vacuum is that of an electromagnetic superconductor, due to \rho meson condensation. In this area of research the main approach is to use some phenomenological model: Ana Mizher used a linear sigma model, while Marco preferred the Nambu-Jona-Lasinio model. The reason for this is that the low-energy behavior of QCD is not under control, and the use of well-supported effective models is the smartest approach we have at our disposal. Of course, this explains why the work of our community is so important: if we are able to model the propagator of the gluon in the infrared, all the parameters of the Nambu-Jona-Lasinio model are properly fixed and we have the true infrared limit of QCD. So, the stakes are very high here.

In the afternoon there were some talks that touched very nearly the question of the infrared propagators. Silvio Sorella is an Italian theoretical physicist living in Brazil. He is doing very good work in this quest for an understanding of the low-energy behavior of QCD, in collaboration with several other physicists. The idea is to modify the Gribov-Zwanziger scenario, which by itself would produce the scaling solution currently not seen on the lattice, to include the presence of a gluon condensate. This has the effect of producing massive propagators that agree well with lattice computations. In this talk Silvio showed how this approach can give the masses of the lowest states of the glueball spectrum. This has been an important step forward, showing how this approach can be used to give experimental forecasts. Daniel Zwanziger then presented a view of the confinement scenario. The conclusion was very frustrating: so far nobody can go to the Clay Institute to claim the prize; more time is needed. Daniel is the one who proposed the scenario of infrared Yang-Mills theory that produced the scaling solution. The idea is to take into account the problem of Gribov copies and to impose that all the computations be limited to the first Gribov horizon. If you do this, the gluon propagator goes to zero at lowering momenta and you get positivity maximally violated, obtaining a confining theory. So, this scenario has been called Gribov-Zwanziger. From lattice computations we learned that the gluon propagator reaches a non-zero finite value at lowering momenta, and this motivated Silvio and others to see whether one could maintain the original idea of the Gribov horizon while making the Gribov-Zwanziger scenario agree with lattice computations. Matthieu Tissier presented a talk with an original view. The idea is to treat QCD with a small perturbation expansion at one loop and a mass term added by hand. He computed the gluon propagator and compared it with lattice data down to the infrared, obtaining a very good agreement. Arlene Aguilar criticized this approach strongly, as Matthieu worked with a coupling larger than one (a huge one, said Arlene) even while doing small perturbation theory. I talked about this with Matthieu. My view is that the main thing to learn from this kind of computation is that if you take a Yukawa-like propagator with a mass running at least as m^2+cq^2 (do you remember Orlando Oliveira's talk?), the agreement with lattice data is surely fairly good; so, even if one has done something mathematically questionable, surely we apprehend an important fact! The afternoon session was concluded by the talk of Daniele Binosi. With Daniele we spent a nice night in Ghent. He is a student of Joannis Papavassiliou and, together with Arlene Aguilar, this group is doing fine work on numerically solving the Dyson-Schwinger equations to get the full propagator of Yang-Mills theory. They get a very good agreement with lattice data and support the view that, over the full range of energies, the Cornwall propagator for the gluon, with a logarithmically running mass reaching a constant in the infrared, is the right description of the theory. Daniele presented a beautiful computation based on the Batalin-Vilkovisky framework that supported the conclusions of his group. It should be said that he presented a different definition of the running coupling, one that grants a non-trivial fixed point in the infrared. This is a delicate matter since already a proper definition of the running coupling in the infrared is not a trivial question. Daniele's definition is quite different from the one given by Andre Sternbeck in his talk, as the latter has just the trivial fixed point, as is emerging from the lattice computations.

On Thursday the first speaker was Attilio Cucchieri. Attilio and his wife, Tereza Mendes, are doing fine work on lattice computations that reached a breakthrough at Lattice 2007 when they showed, with a volume of (27fm)^4, that the gluon propagator in the Landau gauge reaches a finite non-zero value at lowering momenta. This was a breakthrough, confirmed at the same conference by two other groups (Orlando Oliveira on one side and I. Bogolubsky, E.M. Ilgenfritz, M. Muller-Preussker and A. Sternbeck on the other), as for a long time it was believed that the only true solution was the scaling one, with the gluon propagator going to zero at lowering momenta. This became a paradigm, so that papers got rejected on the basis that they were claiming a different scenario. Attilio this time was on a very conservative side, presenting an interesting technical problem. Tereza's talk was more impressive, showing that, with higher temperatures and increasing volumes, in the Landau gauge the plateau is still there. With Tereza and Attilio we spent some nice time in a pub, discussing together with Marco Ruggieri the history of their community, how they went about changing everything on this matter, and their fight for this. I hope one day these people will write down this history, because there is a lot to learn from it. In the afternoon session there was a talk by Reinhard Alkofer. Alkofer was instrumental in making the scaling solution a paradigm in the community for many years. Unfortunately, lattice computations spoke against it and, as Bob Dylan once said, the times they are a-changin'. He helped the community by discovering a lot of smart students who have given important contributions to it. In his talk he insisted on his view, with a proposal for the functional form of the propagator (missing until now for the scaling solution) and a computation of the mass of the \eta'. The \eta' is a very strange particle. From {\rm DA}\Phi{\rm NE} (KLOE-2) we know that it is not just a composite state of quarks but contains a large part made of glue: it is like having to cope with an excited hydrogen atom, and so also its decay is to be understood (you can read my paper here). So, maybe a more involved discussion is needed before having an idea of how to get the mass of this particle. After Alkofer's talk followed the talks of Aguilar and Papavassiliou. I would like to emphasize the relevance of the work of this group. Aguilar showed how they get an effective quark mass from the Schwinger-Dyson equations when there is no enhancement in the ghost propagator. Papavassiliou proposed to extend the background field method to the Schwinger-Dyson equations. I invite you to check, in Arlene's talk, the agreement they get with lattice data for the Cornwall propagator of the gluon, and how this can give the form m^2+cq^2 at lower momenta. My view is that, combining my recent results on strongly coupled expansions for Yang-Mills and scalar field theories with the results of this group, a meaningful scenario is emerging, giving a complete comprehension of what is going on in Yang-Mills theory at lower energies. Joannis gave us an appointment for next year in Trento. I will do everything I can to be there! Finally, the session was completed by Axel Maas' talk. Axel is a former student of Alkofer and has worked with Attilio and Tereza. He put forward a lattice computation of Yang-Mills propagators in two dimensions that, for me, should have completely settled the question, but produced a lot of debate instead. In his talk he gave another bright idea: to study on the lattice a scalar theory interacting with gluons. I think this is a very smart way to understand the mechanism underlying mass generation in these theories. From the works discussed so far it should appear clear that a Schwinger mechanism (also at the classical level, see my talk!) is at work here. Axel's talk shows this manifestly. It would be interesting if he could redo the computations taking a massless scalar field, to unveil completely the dynamical generation of masses.

On Friday the morning session started with an interesting talk by Hans Dierckx, trying to understand cardiac behavior using string theory. A talk by Oliver Rosten followed. Oliver produced a PhD thesis on the exact renormalization group of about 500 pages (see here). His talk was very beautiful and informative and, in some way, gave support to mine: indeed, discussing the renormalization group, he showed how a strong coupling expansion could emerge. In some way, we are complementary. I will not discuss my talk here, but you are free to ask questions. The conference was concluded by a talk by Peter van Baal. Peter has a terrible story behind him and I will not discuss it here. I can only wish him the best of all possible luck.

Finally, I would like to thank the organizers for the beautiful conference they gave me the chance to join. The venue was very nice (thanks, Nele!) and the city is of an incredible beauty. I think these few lines do not do justice to them, and to all the participants, for what they have given. See you again, folks!


The many faces of QCD

02/11/2010

After a long silence, due to technical impediments as many of you know, I come back to you from Ghent (Belgium). I am participating in the conference "The many faces of QCD". You can find the program here. The place is really beautiful, as is the town, which I had the chance to look at yesterday evening. The organizers have programmed a visit downtown tomorrow and I hope to see this nice town also in the sunlight. The reason why this conference is so relevant is that it gathers almost all the people working on this matter of the Green functions of Yang-Mills theory and QCD, whose works I have cited widely in my blog and in my papers. Now I have the chance to meet them and speak with them. I am writing after the end of the second day. The atmosphere is really exciting, discussion is always alive, and it happens quite often that speakers are interrupted during their presentations. The situation this field is living through is simply unique in the scientific community: these people are at the very start of a possible scientific revolution, as they are finally obtaining results in non-perturbative physics in a crucial field such as QCD.

Disclaimer: The talks I will comment on are about results very near to my research area. The talks I will not cite are important and interesting as well, and the fact that I will not comment on them implies no judgment of merit, for good or bad. Anyhow, I will appreciate any comment from any participant in the conference aiming to discuss his/her work.

I would like to cite some names here, but I fear forgetting somebody surely worthwhile to be named. From my point of view, there have been a couple of talks that caught my attention more strongly than the others, concerning computations on the lattice. This happened with the talk of Tereza Mendes yesterday and the one of Orlando Oliveira today. Tereza studied the gluon propagator at higher temperatures, obtaining again striking and unexpected results: there is this plateau in the gluon propagator appearing again and again as the lattice volume is increased. It would have been interesting to have also a look at the ghost and the running coupling. Orlando, on his side, showed for the first time an attempt to fit with the function G(p)=\sum_n\frac{Z_n}{p^2+m^2_n}, which you may recognize as the one I proposed since my first analysis to explain the infrared behavior of Yang-Mills theory. But Orlando went further and found the next-to-leading-order correction to the mass appearing in a Yukawa-like propagator. The idea is to see whether the original hypothesis of Cornwall can agree with the current lattice computations. So, he showed that for the sum of propagators one gets an even better agreement in the fit by increasing the number of masses (at least 4), and that for the Cornwall propagator you will need a mass corrected as M^2+\alpha p^2. Shocking as it may seem, I computed this term this summer and you can find it in this paper of mine. Indeed, this is a guess I put forward after a referee asked me for an understanding of the next-to-leading corrections to my propagator and, as you can read from my paper, I guessed it would produce a Cornwall-like propagator. Indeed, this is just the first infrared correction that arises by expanding the logarithm in Cornwall's formula.
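
To make this last point explicit, write Cornwall's running mass schematically as (the exponent being the one of his original proposal; I am only sketching the expansion here):

M^2(q^2)=m_0^2\left[\frac{\ln\frac{q^2+4m^2}{\Lambda^2}}{\ln\frac{4m^2}{\Lambda^2}}\right]^{-\frac{12}{11}}\simeq M^2+\alpha q^2+O(q^4)

where the term linear in q^2 comes just from the first-order expansion of the logarithm around q^2=0, giving \alpha=-\frac{12}{11}\frac{m_0^2}{4m^2\ln(4m^2/\Lambda^2)}.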

The question of the gluon condensate, which I treated extensively in my blog thanks to the help of Stephan Narison, has been presented today by Olivier Pène through a lattice computation. Olivier works in the group of Philippe Boucaud and contributed to the emergence of the now so-called decoupling solution for the gluon propagator. The importance of this work relies on the fact that a precise determination of the gluon condensate from the lattice is fundamental for our understanding of the low-energy behavior of QCD. For this analysis it is important to have a precise determination of the constant \Lambda_{QCD}. Boucaud's group produced an approach to this aim. Similarly, Andre Sternbeck showed how this important constant can be obtained through a proper definition of the running coupling, and he showed a very fine agreement with the result of Boucaud's group.

Finally, I would like to remember the talk of Valentin Zakharov. I have talked extensively about Valentin in my previous blog entries. His discoveries in this area of physics are really fundamental, and so it is important to pay particular attention to his talks. Substantially, he mapped scalar fields and Yang-Mills fields to get an understanding of confinement! As I am a strong supporter of this view, as my readers may know from my preceding posts, I was quite excited to see such an idea put forward by Valentin.

As conference’s program unfolds I will take you updated with an eyes toward the aspects that are relevant to my work. Meantime, I hope to have given to you the taste of the excitement this area of research conveys to us that pursue it.

