Nailing down the Yang-Mills problem

22/02/2014

Millennium problems represent a major challenge for physicists and mathematicians. So far, the only one that has been solved is the Poincaré conjecture (now a theorem), by Grisha Perelman. For people working in strong interactions and quantum chromodynamics, the most interesting of these problems is the Yang-Mills existence and mass gap problem. A solution of this problem would imply a lot of consequences in physics, one of the most important being a deep understanding of the confinement of quarks inside hadrons. So far there is no solution, but things are not standing still. A significant number of researchers have performed lattice computations of the propagators of the theory in the full range of energy, from infrared to ultraviolet, providing us a deep understanding of what is going on here (see the Yang-Mills article on Wikipedia). The propagators to be considered are those of the gluon and the ghost. There has been a significant effort from theoretical physicists in the last twenty years to answer this question. It is not so widely known in the community, but it should be, because the work of these people could be the starting point for a great innovation in physics.

In these days, a paper by Axel Maas on arXiv gives a great account of the situation of these lattice computations (see here). Axel has been an important contributor to this research area, and the current understanding of the behavior of Yang-Mills theory in two dimensions owes a lot to him. In this paper, Axel presents his computations on large volumes for Yang-Mills theory on the lattice in 2, 3 and 4 dimensions in the SU(2) case. These computations are generally performed in the Landau gauge (propagators are gauge-dependent quantities), being the most favorable for them. In four dimensions the lattice is (6\ \mathrm{fm})^4, not the largest but surely enough for the aims of the paper. Of course, no surprise comes out with respect to what people have found starting from 2007. The scenario is well settled and is this:

  1. The gluon propagator in 3 and 4 dimensions does not go to zero with momenta but stays finite. In 3 dimensions it has a maximum in the infrared, reaching its finite value at 0 from below; no such maximum is seen in 4 dimensions (a common fit form capturing this behavior is sketched after this list). In 2 dimensions the gluon propagator goes to zero with momenta.
  2. The ghost propagator behaves like that of a free massless particle as the momenta are lowered. This is the dominant behavior in 3 and 4 dimensions. In 2 dimensions the ghost propagator is enhanced, going to infinity faster than in 3 and 4 dimensions.
  3. The running coupling in 3 and 4 dimensions goes to zero as the momenta go to zero, reaches a maximum at intermediate energies and goes asymptotically to zero as momenta go to infinity (asymptotic freedom).
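
A commonly used way to parametrize the first point is a Gribov-Stingl-type fit; this is my choice of illustration here, not necessarily the form adopted in Axel's paper:

D(p^2)=Z\,\frac{p^2+m_1^2}{p^4+m_2^2 p^2+m_3^4}

At p^2\rightarrow 0 this tends to the finite constant Zm_1^2/m_3^4, while at large momenta it falls off like 1/p^2 (up to logarithms), exactly the behavior described in point 1.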

Here follows the figure for the gluon propagator:

[Figure: gluon propagators]

and for the running coupling:

[Figure: running coupling]

There is some concern among people about the running coupling. There is a recurring prejudice in Yang-Mills theory, without any theoretical or experimental support, that the theory should not be trivial in the infrared. So, the running coupling should not go to zero as momenta are lowered but should reach a finite non-zero value. Of course, a pure Yang-Mills theory does not exist in nature and it is very difficult to get an understanding here. But, in 2 and 3 dimensions, the point is that the gluon propagator is very similar to a free one and the ghost propagator is certainly a free one; then, using the duck test (if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck), the theory is really trivial also in the infrared limit. Currently, there are two people in the world that have recognized a duck here: Axel Weber (see here and here), using the renormalization group, and me (see here, here and here). Now, claiming to see a duck where all others pretend to see a dinosaur does not make you the most popular guy in the district. But so it goes.

These lattice computations are an important cornerstone in the search for the behavior of a Yang-Mills theory. Whoever aims to present to the world his pet theory for the solution of the Millennium prize must comply with these results, showing that his theory is able to reproduce them. Otherwise, what he has is just rubbish.

What also comes into sight is the proof of existence of the theory. Having two trivial fixed points, the theory is Gaussian in these limits, exactly as the scalar field theory. A Gaussian theory is the simplest example we know of a quantum field theory that is proven to exist. Could one recover the missing part between the two trivial fixed points, as also happens for the scalar theory? In the end, it is possible that a Yang-Mills theory is just the vectorial counterpart of the well-known scalar field, the workhorse of all scholars in quantum field theory.
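
To see why a Gaussian theory is such safe ground, recall that for a free field everything is encoded in the propagator \Delta: the generating functional is known in closed form,

Z[J]=\exp\left(\frac{1}{2}\int d^4x\, d^4y\, J(x)\Delta(x-y)J(y)\right)

and all the Green functions follow by functional differentiation. If the theory is Gaussian at both trivial fixed points, only the intermediate region remains to be controlled.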

Axel Maas (2014). Some more details of minimal-Landau-gauge Yang-Mills propagators arXiv arXiv: 1402.5050v1

Axel Weber (2012). Epsilon expansion for infrared Yang-Mills theory in Landau gauge Phys. Rev. D 85, 125005 arXiv: 1112.1157v2

Axel Weber (2012). The infrared fixed point of Landau gauge Yang-Mills theory arXiv arXiv: 1211.1473v1

Marco Frasca (2007). Infrared Gluon and Ghost Propagators Phys.Lett.B670:73-77,2008 arXiv: 0709.2042v6

Marco Frasca (2009). Mapping a Massless Scalar Field Theory on a Yang-Mills Theory: Classical Case Mod. Phys. Lett. A 24, 2425-2432 (2009) arXiv: 0903.2357v4

Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory PoS FacesQCD:039,2010 arXiv: 1011.3643v3


Back to CUDA

11/02/2013

It is about two years since I wrote my last post about the CUDA technology by NVIDIA (see here). At that time I had added two new graphics cards to my PC, being on the verge of reaching 3 Tflops in single precision for lattice computations. Indeed, I had an unlucky turn of events and these cards went back to the seller as they were not working properly, and I was completely refunded. Meanwhile, the motherboard also failed and the hardware was largely changed, and so I have been for a long time without the opportunity to work with CUDA and perform the intensive computations I had planned. As is well known, one can find a lot of software exploiting this excellent technology provided by NVIDIA and, during these years, it has been spreading widely, both in academia and industry, making the life of researchers a lot easier. Personally, I am using it also at my workplace and it is really exciting to have such computational capability at hand at a really affordable price.

Now, I am newly able to equip my personal computer at home with a powerful Tesla card. Some of these cards are currently being dismissed as they reach the end of their service life, due to upgrades to more modern ones, and so they can be found at a really small price on bidding sites like eBay. So, I bought a Tesla M1060 for about 200 euros. As the name says, this card was not conceived for a personal computer but rather for servers produced by some OEMs. This can also be realized by looking at the card and seeing a passive cooler: the card has the proper physical dimensions to enter a server, while the active dissipation through fans is expected to be provided by the server itself. Indeed, I added an 80mm Enermax fan to my chassis (also an Enermax Enlobal) to make sure that the motherboard temperature does not reach too high values. My motherboard is an ASUS P8P67 Deluxe. This is a very good board, as usual for ASUS, providing three PCIe 2.0 slots; in principle, one can add up to three video cards together, but if you have a couple of NVIDIA cards in SLI configuration, the slots work at x8, while a single video card will work at x16. Of course, if you plan to work with these configurations, you will need a proper PSU. I have a Cooler Master Silent Pro Gold 1000 W and I am well beyond my needs. This is what remains of my preceding configuration and it is performing really well. I have also changed my CPU, which is now an Intel i3-2125 with two cores at 3.30 GHz and 3 MB cache. Finally, I added 16 GB of Corsair Vengeance DDR3 RAM.

The installation of the card went really smoothly and I got it up and running in a few minutes on Windows 8 Pro 64-bit, after the installation of the proper drivers. I checked with Matlab 2011b and the PGI compilers with CUDA Toolkit 5.0 properly installed. All worked fine. I would like to spend a few words about the PGI compilers, produced by The Portland Group. I have got a trial license at home and tested them, while at my workplace we have a fully working license. These compilers make the realization of accelerated CUDA code absolutely easy: all you need is to insert into your C or Fortran code some preprocessing directives. I have executed some performance tests and the gain is really impressive, without ever writing a single line of CUDA code. These compilers can also be easily introduced into Matlab to yield mex-files or S-functions, even if they are not yet supported by Mathworks (they should be!), and this too I have verified without much difficulty, both for C and Fortran.
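
To give an idea of what these directives look like, here is a minimal sketch in the directive-based style the PGI compilers support (OpenACC); the saxpy kernel and all names are my own illustration, not code from any of the packages mentioned:

    #include <stdio.h>

    #define N (1 << 20)

    int main(void)
    {
        static float x[N], y[N];
        const float a = 2.0f;
        int i;

        for (i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* saxpy, y = a*x + y: the pragma asks the compiler to generate
           a GPU kernel and to manage the data movement by itself */
        #pragma acc parallel loop copyin(x[0:N]) copy(y[0:N])
        for (i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]); /* expect 4.0 */
        return 0;
    }

Compiled with something like pgcc -acc, the compiler generates the GPU kernel on its own: this is the kind of gain for free I refer to above.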

Finally, I would like to give you an idea of the way I will use CUDA technology for my aims. What I am doing right now is porting some good code for the scalar field, and I would like to use it in the limit of large self-interaction to derive the spectrum of the theory. It is well known that if you take the limit of the self-interaction going to infinity you recover the Ising model (see below). But I would like to see what happens at intermediate but large values, as I was not able to get any hint from the literature on this, notwithstanding that this is the workhorse for people doing lattice computations. What seems to matter today is to show triviality in four dimensions, a well-acquired piece of evidence. As soon as the accelerated code runs properly, I plan to share it here, as it is very easy to get good code to do lattice QCD but very difficult to get good code for scalar field theory. Stay tuned!
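
For definiteness, in the standard lattice parametrization the statement about the Ising limit reads as follows (standard textbook material, my own recap):

S=\sum_x\left[-2\kappa\sum_\mu \phi_x\phi_{x+\hat\mu}+\phi_x^2+\lambda\left(\phi_x^2-1\right)^2\right]

For \lambda\rightarrow\infty the potential forces \phi_x=\pm 1 and, up to an irrelevant constant, the action reduces to the Ising model with coupling \beta=2\kappa. The region of large but finite \lambda is the one I am after.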


Large-N gauge theories on the lattice

22/10/2012

Today I have found on arXiv a very nice review about large-N gauge theories on the lattice (see here). The authors, Biagio Lucini and Marco Panero, are well-known experts on lattice gauge theories, this being their main area of investigation. This review, to appear in Physics Reports, gives a nice introduction to this approach to managing non-perturbative regimes in gauge theories. This is essential to understand the behavior of QCD, both at zero and finite temperature, and to catch the behavior of the bound states commonly observed. Besides this, the question of confinement is still an open problem. Indeed, a theoretical understanding is lacking, and lattice computations, especially in the very simplifying limit of a large number of colors N as devised in the '70s by 't Hooft, can make the scenario clearer, favoring a better analysis.

What is seen is that confinement is fully preserved, as one gets an exactly linear rising potential in the limit of N going to infinity, and also the higher-order corrections are obtained, diminishing as N increases. They are able to estimate the string tension, obtaining (Fig. 7 in their paper):

\frac{\Lambda_{\overline{MS}}}{\sqrt{\sigma}}\approx a+\frac{b}{N^2}.

This is a reference result for whoever aims to get a solution to the mass gap problem for a Yang-Mills theory, as the string tension must be an output of such a solution. The interquark potential has the form

m(L)=\sigma L-\frac{\pi}{3L}+\ldots

This ansatz agrees with numerical data down to distances of about 3/\sqrt{\sigma}! Two other fundamental results these authors cite for the four-dimensional case are the glueball spectrum:

\frac{m_{0^{++}}}{\sqrt{\sigma}}=3.28(8)+\frac{2.1(1.1)}{N^2},
\frac{m_{0^{++*}}}{\sqrt{\sigma}}=5.93(17)-\frac{2.7(2.0)}{N^2},
\frac{m_{2^{++}}}{\sqrt{\sigma}}=4.78(14)+\frac{0.3(1.7)}{N^2}.

Again, these are reference values for the mass gap problem in a Yang-Mills theory. As my readers know, I was able to get them out of my computations (see here). More recently, I have also obtained the higher-order corrections and the linear rising potential (see here), with the string tension in a closed form very similar to the three-dimensional case. Finally, they give the critical temperature for the breaking of chiral symmetry. The result is

\frac{T_c}{\sqrt{\sigma}}=0.5949(17)+\frac{0.458(18)}{N^2}.

This result is rather interesting because the constant is about \sqrt{3/\pi^2}. This result was obtained initially by Norberto Scoccola and Daniel Gómez Dumm (see here) and confirmed by me (see here). It pertains to a finite-temperature theory, and a mass gap analysis of Yang-Mills theory should recover it, but here the question is somewhat more complex. I would add to these lattice results also the studies of the propagators for a pure Yang-Mills theory in the Landau gauge, both at zero and finite temperature. The scenario has reached a really significant level of maturity and it is time that some of the theoretical proposals put forward so far are compared with it. I have just cited some of these works, but the literature is becoming increasingly vast, with other really meaningful techniques besides the cited ones.
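
The 1/N^2 extrapolations quoted above are simple two-parameter linear fits. For readers who want to play with them, here is a minimal sketch in C; the data points are made up for illustration and are NOT the Lucini-Panero values:

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical measurements y(N), to be extrapolated as y = a + b/N^2 */
        const int    N[] = {2, 3, 4, 5, 6, 8};
        const double y[] = {3.78, 3.52, 3.41, 3.36, 3.34, 3.31};
        const int    n   = 6;

        double su = 0, sy = 0, suu = 0, suy = 0;
        int i;
        for (i = 0; i < n; i++) {
            double u = 1.0 / ((double)N[i] * N[i]);  /* u = 1/N^2 */
            su += u; sy += y[i]; suu += u * u; suy += u * y[i];
        }
        /* normal equations of the least-squares line y = a + b*u */
        double b = (n * suy - su * sy) / (n * suu - su * su);
        double a = (sy - b * su) / n;

        printf("large-N limit a = %g, 1/N^2 coefficient b = %g\n", a, b);
        return 0;
    }

The extracted a is the large-N value and b the leading correction, exactly the two numbers quoted in the formulas above.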

As usual, I conclude this post on such a nice paper with the hope that maybe the time has come to increase the level of awareness in the community about the theoretical achievements on the question of the mass gap in quantum field theories.

Biagio Lucini, & Marco Panero (2012). SU(N) gauge theories at large N arXiv arXiv: 1210.4997v1

Marco Frasca (2008). Yang-Mills Propagators and QCD Nuclear Physics B (Proc. Suppl.) 186 (2009) 260-263 arXiv: 0807.4299v2

Marco Frasca (2011). Beyond one-gluon exchange in the infrared limit of Yang-Mills theory arXiv arXiv: 1110.2297v4

D. Gomez Dumm, & N. N. Scoccola (2004). Characteristics of the chiral phase transition in nonlocal quark models Phys.Rev. C72 (2005) 014909 arXiv: hep-ph/0410262v2

Marco Frasca (2011). Chiral symmetry in the low-energy limit of QCD at finite temperature Phys. Rev. C 84, 055208 (2011) arXiv: 1105.5274v4


Today in arXiv (2)

03/05/2011

Today I have found in the arXiv daily some papers that make it worthwhile to talk about them. The contribution by Attilio Cucchieri and Tereza Mendes at the Ghent conference "The many faces of QCD" is out (see here). They study the gluon propagator in the Landau gauge at finite temperature on a significantly large lattice. The theory is SU(2) pure Yang-Mills. As you know, the gluon propagator in the Landau gauge at finite temperature is assumed to get two contributions: a longitudinal and a transverse one. This situation is quite different from the zero-temperature case, where such a distinction does not exist. But, of course, such a conclusion could only be drawn if the propagator were not that of massive excitations, and we already know from lattice computations that the massive solutions are the supported ones. In this case we should expect that, at finite temperature, one of the components of the propagator is suppressed and a massive gluon is seen again. Tereza and Attilio see exactly this behavior. I show you a picture extracted from their paper here:

[Figure: longitudinal and transverse gluon propagators at finite temperature]

The effect is markedly seen as the temperature is increased. The transverse propagator is more and more suppressed, while the longitudinal propagator reaches a plateau, as in the zero-temperature case, but with the position of the plateau depending on the temperature, which makes it increase. Besides, Attilio and Tereza show how the computation of the longitudinal component is really sensitive to the lattice size, and they increase it until the behavior settles to a stable one. In order to perform this computation they used their new CUDA machine (see here). This result is really beautiful and I can anticipate that it agrees quite well with computations that Marco Ruggieri and I are performing, yet to be published. Besides, they get a massive gluon of the right value, but with a mass decreasing with temperature, as can be deduced from the moving of the plateau of the longitudinal propagator, which indeed is the one of the decoupling solution at zero temperature.
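
For reference, the decomposition at work here is the standard one for a gauge propagator in the Landau gauge at finite temperature (textbook material, not specific to their paper):

D_{\mu\nu}(p)=P^T_{\mu\nu}\,D_T(p)+P^L_{\mu\nu}\,D_L(p)

with P^T and P^L the projectors transverse and longitudinal with respect to the heat bath, so that D_T describes the magnetic sector and D_L the electric one; at T=0, Euclidean invariance forces D_T=D_L and the distinction disappears.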

As an aside, I would like to point out to you a couple of works on QCD at finite temperature on the lattice by the Portuguese group headed by Pedro Bicudo, with Nuno Cardoso and Marco Cardoso. I have already pointed out their fine work on the lattice, which was very helpful for the studies I am still carrying on (you can find some links at their page). But now they have moved to the case of finite temperature (here and here). These papers are worthwhile to read.

Finally, I would like to point out a really innovative paper by Arata Yamamoto (see here). This is again a lattice computation performed at finite temperature, with an important modification: the chiral chemical potential. This is an important concept introduced, e.g. here and here, by Kenji Fukushima, Marco Ruggieri and Raoul Gatto. There is a fundamental reason to introduce a chiral chemical potential, and this is the sign problem seen in lattice QCD at finite density. This problem makes lattice computations meaningless unless some workaround is adopted, and the chiral chemical potential is one of these. Of course, this implies some relevant physical expectations that a lattice computation should confirm (see here). In this vein, this paper by Yamamoto is a really innovative one, facing such computations on the lattice using a chiral chemical potential for the first time. Being a pioneering paper, the choice of rather small volumes appears at first a shortcoming. As we already discussed above for the gluon propagator in a pure Yang-Mills theory, the relevance of larger volumes to recover the right physics cannot be underestimated. Besides, the lattice spacing is 0.13 fm, corresponding to a physical energy of 1.5 GeV, which is high enough to miss the infrared region and so the range of validity of a Polyakov-Nambu-Jona-Lasinio model as currently used in the literature. So, while the track is opened by this paper, it appears demanding to expand the lattice at least to recover the range of validity of infrared models and grant in this way a proper comparison with results in the known literature. Notwithstanding these comments, the methods and the approach used by the author are a fundamental starting point for any future development.
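
The reason why a chiral chemical potential evades the sign problem can be stated in one line (a standard argument, my own recap). The added term and its consequence for the Euclidean Dirac operator are

\mathcal{L}\rightarrow\mathcal{L}+\mu_5\,\bar{\psi}\gamma^0\gamma_5\psi,\qquad \gamma_5\,D(\mu_5)\,\gamma_5=D(\mu_5)^\dagger\;\Rightarrow\;\det D(\mu_5)\ \mathrm{real.}

Unlike a baryon chemical potential, \mu_5 preserves the \gamma_5-hermiticity of the Dirac operator, so the fermion determinant stays real and a positive Monte Carlo weight can be arranged, which is exactly what makes Yamamoto's computation feasible.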

Attilio Cucchieri, & Tereza Mendes (2011). Electric and magnetic Landau-gauge gluon propagators in finite-temperature SU(2) gauge theory arXiv arXiv: 1105.0176v1

Nuno Cardoso, Marco Cardoso, & Pedro Bicudo (2011). Finite temperature lattice QCD with GPUs arXiv arXiv: 1104.5432v1

Pedro Bicudo, Nuno Cardoso, & Marco Cardoso (2011). The chiral crossover, static-light and light-light meson spectra, and the deconfinement crossover arXiv arXiv: 1105.0063v1

Arata Yamamoto (2011). Chiral magnetic effect in lattice QCD with chiral chemical potential arXiv arXiv: 1105.0385v1

Fukushima, K., Ruggieri, M., & Gatto, R. (2010). Chiral magnetic effect in the Polyakov–Nambu–Jona-Lasinio model Physical Review D, 81 (11) DOI: 10.1103/PhysRevD.81.114031

Fukushima, K., & Ruggieri, M. (2010). Dielectric correction to the chiral magnetic effect Physical Review D, 82 (5) DOI: 10.1103/PhysRevD.82.054001


CUDA: Upgrading to 3 Tflops

29/03/2011

When I was a graduate student I heard a lot about the wonderful performance of the Cray-1 parallel computer and the promise to explore unknown fields of knowledge with this unleashed power. This admirable machine reached a peak of 250 Mflops. Its near descendant, the Cray-2, performed at 1700 Mflops and for scientists this was indeed a new era in the help to attack difficult mathematical problems. But when you look at QCD, all these seem just toys for a kindergarten, and one is not even able to perform the simplest computations to extract meaningful physical results. So, physicists started to design very specialized machines in the hope of improving the situation.

Today the situation has changed dramatically. The reason is that the increasing need for computation to perform complex tasks on a video output requires extended parallel computation capability for very simple mathematical tasks. But such mathematical tasks are all one needs to perform scientific computations. The flagship company in this area is Nvidia, which produced CUDA for their graphics cards. This means that today one can have outperforming parallel computation on a desktop computer, and we are talking about some Teraflops of capability! All this at a very affordable cost. With a few bucks you can have on your desktop a machine performing a thousand times better than a legendary Cray machine. Now, the counterpart machine of a Cray-1 is a CUDA cluster breaking the barrier of Petaflops! Something people were dreaming of just a few years ago. This means that you can do complex and meaningful QCD computations in your office, whenever you like, without the need to share CPU time with anybody and pushing your machine to its best. All this with costs that are not a concern anymore.

So, with this opportunity in sight, I jumped on this bandwagon and a few months ago I upgraded my desktop computer at home into a CUDA supercomputer. The first idea was just to buy old material from eBay at very low cost to build on what was already in my machine. In 2008 the top of the GeForce Nvidia cards was the 9800 GX2. This card comes equipped with a couple of GPUs with 128 cores each, 0.5 GB of RAM per GPU and support for CUDA architecture 1.1; no double precision available, an option that started to be present with cards having CUDA architecture 1.3 some time later. You can find such a card on eBay for about 100-120 euros. You will also need a proper motherboard. Indeed, again in 2008, Nvidia produced the nForce 790i Ultra, properly fitted for these aims. This board supports a 3-way SLI configuration and, as my readers know, I installed up to three 9800 GX2 cards on it. I have got this board on eBay for a pricing similar to that of the video cards. Also, before starting this adventure, I already had a 750 W Cooler Master power supply. It took little time to have this hardware up and running, reaching the considerable computational power of 2 Tflops in single precision, all this with hardware at least 3 years old! For the operating system I chose Windows 7 Ultimate 64-bit, after an initial failure with Linux Ubuntu 64-bit.

There is a wide choice on the web of software to run for QCD. The most widespread is surely the MILC code. This code is written for a multi-processor environment and represents the effort of several people spanning several years of development. It is well written and rather well documented. From this code a lot of papers on lattice QCD have gone through the most relevant archival journals. Quite recently they started to port this code to CUDA GPUs, following a trend common to all academia. Of course, for my aims, being a lone user of CUDA and having not much time for development, I had the not very attractive perspective of trying the porting of this code to GPUs myself. But, at the same time as I upgraded my machine, Pedro Bicudo and Nuno Cardoso published their paper on arXiv (see here) and promptly made available their code for SU(2) QCD on CUDA GPUs. You can download their up-to-date code here (if you plan to use this code just let them know, as they are very helpful). So, I ported this code, originally written for Linux, to Windows 7 and I have got it up and running, obtaining the right output for lattices up to 56^4, working just in single precision as, for this hardware configuration, no double precision was available. The execution time was acceptable, a few seconds on the GPUs and some more at the start of the program due to CPU-GPU exchanges. So, already at this stage I am able to be productive at a professional level with lattice computations. Just a little complaint is in order here. On the web it is very easy to find good code to perform lattice QCD, but nothing can be found for the post-processing of configurations. This code is as important as the former: without the computation of observables one can do nothing with configurations or whatever else lattice QCD yields on whatever powerful machine. So, I think it would be worthwhile to have both codes available, to get spectra, propagators and so on starting from a standard configuration file, independently of the program that generated it. Similarly, it appears almost impossible to get lattice code for computations in lattice scalar field theory (thanks a lot to Colin Morningstar for providing me code for 2+1 dimensions!). This is a workhorse for people learning lattice computation and it would be helpful, at least for pedagogical reasons, to make it available in the same way QCD code is; a minimal example of what I mean is sketched below. But now, I leave aside complaints and go to the most interesting part of this post: the upgrade.
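
For the sake of pedagogy, here is the kind of minimal scalar-field code I have in mind: a single-threaded Metropolis sweep for \phi^4 theory on an 8^4 lattice, in the standard parametrization S=\sum_x[-2\kappa\sum_\mu\phi_x\phi_{x+\hat\mu}+\phi_x^2+\lambda(\phi_x^2-1)^2]. This is my own sketch for illustration, not the code discussed in this post; a serious program would add measurements, I/O of configurations and, of course, a CUDA kernel:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define L 8
    #define V (L*L*L*L)

    static double phi[V];

    /* periodic nearest neighbor of a site along direction mu (dir = +1/-1) */
    static int neighbor(int site, int mu, int dir)
    {
        int c[4], k, s = site;
        for (k = 0; k < 4; k++) { c[k] = s % L; s /= L; }
        c[mu] = (c[mu] + dir + L) % L;
        return ((c[3]*L + c[2])*L + c[1])*L + c[0];
    }

    int main(void)
    {
        const double kappa = 0.15, lambda = 1.0, delta = 0.5;
        int sweep, x, mu, i;
        srand(12345);
        for (i = 0; i < V; i++) phi[i] = 0.0;  /* cold start */

        for (sweep = 0; sweep < 100; sweep++) {
            for (x = 0; x < V; x++) {
                double nb = 0.0, old = phi[x], trial, dS;
                for (mu = 0; mu < 4; mu++)
                    nb += phi[neighbor(x, mu, +1)] + phi[neighbor(x, mu, -1)];
                trial = old + delta * (2.0 * rand() / RAND_MAX - 1.0);
                /* local change of the action under phi(x) -> trial */
                dS = -2.0 * kappa * nb * (trial - old)
                   + (trial * trial - old * old)
                   + lambda * (pow(trial * trial - 1.0, 2)
                             - pow(old * old - 1.0, 2));
                if (dS <= 0.0 || (double)rand() / RAND_MAX < exp(-dS))
                    phi[x] = trial;  /* Metropolis accept */
            }
            if (sweep % 10 == 0) {
                double m = 0.0;
                for (i = 0; i < V; i++) m += phi[i];
                printf("sweep %d  <phi> = %f\n", sweep, m / V);
            }
        }
        return 0;
    }

Taking \lambda very large in this code freezes \phi_x to \pm 1 and one is effectively simulating the Ising model.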

In these days I made another effort to improve my machine. The idea is to improve the performance, with larger lattices and shorter execution times, while reducing overheating and noise. Besides, the hardware I worked with was so old that the architecture did not make double precision available. So, I decided to buy a couple of GeForce 580 GTX cards. This is the top of the GeForce line (the 590 GTX is a couple of 580 GTX on a single card) and yields 1.5 Tflops in single precision (the 9800 GX2 stopped at 1 Tflops in single precision). It has the Fermi architecture (CUDA 2.0) and grants double precision at a possible performance of at least 0.5 Tflops. But, as happens for all video cards, a model has several producers and these producers may decide to change something in the performance. After some difficulties with the dealer, I was able to get a couple of high-performance MSI N580GTX Twin Frozr II/OC at a very convenient price. With respect to the original Nvidia card, these come overclocked, with a proprietary cooler system that grants a temperature reduced by 19°C with respect to the original card. Besides, higher-quality components were used. I received these cards yesterday and I immediately installed them. In a few minutes Windows 7 installed the drivers. I recompiled my executables and finally I performed a successful computation on a 66^4 lattice with the latest version of Nuno and Pedro's code. Then, I checked the temperature of the cards with Nvidia System Monitor and I saw a temperature of 60°C for each card with the cooler working at 106%. This was at least 24°C less than my 9800 GX2 cards! Execution times on the GPUs were reduced to at least a half. This new configuration grants 3 Tflops in single precision and at least 1 Tflops in double precision. My present hardware configuration is the following:

So far, I have not had much time to experiment with the new hardware. I hope to say more to you in the near future. Just stay tuned!

Nuno Cardoso, & Pedro Bicudo (2010). SU(2) Lattice Gauge Theory Simulations on Fermi GPUs J.Comput.Phys.230:3998-4010,2011 arXiv: 1010.4834v2


CUDA: The upgrade

16/02/2011

As promised (see here), I am here to talk again about my CUDA machine. I have done the following upgrades:

  • Added 4 GB of RAM and now I have 8 GB of DDR3 RAM clocked at 1333 MHz. This is the maximum allowed by my motherboard.
  • Added the third 9800 GX2 graphics card. This is an XFX, while the other two already installed are EVGA and Nvidia respectively. These three cards are not perfectly identical, as the EVGA is overclocked by the manufacturer and, for all of them, the firmware may not be the same.

At the start of the upgrade process things were not so straightforward. Sometimes the BIOS complained at boot about the position of the cards in the three PCI Express 2.0 slots and the system did not start at all. But after I found the right permutation of the three cards, Windows 7 recognized all of them, the latest Nvidia drivers installed like a charm, and the Nvidia system monitor showed the physical situation of all the GPUs. Heat is a concern here, as the video cards work at about 70°C while the rest of the hardware is at about 50°C. The box is always open and I intend to keep it so, to reduce the risk of overheating to a minimum.

The main problem arose when I tried to run my CUDA applications from a command window. I have a simple program that just enumerates the GPUs in the system, and also the program for lattice computations of Pedro Bicudo and Nuno Cardoso can check the system to identify the exact set of resources to perform its work at best. Both applications, which I recompiled on the upgraded platform, saw just a single GPU. It was impossible, at first, to get a meaningful behavior from the system. I thought this could be a hardware problem and contacted the XFX support for my motherboard. I bought my motherboard second hand, but I was able to register the product thanks to the seller, who had already done so. People at XFX were very helpful and fast in giving me an answer. The technician said to me essentially that the system should work, and so he gave me some advice to identify possible problems. I would like to remind my readers that a 9800 GX2 contains two graphics cards, so I have six GPUs to work with. I checked all the system again until I got the nice configuration above, with Windows 7 seeing all the cards. Just one point remained unanswered: why my CUDA applications did not see the right number of GPUs. This had been an old problem for Nvidia and was overcome with a driver revision long before I tried for myself. Currently, my driver is 266.58, the latest one. The solution came out unexpectedly: it was enough to change a setting in the Performance menu of the Nvidia monitor for the use of multi-GPU, and I got back 5 GPUs instead of just 1. This is not six, but I fear that I cannot do better. The applications now work fine. I recompiled them all and I have run the lattice computation successfully up to a 76^4 lattice in single precision! With these numbers I am already able to perform professional work in lattice computations at home.
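
For completeness, an enumeration program of the kind I mentioned is just a few lines with the CUDA runtime API; this is my own minimal reconstruction, not the original tool:

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int n = 0;
        cudaError_t err = cudaGetDeviceCount(&n);
        if (err != cudaSuccess) {
            fprintf(stderr, "cudaGetDeviceCount failed: %s\n",
                    cudaGetErrorString(err));
            return 1;
        }
        printf("found %d CUDA device(s)\n", n);
        for (int i = 0; i < n; i++) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("  %d: %s, compute capability %d.%d, %lu MB\n",
                   i, prop.name, prop.major, prop.minor,
                   (unsigned long)(prop.totalGlobalMem >> 20));
        }
        return 0;
    }

If such a program reports fewer devices than are physically present, the culprit is likely a driver setting, as it was in my case, rather than the code itself.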

Then I spent some time setting up the development environment with the Parallel Nsight debugger and Visual Studio 2008 for 64-bit applications. So far, I was able to generate the executable of the lattice simulation under VS 2008. My aim is to debug it to understand why some values become zero in the output when they should not. Also, I would like to understand why the new version of the lattice simulation that Nuno sent to me does not seem to work properly on my platform. It took me some time to configure Parallel Nsight for my machine. You will need at least two graphics cards to get it running, and you have to activate PhysX, in the Performance monitor of Nvidia, on the card that will not run your application. This was a simple enough task, as the online manual of the debugger is well written. Also, the enclosed examples are absolutely useful. My next weekend will be spent fine-tuning the whole thing and starting to do some work with the lattice simulation.

As I go further with this activity I will inform you on my blog. If you want to start such an enterprise by yourself, feel free to get in touch with me to overcome the difficulties and hurdles you will encounter. Surely, things proved to be not as complicated as they appeared at the start.


A striking clue and some more

08/02/2011

My colleagues who participated in "The many faces of QCD" in Ghent last year keep on publishing their contributions to the proceedings. This conference produced several outstanding talks, and so it is worthwhile to tell about them here. I have already said something about this here, here and here, and I have spent some words about the fine paper of Oliveira, Bicudo and Silva (see here). Today I would like to tell you about an interesting line of research due to Silvio Sorella and colleagues, and a striking clue supporting my results on scalar field theory originating from Axel Maas (see his blog).

Silvio is an Italian physicist who has lived and worked in Brazil, Rio de Janeiro, for a long time. I met him at Ghent, mistaking him for Daniele Binosi. Of course, I was aware of him through his works, which are an important track followed to understand the situation of low-energy Yang-Mills theory. I have already cited him in my blog, both for Ghent and for the Gribov obsession. He, together with David Dudal, Marcelo Guimaraes and Nele Vandersickel (our photographer in Ghent), published on arXiv a couple of contributions (see here and here). Let me explain in a few words why I consider the work of these authors really interesting. As I have said in my short history (see here), Daniel Zwanziger made some fundamental contributions to our understanding of gauge theories. For Yang-Mills, he concluded that the gluon propagator should go to zero at very low energies. This conclusion is at odds with current lattice results. The reason for this, as I have already explained, arises from the way Gribov copies are managed. Silvio and other colleagues have shown in a series of papers how Gribov copies and massive gluons can indeed be reconciled by accounting for condensates. A gluon condensate can explain a massive gluon while retaining all the ideas about Gribov copies, and this means that they have also found a way to refine the ideas of Gribov and Zwanziger, making them agree with lattice computations. This is a relevant achievement and a serious competing theory in our understanding of infrared non-Abelian theories. Last but not least, in these papers they are able to show a comparison with experiments, obtaining the masses of the lightest glueballs. This is the proper approach to be followed by whoever aims to understand what is going on in quantum field theory for QCD. I will keep on following the works of these authors, being surely a relevant way to reach our common goal: to catch the way Yang-Mills theory behaves.

A really brilliant contribution is the one of Axel Maas. Axel is a former student of Reinhard Alkofer and of Attilio Cucchieri & Tereza Mendes. I would like to remind my readers that Axel had the brilliant idea of checking Yang-Mills theory on a two-dimensional lattice, raising a lot of fuss in our community that is still ongoing. On a similar line, his contribution to the Ghent conference is again a striking one. Axel had the idea to couple a scalar field to the gluon field and study the corresponding behavior on the lattice. In these first computations, he did not consider too large lattices (I would suggest he use CUDA…), limiting the analysis to 14^4, 20^3 and 26^2. Anyhow, even at these small volumes, he is able to conclude that the propagator of the scalar field becomes a massive one, deviating from the tree-level approximation. The interesting point is that he sees a mass appear also in the case of the massless scalar field, producing groundbreaking evidence for what I proved in 2006 in my PRD paper! Besides, he shows that the renormalized mass is greater than the bare mass, again in agreement with my work. But, as also stated by the author, these are only clues, due to the small volumes he uses. Anyhow, this is a clever track to be pursued and further studies are needed. It would also be interesting to have a clear idea whether this mass arises directly from the dynamics of the scalar field itself rather than from its interaction with the Yang-Mills field. I give below a figure for the four-dimensional case in a quenched approximation:

[Figure: scalar field propagator, quenched four-dimensional case]

I am sure that this image will convey to my readers the same impression it made on me: a shocking result that seems to match, at first sight, the case of the gluon propagator on the lattice (mapping theorem!). At larger volumes it would be interesting to see also the gluon propagator. I expect a lot of interesting results to come out of this approach.

Silvio P. Sorella, David Dudal, Marcelo S. Guimaraes, & Nele Vandersickel (2011). Features of the Refined Gribov-Zwanziger theory: propagators, BRST soft symmetry breaking and glueball masses arxiv arXiv: 1102.0574v1

N. Vandersickel, D. Dudal, & S. P. Sorella (2011). More evidence for a refined Gribov-Zwanziger action based on an effective potential approach arXiv arXiv: 1102.0866

Axel Maas (2011). Scalar-matter-gluon interaction arxiv arXiv: 1102.0901v1

Frasca, M. (2006). Strongly coupled quantum field theory Physical Review D, 73 (2) DOI: 10.1103/PhysRevD.73.027701

