Back to CUDA

11/02/2013

It is about two years since I wrote my last post about the CUDA technology provided by NVIDIA (see here). At that time I had added two new graphics cards to my PC, bringing me to the verge of reaching 3 Tflops in single precision for lattice computations. Then came an unlucky turn of events: those cards went back to the seller as they were not working properly, and I was completely refunded. Meanwhile, the motherboard also failed and the hardware was largely replaced, so for a long time I had no opportunity to work with CUDA and to perform the intensive computations I had planned. As is well known, one can find a lot of software exploiting this excellent technology provided by NVIDIA and, during these years, it has spread widely, both in academia and industry, making researchers' lives a lot easier. Personally, I also use it at my workplace and it is really exciting to have such computational power at hand at a really affordable price.

Now, I am again able to equip my personal computer at home with a powerful Tesla card. Some of these cards are currently being retired at the end of their service life, replaced by more modern ones, and so can be found at a really low price on auction sites like eBay. So, I bought a Tesla M1060 for about 200 euros. As the name says, this card was not conceived for a personal computer but rather for servers produced by some OEMs. This is also apparent from the passive cooler: the card is sized to fit inside a server, while active cooling through fans is expected to be provided by the server itself. Indeed, I added an 80 mm Enermax fan to my chassis (also an Enermax Enlobal) to make sure the motherboard temperature does not climb too high. My motherboard is an ASUS P8P67 Deluxe. This is a very good board, as usual for ASUS, providing three PCIe 2.0 slots so that, in principle, one can add up to three video cards. But if you have a couple of NVIDIA cards in SLI configuration, the slots work at x8; a single video card will work at x16. Of course, if you plan to work with these configurations, you will need a proper PSU. I have a Cooler Master Silent Pro Gold 1000 W, well beyond my needs. This is what remains from my preceding configuration and it is performing really well. I have also changed my CPU, now an Intel i3-2125 with two cores at 3.30 GHz and 3 MB cache. Finally, I added 16 GB of Corsair Vengeance DDR3 RAM.

The installation of the card went really smoothly and I got it up and running in a few minutes on Windows 8 Pro 64-bit, after installing the proper drivers. I checked it with Matlab 2011b and with the PGI compilers, with CUDA Toolkit 5.0 properly installed. All worked fine. I would like to spend a few words on the PGI compilers, produced by The Portland Group. I have a trial license at home and tested them there, while at my workplace we have a fully working license. These compilers make writing accelerated CUDA code absolutely easy: all you need to do is insert some compiler directives into your C or Fortran code. I have run some performance tests and the gain is really impressive, without ever writing a single line of CUDA code. These compilers can also be easily hooked into Matlab to produce mex-files or S-functions, even if they are not yet supported by Mathworks (they should be!), and I have verified this too without much difficulty, both for C and Fortran.
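To give a concrete idea of what these directives look like, here is a minimal sketch in C using an OpenACC-style pragma of the kind the PGI compilers understand (the function and array names are just my illustration, not code from any real project):

/* Scale-and-add of two arrays. The pragma asks the compiler to generate
   GPU code for the loop and to manage the data movement itself, so no
   explicit CUDA is ever written. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

Compiling with something like pgcc -acc turns the loop into a CUDA kernel, while dropping the flag gives back ordinary serial code, which is what makes this approach so painless.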

Finally, I would like to give you an idea of the way I will use CUDA technology for my own aims. What I am doing right now is porting some good code for the scalar field, and I would like to use it in the limit of large self-interaction to derive the spectrum of the theory. It is well known that if you take the limit of the self-interaction going to infinity you recover the Ising model. But I would like to see what happens at intermediate but large values, as I was not able to get any hint from the literature on this, notwithstanding that the scalar field is the workhorse for anyone doing lattice computations. What seems to matter today is showing triviality in four dimensions, by now well-established evidence. As soon as the accelerated code runs properly, I plan to share it here: it is very easy to find good code for lattice QCD, but it is surprisingly difficult to find good code for scalar field theory. A minimal sketch of what such a code does is given below. Stay tuned!
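Here is the sketch, in plain C (my own illustration, not the code I am actually porting): a Metropolis sweep for lattice \phi^4 theory in four dimensions, written in the usual hopping-parameter form with action S = \sum_x [ -2\kappa \sum_\mu \phi_x\phi_{x+\mu} + \phi_x^2 + \lambda(\phi_x^2-1)^2 ], so that \lambda \to \infty freezes \phi_x to \pm 1 and gives back the Ising model:

#include <stdlib.h>
#include <math.h>

#define L 8
#define VOL (L*L*L*L)

static double phi[VOL];
static const double kappa = 0.18, lambda = 1.0;  /* illustrative couplings */

/* Sum of phi over the 8 nearest neighbours of a site in 4d, periodic boundaries. */
static double nn_sum(int s)
{
    int c[4]   = { s % L, (s / L) % L, (s / (L * L)) % L, s / (L * L * L) };
    int str[4] = { 1, L, L * L, L * L * L };
    double sum = 0.0;
    for (int mu = 0; mu < 4; mu++) {
        int up = s + (((c[mu] + 1) % L) - c[mu]) * str[mu];
        int dn = s + (((c[mu] - 1 + L) % L) - c[mu]) * str[mu];
        sum += phi[up] + phi[dn];
    }
    return sum;
}

/* One Metropolis sweep: propose a local change of phi and accept with prob. exp(-dS). */
static void sweep(void)
{
    for (int s = 0; s < VOL; s++) {
        double old = phi[s], trial = old + 1.5 * (2.0 * rand() / RAND_MAX - 1.0);
        double nn = nn_sum(s);
        double s_old = -2.0 * kappa * old   * nn + old   * old   + lambda * pow(old   * old   - 1.0, 2);
        double s_new = -2.0 * kappa * trial * nn + trial * trial + lambda * pow(trial * trial - 1.0, 2);
        if (exp(-(s_new - s_old)) > (double) rand() / RAND_MAX)
            phi[s] = trial;
    }
}

int main(void)
{
    for (int s = 0; s < VOL; s++) phi[s] = 1.0;   /* cold start */
    for (int it = 0; it < 1000; it++) sweep();    /* thermalize; measurements go here */
    return 0;
}

The accelerated version amounts to updating the even and odd sublattices in parallel, which is exactly the kind of regular loop the compiler directives above handle well.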


Confinement revisited

27/09/2012


Today a definitive, updated version of my paper on confinement appeared (see here). I wrote this paper last year after a question put to me by Owe Philipsen at Bari. The point is: given a decoupling solution for the gluon propagator in the Landau gauge, how does confinement come out? I would like to recall that a decoupling solution for the gluon propagator is one that reaches a finite non-zero value at zero momentum. All the fits carried out so far using lattice data show that a sum of a few Yukawa-like propagators gives an accurate representation of these data; for an example, see this paper. Sometimes this kind of propagator formula is dubbed the Stingl-Gribov formula, and it has a fourth-order polynomial in momenta in the denominator and a second-order one in the numerator. It was first postulated by Manfred Stingl in 1995 (see here). It is important to note that, given the presence of a fourth power of momenta, confinement is guaranteed, as a linearly rising potential can be obtained in agreement with lattice evidence. This is also in agreement with the area law first put forward by Kenneth Wilson.
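Schematically, a propagator with the properties just described (second-order numerator, fourth-order denominator) can be written as

D(p^2) = Z\,\frac{p^2 + m_1^2}{p^4 + m_2^2\,p^2 + m_3^4},

where the mass parameters are illustrative placeholders and not fitted values: D(0) = Z m_1^2/m_3^4 is finite and non-zero, as a decoupling solution requires, while the quartic term in the denominator is what permits a linearly rising potential.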

At that time I was convinced that a decoupling solution was enough, and so I pursued my analysis, arriving at the (wrong) conclusion, in a first version of the paper, that screening could be enough. The strong force would then have to saturate and, maybe, moving to larger distances, such a saturation would also have been seen on the lattice. This is not true, as I know today, and I learned it from a beautiful paper by Vicente Vento, Pedro González and Vincent Mathieu. They set out to solve the Dyson-Schwinger equations in the deep infrared to obtain the interquark potential. The decoupling solution appears at the one-gluon exchange level and, within this approximation, they prove that the potential they get is just a screening one, in close agreement with mine and with any other decoupling solution given in closed analytical form. So, the decoupling solution does not seem to agree with lattice evidence, which shows a linearly rising potential, perfectly confining and in agreement with what Wilson pointed out in his classic work of 1974. My initial analysis of this problem was incorrect, and Owe Philipsen was right to point out this difficulty in my approach.
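The point can be stated in one line. At the one-gluon exchange level, the Fourier transform of a single Yukawa-like propagator with mass m gives a screened potential,

V(r) \simeq -C\,\alpha_s\,\frac{e^{-m r}}{r},

with C and \alpha_s a generic color factor and coupling: this flattens out at large distances instead of growing like the confining \sigma r term seen on the lattice, and it is exactly the saturation I had wrongly accepted.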

This question never left my mind and, with the opportunity to go to Montpellier this year to give a talk (see here), I presented for the first time a solution to this problem. The point is that one needs a fourth-order term in the denominator of the propagator. This can happen if we are able to get higher-order corrections to the simplest one-gluon exchange approximation (see here). In my approach I can get loop corrections to the gluon propagator. The next-to-leading one is a two-loop term that gives rise to the right term in the denominator of the propagator. Besides, I am able to get the renormalization constant of the field and so I also get a running mass and coupling. I gave an idea of the way this computation should be performed at Montpellier, and in these days I have completed it.

The result has been a shocking one. Not only does one get the linearly rising potential, but the string tension is proportional to the one obtained in d=2+1 by V. Parameswaran Nair, Dimitra Karabali and Alexandr Yelnikov (see here)! This means that, apart from numerical factors and accounting for physical dimensions, the equation for the string tension in 3 and 4 dimensions is the same. We would also like to note that the result given by Nair, Karabali and Yelnikov is in close agreement with lattice data. In 3 dimensions the string tension is a pure number and can be computed explicitly on the lattice. So, our conclusions support each other.

These results are really important, as they give strong support to the ideas that have emerged in recent years about the behavior of the propagators of a Yang-Mills theory at low energies. We are ever closer to a clear understanding of confinement and of the way mass emerges at the macroscopic level. It is important to point out that the string tension in a Yang-Mills theory is one of the parameters that any serious theoretical approach, aiming to go beyond a simple phenomenological one, should be able to reproduce. We can say that the challenge is open.

Marco Frasca (2011). Beyond one-gluon exchange in the infrared limit of Yang-Mills theory arXiv arXiv: 1110.2297v4

Kenneth G. Wilson (1974). Confinement of quarks Phys. Rev. D 10, 2445–2459 (1974) DOI: 10.1103/PhysRevD.10.2445

Attilio Cucchieri, David Dudal, Tereza Mendes, & Nele Vandersickel (2011). Modeling the Gluon Propagator in Landau Gauge: Lattice Estimates of Pole Masses and Dimension-Two Condensates arXiv arXiv: 1111.2327v1

M. Stingl (1995). A Systematic Extended Iterative Solution for QCD Z.Phys. A353 (1996) 423-445 arXiv: hep-th/9502157v3

P. Gonzalez, V. Mathieu, & V. Vento (2011). Heavy meson interquark potential Physical Review D, 84, 114008 arXiv: 1108.2347v2

Marco Frasca (2012). Low energy limit of QCD and the emerging of confinement arXiv arXiv: 1208.3756v2

Dimitra Karabali, V. P. Nair, & Alexandr Yelnikov (2009). The Hamiltonian Approach to Yang-Mills (2+1): An Expansion Scheme and Corrections to String Tension Nucl.Phys.B824:387-414,2010 arXiv: 0906.0783v1


An interesting review

14/09/2011


It has been some time since I last wrote a post, but the good reason is that I was in Leipzig for the IRS 2011 Conference, a very interesting event in a beautiful city. It was inspiring to be in the city where Bach spent a great part of his life. Back home, I checked my daily arXiv listings as usual, and there was an important review by Boucaud, Leroy, Le Yaouanc, Micheli, Pène and Rodríguez-Quintero. This is the French group that has produced striking results in the analysis of the Green's functions of Yang-Mills theory.

In this paper they do a great job of reviewing the current situation and clarifying the main aspects of the analysis carried out using Dyson-Schwinger equations. These form a tower of equations for the n-point functions of a quantum field theory that can generally only be solved through some truncation (with an exception, see here) that cannot be completely controlled. The reason is that each lower-order equation depends on n-point functions of higher orders and so, at some point, we have to postulate the behavior of some of these higher-order functions in order to truncate the hierarchy. But this choice is generally not under control.
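Schematically, the equation for the two-point function has the structure

D^{-1}(p) = D_0^{-1}(p) + \Pi\left[D, \Gamma^{(3)}, \Gamma^{(4)}, \ldots\right](p),

where the self-energy \Pi involves the three- and four-point functions, which in turn obey equations containing still higher functions. Truncating means guessing a form for \Gamma^{(3)}, \Gamma^{(4)}, \ldots at some stage; this is just the generic skeleton, not the specific truncation used by any of the groups cited.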

For these techniques there is a landmark date, Regensburg 2007, when some kind of wall came down. Until then, the common wisdom was a scenario with a gluon propagator going to zero as momenta go to zero while, in the same limit, the ghost propagator goes to infinity faster than in the free case: the gluon propagator was suppressed and the ghost propagator enhanced in the infrared. On the lattice, such a behavior was never explicitly observed, but it was argued that the main reason was the small volumes considered in these computations. In 2007, lattice computations reached huge volumes, up to (27 fm)^4, and the inescapable conclusion was that the lattice produced another solution: a gluon propagator reaching a finite non-zero value and a ghost propagator behaving exactly as that of a free particle. This was also the prediction of the French group, together with other researchers such as Cornwall, Papavassiliou, Aguilar, Binosi and Natale. So, this new solution entered the mainstream of the analysis of Yang-Mills theory in the infrared and was dubbed the “decoupling solution”, to distinguish it from the former one, called instead the “scaling solution”.
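In formulas, and using standard notation (the exponent \kappa \approx 0.595 is the value usually quoted for the scaling analysis), the two scenarios for the gluon propagator D and the ghost propagator G as p^2 \to 0 read

scaling: D(p^2) \sim (p^2)^{2\kappa - 1} \to 0, \qquad G(p^2) \sim (p^2)^{-\kappa - 1} (enhanced);

decoupling: D(p^2) \to D(0) > 0, \qquad G(p^2) \sim 1/p^2 (free-like).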

In this review, the authors point out an important conclusion: the reason why people missed the decoupling solution and identified only the scaling one was that their truncation forced the Schwinger-Dyson equations toward a finite non-zero value of the strong coupling constant in the infrared. This is a crucial point, as it means that the authors who found the scaling solution were assuming a non-trivial infrared fixed point for the Yang-Mills equations. This was also the recurring idea in those days but, of course, while this is surely true for QCD, a world without quarks does not exist and, a priori, nothing can be said about Yang-Mills theory, a theory with only gluons and no quarks. Quarks change the situation dramatically, as can also be seen from asymptotic freedom: we are safe because there are only six flavors. But about the infrared of pure Yang-Mills theory nothing can be said, as such a theory is never seen in nature except interacting with fermionic fields.

Indeed, as pointed out in the review, the running coupling was seen to behave as in the following figure (this was obtained by the German group, see here)

Running coupling of a pure Yang-Mills theory as computed on the lattice

This result is quite shocking and completely counterintuitive. It points out, even if it does not yet confirm, that a pure Yang-Mills theory could have a trivial infrared fixed point! This is something that defies common wisdom and can explain why earlier researchers using the Dyson-Schwinger approach could have missed the decoupling solution. Indeed, this solution seems fully consistent with a trivial fixed point, and this can also be inferred from the goodness of the fit of the gluon propagator to a Yukawa-like propagator, if we content ourselves with the best agreement in the deep infrared and in the deep ultraviolet, where asymptotic freedom sets in. In fact, with a trivial fixed point the theory is free in this limit, but you cannot expect agreement with a free propagator over the whole range of energies.

Currently, the question of the right infrared behavior of the two-point functions of Yang-Mills theory is still hotly debated, and what is at stake here is the correct understanding and handling of low-energy QCD. This is one of the most fundamental problems in physics and one I would like to know the answer to.

Ph. Boucaud, J. P. Leroy, A. Le Yaouanc, J. Micheli, O. Péne, & J. Rodríguez-Quintero (2011). The Infrared Behaviour of the Pure Yang-Mills Green Functions arXiv arXiv: 1109.1936v1

Marco Frasca (2009). Exact solution of Dyson-Schwinger equations for a scalar field theory arXiv arXiv: 0909.2428v2

I. L. Bogolubsky, E. -M. Ilgenfritz, M. Müller-Preussker, & A. Sternbeck (2009). Lattice gluodynamics computation of Landau-gauge Green’s functions in the deep infrared Phys.Lett.B676:69-73,2009 arXiv: 0901.0736v3



It was twenty years ago today . . .

16/08/2011


With these beautiful words begins a recollection paper by the founder of arXiv, Paul Ginsparg. It is worth reading, as this history spans a number of years exactly overlapping the computer revolution that definitively changed our lives. What Paul also changed, through these new information tools, was the way researchers approach scientific communication. It is a revolution that has not stopped yet, and all the journals I submit my papers to have a link to arXiv for direct uploading of the preprint. This change has also had a great impact on the way these same journals present themselves to authors, readers and referees on their websites.

For my readers, I would just like to point out how relevant all this has been for our community through Grisha Perelman's case. I think all of you are well aware that Perelman never published his papers in a journal: you can find them on arXiv. Those preprints were worth as much as a Fields Medal and a Millennium Prize. Not bad, I should say, for a couple of unpublished papers. Indeed, it is common for a paper to be widely discussed well before its publication, and often a preprint becomes a talking point in the community without ever seeing the light of publication. It is quite common for those of us doing research to console colleagues complaining about the harsh peer-review procedure by saying that today arXiv exists, and that is enough to make your work widely known.

I have been a submitter since 1994, almost from the very start, and I hope that this idea's string of successes will never end.

Finally, to show how useful arXiv is for our community, I would like to point out a couple of papers for your summer reading. The first one is this, from R. Aouane, V. Bornyakov, E.-M. Ilgenfritz, V. Mitrjushkin, M. Müller-Preussker and A. Sternbeck. My readers should know that these researchers always do fine work and get important results from their lattice computations. The same happens here, where they study the gluon and ghost propagators at finite temperature in the Landau gauge. Their conclusion about Gribov copies is really striking, supporting my general view on this matter (see here) that Gribov copies are not essential, not even when one raises the temperature. Besides, they discuss the question of a proper order parameter to identify the phase transition that we know exists in this case.

The next paper is authored by Tereza Mendes, Axel Maas and Stefan Olejnik (see here). The idea of this work is to consider a gauge, the \lambda-gauge, with a free parameter interpolating between different gauges, to see how smooth the transition is and how the propagators change. They reach a volume of 70^4, but Tereza told me that the errors are still too large for a clean comparison with smaller volumes. In any case, this is a route worth pursuing, and I am curious how the interpolated propagator behaves in the deep infrared on larger lattices.

Discussions on Higgs identification are still very much alive (you can see here). Take a look and enjoy!

Paul Ginsparg (2011). It was twenty years ago today … arXiv arXiv: 1108.2700v1

R. Aouane, V. Bornyakov, E. -M. Ilgenfritz, V. Mitrjushkin, M. Müller-Preussker, & A. Sternbeck (2011). Landau gauge gluon and ghost propagators at finite temperature from quenched lattice QCD arXiv arXiv: 1108.1735v1

Axel Maas, Tereza Mendes, & Stefan Olejnik (2011). Yang-Mills Theory in lambda-Gauges arXiv arXiv: 1108.2621v1


Evidence of a QCD critical endpoint at RHIC

21/06/2011


A critical endpoint in QCD is a kind of holy grail of nuclear physics. It has been theorized as a point where deconfinement occurs and hadronic matter gives way to some kind of plasma of quarks and gluons. We know that chiral symmetry breaking is something people proposed several years ago, and we recently gave a proof of existence of such a transition (see here). But here the situation is more complex: we have essentially two physical variables to describe the phase diagram, temperature and chemical potential. This makes lattice computations a kind of nightmare. The reason is the sign problem. Some years ago Zoltan Fodor and Sandor Katz came out with a pioneering paper (see here) doing lattice computations in which the chemical potential introduces an imaginary factor: the infamous sign problem. Discretization implies it, but a theoretical physicist can happily live just ignoring it. Fodor and Katz evaded the problem by simply taking an absolute value, but this approach was criticized, casting doubt on their results at chemical potential different from zero. It should be said that they gave evidence for the existence of the critical point, and surely their results are unquestionably correct at zero chemical potential, in close agreement with mine and others' findings. A lucid statement of the problems of lattice computations at finite temperature and density was recently given by Philippe de Forcrand (see here).
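For readers who have never met it, the sign problem can be stated in one line (standard textbook material, not specific to the papers cited here): the lattice Dirac operator satisfies

\det M(\mu)^{*} = \det M(-\mu^{*}),

so the fermion determinant is real only at zero (or purely imaginary) chemical potential; at real non-zero \mu it is complex and cannot serve as a Monte Carlo probability weight, which is why tricks like reweighting, Taylor expansion or analytic continuation are needed.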

So far, people have produced several results by working with phenomenological models like the Nambu-Jona-Lasinio or the sigma model. This way of working arises from our current inability to manage QCD at very low energies but, on the other hand, we are well aware that these models seem to represent reality quite well. The reason is that the Nambu-Jona-Lasinio model really is the low-energy limit of QCD, but I will not discuss this matter here, having done so before (see here). Besides, the sigma model arises naturally in the low-energy limit interacting with quarks. The sigma field is a true physical field that drives the phase transitions in low-energy QCD.
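For readers who have never seen it written down, the simplest two-flavor Nambu-Jona-Lasinio Lagrangian has the standard form

\mathcal{L} = \bar\psi\,(i\gamma^\mu\partial_\mu - m)\,\psi + G\left[(\bar\psi\psi)^2 + (\bar\psi\,i\gamma_5\vec\tau\,\psi)^2\right],

with G a coupling fixed phenomenologically together with a cutoff; the sigma field mentioned above can be viewed as the scalar \bar\psi\psi channel of this four-fermion interaction.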

While the hunt for the critical point in the lattice realm has been open since the paper by Fodor and Katz, the experimental side is somewhat more difficult to exploit. The only facility we have at our disposal is RHIC, and not many proposals for identifying the critical point from experimental data were available until a fine one by Misha Stephanov a few years ago (see here and here). The idea runs as follows. At the critical point, fluctuations are no longer expected to be Gaussian, and correlations extend across all the hadronic matter, as the correlation length diverges. Non-Gaussianity implies that if we compute cumulants, linked to higher-order moments of the probability distribution, these will depend on some power of the correlation length; in particular, moments like skewness and kurtosis, which measure the deviation from Gaussianity, start to change. Notably, the kurtosis is expected to change sign. So, if we are able to measure such a deviation at a laboratory facility, we are done: we have evidence for a critical point and for critical behavior of hadronic matter. We just note that Stephanov carries out his computations using a sigma model, and this is a really brilliant insight.
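To make this concrete, for a measured multiplicity N with \delta N = N - \langle N\rangle and \sigma^2 = \langle(\delta N)^2\rangle, the relevant quantities are

S = \frac{\langle(\delta N)^3\rangle}{\sigma^3}, \qquad \kappa = \frac{\langle(\delta N)^4\rangle}{\sigma^4} - 3,

and Stephanov's estimate (quoted here from memory) is that the third and fourth cumulants grow roughly like \xi^{4.5} and \xi^{7} with the correlation length \xi, so even a modest growth of \xi near the critical point should show up strongly in these moments.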

At RHIC, first evidence of this has been obtained by the STAR Collaboration (see here). These are preliminary results, but further data are expected this year. The main result is given in the following figure. We see the comparison with lattice data, the red points representing Au+Au collisions, and the kurtosis goes down to negative values! The agreement with lattice data is striking, and this is already evidence for a critical endpoint. But it is not enough, as can be seen from the large error bars. Indeed, further data are needed to draw a definitive conclusion and, as said, these are expected this year. Anyhow, this is already a shocking result. Surely, we will stay tuned for this mounting evidence of a critical endpoint. It would represent a major discovery for nuclear physics and, in some way, it would make lattice computations easier, with a proper understanding of how the sign problem should be settled.

Marco Frasca (2011). Chiral symmetry in the low-energy limit of QCD at finite temperature arXiv arXiv: 1105.5274v2

Z. Fodor, & S. D. Katz (2001). Lattice determination of the critical point of QCD at finite T and  \mu JHEP 0203 (2002) 014 arXiv: hep-lat/0106002v2

Philippe de Forcrand (2010). Simulating QCD at finite density PoS (LAT2009)010, 2009 arXiv: 1005.0539v2

M. A. Stephanov (2008). Non-Gaussian fluctuations near the QCD critical point Phys.Rev.Lett.102:032301,2009 arXiv: 0809.3450v1

Christiana Athanasiou, Krishna Rajagopal, & Misha Stephanov (2010). Using Higher Moments of Fluctuations and their Ratios in the Search for the QCD Critical Point Physical Review D arXiv: 1006.4636v2

Xiaofeng Luo (2011). Probing the QCD Critical Point with Higher Moments of Net-proton Multiplicity Distributions arXiv arXiv: 1106.2926v1


Chiral condensates in a magnetic field: Accepted!

20/04/2011


As my readers may know, I have a paper written in collaboration with Marco Ruggieri (see here). Marco is currently working at the Yukawa Institute in Kyoto (Japan). The great news is that our paper has been accepted for publication in Physical Review D. I am really happy with this very good result of a collaboration that I hope will endure. Currently, we are working on a proof of existence of the critical endpoint of QCD using the Nambu-Jona-Lasinio model. This is an open question that is seriously difficult to answer, also because of a fundamental problem encountered on the lattice: the so-called sign problem. So, a mathematical proof would be a breakthrough in the field, with the possibility of experimental confirmation at laboratory facilities.

Marco himself has proposed a novel approach to proving the existence of the critical endpoint that bypasses the sign problem (see here).

So, I hope I have had the chance to convey to my readership the excitement of this line of research and how strongly it is entangled with the understanding of low-energy QCD and the deeper question of the mass gap in Yang-Mills theory. Surely, I will keep posting on this. Just stay tuned!

Marco Frasca, & Marco Ruggieri (2011). Magnetic Susceptibility of the Quark Condensate and Polarization from Chiral Models arXiv arXiv: 1103.1194v1

Philippe de Forcrand (2010). Simulating QCD at finite density PoS (LAT2009)010, 2009 arXiv: 1005.0539v2


QCD at finite temperature: Does a critical endpoint exist?

05/04/2011


Marco Ruggieri is currently a post-doc fellow at the Yukawa Institute for Theoretical Physics in Kyoto (Japan). Marco got his PhD at the University of Bari in Italy and spent a six-month period at CERN. Currently, his main research areas are QCD at finite temperature and high density, QCD behavior in strong magnetic fields, and effective models for QCD; you can find a complete CV at his site. So, in view of his expertise, I asked him for a guest post on my blog to give an idea of the current situation in these studies. Here it is.

It is well known that Quantum Chromodynamics (QCD) is the most accredited theory describing the strong interactions. One of the most important problems of modern QCD is to understand how color confinement and chiral symmetry breaking are affected by a finite temperature and/or a finite baryon density. As far as the former is concerned, lattice simulations convince us that both deconfinement and (approximate) chiral symmetry restoration take place in a narrow range of temperatures; see this recent work for a review. On the other hand, it is problematic to perform lattice simulations at finite quark chemical potential in true QCD, namely with the number of colors equal to three, because of the so-called sign problem; see here for a recent review of this topic. It is thus very difficult to access the high-density region of QCD starting from first-principles calculations.

Despite this difficulty, much work has been done to avoid the sign problem and make quantitative predictions about the shape of the phase diagram of three-color QCD in the temperature-chemical potential plane; see here again for a review. One of the most important theoretical issues along this line is the search for the so-called critical endpoint of the QCD phase diagram, namely the point where a crossover and a first-order transition line meet. Its existence was suggested by Asakawa and Yazaki (AY) several years ago (see here) using an effective chiral model; in 2002, Fodor and Katz (FK) performed the first lattice simulation (see here) showing that the idea of AY could be realized in QCD with three colors. However, the FK estimate is seriously affected by the sign problem. Hence, it is still debated today whether the critical endpoint exists in QCD or not.

Referring to this for a comprehensive review of some of the techniques adopted by the lattice community to avoid the sign problem and detect the critical endpoint, it is worth citing an article by Marco Ruggieri, which appeared a few days ago on arXiv, in which an exotic possibility for detecting the critical endpoint by means of lattice simulations that avoid the sign problem is put forward; see here. We report, with the author's permission, the abstract below:

We suggest the idea, supported by concrete calculations within chiral models, that the critical endpoint of the phase diagram of Quantum Chromodynamics with three colors can be detected, by means of Lattice simulations of grand-canonical ensembles with a chiral chemical potential, \mu_5, conjugated to chiral charge density. In fact, we show that a continuation of the critical endpoint of the phase diagram of Quantum Chromodynamics at finite chemical potential, \mu, to a critical end point in the temperature-chiral chemical potential plane, is possible. This study paves the way of the mapping of the phases of Quantum Chromodynamics at finite \mu, by means of the phases of a fictitious theory in which \mu is replaced by \mu_5.
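For readers unfamiliar with the notation, the chiral chemical potential couples to the chiral charge density exactly as an ordinary chemical potential couples to the quark number density (a standard definition, not part of the quoted abstract):

\mathcal{L} \supset \mu_5\, n_5 = \mu_5\, \bar\psi\,\gamma^0\gamma^5\,\psi,

and the key technical point is that, unlike a term in \mu, a term in \mu_5 leaves the fermion determinant real and positive, so standard lattice Monte Carlo can be applied without a sign problem.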

Rajan Gupta (2011). Equation of State from Lattice QCD Calculations arXiv arXiv: 1104.0267v1

Philippe de Forcrand (2010). Simulating QCD at finite density PoS (LAT2009)010, 2009 arXiv: 1005.0539v2

M. Asakawa, & K. Yazaki (1989). Chiral restoration at finite density and temperature Nuclear Physics A, 504 (4), 668-684 DOI: 10.1016/0375-9474(89)90002-X

Z. Fodor, & S. D. Katz (2001). Lattice determination of the critical point of QCD at finite T and \mu JHEP 0203 (2002) 014 arXiv: hep-lat/0106002v2

Marco Ruggieri (2011). The Critical End Point of Quantum Chromodynamics Detected by Chirally Imbalanced Quark Matter arXiv arXiv: 1103.6186v1

