Back to CUDA

11/02/2013

It was about two years ago that I wrote my last post about CUDA technology by NVIDIA (see here). At that time I had added two new graphics cards to my PC, coming close to 3 Tflops in single precision for lattice computations. Then came an unlucky turn of events: the cards were not working properly, went back to the seller, and I was completely refunded. Meanwhile, the motherboard also failed and the hardware was largely replaced, so for a long time I had no opportunity to work with CUDA and perform the intensive computations I had planned. As is well known, one can find a lot of software exploiting this excellent technology provided by NVIDIA and, during these years, it has spread widely, both in academia and industry, making researchers' lives a lot easier. Personally, I am also using it at my workplace and it is really exciting to have such computational capability at hand at a really affordable price.

Now, I am once again able to equip my personal computer at home with a powerful Tesla card. Some of these cards are currently being decommissioned at the end of their service life, replaced by more modern ones, and so can be found at a really low price on auction sites like eBay. So, I bought a Tesla M1060 for about 200 euros. As the name suggests, this card was not conceived for a personal computer but rather for servers produced by some OEMs. This is also clear from its passive cooler: the card is sized to fit into a server, while active cooling through fans is expected to be provided by the server itself. Indeed, I added an 80 mm Enermax fan to my chassis (also an Enermax Enlobal) to make sure the motherboard temperature does not rise too high. My motherboard is an ASUS P8P67 Deluxe. This is a very good board, as usual for ASUS, providing three PCIe 2.0 slots so that, in principle, one can add up to three video cards. But if you have a couple of NVIDIA cards in SLI configuration, the slots work at x8, while a single video card works at x16. Of course, if you plan to work with these configurations, you will need a proper PSU. I have a Cooler Master Silent Pro Gold 1000 W, well beyond my needs; it remains from my preceding configuration and is performing really well. I have also changed my CPU, which is now an Intel Core i3-2125 with two cores at 3.30 GHz and 3 MB cache. Finally, I added 16 GB of Corsair Vengeance DDR3 RAM.

The installation of the card went really smoothly and I had it up and running in a few minutes on Windows 8 Pro 64-bit, after installing the proper drivers. I checked it with Matlab 2011b and the PGI compilers, with CUDA Toolkit 5.0 properly installed. All worked fine. I would like to spend a few words on the PGI compilers, produced by The Portland Group. I got a trial license at home and tested them, while at my workplace we have a full working license. These compilers make writing accelerated CUDA code absolutely easy: all you need is to insert some preprocessing directives into your C or Fortran code. I have run some performance tests and the gain is really impressive, without ever writing a single line of CUDA code; a small example of the directive style is sketched below. These compilers can also be hooked into Matlab to produce mex-files or S-functions, even if they are not yet supported by MathWorks (they should be!), and this too I have verified without too much difficulty, both for C and Fortran.
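Just to give the idea, here is a minimal sketch of what such a directive-annotated loop looks like. This is neither my code nor an excerpt from the PGI documentation: the array size, the function, and the compile line are illustrative assumptions about a typical OpenACC-style build with pgcc.

```c
/* Hypothetical sketch: a SAXPY loop offloaded through an OpenACC-style
   directive, the kind of annotation the PGI compilers turn into GPU code.
   The compile line is illustrative, e.g.
       pgcc -acc -Minfo=accel saxpy_acc.c -o saxpy_acc                     */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    /* The directive asks the compiler to build a GPU kernel for this loop
       and to handle the host/device data transfers itself. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(N, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]);   /* expect 5.0 */

    free(x);
    free(y);
    return 0;
}
```

The `-Minfo=accel` flag is handy here, as the compiler reports which loops it actually managed to offload, and reading that feedback is most of the tuning work when using directives.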

Finally, I would like to give you an idea of the way I will use CUDA technology for my aims. What I am doing right now is porting some good code for the scalar field, and I would like to use it in the limit of large self-interaction to derive the spectrum of the theory. It is well known that if you take the limit of the self-interaction going to infinity you recover the Ising model, but I would like to see what happens at intermediate yet large values, as I was not able to get any hint on this from the literature, notwithstanding that this is the workhorse for anyone doing lattice computations. What seems to matter today is to show triviality in four dimensions, a well-established piece of evidence. As soon as the accelerated code runs properly, I plan to share it here: it is very easy to find good code for lattice QCD but it is very difficult to find equally good code for scalar field theory. Stay tuned!
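Just to fix ideas on the kind of kernel involved, below is a minimal CUDA sketch of an even/odd Metropolis sweep for the four-dimensional lattice φ⁴ theory. This is not the code I am porting: the lattice size, the normalization of the local action, and all parameter names are my own illustrative assumptions.

```cuda
// Hypothetical sketch: checkerboard Metropolis update for lattice phi^4 in 4D.
// Lattice size, couplings and the action normalization are illustrative only.
#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <cstdio>

#define N   16                       // sites per direction (illustrative)
#define VOL (N * N * N * N)

__device__ int wrap(int i) { return (i + N) % N; }   // periodic boundaries

__device__ int idx(int x, int y, int z, int t)
{
    return ((t * N + z) * N + y) * N + x;
}

// Update all sites of one parity: their neighbours belong to the other
// parity, so every thread in the sweep can proceed independently.
__global__ void metropolis_sweep(float *phi, curandState *rng,
                                 float msq, float lambda,
                                 float step, int parity)
{
    int site = blockIdx.x * blockDim.x + threadIdx.x;
    if (site >= VOL) return;

    int x = site % N, y = (site / N) % N,
        z = (site / (N * N)) % N, t = site / (N * N * N);
    if ((x + y + z + t) % 2 != parity) return;

    // Sum over the 8 nearest neighbours.
    float nn = phi[idx(wrap(x+1),y,z,t)] + phi[idx(wrap(x-1),y,z,t)]
             + phi[idx(x,wrap(y+1),z,t)] + phi[idx(x,wrap(y-1),z,t)]
             + phi[idx(x,y,wrap(z+1),t)] + phi[idx(x,y,wrap(z-1),t)]
             + phi[idx(x,y,z,wrap(t+1))] + phi[idx(x,y,z,wrap(t-1))];

    curandState st = rng[site];
    float p_old = phi[site];
    float p_new = p_old + step * (2.0f * curand_uniform(&st) - 1.0f);

    // Local action: S(p) = (4 + msq/2) p^2 - p*nn + (lambda/4) p^4.
    float s_old = (4.0f + 0.5f*msq)*p_old*p_old - p_old*nn
                + 0.25f*lambda*p_old*p_old*p_old*p_old;
    float s_new = (4.0f + 0.5f*msq)*p_new*p_new - p_new*nn
                + 0.25f*lambda*p_new*p_new*p_new*p_new;

    // Metropolis accept/reject.
    if (curand_uniform(&st) < expf(s_old - s_new))
        phi[site] = p_new;

    rng[site] = st;
}

__global__ void init_rng(curandState *rng, unsigned long long seed)
{
    int site = blockIdx.x * blockDim.x + threadIdx.x;
    if (site < VOL) curand_init(seed, site, 0, &rng[site]);
}

int main()
{
    float *phi;
    curandState *rng;
    cudaMalloc((void **)&phi, VOL * sizeof(float));
    cudaMalloc((void **)&rng, VOL * sizeof(curandState));
    cudaMemset(phi, 0, VOL * sizeof(float));        // cold start

    int threads = 256, blocks = (VOL + threads - 1) / threads;
    init_rng<<<blocks, threads>>>(rng, 1234ULL);

    for (int sweep = 0; sweep < 100; ++sweep) {     // a few thermalization sweeps
        metropolis_sweep<<<blocks, threads>>>(phi, rng, 0.1f, 10.0f, 0.5f, 0);
        metropolis_sweep<<<blocks, threads>>>(phi, rng, 0.1f, 10.0f, 0.5f, 1);
    }
    cudaDeviceSynchronize();
    printf("done: 100 sweeps on a %d^4 lattice\n", N);

    cudaFree(phi);
    cudaFree(rng);
    return 0;
}
```

The checkerboard split is what makes the update massively parallel: in a given sweep a thread only reads sites of the opposite parity, so no two threads ever touch the same field value, and the large self-interaction regime is reached simply by cranking up the quartic coupling.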


The XV Workshop on Statistical Mechanics and nonperturbative Field Theory

25/09/2011


This week I was in Bari, as the physics department of that university organized a major event: SM&FT 2011. This is a biennial conference with the aim of discussing recent achievements in fields such as statistical mechanics and quantum field theory, which have a lot in common. The organizers are well-known physicists, so it was a pleasure for me to see my contribution accepted; Leonardo Cosmai wrote to me confirming my participation. Leonardo, together with Paolo Cea, Alessandro Papa and Massimo D'Elia, has produced a lot of significant work in quantum field theory, and a recent paper by Cosmai and Cea stirred some buzz in the blogosphere too (see here). Their forecast for the Higgs boson agrees quite well with my view on this matter. They were also part of the organizing committee. Of course, I was in Bari with my friend Marco Ruggieri, who lived there for more than twelve years and earned his PhD at that university.

The scientific content was really interesting and I had the chance to learn something more about lattice field theory. You can find all the talks here. On this point, it should be said that people still work with rather small lattices. While this has been the natural way to manage QCD on the lattice given limited computational resources, things are rapidly changing thanks to CUDA, as I have discussed at length in this blog and as was presented in some talks at this conference. Small groups will be able, with a very small slice of their budgets, to analyze increasingly large lattice volumes. Besides, large-scale projects in this direction, mostly due to INFN and extending the APE project originated by Nicola Cabibbo and Giorgio Parisi, were also presented (see the talks by Francesco Di Renzo and Piero Vicini). A typical situation in this kind of lattice analysis, improved using CUDA, was also pointed out by Massimo D'Elia in his talk: thanks to this new technology they are significantly increasing the volumes. You can compare the content of his talk with that of his collaborator Francesco Negro, who discussed a really interesting problem on the lattice (and a promise for the future with CUDA), at smaller volumes due to reduced computational resources. My interest in this group's activity and in Francesco's work is strongly linked to a paper that Marco Ruggieri and I wrote together about the QCD vacuum in the presence of a magnetic field (see here). Francesco's work, even if at small volumes, provides interesting conclusions. It should be said that the Nambu-Jona-Lasinio model is alive and kicking there.

Petruzzelli Theater

On the strictly theoretical side, I would like to point out the talk by Giuseppe Mussardo, with whom I have had a nice mail exchange and who is the author of some beautiful books (e.g. see here), and the ones by Adriano Di Giacomo and Valentin Zakharov, which seem to have some relevant points of contact with my work. There was also a talk by Edward Shuryak, one of the proponents of the instanton liquid picture of the QCD vacuum, which is strongly supported by lattice simulations and by theoretical works like mine.

At the end of the social dinner, I had some interesting discussions with Di Giacomo and Cosmai. There was some excitement about the seminar on neutrinos announced by CERN and INFN. In a pub after the dinner, I had some interesting discussions about a proposal by Michele Pepe and others (see his talk) that holds the promise of significantly improving lattice computations by removing artifacts. It was also a chance to hear the point of view of Owe Philipsen (see his talk) on the current situation of lattice simulations of QCD at finite temperature. As I have discussed in some posts in this blog, these simulations are plagued by the infamous sign problem, and much of the effort goes into trying to get rid of it. My friend Marco expressed the somewhat pessimistic view that a critical endpoint will never be seen in lattice computations. Indeed, he is the proponent of the use of a chiral chemical potential, which does not suffer from this stumbling block on the lattice (see his talk). This approach holds the promise of reaching the goal, as he showed in a recent paper, and his proposal is under scrutiny by the lattice community. The QCD critical endpoint is a Holy Grail for all of us working in this area, as QCD displays quite a rich phase diagram and we also have a lot of experimental data from heavy ion collisions to understand. You should take a look at the talks of both Marco and Alessandro Papa.

I would have liked to cite all the talks and I apologize for the omissions. If my readers have some time to spend, it is worth reading them all, as the conference was well organized, with very interesting content, in a really nice atmosphere somewhat excited by the neutrino news of the last two days.

P. Cea & L. Cosmai (2011). The trivial Higgs boson: first evidences from LHC. arXiv: 1106.4178v1

Marco Frasca & Marco Ruggieri (2011). Magnetic Susceptibility of the Quark Condensate and Polarization from Chiral Models. Phys. Rev. D 83, 094024 (2011). arXiv: 1103.1194v1

Marco Ruggieri (2011). The Critical End Point of Quantum Chromodynamics Detected by Chirally Imbalanced Quark Matter. Phys. Rev. D 84, 014011 (2011). arXiv: 1103.6186v2

