A day as a physicist

28/02/2011

ResearchBlogging.org

I often speak in my blog about technical matters, as I believe these represent the most important part of my intellectual activity. Indeed, my work extends across so many areas of physics because I am deeply involved with problems of theoretical physics. So, I would like to tell you about my main activities today as a physicist. These days I am involved in a collaboration with Marco Ruggieri (see here). Marco is working on understanding how the QCD vacuum gets modified by strong magnetic fields. This question is relevant as such physical effects could be observed at LHC, so having a prediction for them is paramount. Hurdles arise here from the fact that we currently lack a full understanding of low-energy QCD, and one is forced to use phenomenological models that are more or less useful in this case. Typical choices are the Nambu-Jona-Lasinio model and the \sigma model with a coupled pion field. In this way one can arrange some theoretical predictions for the magnetic susceptibility and other observables. Strange as it may seem, these two models can be made to coincide, and so what one can predict with the Nambu-Jona-Lasinio model is also there with the \sigma model.

The reason why these two widely known models can coincide emerges from bosonization techniques (see here). If you start with a linear \sigma model assuming just a mass term, and you consider its interaction with a fermion field (the Yukawa model), you can integrate away the fermion field. This integration makes the potential of the scalar field absolutely non-trivial: the vacuum is no longer the standard one and you get a mass gap equation instead. The old Yukawa model thus yields really non-trivial physics, and chiral symmetry is broken. Now, one can take the Nambu-Jona-Lasinio model and, using a transformation borrowed from condensed matter physics (the Hubbard-Stratonovich transformation), change the quartic fermion interaction into a quadratic scalar term, with a scalar and a pseudoscalar field (the pion) interacting with the quark fields: the same terms as in the \sigma model. At this stage one can derive a mass gap equation, the one well known in the literature and the same that can be obtained from the \sigma model. One can make the two models identical at this order. Going to higher orders, the loop expansion produces kinetic terms in the Nambu-Jona-Lasinio model and higher-order corrections to the potential of the \sigma model. I would like to check these higher-order corrections between the two models. But one can see that the Yukawa model can be reduced to a contact interaction between fermions, as I have proved quite recently (see here). So, there is a deep relation between the Nambu-Jona-Lasinio model and the Yukawa model. I would like to prove a theorem about this but, for the moment, it is just ongoing work with Marco.
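To make the mechanism explicit, here is the Hubbard-Stratonovich step in schematic form; conventions (signs, normalization of the coupling G) vary in the literature, so take this as a textbook sketch rather than the exact Lagrangian used in our work:

```latex
% Chiral NJL Lagrangian with a quartic contact interaction
\mathcal{L}_{\mathrm{NJL}} = \bar\psi\, i\gamma^\mu\partial_\mu\psi
  + G\left[(\bar\psi\psi)^2 + (\bar\psi\, i\gamma_5\vec\tau\,\psi)^2\right]
% Hubbard-Stratonovich: introduce auxiliary fields \sigma and \vec\pi;
% integrating them out reproduces the quartic terms, so this is equivalent:
\mathcal{L}'_{\mathrm{NJL}} = \bar\psi\, i\gamma^\mu\partial_\mu\psi
  - \bar\psi\left(\sigma + i\gamma_5\,\vec\tau\cdot\vec\pi\right)\psi
  - \frac{\sigma^2 + \vec\pi^2}{4G}
```

Integrating out the fermions then produces an effective potential for \sigma and \vec\pi whose minimum yields the mass gap equation mentioned above, which is where the two descriptions meet at leading order.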

Finally, a beautiful way a physicist can contribute to our community is by acting as a referee for journals. I have been doing this work since 1996, when the American Physical Society recruited me through the good offices of an associate editor of Physical Review A. This is an important way to help science: progress is achieved through the cooperation of many people in the community, and if today we enjoy such a great understanding of the world around us, it is because this cooperation has worked satisfactorily well. Indeed, this is an honor for a scientist.

Last but not least, I like to write, as I am doing now, to make things widely known. This is what I hope I am doing best.

D. Ebert (1997). Bosonization in Particle Physics. arXiv: hep-ph/9710511v1

Marco Frasca (2010). Glueball spectrum and hadronic processes in low-energy QCD. Nucl.Phys.Proc.Suppl. 207-208:196-199, 2010. arXiv: 1007.4479v2


Igor Suslov and the beta function of the scalar field

21/02/2011


I think blogs are a very good vehicle for a scientist to make his or her work widely known, and they can also be really helpful for colleagues doing research in the same field. This is the case of Igor Suslov at the Kapitza Institute in Moscow. Igor is doing groundbreaking research in quantum field theory and, particularly, his main aim is to obtain the beta function of the scalar field in the limit of a very large coupling. This means that Igor's field of research largely overlaps with mine. Indeed, I have had some e-mail exchanges with him and we have cited each other's work. Our conclusions agree perfectly, and he was able to obtain the general result that, for a very large bare coupling \lambda, one has

\beta(\lambda)=d\lambda

where d is the number of dimensions. This means that for d=4 Igor recovers my result. More important is the fact that from this result one can conclude that the scalar theory is indeed trivial in four dimensions, a long-sought result. This should give an idea of the great quality of this author's work.
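To see what this behavior implies, one can integrate the flow; with the standard convention \mu\,d\lambda/d\mu = \beta(\lambda), the asymptotic result above gives

```latex
\mu\frac{d\lambda}{d\mu} = d\,\lambda
\qquad\Longrightarrow\qquad
\lambda(\mu) = \lambda(\mu_0)\left(\frac{\mu}{\mu_0}\right)^{d}
```

so that the coupling decreases monotonically toward the infrared, consistent with a trivial (Gaussian) fixed point at \lambda = 0, at least while \lambda remains large enough for the asymptotic formula to apply.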

On the same track, today on arXiv Igor posted another important paper (see here). The aim of this paper is to get higher-order corrections to the aforementioned result. So, he first gives a sound explanation of why one can meaningfully take the bare coupling to run from 0 to infinity and then, using a lattice formulation of the n-component scalar field theory, performs a high-temperature expansion. He is able to reach the thirteenth-order correction! This is an expansion of \beta(\lambda)/\lambda in powers of \lambda^{-\frac{2}{d}} and so, for d=4, one gets an expansion in 1/\sqrt{\lambda}. Again, this result of Igor's agrees with mine in a very beautiful manner. As my readers may know, I have been able to go to higher orders with my expansion technique in the large coupling limit (see here and here). This means that my findings and this result of Igor must agree, and this is exactly what happens! I was able to get the next-to-leading-order correction for the two-point function and, from this, with the Callan-Symanzik equation, I can derive the next-to-leading-order correction for \beta(\lambda)/\lambda, which goes like 1/\sqrt{\lambda} with a sign opposite to the leading term. This is Igor's table with the coefficients of the expansion:
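Schematically, the expansion under discussion has the form below; the coefficients c_k are the entries of Igor's table, which I write only symbolically here:

```latex
\frac{\beta(\lambda)}{\lambda} = d + \sum_{k \ge 1} c_k\,\lambda^{-2k/d}
\;\stackrel{d=4}{=}\;
4 + \frac{c_1}{\sqrt{\lambda}} + \frac{c_2}{\lambda} + \cdots
```

The agreement stated above is that c_1, as I compute it from the two-point function via the Callan-Symanzik equation, carries the opposite sign with respect to the leading term d.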

So, from my point of view, Igor's computations are fundamental for the whole understanding of infrared physics that I have developed so far. It would be interesting if he could verify the mapping with Yang-Mills theory, obtaining the beta function also for this case. He made some previous attempts in this direction but now, with such important conclusions reached, it would be absolutely interesting to see this work deepened. Thank you for this wonderful work, Igor!

I. M. Suslov (2011). Renormalization Group Functions of \phi^4 Theory from High-Temperature Expansions. J.Exp.Theor.Phys., v.112, p.274 (2011); Zh.Eksp.Teor.Fiz., v.139, p.319 (2011). arXiv: 1102.3906v1

Marco Frasca (2008). Infrared behavior of the running coupling in scalar field theory. arXiv: 0802.1183v4

Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory. arXiv: 1011.3643v2


Sad news

19/02/2011

It is with deep sadness that I report that the President of the Institute of Physics, Professor Marshall Stoneham (see here), died yesterday. The immediate past President, Dame Jocelyn Bell Burnell, will continue to act as President of the Institute.

I am a member of the Institute and a Chartered Physicist since 2002, and I participate in the Institute's activities for career assessment.


Ashtekar and the BKL conjecture

18/02/2011


Abhay Ashtekar is a well-known Indian physicist working at Pennsylvania State University. He produced a fundamental paper in general relativity that has been the cornerstone of the whole field of research of loop quantum gravity. Whatever value loop quantum gravity may prove to have (we will see in the future), this result of Ashtekar's will stand as a fundamental contribution to general relativity. Today on arXiv he, Adam Henderson and David Sloan posted a beautiful paper where Ashtekar's approach is used to reformulate the Belinski-Khalatnikov-Lifshitz (BKL) conjecture.

Let me explain why this conjecture is important in general relativity. The question to be answered is the behavior of gravitational fields near singularities. About this, there exist some fundamental theorems due to Roger Penrose and Stephen Hawking. These theorems prove that singularities are an unavoidable consequence of the Einstein equations, but they cannot state the exact form of the solutions near such singularities. Vladimir Belinski, Isaak Markovich Khalatnikov and Evgeny Lifshitz put forward a conjecture that gave them the possibility to get the exact analytical behavior of the solutions of the Einstein equations near a singularity: when a gravitational field is strong enough, as near a singularity, the spatial derivatives in the Einstein equations can be safely neglected and only derivatives with respect to time should be retained. With this hypothesis, these authors were able to reduce the Einstein equations to a set of ordinary differential equations, which are generally more tractable, and to draw important conclusions about the gravitational field in these situations. As you may note, they postulated a gradient expansion in a regime of strong perturbation!
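A minimal example of the BKL mechanism is the Kasner metric, the building block of their analysis, which solves the vacuum Einstein equations with time derivatives only:

```latex
ds^2 = -dt^2 + t^{2p_1}\,dx^2 + t^{2p_2}\,dy^2 + t^{2p_3}\,dz^2,
\qquad
\sum_i p_i = \sum_i p_i^2 = 1
```

The BKL picture then describes the approach to a generic singularity as a sequence of such Kasner epochs, with the exponents p_i reshuffled at each transition.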

Initially, this conjecture met with skepticism. People simply had no reason to believe it and, apparently, there was no reason why spatial variations in a solution of an equation with a strong non-linearity should be negligible. I had the luck to meet Vladimir Belinski at the University of Rome "La Sapienza". I was there to follow some courses after my Laurea, and Vladimir was teaching a general relativity course that I took. The course covered the BKL approach and gravitational solitons (another great contribution of Vladimir's to general relativity). Vladimir is also known to have written some parts of the second volume of Landau and Lifshitz's course of theoretical physics. After the lesson on the BKL approach, I talked to him about the fact that I was able to recover their results, as their approach is just the leading order of a strong coupling expansion. It was 1992, and I had just obtained the gradient expansion for the Schroedinger equation, also known in the literature as the Wigner-Kirkwood expansion, through my approach to strong coupling expansions. The publication of my proof came only in 2006 (see here), 14 years after that conversation.

Back to Ashtekar, Henderson and Sloan's paper: this contribution is relevant for a couple of reasons that go beyond applications to quantum gravity. Firstly, they give a short but insightful excursus on the current status of the conjecture and on how computer simulations are showing that it is right (a gradient expansion is a strong coupling expansion!). Secondly, they provide a sound formulation of the Einstein equations using Ashtekar variables that is better suited for its study. In my proof I too use a Hamiltonian formulation, but through the ADM formalism. These authors have quantum gravity in mind instead, and so the ADM formalism may not be the best for this aim. In any case, such a different approach could also prove useful for numerical simulations.

Finally, all this matter strongly supports the point of view I started with my 1992 paper in Physical Review A. Since then, I have produced a lot of work with a multitude of applications in almost all areas of physics. I hope that the current trend of confirmations of the soundness of my ideas about perturbation theory will continue. As a researcher, it is a privilege to be part of this adventure of humankind.

Ashtekar, A. (1986). New Variables for Classical and Quantum Gravity Physical Review Letters, 57 (18), 2244-2247 DOI: 10.1103/PhysRevLett.57.2244

Abhay Ashtekar, Adam Henderson, & David Sloan (2011). A Hamiltonian Formulation of the BKL Conjecture arxiv arXiv: 1102.3474v1

Marco Frasca (2005). Strong coupling expansion for general relativity. Int.J.Mod.Phys. D15 (2006) 1373-1386. arXiv: hep-th/0508246v3

Frasca, M. (1992). Strong-field approximation for the Schrödinger equation Physical Review A, 45 (1), 43-46 DOI: 10.1103/PhysRevA.45.43


CUDA: The upgrade

16/02/2011

As promised (see here), I am here to talk again about my CUDA machine. I have done the following upgrades:

  • Added 4 GB of RAM and now I have 8 GB of DDR3 RAM clocked at 1333 MHz. This is the maximum allowed by my motherboard.
  • Added a third 9800 GX2 graphics card. This one is an XFX, while the other two already installed are EVGA and Nvidia respectively. The three cards are not perfectly identical, as the EVGA is overclocked by the manufacturer and the firmware may not be the same across all of them.

At the start of the upgrade process things were not so straightforward. Sometimes the BIOS complained at boot about the position of the cards in the three PCI Express 2.0 slots and the system did not start at all. But after I found the right permutation of the three cards, Windows 7 recognized all of them, the latest Nvidia drivers installed like a charm, and the Nvidia system monitor showed the physical status of all the GPUs. Heat is a concern here, as the video cards work at about 70 °C while the rest of the hardware stays at about 50 °C. The box is always open, and I intend to keep it so to reduce the risk of overheating to a minimum.

The main problem arose when I tried to run my CUDA applications from a command window. I have a simple program that just enumerates the GPUs in the system, and the program for lattice computations by Pedro Bicudo and Nuno Cardoso can also probe the system to identify the exact set of resources to perform its work at best. Both applications, which I recompiled on the upgraded platform, saw just a single GPU. It was impossible, at first, to get meaningful behavior from the system. I thought this could be a hardware problem and contacted XFX support for my motherboard. I bought my motherboard second-hand, but I was able to register the product thanks to the seller, who had already done so. The people at XFX were very helpful and fast in giving me an answer. The technician essentially told me that the system should work, and gave me some advice for identifying possible problems. I would like to recall that a 9800 GX2 contains two graphics cards, so I have six GPUs to work with. I checked the whole system again until I got the nice configuration above, with Windows 7 seeing all the cards. Just one point remained unanswered: why did my CUDA applications not see the right number of GPUs? This had been an old problem for Nvidia, overcome with a driver revision long before I tried for myself. Currently my driver is 266.58, the latest one. The solution came out unexpectedly. It was enough to change a setting for multi-GPU use in the Performance menu of the Nvidia monitor, and I got back 5 GPUs instead of just 1. This is not six, but I fear I cannot do better. The applications now work fine. I recompiled them all and successfully ran the lattice computation up to a 76^4 lattice in single precision! With these numbers I am already able to perform professional work in lattice computations at home.

Then I spent some time setting up the development environment with the Parallel Nsight debugger and Visual Studio 2008 for 64-bit applications. So far, I have been able to generate the executable of the lattice simulation under VS 2008. My aim is to debug it to understand why some values in the output become zero when they should not. I would also like to understand why the new version of the lattice simulation that Nuno sent me does not seem to work properly on my platform. It took me some time to configure Parallel Nsight for my machine. You need at least two graphics cards to run it, and you have to activate PhysX, in Nvidia's Performance monitor, on the card that will not run your application. This was a simple enough task, as the online manual of the debugger is well written. The enclosed examples are absolutely useful too. My next week-end will be spent fine-tuning everything and starting some work with the lattice simulation.

As I go further with this activity I will report on my blog. If you want to start such an enterprise yourself, feel free to get in touch with me to overcome the difficulties and hurdles you will encounter. Surely, things proved not to be as complicated as they appeared at the start.


QCD at strong magnetic fields

10/02/2011


Today on arXiv appeared the contribution of my friend Marco Ruggieri to the conference "The many faces of QCD". Marco is currently a postdoc at the Yukawa Institute in Kyoto and was formerly a student of Raoul Gatto. Gatto is one of the best-known Italian physicists, who had among his students Gabriele Veneziano and Luciano Maiani. With Marco I had a lot of fun in Ghent and several interesting discussions about physics. One of Marco's main interests is studying the QCD vacuum under the effect of a strong magnetic field, a line he pursues with Gatto. This is a very rich field of research, producing several results that can be compared with lattice computations and, at last, with LHC findings. Marco's contribution (see here) approaches the question using the Nambu-Jona-Lasinio model. Before entering into some details of Marco's work, let me explain briefly what the question is here.

As my readers know, there has so far been no widely accepted low-energy limit of QCD rigorously derived from it. Simply put, we do computations of low-energy phenomenology using models that we hope will, in some approximation, describe correctly what is going on in this limit. Of course, there have been a number of successful models, and the Nambu-Jona-Lasinio model is one of these. This model, in its original formulation, is neither renormalizable nor confining. But it describes fairly well the breaking of chiral symmetry and the way bound states can form from quark fields. Indeed, one is able to get a fine description of the low-energy behavior of QCD notwithstanding the aforementioned shortcomings. Over time, the model has been refined and some of its defects corrected, and today it appears a serious way to look at the behavior of QCD at very low energies. But all this success remains somewhat incomprehensible unless someone is able to prove that this model is indeed a low-energy approximation to the QCD quantum field theory. A couple of proofs are around: one is due to Kei-Ichi Kondo (see here) and the other is due to your humble writer (see here). Kondo's work does not arrive at a value for the NJL coupling, while I get one through my gluon propagator, which I know in closed form. Anyhow, I was able to get a fully quantum formulation quite recently, published in the QCD08 and QCD10 proceedings. Notwithstanding these achievements, I keep my view that, until the community at large recognizes these results as established, we have to continue to regard the derivation of NJL from QCD as not yet proved.

Given this situation, Marco's approach is to consider a couple of modified NJL models and to apply a constant magnetic field to them. The Dirac equation in a constant magnetic field is well known and exactly solvable, producing a set of Landau levels and a closed-form fermion propagator. This means that, within the mean-field approximation, Marco is able to reach well-defined conclusions through analytical computations. Both NJL models Marco considers have been tuned to agree with lattice computations. What he finds is that the magnetic field has indeed important effects on the temperature of chiral symmetry restoration and on the deconfining phase. But he points out as a weak point the proper determination of the coupling that appears in the NJL models through the Polyakov loop entering the way the NJL models are formulated here. This is work for the future. I would like to emphasize the relevance of this kind of research for our understanding of the low-energy behavior of QCD. I will keep my readers up to date about this, and I will keep on asking Marco to clarify the issues in his research. What I find really striking here is the interplay between a magnetic field and the strong-force vacuum, so entangled as to produce really non-trivial results. Other groups around the world are working on this, and accelerator facilities such as LHC can produce important clues for our understanding of the QCD vacuum. It will be really interesting to see the results in this area reach maturity.
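For completeness, the well-known relativistic Landau levels for a fermion of charge q and mass m in a constant magnetic field B along z read

```latex
E_n^2 = m^2 + p_z^2 + 2\,|qB|\,n,
\qquad n = 0, 1, 2, \ldots
```

where n labels the levels with spin included (the lowest level is spin-polarized). It is this closed-form spectrum, together with the corresponding closed-form propagator, that makes the mean-field computations with a magnetic background fully analytical.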

Marco Ruggieri (2011). Chiral symmetry restoration and deconfinement in strong magnetic fields arxiv arXiv: 1102.1832v1

Kondo, K. (2010). Toward a first-principle derivation of confinement and chiral-symmetry-breaking crossover transitions in QCD Physical Review D, 82 (6) DOI: 10.1103/PhysRevD.82.065024

FRASCA, M. (2009). INFRARED QCD International Journal of Modern Physics E, 18 (03) DOI: 10.1142/S0218301309012781


A striking clue and some more

08/02/2011


My colleagues who participated in "The many faces of QCD" in Ghent last year keep on publishing their contributions to the proceedings. This conference produced several outstanding talks, so it is worthwhile to write about them here. I have already done so here, here and here, and I have spent some words on the fine paper by Oliveira, Bicudo and Silva (see here). Today I would like to tell you about an interesting line of research due to Silvio Sorella and colleagues, and about a striking clue supporting my results on scalar field theory coming from Axel Maas (see his blog).

Silvio is an Italian physicist who has lived and worked in Rio de Janeiro, Brazil, for a long time. I met him in Ghent, mistaking him for Daniele Binosi. Of course, I was aware of him through his works, which form an important track followed to understand the situation of low-energy Yang-Mills theory. I have already cited him in my blog, both for Ghent and for the Gribov obsession. He, together with David Dudal, Marcelo Guimaraes and Nele Vandersickel (our photographer in Ghent), published a couple of contributions on arXiv (see here and here). Let me explain in a few words why I consider the work of these authors really interesting. As I said in my short history (see here), Daniel Zwanziger made some fundamental contributions to our understanding of gauge theories. For Yang-Mills theory, he concluded that the gluon propagator should go to zero at very low energies. This conclusion is at odds with current lattice results. The reason, as I have already explained, lies in the way Gribov copies are managed. Silvio and his colleagues have shown in a series of papers how Gribov copies and massive gluons can be reconciled by accounting for condensates. A gluon condensate can explain a massive gluon while retaining all the ideas about Gribov copies, meaning that they have also found a way to refine the ideas of Gribov and Zwanziger to make them agree with lattice computations. This is a relevant achievement and a serious competing theory for our understanding of infrared non-Abelian theories. Last but not least, in these papers they are able to show a comparison with experiments, obtaining the masses of the lightest glueballs. This is the proper approach for whoever aims to understand what is going on in quantum field theory for QCD. I will keep on following the works of these authors, surely a relevant route toward our common goal: to catch the way Yang-Mills theory behaves.

A really brilliant contribution is that of Axel Maas. Axel is a former student of Reinhard Alkofer and of Attilio Cucchieri & Tereza Mendes. I would like to remind my readers that Axel had the brilliant idea of checking Yang-Mills theory on a two-dimensional lattice, raising a lot of fuss in our community that is still ongoing. On a similar line, his contribution to the Ghent conference is again a striking one. Axel thought to couple a scalar field to the gluon field and study the corresponding behavior on the lattice. In these first computations, he did not consider too large lattices (I would suggest he use CUDA…), limiting the analysis to 14^4, 20^3 and 26^2. Anyhow, even for these small volumes, he is able to conclude that the propagator of the scalar field becomes a massive one, deviating from the tree-level approximation. The interesting point is that he sees a mass appear also in the case of the massless scalar field, producing groundbreaking evidence for what I proved in 2006 in my PRD paper! Besides, he shows that the renormalized mass is greater than the bare mass, again in agreement with my work. But, as the author himself states, these are only clues, due to the small volumes he uses. Anyhow, this is a clever track to pursue, and further studies are needed. It would also be interesting to have a clear idea of whether this mass arises directly from the dynamics of the scalar field itself rather than from its interaction with the Yang-Mills field. I give below a figure for the four-dimensional case in a quenched approximation.

I am sure this image will convey to my readers the same impression it made on me: a shocking result that seems to match, at first sight, the case of the gluon propagator on the lattice (mapping theorem!). At larger volumes it would be interesting to see the gluon propagator as well. I expect a lot of interesting results to come out of this approach.


Silvio P. Sorella, David Dudal, Marcelo S. Guimaraes, & Nele Vandersickel (2011). Features of the Refined Gribov-Zwanziger theory: propagators, BRST soft symmetry breaking and glueball masses. arXiv: 1102.0574v1

N. Vandersickel, D. Dudal, & S.P. Sorella (2011). More evidence for a refined Gribov-Zwanziger action based on an effective potential approach. arXiv: 1102.0866

Axel Maas (2011). Scalar-matter-gluon interaction. arXiv: 1102.0901v1

Frasca, M. (2006). Strongly coupled quantum field theory Physical Review D, 73 (2) DOI: 10.1103/PhysRevD.73.027701


CUDA: An update

04/02/2011


My activity with Nvidia's CUDA technology and parallel computing is going on (see here). I was able to get the code made available by Pedro Bicudo and Nuno Cardoso (see here) up and running on my machine. This is a code for SU(2) QCD and, currently, these colleagues are working on the SU(3) version. The code has been written directly for a machine supporting GPU computing with the CUDA architecture.

Initially, I was able to get link configurations for lattices as large as 14^4, not very large but useful for some simple analysis. After a suggestion by Nuno, I modified a parameter in the code (the number of threads per block) from 16 to 8, and the simulation reached the impressive lattice volume of 64^4! I can only do computations in single precision, as my graphics cards were built in 2008, when double precision was yet to come. But now I am in a position to do professional analysis of lattice simulations.
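To get a feeling for why volumes of this order are within reach, here is a back-of-the-envelope estimate of the memory footprint of the gauge links. The function name and the storage scheme (one SU(2) link stored as four real numbers) are my assumptions for illustration, not necessarily what Bicudo and Cardoso's code does:

```python
def su2_links_bytes(L, reals_per_link=4, bytes_per_real=4):
    """Estimate memory for the SU(2) gauge links of an L^4 lattice.

    Assumes each site carries 4 directed links and each SU(2) matrix
    is stored as 4 real numbers (a0 + i a.sigma parametrization), in
    single precision (4 bytes per real).
    """
    sites = L ** 4
    links = 4 * sites  # one link per direction per site
    return links * reals_per_link * bytes_per_real

for L in (14, 64):
    print(f"{L}^4 lattice: {su2_links_bytes(L) / 2**20:.1f} MiB")
# 14^4 lattice: 2.3 MiB
# 64^4 lattice: 1024.0 MiB
```

Split across the four GPUs, the 64^4 configuration amounts to roughly 256 MiB of links per GPU, which fits in the 512 MB of each card with room left for working buffers; this is consistent with 64^4 sitting near the practical limit of the machine.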

I would like to recall here the current configuration of my machine:

  • CPU: Intel Core 2 Duo E8500 at 3.16 GHz per core, 6 MB cache.
  • 4 GB of DDR3 RAM.
  • 2 9800 GX2 graphics cards, each with two GPUs and 512 MB of DDR3 RAM per GPU. So, I have 4 GPUs at work.
  • Motherboard XFX 790i Ultra (3-way SLI).
  • PSU Cooler Master Silent Pro Gold 1000 W.
  • Windows 7 Ultimate 64 bit
  • CUDA Toolkit 3.2
  • Visual Studio 2008 SP1
  • Parallel Nsight (Nvidia debugger for CUDA)

This configuration performs at 2 Tflops in single precision, and I have reached the performance declared above for lattice QCD. The output file for a single run was about 4 GB. The simulation needs some debugging after porting, as some values in the output file are zeros when they should not be. Plaquette values are good instead. Nuno produced new code from the old one, but I was not able to get it running properly even though it compiled correctly.

During the week-end I am planning to further upgrade the machine. I will install another 9800 GX2 card (this one an XFX, while the others are EVGA and Nvidia respectively, but they are identical as the only producer is Nvidia) and 4 GB of RAM, reaching the maximum of 8 GB of RAM for my motherboard. The aim of this upgrade is to get an evaluation of both the gluon propagator and the spectrum at very large volumes, comparable with the landmark works of Regensburg 2007. I would also like to get some code to solve the \lambda\phi^4 theory, to check my mapping theorem in four dimensions. I would like to emphasize that Rafael Frigori proved it correct in 2+1 dimensions (see here).

After the upgrade I will report on the blog. As I get more time for this, I will be able to produce some useful results that I hope to put here.

Frigori, R. (2010). Screening masses in quenched (2+1)d Yang-Mills theory: Universality from dynamics? Nuclear Physics B, 833 (1-2), 17-27. DOI: 10.1016/j.nuclphysb.2010.02.021


The Gribov obsession

03/02/2011


I have treated the question of the Yang-Mills propagators in depth in my blog, this being one of my main concerns. An important part of the scientific community aims to understand how these functions behave at low energies and over the whole energy range. The motivation to write down these few lines today arises from a number of interesting comments that an anonymous reader left on this post. If you have already read it you know the main history of this matter; otherwise you are urged to do so. The competitors in this arena are two different solutions to the question of the propagators: the scaling solution and the decoupling solution. In the former case one expects the gluon propagator to go to zero as momenta go to zero, and the ghost propagator to run to infinity faster than in the free case; similarly, the running coupling should reach a finite value in the same limit. In the other case, the gluon propagator reaches a finite non-zero value toward zero momenta, the ghost propagator behaves as that of a free massless particle, and the running coupling does not seem to reach any finite value but rather bends significantly toward zero, signaling a trivial infrared fixed point for Yang-Mills theory. In this post I would like to analyze the genesis of the scaling solution. It arises from the Gribov obsession.
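In formulas, the two infrared behaviors for the Landau-gauge gluon and ghost propagators are usually parametrized as follows, with \kappa > 1/2 the infrared exponent of the scaling solution (functional studies typically quote \kappa \simeq 0.595):

```latex
\text{scaling:} \quad
D_{\mathrm{gluon}}(p^2) \sim (p^2)^{2\kappa - 1} \xrightarrow{\,p \to 0\,} 0,
\qquad
D_{\mathrm{ghost}}(p^2) \sim (p^2)^{-\kappa - 1}
\\[4pt]
\text{decoupling:} \quad
D_{\mathrm{gluon}}(0) = \text{const} \neq 0,
\qquad
D_{\mathrm{ghost}}(p^2) \sim \frac{1}{p^2}
```

Lattice data at large volumes are what decide between these two sets of infrared exponents, as discussed below.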

So, what is the Gribov obsession? Let us consider the case of electromagnetism. This does not account for the whole matter, but it gives a hint about what is going on. The question bothering people is gauge fixing. To do computations in quantum field theory you need the gauge properly fixed, and this is done in different ways. In the Lorenz gauge, for example, you are able to do explicitly covariant computations, but not all states have positive norm. And even after you fix the gauge in the usual way, a residual freedom remains, as you can always add a solution of the wave equation to the gauge function without changing the physics. This residual freedom is harmless and, indeed, quantum electrodynamics is one of the most successful theories in the history of physics.

In non-Abelian gauge theories the Lorenz gauge is also called the Landau gauge, and the residual gauge freedom is far richer: gauge fixing does not appear to be enough to grant consistent computations. This question was first put forward by Gribov, and one has to cope with Gribov copies. Gribov copies should be renamed the Gribov obsession, as I did. If you want a fine description of the problem you can read this paper by Alfred Actor, appendix H, or the beautiful paper by Silvio Sorella and Rodrigo Sobreiro (see here). Now, we all know that when people do perturbation theory in QCD and uncover asymptotic freedom, there is no reason to worry about Gribov copies. They are simply harmless. So, the question is how important they are in the low-energy (infrared) case.

This question transformed the original Gribov obsession into the obsession of many. Gribov himself proposed a solution: restrict the functional integration to the so-called first Gribov horizon. He pointed out that the set of gauge orbits can be subdivided into regions, the first one having a Faddeev-Popov determinant with all positive eigenvalues and the next ones with eigenvalues becoming zero and then negative. In this way he was able to get a confining propagator that, unfortunately, is not causal. The question is then whether limiting the solutions of Yang-Mills theory in this way still gives meaningful physical results. We should keep in mind that this was a conjecture by Gribov and, while Gribov copies surely exist, imposing such a constraint could simply be wrong, as could imposing any other constraint at all. With the same right, one can assert that Gribov copies can be ignored and start doing physics from there. Now, the point is that the scaling solution arises from the Gribov obsession.
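For reference, the propagator Gribov obtained by restricting the functional integral to the first horizon has the well-known form

D(p)=\frac{p^2}{p^4+\gamma^4}

with \gamma the Gribov parameter fixed by the horizon condition. Its poles sit at the complex values p^2=\pm i\gamma^2, so this propagator admits no Källén-Lehmann spectral representation: this is the precise sense in which it is confining but not causal.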

Of course, in my papers I showed (see here and refs therein), through perturbation theory, that in the deep infrared we can completely forget about Gribov copies. This is due to the appearance of a trivial infrared fixed point that makes the theory free in this limit, reducing the case to the same as the ultraviolet limit. Starting perturbation theory from this point makes the whole matter simply harmless. This scenario has been shown to be correct by lattice computations, which recover the infrared fixed point and so are surely sound. The decoupling solution, now found by many researchers, is there to testify to the goodness of the work done so far by the researchers working with lattices and computers.

Finally, let me repeat my bet:

I bet 10 euros, or two rounds of beer at the next conference after the result is made manifestly known, that Gribov copies are not important in Yang-Mills theory at very low energies.

Nobody interested?

Actor, A. (1979). Classical solutions of SU(2) Yang-Mills theories. Reviews of Modern Physics, 51 (3), 461-525. DOI: 10.1103/RevModPhys.51.461

R. F. Sobreiro, & S. P. Sorella (2005). Introduction to the Gribov Ambiguities in Euclidean Yang-Mills Theories. arXiv: hep-th/0504095v1

Marco Frasca (2010). Mapping theorem and Green functions in Yang-Mills theory. arXiv: 1011.3643v1


A wonderful confirmation

01/02/2011

ResearchBlogging.org

Contributions to the proceedings of the Ghent conference “The many faces of QCD” are starting to appear on arXiv, and today appeared one of the most striking ones I heard at that conference: Orlando Oliveira, Pedro Bicudo and Paulo Silva published their paper (see here). This paper represents a true cornerstone for people doing computations of propagators, as the authors for the first time try to connect a gauge-dependent quantity, the gluon propagator, to a gauge-independent one, the spectrum of Yang-Mills theory, mostly in the way I advocated here and in my papers. The results are given in the following figure

and the data are the following

p_max = 0.57 GeV: {3.535(64), 0.5907(86)}; \chi^2/d.o.f. = 1.4
p_max = 1.52 GeV: {17(3), 0.797(17)}, {−17(3), 1.035(31)}; \chi^2/d.o.f. = 1.5
p_max = 6.46 GeV: {31(6), 0.851(16)}, {−52(11), 1.062(26)}, {22(9), 1.257(40)}; \chi^2/d.o.f. = 1.6
p_max = 7.77 GeV: {33(9), 0.900(26)}, {−54(12), 1.163(49)}, {33(14), 1.65(12)}, {−11(11), 2.11(24)}; \chi^2/d.o.f. = 1.1

for one, two, three and four masses respectively. The form of the propagator they consider is the following one

D(p)=\sum_{n=0}^N\frac{Z_n}{p^2+m^2_n}

and so the first number above is the maximum momentum considered in the fit, then come the pairs \{Z_n,m_n\} with the masses in GeV, and the last number is the goodness of the fit as \chi^2/d.o.f. As you can see from the picture above, the fit works excellently well over the whole range with four masses! The masses they obtain are consistent with hadronic physics and can represent true glueball masses. The series has alternating signs, signaling that the match with a true Källén-Lehmann spectral representation is not exact. Finally, the authors show how all the lattice computations performed so far agree well with a value D(0)\approx 8.3-8.5.
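Just to play with their numbers, here is a quick sketch (mine) evaluating the four-mass fit at the central values, with the quoted errors dropped, so the figures are only indicative:

```python
# Evaluate the four-mass fit D(p) = sum_n Z_n / (p^2 + m_n^2)
# at the central values quoted above (masses in GeV, errors dropped).

Z = [33.0, -54.0, 33.0, -11.0]   # residues, alternating in sign
m = [0.900, 1.163, 1.65, 2.11]   # masses in GeV


def D(p):
    return sum(z / (p * p + mm * mm) for z, mm in zip(Z, m))


# The alternating signs conspire to keep D(p) positive and rapidly falling:
for p in (0.0, 0.5, 1.0, 2.0):
    print(p, D(p))
```

Note how the individual Yukawa terms are large and of opposite sign while their sum stays positive and monotonically decreasing, which is why a naive reading of the residues as a spectral density fails.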

Why do I have reason to be really happy? Because all this is my scenario! The papers you should refer to are this and this. The propagator I derive from Yang-Mills theory is exactly the one these authors fit. Besides, this is a confirmation from the lattice that a tower of masses seems to exist for these glue excitations, as I showed. The volumes used by these authors are quite large, 80^4, and will soon be accessible also to my CUDA machine (so far I have reached 64^4, thanks to a suggestion by Nuno Cardoso), after I add a third graphics card. Last but not least is the value of D(0). I get a value of about 4, just a factor of 2 away from the value computed on the lattice, for a string tension of 440 MeV. As my propagator is obtained in the deep infrared, I would expect a better fit in this region.

The other beautiful result these authors put forward is the dependence of the mass on momentum. I have shown that the functional form they obtain appears at the next-to-leading order of my expansion (see here). Indeed, they show that the fit with a single Yukawa propagator improves neatly with a mass running like m^2=m^2_0-ap^2, and this is what must happen in the deep infrared according to my computations.
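To make explicit what such a running mass does, one can insert it into a single Yukawa term (this little rearrangement is mine, not taken from the paper):

D(p)=\frac{Z}{p^2+m^2(p)}=\frac{Z}{(1-a)p^2+m^2_0}, \qquad m^2(p)=m^2_0-ap^2,

so at small momenta the running mass simply rescales the slope in p^2 while leaving the value D(0)=Z/m^2_0 untouched.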

I have already written in my blog about the fine work of these authors. I hope that others will follow in these tracks shortly. To all my readers I simply suggest staying tuned, as what is coming out of this research field is absolutely exciting.

O. Oliveira, P. J. Silva, & P. Bicudo (2011). What Lattice QCD tell us about the Landau Gauge Infrared Propagators. arXiv: 1101.5983v1

Frasca, M. (2008). Infrared gluon and ghost propagators. Physics Letters B, 670 (1), 73-77. DOI: 10.1016/j.physletb.2008.10.022

Frasca, M. (2009). Mapping a massless scalar field theory on a Yang-Mills theory: classical case. Modern Physics Letters A, 24 (30). DOI: 10.1142/S021773230903165X

Marco Frasca (2008). Infrared behavior of the running coupling in scalar field theory. arXiv: 0802.1183v4