The question of the arrow of time

A recent paper by Lorenzo Maccone in Physical Review Letters (see here) has stirred up quite some fuss. He tries to solve the question of the arrow of time from a quantum standpoint. Lorenzo is currently a visiting researcher at MIT and, together with Vittorio Giovannetti and Seth Lloyd, he has produced several important works in the area of quantum mechanics and its foundations. I had the luck to meet him at a conference in Gargnano on Lake Garda, together with Vittorio. So, it is not a surprise to see this paper of his attempting to solve one of the outstanding problems of physics.

The question of the arrow of time is still open. One might think that Boltzmann’s H-theorem closed this question definitively, but this is not so. The theorem has rather been the starting point of a question that is yet to be settled. Boltzmann presented a first version of his theorem that showed one of the most beautiful laws in physics: the relation between entropy and probability. This proof was criticized by Loschmidt (see here) and the criticism was sound. Indeed, Boltzmann had to modify his proof by introducing the so-called Stosszahlansatz, or molecular chaos hypothesis, thereby putting in time asymmetry by hand. Of course, we know for certain that the theorem is true, and so the hypothesis of molecular chaos must be true as well. So, the question of the arrow of time will only be solved when we know where molecular chaos comes from. This means that we need a mechanism, a quantum one, to explain Boltzmann’s hypothesis. It is important to emphasize that, to date, no proof of the H-theorem exists that removes this assumption.
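
For reference, and in standard textbook notation rather than anything taken from Maccone’s paper, the relation and the assumption can be written compactly as

$$ S = k_B \ln W, \qquad H(t) = \int f(\mathbf{v},t)\,\ln f(\mathbf{v},t)\, d^3v, \qquad \frac{dH}{dt} \le 0, $$

where the inequality, the only time-asymmetric statement, follows only after one assumes that colliding pairs of particles are uncorrelated,

$$ f_2(\mathbf{v}_1,\mathbf{v}_2,t) = f(\mathbf{v}_1,t)\, f(\mathbf{v}_2,t), $$

and this factorization is precisely the Stosszahlansatz.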

Quantum mechanics could be the answer to this situation, but only if we knew how reality forms. An important role in this direction could be played by environmental decoherence and how it relates to the question of the collapse. A collapse immediately grants asymmetry in time, and here one has to cope with many-body physics with a very large number of components. In this respect there exists a beautiful theorem by Elliott Lieb and Barry Simon, two of the most prominent living mathematical physicists, which says:

The Thomas-Fermi model is the limit of quantum theory as the number of particles goes to infinity.

For a more precise statement you can look at Reviews of Modern Physics, page 620ff. The Thomas-Fermi model is just a semi-classical model, and this means that the fundamental theorem above can simply be restated as saying that the limit of a very large number of particles in quantum mechanics is the classical world. In some sense, there exists a large class of Hamiltonians in quantum mechanics that are not stable with respect to this particle limit and lose quantum coherence. We know for certain that there exist other situations where quantum coherence is kept to a large extent in many-body systems. This means there exist situations where quantum fluctuations are not damped out with an increasing number of particles. But the very existence of the effect implied by the Lieb and Simon theorem means that quantum mechanics has an internal mechanism producing time asymmetry. This, together with environmental decoherence (e.g. the box containing the gas is classical, and so on), should grant a full understanding of the situation at hand.
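
A rough statement of the result, with the precise hypotheses left to the Reviews of Modern Physics article cited above, is that the quantum ground-state energy of an atom or molecule with total nuclear charge $Z$ approaches the Thomas-Fermi value in the large-$Z$ limit,

$$ \lim_{Z \to \infty} \frac{E^{\rm QM}(Z)}{E^{\rm TF}(Z)} = 1, \qquad E^{\rm TF}(Z) \sim -C\, Z^{7/3}, $$

with $C$ a positive constant.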

Finally, we can say that Maccone’s attempt, being along this line of thought, is a genuine way to understand the origin of time asymmetry from quantum mechanics. I hope his ideas will meet with luck.

Update: At Cosmic Variance you will find an interesting post and a discussion worth reading, involving Sean Carroll, Lorenzo Maccone and others, on the questions opened by Lorenzo’s paper.

23 Responses to The question of the arrow of time

  1. Guy Gur-Ari says:

    The arrow of time doesn’t require time asymmetry of the fundamental laws. Consider the following system: Take white liquid and blue liquid in adjacent containers, and remove the barrier. After a while they will mix and you’ll get a homogeneous light-blue liquid. So you get an arrow of time in the intuitive sense. It’s also easy to show that entropy increases, and if you want to call this an arrow of time that’s also fine (although it’s not quite the same thing). But either way, it’s there.

    If you insist that what’s really going on beneath the surface is some quantum phenomenon, don’t trust nature: Run a simulation with time-symmetric, deterministic rules for the collisions, representing the liquid by lots of small balls. You control the simulation — so you know the rules are time symmetric. And of course you get the same result, ending up with a homogeneous mixture of white/blue balls.

    The point is that entropy is not a fundamental quantity of nature, but rather a measure of our own ignorance about the state. This ignorance can come from not following every detail of the deterministic, time-symmetric dynamics (as in classical collisions), or it can come from quantum collapse, or from decoherence. In any case, the mathematics are the same, entropy increases, and you get an arrow of time.
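
    In the standard Gibbs/Shannon form, this measure of ignorance over microstates with probabilities $p_i$ reads

    $$ S = -k_B \sum_i p_i \ln p_i , $$

    which vanishes when the microstate is known exactly and is maximal when all microstates are equally likely.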

    The Second Law of thermodynamics, and the appearance of an arrow of time, are general phenomena that extend beyond physics. And it’s very easy to see (as in the example above), that they have absolutely nothing to do with the time symmetry (or lack thereof) of the dynamics. They are purely statistical phenomena.

    There is still an unanswered question regarding the arrow of time, and this is why we experience each moment separately, and why time seems to ‘advance’. But: Taking the fact that time advances as an axiom, plus deterministic time-symmetric dynamics, plus some ignorance, gives you the second law and an arrow of time automatically.

    I doubt that we will ever be able to answer the real question of why time ‘advances’ in the first place. At this point it is not even well defined. But again, it has nothing to do with seemingly irreversible processes or with the increase of entropy.

    • mfrasca says:

      Dear Guy Gur-Ari,

      The point here is that the laws of physics are time symmetric, and any behavior seen at a higher level should be understood from these fundamental laws. This is what drove Boltzmann in his analysis and what we are still trying to do today. In your controlled simulation, I suppose you are using at least Newton’s laws of motion. Then, if I reverse all the trajectories, and I can always do that since it is just a computer program and Newton’s laws are time-symmetric, I will be able to get as a final state the one you chose as the initial one. By Newton’s laws, this too is a plausible solution of the equations of motion. We would like to understand why such states are never observed in Nature.

      Of course, there is an experimental way to see this, and it is done using NMR. You should check the literature on the Loschmidt echo and you will find that people are trying to understand this missing reversibility using decoherence (e.g. Zurek et al.), but one can also think of other ways to reach an understanding.
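
      For reference, the Loschmidt echo is usually defined as the overlap between a state evolved forward with a Hamiltonian $H$ and then evolved backward with a slightly perturbed Hamiltonian $H + \Sigma$,

      $$ M(t) = \left| \langle \psi_0 |\, e^{i(H+\Sigma)t/\hbar}\, e^{-iHt/\hbar} \,| \psi_0 \rangle \right|^2 , $$

      and its decay measures how sensitive the attempted time reversal is to imperfections.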

      So, entropy really is a fundamental quantity, and it is obtained from the more fundamental time-symmetric laws of physics. It represents the information missed through a many-body effect acting at a lower level. This effect can be decoherence or something else: it does not matter what; entropy represents the way Nature encodes irreversibility.

      Cheers,

      Marco

      • Guy Gur-Ari says:

        >> If I reverse all the trajectories … I will be able to get as a final state the one you chose as the initial one. … We would like to understand why such states are never observed in Nature.

        Let me first point out that the discussion has advanced to the next level. Instead of talking about nature, where things may or may not be affected by quantum mechanics, we are discussing a simulation where everything is known, i.e. the laws are exactly Newton’s time symmetric laws. So let us please stick to analyzing the simple simulation, and not regress to talking about complicated nature. Once we are done with this, we can try to see what our conclusions mean for nature.

        By your silence on the matter I gather that you agree with my prediction for the simulation: It will show an arrow of time, in the sense that the solution will become homogeneous. Is this correct?

        If so, it means an arrow of time clearly arises in a situation with time reversible laws. These laws are not the laws of nature, because nature is quantum mechanical. But what you tried to argue, as far as I understand, is this: Nature is seemingly irreversible, and therefore the fundamental laws must be irreversible. My example proves you wrong. It doesn’t prove that the laws of nature are time symmetric, it just proves that macroscopic irreversibility does not imply microscopic irreversibility. Do you agree?

        Now, you raise another question: How come, in the simulation, we will not reach a situation where the solutions are separated? Very good question. I will give the answer in a minute. But one thing is clear: The answer *cannot be* that the fundamental laws are time asymmetric, because we know what the fundamental laws of the simulation are and they are symmetric. There must be some other explanation.

        Now, to answer your question. What we call the ‘initial state’ where the white/blue solutions are separated is in fact a large set of microscopic states: There are many configurations of our balls that all look the same to us, i.e. white liquid on one side, blue on the other.

        What we call the ‘state of homogeneous solution’ is also a large set of microscopic states. However, this set is vastly larger than the set describing the separated white/blue liquids. You can give a good estimate of just how much larger it is.

        In addition, entropy increases until it reaches the maximum, where all microscopic states have the same probability. Because the homogeneous macroscopic state is composed of vastly more microstates than any other macroscopic state, and all probabilities for microstates are equal, it means that a homogeneous mixture is much more likely to occur. For a realistic number of particles, in fact, the chances of the state being anything else are completely negligible.
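
        Just to give a feeling for the numbers, here is a toy count, not a proof: track only which cell each ball occupies, with N cells holding N/2 white and N/2 blue balls, and compare the single separated colour pattern with all equally likely colour patterns.

```python
# Toy counting estimate (illustration only): N cells, N/2 white and N/2 blue
# balls, one ball per cell. Compare the single "all white on the left" pattern
# with the total number of equally likely colour patterns.
from math import comb, log10

N = 100                           # illustrative number of balls
total_patterns = comb(N, N // 2)  # ways to choose which cells hold white balls
separated = 1                     # the single pattern with all white on the left

print(f"mixed-vs-separated ratio: about 10^{log10(total_patterns / separated):.0f}")
```

        With N = 100 the ratio is already about 10^29, and it grows roughly like 2^N, which is why the separated macrostate is never seen to reappear in practice.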

        This doesn’t mean that the initial macro state cannot occur as a final state. Of course it can, and you showed how: Go to the final microscopic state, reverse all the velocities, and presto! we get back our initial state. What you did here is fine tune the initial microscopic conditions to get a specific macroscopic final state. You can always do that as a thought experiment, but in reality we don’t have such control over our microscopic state. Our initial conditions, namely the initial microscopic state, are generic: We can only control the macro state. As a result we have very little control over the final microscopic state. Therefore the final state is controlled, in effect, by the laws of probability. And probability says that the odds are completely in favor of the homogeneous solution.

        Now I suppose you can see what these conclusions mean for the fundamental laws of nature, which is to say absolutely nothing. Nature could be deterministic, time reversible, probabilistic, not time reversible, it doesn’t matter one bit. The Second Law would always be there, and so would the arrow of time.

        • mfrasca says:

          Dear Guy Gur-Ari,

          Let me show where your argument fails. The crucial point is the initial conditions, and we cannot agree on the behavior of the simulation. In your subsequent argument you already assumed the connection between entropy and probability, but this must be proved.

          Let me show why your conclusions are flawed. In your simulation, as in Nature of course, one can always choose a set of initial conditions such that mixing cannot happen. For example, I can take all the blue and white particles to remain forever in a small volume on their own side, and this is a plausible set of initial conditions for Newton’s laws. The fact that the behavior is ruled by probability is something you are taking for granted from the start, but the very essence of Boltzmann’s work was to prove exactly that, starting from the fundamental laws of dynamics, and this forced him to introduce the hypothesis of molecular chaos. This hypothesis does the job and fixes the connection, but where does it come from?

          Experimentally, as I said above, this kind of reversal of initial conditions is done in several systems where the Loschmidt echo is studied (see e.g. arXiv). What they find is a lack of reversibility and, as I have already said, the correct answer is something people are still looking for. So, it is not just a matter of ideal experiments, but something seen in the laboratory under very controlled conditions.

          What I would like to emphasize is that the relation between entropy and probability, and the fact that a macroscopic system evolves toward a more probable state, are things that must be proved starting from more fundamental laws and cannot be taken for granted at the very outset.

          Cheers,

          Marco

  2. Guy Gur-Ari says:

    >> Let me show why your conclusions are flawed.

    Please do.

    >> one can always choose a set of initial conditions such that mixing cannot happen. For example, I can take all the blue and white particles to remain forever in a small volume on their own side

    I don’t think you can do this (please explain how), but suppose you could. What part of what I said does this refute? Please explain.

    And I must insist — let us analyze this simple simulation. I think the situation I presented is simple enough that we don’t need to talk about Boltzmann, or throw references around. If you like, I can make it even simpler. So please, let’s think rather than link.

    >> The fact that the behavior is ruled by probability is something you are taking for granted from the start

    Certainly not. Quite the opposite in fact: the behavior is completely deterministic since these are Newton’s laws. The simulation is completely deterministic.

    The probabilities only come in when we model the problem in order to understand the results of the simulation. This has nothing to do with how the simulation works: The laws in the simulation are Newton’s laws.

    To model the problem we basically have two options: One, model the problem using Newton’s rules, but then we are stuck because we can’t solve anything. And Two, build a statistical model that replaces complicated deterministic rules with simple probabilities. We can argue on how to do this, but it would not affect the simulation in any way. It has nothing to do with the fundamental laws, and it would not affect the results of the simulation.

    But I see you are uncomfortable with this. Very well, let’s take a step back and see what we can agree on. Here are my basic claims, without any reference to probabilities or entropy:

    1. Write a deterministic simulation that works according to Newton’s laws. As an initial state, place white balls at random positions on one side, and blue balls at random positions on the other side. Let the system evolve. I predict the result: Balls of different colors will be mixed.

    Do you agree that this will be the result?

    Is the randomness of the initial state troubling you?

    2. This shows that this system has an arrow of time, and that the process it describes is irreversible.

    Do you agree?

  3. mfrasca says:

    >>>> Let me show why your conclusions are flawed.

    >> Please do.

    Quite simple. Let us consider a perfect gas, or anyhow a gas dilute enough. I can always take all the positions to be constant and all the velocities to be zero, in such a way as to have the particles wherever I like. These conditions solve Newton’s equations of motion and can be put into your simulation.

    • Guy Gur-Ari says:

      >> Quite simple. Let us consider a perfect gas, or anyhow a gas dilute enough. I can always take all the positions to be constant and all the velocities to be zero, in such a way as to have the particles wherever I like. These conditions solve Newton’s equations of motion and can be put into your simulation.

      I stand corrected. Now what does this prove?

      • mfrasca says:

        You said:

        1. Write a deterministic simulation that works according to Newton’s laws. As an initial state, place white balls at random positions on one side, and blue balls at random positions on the other side. Let the system evolve. I predict the result: Balls of different colors will be mixed.

        Of course, this is false, as there exists a non-trivial set of initial conditions for which your assertion fails. The balls can stay forever in their positions and no mixing occurs. And the following cannot always be taken for granted:

        2. This shows that this system has an arrow of time, and that the process it describes is irreversible.

        But I can also think of other kinds of motion that prove the same point. I can have my particles spend their time bouncing off the perfectly reflecting walls of the box without ever mixing. There is no limit to these possibilities with simple rectilinear motions.

        Indeed, you also have to assume random velocities. But even this cannot guarantee that, after a certain time and a lot of scattering, the system will not turn back to an improbable state. Indeed, we know that this will surely happen after the Poincaré time, which may certainly be long, but its very existence proves again that your argument is flawed.

        So, we must prove that things really go this way, and we must turn back to the main point: you must prove that a system always evolves toward a more probable state. Boltzmann did just this, with all the problems I pointed out in my post.

        • Guy Gur-Ari says:

          Yes, I assume random velocities. Yes, after some time the system will pass through an improbable macrostate. That’s why it’s `improbable’, not `impossible’.

          The basic, practical fact remains: If you take the simulation, choose a random initial state, and evolve in time according to Newton’s laws for some time, you will, with likelihood that is almost 1 (but is not exactly 1), find yourself in a macrostate where the fluid is mixed. I think this is quite obvious. You can try it if you don’t believe me.

          My statement is statistical, not exact. I am predicting what will happen almost all of the time. Not all the time. What you are doing is pointing out that this doesn’t happen all the time. I agree with you. But since the probabilities for the events you suggest are so small in comparison to the likely event, it doesn’t matter.

          >> you must prove that a system always evolves toward a more probable state

          I don’t need to prove this, since I didn’t argue this is true. In fact it is obviously wrong: A macrostate that is `more probable’ is more likely to occur, not guaranteed to occur.

          So let’s see if I understand your point of view. You are claiming that if there is even the *slightest chance* that a system returns to an improbable state, then this means there is no arrow of time?

          And you think that in nature this can never happen, because we see an arrow of time? e.g. you think that, in nature, a homogeneous mixture of blue/white will never ever become separated again?

        • mfrasca says:

          Of course my examples are correct and based on your statements. But even if you put in an interacting force, until you prove it, you are not guaranteed that your system will always reach the most probable state. I repeat this again, as you systematically miss it:

          You must prove it.

          So, unless you give me such a proof, I cannot agree with you about the evolution of a system that you seem to take for granted. Indeed, science does not work this way.

          Finally, you stated:

          Nature could be deterministic, time reversible, probabilistic, not time reversible, it doesn’t matter one bit. The Second Law would always be there, and so would the arrow of time.

          This is a matter of proof. This is what I have been saying from the start. As I showed you, there are a lot of situations where your simulation can fail, and this cannot be settled a priori as you claim to do. The second law of thermodynamics must be derived, and so must the connection between probability and entropy.

          Science is not done with a priori assumptions. Statements should be proved, yet you expect me to swallow yours without a proof. So, the following is plainly wrong, as I showed:

          1. Write a deterministic simulation that works according to Newton’s laws. As an initial state, place white balls at random positions on one side, and blue balls at random positions on the other side. Let the system evolve. I predict the result: Balls of different colors will be mixed.

          Anyhow, you have my appreciation for your attempt to prove Boltzmann wrong.

        • Guy Gur-Ari says:

          >> you are not guaranteed that your system will always reach the most probable state.

          >> I repeat this again, as you systematically miss it:

          >> You must prove it.

          I keep repeating that this is not what I’m claiming…

          I’m not claiming the system will *always* reach the most probable state. It will not. So you can stop using this straw man.

          What I am saying is that it is much more probable that it will end up in the most probable state. This is obvious, because the state is `most probable’…

          Perhaps you want me to prove that the most probable state is the one where the balls are mixed? This can be proved, by the usual methods in this field. But I get the feeling you are not convinced by these proofs.

          So let me offer, instead of a proof, evidence. Are you convinced by evidence?

          Here is the evidence: Write the simulation, and run it many times. Choose random, not fine tuned, initial conditions. Count how many times the system ends up in a completely mixed state. Divide this by the total number of runs.

          If the result is 1, or very close to it, do you accept this as evidence (not a proof) that the most probable macrostate is the completely mixed state?

          Well this is fairly obvious I think, since it’s the way all experiments are conducted…

          Now, how is this different from a proof? The difference is that this run might be a fluke — a statistical fluctuation. But the more runs you perform, the less likely this becomes. This is quite elementary stuff…

          Then, do you accept that this is evidence (not a proof) that this deterministic, time symmetric system has an arrow of time?

          If not, please explain what you mean by an `arrow of time’.

        • mfrasca says:

          As always happens in this kind of discussion, we finally get lost in definitions. So, you now state:

          What I am saying is that it is much more probable that it will end up in the most probable state. This is obvious, because the state is `most probable’…

          and nonsense definitively enters. You should know that your system is strictly deterministic. I thought your statement was “simpler” to prove than this (maybe with Boltzmann’s help). Here you have a lot of work to do in defining probabilities for a deterministic system and what the most probable state should be.

          Good luck!

        • Guy Gur-Ari says:

          >> As always happens in these kind of discussions we are finally lost in definitions

          Come, we are physicists, we don’t need an exact definition. I suggest this example: Take white liquid on one side, blue on the other. Remove the barrier, see what happens. Repeat experiment 1000 times and count how many times the liquid becomes mixed.

          Start with mixed liquid, see what happens. Repeat 1000 times, count how many times liquid separates.

          If in all of the first 1000 experiments the liquid became mixed, and in all of the second 1000 experiments the liquid stayed mixed, I say that the process is irreversible, and I say the system exhibits an arrow of time.

          We can test whether this happens in reality, and we can test it in a simulation (see below for *exactly* how to do this).

          So, I suggest this as a simple test of whether a system exhibits an arrow of time or not. Do you accept this as a good test of an arrow of time?

          >> Here you have a lot of work to do in defining probabilities for a deterministic system and what the most probable state should be.

          It’s actually very simple. Since the laws are deterministic, the only place probability enters is in choosing the initial conditions. If you need help with those, let me be painfully specific about how I choose them:

          Choose the initial positions by a homogeneous distribution (left container for white balls, right container for blue).

          Choose the initial velocities by a gaussian distribution around zero velocity, with some reasonable second moment (I’ll let you pick).

          Run the simulation many times, and count how many times each final macrostate occurs. Divide this number by the total number of runs, and the result is the probability for this macrostate.
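
          For concreteness, here is a minimal sketch of one run of this recipe, a toy 2D gas of equal-mass discs with elastic, time-reversible collisions; none of the particular numbers matter, and repeating it with different seeds gives exactly the frequencies described above.

```python
# Toy sketch of the recipe above (illustration only): white discs start on the
# left half of a 2D box, blue discs on the right, velocities drawn from a
# Gaussian. Dynamics: free flight, mirror reflection off the walls, and elastic
# equal-mass collisions. All steps are deterministic and time-reversible.
import numpy as np

rng = np.random.default_rng(0)

n_per_colour = 30                 # discs of each colour (kept small for speed)
radius, box, dt, steps = 0.02, 1.0, 0.005, 2000

# initial positions: white discs on the left half, blue discs on the right half
white = rng.uniform([radius, radius], [0.5 - radius, box - radius], (n_per_colour, 2))
blue = rng.uniform([0.5 + radius, radius], [box - radius, box - radius], (n_per_colour, 2))
pos = np.vstack([white, blue])
colour = np.array([0] * n_per_colour + [1] * n_per_colour)  # 0 = white, 1 = blue

# initial velocities: Gaussian around zero, as in the recipe above
vel = rng.normal(0.0, 0.5, size=pos.shape)

def collide(pos, vel):
    """Exchange normal velocity components of overlapping, approaching discs
    (the elastic collision rule for equal masses, which is time-reversible)."""
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if 0 < dist < 2 * radius:
                n = d / dist
                approach = np.dot(vel[i] - vel[j], n)
                if approach > 0:
                    vel[i] -= approach * n
                    vel[j] += approach * n

for _ in range(steps):
    pos += vel * dt
    # mirror reflection off the walls, also time-reversible
    for axis in range(2):
        low = pos[:, axis] < radius
        high = pos[:, axis] > box - radius
        pos[low, axis] = 2 * radius - pos[low, axis]
        pos[high, axis] = 2 * (box - radius) - pos[high, axis]
        vel[low | high, axis] *= -1
    collide(pos, vel)

# crude macrostate check: fraction of white discs that ended up in the right half
mixed = np.mean(pos[colour == 0, 0] > 0.5)
print(f"fraction of white discs on the right after {steps} steps: {mixed:.2f}")
```

          Counting, over many seeded runs, how often this final fraction ends up close to one half is precisely the frequency estimate described above.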

          Simple, isn’t it?

          If you think the probabilities depend on the initial distribution, change it and see. You don’t have to believe anything — you can check for yourself.

          But you seem uncomfortable admitting a very simple point.

          So let’s be clear now: I *challenge you* to run the simulation I suggested (like I did, by the way) and see the result for yourself.

          Once you do this, you will find out whether the deterministic system has an arrow of time or not, by the definition I gave above.

        • mfrasca says:

          Dear Guy Gur-Ari,

          Of course, even if we are physicists (are you?), definitions are important. For one reason: when one goes to do measurements, vagueness is of no help, and our main tool is mathematics. Sloppiness should be rejected on any grounds.

          I do not need to run your simulation. Your initial state is something like this: the liquid is homogeneous and has a Gaussian (Maxwellian) distribution of velocities. Essentially you are saying: I take two liquids, both in equilibrium, and I make them mix. You can change these distributions as you like, making them unphysical if you want, but the problem will remain.

          The question should be: who prepares such deterministic systems with such initial probability distributions? This is exactly the work of kinetic theory, and this is where the problem of the arrow of time comes in. Your point is interesting because it makes us think about where probability slips into many-body physics. Indeed, a Gaussian distribution of velocities comes out naturally as the most probable state from the H-theorem for a perfect gas.
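
          For a perfect gas this most probable state is the Maxwell-Boltzmann distribution,

          $$ f(\mathbf{v}) = n \left( \frac{m}{2\pi k_B T} \right)^{3/2} \exp\!\left( -\frac{m \mathbf{v}^2}{2 k_B T} \right), $$

          which is exactly the distribution that makes Boltzmann’s H stationary, so a Gaussian velocity distribution is already the output of the kinetic argument rather than a neutral starting point.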

          In conclusion, you introduce an arrow of time from the start, and you are a step behind Boltzmann.

          Cheers,

          Marco

        • Kupervasser Oleg says:

          1) I completely agree with the main argument about the erasure of the observer’s memory. It is not an ORIGINAL argument of Maccone’s. It was formulated in my previous papers on arXiv (2004, 2005, 2009):
          O. Kupervasser, arXiv:nlin/0407033v2 (2004).
          O. Kupervasser, arXiv:nlin/0508025v1 (2005).
          O. Kupervasser, arXiv:0911.2076v3 (2009).
          There it is also formulated without any of Maccone’s mistakes. In my Comment you can find a short formulation of the main ideas of my earlier, longer paper.

          2) This argument can explain why we cannot see entropy decrease, but it cannot explain the low-entropy initial state of the Universe. Maccone writes about this in his reply. But the low-entropy initial state can easily be explained by the anthropic principle.

          3) The source of all the arguments against Maccone’s theory is the physical mistakes he makes, described in my Comment: http://arxiv.org/abs/0911.2610

          4) Time is really symmetric. “The future” and “the past” depend on the choice of time direction, so we have the freedom to choose any direction of time. The optimal choice of this direction is the direction of entropy increase, so entropy increase is an almost trivial law. But we have two problems with this:

          a) Why is the direction of time the same in all parts of the Universe?
          Answer:
          A very small interaction between all the parts of the Universe is enough to synchronize the time-arrow directions of all subsystems. This happens because the entropy-decreasing process is unstable, while the entropy-increasing one is stable. (See my Comment and the references in it.)

          b) Poincaré’s recurrence theorem tells us that the direction of time must change to the opposite one. Why does the observer not see such a change?
          Answer:
          A small interaction between the observer and the observed system synchronizes the time arrows of the observer and of the observed system. This leads to the erasure of the observer’s memory. We can put this another way: the directions of the eigen time arrows of the observer and of the observed system (I remind you that these directions are chosen in the direction of entropy increase!) are always the same, so the observer can only see entropy increase. The directions of the two eigen time arrows really do change to the opposite ones, but they do it TOGETHER! (See my Comment and the references in it.)

          So the law of entropy increase is not always correct. In some IMPRACTICAL situations (non-perturbing observation of a very isolated system) we can IN PRINCIPLE see entropy decrease. Maccone does not understand this.

          The law of entropy increase is also not correct for non-macroscopic systems (see the Comment by David Jennings and Terry Rudolph). This is no big surprise! We know that this law is correct only for macroscopic systems. Maccone does not understand this.

          Maccone also uses an incorrect definition of the thermodynamic entropy. His entropy is not a function of the system state alone, but also depends on the observer! It is an information entropy, not the thermodynamic entropy used in the formulation of the second law.

    • Guy Gur-Ari says:

      Umm… sorry, I retract my comment (the one saying ‘I stand corrected’), since your example is not quite correct. In the simulation the particles are supposed to collide. In order to collide there must be some force acting between them, and a realistic force wouldn’t completely vanish at large distances. So the particles would start moving.

      But let’s help you and suppose the force has some cutoff and vanishes at some distance, and you place particles farther apart than this distance. How does this show that what I said is wrong?

      I must say this is the least interesting point in this entire discussion, since you didn’t explain how this example proves that anything I said is wrong. So I’m waiting patiently for an answer to my more pertinent questions.

  4. carlbrannen says:

    From quantum mechanics, the clue to getting an arrow of time is to look for violations of Hermiticity. The problem with finding these is that the usual quantum mechanics is Hermitian by construction. To get non-Hermiticity you can replace spinors / state vectors with pure density matrices / operators.

    • mfrasca says:

      Hi Carl,

      Indeed, this is the point, and it arises from a deep analysis by John von Neumann in his classic book on quantum mechanics. It appears that measurement is a non-Hermitian process, somehow far from the main formalism of the theory.
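
      To be concrete, in von Neumann’s scheme a measurement with projectors $P_k$ takes a density matrix to

      $$ \rho \;\longrightarrow\; \rho' = \sum_k P_k\, \rho\, P_k , $$

      while the ordinary dynamics is the unitary map $\rho \to U \rho\, U^\dagger$ with $U = e^{-iHt/\hbar}$. The first map is not of the second form: this is the step that sits outside the Hamiltonian, Hermitian-generated evolution.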

      My view is that there are two ways to evade this conclusion. The first, very elegant, comes from environmental decoherence; the other, as I mention in my post with the Lieb and Simon theorem, is taking the limit of the number of particles going to infinity.

      Marco

      • carlbrannen says:

        I think there’s another way. The propagators for “tripled Pauli spin” are inherently non-Hermitian, but you get regular spin-1/2 when you consider the long-time stable propagators.

        The idea is that reality is basically completely non-Hermitian, and the Hermiticity we see appears as an emergent property due to our blurry limitations in measurement. This is in my recent paper (now under review at Foundations of Physics).

  5. jr says:

    Consider a bubble chamber photo with an electron spiraling until captured. Is the arrow of time that we do not see electrons hopping out of atoms and spiraling up to high velocity and heading into the accelerator tube, or is the arrow that, as it enters, it undergoes one interaction after another along its path?

    • mfrasca says:

      Dear jr,

      This is a really interesting example, as it involves the question of how a track in a bubble chamber forms. Indeed, these bubbles are an inherently thermodynamic effect, under the spell of the second law, and we cannot expect to see the fully reversed process.

      Marco

    • Guy Gur-Ari says:

      The arrow of time is the first phenomenon — the electrons going in but not coming out. At least, this is what people usually mean by this term — the existence of irreversible processes.

      The second phenomenon is the fact that time `flows’ in some sense. You can call this an arrow of time, but it’s not the usual meaning.

      Regardless of how you call them, my claim is that they are related as follows: If you accept that time flows (2nd phenomenon) as an axiom, you get an arrow of time (irreversible processes, 1st phenomenon) for free in any reasonable physical modeling of the collisions. Including deterministic, time-reversible Newton’s laws.

  6. […] Maccone’s argument (see here)  is on the hot list yet. Today, a paper by David Jennings and Terry Rudolph (Imperial College, […]
