Thermodynamic Asymmetry in Time

First published Thu Nov 15, 2001; substantive revision Fri Jul 29, 2011

Macroscopic processes appear to be temporally “directed” in some sense. Systems spontaneously evolve to future equilibrium states, but they do not spontaneously evolve away from equilibrium states. The nature of this directedness raises many questions in the foundations of philosophy and science.

Thermodynamics is the science that describes much of the time-asymmetric behavior found in the world. This entry's first task, consequently, is to show how thermodynamics treats temporally ‘directed’ behavior. It then concentrates on the following two questions. (1) What is the origin of the thermodynamic asymmetry in time? In a world possibly governed by time-symmetric laws, how should we understand the time-asymmetric laws of thermodynamics? (2) Does the thermodynamic time asymmetry explain the other temporal asymmetries? Does it account, for instance, for the fact that we know more about the past than the future? The discussion thus divides between thermodynamics being an explanandum or explanans. In the former case the answer will be found in philosophy of physics; in the latter case it will be found in metaphysics, epistemology, and other fields, though in each case there will be blurring between the disciplines.

1. Thermodynamic Time Asymmetry: A Brief Guide

Consider the following.

Place some chlorine gas in a small closed flask in the corner of a room. Set it up so that an automaton will remove its cover in 1 minute. Now we know what to do: run. Chlorine is poisonous, and furthermore, we know the gas will spread reasonably quickly through its available volume. The chlorine originally in equilibrium in the flask will, upon being freed, ‘relax’ to a new equilibrium.

Or less dramatically:

Place an iron bar over a flame for half an hour. Place another one in a freezer for the same duration. Remove them and place them against one another. Within a short time the hot one will ‘lose its heat’ to the cold one. The new combined two-bar system will settle to a new equilibrium, one intermediate between the cold and hot bars' original temperatures. Eventually the bars will together settle to roughly room temperature.

These are two examples of a tendency of systems to spontaneously evolve to equilibrium; but there are indefinitely more examples in all manner of substance. The physics first used to systematically describe such processes is thermodynamics.

First developed in S. Carnot's Reflections on the Motive Power of Fire (1824), the science of classical thermodynamics is intimately associated with the industrial revolution. Most of the results responsible for the science originated from the practice of engineers trying to improve steam engines. Begun in France and England in the late eighteenth and early nineteenth centuries, the science quickly spread throughout Europe. By the mid-nineteenth century, Clausius in Germany and Thomson (later Lord Kelvin) in England had developed the theory in great detail.

Thermodynamics is a ‘phenomenal’ science, in the sense that the variables of the science range over macroscopic parameters such as temperature and volume. Whether the microphysics underlying these variables is motive atoms in the void or an imponderable fluid is largely irrelevant to this science. The developers of the theory both prided themselves on this fact and at the same time worried about it. Clausius, for instance, was one of the first to speculate that heat consisted solely of the motion of particles (without an ether), for it made the equivalence of heat with mechanical work less surprising. However, as was common, he kept his “ontological” beliefs separate from his statement of the principles of thermodynamics because he didn't wish to (in his words) “taint” the latter with the speculative character of the former.[1]

A treatment of thermodynamics naturally begins with the statements it takes to be laws of nature. These laws are founded upon observations of relationships between particular macroscopic parameters and they are justified by the fact that they are empirically adequate. No further justification of these laws is to be found — at this stage — from the details of microphysics. Rather, stable, counterfactual-supporting generalizations about macroscopic features are enshrined as law. The typical textbook treatment of thermodynamics describes some basic concepts, states the laws in a more or less rough way and then proceeds to derive the concepts of temperature and entropy and the various thermodynamic equations of state. It is worth remarking, however, that in the last fifty years the subject has been presented with a degree of mathematical rigor not previously achieved. Originating from the early axiomatization by Carathéodory in 1909, the development of ‘rational thermodynamics’ has clarified the concepts and logic of classical thermodynamics to a degree not generally appreciated. There now exist many quite different, mathematically exact approaches to thermodynamics, each starting with different primitive kinds and/or observational regularities as axioms. (For a popular presentation of a recent axiomatization, see Lieb and Yngvason 2000.)

In the traditional approach classical thermodynamics has two laws, the second of which is our main focus. (Readers may have heard of a ’third law’ as well, but it was added later and is not relevant to the present discussion.) The first law expresses the conservation of energy. The law uses the concept of the internal energy of a system, U, which is a function of variables such as volume. For thermally isolated (adiabatic) systems—think of systems such as coffee in a thermos—the law states that this function, U, is such that the work W delivered to a system's surroundings is compensated by a loss of internal energy, i.e., dW = -dU. When Joule and others showed that mechanical work and heat were interconvertible, consistency with the principle of energy conservation demanded that heat, Q, considered as a different form of energy, be taken into account. For non-isolated systems we extend the law as dQ = dU + dW, where dQ is the differential of the amount of heat added to the system (in a reversible manner).
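As a toy illustration with made-up numbers (they are not from the text): suppose 100 J of heat is added to a gas while the gas delivers 40 J of work to its surroundings. The first law, written for finite amounts as Q = ΔU + W, then fixes the change in internal energy at ΔU = 100 J − 40 J = 60 J.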

The conservation of energy tells us nothing about temporally asymmetric behavior. In particular, it doesn't follow from the first law that interacting systems quickly tend to approach equilibrium (a state where the values of the macroscopic variables remain approximately stable), and once achieved, never leave this state. It is perfectly consistent with the first law that systems in equilibrium leave equilibrium. Since this tendency of systems cannot be derived from the First Law, another law is needed. Although Carnot was the first to state it, the formulations of Kelvin and Clausius are standard:

Kelvin's Second Law: There is no thermodynamic process whose sole effect is to transform heat extracted from a source at uniform temperature completely into work.

Clausius' Second Law: There is no thermodynamic process whose sole effect is to extract a quantity of heat from a colder reservoir and deliver it to a hotter reservoir.

Kelvin's version is essentially the same as the version arrived at by both Carnot and Planck, whereas Clausius' version differs from these in a few ways.[2]

Clausius' version transparently rules out anti-thermodynamic behavior such as a hot iron bar extracting heat from a neighboring cold iron bar. The cool bar cannot give up a quantity of heat to the warmer bar (without something else happening). Kelvin's statement is perhaps less obvious. It originates in an observation about steam engines, namely, that heat energy is a ‘poor’ grade of energy. Consider a gas-filled cylinder with a frictionless piston holding the gas down at one end. If we put a flame under the cylinder, the gas will expand and the piston can perform work, e.g., it might move a ball. However, we can never convert the heat energy straight into work without some other effect occurring. In this case, the gas occupies a larger volume.

In 1854 Clausius introduced the notion of the ‘equivalence value’ of a transformation, a concept that is the ancestor of the modern day concept of entropy. Later, in 1865, Clausius coined the term ‘entropy’ for a similar concept (the word derives from the Greek word for transformation). The entropy of a state A, S(A), is defined as the integral S(A) = ∫ dQ/T taken over a reversible transformation from O to A, where O is some arbitrary fixed state. For A to have an entropy, the transformation from O to A must be quasi-static, i.e., a succession of equilibrium states. Continuity considerations then imply that the initial and final states O and A must also be equilibrium states. In terms of entropy, the Second Law states that in a transformation from equilibrium state A to equilibrium state B, the difference S(B) − S(A) is greater than or equal to the integral ∫ dQ/T taken over the transformation. Loosely put, for realistic systems, this implies that in the spontaneous evolution of a thermally closed system the entropy can never decrease and that it attains its maximum value for states at equilibrium. We are invited to think of the Second Law as driving the gas to its new, higher entropy equilibrium state. Using this concept of entropy, thermodynamics is able to capture an extraordinary range of phenomena under one simple law. Remarkably, whether they are gases filling their available volumes, two iron bars in contact coming to the same temperature, or milk mixing in your coffee, they all have an observable property in common: their entropy increases. Coupled with the First Law, the Second Law is remarkably powerful. It appears that all classical thermodynamical behavior can be derived from these two simple statements (Penrose 1970).[3]
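To make the iron bar example concrete (the numbers and the approximation are supplied here for illustration, not taken from the text): suppose a small quantity of heat Q passes from the hot bar at temperature Th to the cold bar at Tc, with the bars large enough that their temperatures barely change during the transfer. Then the total entropy change is approximately ΔS = Q/Tc − Q/Th, which is positive because Tc < Th. Heat passing in the opposite direction would make this quantity negative, which is exactly the sort of entropy-decreasing transformation the Second Law rules out.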

There are a number of philosophical questions one might ask about the laws of thermodynamics. For instance, where exactly is time-asymmetry found in the above statement of the Second Law? If Uffink 2001 is correct, then this “static” Second Law does not encode any time asymmetry; the spontaneous movement from non-equilibrium to equilibrium must be described as a new thermodynamic law. Another question is whether the Second Law is universal. That is, does it apply to the universe as a whole, so that we can say the universe's entropy is increasing, or does it only apply to select sub-systems of the universe? (See Uffink 2001 (Other Internet Resources) for an interesting historical discussion of this topic, too.) How are these laws framed in a relativistic universe? Do Lorentz boosted gases appear hotter or colder in the new frame? Surprisingly, the correct (special) relativistic transformation rules for thermodynamic quantities, and thus the relativistic understanding of thermodynamic time asymmetry, are still controversial. Einstein himself answered the question differently throughout his life! With all the current activity of physicists being focused on the thermodynamics of black holes in general relativity and quantum gravity, it is amusing to note that special relativistic thermodynamics is still a field with many open questions, both physically and philosophically. (See Earman 1981 and Liu 1994.)

Another important question concerns the reduction of thermodynamic concepts such as entropy to their mechanical, or statistical mechanical, basis. As even a cursory glance at statistical mechanics reveals, there are many candidates for the statistical mechanical entropy, each the center of a different program in the foundations of the field. Surprisingly, there is no consensus as to which entropy is best suited to be the reduction basis of the thermodynamic entropy (see, for example, Sklar 1993; Callender 1999; Lavis 2005). Consequently, there is little agreement about what the Second Law looks like in statistical mechanics. Despite the worthiness of these issues, this article will focus on the particularly important problem of the direction of time (though as we'll see, many issues go by this name.)

2. The Problem of the Direction of Time I

This ‘problem of the direction of time’ has its source in the debates over the status of the second law of thermodynamics between L. Boltzmann and some of his contemporaries, notably, J. Loschmidt, E. Zermelo and E. Culverwell. Boltzmann sought the mechanical underpinning of the second law. He devised a particularly ingenious explanation for why systems tend toward equilibrium. Consider an isolated gas of N particles in a box, where N is large enough to make the system macroscopic (N ≈ 10^23 or more). For the sake of familiarity we will work with classical mechanics. We can characterize the gas by the coordinates xi and momenta pi of each of its particles and represent the whole system by a point X = (q, p) in a 6N-dimensional phase space known as Γ, where q = (q1, …, q3N) and p = (p1, …, p3N).

Boltzmann's great insight was to see that the thermodynamic entropy arguably “reduced” to the volume in Γ picked out by the macroscopic parameters of the system. The key ingredient is partitioning Γ into compartments, such that all of the microstates X in a compartment are macroscopically (and thus thermodynamically) indistinguishable. To each macrostate M, there corresponds a volume of Γ, |ΓM|, whose size will depend on the macrostate in question. For combinatorial reasons, almost all of Γ corresponds to a state of thermal equilibrium. There are simply many more ways to be distributed with uniform temperature and pressure than ways to be distributed with nonuniform temperature and pressure. There is a vast numerical imbalance in Γ between the states in thermal equilibrium and the states in thermal nonequilibrium.

We can now introduce Boltzmann's famous entropy formula (up to an additive constant):

SB(M(X)) = k log |ΓM|

where |ΓM| is the volume in Γ associated with the macrostate M, and k is Boltzmann's constant. SB provides a relative measure of the amount of Γ corresponding to each M. Given the mentioned asymmetry in Γ, almost all microstates are such that their entropy value is overwhelmingly likely to increase with time. When the constraints are released on systems initially confined to small sections of Γ, typical systems will evolve into larger compartments. Since the new equilibrium distribution occupies almost all of the newly available phase space, nearly all of the microstates originating in the smaller volume will tend toward equilibrium. Except for those incredibly rare microstates conspiring to stay in small compartments, microstates will evolve in such a way as to have SB increase. Though substantial questions can be raised about the details of this approach, and philosophers can rightly worry about the justification of the standard probability measure on Γ, this explanation seems to offer a plausible framework for understanding why the entropy of systems tends to increase with time. (For further explanation and discussion see Bricmont 1995, Callender 1999, Frigg 2008, 2009, Goldstein 2001, Klein 1973, Lavis 2005 and Lebowitz 1993, Uffink 2006.)
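A minimal numerical sketch may help convey the scale of this imbalance. The toy macrostate below (the fraction of N labelled particles lying in the left half of a box) and the particle number are illustrative choices, not anything from the text:

```python
from math import lgamma

def log_volume(N, n_left):
    """ln of the number of microstates with n_left of N labelled particles in the left half,
    i.e. ln|Gamma_M| (up to an additive constant) for that macrostate."""
    return lgamma(N + 1) - lgamma(n_left + 1) - lgamma(N - n_left + 1)

N = 10**6  # a modest stand-in for a macroscopic particle number
for frac in (0.50, 0.51, 0.60, 0.90):
    print(f"{frac:.2f} of the particles on the left: ln|Gamma_M| ~ {log_volume(N, int(frac * N)):,.0f}")

# The 50/50 macrostate's log-volume exceeds the 90/10 macrostate's by roughly 370,000,
# so the uniform macrostate occupies about e^370000 times more of Gamma; with a realistic
# N of around 10**23 the disproportion is vastly more extreme still.
```

Since SB is just the logarithm of these volumes (times k), the uniform macrostate's Boltzmann entropy dwarfs that of the lopsided macrostates, even for this comparatively tiny N.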

Before Boltzmann explained entropy increase as described above, he proposed a now notorious “proof” known as the “H-theorem” to the effect that entropy must always increase (see Brown, Myrvold and Uffink 2009). Loschmidt and Zermelo launched objections to the H-theorem. But an objection in their spirit can also be advanced against Boltzmann's later view sketched above. Loosely put, because the classical equations of motion are time reversal invariant (TRI), nothing in the original explanation necessarily referred to the direction of time. (See Hurley 1985.) Though I just stated the Boltzmannian account of entropy increase in terms of entropy increasing into the future, the explanation can be turned around and made for the past temporal direction as well. Given a gas in a box that is in a nonequilibrium state, the vast majority of microstates that are antecedents of the dynamical evolution leading to the present macrostate correspond to a macrostate with higher entropy than the present one. Therefore, not only is it highly likely that typical microstates corresponding to a nonequilibrium state will evolve to higher entropy states, but it is also highly likely that they evolved from higher entropy states.

Concisely put, the problem is that given a nonequilibrium state at time t2, it is overwhelmingly likely that

(1) the nonequilibrium state at t2 will evolve to one closer to equilibrium at t3

but that due to the reversibility of the dynamics it is also overwhelmingly likely that

(2) the nonequilibrium state at t2 has evolved from one closer to equilibrium at t1

where t1 < t2 < t3. However, transitions described by (2) do not seem to occur; or phrased more carefully, not both (1) and (2) occur. However we choose to use the terms ‘earlier’ and ‘later,’ clearly entropy doesn't increase in both temporal directions. For ease of exposition let us dub (2) the culprit.
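A toy model can make the worry vivid. The Kac ring model sketched below is not discussed in this entry; it is offered only as an illustrative example of a deterministic, time reversal invariant dynamics. N balls, each black or white, sit on a ring; at every step each ball moves one site clockwise, flipping color whenever it crosses one of a fixed set of randomly placed 'marked' edges. Starting from the highly ordered all-white state at t2 and running the dynamics either forward or backward, the order washes out in both temporal directions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu = 2000, 0.1                      # ring size and fraction of marked edges
markers = rng.random(N) < mu           # markers[i]: the edge between site i and site i+1 is marked

def step_forward(balls):
    """Each ball moves one site clockwise, flipping color if it crosses a marked edge."""
    return np.roll(balls ^ markers, 1)

def step_backward(balls):
    """The exact inverse of step_forward: the dynamics is deterministic and reversible."""
    return np.roll(balls, -1) ^ markers

def order(balls):
    """|#white - #black| / N: equal to 1 for the all-white state, near 0 at 'equilibrium'."""
    return abs(N - 2 * int(balls.sum())) / N

state = np.zeros(N, dtype=bool)        # the nonequilibrium state at t2: all balls white
fwd, bwd = state.copy(), state.copy()
for t in range(1, 41):
    fwd, bwd = step_forward(fwd), step_backward(bwd)
    if t % 10 == 0:
        print(f"t2+{t}: order={order(fwd):.3f}   t2-{t}: order={order(bwd):.3f}")
# Both columns decay toward 0: evolved toward the future or toward the past, the ordered
# state relaxes toward a 50/50 mixture -- the reversibility worry in miniature.
```

Nothing in this dynamics privileges one temporal direction, so if relaxation toward the future is 'overwhelmingly likely', so is relaxation toward the past; this is just (1) and (2) restated for the toy model.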

The traditional problem is not merely that nomologically possible (anti-thermodynamic) behavior does not occur when it could. That is not straightforwardly a problem: all sorts of nomologically allowed processes do not occur. Rather, the problem is that statistical mechanics seems to make a prediction that is falsified, and that is a problem according to anyone's theory of confirmation.

Many solutions to this problem have been proposed. Generally speaking, there are two ways to solve the problem: eliminate transitions of type (2) either with special boundary conditions or with laws of nature. The former method works if we assume that earlier states of the universe are of comparatively low entropy and that (relatively) later states are not also low-entropy states. There are no high-to-low-entropy processes simply because earlier entropy was very low. Alternatively, the latter method works if we can somehow restrict the domain of physically possible worlds to those admitting only low-to-high transitions. The laws of nature are the straitjacket on what we deem physically possible. Since we need to eliminate transitions of type (2) while keeping those of type (1) (or vice versa), a necessary condition of the laws doing this job is that they be time reversal noninvariant. Our choice of strategy boils down to either assuming temporally asymmetric boundary conditions or adding (or changing to) time reversal noninvariant laws of nature that make entropy increase likely. Many approaches to this problem have sought to avoid this dilemma, but a little analysis of any proposed ‘third way’ arguably proves this to be false.

2.1 Past Hypothesis

Without changing the TRI laws of nature, there is no way to eliminate transition (2) in favor of (1). Nevertheless, appealing to temporally asymmetric boundary conditions, as we've seen, allows us to describe a world wherein (1) but not (2) occurs. A cosmological hypothesis claiming that in the very distant past entropy was much lower will work. Boltzmann, as well as many of the last century's greatest scientists, e.g., Einstein, Feynman, and Schroedinger, saw that this hypothesis is necessary given our laws. (Boltzmann, however, explained this low-entropy condition by treating the observable universe as a natural statistical fluctuation away from equilibrium in a vastly larger universe.) Earlier states do not have higher entropy than present states because we make the cosmological posit that the universe began in an extremely tiny section of its available phase space. Albert 2000 calls this the “Past Hypothesis” and argues that it solves both this problem of the direction of time and also the one to be discussed below. Note that classical mechanics is also compatible with a “Future Hypothesis”: the claim that entropy is very low in the distant future. The restriction to “distant” is needed, for if the near future were of low entropy, we would not expect the thermodynamic behavior that we see — see Cocke 1967, Price 1996 and Schulman 1997 for discussion of two-time boundary conditions.

The main dissatisfaction with this solution is that many do not find it sufficiently explanatory of thermodynamic behavior. That a gas in the lab last Wednesday filled its available volume due to special initial conditions may be credible. But that gases everywhere for all time should expand through their available volumes due to special initial conditions is, for some, incredible. The common cause of these events is viewed as unlikely. Expressing this feeling, Penrose 1989 estimates that the probability, given the standard measure on phase space, of the universe starting in the requisite state is astronomically small. Callender 1997, however, assimilates the problem to the general one facing the special sciences — all special science laws require conspiratorial initial conditions for their generalizations to hold. If the problem really is a problem, according to Callender, it is not necessarily one specific to thermodynamics and time's direction. Echoes of this debate resonate through another distinct question of interest to philosophers: is the Past Hypothesis itself in need of explanation? Price 2004 argues that it is, whereas Callender 2004a argues that it is not.

While not denying that temporally asymmetric boundary conditions are needed to solve the problem, Earman 2006 is very critical of many other claims made on behalf of the Past Hypothesis. The Past Hypothesis is sometimes said to be independently confirmed by modern cosmology, but is that really so? How does the Past Hypothesis sit with modern inflationary cosmology, especially with the probability measures it places over the initial conditions of the universe? On these questions see also Callender 2004b; 2011.

Another persistent line of criticism might be labelled the “subsystem” worry. It's consistent with the Past Hypothesis, after all, that none of the subsystems on Earth ever display thermodynamically asymmetric behavior. How exactly does the global entropy increase of the universe imply local entropy increase among the subsystems (especially, among the subsystems which gave rise to us positing the Second Law anyway)? See Callender 200x, Frisch 2010, North 2010, and Winsberg 2004 for further discussion.

2.2 Electromagnetism

The physicist W. Ritz and others have claimed that electromagnetism accounts for the thermodynamic arrow. The wave equation for both mechanical and electromagnetic processes is well known to admit both ‘advanced’ and ‘retarded’ solutions. The retarded solution

φret(r,t) = ∫ ρ(r′, t − |r − r′|/c) / |r − r′| d³r′

gives (up to constant factors) the field amplitude φret at r,t in terms of the source density ρ at r′ at earlier times. The advanced solution

φadv(r,t) = ∫ ρ(r′, t + |r − r′|/c) / |r − r′| d³r′

gives the field amplitude φadv at r,t in terms of the source density at r′ at later times. Despite this symmetry, nature seems to contain only processes obeying the retarded solutions. (This popular way of stating the electromagnetic asymmetry is actually misleading. The advanced solutions describe the radiation sink's receiving waves, and this happens all the time. The asymmetry of radiation instead lies with the form (concentrated or dispersed) the sources take.)

If we place an isolated concentrated gas in the middle of a large volume, we would expect the particles to spread out in an expanding sphere about the center of the gas, much as radiation spreads out. It is therefore tempting to think that there is a relationship between the thermodynamic and electromagnetic arrows of time. In a debate in 1909, A. Einstein and W. Ritz disagreed about the nature of this relationship. Ritz took the position that the asymmetry of radiation had to be judged lawlike and that the thermodynamic asymmetry could be derived from this law. Einstein's position was instead that “irreversibility is exclusively based on reasons of probability” (Einstein and Ritz 1909, quoted from Zeh 1989, 13). It is unclear whether he meant probability plus the right boundary conditions, or simply probability alone. In any case, Ritz believed the radiation arrow causes the thermodynamic one, whereas Einstein seems to have held something closer to the opposite position.

If this is correct, then it seems that Einstein must be right—or at least, closer to being correct than Ritz. Ritz' position appears implausible if only because it implies gases composed of neutral particles will not tend to spread out. That aside, it is plausible to think that the wave asymmetry must originate in asymmetric boundary conditions, just as the statistical mechanical asymmetry may. Recall the statistical version of the Second Law. It implies that with the right (improbable) initial conditions a system will undergo improbable-to-probable transitions rather than the reverse. The crucial point to see is that the usual retarded radiation is a kind of improbable-to-probable transition. A concentrated source is improbable, but given its existence, a system will evolve toward more probable regions of the phase space, i.e., the waves will spread. Advanced radiation is likewise a species of improbable-to-probable transition. Given an improbable source in the past, it will spread backwards in time to more probable regions of the phase space too. Using Popper's famous mechanical wave example as an analogy, throwing a rock into a pond so that waves on the surface spread out into the future requires every bit the conspiracy that is needed for waves to converge on a point in order to eject a rock from the bottom. Both are equally likely, pace Popper; whether one or both happen depends upon the boundary conditions. The real asymmetry lies in the fact that in the past there are concentrated sources for waves, whereas in the future there tend not to be. These considerations do not mean the radiation arrow reduces in any sense to the thermodynamic arrow. Rather, the thing to say is that the radiation arrow just seems to be the statistical mechanical one, with the qualification that the medium sustaining the improbable-to-probable transitions is electromagnetic.

For further discussion of this controversial point, see the articles/chapters by Arntzenius 1993, Earman 2011, Frisch 2000, 2005, North 2003, Price 1996, 2006, Rohrlich 2006 and Zeh 1989/2005.

2.3 Cosmology

Cosmology presents us with a number of apparently temporally asymmetric mechanisms. The most obvious one is the inexorable expansion of the universe. In cosmology the spatial scale factor a(t), which gives the distance between co-moving observers, is increasing. The universe seems to be uniformly expanding relative to our local frame. Since this temporal asymmetry occupies a rather unique status it is natural to wonder whether it might be the ‘master’ arrow. The cosmologist T. Gold 1962 proposed just this. Believing that entropy values covary with the size of the universe, Gold asserts that at the maximum radius the thermodynamic arrow will ‘flip’ due to the re-contraction. However, as Tolman 1936 has shown in some detail, a universe filled with non-relativistic particles will not suffer entropy increase due to expansion, nor will an expanding universe uniformly filled with blackbody radiation increase its entropy either. Interestingly, Tolman demonstrated that more realistic universes containing both matter and radiation will change their entropy contents. Coupled with expansion, various processes will contribute to entropy increase, e.g., energy will flow from the ‘hot’ radiation to the ‘cool’ matter. So long as the relaxation time of these processes is larger than the expansion time scale, they should generate entropy. We thus have a purely cosmological method of entropy generation.

Others (e.g., Davies 1994) have thought inflation provides a kind of entropy-increasing behavior — again, given the sort of matter content we have in our universe. The inflationary model is an alternative of sorts to the standard big bang model, although by now it is so well entrenched in the cosmology community that it really deserves the tag ‘standard’. In this scenario, the very early universe is in a quantum state called a ‘false vacuum’, a state with a very high energy density and negative pressure. Gravity acts like Einstein's cosmological constant, so that it is repulsive rather than attractive. Under this force the universe enters a period of exponential inflation, with geometry resembling de Sitter space. When this period ends any initial inhomogeneities will have been smoothed to insignificance. At this point ordinary stellar evolution begins. Loosely associating gravitational homogeneity with low entropy and inhomogeneity with higher entropy, inflation is arguably another source of cosmological entropy generation. (For a distinct and recent version of an inflation-inspired explanation, see Carroll and Chen 2004, Other Internet Resources.)

There are other proposed sources of cosmological entropy generation, but these should suffice to give the reader a flavor of the idea. We shall not be concerned with evaluating these scenarios in any detail. Rather, our concern is about how these proposals explain time's arrow. In particular, how do they square with our earlier claim that the issue boils down to either assuming temporally asymmetric boundary conditions or of adding time reversal non-invariant laws of nature?

The answer is not always clear, owing in part to the fact that the separation between laws of nature and boundary conditions is especially slippery in the science of cosmology. Advocates of the cosmological explanation of time's arrow typically see themselves as explaining the origin of the needed low-entropy cosmological condition. Some explicitly state that special initial conditions are needed for the thermodynamic arrow, but differ with the conventional ‘statistical’ school in deducing the origin of these initial conditions. Earlier low-entropy conditions are not viewed as the boundary conditions of the spacetime. They arose, according to the cosmological schools, a second or more after the big bang. But when the universe is the size of a small particle, a second or more is enough time for some kind of cosmological mechanism to bring about our low-entropy ‘initial’ condition. What cosmologists (primarily) differ about is the precise nature of this mechanism. Once the mechanism creates the ‘initial’ low-entropy state, we have the same sort of explanation of the thermodynamic asymmetry as discussed in the previous section. Because the proposed mechanisms are supposed to make the special initial conditions inevitable or at least highly probable, this maneuver seems like the alleged ‘third way’ mentioned above.

The central question about this type of explanation, as far as we're concerned, is this: Is the existence of the low ‘initial’ state a consequence of the laws of nature alone or the laws plus boundary conditions? In other words, first, does the proposed mechanism produce low-entropy states given any initial condition, and second, is the operation of the mechanism itself a consequence of the laws alone or of the laws plus initial conditions? We want to know whether our question has merely been shifted back a step, whether the explanation is a disguised appeal to special initial conditions. Though we cannot here answer the question in general, we can say that the two mechanisms mentioned are not lawlike in nature. Expansion fails on two counts. There are boundary conditions in expanding universes that do not lead to an entropy gradient, i.e., conditions without the right matter-radiation content, and there are boundary conditions that do not lead to expansion, e.g., matter-filled Friedmann models that do not expand. Inflation fails at least on the second count. Despite advertising, arbitrary initial conditions will not give rise to an inflationary period (Earman 1995, pp. 152–3). Furthermore, it's not clear that inflationary periods will give rise to thermodynamic asymmetries (Price 1996, ch. 2). The cosmological scenarios do not seem to make the thermodynamic asymmetries a result of nomic necessity. The cosmological hypotheses may be true, and in some sense, they may even explain the low-entropy initial state. But they do not appear to provide an explanation of the thermodynamic asymmetry that makes it nomologically necessary or even likely.

Another way to see the point is to consider the question of whether the thermodynamic arrow would ‘flip’ if (say) the universe started to contract. Gold, as we said above, asserts that at the maximum radius the thermodynamic arrow must ‘flip’ due to the re-contraction. Not positing a thermodynamic flip while maintaining that entropy values covary with the radius of the universe is clearly inconsistent — it is what Price 1996 calls the fallacy of a “temporal double standard”. Gold does not commit this fallacy, and so he claims that the entropy must decrease if ever the universe started to re-contract. However, as Albert 2000 writes, “there are plainly locations in the phase space of the world from which … the world's radius will inexorably head up and the world's entropy will inexorably head down”. Since that is the case, it does not follow from the laws that the thermodynamic arrow will flip during re-contraction; therefore, without changing the fundamental laws, the Gold mechanism cannot explain the thermodynamic arrow in the sense we want.

From these considerations we can understand what Price 1996 calls the basic dilemma: either we explain the earlier low-entropy condition Gold-style or it is inexplicable by time-symmetric physics (82). Because there is no net asymmetry in a Gold universe, we might paraphrase Price's conclusion in a more disturbing manner as the claim that the (local) thermodynamic arrow is explicable just in case (globally) there isn't one. However, notice that this remark leaves open the idea that the laws governing expansion or inflation are not TRI. (For more on Price's basic dilemma, see Callender 1998 and Price 1995.)

2.4 Quantum Cosmology

Quantum cosmology, it is often said, is the theory of the universe's initial conditions. Presumably this entails that its posits are to be regarded as lawlike. Because theories are typically understood as containing a set of laws, quantum cosmologists apparently assume that the distinction between laws and initial conditions is fluid. Particular initial conditions will be said to obtain as a matter of law. Hawking 1987 writes, for example, “we shall not have a complete model of the universe until we can say more about the boundary conditions than that they must be whatever would produce what we observe” (163). Combining such aspirations with the observation that thermodynamics requires special boundary conditions leads quite naturally to the thought that “the second law becomes a selection principle for the boundary conditions of the universe [for quantum cosmology]” (Laflamme 1994, 358). In other words, if one is to have a theory of initial conditions, it would certainly be desirable to deduce initial conditions that will lead to the thermodynamic arrow. This is precisely what many quantum cosmologists have sought.[4] Since quantum cosmology is currently very speculative, it has been argued that it is premature to start worrying about what it says about time's arrow (Callender 1998). Nevertheless, there has been a substantial amount of debate on this issue (see Halliwell et al. 1994).

2.5 Time Itself

Some philosophers have sought an answer to the problem of time's arrow by claiming that time itself is directed. They do not mean time is asymmetric in the sense intended by advocates of the tensed theory of time. Their proposals are firmly rooted in the idea that time and space are properly represented on a four-dimensional manifold. The main idea is that the asymmetries in time indicate something about the nature of time itself. Christensen 1993 argues that this is the most economical response to our problem since it posits nothing besides time as the common cause of the asymmetries, and we already believe in time. A proposal similar to Christensen's is Weingard's 1977 ‘time-ordering field’. Weingard's speculative thesis is that spacetime is temporally oriented by a ‘time potential,’ a timelike vector field that at every spacetime point directs a vector into its future light cone. In other words, supposing our spacetime is temporally orientable, Weingard wants to actually orient it. The main virtue of this is that it provides a time sense everywhere, even in spacetimes containing closed timelike curves (so long as they're temporally orientable). As he shows, any explication of the ‘earlier than’ relation in terms of some other physical relation will have trouble providing a consistent description of time direction in such spacetimes. Another virtue of the idea is that it is in principle capable of explaining all the temporal asymmetries. If coupled to the various asymmetries in time, it would be the ‘master arrow’ responsible for the arrows of interest. As Sklar 1985 notes, Weingard's proposal makes the past-future asymmetry very much like the up-down asymmetry. As the up-down asymmetry was reduced to the existence of a gravitational potential — and not an asymmetry of space itself — so the past-future asymmetry would reduce to the time potential — and not an asymmetry of time itself. Of course, if one thinks of the gravitational metric field as part of spacetime, there is a sense in which the reduction of the up-down asymmetry really was a reduction to a spacetime asymmetry. And if the metric field is conceived as part of spacetime — which is itself a huge source of contention in philosophy of physics — it is natural to think of Weingard's time-ordering field as also part of spacetime. Thus his proposal shares a lot in common with Christensen's suggestion.

This sort of proposal has been criticized by Sklar on methodological grounds. Sklar 1985 claims that scientists would not accept such an explanation (111–2). One might point out, however, that many scientists did believe in analogues of the time-ordering field as possible causes of the CP violations.[5] The time-ordering field, if it exists, would be an unseen (except through its effects) common cause of strikingly ubiquitous phenomena. Scientists routinely accept such explanations. To find a problem with the time-ordering field we need not invoke methodological scruples; instead we can simply ask whether it does the job asked of it. Is there a mechanism that will couple the time-ordering field to thermodynamic phenomena? Weingard says the time potential field needs to be suitably coupled (p. 130) to the non-accidental asymmetric processes, but neither he nor Christensen elaborates on how this is to be accomplished. Until this is addressed satisfactorily, this speculative idea must be considered interesting yet embryonic.

2.6 Interventionism

When explaining time's arrow, many philosophers and physicists have focused their attention upon the unimpeachable fact that real systems are open systems that are subjected to interactions of various sorts.[6] We cannot truly isolate thermodynamic systems, and even if we could, it would probably not be for all time. To take the most obvious example, we cannot shield a system from the influence of gravity. At best, we can move systems to locations feeling less and less gravitational force, but we can never completely decouple a system from the gravitational field. Not only do we ignore the weak gravitational force when doing classical thermodynamics, but we also ignore less exotic matters, such as the walls in the standard gas in a box scenario. We can do this because the time it takes for a gas to reach equilibrium with itself is vastly shorter than the time it takes the gas plus walls system to reach equilibrium. For this reason we typically discount the effects of the box walls on the gas.

In this approximation many have thought there lies a possible solution to the problem of the direction of time. Indeed, many have thought herein lies a solution that does not change the laws of classical mechanics and does not allow for the nomological possibility of anti-thermodynamic behavior. In other words, advocates of this view seem to believe it embodies a third way.

The idea is to take advantage of what a random perturbation of the representative phase point would do to the evolution of a system. In phase space there is a tremendous asymmetry between the volume of points leading to equilibrium and points leading away from equilibrium. If the representative point of a system were knocked about randomly, then due to this asymmetry, it would be very probable that the system at any given time be on a trajectory leading toward equilibrium. Thus, if it could be argued that the earlier treatment of the statistical mechanics of ideal systems ignored a random perturber in the environment of the system, then one would seem to have a solution to our problems. Even if the perturbation were weak it would still have the desired effect. The weak ‘random’ previously ignored knocking of the environment is the sought after cause of the approach to equilibrium. Prima facie, this answer to the problem escapes the appeal to special initial conditions and the appeal to new laws.

But only prima facie. A number of criticisms have been leveled against this maneuver. One that seems on the mark is the observation that if classical mechanics is to be a universal theory, then the environment must be governed by the laws of classical mechanics as well. The environment is not some mechanism outside the governance of physical law, after all, and when we treat it too, the ‘deus ex machina’ — the random perturber — disappears. If we treat the gas-plus-the-container walls as a classical system, it is still governed by time-reversible laws that will cause the same problem as we met with the gas alone. At this point one sometimes sees the response that that combined system of gas plus walls has a neglected environment too, and so on, and so on, until we get to the entire universe. It is then questioned whether we have a right to expect laws to apply universally (Reichenbach 1956, 81ff). Or the point is made that we cannot write down the Hamiltonian for all the interactions a real system suffers, and so there will always be something ‘outside’ what is governed by the time-reversible Hamiltonian. Both of these points rely, we suspect, on an underlying instrumentalism about the laws of nature. Our problem only arises if we assume or pretend that the world literally is the way the theory says; dropping this assumption naturally ‘solves’ the problem. Rather than further address these responses, let us turn to the claim that this maneuver need not modify the laws of classical mechanics.

If one does not make the radical proclamation that physical law does not govern the environment, then it is easy to see that whatever law describes the perturber's behavior, it cannot be the laws of classical mechanics if the environment is to do the job required of it. A time-reversal noninvariant law, in contrast to the TRI laws of classical mechanics, must govern the external perturber. Otherwise we can in principle subject the whole system, environment plus system of interest, to a Loschmidt reversal. The system's velocities will reverse, as will the velocities of the millions of tiny perturbers. ‘Miraculously’, as if there were a conspiracy between the reversed system and the millions of ‘anti-perturbers’, the whole system will return to a time reverse of its original state. What is more, this reversal will be just as likely as the original process if the laws are TRI. A minimal criterion of adequacy, therefore, is that the random perturbers be time reversal noninvariant. But the laws of classical mechanics are TRI. Consequently, if this ‘solution’ is to succeed, it must invoke new laws and modify or supplement classical mechanics. (Since the perturbations need to be genuinely random and not merely unpredictable, and since classical mechanics is deterministic, the same sort of argument could be run with indeterminism instead of irreversibility. See Price 2002 for a diagnosis of why people have made this mistake, and also for an argument objecting to interventionism for offering a ‘redundant’ physical mechanism responsible for entropy increase.)[7]

2.7 Quantum Mechanics

To the best of our knowledge, our world is fundamentally quantum mechanical, not classical mechanical. Does this change the situation? ‘Maybe’ is perhaps the best answer. Not surprisingly, answers to the question are affected by one's interpretation of quantum mechanics. Quantum mechanics suffers from the notorious measurement problem, a problem which demands one or another interpretation of the quantum formalism. These interpretations fall broadly into two types, depending on their view of the unitary evolution of the quantum state (e.g., evolution according to the Schroedinger equation): they either say that there is something more than the quantum state, or that the unitary evolution is not entirely correct. The former are called ‘no-collapse’ interpretations while the latter are dubbed ‘collapse’ interpretations. This is not the place to go into the details of these interpretations, but we can still sketch the outlines of the picture painted by quantum mechanics (for more see Albert 1992).

Modulo some philosophical concerns about the meaning of time reversal (Albert 2000, Callender 2000, Earman 2002), the equation governing the unitary evolution of the quantum state is time reversal invariant. For interpretations that add something to quantum mechanics, this typically means that the resulting theory is time reversal invariant too (since it would be odd or even inconsistent to have one part of the theory invariant and the other part not). Since the resulting theory is time reversal invariant, it is possible to generate the problem of the direction of time just as we did with classical mechanics. While many details are altered in the change from classical to no-collapse quantum mechanics, the logical geography seems to remain the same.

Collapse interpretations are more interesting with respect to our topic. Collapses interrupt or outright replace the unitary evolution of the quantum state. To date, they have always done so in a time reversal noninvariant manner. The resulting theory, therefore, is not time reversal invariant. This fact offers a potential escape from our problem: the transitions of type (2) in our above statement of the problem may not be lawful. And this has led many thinkers throughout the century to believe that collapses somehow explain the thermodynamic time asymmetry.

Mostly these postulated methods fail to provide what we want. We think gases relax to equilibrium even when they're not measured by Bohrian observers or Wignerian conscious beings. This complaint is, admittedly, not independent of more general complaints about the adequacy of these interpretations. But perhaps because of these controversial features they have not been pushed very far in explaining thermodynamics.

More satisfactory collapse theories exist, however. One, due to Ghirardi, Rimini, and Weber, commonly known as GRW, can describe collapses in a closed system — no dubious appeal to observers outside the quantum system is required. Albert (1994; 2001) has extensively investigated the impact GRW would have on statistical mechanics and thermodynamics. GRW would ground a temporally asymmetric probabilistic tendency for systems to evolve toward equilibrium. Anti-thermodynamic behavior is not impossible according to this theory. Instead it is tremendously unlikely. The innovation of the theory lies in the fact that although entropy is overwhelmingly likely to increase toward the future, it is not also overwhelmingly likely to increase toward the past (because there are no dynamic backwards transition probabilities provided by the theory). So the theory does not suffer from a problem of the direction of time as stated above.

This does not mean, however, that it removes the need for something like the Past Hypothesis. GRW is capable of explaining why, given a present nonequilibrium state, later states should have higher entropy; and it can do this without also implying that earlier states have higher entropy too. But it does not explain how the universe ever got into a nonequilibrium state in the first place. As indicated before, some are not sure what would explain this fact, if anything, or whether it's something we should even aspire to explain. The principal virtue GRW would bring to the situation, Albert thinks, is that it would solve or bypass various troubles involving the nature of probabilities in statistical mechanics.

More detailed discussion of the impact quantum mechanics has on our problem can be found in Albert 2000, North 2002, Price 2002. But if our superficial review is correct, we can say that quantum mechanics will not obviate our need for a Past Hypothesis though it may well solve (on a GRW interpretation) at least one problem related to the direction of time.

2.8 Lawlike Initial Conditions?

Without some new physics that eliminates or explains the Past Hypothesis, or some satisfactory ‘third way’, it seems we are left with a bald posit of special initial conditions. Again, one can question whether there really is anything unsatisfactory about this (Sklar 1993; Callender 1997, 2004b). But perhaps we were wrong in the first place to think of the Past Hypothesis as a contingent boundary condition. The question ‘why these special initial conditions?’ would be answered with ‘it's physically impossible for them to be otherwise,’ which is always a conversation stopper. Indeed, Feynman (1965, 116) speaks this way when explaining the statistical version of the second law.

Absent a particular understanding of laws of nature, there is perhaps not much to say about the issue. But given particular conceptions of lawhood, it is clear that various judgments about this issue follow naturally — as we will see momentarily. However, let's acknowledge that this may be to get matters backwards. It might be said that we first ought to find out whether the boundary conditions are lawlike, and then devise a theory of law appropriate to the answer. To decide whether or not the boundary conditions are lawlike based merely on current philosophical theories of law is to prejudge the issue. Perhaps this objection is really evidence of the feeling that settling the issue based on one's conception of lawhood seems particularly unsatisfying. And it is hard to deny this. Even so, it is illuminating to have a brief look at the relationships between some conceptions of lawhood and the topic of special initial conditions.

For instance, if one agrees with Mill that from the laws one should be able to deduce everything and one considers the thermodynamic part of that ‘everything,’ then the special initial condition will be needed for such a deduction. The modern heir of this conception of lawhood, the one associated with Ramsey and Lewis (see Loewer 1994), sees laws as the axioms of the simplest, most powerful, consistent deductive system possible. It is likely that the specification of a special initial condition would emerge as an axiom in such a system, for such a constraint may well make the laws much simpler than they otherwise would be.

We should not expect the naïve regularity view of laws to follow suit, however. On this sort of account, roughly, if Bs always follow As, then it is a law of nature that A causes B. To avoid finding laws everywhere, however, this account needs to assume that As and Bs are instantiated plenty of times. But the initial conditions occur only once.

For more robust realist conceptions of law, it's difficult to predict whether the special initial conditions will emerge as lawlike. Necessitarian accounts like Pargetter's 1984 maintain that it is a law that P in our world iff P obtains at every possible world joined to ours by a nomic accessibility relation. Without more specific information about the nature of the accessibility relations and the worlds to which we're related, one can only guess whether all of the worlds relative to ours have the same special initial conditions. Nevertheless some realist theories offer apparently prohibitive criteria, so they are able to make negative judgments. For instance, ‘universalist’ theories associated with Armstrong say that laws are relations between universals. Yet a constraint on initial conditions isn't in any natural way put in this form; hence it would seem the universalist theory would not consider this constraint lawlike.

Philosophical opinion is certainly divided. The problem is that a lawlike boundary condition lacks many of the features we ordinarily attribute to laws, e.g., multiple instances, governing temporal evolution, etc., yet different accounts of laws focus on different subsets of these features. When we turn to the issue at hand, what we find is the disagreement we expect.

3. The Problem of the Direction of Time II

A completely different problem going by the name ‘problem of the direction of time’ is the question of grounding various non-thermodynamic temporal asymmetries (to be described in detail below). In this problem, we take the thermodynamic arrow as given and use it to explain other temporally asymmetric features of the world, e.g., causation, knowledge. Boltzmann famously suggested that many of these asymmetries are given by the direction of entropy increase. And Reichenbach 1956 modified this, claiming that some of these temporal asymmetries are given by the dominant direction of entropy increase among all so-called “branch systems.”

Sklar 1985 provides a useful discussion of this topic. He points out that the proposed reduction of these temporal asymmetries to the entropic arrow evades many of its obvious shortcomings if we conceive of it as a potential a posteriori scientific reduction of the kind now very familiar. The question is then whether the direction of time really is so reduced (as, for instance, the up-down asymmetry plausibly reduces to the local gravitational gradient) or whether there is merely a correlation between the two (as, for example, there is between the left-right asymmetry and parity violations in high-energy particle physics).

The question is not easily answered partly due to vagueness about what is meant by both the concept to be reduced and the reducing concept. What temporal asymmetries are we concerned with, and exactly what kind of entropic relation do we intend?

The temporal asymmetries with which we are concerned are all the phenomena that we associate with the past and future directions being different. In addition to all of the temporal asymmetries from physics (thermodynamic arrow, electromagnetic arrow, Hubble expansion, etc.), there are a number of different asymmetries with which we are all familiar. The ‘direction of time’ might then be a broad umbrella covering the following:

1. The psychological arrow. This controversial arrow is actually many different asymmetries. One, though much disputed, is that we seem to share a psychological sense of passage through time. Allegedly, we sense a moving ‘now’, the motion of the present as events are transformed from future to past. Another is that we have very different attitudes toward the past than toward the future. We dread future but not past headaches and prison sentences.

2. The mutability arrow. We feel the future is ‘open’ or indeterminate in a way the past is not. The past is closed, fixed for all eternity. Related to this, no doubt, is the feeling that our actions are essentially tied to the future and not the past. The future is mutable whereas the past is not.

3. The epistemological arrow. Although we believe that we know some facts about the future, the vast majority of the propositions we claim to know are about the past. I know that yesterday's broken egg on the floor had a similar outline to Chile's boundaries, but I have no idea what country tomorrow's broken egg will look like. There are many more traces of events in the future than in the past. When I say something embarrassing, information representing that event is encoded on sound and light waves that form a continually increasing spherical shell in my future light-cone. I am potentially further embarrassed throughout my whole future lightcone. Yet in the backward lightcone stretching from the event there is little or no indication of the unfortunate event.

4. The explanation-causation-counterfactual arrow. This arrow is actually three, though it seems plausible that there are connections among them. Backwards causation may be physically possible, but if it is, it seems either to never happen or be exceedingly rare. Causes typically occur before their effects. Related to the causal asymmetry in some fashion or other is the asymmetry of explanation. Usually good explanations appeal to events in the past of the event to be explained, not to events in the future. It may be that this is just a prejudice that we ought to dispense with, but it is an intuition that we frequently have. Finally, and no doubt this is again related to the other two arrows as well as the mutability arrow, we — at least naively — believe the future depends counterfactually on the present in a way that we do not believe the past depends counterfactually on the present.

For example, consider a body moving uniformly from point A to point B in accord with Newton's first law of motion.[8] A force is impressed on the body at B, and the body changes direction and proceeds uniformly towards C.

[Figure: the body's actual path A–B–C (solid lines), with the straight lines ABD and EBC; the unrealized legs BD and EB appear as broken lines]

We will assume the body is a molecule travelling in a relative vacuum, and that the only trace left by the force is the altered path of the body. The solid lines in the diagram represent what we take to be the actual path of the body, the broken lines the alternative paths. Now consider two competing subjunctive conditionals:

If no force had been impressed upon the body at B,

(i) it would have moved uniformly in the right line ABD.

(ii) it would have moved uniformly in the right line EBC.

The problem is to find an objective reason for our preference for (i). It seems that AB is co-tenable with the counterfactual antecedent. If the antecedent were true, it seems the body would have continued from B to D. But BC is also a leg of the actual path of the body, and to what do we appeal besides temporal asymmetry to reject BC as co-tenable with the counterfactual supposition? Perhaps after our intuitions have been tutored by physics we should say that either (i) or (ii) is correct. Or perhaps the asymmetry relies on thermodynamics (and our intuitions on thermodynamics), in which case the world described above is too bare to support our asymmetry.
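The two readings can be made a bit more explicit. As a minimal sketch (the velocity labels are ours, introduced only for illustration), let v_AB and v_BC be the body's constant velocities on the actual legs AB and BC, and let t_B be the time of the impressed force. Newton's first law says that, with no force at B, the body continues with whatever constant velocity it has:

\[
x(t) \;=\; x_B + v\,(t - t_B),
\qquad
\text{(i)}\ v = v_{AB}
\quad\text{vs.}\quad
\text{(ii)}\ v = v_{BC}.
\]

Under (i) the pre-B velocity is extrapolated forward and the body proceeds along ABD; under (ii) the post-B velocity is extrapolated backward and the body would have arrived along EBC. The choice between them is just the choice of which leg of the actual trajectory to hold fixed under the counterfactual supposition, the earlier one or the later one.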

Some authors — particularly defenders of the tensed theory of time — dismiss out of hand the idea of grounding the direction of time on the direction of material processes in time. But with so many asymmetric processes in the world, and with homo sapiens being just a part of this world, there are strong reasons to favor a connection between the two in many cases. But what is the connection?

Many authors have explicitly or implicitly proposed various ‘dependency charts’ that are supposed to explain which of the above arrows depend on which for their existence. Horwich 1987, for instance, argues for an explanatory relationship wherein the counterfactual arrow depends for its existence on the causal arrow, which depends on the arrow of explanation, which depends on the epistemological arrow, which in turn depends on the fork asymmetry that he associates with some chaotic conditions in the early universe. One can imagine other ways to plausibly arrange the dependency chart. Lewis 1979 thinks an alleged over-determination of traces grounds the asymmetry of counterfactuals and that this in turn grounds the rest. The chart one judges most appropriate will depend, to a large degree, upon one's general philosophical stance on realism and Humeanism, etc., and one's understanding of the above arrows. The reader can consult Earman 2006 for some of the reasons one might be dissatisfied with an entropy-based explanation of most of the arrows described above. 

Which chart is the correct one is not our concern here. Rather, returning to our main topic, the Boltzmann entropic reduction of time-direction, we now have a somewhat clearer question: do any or all of the above temporal asymmetries depend for their existence upon the thermodynamic time-asymmetry? At the end of his 1979, for instance, Lewis hints that the asymmetry of traces is linked to the thermodynamic arrow, but he can offer no further explanation. Reichenbach 1956, Grünbaum 1963, and Smart 1967 have developed entropic accounts of the knowledge asymmetry. Various people, for instance Dowe 1992, have tied the direction of causation to the entropy gradient. And some have also tied the psychological arrow to this gradient (for a discussion see Kroes 1985). Albert 2000 argues that the Past Hypothesis accounts for the knowledge asymmetry and also the asymmetry of counterfactual dependence.

One can think of reasons for being pessimistic about any straightforward positive link between these temporal asymmetries and the entropy gradient. Do we really know how to bridge the gap between the thermodynamic arrow and the other arrows? The gap is huge when you start thinking about the science of thermodynamics. Thermodynamics is a science with very precise and definite restrictions on the applicability of its concepts. A system has an entropy, for instance, only when it is thermally isolated and in equilibrium. Yet it is clear that our experience of the above temporal asymmetries carves up the world much differently than thermodynamics does. System A's doing f at time t might cause system B's doing g at time t* (where t* > t), yet A and B may not, and typically will not, have well-defined entropies.

The objections (see Earman 1974, Horwich 1987) to the entropic account of the knowledge asymmetry are worth recalling. The entropic account claimed that because we know there are many more entropy-increasing than entropy-decreasing systems in the world (or our part of it), we can infer when we see a low-entropy system that it was preceded and caused by an interaction with something outside the system. To take the canonical example, upon seeing a footprint in the sand, we can infer, due to its high order, that it was caused by something previously also of high (or higher) order, i.e., someone walking. The entropic account faces some very severe and basic challenges. First, do footprints on beaches have well-defined thermodynamic entropies? To describe the example we switched from low entropy to high order, but the association between entropy and our ordinary concept of order is tenuous at best and usually completely misleading. To describe the range of systems about which we have knowledge, the account needs something broader than the thermodynamic entropy. But what? And why expect whatever it is to behave like entropy in some respects but not (in terms of its definability) in others? Second, the entropic account doesn't license the inference to a human being walking on the beach. All it tells you is that the grains of sand in the footprint interacted with their environment previously, which barely scratches the surface of our ability to tell detailed stories about what happened in the past. Third, even if we have a broader understanding of entropy, it still doesn't seem that this broader concept always works. Consider Earman's 1974 example of a bomb destroying a city. From the destruction we may infer that a bomb went off; yet the bombed city does not have lower entropy than its surroundings, or even any type of intuitively higher order than its surroundings.

To escape some of the above objections, Reichenbach famously abandoned literal entropy in favor of what he called ‘quasi-entropy’. Albert, by contrast, doesn't claim to ground the temporal asymmetries on entropy itself. Still Reichenbachian in spirit, his idea is to ground the temporal asymmetries on what he thinks grounds thermodynamics — and more. He argues that the temporal asymmetries follow from the Past Hypothesis (already mentioned), a uniform probability distribution over this macrostate (in the appropriate state space), and the dynamical laws of motion. Albert seems to think this package explains the existence of all counterfactual-supporting generalizations, narrowly thermodynamic or not. In this way it bypasses some of the above objections. Critics of Albert target either his claim that the above package recovers thermodynamics (Leeds 2003, Earman 2006, Winsberg 2004) or his claim that it explains some of the above temporal asymmetries (Earman 2006, Frisch 2010).
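Stated schematically, and only as a gloss on the package just described (the notation is ours, not Albert's), the proposal assigns probabilities by conditionalizing a uniform measure on both the Past Hypothesis macrostate and the current macrostate:

\[
P(A \mid M_t) \;=\; \frac{\mu\!\left(A \,\cap\, \phi_t^{-1}(M_t) \,\cap\, M_{\mathrm{PH}}\right)}{\mu\!\left(\phi_t^{-1}(M_t) \,\cap\, M_{\mathrm{PH}}\right)},
\]

where M_PH is the set of initial microstates compatible with the Past Hypothesis macrostate, M_t is the current macrostate, φ_t is the dynamical evolution over the intervening time, μ is the uniform measure on the appropriate state space, and A is any proposition about the microhistory, thermodynamic or not. Probabilities of this form are meant to underwrite counterfactual-supporting generalizations quite generally, which is why the account is not restricted to systems with well-defined entropies.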

Boltzmann's suggestion that the temporal asymmetries discussed above are explained by the direction of increasing entropy, though attractive at an abstract level, is hard to maintain when one looks at the details. Still, the more general idea, that these temporal asymmetries are due to the asymmetric behavior of physical processes in our world (whatever their origin, law or Past Hypothesis) rather than to more metaphysical sources, seems very plausible. Much work remains to be done on this problem.

Bibliography

  • Albert, D., 2000. Time and Chance, Cambridge, MA: Harvard University Press.
  • Albert, D., 1992. Quantum Mechanics and Experience, Cambridge, MA: Harvard University Press.
  • Arntzenius, F., 1993. ‘The Classical Failure to Account for Electromagnetic Arrows of Time,’ in G. Massey, T. Horowitz and A. Janis (eds.), Scientific Failure, Lanham: Rowman & Littlefield, pp. 29–48.
  • Blatt, J.M., 1959. ‘An Alternative Approach to the Ergodic Problem,’ Progress of Theoretical Physics, 22: 745.
  • Bricmont, J., 1995. ‘Science of Chaos or Chaos in Science?,’ Physicalia Magazine, 17/3–4: 159–208.
  • Brown, H., Myrvold, W. and Uffink, J., 2009. ‘Boltzmann's H-Theorem, Its Discontents, and the Birth of Statistical Mechanics,’ Studies in the History and Philosophy of Science, 40(2): 174–191.
  • Callender, C., 2011. ‘Hot and Heavy Matters in the Foundations of Statistical Mechanics,’ Foundations of Physics, 41: 960–981.
  • Callender, C., 2004a. ‘There is No Puzzle about the Low Entropy Past,’ in C. Hitchcock (ed.), Contemporary Debates in the Philosophy of Science, Oxford: Blackwell, Chapter 12.
  • Callender, C., 2004b. ‘Measures, Explanation and the Past: Should “Special” Initial Conditions Be Explained?,’ British Journal for the Philosophy of Science, 55: 195–217.
  • Callender, C., 2000. ‘Is Time “Handed” in a Quantum World?,’ Proceedings of the Aristotelian Society, 100 (June): 247–269.
  • Callender, C., 1999. ‘Reducing Thermodynamics to Statistical Mechanics: The Case of Entropy,’ Journal of Philosophy, XCVI: 348–373.
  • Callender, C., 1998. ‘The View From No-when,’ British Journal for the Philosophy of Science, 49: 135–159.
  • Callender, C., 1997. ‘What is “The Problem of the Direction of Time”?,’ Philosophy of Science (Supplement), 64: S223–34.
  • Christensen, F. M., 1993. Space-like Time, Toronto: University of Toronto Press.
  • Cocke, J., 1967. ‘Statistical Time Symmetry and Two-Time Boundary Conditions in Physics and Cosmology,’ Physical Review, 160: 1165–70.
  • Davies, P. C. W., 1994. ‘Stirring Up Trouble,’ in Halliwell et al. 1994, 119–30.
  • Dowe, P., 1992. ‘Process Causality and Asymmetry,’ Erkenntnis, 37: 179–196.
  • Earman, J., 1969. ‘The Anisotropy of Time,’ Australasian Journal of Philosophy, 67: 273–295.
  • Earman, J., 1974. ‘An Attempt to Add a Little Direction to “The Problem of the Direction of Time”,’ Philosophy of Science, 41: 15–47.
  • Earman, J., 1981. ‘Combining Statistical-Thermodynamics and Relativity Theory: Methodological and Foundations Problems,’ in P. Asquith and I. Hacking (eds.), Proceedings of the 1978 Biennial Meeting of the Philosophy of Science Association, 2: 157–185.
  • Earman, J., 2002. ‘What Time Reversal Invariance Is and Why It Matters,’ International Studies in the Philosophy of Science, 16: 245–264.
  • Earman, J., 2006. ‘The “Past Hypothesis”: Not Even False,’ Studies in History and Philosophy of Modern Physics, 37 (3): 399–430.
  • Earman, J., 2011. ‘Sharpening the Electromagnetic Arrow(s) of Time,’ in C. Callender (ed.), The Oxford Handbook of Philosophy of Time, Oxford: Oxford University Press, pp. 485–527.
  • Fermi, E., 1936. Thermodynamics, New York: Dover.
  • Feynman, R., 1965. The Character of Physical Law, Cambridge, MA: MIT Press.
  • Frigg, R., 2009. ‘Typicality and the Approach to Equilibrium in Boltzmannian Statistical Mechanics,’ Philosophy of Science, 76: 997–1008.
  • Frigg, R., 2008. ‘A Field Guide to Recent Work on the Foundations of Statistical Mechanics,’ in D. Rickles (ed.), The Ashgate Companion to Contemporary Philosophy of Physics, London: Ashgate, pp. 99–196.
  • Frisch, M., 2000. ‘(Dis-)solving the Puzzle of the Arrow of Radiation,’ British Journal for the Philosophy of Science, 51: 381–410.
  • Frisch, M., 2006. ‘A Tale of Two Arrows,’ Studies in History and Philosophy of Modern Physics, 37: 542–558.
  • Frisch, M., 2010. ‘Does the Low-Entropy Constraint Prevent Us from Influencing the Past?,’ in G. Ernst and A. Huttemann (eds.) Time, Chance, and Reduction: Philosophical Aspects of Statistical Mechanics, forthcoming.
  • Gold, T., 1962. ‘The Arrow of Time,’ American Journal of Physics, 30: 403–10.
  • Goldstein, S., 2001. ‘Boltzmann's Approach to Statistical Mechanics’, in J. Bricmont, D. Dürr, M.C. Galavotti, G. Ghirardi, F. Petruccione, and N. Zanghi (eds.), Chance in Physics: Foundations and Perspectives (Lecture Notes in Physics 574), Berlin: Springer-Verlag, 2001 [Preprint available online]
  • Grünbaum, A., 1973. Philosophical Problems of Space and Time, New York: Knopf.
  • Halliwell, J., Perez-Mercader, J., and W. Zurek (eds.), 1994. Physical Origins of Time Asymmetry, Cambridge: Cambridge University Press.
  • Hurley, J., 1986. ‘The Time-asymmetry Paradox,’ American Journal of Physics, 54 (1): 25–28.
  • Hawking, S., 1987. ‘The Boundary Conditions of the Universe’ in L. Fang and R. Ruffini (eds.), Quantum Cosmology, Teaneck, NJ: World Scientific, pp. 162–174.
  • Healey, R., 1981. ‘Statistical Theories, QM and the Directedness of Time,’ in R. Healey (ed.), Reduction, Time and Reality, Cambridge: Cambridge University Press.
  • Horwich, P., 1987. Asymmetries in Time, Cambridge, MA: MIT Press.
  • Joos, E. and Zeh, H. D., 1985. ‘The Emergence of Classical Properties through Interaction with the Environment,’ Zeitschrift für Physik, B59: 223–243.
  • Klein, M., 1973. ‘The Development of Boltzmann's Statistical Ideas’ in E. Cohen and W. Thirring (eds.), The Boltzmann Equation: Theory and Applications, Vienna: Springer, pp. 53–106.
  • Kroes, P., 1985. Time: Its Structure and Role in Physical Theories, Boston: D. Reidel.
  • Laflamme, R., 1994. ‘The Arrow of Time and the No-boundary Proposal,’ in Halliwell et al. 1994, 358–68.
  • Lavis, D., 2005. ‘Boltzmann and Gibbs: An Attempted Reconciliation,’ Studies in the History and Philosophy of Modern Physics, 36: 245–273.
  • Lebowitz, J., 1993. ‘Boltzmann's Entropy and Time's Arrow,’ Physics Today, 46 (9): 32–38.
  • Leeds, S., 2003. ‘Foundations of statistical mechanics: Two approaches,’ Philosophy of Science, 70: 126–144.
  • Lewis, D., 1979. ‘Counterfactual Dependence and Time's Arrow,’ Noûs, 13: 455–76.
  • Lieb, E. H. and Yngvason, J., 2000. ‘A Fresh Look at Entropy and the Second Law of Thermodynamics’, Physics Today, 53 (4): 32–37.
  • Liu, C., 1994. ‘Is There a Relativistic Thermodynamics? A Case Study of the Meaning of Special Relativity,’ Studies in the History and Philosophy of Modern Physics, 25: 983–1004.
  • Loewer, B., 1996. ‘Humean Supervenience and Laws of Nature’ Philosophical Topics, 24: 101–127.
  • North, J., 2002. ‘What is the Problem about the Time-asymmetry of Thermodynamics? Reply to Price,’ British Journal for the Philosophy of Science, 53: 121–136.
  • North, J., 2003. ‘Understanding the Time-Asymmetry of Radiation,’ Philosophy of Science (Proceedings) 70: 1086–1097.
  • North, J., 2011. ‘Time in Thermodynamics,’ in C. Callender (ed.), The Oxford Handbook of Philosophy of Time, Oxford: Oxford University Press, pp. 312–352.
  • Partovi, M.H., 1989. ‘Irreversibility, Reduction, and Entropy Increase in Quantum Measurements,’ Physics Letters A, 137 (9): 445–450.
  • Penrose, O., 1970. Foundations of Statistical Mechanics, New York: Pergamon Press.
  • Penrose, O. and Percival, I.C., 1962. ‘The Direction of Time,’ Proceedings of the Physical Society, 79: 605–615.
  • Penrose, R., 1989. The Emperor's New Mind, Oxford: Oxford University Press.
  • Pippard, A.B., 1964. The Elements of Classical Thermodynamics, Cambridge: Cambridge University Press.
  • Popper, K., 1956. ‘The Arrow of Time,’ Nature, 177 (March): 538.
  • Price, H., 1995. ‘Cosmology, Time's Arrow, and That Old Double Standard,’ in Savitt 1995, PAGES.
  • Price, H., 1996. Time's Arrow and Archimedes' Point: New Directions for the Physics of Time, New York: Oxford University Press. [Table of Contents and Chapter 1 available online]
  • Price, H., 2002. ‘Burbury's Last Case: The Mystery of the Entropic Arrow,’ in C. Callender (ed.), Time, Reality and Experience, Cambridge: Cambridge University Press.
  • Price, H., 2006. ‘Recent Work on the Arrow of Radiation,’ Studies in History and Philosophy of Science (Part B), 37 (3): 498–527.
  • Psillos, S., 1994. ‘A Philosophical Study of the Transition from the Caloric Theory of Heat to Thermodynamics,’ Studies in the History and Philosophy of Science, 25: 159–90.
  • Redhead, M. and Ridderbos, K., 1998. ‘The Spin-Echo Experiments and the Second Law of Thermodynamics,’ Foundations of Physics, 28: 1237–1270.
  • Reichenbach, H., 1956. The Direction of Time, Berkeley: University of California Press.
  • Rohrlich, F., 2006. ‘Time in Classical Electrodynamics,’ American Journal of Physics, 74 (4): 313–315.
  • Sanford, D., 1984. ‘The Direction of Causation and the Direction of Time,’ in P. French, et al. (eds.), Midwest Studies in Philosophy IX, Minneapolis: University of Minnesota Press, 53–75.
  • Savitt, S. (ed.), 1995. Time's Arrow Today, Cambridge: Cambridge University Press.
  • Savitt, S., 1996. ‘Survey Article: The Direction of Time,’ British Journal for the Philosophy of Science, 47: 347–370.
  • Sklar, L., 1985. Philosophy and Spacetime Physics, Berkeley: University of California Press.
  • Sklar, L., 1993. Physics and Chance: Philosophical Issues in the Foundations of Statistical Mechanics, Cambridge: Cambridge University Press.
  • Schulman, L.S., 1997. Time's Arrows and Quantum Measurement, New York: Cambridge University Press.
  • Smart, J. J. C., 1967. ‘Time,’ in The Encyclopedia of Philosophy, Paul Edwards (ed.), New York: Macmillan.
  • Tolman, R., 1934. Relativity, Thermodynamics and Cosmology, Oxford: Oxford University Press.
  • Uffink, J., 2001. ‘Bluff your way in the second law of thermodynamics,’ Studies in the History and Philosophy of Modern Physics, 32: 305–394.
  • Uffink, J., 2006. ‘Compendium to the foundations of classical statistical physics,’ in J. Butterfield & J. Earman, eds, Philosophy of Physics, Amsterdam: North-Holland, pp. 923–1074.
  • Weingard, R., 1977. ‘Spacetime and the Direction of Time,’ Nous, 11: 119–131.
  • Winsberg, E., 2004. ‘Can Conditioning on the “Past Hypothesis” Militate Against the Reversibility Objections?,’ Philosophy of Science, 71: 489–504.
  • Zeh, H.D., 1989. The Physical Basis of the Direction of Time, Berlin: Springer-Verlag. [4th edition available online]

Copyright © 2011 by
Craig Callender <ccallender@ucsd.edu>
