The Role of Decoherence in Quantum Mechanics

First published Mon Nov 3, 2003; substantive revision Tue Apr 21, 2020

Interference phenomena are a well-known and crucial aspect of quantum mechanics, famously exemplified by the two-slit experiment. There are many situations, however, in which interference effects are artificially or spontaneously suppressed. The theory of decoherence is precisely the study of such situations. It is relevant (or is claimed to be relevant) to a variety of questions ranging from the measurement problem to the arrow of time, and in particular to the question of whether and how the ‘classical world’ may emerge from quantum mechanics. (See also the entry on philosophical issues in quantum theory.)

In Section 1 we discuss the concept of suppression of interference and give a simplified survey of the theory, emphasising features that will be relevant later. In fact, the term decoherence refers to two largely overlapping areas of research. The characteristic feature of the first (often called ‘environmental’ or ‘dynamical’ decoherence) is the study of concrete models of (spontaneous) interactions between a system and its environment that lead to suppression of interference effects. The second (the theory of ‘decoherent histories’ or ‘consistent histories’) is an abstract and more general formalism capturing essential features of decoherence. The two are obviously closely related, and will be reviewed in turn. Section 2 then criticises the claim that decoherence solves the measurement problem of quantum mechanics, and discusses the exacerbation of the problem through the inclusion of environmental interactions. It is thus important to consider not decoherence by itself, but the interplay between decoherence and the various approaches to the foundations of quantum mechanics that provide possible solutions to the measurement problem and related puzzles. Section 3 deals with the role of decoherence in relation to a number of such approaches, including mainstream foundational approaches such as Everett, Bohm and GRW, traditional approaches such as those by von Neumann, Heisenberg and Bohr, and a few more. Finally, in Section 4 we describe the overall picture of the emergent structures that result from this use of decoherence, as well as a few more speculative applications.[1]

Suppression of interference has featured in many papers since the beginning of quantum mechanics, such as Mott’s (1929) analysis of \(\alpha\)-particle tracks. The modern foundation of decoherence as a subject in its own right was laid by H.-D. Zeh in the early 1970s (Zeh 1970, 1973). Equally influential were the papers by W. Zurek from the early 1980s (Zurek 1981, 1982). Some of these earlier examples of decoherence (e.g., suppression of interference between left-handed and right-handed states of a molecule) are mathematically more accessible than more recent ones. A concise and readable introduction to the theory is provided by Zurek in Physics Today (1991). This article was followed by publication of several letters with Zurek’s replies (1993), which highlight controversial issues. More recent surveys are given in Zeh (2003a), Zurek (2003), and in the books by Giulini et al. (1996, second edition Joos et al. 2003), and by Schlosshauer (2007).

1. Theory of Decoherence

The two-slit experiment is a paradigm example of an interference experiment. One repeatedly sends electrons or other particles through a screen with two narrow slits, the electrons impinge upon a second screen, and we ask for the probability distribution of detections over the surface of the screen. One might naively try to calculate this distribution by summing, over the two slits, the probability of detection at each slit multiplied by the probability of detection at the screen conditional on detection at that slit. But these are the correct probabilities for a different experiment, with detections at the slits, whether or not we believe that measurements are related to a ‘true’ collapse of the wave function (i.e. that only one of the components survives the measurement and proceeds to hit the screen[2]). If there are no such detections, in general there is an additional so-called interference term in the correct expression for the probability, and this term depends on both the wave components that pass through the slits.[3]
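In the simplest idealisation (a standard textbook sketch; the symbols \(\psi_1\) and \(\psi_2\) are introduced here just to denote the components of the wave passing through the two slits), the probability density for detection at a point \(x\) on the screen is

\[ \lvert\psi_1(x)+\psi_2(x)\rvert^2 = \lvert\psi_1(x)\rvert^2 + \lvert\psi_2(x)\rvert^2 + 2\,\mathrm{Re}\bigl(\psi_1^*(x)\psi_2(x)\bigr) , \]

and the naive formula corresponds to keeping only the first two terms; the last term is the interference term.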

There are, however, situations in which this interference term does not appear or is negligible, and the naive formula applies. This is the case if some other systems interact with the electron between the slits and the screen, leading to enough entanglement with the components of the wave going through the two slits. Then, the probabilities of detection at the screen are as if we had performed a detection at the slits.
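Schematically, in the same simplified notation as above (with \(\lvert E_1\rangle\) and \(\lvert E_2\rangle\) denoting the states of the other systems that have become correlated with the two components), the total state reaching the screen is \(\psi_1(x)\lvert E_1\rangle + \psi_2(x)\lvert E_2\rangle\), and the probability density for detection at \(x\) becomes

\[ \lvert\psi_1(x)\rvert^2 + \lvert\psi_2(x)\rvert^2 + 2\,\mathrm{Re}\bigl(\psi_1^*(x)\psi_2(x)\langle E_2\vert E_1\rangle\bigr) . \]

The interference term is now weighted by the overlap \(\langle E_2\vert E_1\rangle\), and becomes negligible precisely when the entanglement is strong enough to make \(\lvert E_1\rangle\) and \(\lvert E_2\rangle\) (nearly) orthogonal.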

It is not difficult to see why this must be so. If Alice and Bob share a pair of systems that are entangled, then the probabilities for the results of any measurements Bob might make do not depend on whether or not Alice also makes any measurements (this is the quantum mechanical no-signalling theorem). In exactly the same way, the pattern of detections at the screen cannot distinguish mere entanglement with some other systems from the actual use of those systems for detection at the slits.

So, for example, there could be sufficiently many stray particles that scatter off the electron.[4] The phase relation between the two components of the wave function, which is responsible for interference, is now well-defined only at the level of the larger system composed of electron and stray particles, and can produce interference only in a suitable experiment including the larger system. Such a phenomenon of suppression of interference is what is called decoherence.

1.1 Environmental decoherence

‘Environmental’ decoherence is decoherence that arises through suitable interaction of a system with its environment. The study of environmental decoherence consists to a large extent in the construction and investigation of concrete models of such interactions. We have already mentioned taking an environment of relatively light particles that scatter off a relatively heavy particle. Such a model can be used to study e.g. chiral molecules. Or one can take an atom in interaction with the electromagnetic field, or a harmonic oscillator in a thermal bath of oscillators, and many more. Various features of interest typically arise in such models: some are common to most models, others are highly model-dependent.

One feature of these environmental interactions is that they suppress interference between states from some preferred set (‘eigenstates of the decohering variable’). This can be a discrete set of states, e.g. the upper and lower component of the wave function in our simple example of the two-slit experiment, or left- and right-handed states in models of chiral molecules; when an atom interacts with the electromagnetic field, the preferred states will be the stationary states (which are the states we observe in spectroscopy). Or it could be some continuous set, e.g. the ‘coherent states’ of a harmonic oscillator (in which case the terminology of ‘eigenstates’ or ‘eigenbasis’ of a preferred observable is not quite accurate). The intuitive picture is one in which the environment monitors the system of interest by spontaneously and continuously ‘measuring’ some quantity characterised by the set of preferred states (i.e. the environment interacts with the system in such a way that it could in principle be used as a measuring apparatus).

Such a ‘measurement-like’ interaction intuitively does not disturb the eigenstates of the monitored observable. Thus these preferred states can in fact be characterised in terms of their robustness or stability with respect to the interaction with the environment. The system gets entangled with the environment, but the states between which interference is suppressed are the ones that would themselves get least entangled with the environment under this interaction. In this connection, one also says that decoherence induces ‘effective superselection rules’, meaning the following. A strict superselection rule applies when there are some observables – in technical terminology they are called classical – that commute with all observables (for a review, see Wightman 1995). Intuitively, these observables are infinitely robust, since no possible interaction can disturb them (at least as long as the interaction Hamiltonian is considered to be an observable). By an effective superselection rule one means, analogously, that certain observables (e.g. chirality) will not be disturbed by the interactions that actually take place.
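Schematically (an idealised sketch, with \(\lvert s_i\rangle\) standing for the preferred states and \(\lvert E_0\rangle\) for the initial state of the environment): a measurement-like interaction leaves the preferred states themselves (almost) undisturbed,

\[ \lvert s_i\rangle\lvert E_0\rangle \longrightarrow \lvert s_i\rangle\lvert E_i\rangle , \]

while, by linearity, a superposition of preferred states becomes entangled with the environment,

\[ \Bigl(\sum_i c_i \lvert s_i\rangle\Bigr)\lvert E_0\rangle \longrightarrow \sum_i c_i \lvert s_i\rangle\lvert E_i\rangle , \]

with interference between the \(\lvert s_i\rangle\) suppressed to the extent that the environmental states \(\lvert E_i\rangle\) become and remain (approximately) orthogonal.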

In many models of decoherence, the preferred states are robust in an even stronger sense, because information about them is stored in a redundant way in the environment (say, because a Schrödinger cat has interacted with so many stray particles: photons, air molecules, dust). This information can later be acquired by an observer without further disturbing the system (we observe – however that may be interpreted – whether the cat is alive or dead by intercepting on our retina a small fraction of the light that has interacted with the cat).

What states are preferred will depend on the details of the interaction, but in many cases, interactions are characterised by potentials that are functions of position, so preferred states are often related to position. For the chiral molecule, the left- and right-handed states are indeed characterised by different spatial configurations of the atoms in the molecule. For the harmonic oscillator, one should think of the environment ‘measuring’ approximate eigenstates of position, or rather approximate joint eigenstates of position and momentum, so-called coherent states (since information about the time of flight is also recorded in the environment).

The resulting localisation can be on a very short length scale, i.e. the characteristic length above which coherence is dispersed (‘coherence length’) can be very short. A speck of dust of radius \(a = 10^{-5}\)cm floating in the air will have interference suppressed between spatially localised components with a width of \(10^{-13}\)cm. Even more strikingly, the time scales for this process are often extremely short. This coherence length is reached after a microsecond of exposure to air, and suppression of interference on a length scale of \(10^{-12}\)cm is achieved already after a nanosecond.[5]
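The time scaling behind these figures can be illustrated with a rough back-of-the-envelope sketch. In scattering models of the Joos–Zeh type, spatial coherences decay roughly as \(\exp(-\Lambda t (\Delta x)^2)\); the localisation rate \(\Lambda\) used below is not calculated independently but simply inferred from the figures just quoted, so the sketch only illustrates the \(1/\sqrt{t}\) scaling of the coherence length:

```python
import math

# Rough back-of-the-envelope sketch (not a scattering calculation).  In models of
# the Joos-Zeh type, spatial coherences decay roughly as
#     rho(x, x'; t) ~ rho(x, x'; 0) * exp(-Lambda * t * (x - x')**2),
# so the coherence length scales as l(t) ~ 1 / sqrt(Lambda * t).  The value of
# Lambda below is *inferred* from the figures quoted in the text (coherence length
# ~1e-13 cm after ~1e-6 s for a dust grain in air), purely for illustration.

Lambda = 1.0 / ((1e-13) ** 2 * 1e-6)     # cm^-2 s^-1, inferred for illustration

def coherence_length(t_seconds):
    """Coherence length (cm) after time t on the simple exp(-Lambda*t*dx^2) model."""
    return 1.0 / math.sqrt(Lambda * t_seconds)

for t in (1e-9, 1e-6):
    print(f"t = {t:.0e} s  ->  coherence length ~ {coherence_length(t):.1e} cm")
# Gives ~3e-12 cm after a nanosecond and ~1e-13 cm after a microsecond,
# matching the scales quoted above to within an order of magnitude.
```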

Within the environmental decoherence literature, models tend to be formulated in terms of master equations for the evolution of the density operator describing the system. As a consequence of decoherence this very quickly becomes (at least approximately) diagonal in the basis of preferred states (whether discrete or continuous). Thus, the master equation for the density operator for the system is essentially equivalent to an evolution equation for the probability distribution over the preferred states. In models where coherent states are preferred, one can then compare this to the Liouville evolution of probability distributions over classical phase space, and in fact one obtains extremely good quantitative agreement.
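To illustrate the generic behaviour such master equations describe, here is a minimal numerical sketch (a toy pure-dephasing model for a two-state system with arbitrary illustrative parameters, not a model taken from the decoherence literature): the off-diagonal elements of the density matrix decay rapidly in the preferred basis, while the diagonal elements, i.e. the probability distribution over the preferred states, are untouched.

```python
import numpy as np

# Toy illustration: a two-state system undergoing pure dephasing, integrated with
# a crude Euler step.  All parameter values are arbitrary illustrative choices.

sz = np.diag([1.0, -1.0]).astype(complex)
H = 0.5 * sz                       # system Hamiltonian (arbitrary units, hbar = 1)
gamma = 5.0                        # dephasing rate (arbitrary illustrative value)

def rhs(rho):
    # d(rho)/dt = -i[H, rho] + gamma * (sz rho sz - rho)   (pure dephasing)
    return -1j * (H @ rho - rho @ H) + gamma * (sz @ rho @ sz - rho)

rho = 0.5 * np.ones((2, 2), dtype=complex)   # equal superposition of the two states
dt, steps = 1e-3, 2000
for n in range(steps + 1):
    if n % 500 == 0:
        print(f"t = {n*dt:4.1f}   |rho01| = {abs(rho[0, 1]):.4f}   "
              f"diag = ({rho[0, 0].real:.2f}, {rho[1, 1].real:.2f})")
    rho = rho + dt * rhs(rho)
# |rho01| decays like exp(-2*gamma*t); the diagonal entries stay at 1/2 throughout.
```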

These features are not claimed to obtain in all cases of interaction with some environment. It is a matter of detailed physical investigation to assess which systems exhibit which features, and how general the lessons are that we might learn from studying specific models. One should thus beware of common overgeneralisations. For instance, decoherence does not affect only and all ‘macroscopic systems’. It is true that middle-sized objects, say, on the Earth’s surface will be very effectively decohered by the air in the atmosphere, and this is an excellent example of decoherence at work. On the other hand, there are also very good examples of decoherence-like interactions affecting microscopic systems, such as in the interaction of \(\alpha\)-particles with the gas in a bubble chamber. (Note, however, that this also relies on the \(\alpha\)-particles being emitted in states that are superpositions of strongly outward directed wavepackets.) Further, there are arguably macroscopic systems for which interference effects are not suppressed. For instance, it has been shown to be possible to sufficiently shield SQUIDs (a type of superconducting device) from decoherence for the purpose of observing superpositions of different macroscopic currents – contrary to what one had expected (see e.g. Leggett 1984, and esp. 2002, Section 5.4). Anglin, Paz and Zurek (1997) examine some less well-behaved models of environmental decoherence and provide a useful corrective as to its scope.

1.2 Decoherent histories

As mentioned above, when interference is suppressed in a two-slit experiment, the naive probability formula applies, and we can calculate the detection probabilities at the screen by adding probabilities for what are formally the ‘trajectories’ followed by individual electrons. The decoherent histories or consistent histories formalism (originating with Griffiths 1984; Omnès 1988, 1989; and Gell-Mann and Hartle 1990) takes this as the defining feature of decoherence. (See also the entry on the consistent histories approach to quantum mechanics. There are some differences between the various authors, but we shall gloss them over.[6])

In a nutshell, the formalism is as follows. Take a sequence of times \(t_1 ,\ldots ,t_n\), and take orthogonal families of (Heisenberg-picture) projections at those times,[7] with

\[\tag{1} \sum_{\alpha_1} P_{\alpha_1}(t_1) = \mathbf{1}, \ldots, \sum_{\alpha_n} P_{\alpha_n}(t_n) = \mathbf{1} \]

One defines histories as time-ordered sequences of projections at the given times, choosing one projection from each family, respectively. Such histories form a so-called alternative and exhaustive set of histories.

Take a state \(\varrho\). We wish to define probabilities for the set of histories. If one takes the usual probability formula based on repeated application of the Born rule, one obtains

\[\tag{2} \text{Tr}(P_{\alpha_n}(t_n)\ldots P_{\alpha_1}(t_1) \varrho P_{\alpha_1}(t_1)\ldots P_{\alpha_n}(t_n)) \]

We shall take (2) as defining ‘candidate probabilities’. In general these probabilities exhibit interference, in the sense that summing over them is not equivalent to omitting the intermediate projections in (2) (‘coarse-graining’ the histories). In the special cases in which the interference terms vanish for any pair of distinct histories, we say that the set of histories satisfies the consistency or (weak) decoherence condition. It is easy to see that this condition takes the form

\[\tag{3} \text{ReTr}(P_{\alpha'_n}(t_n)\ldots P_{\alpha'_1}(t_1)\varrho P_{\alpha_1}(t_1)\ldots P_{\alpha_n}(t_n)) = 0 \]

for any pair of distinct histories (the real part of the ‘decoherence functional’ vanishes).

If this is satisfied, we can view (2) as defining the distribution functions for a stochastic process with the histories as trajectories. Decoherence in the sense of this abstract formalism is thus defined simply by the condition that the quantum probabilities for later events can be calculated as if the state had collapsed at the intermediate times. Qualitatively one recovers classical behaviour, in the sense that the histories are assigned quantum probabilities that nevertheless satisfy the classical formula of total probability.
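As a concrete toy illustration (a numerical sketch for a single spin-1/2 with arbitrary choices of Hamiltonian, times and projections; not an example taken from the literature), one can compute the candidate probabilities (2) and the decoherence functional directly, and check whether condition (3) holds and whether the candidate probabilities are additive under coarse-graining:

```python
import numpy as np
from itertools import product

# Toy two-time histories for a spin-1/2: chain operators C = P_{a2}(t2) P_{a1}(t1),
# decoherence functional D(alpha, alpha') = Tr(C_alpha rho C_alpha'^dagger).
# All choices (Hamiltonian, times, projections, initial state) are arbitrary.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = sx                                          # arbitrary Hamiltonian (hbar = 1)

def U(t):
    evals, evecs = np.linalg.eigh(H)            # exp(-iHt) via eigendecomposition
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

def heis(P, t):
    return U(t).conj().T @ P @ U(t)             # Heisenberg-picture projection

P = [np.diag([1.0, 0.0]).astype(complex),       # projections onto sz = +1, -1
     np.diag([0.0, 1.0]).astype(complex)]
t1, t2 = 0.3, 0.9
rho = np.diag([1.0, 0.0]).astype(complex)       # initial state: spin up along z

def chain(a1, a2):
    return heis(P[a2], t2) @ heis(P[a1], t1)

hists = list(product(range(2), range(2)))
D = {(a, b): np.trace(chain(*a) @ rho @ chain(*b).conj().T)
     for a in hists for b in hists}

for a in hists:
    print(f"candidate probability p{a} = {D[(a, a)].real:.4f}")
offdiag = max(abs(D[(a, b)].real) for a in hists for b in hists if a != b)
print(f"largest |Re D| between distinct histories: {offdiag:.4f}")

# Coarse-graining check: summing candidate probabilities over the intermediate
# projection vs. omitting that projection altogether.
for a2 in range(2):
    summed = sum(D[((a1, a2), (a1, a2))].real for a1 in range(2))
    direct = np.trace(heis(P[a2], t2) @ rho).real
    print(f"a2 = {a2}: summed = {summed:.4f}, without intermediate projection = {direct:.4f}")
# With H = sigma_x the histories do not decohere and the two columns differ;
# replacing H by a matrix commuting with the projections (e.g. np.diag([1., -1.]))
# makes Re D vanish for distinct histories, and the columns then agree.
```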

A stronger form of the decoherence condition, namely the vanishing of both the real and imaginary part of the decoherence functional, can be used to prove theorems on the existence of (later) ‘permanent records’ of (earlier) events in a history, which is a generalisation of the idea of ‘environmental monitoring’.[8] For instance, if the state \(\varrho\) is a pure state \(\lvert \psi \rangle\langle\psi\rvert\) this (strong) decoherence condition is equivalent, for all \(n\), to the orthogonality of the vectors

\[\tag{4} P_{\alpha_n}(t_n)\ldots P_{\alpha_1}(t_1)\lvert\psi\rangle \]

and this in turn is equivalent to the existence of a set of orthogonal projections \(R_{\alpha_1 \ldots\alpha_i}(t_i)\) (for any \(t_i \le t_n\)) that extend consistently the given set of histories and are perfectly correlated with the histories of the original set (Gell-Mann and Hartle 1990). Note, however, that these ‘generalised records’ need not be stored in separate degrees of freedom, such as an environment or measuring apparatus.[9]

Various authors have taken the theory of decoherent histories as providing an interpretation of quantum mechanics. For instance, Gell-Mann and Hartle sometimes talk of decoherent histories as a neo-Everettian approach, while Omnès appears to think of histories along neo-Copenhagen lines (perhaps as an experimental context creating a ‘quantum phenomenon’ that can stretch back into the past).[10] Griffiths (2002) has probably developed the most detailed of these interpretational approaches (also addressing various earlier criticisms, e.g. by Dowker and Kent (1995, 1996)). In itself, however, the formalism is interpretationally neutral and has the particular merit of bringing out that when interference is suppressed, one can reidentify different components of the state over time, making this formalism especially appropriate for discussing temporal evolution at the level of the non-interfering components.

1.3 Comparison

Work on environmental decoherence and work on decoherent histories unfortunately tend to be rather separate. In comparing the two, we shall need to look both at cases that can be described by both formalisms (and ask whether or not the two descriptions are equivalent), and at cases where only the more abstract formalism of decoherent histories applies.

With regard to the latter, there are of course cases in which the decoherence functional vanishes just by numerical coincidence. But there are also systematic cases of vanishing of interference even without environmental monitoring, namely in the presence of ‘conservation-induced’ decoherence (see e.g. Halliwell 2010). As an example, take an isolated system (say, with discrete energy levels), and consider histories composed of projections onto its energy states at arbitrary times. Because energy is conserved, in the energy basis each individual component is following the Schrödinger equation without interfering with the other components, and the corresponding histories decohere. While some authors in the decoherent histories literature take conservation-induced decoherence to be a significant novelty of the theory, it should be noted that it lacks the robustness of environment-induced decoherence, since it lacks a mechanism that actively suppresses interference.
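For the energy-eigenstate example just given this is easy to verify (a standard observation, stated here for projections \(P_\alpha = \lvert E_\alpha\rangle\langle E_\alpha\rvert\) onto non-degenerate energy eigenstates): since the projections commute with the Hamiltonian, the Heisenberg-picture projections are time-independent,

\[ P_\alpha(t) = e^{iHt/\hbar} P_\alpha\, e^{-iHt/\hbar} = P_\alpha , \]

so a chain \(P_{\alpha_n}(t_n)\ldots P_{\alpha_1}(t_1)\) vanishes unless \(\alpha_1 = \ldots = \alpha_n\), in which case it reduces to the single projection \(P_{\alpha_1}\). The decoherence functional is then exactly diagonal, since \(\mathrm{Tr}(P_\alpha \varrho P_{\alpha'}) = 0\) for \(\alpha \neq \alpha'\).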

With regard to the former case, environmental decoherence can be easily described also in terms of decoherent histories. One needs to take times that are separated by intervals larger than the decoherence time scale, and projections onto the preferred states. Then the environmental monitoring ensures that the resulting histories decohere. (In case of a continuous set of preferred states, one might need to generalise the histories formalism slightly, using ‘effects’ rather than projections; see e.g. Kent 1998.) In this sense, environmental decoherence can be seen as a special case of decoherent histories, but the descriptions given by the two formalisms are somewhat different. While decoherent histories define multi-time distributions over the preferred states (at discrete times), models of environmental decoherence essentially describe single-time distributions over the preferred states. While they have the advantage of being well-defined at all times, these single-time distributions do not explicitly describe any temporal evolution at the level of the individual components.

In a number of models of environmental decoherence, however, it is obvious what the dynamical behaviour should be even at the level of individual components. Specifically, in models where the preferred states are coherent states, comparison of the master equation for the reduced state of the system with the evolution of a classical Liouville distribution suggests that the trajectories of individual components in fact approximate surprisingly well the corresponding Newtonian trajectories. Intuitively, one can explain this by noting that the preferred states (which are wave packets that are narrow in position and remain so because they are also narrow in momentum) are the states that tend to get least entangled with the environment. Therefore they will tend to follow the Schrödinger equation more or less undisturbed. But, as a matter of fact, narrow wave packets follow approximately Newtonian trajectories, at least if the external potentials in which they move are uniform enough across the width of the packets (results of this kind are known as ‘Ehrenfest theorems’). Thus, the resulting trajectories will be close to Newtonian ones (on the relevant scales).[11]
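For reference, the Ehrenfest relations invoked here are the exact equations (for a single particle of mass \(m\) in a potential \(V\))

\[ \frac{d}{dt}\langle x\rangle = \frac{\langle p\rangle}{m} , \qquad \frac{d}{dt}\langle p\rangle = -\Bigl\langle \frac{\partial V}{\partial x}\Bigr\rangle , \]

which reduce to Newton's equations for the centre of the packet when \(\langle \partial V/\partial x\rangle \approx (\partial V/\partial x)(\langle x\rangle)\), i.e. precisely when the packet is narrow compared with the scale over which the force varies.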

This picture cannot be exact, because as soon as a localised wave packet has spread enough, it will be decohered into new, more localised packets, so that intuitively one will get some kind of ‘fanning out’ of trajectories. In fact, such deviations from Newtonian behaviour are due both to the tendency of the individual components to spread and to the localising effect of the interaction with the environment, which further enhances the collective spreading of the components (because a narrowing in position corresponds to a widening in momentum). See Rosaler (2016) for a very nice treatment (that uses an ‘open systems’ version of Ehrenfest). A vivid example is provided by the observed trajectories of \(\alpha\)-particles in a cloud chamber, which are indeed extremely close to Newtonian ones, except for additional tiny ‘kinks’.[12]

In other models, e.g. when the electromagnetic field privileges the stationary states of an atom, there is no such comparison with classical equations, and the lack of multi-time distributions becomes a limitation of the model. Such limitations might be overcome by combining models of environmental decoherence with more phenomenological models of ‘continuous measurement’ (as done in a different example by Bhattacharya, Habib and Jacobs 2000). As shown by Brun (2002), the dynamics of stationary states (quantum jumps!) can be obtained from first principles in the decoherent histories formalism.

2. Decoherence and the Measurement Problem

One often hears the claim that decoherence solves the measurement problem of quantum mechanics (see the entry on philosophical issues in quantum theory). Physicists who work on decoherence generally know better, but it is important to see why even in the presence of decoherence phenomena, the measurement problem remains or in fact gets even worse.

The measurement problem is really a complex of problems, revolving around the question of whether one can apply quantum mechanics itself to the description of quantum measurements. One can just deny this, if one takes quantum mechanics to be a phenomenological theory. But if quantum mechanics is not the fundamental theory that explains the phenomenology of quantum measurements, the question arises how we can explain what ‘measurements’ and ‘results’ are. This is the measurement problem in the wide sense of the term.

If instead we assume that quantum mechanics is itself applicable to the description of measurements, then the question becomes one of how one should model a measurement within quantum theory, specifically as some appropriate interaction between a ‘system’ and an ‘apparatus’, and of whether by so doing one can derive from the unitary evolution for the total system of system and apparatus the three phenomenological aspects of quantum measurements: that measurements have results, that these results obtain with some characteristic probabilities, and that depending on the result of a measurement the state of the system is generally transformed in a characteristic way (for this subdivision of the problem, see Maudlin 1995). This derivation, however, appears to be impossible.

Indeed, as pointed out already by von Neumann (1932, Section VI.3), one cannot reproduce the correct probabilities by assuming that they arise because we are ignorant of the exact state of a macroscopic apparatus. But whatever the exact initial state of the apparatus, if the system (say, an electron) is described by a superposition of two given states, say, spin in \(x\)-direction equal \(+\frac{1}{2}\) and spin in \(x\)-direction equal \(-\frac{1}{2}\), and we let it interact with a measuring apparatus that couples to these states, the final quantum state of the composite will be a sum of two components, one in which the apparatus has coupled to (has registered) \(x\)-spin \(= +\frac{1}{2}\), and one in which the apparatus has coupled to (has registered) \(x\)-spin \(= -\frac{1}{2}\).[13] This is the measurement problem in the narrow sense of the term.
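Schematically (an idealised sketch with a pure initial apparatus state \(\lvert A_0\rangle\); the argument carries over to arbitrary initial apparatus states), the measurement coupling acts on the spin eigenstates as

\[ \lvert \pm_x\rangle\lvert A_0\rangle \longrightarrow \lvert \pm_x\rangle\lvert A_\pm\rangle , \]

so that, by linearity of the Schrödinger evolution,

\[ \bigl(a\lvert +_x\rangle + b\lvert -_x\rangle\bigr)\lvert A_0\rangle \longrightarrow a\lvert +_x\rangle\lvert A_+\rangle + b\lvert -_x\rangle\lvert A_-\rangle , \]

which is just the sum of the two components described in the text.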

2.1 Solving the measurement problem?

The fact that interference is typically very well suppressed between localised states of macroscopic objects suggests that it is at least relevant to why macroscopic objects in fact appear to us to be in localised states. In the special case of measuring apparatuses, it would then be relevant to why we never observe an apparatus pointing, say, to two different results. Does modelling measurements so as to include the decoherence interactions with the environment allow one to derive that measurements always have results? This is to some extent part of the ‘folklore’ of decoherence, but as pointed out by many physicists and philosophers alike (e.g. Pearle 1997; Bub 1997, Chapter 8; Adler 2003; Zeh 2003a, pp. 14–15), it is not the case: while decoherence does explain why we do not observe superpositions of measurement results, it does not explain why we do observe measurement results in the first place.

Indeed, what happens if we include decoherence in the description? Decoherence tells us, among other things, that plenty of interactions are taking place all the time in which differently localised states of the apparatus registering, say, different \(x\)-spin values of an electron couple to different states of the environment. But now, by the same arguments as above, the composite of electron, apparatus and environment will be a superposition of (i) a state corresponding to the environment coupling to the apparatus coupling in turn to the value \(+\frac{1}{2}\) for the spin, and of (ii) a state corresponding to the environment coupling to the apparatus coupling in turn to the value \(-\frac{1}{2}\) for the spin. We are thus left with the following choice, whether or not we include decoherence: either the composite system is not described by such a superposition, because the Schrödinger equation actually breaks down and needs to be modified, or it is described by such a superposition, but then we need either to supplement quantum mechanics with appropriate hidden variables, or to give an appropriate interpretation of the superposition.

Therefore, decoherence as such does not provide a solution to the measurement problem, at least not unless it is combined with an appropriate foundational approach to the theory – whether this be one that attempts to solve the measurement problem, such as Bohm, Everett or GRW; or one that attempts to dissolve it, such as various versions of the Copenhagen interpretation. (See also Wallace 2012b.)

2.2 Widening the measurement problem

Decoherence is clearly neither a dynamical evolution contradicting the Schrödinger equation, nor a new supplementation or interpretation of the theory. As we shall discuss, however, it both reveals important dynamical effects within the Schrödinger evolution, and may be suggestive of possible interpretational moves. As such it has much to offer to the philosophy of quantum mechanics. At first, however, it seems that discussion of environmental interactions should actually exacerbate the existing problems. Intuitively, if the environment is carrying out lots of spontaneous measurements even without our intervention, then the measurement problem ought to apply more widely, also to these spontaneously occurring measurements.

Indeed, while it is well-known that localised states of macroscopic objects spread very slowly with time under the free Schrödinger evolution (i.e., if there are no interactions), the situation turns out to be different if they are in interaction with the environment. Although the different components that couple to the environment will be individually incredibly localised, collectively they can have a spread that is many orders of magnitude larger. That is, the state of the object and the environment could be a superposition of zillions of very well localised terms, each with slightly different positions, and that are collectively spread over a macroscopic distance, even in the case of everyday objects.[14] Given that everyday macroscopic objects are particularly subject to decoherence interactions, this raises the question of whether quantum mechanics can account for the appearance of the everyday world even apart from the measurement problem.
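The slowness of free spreading is easy to illustrate numerically (a minimal sketch using the standard textbook formula for the spreading of a Gaussian packet; the mass, initial width and time below are assumed illustrative values):

```python
import math

hbar = 1.054571817e-34   # J s

def free_spread(sigma0_m, mass_kg, t_s):
    """Width (m) of an initially Gaussian packet after free Schroedinger evolution:
    sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0**2))**2)  (textbook formula)."""
    return sigma0_m * math.sqrt(1.0 + (hbar * t_s / (2.0 * mass_kg * sigma0_m**2)) ** 2)

# Assumed illustrative values: a 1-gram object initially localised to 1 micron,
# left to evolve freely for one year.
sigma0, mass, t = 1e-6, 1e-3, 3.15e7
print(f"width after one year: {free_spread(sigma0, mass, t):.6e} m")
# The printed width is indistinguishable from sigma0: the increase is of order
# 1e-30 m, illustrating how slowly macroscopically localised states spread in
# the absence of interactions.
```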

There is thus an even wider problem, which we can call the problem of the classical regime of quantum mechanics, and which is quite analogous to the measurement problem. Can quantum mechanics be applied to the description of classical systems? We can deny it (as orthodox approaches do), but then what are classical systems in the first place? And if we apply quantum mechanics also to the systems that seem to populate our everyday world, can we derive from quantum mechanics the behaviour that is characteristic of such ‘classical’ systems? But such a derivation appears impossible. To put it crudely: if everything is in interaction with everything else, everything is generically entangled with everything else, and that is a worse problem than measuring apparatuses being entangled with measured systems.

3. The Role(s) of Decoherence in Different Approaches to Quantum Mechanics

Despite the fact that decoherence interactions extend the measurement problem to the wider problem of the classical regime, decoherence is relevant to the solution of both problems because at the level of components of the wave function the quantum description of decoherence phenomena (tantalisingly!) includes both measurement results and other quantum phenomena (such as quantum jumps) as well as classical behaviour. This suggests that to a large extent decoherence provides an interpretation-neutral strategy for tackling the measurement problem and the problem of the classical regime (a thesis developed in greater detail by Rosaler 2016), and that the solution to these problems lies in combining decoherence with the main foundational approaches to quantum mechanics.

There is a wide range of approaches to the foundations of quantum mechanics, however (see also the entry on philosophical issues in quantum theory). In some cases, one just needs to point out how an approach fits into the overall picture suggested by decoherence; other approaches are in fact less able to exploit the results of decoherence. (The term ‘approach’ here is more appropriate than the term ‘interpretation’, because several of these are in fact modifications of or additions to the theory.) We shall thus discuss in turn a number of approaches and how they relate to decoherence. These will be: the three most widespread approaches in the philosophy of physics (Everett, Bohm and GRW), followed by the more ‘orthodox’ approaches of von Neumann, Heisenberg and Bohr, and a few others.

We shall start with the Everett theory (or many-worlds interpretation) in some of its main variants. This is in fact most closely related to decoherence, since the latter can be used to naturally identify stable (if branching) structures within the universal wave function that can instantiate the multiplicity of worlds or measurement records or conscious experiences characteristic of Everettian views. Another approach that arguably makes crucial use of decoherence is pilot-wave theory (or de Broglie–Bohm theory, or Bohmian mechanics), where particle positions (or other suitable ‘beables’) are guided in their temporal evolution by the universal wave function. The branching structure of the latter will clearly have an effect on the character of the evolution of the variables it guides. Instead, spontaneous collapse theories intuitively have less to do with decoherence because they seek to suppress unwanted superpositions. Still, they are also arguably able to make use of decoherence, perhaps with some qualifications.

More traditional approaches to quantum mechanics that somehow privilege the notion of measurement or observation also may have less-than-obvious connections with decoherence and in fact fit less well with it, but we shall look at von Neumann’s, Heisenberg’s and Bohr’s views. Finally, we shall briefly mention other approaches and remark on their various relations to decoherence. These will be Nelson’s stochastic mechanics, modal interpretations, and QBism.

3.1 Everett theories

The Everett theory (see the entries on Everett’s relative-state interpretation and on the many-worlds interpretation) was originally developed in 1957, before the theory of decoherence (Everett 1957). As we shall see, in recent years decoherence has become a defining notion of the theory, but it arguably fits rather well also with Everett’s original formulation.

The central technical notion in Everett’s own formulation of the theory is a relative state: e.g. the electron is in a state of spin up relative to the corresponding read-out state of the apparatus and in a state of spin down relative to the other read-out state. But Everett is interested in the emergence of stable structures in the universal wavefunction in terms of relative states. His paradigm example is that of a hydrogen atom: put a proton and an electron in a box, both spread out over the entire volume. After a while, the proton and electron will have relaxed. The position of the proton will still be spread out over the entire box, but relative to each position state of the proton, the electron will now be in the usual ground state of the hydrogen atom. According to Everett, this is what we mean by a stable atom forming. Everett thinks of classical systems (a cannonball!) along the same lines, and uses these arguments to justify the assumption that classical systems exist, in particular ones that are complex enough to store (and perhaps act upon) records of measurement-like interactions they have had with their environments. Everett’s aim is to recover the usual predictions of quantum mechanics for the memory registers of such ‘servomechanisms’.[15][16]

It should be clear that the theory of decoherence is an ideal technical tool if (like Everett) one wishes to identify stable structures within the universal wave function. And, indeed, some of the main workers in the field such as Zeh (2000) and (perhaps more guardedly) Zurek (1998) and Gell-Mann and Hartle (e.g. 1990) suggest that decoherence is most naturally understood in terms of Everett-like interpretations.[17] This role of decoherence has been emphasised most prominently by Saunders (e.g. 1993) and by Wallace (e.g. 2003), and is in fact responsible for the extraordinary renaissance of the Everett theory within the philosophy of physics since the mid-1990s.[18]

Until then, Everett was thought to be suffering from a problem of the ‘preferred basis’:[19] it was thought that without putting in by hand what should count as ‘worlds’, there were too many possible ways of defining such worlds, or too many ways of defining relative states. But looking for potentially relevant structures that are already present in the wave function allows one to identify worlds (or other relevant structures) without having to postulate the existence of some preferred states (whether or not they form an orthonormal basis).

A justification for this identification can be variously given by suggesting that a ‘world’ should be a temporally extended structure and thus reidentification over time will be a necessary condition for defining worlds; or similarly by suggesting that in order for observers to have evolved there must be stable records of past events (Saunders 1993, and the unpublished Gell-Mann and Hartle 1994 – see the Other Internet Resources section below); or that observers must be able to access robust states, preferably through the existence of redundant information in the environment (Zurek’s ‘existential interpretation’, 1998).[20] But the most comprehensive justification of the use of decoherence in terms of how Everett can be understood using structures in the universal wave function has been given by Wallace, starting with his (2003) and given its final form in his book (2012a). Wallace places his discussion in the wider context of an approach to emergence based on Dennett’s notion of ‘real patterns’. Higher-level theories are functionally instantiated by lower-level (more fundamental) ones if there exist relatively simple mappings from solutions of the lower-level theory over a certain domain to solutions of the higher-level theory. Higher-level structures are thus reduced to patterns at the more fundamental level, which are real in the (quasi-)Dennettian sense that they are objectively useful in terms of both predicting and explaining phenomena at the higher level. At the same time they are emergent, because they could be multiply realised, and because finding the relevant mapping may be possible only in a top-down perspective. Everettian worlds are such real patterns, because decoherence ensures their dynamical independence of each other.

Alternatively to some global notion of a world, one can look at the components of the (mixed) state of a (local) system, either from the point of view that the different components defined by decoherence will separately affect (different components of the state of) another system, or from the point of view that they will separately underlie the conscious experience (if any) of the system. The former sits well with the relational interpretation of Everett as put forward in the 1990s by Saunders (e.g. 1993), possibly with Zurek’s (1998) views, and arguably with Everett’s (1957) original notion of relative state.[21] The latter leads directly to the idea of ‘many-minds’ in the sense used by Zeh (2000; also 2003a, p. 24). As Zeh puts it, the ‘psycho-physical parallelism’ invoked by von Neumann (cf. below Section 3.4.1) is to be understood as the requirement of supervenience of the mental on the physical: only one mental state is experienced, so there should be only one corresponding component in the physical state. In a decohering no-collapse universe one can instead introduce a new psycho-physical parallelism, in which individual minds supervene on each non-interfering component in the physical state. (This is different from the many-minds interpretation of Albert and Loewer (1988), where the mental does not supervene on the physical, because individual minds have trans-temporal identity of their own.[22]) Zeh indeed suggests that, given decoherence, this is the most natural interpretation of quantum mechanics.[23]

3.2 Pilot-wave theories

‘Hidden variables’ approaches seek to explain quantum phenomena as equilibrium statistical effects arising from a deeper-level theory, rather strongly in analogy with attempts at understanding thermodynamics in terms of statistical mechanics (see the entry on philosophy of statistical mechanics). Of these, the most developed are the so-called pilot-wave theories, in particular the theory by de Broglie and Bohm (see also the entry on Bohmian mechanics). Pilot-wave theories are no-collapse formulations of quantum mechanics that assign to the wave function the role of determining the evolution of (‘piloting’, ‘guiding’) the variables characterising the system, say particle configurations, as in de Broglie’s (1928) and Bohm’s (1952) theory, or fermion number density, as in Bell’s (1987, Chapter 19) ‘beable’ quantum field theory, or again field configurations, as in various proposals for pilot-wave quantum field theories (for a recent survey, see Struyve 2011).

De Broglie’s idea was to modify classical Hamiltonian mechanics in such a way as to make it analogous to classical wave optics, by substituting for Hamilton and Jacobi’s action function the phase \(S\) of a physical wave. Such a ‘wave mechanics’ of course yields non-classical motions, but in order to understand how de Broglie’s dynamics relates to typical quantum phenomena, we must include Bohm’s (1952, Part II) analysis of the appearance of collapse. In the case of measurements, Bohm argued that the wave function evolves into a superposition of components that are and remain separated in the total configuration space of measured system and apparatus, so that the total configuration is ‘trapped’ inside a single component of the wave function, which will guide its further evolution, as if the wave had collapsed (‘effective’ wave function). This analysis allows one to recover the apparent collapse upon measurement (and the quantum probabilities are further recovered via statistical considerations).

It is natural to extend this analysis from the case of measurements induced by an apparatus to that of ‘spontaneous measurements’ as performed by the environment in the theory of decoherence, thus applying the same strategy to recover both quantum and classical phenomena. The resulting picture is one in which de Broglie–Bohm theory, in cases of decoherence, describes the motion of particles that are trapped inside one of the extremely well localised components selected by the decoherence interaction. Thus, de Broglie–Bohm trajectories will partake of the classical motions on the level defined by decoherence (the width of the components). This use of decoherence would arguably resolve the puzzles discussed, e.g., by Holland (1996) with regard to the possibility of a ‘classical limit’ of de Broglie’s theory. One baffling problem, for instance, is that trajectories with different initial conditions cannot cross in de Broglie–Bohm theory, because the wave guides the particles by way of a first-order equation, while, as is well known, Newton’s equations are second-order and possible trajectories in Newton’s theory do cross. Now, however, the non-interfering components produced by decoherence can indeed cross, and so will the trajectories of particles trapped inside them.
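For reference, writing the wave function in polar form \(\psi = \lvert\psi\rvert e^{iS/\hbar}\), de Broglie's guidance equation for a single spinless particle of mass \(m\) reads

\[ \frac{d\mathbf{x}}{dt} = \frac{\nabla S(\mathbf{x},t)}{m} : \]

the right-hand side is a single-valued velocity field on configuration space, which is why trajectories with different initial conditions cannot cross.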

If the main instances of decoherence are indeed coextensive with instances of separation in configuration space, de Broglie–Bohm theory can thus use the results of decoherence relating to the formation of classical structures, while providing an interpretation of quantum mechanics that explains why these structures are indeed observationally relevant.[24] This picture is natural, but not self-evident. De Broglie–Bohm theory and decoherence contemplate two a priori distinct mechanisms connected to apparent collapse: respectively, separation of components in configuration space and suppression of interference. While the former obviously implies the latter, it is equally obvious that decoherence need not imply separation in configuration space. One can expect, however, that decoherence interactions that take the form of approximate position measurements will do so.

A discussion of the role of decoherence in pilot-wave theory in the form suggested above has been given by Rosaler (2015, 2016). An informal discussion is given in Bohm and Hiley (1993, Chapter 8), partial results are given by Appleby (1999),[25] and some simulations have been realised by Sanz and co-workers (e.g. Sanz and Borondo 2009).[26] Relevant results have also been derived by Toroš, Donadi and Bassi (2016) who show quantitative correspondence with a spontaneous collapse model (see also Romano 2016). A rather different approach is instead suggested by Allori (2001; see also Allori and Zanghì 2009).[27]

While, as argued above, it appears plausible that decoherence might be instrumental in recovering the classicality of pilot-wave trajectories in the case of the non-relativistic particle theory, it is less clear whether this strategy might work equally well in the case of field theory. Doubts to this effect have been raised, e.g., by Saunders (1999) and by Wallace (2008, 2012b). Essentially, these authors doubt whether the configuration-space variables, or some coarse-grainings thereof, are, indeed, decohering variables.[28]

3.3 Spontaneous collapse theories

Spontaneous collapse theories seek to modify the Schrödinger equation, so that superpositions of different ‘everyday’ states do not arise or are very unstable. The best known such theory is the so-called GRW theory (Ghirardi, Rimini and Weber 1986), in which a material particle spontaneously undergoes localisation in the sense that at random times it experiences a collapse of the form used to describe approximate position measurements.[29] In the original model, the collapse occurs independently for each particle (a large number of particles thus ‘triggering’ collapse much more frequently); in later models the frequency for each particle is weighted by its mass, and the overall frequency for collapse is thus tied to mass density.[30]

Can decoherence be put to use in GRW? Such approaches may have intuitively little to do with decoherence since they seek to suppress precisely those superpositions that are created by decoherence. Nevertheless their relation to decoherence is interesting (and, as we shall see in the next section, interestingly different from the role that decoherence at least implicitly plays in von Neumann’s collapse postulate).

Qualitatively at least, since spontaneous collapse produces localisation, the effect appears formally similar to that in some of the models of decoherence. But we have ‘true’ collapse instead of suppression of interference, and spontaneous collapse occurs without there being any interaction between the system and anything else. In cases in which the decoherence interaction indeed also takes the form of approximate position measurements, the relation between spontaneous collapse and decoherence presumably boils down to a quantitative comparison. If collapse happens faster than decoherence, then the superposition of components relevant to decoherence will not have time to arise, and insofar as the collapse theory is successful in recovering classical phenomena, decoherence plays no role in this recovery. Conversely, if decoherence takes place faster than collapse, then the collapse mechanism can find ‘ready-made’ structures onto which to truly collapse the wave function.

Not much explicit work has been done on modelling decoherence in the setting of spontaneous collapse theories, however. Simple comparison of the relevant rates in models of decoherence and in spontaneous collapse theories suggests that decoherence is generally faster (Tegmark 1993, esp. Table 2). The more detailed model by Toroš, Donadi and Bassi (2016, esp. Section V) indicates that the effect of the collapse is amplified through the presence of the environment, i.e. the collapse rate is increased. The situation may be more complex when the decoherence interaction does not approximately privilege position (e.g. when instead it selects for currents in a SQUID), because collapse and decoherence might actually ‘pull’ in different directions.[31]
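The flavour of such rate comparisons can be conveyed by a crude back-of-the-envelope sketch (an illustration only, not a reproduction of Tegmark's numbers: the GRW rate is the original value of roughly \(10^{-16}\) s\(^{-1}\) per constituent particle, the grain mass is an assumed value, and the decoherence rate is taken, very conservatively, as the inverse of the microsecond time scale quoted in Section 1.1):

```python
# Crude rate comparison for a dust grain, in the spirit of Tegmark's Table 2.
# Illustrative sketch only: the grain mass is assumed; the GRW rate per particle
# is the original value (~1e-16 per second); the decoherence rate is taken as the
# inverse of the microsecond time scale quoted in Section 1.1 for a grain in air.

grw_rate_per_particle = 1e-16      # s^-1, original GRW value per constituent
nucleon_mass = 1.67e-27            # kg
grain_mass = 1e-17                 # kg, assumed (roughly the 1e-5 cm grain of Sec. 1.1)

n_particles = grain_mass / nucleon_mass
grw_collapse_rate = n_particles * grw_rate_per_particle   # amplified rate for the grain
decoherence_rate = 1.0 / 1e-6                             # s^-1, conservative estimate

print(f"constituent particles     ~ {n_particles:.1e}")
print(f"GRW collapse rate         ~ {grw_collapse_rate:.1e} per second")
print(f"decoherence rate (in air) ~ {decoherence_rate:.1e} per second")
print(f"ratio decoherence / GRW   ~ {decoherence_rate / grw_collapse_rate:.1e}")
# On these rough numbers decoherence is many orders of magnitude faster, in line
# with the conclusion drawn from Tegmark (1993) above.
```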

A further aspect of the relation between decoherence and spontaneous collapse theories relates to the experimental testability of spontaneous collapse theories. Indeed, if we assume that collapse is a real physical process, decoherence will make it extremely difficult in practice to detect empirically when and where exactly spontaneous collapse takes place: on the one hand, decoherence makes it look as if collapse has taken place already, while on the other it makes it difficult to perform interference experiments to check whether collapse has not yet taken place. (See the nice discussion of this issue in Chapter 5 of Albert (1992)).

Even worse, what might be interpreted as evidence for collapse could be reinterpreted as ‘mere’ suppression of interference within an Everett or pilot-wave approach, and only those cases in which the collapse theory predicts collapse but the system is shielded from decoherence (or perhaps in which the two pull in different directions) could be used to test collapse theories experimentally.

One particularly bad scenario for experimental testability is related to the speculation (in the context of the ‘mass density’ version) that the cause of spontaneous collapse may be connected with gravitation. Tegmark (1993, Table 2) quotes some admittedly uncertain estimates for the suppression of interference due to a putative quantum gravity, but they are quantitatively very close to the rate of destruction of interference due to the GRW collapse (at least outside of the microscopic domain). Similar conclusions are arrived at in the more detailed work by Kay (1998). If there is indeed such a quantitative similarity between these possible effects, then it would become extremely difficult to distinguish between the two. In the presence of gravitation, any positive effect could be interpreted as support for either collapse or decoherence. And in those cases in which the system is effectively shielded from decoherence (say, if the experiment is performed in free fall), then if the collapse mechanism is indeed triggered by gravitational effects, no collapse should be expected either.[32]

3.4 Orthodox approaches

3.4.1 Von Neumann

In the final Chapter VI of his book (von Neumann 1932), von Neumann provided a systematic discussion of quantum mechanics with collapse upon measurement (described by what he calls an intervention of type \(\mathbf{1})\), as distinct from the Schrödinger equation (intervention of type \(\mathbf{2})\), and traditionally associated with a role for conscious observation. (The two types of interventions are introduced already in Section V.1, but von Neumann postpones their conceptual discussion to the final chapter.)

In actual fact, von Neumann starts his discussion by pointing out that measurements are different from other physical processes both phenomenologically and by presupposing conscious observation. But he insists on preserving what he calls ‘psycho-physical parallelism’, requiring that the process of observation be describable also in purely physical terms. He thus requires that a boundary be drawn between the ‘observed’ and the ‘observer’, but also crucially that this boundary be movable arbitrarily far towards the observer end. (Note that von Neumann stops short of at least explicitly attributing to consciousness a causal role in collapsing the quantum state.)

Von Neumann thus needs to show that the final predictions for what we consciously observe are insensitive to how far along such a ‘measurement chain’ one chooses to continue applying intervention \(\mathbf{2}\), thus ensuring that every step in the process of observation can be described purely in physical terms. In von Neumann’s example of a measurement of temperature, we need not apply intervention \(\mathbf{1}\) to the system itself, but may apply it to the thermometer, or to the retina in the eye, or to the optic nerve, or anywhere else within the physical realm between the system and the ‘abstract ego’ of the observer. By the same token, however, we can (much more practically!) apply it also directly to the measured system.

As a preliminary, von Neumann discusses the relation between states of systems and subsystems, in particular the notion of partial trace and the biorthogonal decomposition theorem (i.e. the theorem stating that an entangled quantum state can always be written in terms of perfect correlations between two special bases for the subsystems). He also shows (as mentioned above) that the usual statistics of measurements cannot be recovered by assuming that the ‘observer’ is initially in a mixed state. He then proves that it is always possible to define an interaction Hamiltonian that will correlate perfectly the eigenstates of any given observable of an ‘observed’ system with the eigenstates of some other suitable observable of an ‘observer’, leaving as an exercise for the reader to show that predictions are independent of where one places the boundary between the two.

What the reader is supposed to do is to imagine a series of such interactions, between the system and the thermometer, between the thermometer and the light, between the light and the retina, etc., and rely on the absence of interference at each step to argue that, even if we describe a number of systems using intervention \(\mathbf{2}\), they behave for the purpose of the application of intervention \(\mathbf{1}\) as if they had collapsed already. In this sense, even though he is quite clearly not thinking in terms of mechanisms for suppressing interference, he is relying on decoherence. A fuller treatment (e.g. a detailed model of how the thermometer interacts with light, and some of the light is then sampled by the eye) would resemble more closely an analysis in terms of environmental decoherence.

3.4.2 Heisenberg

Similar considerations may be made about Heisenberg’s views on quantum mechanics, even though Heisenberg’s conceptual framework is arguably rather different from von Neumann’s.

For Heisenberg, the application of quantum mechanics requires a ‘cut’ between the system to be described quantum mechanically, and what is to be considered external to the system and is to be treated classically. Indeed, if one were to apply quantum mechanics to the entire universe, one would have a perfectly closed system in which nothing would ever happen. But Heisenberg places special emphasis on the idea that any given system must be describable using quantum mechanics (indeed, that such a system is in principle always able to display interference effects if placed under the appropriate conditions[33]). Self-consistency of the theory then requires the arbitrary movability of the cut away from the system. (The most detailed presentation of these ideas is in Heisenberg’s draft reply to the Einstein–Podolsky–Rosen argument – see Crull and Bacciagaluppi (2011) in the Other Internet Resources.)

If one thinks about some of the examples that Heisenberg considers to be measurements, it is even clearer than in von Neumann’s case that the movability of the Heisenberg cut in fact requires decoherence. In particular, his discussion of \(\alpha\)-particle tracks involves successive measurements whenever the \(\alpha\)-particle ionises an atom in a cloud chamber. If we require that the Heisenberg cut be movable to the level of the entire cloud chamber, we shift directly to a Mott-type analysis of the \(\alpha\)-particle tracks.

One further aspect that is characteristic for Heisenberg and that prima facie does not fit with the theory of decoherence, is that Heisenberg does not take quantum states as fundamental. For him, Schrödinger’s notion of a ‘state’ was just a mathematical artifact that is convenient for calculating transition probabilities between values of (measured) observables. This can also be seen as underpinning the movability of the cut: there is no matter of fact about when the collapse takes place, and all that matters physically are the transition probabilities between values of observables. This view is still compatible with decoherence, however, as long as one sees the role of the quantum state there as again just a convenient tool for calculating transition probabilities (say, in a decoherent histories framework).[34]

3.4.3 Bohr

Bohr shared with von Neumann and with Heisenberg the idea that quantum mechanics is in principle applicable to any physical system (as shown e.g. by his willingness in the course of his debates with Einstein to apply the uncertainty relations to parts of the experimental apparatus when not used as an apparatus), while denying that it is meaningful to apply it to the entire universe. What is central to Bohr’s views, however, is not so much the movability of the cut within a given experimental arrangement, but the fact that different experimental arrangements will generally select complementary aspects of the description of a physical system, corresponding to different, equally necessary classical pictures that, however, cannot be combined. In this sense, for Bohr classical concepts are conceptually prior to quantum mechanics. In a terminology reminiscent of Kant, the quantum state is not an anschaulich (‘intuitive’) representation of a quantum object, but only a symbolic representation, a shorthand for the quantum phenomena that are constituted by applying the various complementary classical pictures. (See also the entry on the Copenhagen interpretation.)

Thus, if we understand the theory of decoherence as pointing to how classical concepts might in fact emerge from quantum mechanics, we see a tension with Bohr’s basic position. According to decoherence, even though classical concepts are autonomous in the sense of being emergent, they are not fundamentally prior to quantum mechanics. In another sense, however, decoherence does support Bohr’s point of view, because we can see decoherence (in particular environmental decoherence) as suggesting that there are no quantum phenomena without classical records: it is the suppression of interference that creates the conditions for restoring the objectivity that gets lost through what Bohr sees as the loss of independent reality attaching to both the system and the measuring apparatus.[35]

Both of these aspects can be seen in the reception of Everett’s ideas by Bohr and his circle. While Everett saw his own theory as directly opposed to von Neumann’s approach, he believed that he could provide a justification for Bohr’s idea of complementarity. Bohr, however, rejected the attempt to apply the notion of quantum state to a description of the whole universe. (The rejection of Everett’s ideas in Copenhagen in fact rather tragically contributed to Everett leaving physics in favour of military operations research.[36])

3.5 Other approaches

3.5.1 Nelson’s stochastic mechanics

Nelson’s (1966, 1985) stochastic mechanics is a proposal to recover the wave function and the Schrödinger equation as effective elements in the description of a fundamental diffusion process in configuration space. Insofar as the proposal is successful,[37] it shares many features with de Broglie–Bohm theory. Indeed, the current velocity for the particles in Nelson’s theory turns out to be equal to the de Broglie–Bohm velocity, and the particle distribution in Nelson’s theory is equal to that in de Broglie–Bohm theory (in equilibrium).
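For a single spinless particle of mass \(m\), writing \(\psi = \sqrt{\rho}\, e^{iS/\hbar}\), the correspondence can be sketched as follows (a standard textbook form, not tied to any particular reference above): the position \(X_t\) undergoes a diffusion
\[
dX_t = \big( v + u \big)\, dt + dW_t, \qquad v = \frac{\nabla S}{m}, \qquad u = \frac{\hbar}{2m}\, \nabla \ln \rho ,
\]
with \(dW_t\) a Wiener process of diffusion coefficient \(\hbar/2m\). The current velocity \(v\) is precisely the de Broglie–Bohm guidance velocity, and \(\rho = \lvert\psi\rvert^{2}\) is the equilibrium distribution of the process.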

It follows that many results from pilot-wave theories can be imported into Nelson’s stochastic mechanics, including those based on decoherence. In particular, the strategies used in pilot-wave theories to recover the appearance of collapse and the emergence of a classical regime can be applied also to the case of stochastic mechanics, even though so far very little has been done along these lines. Doing so would arguably also resolve some conceptual puzzles specific to Nelson’s theory, such as the problem of two-time correlations raised in Nelson (2006).[38]

3.5.2 Modal interpretations

The first ‘modal interpretation’ of quantum mechanics was proposed by Van Fraassen (1973, 1991), and was strictly an interpretation of the theory (while other later versions came more to resemble pilot-wave theories; see the entry on modal interpretations). Van Fraassen’s basic intuition was that the quantum state of a system should be understood as describing a collection of possibilities, represented by components in the (mixed) quantum state. His proposal considers only decompositions at single instants, and is agnostic about re-identification over time. Thus, it can directly exploit only the fact that decoherence produces descriptions in terms of classical-like states, which will count as possibilities in Van Fraassen’s sense. This ensures ‘empirical adequacy’ of the quantum description (crucial in Van Fraassen’s constructive empiricism). The dynamical aspects of decoherence can be exploited indirectly, in that single-time components will exhibit records of the past, which ensure adequacy with respect to observations, but about whose veridicity Van Fraassen remains agnostic.

A different strand of modal interpretations is loosely associated with the (distinct) views of Kochen (1985), Healey (1989) and Dieks (see e.g. Dieks and Vermaas 1998). We focus on the last of these to fix ideas. In this approach, Van Fraassen’s possible decompositions are restricted to a single one, singled out by a mathematical criterion (related to the biorthogonal decomposition theorem mentioned above in Section 3.4.1), and a dynamical picture is explicitly sought (and was later developed). In the case of an ideal (non-approximate) quantum measurement, this special decomposition coincides with that defined by the eigenstates of the measured observable and the corresponding pointer states, and the interpretation thus appears to solve the measurement problem (for this case at least). In Dieks’s original intentions, the approach was meant to provide an attractive interpretation of quantum mechanics also in the case of decoherence interactions, since at least in simple models of decoherence the same kind of decomposition singles out more or less also those states between which interference is suppressed (with a proviso about very degenerate states).
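For reference, the biorthogonal (Schmidt) decomposition theorem states that any pure state of a composite system can be written as
\[
|\Psi\rangle = \sum_i c_i\, |a_i\rangle \otimes |b_i\rangle ,
\]
with \(\{|a_i\rangle\}\) and \(\{|b_i\rangle\}\) orthonormal families; the decomposition is unique whenever the \(\lvert c_i\rvert^{2}\) are non-degenerate. A criterion of this kind is what singles out the special decomposition referred to in this paragraph.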

Interestingly, this approach fails when applied to other models of decoherence, e.g., that in Joos and Zeh (1985, Section III.2). Indeed, it appears that in more general models of decoherence the components singled out by this version of the modal interpretation are given by delocalised states, and are unrelated to the localised components naturally privileged by decoherence (Donald 1998; Bacciagaluppi 2000). Thus the relation with decoherence has been the touchstone for these versions of the modal interpretation. Note that Van Fraassen’s original interpretation is untouched by this problem, and so are possibly some more recent modal or modal-like interpretations by Spekkens and Sipe (2001), Bene and Dieks (2002), and Berkovitz and Hemmo (2006).

The general idea of modal interpretations, more or less in the spirit of Van Fraassen, can be applied more widely. For one thing, it is cognate to some of the views expressed in the decoherent histories literature. Decoherent histories could be seen as alternative possible histories of the world, one of which is in fact actualised. A discussion in these terms has been outlined by Hemmo (1996). Such views are also possibly quite close to those of Everett himself, who (perhaps surprisingly for the modern reader) was not a realist but an empiricist. A discussion of Everett with parallels to Van Fraassen is given by Barrett (2015). One final view that has some similarities with Van Fraassen’s and should be equally able to exploit the results of decoherence is Rovelli’s relational quantum mechanics (see also Van Fraassen 2010).

3.5.3 QBism

QBism (originally short for ‘quantum Bayesianism’) is a view of quantum mechanics developed by Chris Fuchs and co-workers, which has made current the idea that subjective probabilities à la de Finetti can be used also in quantum mechanics (see the entry on quantum Bayesian and pragmatist views of quantum theory). The position is more radical than this, however, in that it does not only claim that the quantum probabilities as defined by the quantum state should be interpreted subjectively, but that the quantum state itself is merely an expression of an agent’s degrees of belief.[39]

The role of decoherence in QBism is rather downplayed. For example, Fuchs and Schack (2012, Section 7) see it in the light of the reflection principle (concerning an agent’s beliefs about their future beliefs). Specifically, in the context of a von Neumann measurement chain, an agent can use the state of the system as decohered by some later elements of the chain as an expression of their beliefs about what their beliefs will be once the earlier elements of the measurement chain have been completed. (And of course, the results of decoherence can be taken into account if an agent is considering making measurements on a system that is in interaction with some environment.)
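On the simplest reading (our gloss, not a formula given by Fuchs and Schack), if the agent’s current state for the system is the decohered mixture \(\rho = \sum_j p_j \rho_j\) over the possible outcomes \(j\) of the earlier steps of the chain, then for any later measurement with effects \(P_k\),
\[
\mathrm{Tr}(\rho P_k) \;=\; \sum_j p_j\, \mathrm{Tr}(\rho_j P_k) ,
\]
i.e. the agent’s present probabilities are the expectations, weighted by their present beliefs about the intermediate outcomes, of the probabilities they will assign once those outcomes are known, which is just the pattern the reflection principle requires.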

Because they are subjectivists not just about probabilities but also about the quantum state, however, what QBists cannot exploit is the relevance of decoherence to the emergence of the entire classical world in which (human) agents have in fact evolved – unless, that is, agents are not treated as agents but as mere physical systems. The QBist’s predicament is thus similar to Bohr’s: decoherence clearly appears to be relevant to the emergence of the very structures that are deemed to be conceptually prior to their analysis of quantum mechanics itself.

4. Scope of Decoherence

We have seen in the last section that not all approaches to quantum mechanics can make full use of decoherence. In those approaches that can, however, decoherence is instrumental in yielding a wealth of structures that emerge from the unitary Schrödinger (or Heisenberg) dynamics. How far can this programme of decoherence (Zeh 2003a, p. 9) be successfully developed?

4.1 The world according to decoherence

What seems very clear is that decoherence is crucial for the emergence of much of the macroscopic world around us, from the motions in the solar system (cf. the discussion of the motion of Saturn’s moon Hyperion, for an assessment of which see Schlosshauer 2008) down to the working of enzymes (which relies on their molecular shapes). The detailed picture of the world that emerges from decoherence, however, is full of subtleties.

For one thing, while the more ‘macroscopic’ a system, the more pervasive the effects of decoherence and the more complex the structures that emerge through it, this is only a rule of thumb. Not all molecules are chiral (bound ammonia groups, for instance, tend to be found in superpositions), and there is no clear-cut criterion for when a system should count as macroscopic. Indeed, even apart from examples like superconducting systems, there might be surprising cases in which not all interference effects have been suppressed by decoherence even at the macroscopic level. A famous proposal by Hameroff and Penrose (1996) links the phenomenon of consciousness with the possibility of quantum superpositions within microtubules (and their subsequent active suppression via collapse); other authors interpret the mathematically quantum-like effects described within ‘quantum cognition’ as actual quantum effects (for both, see the entry on quantum approaches to consciousness). At present, such proposed macroscopic quantum effects remain speculative at best, but plausible cases for the continuing relevance of quantum superpositions at the macroscopic level can be found in quantum biology, notably the studies of possible quantum effects in the navigational system of migrating birds (Cai, Guerreschi and Briegel 2010).

Closer to home, while the classical world is recognised as having been all the time a dynamical pattern emerging from quantum mechanics, it turns out to be less classical than we might have expected. One interesting example is the description of classically chaotic systems. A straightforward application of the techniques allowing one to derive Newtonian trajectories at the level of components has been employed by Zurek and Paz (1994) to derive chaotic trajectories in quantum mechanics. The problem with the quantum description of chaotic behaviour is that prima facie it should be impossible. Chaos is characterised roughly as extreme sensitivity in the behaviour of a system on its initial conditions, in the sense that the distance between the trajectories arising from different initial conditions increases exponentially in time (see the entry on chaos). Since the Schrödinger evolution is unitary, it preserves all scalar products and all distances between quantum state vectors. Thus, it would seem, close initial conditions lead to trajectories that are uniformly close throughout all of time, and no chaotic behaviour is possible (‘problem of quantum chaos’). The crucial point that enables Zurek and Paz’s analysis is that the relevant trajectories defined by decoherence are at the level of components of the state of the system. Unitarity is preserved because the vectors in the environment, to which these different components are coupled, are and remain orthogonal: how the components themselves more specifically evolve is immaterial. Explicit modelling yields a picture of quantum chaos in which different trajectories branch (a feature absent from classical chaos, which is deterministic) and then indeed diverge exponentially. (As with the crossing of trajectories in de Broglie–Bohm theory in Section 3.2, one has behaviour at the level of components that is qualitatively different from the behaviour derived for wave functions of an isolated system.) The qualitative picture is the same as we mentioned above in Section 1.3, of classical trajectories that are kicked slightly off course (trajectories with slight kinks). In the case of classically chaotic systems, however, this has a dramatic effect. This means that systems like the weather turn out to be ‘branching’ all the time due to decoherence interactions, so that what we usually think of as classical unpredictability is in fact quantum indeterminism! (For an excellent discussion, see Wallace 2012a, Chapters 3 and 10.)
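The point about unitarity can be made explicit (an elementary fact, independent of any particular model): if \(U(t)\) denotes the unitary Schrödinger evolution of the isolated system, then for any two initial states \(\psi_1\), \(\psi_2\),
\[
\lVert U(t)\psi_1 - U(t)\psi_2 \rVert = \lVert \psi_1 - \psi_2 \rVert \quad \text{for all } t ,
\]
so the Hilbert-space distance between states never grows, and nearby ‘initial conditions’ cannot diverge exponentially at the level of the total wave function. Exponential divergence becomes possible only at the level of the decohering components, whose individual evolution is not constrained to be unitary.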

And as we have also mentioned, quantum phenomena themselves are a feature of the world that emerges through decoherence (Zeh 2003a, p. 33; see also Bacciagaluppi 2002, Section 6.2): not only the stability of the outcomes of laboratory measurements, and thus ‘quantum phenomena’ in the specific sense of Bohr, but also quantum jumps or the appearance of \(\alpha\)-particle trajectories are a direct consequence of decoherence. The classical world yielded by decoherence is thus one (or one of many!) punctuated by quantum phenomena.

4.2 Further applications

Further along these lines, Zeh (2003b) argues that decoherence can explain the appearance of particle detections within quantum field theory (see the entry on quantum field theory). Therefore, only fields need to be included in the fundamental concepts, and ‘particles’ are a derived concept, unlike what might be suggested by the customary introduction of fields through a process of ‘second quantisation’. Another application to quantum field theory is the suggestion that the justification for the (strict) superselection rule for charge in quantum field theory can also be phrased in terms of decoherence. As pointed out by Giulini, Kiefer and Zeh (1995; see also Giulini et al. 1996, Section 6.4), an electric charge is surrounded by a Coulomb field (which in electrostatics is infinitely extended, though the argument can also be carried out using the retarded field). States of different electric charge of a particle are thus coupled to different, presumably orthogonal, states of its electric field. One can consider the far-field as an effectively uncontrollable environment that decoheres the particle (and the near-field), so that superpositions of different charges are indeed always suppressed.[40]
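Schematically (in an idealised notation not taken from the cited papers), since a charge is always accompanied by its Coulomb field, a putative superposition of charges has the correlated form
\[
\alpha\, |q_1\rangle\, |E_{q_1}\rangle + \beta\, |q_2\rangle\, |E_{q_2}\rangle , \qquad \langle E_{q_1} | E_{q_2} \rangle \approx 0
\]
(strictly zero in the idealised electrostatic case), so that tracing over the uncontrollable far-field leaves a reduced state of the particle (and near-field) that is diagonal in charge, which is just the content of the superselection rule.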

Another claim about the significance of decoherence relates to time asymmetry (see e.g. the entries on time asymmetry in thermodynamics and philosophy of statistical mechanics). Insofar as apparent collapse (branching) is indeed a time-directed process, decoherence will have direct relevance to the emergence of this ‘quantum mechanical arrow of time’. This is not a straightforward issue, however, and questions familiar from the foundations of statistical mechanics arise also in the context of decoherence. Specific issues include the fact that models of environmental decoherence tend to assume very special (unentangled) initial conditions, and the fact that the definition of the decoherence functional in the histories approach is itself asymmetrical (although symmetric versions also exist). For a spectrum of discussions, see Hartle (1998, and references therein), Zeh (2001, Chapter 4), and Bacciagaluppi (2002, Section 6.1; 2007). Whether decoherence is connected to the other familiar arrows of time is a more specific question. Indeed, should decoherence merely allow us to recover the time-symmetric classical description of, say, particle trajectories in a gas, or should it allow us to derive time-asymmetric thermodynamic behaviour directly, bypassing classical attempts at understanding it? Note that insofar as classical systems such as gases are believed to be chaotic, the origin of the probabilities in classical statistical mechanics will arguably be quantum. For various discussions, see e.g. Zurek and Paz (1994), Hemmo and Shenker (2001), and Wallace (2012a, Chapter 9; and 2001, 2013a, 2013b in the Other Internet Resources).

Finally, it has been suggested that decoherence should be a useful ingredient in a theory of quantum gravity (see the entry on quantum gravity), as discussed e.g. by Kiefer (1994). First, because a suitable generalisation of decoherence theory to a full theory of quantum gravity should yield suppression of interference between different classical spacetimes (see e.g. also Giulini et al. 1996, Section 4.2). Second, it is speculated that decoherence might solve the so-called problem of time, which arises as a prominent puzzle in (the ‘canonical’ approach to) quantum gravity. This is the problem that the candidate fundamental equation (in this approach) – the Wheeler–DeWitt equation – is an analogue of a time-independent Schrödinger equation, and does not contain time at all, so that time needs somehow to emerge. In the context of decoherence theory, one can for instance construct toy models in which the analogue of the Wheeler–DeWitt wave function decomposes into non-interfering components (for a suitable sub-system) each satisfying a time-dependent Schrödinger equation, so that decoherence appears in fact as the source of time. An accessible introduction to and philosophical discussion of such models is given by Ridderbos (1999), with references to the original papers. See also the more recent model by Halliwell and Thorwart (2002).[41]
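The contrast at issue can be displayed schematically (suppressing all details of the gravitational degrees of freedom): the Wheeler–DeWitt equation has the form of a stationary constraint,
\[
\hat{H}\, \Psi = 0 ,
\]
rather than of a time-dependent Schrödinger equation \(i\hbar\, \partial_t \psi = \hat{H}\psi\). In the toy models just mentioned, decoherence selects non-interfering components of \(\Psi\) within which an effective time parameter, and with it an effective time-dependent Schrödinger equation for a suitable sub-system, emerges.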

Bibliography

  • Adler, S. L., 2003, ‘Why Decoherence has not Solved the Measurement Problem: A Response to P. W. Anderson’, Studies in History and Philosophy of Modern Physics, 34 B: 135–142 [preprint available online].
  • Albert, D., 1992, Quantum Mechanics and Experience, Cambridge, Mass.: Harvard University Press.
  • Albert, D., and Loewer, B., 1988, ‘Interpreting the Many Worlds Interpretation’, Synthese, 77: 195–213 [reprint available online].
  • Allori, V., 2001, Decoherence and the Classical Limit of Quantum Mechanics, Ph.D. Thesis, Università di Genova.
  • Allori, V., and Zanghì, N., 2009, ‘On the Classical Limit of Quantum Mechanics’, Foundations of Physics, 39(1): 20–32 [preprint available online].
  • Anglin, J. R., Paz, J. P., and Zurek, W. H., 1997, ‘Deconstructing Decoherence’, Physical Review, A 55: 4041–4053 [preprint available online].
  • Appleby, D. M., 1999, ‘Bohmian Trajectories Post-Decoherence’, Foundations of Physics, 29: 1885–1916 [preprint available online].
  • Bacciagaluppi, G., 2000, ‘Delocalized Properties in the Modal Interpretation of a Continuous Model of Decoherence’, Foundations of Physics, 30: 1431–1444.
  • –––, 2002, ‘Remarks on Space-Time and Locality in Everett’s Interpretation’, in T. Placek and J. Butterfield (eds), Non-Locality and Modality (NATO Science Series, II. Mathematics, Physics and Chemistry, Volume 64), Dordrecht: Kluwer, pp. 105–122 [preprint available online].
  • –––, 2007, ‘Probability, Arrow of Time and Decoherence’, Studies in History and Philosophy of Modern Physics, 38: 439–456 [preprint available online].
  • –––, 2014a, ‘Insolubility from No-Signalling’, International Journal of Theoretical Physics, 53(10): 3465–3474 [preprint available online].
  • –––, 2014b, ‘A Critic Looks at QBism’, in M. C. Galavotti, S. Hartmann, M. Weber, W. Gonzalez, D. Dieks and T. Uebel (eds), New Directions in the Philosophy of Science, Cham: Springer, pp. 403–416 [preprint available online].
  • –––, 2020, ‘Unscrambling Subjective and Epistemic Probabilities’, in M. Hemmo and O. Shenker (eds.), Quantum, Probability, Logic – Itamar Pitowsky’s Work and Influence, Berlin: Springer, pp. 49–89 [preprint available online].
  • Bacciagaluppi, G., and Valentini, A., 2009, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference, Cambridge: Cambridge University Press.
  • Barbour, J., 1999, The End of Time, London: Weidenfeld and Nicolson.
  • Barrett, J. A., 2015, ‘Pure Wave Mechanics and the Very Idea of Empirical Adequacy’, Synthese, 192(10): 3071–3104 [preprint available online].
  • Bassi, A. and Ghirardi, G., 2000, ‘A General Argument Against the Universal Validity of the Superposition Principle’, Physics Letters A, 275(5–6): 373–381 [preprint available online].
  • Bassi, A. and Ulbricht, H., 2014, ‘Collapse Models: From Theoretical Foundations to Experimental Verifications’, Journal of Physics: Conference Series, 504(1), 012023 [preprint available online].
  • Bell, J. S., 1987, Speakable and Unspeakable in Quantum Mechanics, Cambridge: Cambridge University Press.
  • Bene, G., and Dieks, D., 2002, ‘A Perspectival Version of the Modal Interpretation of Quantum Mechanics and the Origin of Macroscopic Behavior’, Foundations of Physics, 32: 645–672 [preprint available online].
  • Ben-Menahem, Y., and Hemmo, M. (eds), 2012, Probability in Physics, Berlin: Springer.
  • Berkovitz, J., 2012, ‘The World According to de Finetti: On de Finetti’s Theory of Probability and its Application to Quantum Mechanics’, in Ben-Menahem and Hemmo (2012), pp. 249–280 [preprint available online].
  • Berkovitz, J., and Hemmo, M., 2006, ‘Modal Interpretations and Relativity: A Reconsideration’, in W. Demopoulos and I. Pitowsky (eds.), Physical Theory and its Interpretation: Essays in Honor of Jeffrey Bub (Western Ontario Series in Philosophy of Science, Vol. 72), New York: Springer, pp. 1–28.
  • Bhattacharya, T., Habib, S. and Jacobs, K., 2000, ‘Continuous Quantum Measurement and the Emergence of Classical Chaos’, Physical Review Letters 85(23): 4852–4855 [preprint available online].
  • de Broglie, L., 1928, ‘La nouvelle dynamique des quanta’, in [H. Lorentz (ed.)], Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique […] Solvay, Paris: Gauthier-Villars. Translated as ‘The New Dynamics of Quanta’ in Bacciagaluppi and Valentini (2009), pp. 374–407.
  • Bohm, D., 1952, ‘A Suggested Interpretation of the Quantum Theory in Terms of “Hidden” Variables: I and II’, Physical Review, 85: 166–179 and 180–193. Reprinted in Wheeler and Zurek (1983), pp. 369–396.
  • Bohm, D., and Hiley, B., 1993, The Undivided Universe, London: Routledge.
  • Brun, T., 2002, ‘A Simple Model of Quantum Trajectories’, American Journal of Physics 70(7): pp. 719–737 [preprint available online].
  • Bub, J., 1997, Interpreting the Quantum World, Cambridge: Cambridge University Press (2nd corrected edition, 1999).
  • Cai, J., Guerreschi, G. G., and Briegel, H. J., 2010, ‘Quantum Control and Entanglement in a Chemical Compass’, Physical Review Letters, 104(22): 220502/1–4 [preprint available online].
  • Derakhshani, M., 2017, Stochastic Mechanics Without ad hoc Quantization: Theory and Applications to Semiclassical Gravity, PhD Thesis, Universiteit Utrecht [available online].
  • Dieks, D., and Vermaas, P. E. (eds), 1998, The Modal Interpretation of Quantum Mechanics, Dordrecht: Kluwer.
  • Donald, M., 1998, ‘Discontinuity and Continuity of Definite Properties in the Modal Interpretation’, in Dieks and Vermaas (1998), pp. 213–222 [preprint available online].
  • Dowker, F., and Kent, A., 1995, ‘Properties of Consistent Histories’, Physical Review Letters, 75: 3038–3041 [preprint available online].
  • Dowker, F., and Kent, A., 1996, ‘On the Consistent Histories Approach to Quantum Mechanics’, Journal of Statistical Physics, 82: 1575–1646 [preprint available online].
  • Epstein, S. T., 1953, ‘The Causal Interpretation of Quantum Mechanics’, Physical Review, 89: 319.
  • Everett, H. III, 1957, ‘“Relative-State” Formulation of Quantum Mechanics’, Reviews of Modern Physics, 29: 454–462. Reprinted in Everett (2012), pp. 173–196, and in Wheeler and Zurek (1983), pp. 315–323.
  • –––, 2012, The Everett Interpretation of Quantum Mechanics: Collected Works 1955–1980 with Commentary, ed. by J. A. Barrett and P. Byrne, Princeton: Princeton University Press.
  • Felline, L., and Bacciagaluppi, G., 2013, ‘Locality and Mentality in Everett Interpretations: Albert and Loewer’s Many Minds’, Mind and Matter, 11(2): 223–241 [preprint available online].
  • van Fraassen, B., 1973, ‘Semantic Analysis of Quantum Logic’, in C. A. Hooker (ed.), Contemporary Research in the Foundations and Philosophy of Quantum Theory, Dordrecht: Reidel, pp. 180–213.
  • –––, 1991, Quantum Mechanics: An Empiricist View, Oxford: Clarendon Press.
  • –––, 2010, ‘Rovelli’s World’, Foundations of Physics, 40(4): 390–417 [reprint available online].
  • Fuchs, C. A., and Schack, R., 2012, ‘Bayesian Conditioning, the Reflection Principle, and Quantum Decoherence’, in Ben-Menahem and Hemmo (2012), pp. 233–247 [preprint available online].
  • Gell-Mann, M., and Hartle, J. B., 1990, ‘Quantum Mechanics in the Light of Quantum Cosmology’, in W. H. Zurek (ed.), Complexity, Entropy, and the Physics of Information, Reading, Mass.: Addison-Wesley, pp. 425–458 [preprint available online].
  • Ghirardi, G.C., Rimini, A., and Weber, T., 1986, ‘Unified Dynamics for Microscopic and Macroscopic Systems’, Physical Review, D 34: 470–479 [preprint available online].
  • Giulini, D., Kiefer, C. and Zeh, H.-D., 1995, ‘Symmetries, Superselection Rules, and Decoherence’, Physics Letters A, 199(5–6): 291–298 [preprint available online].
  • Giulini, D., Joos, E., Kiefer, C., Kupsch, J., Stamatescu, I.-O., and Zeh, H.-D., 1996, Decoherence and the Appearance of a Classical World in Quantum Theory, Berlin: Springer (for the revised edition, see Joos et al. (2003)).
  • Griffiths, R. B., 1984, ‘Consistent Histories and the Interpretation of Quantum Mechanics’, Journal of Statistical Physics, 36: 219–272.
  • –––, 2002, Consistent Quantum Theory, Cambridge: Cambridge University Press.
  • Halliwell, J. J., 1995, ‘A Review of the Decoherent Histories Approach to Quantum Mechanics’, Annals of the New York Academy of Sciences, 755: 726–740 [preprint available online].
  • –––, 1999, ‘Somewhere in the Universe: Where is the Information Stored when Histories Decohere?’, Physical Review, D 60: 105031/1–17 [preprint available online].
  • –––, 2010, ‘Macroscopic Superpositions, Decoherent Histories, and the Emergence of Hydrodynamical Behaviour’, in Saunders et al. (2010), pp. 99–118.
  • Halliwell, J. J., and Thorwart, J., 2002, ‘Life in an Energy Eigenstate: Decoherent Histories Analysis of a Model Timeless Universe’, Physical Review, D 65: 104009/1–19 [preprint available online].
  • Hameroff, S. R., and Penrose, R., 1996, ‘Conscious Events as Orchestrated Spacetime Selections’, Journal of Consciousness Studies, 3(1): 36–53.
  • Hartle, J. B., 1998, ‘Quantum Pasts and the Utility of History’, Physica Scripta, T 76: 67–77 [preprint available online].
  • –––, 2005, ‘The Physics of “Now”’, American Journal of Physics, 73(2): 101–109 [preprint available online].
  • –––, 2010, ‘Quasiclassical Realms’, in Saunders et al. (2010), pp. 73–98.
  • Healey, R., 1989, The Philosophy of Quantum Mechanics: An Interactive Interpretation, Cambridge: Cambridge University Press.
  • Hemmo, M., 1996, Quantum Mechanics Without Collapse: Modal Interpretations, Histories and Many Worlds, Ph.D. Thesis, University of Cambridge.
  • Hemmo, M. and Shenker, O., 2001, ‘Can we Explain Thermodynamics by Quantum Decoherence?’, Studies in History and Philosophy of Modern Physics, 32 B: 555–568. doi:10.1016/S1355-2198(01)00029-6
  • Hermann, G., 1935, ‘Die naturphilosophischen Grundlagen der Quantenmechanik’, Abhandlungen der Fries’schen Schule, 6: 75–152. Translated as ‘Natural-Philosophical Foundations of Quantum Mechanics’ in E. Crull and G. Bacciagaluppi (eds), Grete Hermann: Between Physics and Philosophy, Dordrecht: Springer (2017), pp. 239–278.
  • Holland, P. R., 1996, ‘Is Quantum Mechanics Universal?’, in J. T. Cushing, A. Fine and S. Goldstein (eds), Bohmian Mechanics and Quantum Theory: An Appraisal, Dordrecht: Kluwer, pp. 99–110.
  • Joos, E. and Zeh, H.-D., 1985, ‘The Emergence of Classical Properties through Interaction with the Environment’, Zeitschrift für Physik, B 59: 223–243 [reprint available online from the decoherence website].
  • Joos, E., Zeh, H.-D., Kiefer, C., Giulini, D., Kupsch, J., and Stamatescu, I.-O., 2003, Decoherence and the Appearance of a Classical World in Quantum Theory, Berlin: Springer.
  • Kay, B. S., 1998, ‘Decoherence of Macroscopic Closed Systems within Newtonian Quantum Gravity’, Classical and Quantum Gravity, 15: L89–L98 [preprint available online].
  • Kent, A., 1998, ‘Quantum Histories’, Physica Scripta, T76: 78–84 [preprint available online].
  • Kiefer, C., 1994, ‘The Semiclassical Approximation to Quantum Gravity’, in J. Ehlers and H. Friedrich (eds), Canonical Gravity: From Classical to Quantum, Berlin: Springer, pp. 170–212 [preprint available online].
  • Kochen, S., 1985, ‘A New Interpretation of Quantum Mechanics’, in P. Mittelstaedt and P. Lahti (eds), Symposium on the Foundations of Modern Physics 1985, Singapore: World Scientific, pp. 151–169.
  • Leggett, A. J., 1984, ‘Schrödinger’s Cat and her Laboratory Cousins’, Contemporary Physics, 25: 583–594.
  • –––, 2002, ‘Testing the Limits of Quantum Mechanics: Motivation, State of Play, Prospects’, Journal of Physics, C 14: R415–R451.
  • Maudlin, T., 1995, ‘Three Measurement Problems’, Topoi, 14(1): 7–15.
  • Mott, N. F., 1929, ‘The Wave Mechanics of \(\alpha\)-ray Tracks’, Proceedings of the Royal Society of London, A 126 (1930, No. 800 of 2 December 1929): 79–84.
  • Nelson, E., 1966, ‘Derivation of the Schrödinger Equation from Newtonian Mechanics’, Physical Review, 150: 1079–1085.
  • –––, 1985, Quantum Fluctuations, Princeton: Princeton University Press.
  • –––, 2006, ‘Afterword’, in W. G. Faris (ed.), Diffusion, Quantum Theory, and Radically Elementary Mathematics (Mathematical Notes 47), Princeton: Princeton University Press, pp. 227–230.
  • von Neumann, J., 1932, Mathematische Grundlagen der Quantenmechanik, Berlin: Springer. Translated by R. T. Beyer as Mathematical Foundations of Quantum Mechanics, Princeton: Princeton University Press (1955).
  • Omnès, R., 1988, ‘Logical Reformulations of Quantum Mechanics: I. Foundations’, Journal of Statistical Physics, 53: 893–932; ‘II. Inferences and the Einstein–Podolsky–Rosen Experiment’, 933–955; ‘III. Classical Limit and Irreversibility’, 957–975.
  • –––, 1989, ‘Logical Reformulations of Quantum Mechanics: IV. Projectors in Semi-Classical Physics’, Journal of Statistical Physics, 57: 357–382.
  • Osnaghi, S., Freitas, F., and Freire, O. Jr, 2009, ‘The Origin of the Everettian Heresy’, Studies in History and Philosophy of Modern Physics, 40(2): 97–123.
  • Pearle, P., 1989, ‘Combining Stochastic Dynamical State-Vector Reduction with Spontaneous Localization’, Physical Review, A 39: 2277–2289 [preprint available online].
  • –––, 1997, ‘True Collapse and False Collapse’, in Da Hsuan Feng and Bei Lok Hu (eds.), Quantum Classical Correspondence: Proceedings of the 4th Drexel Symposium on Quantum Nonintegrability, Philadelphia, PA, USA, September 8–11, 1994, Cambridge, Mass.: International Press, pp. 51–68 [preprint available online].
  • Pearle, P., and Squires, E., 1994, ‘Bound-State Excitation, Nucleon Decay Experiments, and Models of Wave-Function Collapse’, Physical Review Letters, 73: 1–5 [preprint available online].
  • Ridderbos, K., 1999, ‘The Loss of Coherence in Quantum Cosmology’, Studies in History and Philosophy of Modern Physics, 30 B: 41–60.
  • Romano, D., 2016, The Emergence of the Classical World from a Bohmian Universe, PhD Thesis, Université de Lausanne.
  • Rosaler, J., 2015, ‘Is de Broglie–Bohm Theory Specially Equipped to Recover Classical Behavior?’, Philosophy of Science, 82(5): 1175–1187 [preprint available online].
  • –––, 2016, ‘Interpretation Neutrality in the Classical Domain of Quantum Theory’, Studies in History and Philosophy of Modern Physics, 53: 54–72 [preprint available online].
  • Sanz, A. S., and Borondo, F., 2009, ‘Contextuality, Decoherence and Quantum Trajectories’, Chemical Physics Letters, 478: 301–306 [preprint available online].
  • Saunders, S., 1993, ‘Decoherence, Relative States, and Evolutionary Adaptation’, Foundations of Physics, 23: 1553–1585.
  • –––, 1999, ‘The “Beables” of Relativistic Pilot-Wave Theory’, in J. Butterfield and C. Pagonis (eds.), From Physics to Philosophy, Cambridge: Cambridge University Press, pp. 71–89.
  • Saunders, S., Barrett, J., Kent, A., and Wallace, D. (eds), 2010, Many Worlds? Everett, Quantum Theory, & Reality, Oxford: Oxford University Press.
  • Schlosshauer, M., 2007, Decoherence and the Quantum-to-Classical Transition, Berlin: Springer.
  • –––, 2008, ‘Classicality, the Ensemble Interpretation, and Decoherence: Resolving the Hyperion Dispute’, Foundations of Physics, 38(9): 796–803 [reprint available online].
  • Shimony, A., 1989, ‘Search for a Worldview which can Accommodate our Knowledge of Microphysics’, in J. T. Cushing and E. McMullin (eds), Philosophical Consequences of Quantum Theory, Notre Dame, Indiana: University of Notre Dame Press. Reprinted in A. Shimony, Search for a Naturalistic Worldview, Vol. 1, Cambridge: Cambridge University Press (1993), pp. 62–76.
  • Spekkens, R. W., and Sipe, J. E., 2001, ‘A Modal Interpretation of Quantum Mechanics based on a Principle of Entropy Minimization’, Foundations of Physics, 31: 1431–1464.
  • Struyve, W., 2011, ‘Pilot-wave Approaches to Quantum Field Theory’, Journal of Physics: Conference Series, 306: 012047/1–10 [preprint available online].
  • Struyve, W., and Westman, H., 2007, ‘A Minimalist Pilot-wave Model for Quantum Electrodynamics’, Proceedings of the Royal Society of London, A 463: 3115–3129 [preprint available online].
  • Tegmark, M., 1993, ‘Apparent Wave Function Collapse Caused by Scattering’, Foundations of Physics Letters, 6: 571–590 [preprint available online].
  • Toroš, M., Donadi, S. and Bassi, A., 2016, ‘Bohmian Mechanics, Collapse Models and the Emergence of Classicality’, Journal of Physics A: Mathematical and Theoretical, 49(35): 355302 [preprint available online].
  • Wallace, D., 2003, ‘Everett and Structure’, Studies in History and Philosophy of Modern Physics, 34 B: 87–105 [preprint available online].
  • –––, 2008, ‘Philosophy of Quantum Mechanics’, in D. Rickles (ed.), The Ashgate Companion to Contemporary Philosophy of Physics, Aldershot: Ashgate, pp. 16–98 [preliminary version available online as ‘The Quantum Measurement Problem: State of Play’, December 2007].
  • –––, 2012a, The Emergent Multiverse: Quantum Theory According to the Everett Interpretation, Oxford: Oxford University Press.
  • –––, 2012b, ‘Decoherence and its Role in the Modern Measurement Problem’, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370(1975): 4576–4593 [preprint available online].
  • Wallstrom, T. C., 1994, ‘Inequivalence between the Schrödinger Equation and the Madelung Hydrodynamic Equations’, Physical Review A, 49(3): 1613–1617 [reprint available online].
  • Wheeler, J. A., 1978, ‘The “past” and the “delayed-choice” double-slit experiment’, in A. R. Marlow (ed.), Mathematical Foundations of Quantum Theory, New York: Academic Press, pp. 9–48.
  • Wheeler, J. A., and Zurek, W. H. (eds), 1983, Quantum Theory and Measurement, Princeton: Princeton University Press.
  • Wightman, A. S., 1995, ‘Superselection Rules: Old and New’, Il Nuovo Cimento, 110 B: 751–769.
  • Zeh, H.-D., 1970, ‘On the Interpretation of Measurement in Quantum Theory’, Foundations of Physics, 1: 69–76. Reprinted in Wheeler and Zurek (1983), pp. 342–349.
  • –––, 1973, ‘Toward a Quantum Theory of Observation’, Foundations of Physics, 3: 109–116 [corrected version available online, June 2003].
  • –––, 2000, ‘The Problem of Conscious Observation in Quantum Mechanical Description’, Foundations of Physics Letters, 13: 221–233 [preprint available online].
  • –––, 2001, The Physical Basis of the Direction of Time, Berlin: Springer, 4th edition.
  • –––, 2003a, ‘Basic Concepts and Their Interpretation’, in Joos et al. (2003), pp. 7–40 [page numbers refer to the preprint available online].
  • –––, 2003b, ‘There is no “First” Quantization’, Physics Letters, A 309: 329–334 [preprint available online].
  • Zurek, W. H., 1981, ‘Pointer Basis of Quantum Apparatus: Into what Mixture does the Wave Packet Collapse?’, Physical Review, D 24: 1516–1525.
  • –––, 1982, ‘Environment-Induced Superselection Rules’, Physical Review, D 26: 1862–1880.
  • –––, 1991, ‘Decoherence and the Transition from Quantum to Classical’, Physics Today, 44 (October): 36–44 [abstract and updated version available online as ‘Decoherence and the Transition from Quantum to Classical – Revisited’, June 2003].
  • –––, 1993, ‘Negotiating the Tricky Border Between Quantum and Classical’, Physics Today, 46 (April): 84–90.
  • –––, 1998, ‘Decoherence, Einselection, and the Existential Interpretation (The Rough Guide)’, Philosophical Transactions of the Royal Society of London, A 356: 1793–1820 [preprint available online].
  • –––, 2003, ‘Decoherence, Einselection, and the Quantum Origins of the Classical’, Reviews of Modern Physics, 75: 715–775 [page numbers refer to the preprint available online].
  • Zurek, W. H., and Paz, J.-P., 1994, ‘Decoherence, Chaos, and the Second Law’, Physical Review Letters, 72: 2508–2511 [preprint available online].

Other Internet Resources

  • Bacciagaluppi, G. (Utrecht University), 2013, ‘Review of: The Everett Interpretation of Quantum Mechanics. Collected Works 1955–1980 with Commentary. Hugh Everett III, edited by Jeffrey A. Barrett & Peter Byrne. Princeton: Princeton University Press (xii+389 pp.)’, available online in the Pittsburgh Phil-Sci Archive.
  • Crull, E. (City College, New York), and Bacciagaluppi, G. (Utrecht University), 2011, ‘Translation of W. Heisenberg: “Ist eine deterministische Ergänzung der Quantenmechanik möglich?”’, available online in the Pittsburgh Phil-Sci Archive.
  • Diósi, L. (Wigner Centre, Budapest) 1994, ‘Critique of Weakly Decohering and Linearly Positive Histories’, available online in the arXiv.org e-Print archive.
  • Gell-Mann, M. (deceased), and Hartle, J. B. (Santa Fe Institute), 1994, ‘Equivalent Sets of Histories and Multiple Quasiclassical Realms’, available online in the arXiv.org e-Print archive.
  • Wallace, D. (University of Pittsburgh), 2001, ‘Implications of Quantum Theory in the Foundations of Statistical Mechanics’, available online in the Pittsburgh Phil-Sci Archive.
  • Wallace, D. (University of Pittsburgh), 2013a, ‘Probability in Physics: Stochastic, Statistical, Quantum’, available online in the Pittsburgh Phil-Sci Archive.
  • Wallace, D. (University of Pittsburgh), 2013b, ‘Inferential vs. Dynamical Conceptions of Physics’, available online in the Pittsburgh Phil-Sci Archive.
  • Decoherence Website, maintained by Erich Joos. This is a site with information, references and further links to people and institutes working on decoherence, especially in Germany and the rest of Europe.
  • Quantum Mechanics on the Large Scale, maintained by Philip Stamp (University of British Columbia). This page has links to the available talks from the Vancouver workshop mentioned in footnote 1; see especially the papers by Tony Leggett and by Philip Stamp.
  • A Many-Minds Interpretation Of Quantum Theory, maintained by Matthew Donald (University of Cambridge). This page contains details of Matthew Donald’s many-minds interpretation, as well as discussions of some of the books and papers quoted above (and others of interest). Follow also the link to the ‘Frequently Asked Questions’, some of which contain useful discussion of decoherence.
  • The arXiv.org e-Print archive, formerly the Los Alamos archive. This is the main physics preprint archive.
  • The Pittsburgh Phil-Sci Archive. This is the main philosophy of science preprint archive.

Acknowledgments

I wish to thank the many people discussions with whom have shaped my understanding of decoherence over the years, in particular Marcus Appleby, Elise Crull, Matthew Donald, Beatrice Filkin, Meir Hemmo, Wayne Myrvold, Josh Rosaler, Simon Saunders, Max Schlosshauer, David Wallace, and Wojtek Zurek. For more recent discussions, correspondence and suggestions relating to this article I wish to thank Valia Allori, Maaneli Derakhshani, Bob Griffiths, Ronnie Hermens, Peter Holland, Martin Jones, Brian Josephson, Tony Leggett, Hans Primas, Alberto Rimini, Philip Stamp, and Bill Unruh. I also gratefully acknowledge my debt to Steve Savitt and Philip Stamp for an invitation to talk at the University of British Columbia, to Claudius Gros for an invitation to talk at the University of the Saarland, and for the opportunities for discussion arising from those talks. Finally I wish to thank the following: the referee for this entry, David Wallace, for his clear and constructive commentary; my former subject editor, John Norton, who corresponded with me extensively over a previous version of part of the material and whose suggestions I took to heart; my current subject editor, Wayne Myrvold; our editor-in-chief, Ed Zalta, and all his staff, for their saintly patience; and my late and lamented friend, Rob Clifton, who invited me to write on this topic in the first place.

Copyright © 2020 by
Guido Bacciagaluppi <g.bacciagaluppi@uu.nl>
