Quantum Theory and Mathematical Rigor

First published Tue Jul 27, 2004; substantive revision Fri Mar 1, 2024

An ongoing debate in the foundations of quantum physics concerns the role of mathematical rigor. The contrasting views of von Neumann and Dirac provide interesting and informative insights concerning two sides of this debate. Von Neumann’s contributions often emphasize mathematical rigor and Dirac’s contributions emphasize pragmatic concerns. The discussion below begins with an assessment of their contributions to the foundations of quantum mechanics. Their contributions to mathematical physics beyond quantum mechanics are then considered, and the focus will be on the influence that these contributions had on subsequent developments in quantum theorizing, particularly with regard to quantum field theory and its foundations. The entry quantum field theory provides an overview of a variety of approaches to developing a quantum theory of fields. The purpose of this article is to provide a more detailed discussion of mathematically rigorous approaches to quantum field theory, as opposed to conventional approaches, such as Lagrangian quantum field theory, which are generally portrayed as being more heuristic in character. The current debate concerning whether Lagrangian quantum field theory or axiomatic quantum field theory should serve as the basis for interpretive analysis is then discussed.

1. Introduction

There are two competing mathematical strategies that are used in connection with physical theory; one emphasizes rigor and the other pragmatics. The pragmatic approach often compromises mathematical rigor, but offers instead expediency of calculation and elegance of expression. A case in point is the notion of an infinitesimal, a non-zero quantity that is smaller than any finite quantity. Infinitesimals were used by Kepler, Galileo, Newton, Leibniz and many others in developing and using their respective physical theories, despite lacking a mathematically rigorous foundation, as Berkeley clearly showed in his famous 1734 treatise The Analyst criticizing infinitesimals. Such criticisms did not prevent various 18th-century mathematicians, scientists, and engineers such as Euler and Lagrange from using infinitesimals to get accurate answers from their calculations. Nevertheless, the pull towards rigor led to the development in the 19th century of the concept of a limit by Cauchy and others, which provided a rigorous mathematical framework that effectively replaced the theory of infinitesimals. A rigorous foundation was eventually provided for infinitesimals by Robinson during the second half of the 20th century, but infinitesimals are rarely used in contemporary physics. For more on the history of infinitesimals, see the entry on continuity and infinitesimals.

The competing mathematical strategies are manifest in a more recent discussion concerning the mathematical foundations of quantum mechanics. In the preface to von Neumann’s (1955) treatise on that topic, he notes that Dirac provides a very elegant and powerful formal framework for quantum mechanics, but complains about the central role in that framework of an “improper function with self-contradictory properties,” which he also characterizes as a “mathematical fiction.” He is referring to the Dirac \(\delta\) function, which has the following incompatible properties: it is defined over the real line, is zero everywhere except for one point at which it is infinite, and yields unity when integrated over the real line. Von Neumann promotes an alternative framework, which he characterizes as being “just as clear and unified, but without mathematical objections.” He emphasizes that his framework is not merely a refinement of Dirac’s; rather, it is a radically different framework that is based on Hilbert’s theory of operators.
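
Dirac’s rules can be illustrated numerically (a sketch assuming NumPy; the helper names are illustrative, not part of the historical debate): replace \(\delta\) by a “nascent” delta, a normalized Gaussian of width \(\varepsilon\), and keep it under an integral sign, as Dirac prescribes. The sifting property \(\int \delta(x)f(x)\,dx = f(0)\) then emerges in the limit \(\varepsilon \to 0\).

```python
import numpy as np

def nascent_delta(x, eps):
    """Normalized Gaussian of width eps: integrates to 1, peaks at x = 0."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

def sift(f, eps, lo=-10.0, hi=10.0, num=200_001):
    """Riemann-sum approximation of the integral of nascent_delta(x) * f(x)."""
    x = np.linspace(lo, hi, num)
    dx = x[1] - x[0]
    return np.sum(nascent_delta(x, eps) * f(x)) * dx

# As eps shrinks, the integral approaches f(0) = cos(0) = 1,
# exactly as the sifting rule for the delta function prescribes.
for eps in (1.0, 0.1, 0.01):
    print(eps, sift(np.cos, eps))
```

The Gaussian family is only one of many “nascent delta” sequences; what makes the limiting object itself rigorous is Schwartz’s theory of distributions, discussed below.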

Dirac is of course fully aware that the \(\delta\) function is not a well-defined expression. But he is not troubled by this for two reasons. First, as long as one follows the rules governing the \(\delta\) function (such as using the \(\delta\) function only under an integral sign, meaning in part not asking the value of a \(\delta\) function at a given point), then no inconsistencies will arise. Second, the \(\delta\) function can be eliminated, meaning that it can be replaced with a well-defined mathematical expression. However, the drawback in that case is, according to Dirac, that the substitution leads to a more cumbersome expression that obscures the argument. In short, when pragmatics and rigor lead to the same conclusion, pragmatics trumps rigor due to the resulting simplicity, efficiency, and increase in understanding.

As in the case of the notion of an infinitesimal, the Dirac \(\delta\) function was eventually given a mathematically rigorous foundation. That was done within Schwartz’s theory of distributions, which was later used in developing the notion of a rigged Hilbert space. The theory of distributions was used to provide a mathematical framework for quantum field theory (Wightman 1964). The rigged Hilbert space was used to do so for quantum mechanics (Böhm 1966) and then for quantum field theory (Bogoliubov et al. 1975).

The complementary approaches, rigor and pragmatics, which are exhibited in the development of quantum mechanics, later came about in a more striking way in connection with the development of quantum electrodynamics (QED) and, more generally, quantum field theory (QFT). The emphasis on rigor emerges in connection with two frameworks, algebraic QFT and Wightman’s axiomatic QFT. Algebraic QFT has its roots in the work of von Neumann on operator algebras, which was developed by him in an attempt to generalize the Hilbert space framework. Wightman’s axiomatic QFT has its roots in Schwartz’s theory of distributions, and it was later developed in the rigged Hilbert space framework. Roughly, the basic distinction between the two approaches is that the algebra of operators is the basic mathematical concept in algebraic QFT, while operator-valued distributions (the quantum analogues of field quantities) are fundamental in Wightman’s axiomatic QFT. It is worth noting that algebraic QFT is generally formulated axiomatically, and that it is just as deserving of the name “axiomatic” QFT. However, that term is often taken to refer specifically to the approach based on operator-valued distributions. To avoid any possible confusion, that approach is referred to here as “Wightman’s axiomatic” QFT. The emphasis on pragmatics arises most notably in Lagrangian QFT, which uses perturbation theory, path integrals, and renormalization techniques. Although some elements of the theory were eventually placed on a firmer mathematical foundation, there are still serious questions about its being a fully rigorous approach on a par with algebraic and Wightman’s axiomatic QFT. Nevertheless, it has been spectacularly successful in providing numerical results that are exceptionally accurate with respect to experimentally determined quantities, and in making possible expedient calculations that are unrivaled by other approaches.

The two approaches to QFT continue to develop in parallel. Fleming (2002, pp. 135–136) brings this into focus in his discussion of differences between Haag’s Local Quantum Physics (1996) and Weinberg’s Quantum Field Theory (1995); Haag’s book presents algebraic QFT, and Weinberg’s book presents Lagrangian QFT. While both books are ostensibly about the same subject, Haag gives a precise formulation of QFT and its mathematical structure, but does not provide any techniques for connecting with experimentally determined quantities, such as scattering cross sections. Weinberg gives a pragmatic formulation that engages with physical intuition and provides heuristics that are important for performing calculations; however, it is not as mathematically rigorous. Moreover, there are a number of important topics that are examined in one book while not even mentioned in the other. For example, unitarily inequivalent representations are discussed by Haag, but not by Weinberg. By contrast, Weinberg discusses Feynman’s rules for path integrals, which are not mentioned at all by Haag. There is also the issue of demographics. Most particle and experimental physicists will read and study Weinberg’s book, but very few will read Haag’s book. Because of these differences, Fleming (2002, p. 136) suggests that one might question whether the two books are really about the same subject. This gives rise to the question whether any formulation of QFT is worthy of philosophical attention to its foundations. In particular, there is a debate between Wallace (2006, 2011) and Fraser (2009, 2011) over whether an interpretation of QFT should be based on the standard textbook treatment of QFT or an axiomatic formulation of QFT.

2. Von Neumann and the Foundations of Quantum Theory

In the late 1920s, von Neumann developed the separable Hilbert space formulation of quantum mechanics, which later became the definitive one (from the standpoint of mathematical rigor, at least). In the mid-1930s, he worked extensively on lattice theory (see the entry on quantum logic), rings of operators, and continuous geometries. Part of his expressed motivation for developing these mathematical theories was to develop an appropriate framework for QFT and a better foundation for quantum mechanics. During this time, he noted two closely related structures, modular lattices and finite type-II factors (a special type of ring of operators), that have what he regarded as desirable features for quantum theory. These observations led to his developing a more general framework, continuous geometries, for quantum theory. Matters did not work out as von Neumann had expected. He soon realized that such geometries must have a transition probability function, if they are to be used to describe quantum mechanical phenomena, and that the resulting structure is not a generalization at all beyond the operator rings that were already available. Moreover, it was determined much later that the type-III factors are the most important type of ring of operators for quantum theory. In addition, a similar verdict was delivered much later with regard to his expectations concerning lattice theory. The lattices that are appropriate for quantum theory are orthomodular – any modular lattice (with an orthocomplementation) is orthomodular, but the converse is false. Of the three mathematical theories, it is the rings of operators that have proven to be the most important framework for quantum theory. It is possible to use a ring of operators to model key features of physical systems in a purely abstract, algebraic setting (this is discussed in section 4.1).
A related issue concerns whether it is necessary to choose a representation of the ring in a Hilbert space; see Haag and Kastler (1964), Ruetsche (2003), and Kronz and Lupher (2005) for further discussion of this issue. In any case, the separable Hilbert space remains a crucial framework for quantum theory. The simplest examples of separable Hilbert spaces are the finite dimensional ones, in which case the algebra of operators is a type-I\(_n\) factor (n is a positive integer). The operators are n-by-n complex matrices, which are typically used to describe internal degrees of freedom such as spin. Readers wanting to familiarize themselves with these basic examples should consult the entry on quantum mechanics.

2.1 The Separable Hilbert Space Formulation of Quantum Mechanics

Matrix mechanics and wave mechanics were formulated at roughly the same time, between 1925 and 1926. In July 1925, Heisenberg finished his seminal paper “On a Quantum Theoretical Interpretation of Kinematical and Mechanical Relations”. Two months later, Born and Jordan finished their paper, “On Quantum Mechanics”, which is the first rigorous formulation of matrix mechanics. Two months after this, Born, Heisenberg, and Jordan finished “On Quantum Mechanics II”, which is an elaboration of the earlier Born and Jordan paper; it was published in early 1926. These three papers are reprinted in van der Waerden (1967). Meanwhile, Schrödinger was working on what eventually became his four famous papers on wave mechanics. The first was received by Annalen der Physik in January 1926, the second was received in February, and then the third in May and the fourth in June. All four are reprinted in Schrödinger (1928).

Schrödinger was the first to raise the question of the relationship between matrix mechanics and wave mechanics in Schrödinger (1926), which was published in Annalen in spring 1926 between the publication of his second and third papers of the famous four. This paper is also reprinted in Schrödinger (1928). It contains the germ of a mathematical equivalence proof, but it does not contain a rigorous proof of equivalence: the mathematical framework that Schrödinger associated with wave mechanics is a space of continuous and normalizable functions, which is too small to establish the appropriate relation with matrix mechanics. Shortly thereafter, Dirac and Jordan independently provided a unification of the two frameworks. But their respective approaches required essential use of \(\delta\) functions, which were suspect from the standpoint of mathematical rigor. In 1927, von Neumann published three papers in Göttinger Nachrichten that placed quantum mechanics on a rigorous mathematical foundation and included a rigorous proof (i.e., without the use of \(\delta\) functions) of the equivalence of matrix and wave mechanics. These papers are reprinted in von Neumann (1961–1963, Volume I, Numbers 8–10). In the preface to his famous 1932 treatise on quantum mechanics (von Neumann 1955), which is an elegant summary of the separable Hilbert space formulation of quantum mechanics that he provided in the earlier papers, he acknowledges the simplicity and utility of Dirac’s formulation of quantum mechanics, but finds it ultimately unacceptable. He indicates that he cannot endure the use of what could then only be regarded as mathematical fictions. Examples of these fictions include Dirac’s assumption that every self-adjoint operator can be put in diagonal form and his use of \(\delta\) functions, which von Neumann characterizes as “improper functions with self-contradictory properties”.
His stated purpose is to formulate a framework for quantum mechanics that is mathematically rigorous.

What follows is a brief sketch of von Neumann’s strategy. First, he recognized the mathematical framework of matrix mechanics as what would now be characterized as an infinite dimensional, separable Hilbert space. Here the term “Hilbert space” denotes a vector space with an inner product that is complete with respect to the norm induced by that inner product; von Neumann imposed the additional requirement of separability (having a countable basis) in his definition of a Hilbert space. He then attempted to specify a set of functions that would instantiate an (infinite-dimensional) separable Hilbert space and could be identified with Schrödinger’s wave mechanics. He began with the space of square-integrable functions on the real line. To satisfy the completeness condition, that all Cauchy sequences of functions converge (in the mean) to some function in that space, he specified that integration must be defined in the manner of Lebesgue. To define an inner product operation, he specified that the set of Lebesgue square-integrable functions must be partitioned into equivalence classes modulo the relation of differing on a set of measure zero. That the elements of the space are equivalence classes of functions rather than functions is sometimes overlooked, and it has interesting ramifications for interpretive investigations. It has been argued in Kronz (1999), for example, that separable Hilbert space is not a suitable framework for quantum mechanics under Bohm’s ontological interpretation (also known as Bohmian mechanics).
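
The role of the measure-zero equivalence classes can be made explicit with a one-line computation. Let \(g\) be the characteristic function of \(\{0\}\) and \(f\) the zero function; then \(f \neq g\) as functions, yet

```latex
\[
  \lVert f - g \rVert^{2}
  \;=\; \int_{\mathbb{R}} \lvert f(x) - g(x) \rvert^{2} \, d\mu(x)
  \;=\; \mu(\{0\})
  \;=\; 0 ,
\]
```

so the inner product fails to be positive definite on functions themselves. Identifying functions that agree almost everywhere restores definiteness, and the resulting quotient space \(L^2(\mathbb{R})\) is a separable Hilbert space.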

2.2 Rings of Operators, Quantum Logics, and Continuous Geometries

In a letter to Birkhoff from 1935, von Neumann says: “I would like to make a confession which may seem immoral: I do not believe in Hilbert space anymore”; the letter is published in von Neumann (2005). The confession is indeed startling since it comes from the champion of the separable Hilbert space formulation of quantum mechanics and it is issued just three years after the publication of his famous treatise, the definitive work on the subject. The irony is compounded by the fact that less than two years after his confession to Birkhoff, his mathematical theorizing about the abstract mathematical structure that was to supersede the separable Hilbert space, continuous geometries with a transition probability, turned out not to provide a generalization of the separable Hilbert space framework. It is compounded again with interest in that subsequent developments in mathematical physics initiated and developed by von Neumann ultimately served to strengthen the entrenchment of the separable Hilbert space framework in mathematical physics (especially with regards to quantum theory). These matters are explained in more detail in Section 4.1.

Three theoretical developments come together for von Neumann in his theory of continuous geometries during the seven years following 1932: the algebraic approach to quantum mechanics, quantum logics, and rings of operators. By 1934, von Neumann had already made substantial moves towards an algebraic approach to quantum mechanics with the help of Jordan and Wigner – their article, “On an Algebraic Generalization of the Quantum Mechanical Formalism”, is reprinted in von Neumann (1961–1963, Vol. II, No. 21). In 1936, he published a second paper on this topic, “On an Algebraic Generalization of the Quantum Mechanical Formalism (Part I)”, which is reprinted in von Neumann (1961–1963, Vol. III, No. 9). Neither work was particularly influential, as it turns out. A related paper by von Neumann and Birkhoff, “The Logic of Quantum Mechanics”, was also published in 1936, and it is reprinted in von Neumann (1961–1963, Vol. IV, No. 7). It was seminal to the development of a sizeable body of literature on quantum logics. It should be noted, however, that this happens only after modularity, a key postulate for von Neumann, is replaced with orthomodularity (a weaker condition). The nature of the shift is clearly explained in Holland (1970): modularity is in effect a weakening of the distributive laws (limiting their validity to certain selected triples of lattice elements), and orthomodularity is a weakening of modularity (limiting the validity of the distributive laws to an even smaller set of triples of lattice elements). The shift from modularity to orthomodularity was first made in Loomis (1955). Rapid growth of literature on orthomodular lattices and the foundations of quantum mechanics soon followed. For example, see Pavičić (1992) for a fairly exhaustive bibliography of quantum logic up to 1990, which has over 1800 entries.
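
The hierarchy that Holland describes can be stated compactly. For elements \(a, b, c\) of a lattice with orthocomplementation \(\perp\):

```latex
\begin{align*}
  &\text{distributivity:}  && a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c)
                              \quad \text{for all } a, b, c;\\
  &\text{modularity:}      && a \le c \;\Rightarrow\; a \vee (b \wedge c) = (a \vee b) \wedge c;\\
  &\text{orthomodularity:} && a \le b \;\Rightarrow\; b = a \vee (a^{\perp} \wedge b).
\end{align*}
```

The orthomodular law is precisely the modular law specialized to the triple with \(b = a^{\perp}\) (since \(a \vee a^{\perp}\) is the unit of the lattice), which makes vivid the sense in which each condition restricts the one above it to fewer triples.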

Of substantially greater note for the foundations of quantum theory are six papers by von Neumann (three jointly published with Murray) on rings of operators, which are reprinted in von Neumann (1961–1963, Vol. III, Nos 2–7). The first two, “On Rings of Operators” and a sequel “On Rings of Operators II”, were published in 1936 and 1937, and they were seminal to the development of the other four. The third, “On Rings of Operators: Reduction Theory”, was written during 1937–1938 but not published until 1949. The fourth, “On Infinite Direct Products”, was published in 1938. The remaining two, “On Rings of Operators III” and “On Rings of Operators IV” were published in 1941 and 1943, respectively. This massive work on rings of operators was very influential and continues to have an impact in pure mathematics, mathematical physics, and the foundations of physics. Rings of operators are now referred to as “von Neumann algebras” following Dixmier (1981), who first referred to them by this name (stating that he did so following a suggestion made to him by Dieudonné) in the introduction to his 1957 treatise on operator algebras (Dixmier 1981).

A von Neumann algebra is a \(*\)-subalgebra of the set of bounded operators B(H) on a Hilbert space H that is closed in the weak operator topology. It is usually assumed that the von Neumann algebra contains the identity operator. A \(*\)-subalgebra contains the adjoint of every operator in the algebra, where the “\(*\)” denotes the adjoint. There are special types of von Neumann algebras that are called “factors”. A von Neumann algebra is a factor if its center (which is the set of elements that commute with all elements of the algebra) is trivial, meaning that it only contains scalar multiples of the identity element. Moreover, von Neumann showed in his reduction-theory paper that all von Neumann algebras that are not factors can be decomposed as a direct sum (or integral) of factors. There are three mutually exclusive and exhaustive factor types: type-I, type-II, and type-III. Each type has been classified into (mutually exclusive and exhaustive) sub-types: types I\(_n\) \((n = 1,2,\ldots ,\infty),\) II\(_n\) \((n = 1,\infty),\) III\(_z\) \((0\le z\le 1).\) As mentioned above, the type-I\(_n\) factors correspond to finite dimensional Hilbert spaces, while type-I\(_{\infty}\) corresponds to the infinite dimensional separable Hilbert space that provides the rigorous framework for wave and matrix mechanics. Von Neumann and Murray distinguished the subtypes for type-I and type-II, but were not able to do so for the type-III factors. Subtypes were not distinguished for these factors until the 1960s and 1970s – see Chapter 3 of Sunder (1987) or Chapter 5 of Connes (1994) for details.
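
The definition of a factor can be checked by direct computation in the finite type-I case (a numerical sketch assuming NumPy; the helper name is mine): the commutant of a set of matrices is the null space of the stacked linear maps \(X \mapsto XA - AX\), and for the full matrix algebra acting on \(\mathbb{C}^n\) that null space is one-dimensional, i.e. the center is trivial.

```python
import numpy as np

def commutant_dimension(generators, tol=1e-10):
    """Dimension of {X : XA = AX for every A in generators}.

    Uses vec(XA - AX) = (A^T kron I - I kron A) vec(X), with the
    column-stacking vec convention, and counts small singular values.
    """
    n = generators[0].shape[0]
    I = np.eye(n)
    M = np.vstack([np.kron(A.T, I) - np.kron(I, A) for A in generators])
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s < tol))

n = 3
basis = np.eye(n)
# The matrix units E_ij span the full algebra of operators on C^n (type I_n).
matrix_units = [np.outer(basis[i], basis[j]) for i in range(n) for j in range(n)]
# Only scalar multiples of I commute with everything: the center is trivial.
print(commutant_dimension(matrix_units))   # 1

# Contrast: the abelian algebra spanned by the diagonal units has the full
# diagonal algebra (dimension n) as its commutant, so its center is the whole
# algebra and it is not a factor.
diagonal_units = [np.diag(basis[i]) for i in range(n)]
print(commutant_dimension(diagonal_units))  # 3
```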

As a result of his earlier work on the foundations of quantum mechanics and his work on quantum logic with Birkhoff, von Neumann came to regard the type-II\(_1\) factors as likely to be the most relevant for physics. This is a substantial shift since the most important class of algebra of observables for quantum mechanics was thought at the time to be the set of bounded operators on an infinite-dimensional separable Hilbert space, which is a type-I\(_{\infty}\) factor. A brief explanation for this shift is provided below. See the well-informed and lucid account presented in Rédei (1998) for a much fuller discussion of von Neumann’s views on fundamental connections between quantum logic, rings of operators (particularly type-II\(_1\) factors), foundations of probability theory, and quantum physics. It is worth noting that von Neumann regarded the type-III factors as a catch-all class for the “pathological” operator algebras; indeed, it took several years after the classificatory scheme was introduced to demonstrate the existence of such factors. It is ironic that the predominant view now seems to be that the type-III factors are the most relevant class for physics (particularly for QFT and quantum statistical mechanics). This point is elaborated further in Section 4.1 after explaining below why von Neumann’s program never came to fruition.

In the introduction to the first paper in the series of four entitled “On Rings of Operators”, Murray and von Neumann list two reasons why they are dissatisfied with the separable Hilbert space formulation of quantum mechanics. One has to do with a property of the trace operation, which is the operation appearing in the definition of the probabilities for measurement results (the Born rule), and the other with domain problems that arise for unbounded observable operators. The trace of the identity is infinite when the separable Hilbert space is infinite-dimensional, which means that it is not possible to define a correctly normalized a priori probability for the outcome of an experiment (i.e., a measurement of an observable). By definition, the a priori probability for an experiment is that in which any two distinct outcomes are equally likely. Thus, the probability must be zero for each distinct outcome when there is an infinite number of such outcomes, which can occur if and only if the space is infinite dimensional. It is not clear why von Neumann believed that it is necessary to have an a priori probability for every experiment, especially since von Mises clearly believed that a priori probabilities (“uniform distributions” in his terminology) do not always exist (von Mises 1981, pp. 68 ff.) and von Neumann was influenced substantially by von Mises on the foundations of probability (von Neumann 1955, p. 198 fn.). Later, von Neumann changed the basis for his expressed reason for dissatisfaction with infinite dimensional Hilbert spaces from probabilistic to algebraic considerations (Birkhoff and von Neumann 1936, p. 118); namely, that it violates Hankel’s principle of the preservation of formal law, which leads one to try to preserve modularity – a condition that holds in finite-dimensional Hilbert spaces but not in infinite-dimensional Hilbert spaces. 
The problem with unbounded operators arises from their being defined only on a dense subset of the elements of the space. This means that the algebraic operations on unbounded operators (sums and products) cannot be defined in general; for example, two unbounded operators \(A\), \(B\) may be such that no non-zero vector in the range of \(B\) lies in the domain of \(A\), in which case the product \(AB\) is meaningless.
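
The domain problem is easy to exhibit for the position operator \(Q\) on \(L^2(\mathbb{R})\), \((Qf)(x) = xf(x)\): there are square-integrable \(f\) for which \(xf(x)\) is not square-integrable. A numerical sketch (assuming NumPy; the particular function \(f\) is my own illustrative choice):

```python
import numpy as np

def l2_norm_sq(g, R, num=400_001):
    """Riemann-sum approximation of the squared L^2 norm of g over [-R, R]."""
    x = np.linspace(-R, R, num)
    dx = x[1] - x[0]
    return np.sum(g(x)**2) * dx

f = lambda x: 1.0 / (1.0 + np.abs(x))   # f^2 ~ 1/x^2 at infinity: integrable
Qf = lambda x: x * f(x)                 # (Qf)^2 -> 1 at infinity: not integrable

# ||f||^2 over [-R, R] converges (to 2) as R grows, while ||Qf||^2 grows
# roughly like 2R without bound: f lies in L^2 but outside the domain of Q.
for R in (10.0, 100.0, 1000.0):
    print(R, l2_norm_sq(f, R), l2_norm_sq(Qf, R))
```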

The problems mentioned above do not arise for type-I\(_n\) factors, if \(n\lt \infty\), nor do they arise for type-II\(_1\). That is to say, these factor types have a finite trace operation and are not plagued with the domain problems of unbounded operators. Particularly noteworthy is that the lattice of projections of each of these factor types (type-I\(_n\) for \(n\lt \infty\) and type-II\(_1)\) is modular. By contrast, the set of bounded operators on an infinite-dimensional separable Hilbert space, a type-I\(_{\infty}\) factor, is not modular; rather, it is only orthomodular. These considerations serve to explain why von Neumann regarded the type-II\(_1\) factor as the proper generalization of the type-I\(_n\) \((n\lt \infty)\) for quantum physics rather than the type-I\(_{\infty}\) factors. The shift in the literature from modular to orthomodular lattices that was characterized above is in effect a shift back to von Neumann’s earlier position (prior to his confession). But, as was already mentioned, it now seems that this was not the best move either.
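
The finite trace at issue can be written down explicitly. For a type-I\(_n\) factor with \(n \lt \infty\), the normalized trace

```latex
\[
  \tau(A) \;=\; \tfrac{1}{n}\operatorname{Tr}(A),
  \qquad \tau(I) = 1,
  \qquad \tau(P) = \tfrac{1}{n}
  \ \text{for every rank-one projection } P,
\]
```

assigns each of the \(n\) outcomes of a nondegenerate measurement the equal a priori probability \(1/n\). For the type-I\(_{\infty}\) factor, \(\operatorname{Tr}(I) = \infty\) and no such normalization is possible; a type-II\(_1\) factor, by contrast, carries a unique normalized trace with \(\tau(I) = 1\) even though it is infinite-dimensional.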

It was von Neumann’s hope that his program for generalizing quantum theory would emerge from a new mathematical structure known as “continuous geometry”. He wanted to use this structure to bring together the three key elements that were mentioned above: the algebraic approach to quantum mechanics, quantum logics, and rings of operators. He sought to forge a strong conceptual link between these elements and thereby provide a proper foundation for generalizing quantum mechanics that does not make essential use of Hilbert space (unlike rings of operators). Unfortunately, it turns out that the class of continuous geometries is too broad for the purposes of axiomatizing quantum mechanics. The class must be suitably restricted to those having a transition probability. It turns out that there is then no substantial generalization beyond the separable Hilbert space framework. An unpublished manuscript that was finished by von Neumann in 1937 was prepared and edited by Israel Halperin, and then published as von Neumann (1981). A review of the manuscript by Halperin was published in von Neumann (1961–1963, Vol. IV, No. 16) years before the manuscript itself was published. In that review, Halperin notes the following:

The final result, after 200 pages of deep reasoning is (essentially): every such geometry with transition probability can be identified with the projection geometry of a finite factor in some finite or infinite dimensional Hilbert space (\(\text{I}_m\) or \(\text{II}_1)\). This result indicates that continuous geometries do not provide new useful mathematical descriptions of quantum mechanical phenomena beyond that already available from rings of operators.

This unfortunate development does not, however, completely undermine von Neumann’s efforts to generalize quantum mechanics. On the contrary, his work on rings of operators does shed significant light on the way forward. The upshot of subsequent developments is that von Neumann settled on the wrong factor type for the foundations of physics.

3. Dirac and the Foundations of Quantum Theory

Dirac’s formal framework for quantum mechanics was very useful and influential despite its lack of mathematical rigor. It was used extensively by physicists and it inspired some powerful mathematical developments in functional analysis. Eventually, mathematicians developed a suitable framework for placing Dirac’s formal framework on a firm mathematical foundation, which is known as a rigged Hilbert space (and is also referred to as a Gelfand Triplet). This came about as follows. A rigorous definition of the \(\delta\) function became possible in distribution theory, which was developed by Schwartz from the mid-1940s to the early 1950s. Distribution theory inspired Gelfand and collaborators during the mid-to-late 1950s to formulate the notion of a rigged Hilbert space, the firm foundation for Dirac’s formal framework. This development was facilitated by Grothendieck’s notion of a nuclear space, which he introduced in the mid-1950s. The rigged Hilbert space formulation of quantum mechanics was then developed independently by Böhm and by Roberts in 1966. Since then, it has been extended to a variety of different contexts in the quantum domain including decay phenomena and the arrow of time. The mathematical developments of Schwartz, Gelfand, and others had a substantial effect on QFT as well. Distribution theory was taken forward by Wightman in developing the axiomatic approach to QFT from the mid-1950s to the mid-1960s. In the late 1960s, the axiomatic approach was explicitly put into the rigged Hilbert space framework by Bogoliubov and co-workers.

Although these developments were only indirectly influenced by Dirac, by way of the mathematical developments that are associated with his formal approach to quantum mechanics, there are other elements of his work that had a more direct and very substantial impact on the development of QFT. In the 1930s, Dirac (1933) developed a Lagrangian formulation of quantum mechanics and applied it to quantum fields, and the latter inspired Feynman (1948) to develop the path-integral approach to QFT. The mathematical foundation for path-integral functionals is still lacking (Rivers 1987, pp. 109–134), though substantial progress has been made (DeWitt-Morette et al. 1979). Despite such shortcomings, it remains the most useful and influential approach to QFT to date. In the 1940s, Dirac (1943) developed a form of quantum electrodynamics that involved an indefinite metric – see also Pauli (1943) in that connection. This had a substantial influence on later developments, first in quantum electrodynamics in the early 1950s with the Gupta-Bleuler formalism, and in a variety of QFT models such as vector meson fields and quantum gravity fields by the late 1950s – see Chapter 2 of Nagy (1966) for examples and references.

3.1 Dirac’s \(\delta\) Function, Principles, and Bra-Ket Notation

Dirac’s attempt to prove the equivalence of matrix mechanics and wave mechanics made essential use of the \(\delta\) function, as indicated above. The \(\delta\) function was used by physicists before Dirac, but it became a standard tool in many areas of physics only after Dirac very effectively put it to use in quantum mechanics. It then became widely known by way of his textbook (Dirac 1930), which was based on a series of lectures on quantum mechanics given by Dirac at Cambridge University. This textbook saw three later editions: the second in 1935, the third in 1947, and the fourth in 1958. The fourth edition has been reprinted many times. Its staying power is due, in part, to another innovation that was introduced by Dirac in the third edition, his bra-ket formalism. He first published this formalism in (Dirac 1939), but the formalism did not become widely used until after the publication of the third edition of his book. There is no question that these tools, first the \(\delta\) function and then the bra-ket notation, were extremely effective for physicists practicing and teaching quantum mechanics both in setting up equations and in performing calculations. Most quantum mechanics textbooks use \(\delta\) functions and plane waves, which are key elements of Dirac’s formal framework, but they are not included in von Neumann’s rigorous mathematical framework for quantum mechanics. Working physicists as well as teachers and students of quantum mechanics often use Dirac’s framework because of its simplicity, elegance, power, and relative ease of use. Thus, from the standpoint of pragmatics, Dirac’s framework is much preferred over von Neumann’s. The notion of a rigged Hilbert space placed Dirac’s framework on a firm mathematical foundation.

3.2 The Rigged Hilbert Space Formulation of Quantum Mechanics

Mathematicians worked very hard to provide a rigorous foundation for Dirac’s formal framework. One key element was Schwartz’s (1945; 1950–1951) theory of distributions. Another key element, the notion of a nuclear space, was developed by Grothendieck (1955). This notion made possible the generalized-eigenvector decomposition theorem for self-adjoint operators in rigged Hilbert space – for the theorem see Gelfand and Vilenkin (1964, pp. 119–127), and for a brief historical account of the convoluted path leading to it see Berezanskii (1968, pp. 756–760). The decomposition theorem provides a rigorous way to handle observables such as position and momentum in the manner in which they are presented in Dirac’s formal framework. These mathematical developments culminated in the early 1960s with Gelfand and Vilenkin’s characterization of a structure that they referred to as a rigged Hilbert space (Gelfand and Vilenkin 1964, pp. 103–127). It is unfortunate that their chosen name for this mathematical structure is doubly misleading. First, there is a natural inclination to regard it as denoting a type of Hilbert space, one that is rigged in some sense, but this inclination must be resisted. Second, the term rigged has an unfortunate connotation of illegitimacy, as in the terms rigged election or rigged roulette table, and this connotation must be dismissed as prejudicial. There is nothing illegitimate about a rigged Hilbert space from the standpoint of mathematical rigor (or any other relevant standpoint). A more appropriate analogy may be drawn using the notion of a rigged ship: the term rigged in this context means fully equipped. But this analogy has its limitations since a rigged ship is a fully equipped ship, whereas (as the first point indicates) a rigged Hilbert space is not a Hilbert space, though it is generated from a Hilbert space in the manner now to be described.

A rigged Hilbert space is a dual pair of spaces \((\Phi , \Phi^x)\) that can be generated from a separable Hilbert space \(H\) using a sequence of norms (or semi-norms); the sequence of norms is generated using a nuclear operator (a good approximate meaning is an operator of trace-class, meaning that the trace of the modulus of the operator is finite). In the mathematical theory of topological vector spaces, the space \(\Phi\) is characterized in technical terms as a nuclear Fréchet space. To say that \(\Phi\) is a Fréchet space means that it is a complete metric space, and to say that it is nuclear means that it is the projective limit of a sequence of Hilbert spaces in which the associated topologies get rapidly finer with increasing \(n\) (i.e., the convergence conditions are increasingly strict); the term nuclear is used because the Hilbert-space topologies are generated using a nuclear operator. In distribution theory, the space \(\Phi\) is characterized as a test-function space, where a test function is thought of as a very well-behaved function (being continuous, \(n\)-times differentiable, having a bounded domain or at least dropping off exponentially beyond some finite range, etc.). \(\Phi^x\) is a space of distributions, and it is the topological dual of \(\Phi\), meaning that it corresponds to the complete space of continuous linear functionals on \(\Phi\). It is also the inductive limit of a sequence of Hilbert spaces in which the topologies get rapidly coarser with increasing \(n\). Because the elements of \(\Phi\) are so well-behaved, \(\Phi^x\) may contain elements that are not so well-behaved, some being singular or improper functions (such as Dirac’s \(\delta\) function). \(\Phi\) is the topological anti-dual of \(\Phi^x\), meaning that it is the complete set of continuous anti-linear functionals on \(\Phi^x\); the functionals are anti-linear rather than linear because multiplication by a scalar is defined in terms of the scalar’s complex conjugate.

It is worth noting that neither \(\Phi\) nor \(\Phi^x\) is a Hilbert space in that each lacks an inner product that induces a metric with respect to which the space is complete, though for each space there is a topology with respect to which the space is complete. Nevertheless, each of them is closely related to the Hilbert space \(H\) from which they are generated: \(\Phi\) is densely embedded in \(H\), which in turn is densely embedded in \(\Phi^x\). Two other points are worth noting. First, dual pairs of this sort can also be generated from a pre-Hilbert space, which is a space that has all the features of a Hilbert space except that it is not complete, and doing so has the distinct advantage of avoiding the partitioning of functions into equivalence classes (in the case of function spaces). The term rigged Hilbert space is typically used broadly to include dual pairs generated from either a Hilbert space or a pre-Hilbert space. Second, the term Gelfand triplet is sometimes used instead of the term rigged Hilbert space, though it refers to the ordered set \((\Phi , H , \Phi^x)\), where \(H\) is the Hilbert space used to generate \(\Phi\) and \(\Phi^x\).
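To fix ideas, the standard example of such a triplet in quantum mechanics takes \(\Phi\) to be the Schwartz space \(S\) of rapidly decreasing smooth functions on the real line:

\[ S \subset L^{2}(\mathbb{R}) \subset S^{x}, \]

where \(L^{2}(\mathbb{R})\) plays the role of the generating Hilbert space and \(S^{x}\), the space of tempered distributions, contains singular objects such as Dirac’s \(\delta\) function and the plane waves; each space is densely embedded in the next.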

The dual pair \((\Phi , \Phi^x)\) possesses the means to represent important operators for quantum mechanics that are problematic in a separable Hilbert space, particularly the unbounded operators that correspond to the observables position and momentum, and it does so in a particularly effective and unproblematic manner. As already noted, these operators have no eigenvalues or eigenvectors in a separable Hilbert space; moreover, they are only defined on a dense subset of the elements of the space and this leads to domain problems. These undesirable features also motivated von Neumann to seek an alternative to the separable Hilbert space framework for quantum mechanics, as noted above. In a rigged Hilbert space, the operators corresponding to position and momentum can have a complete set of eigenfunctionals (i.e., generalized eigenfunctions). The key result is known as the nuclear spectral theorem (and it is also known as the Gelfand-Maurin theorem). One version of the theorem says that if A is a symmetric linear operator defined on the space \(\Phi\) and it admits a self-adjoint extension to the Hilbert space H, then A possesses a complete system of eigenfunctionals belonging to the dual space \(\Phi^x\) (Gelfand and Shilov 1977, chapter 4). That is to say, provided that the stated condition is satisfied, A can be extended by duality to \(\Phi^x\), its extension \(A^x\) is continuous on \(\Phi^x\) (in the operator topology in \(\Phi^x)\), and \(A^x\) satisfies a completeness relation (meaning that it can be decomposed in terms of its eigenfunctionals and their associated eigenvalues). The duality formula for extending \(A\) to \(\Phi^x\) is \(\braket{\phi}{A^x\kappa} = \braket{A\phi}{\kappa}\), for all \(\phi \in \Phi\) and for all \(\kappa \in \Phi^x\). The completeness relation says that for all \(\phi ,\theta \in \Phi\):

\[ \braket{A\phi}{\theta} = \int_{v(A)} \lambda \braket{\phi}{\lambda} \braket{\lambda}{\theta}^* \mathrm{d}\mu(\lambda), \]

where \(v(A)\) is the set of all generalized eigenvalues of \(A^x\) (i.e., the set of all scalars \(\lambda\) for which there is a corresponding eigenfunctional \(\lambda \in \Phi^x\) such that \(\braket{\phi}{A^x\lambda} = \lambda \braket{\phi}{\lambda}\) for all \(\phi \in \Phi\)).

The rigged Hilbert space representation of these observables is about as close as one can get to Dirac’s elegant and extremely useful formal representation with the added feature of being placed within a mathematically rigorous framework. It should be noted, however, that there is a sense in which it is a proper generalization of Dirac’s framework. The rigging (based on the choice of a nuclear operator that determines the test function space) can result in different sets of generalized eigenvalues being associated with an operator. For example, the set of (generalized) eigenvalues for the momentum operator (in one dimension) corresponds to the real line, if the space of test functions is the set \(S\) of infinitely differentiable functions of \(x\) which together with all derivatives vanish faster than any inverse power of \(x\) as \(x\) goes to infinity, whereas its associated set of eigenvalues is the complex plane, if the space of test functions is the set \(D\) of infinitely differentiable functions with compact support (i.e., vanishing outside of a bounded region of the real line). If complex eigenvalues are not desired, then \(S\) would be a more appropriate choice than \(D\) – see Nagel (1989) for a brief discussion. But there are situations in which it is desirable for an operator to have complex eigenvalues. This is so, for example, when a system exhibits resonance scattering (a type of decay phenomenon), in which case one would like the Hamiltonian to have complex eigenvalues – see Böhm & Gadella (1989). (Of course, it is impossible for a self-adjoint operator to have complex eigenvalues in a Hilbert space.)
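As a concrete illustration (in units with \(\hbar = 1\)), consider the momentum operator \(P = -i\,d/dx\) on the test-function space \(S\). Each real number \(p\) labels a generalized eigenfunctional given by the plane wave:

\[ \braket{\phi}{p} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \overline{\phi(x)}\, e^{ipx}\, dx, \qquad \braket{\phi}{P^{x}p} = \braket{P\phi}{p} = p \braket{\phi}{p} \text{ for all } \phi \in S, \]

where the second chain of equalities follows by integration by parts, since functions in \(S\) and their derivatives vanish at infinity. If the test-function space is \(D\) instead, the eigenvalue equation admits the solution \(e^{i\lambda x}\) for every complex \(\lambda\): because functions in \(D\) vanish outside a bounded region, \(e^{i\lambda x}\) defines a continuous functional on \(D\) even when it grows exponentially, which is why the momentum operator acquires the entire complex plane as its set of generalized eigenvalues in that rigging.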

Soon after the development of the theory of rigged Hilbert spaces by Gelfand and his associates, the theory was used to develop a new formulation of quantum mechanics. This was done independently by Böhm (1966) and Roberts (1966). It was later demonstrated that the rigged Hilbert space formulation of quantum mechanics can handle a broader range of phenomena than the separable Hilbert space formulation. That broader range includes scattering resonances and decay phenomena (Böhm and Gadella 1989), as already noted. Böhm (1997) later extended this range to include a quantum mechanical characterization of the arrow of time. The Prigogine school developed an alternative characterization of the arrow of time using the rigged Hilbert space formulation of quantum mechanics (Antoniou and Prigogine 1993). Kronz (1998, 2000) used this formulation to characterize quantum chaos in open quantum systems. Castagnino and Gadella (2003) used it to characterize decoherence in closed quantum systems.

3.3 Colombeau Algebras

Dirac delta functions are ubiquitous in quantum mechanics and quantum field theory. However, there are standard restrictions that limit their use. In quantum mechanics, the position eigenstates of a free particle confined to a box in one dimension are “delta-function normalized”: \(\left\langle x' \middle| x \right\rangle = \delta\left( x - x' \right)\). But the introduction of a third position and another inner product, as in \(\left\langle x' \middle| x \right\rangle\left\langle x \middle| x'' \right\rangle = \delta\left( x - x' \right)\delta\left( x - x'' \right)\), results in a product of Dirac delta functions (in this case, the multiplication of distributions that share a common variable), an expression that is not well-defined. In quantum field theory, self-interaction terms may arise in a calculation, such as \(\delta^{2}(x)\), which are likewise not well-defined. In the context of rigged Hilbert space, this limitation is reflected in the fact that \(\Phi^{x}\), the space of distributions, does not have an inner product defined on it (see section 3.2 above).

The prospects for finding a rigorous way to define the multiplication of distributions looked particularly grim given Schwartz’ (1954) proof of the impossibility of defining a differential algebra that contains the space of distributions and preserves the product of continuous functions. However, there are ways to work around Schwartz’ impossibility result to allow for the multiplication of distributions. One way to do so involves Colombeau algebras, which are discussed in section 3.3.2; other approaches are briefly discussed at the end of section 3.3.1.

3.3.1 Informal Sketch of Distribution Theory

To informally build up towards a definition of a distribution, it is helpful to think of distributions as maps from functions to numbers. For example, the standard way of presenting the Dirac delta function is:

\[\int f(x)\delta(x - a)dx = f(a).\]

Viewed in this way, \(\delta\) is mapping the function \(f\) to the number \(f(a)\). In other words, \(\delta\) is a functional. For a distribution to be well defined, the set of functions it is going to map, the space of test functions, must be specified. In doing so, it is necessary to specify the domain of the test functions. In distribution theory, test functions are typically defined on a nonempty fixed open subset \(\Omega\) of the real numbers or the real vector space with \(n\) dimensions \(\mathbb{R}^{n}\). Of course, the open subset \(\Omega\) can simply be \(\mathbb{R}^{n}\) since \(\mathbb{R}^{n}\) is a subset of itself, though not a proper subset.[1]

Test Functions

There are other requirements for specifying the space of test functions having to do with derivatives. The first derivative \(\delta'(x - a)\) of the Dirac delta function \(\delta(x - a)\) is defined by

\[\int f(x)\delta'(x - a)dx = - f'(a).\]

Higher order derivatives are defined similarly, with a factor of \((-1)^{n}\) in front of the \(n\)th derivative of \(f\), e.g.,

\[\int f(x)\delta''(x - a)dx = f''(a).\]

This suggests the function \(f\) should be differentiable up to some arbitrary order \(k\) or infinitely differentiable. The key point is that the number of derivatives of the Dirac delta function depends on the number of derivatives \(f\) has. For the sake of simplicity, we will assume the test functions are infinitely differentiable, which is usually denoted as \(C^{\infty}(\Omega)\). Note that choosing test functions \(C^{k}(\Omega)\) that are differentiable up to order \(k\) will give a different notion of distribution than choosing infinitely differentiable test functions \(C^{\infty}(\Omega)\). Choosing a class of test functions defines a particular kind of distribution. The differentiability of a distribution depends on the differentiability of the test functions. Distributions that would seem to be non-differentiable can in fact be differentiable precisely because they inherit the differentiability properties of the test functions. Hence, if the test functions are infinitely differentiable \(C^{\infty}(\Omega)\), then we can differentiate distributions like \(\delta(x - a)\) as many times as we like.
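These sifting identities can be checked numerically by replacing \(\delta\) with a “nascent” delta, a narrow Gaussian \(\delta_{\sigma}\): as \(\sigma \rightarrow 0\), \(\int f(x)\delta_{\sigma}(x - a)dx \rightarrow f(a)\) and \(\int f(x)\delta_{\sigma}'(x - a)dx \rightarrow -f'(a)\). A minimal sketch (the choice of \(f = \sin\), the shift \(a\), and the grid parameters are all illustrative):

```python
import math

def nascent_delta(x, sigma):
    # Narrow Gaussian approximating the Dirac delta as sigma -> 0
    return math.exp(-(x / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

def nascent_delta_prime(x, sigma):
    # Derivative of the Gaussian, approximating delta'
    return -x / sigma ** 2 * nascent_delta(x, sigma)

def midpoint_integral(g, lo, hi, n=200_000):
    # Simple midpoint-rule quadrature
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

f = math.sin   # an illustrative test function
a = 1.0        # the shift in delta(x - a)
sigma = 1e-3   # width of the nascent delta

sift = midpoint_integral(lambda x: f(x) * nascent_delta(x - a, sigma), a - 1, a + 1)
sift_prime = midpoint_integral(lambda x: f(x) * nascent_delta_prime(x - a, sigma), a - 1, a + 1)

print(sift)        # close to f(a) = sin(1)
print(sift_prime)  # close to -f'(a) = -cos(1)
```

The sign flip in the derivative case is exactly the integration-by-parts rule stated above.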

Test functions are also required to have compact support within \(\Omega\). This means that there exists a compact set \(K \subset \Omega\) such that the test function \(\phi(x) = 0\) whenever \(x \notin K\). While this seems to be a very restrictive condition, it will generate a larger space of distributions (see below).[2] Gathering together these properties, we can now define test functions as (1) infinitely differentiable functions \(C^{\infty}(\Omega)\) with (2) compact support in \(\Omega\). The collection of test functions has the structure of a vector space, or more precisely a topological vector space which is denoted as \(D(\Omega)\) or \(C_{c}^{\infty}(\Omega)\), where the subscript \(c\) indicates that the infinitely differentiable functions have compact support in \(\Omega\). It also follows that \(C_{c}^{\infty}(\Omega) \subset C^{\infty}(\Omega)\).
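The canonical example of such a test function is the “bump function,” which is infinitely differentiable everywhere yet identically zero outside the compact set \([-1, 1]\). A minimal sketch:

```python
import math

def bump(x):
    # Smooth (C-infinity) bump function with compact support [-1, 1]
    if abs(x) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - x * x))

print(bump(0.0))  # peak value e**(-1)
print(bump(0.9))  # small but nonzero inside the support
print(bump(2.0))  # exactly 0 outside the support
```

All derivatives of `bump` vanish at \(x = \pm 1\), which is what lets the function be perfectly smooth despite being cut off.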

Instead of requiring the test functions to have compact support within \(\Omega\), weaker requirements can be imposed on the test functions. For example, we can replace the compact support requirement with requiring the test functions in \(C^{\infty}(\Omega)\) along with their derivatives to decay sufficiently rapidly as \(|x| \rightarrow \infty\). These test functions, which are called Schwartz test functions, have excellent asymptotic properties. The space of Schwartz test functions will be denoted as \(S(\Omega)\). Any compactly supported test function will obey the Schwartz conditions. Ignoring differentiability requirements for the moment, the set of compactly supported test functions \(D(\Omega)\) will be a proper subset of \(S(\Omega)\), i.e., \(D(\Omega) \subset S(\Omega)\). Distributions are continuous linear maps from the test functions to the real (or complex) numbers (see below). In other words, distributions are elements of the dual spaces, denoted \(D'(\Omega)\) and \(S'(\Omega)\), of their test function spaces. Elements of \(S'(\Omega)\) are called tempered distributions. Since \(D(\Omega) \subset S(\Omega)\) and the convergence in \(D(\Omega)\) is stronger than in \(S(\Omega)\), \(S'(\Omega) \subset D'(\Omega)\), i.e., there are fewer tempered distributions than distributions. An example of a distribution that is not a tempered distribution is \(e^{x}\) because it is positive and not polynomially bounded as \(|x| \rightarrow \infty\). The more restrictive the test function space is, the “wilder” the distributions can be. Tempered distributions are important because they allow the Fourier transform to be extended from “standard functions” to tempered distributions. The Fourier transform of a Schwartz function is a Schwartz function, so the Fourier transform of any tempered distribution can be defined. Tempered distributions are “slow growing” in that their derivatives grow at most as fast as some polynomial. The Wightman field operators (see section 4.2 below) are typically assumed to be tempered distributions. There are other conditions that can be imposed on the test functions, such as requiring that they are holomorphic or analytic, but we will not consider those possibilities here.

Distributions

As noted above, a distribution takes a test function and maps it to a real or complex number, and in that capacity it is referred to as a functional. It is not a function, even though distributions such as “the Dirac delta function” have function in their titles.[3]

The space of test functions has the structure of a vector space. That means that any two test functions can be added together \(f+g\) or multiplied by a scalar \(\lambda f\) giving another test function. Distributions are required to respect those operations and map test functions like \(f+g\) or \(\lambda f\) to a real number. A linear functional \(u\) will map functions \(f\), \(g\) to real numbers as follows: \(u(\lambda f+ \lambda' g)=\lambda u(f)+\lambda' u(g)\) for scalars \(\lambda\), \(\lambda'\).[4] A distribution will be a linear functional on the set of test functions \(D(\Omega).\)
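In code, this is just a higher-order function: a distribution takes a test function and returns a number, and linearity can be checked directly. A sketch (with \(\delta_a\) as the evaluation functional and a “regular” distribution \(T_g\) built from an ordinary function \(g\) by numerical integration; the particular functions and grid are illustrative choices):

```python
import math

def delta(a):
    # Dirac delta at a: the evaluation functional phi -> phi(a)
    return lambda phi: phi(a)

def regular(g, lo=-10.0, hi=10.0, n=100_000):
    # Regular distribution T_g(phi) = integral of g * phi (midpoint rule)
    def T(phi):
        h = (hi - lo) / n
        total = 0.0
        for i in range(n):
            x = lo + (i + 0.5) * h
            total += g(x) * phi(x)
        return total * h
    return T

d = delta(1.0)

# Linearity: u(2f + 3g) == 2u(f) + 3u(g)
lhs = d(lambda x: 2 * math.sin(x) + 3 * math.cos(x))
rhs = 2 * d(math.sin) + 3 * d(math.cos)

T = regular(lambda x: math.exp(-x * x))  # T_g for a Gaussian g
total_mass = T(lambda x: 1.0)            # integral of g, approximately sqrt(pi)
```

Both `delta(a)` and `regular(g)` have the same type, test function in, number out, which is the whole point: the delta is a perfectly good functional even though it corresponds to no ordinary function \(g\).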

Since an overarching goal is to facilitate doing calculus with distributions, it is necessary to characterize how distributions converge. That characterization is based on the specification of how sequences of test functions converge; distributions should converge in a way that respects the specification of convergence for sequences of test functions. Consider some sequence of test functions \(f_{n}\) that converges to \(f\) in the limit \(n \rightarrow \infty\), where \(f, f_{n} \in C_{c}^{\infty}(\Omega)\). A distribution acting on each \(f_{n}\) maps each one to a number, so as \(n \rightarrow \infty\), the resulting number should converge to what the distribution assigns to the test function \(f\). In other words, the distribution should be (sequentially) continuous. The appropriate kind of continuity comes from the type of convergence imposed on the space of test functions. These considerations yield a suitable definition of a distribution: \(T:C_{c}^{\infty}(\Omega) \rightarrow \mathbb{R}\) is a distribution if \(T\) is (1) linear and (2) continuous. The set of all distributions is denoted by \(D'(\Omega)\). The set of test functions, denoted by \(D(\Omega)\) or \(C_{c}^{\infty}(\Omega)\), is a proper subset of \(D'(\Omega)\): \(D(\Omega) \subset D'(\Omega)\). This should make intuitive sense since the Dirac delta “function” is a distribution but not a function, so it could not possibly be a test function. The set of distributions has to be larger. Perhaps unsurprisingly, \(D'(\Omega)\) is also a vector space, so distributions can be added together and multiplied by scalars. However, \(D'(\Omega)\) does not permit us to multiply distributions.

Schwartz’ Impossibility Result

One way to define a multiplication operation on the vector space of distributions \(D'(\Omega)\) is to embed \(D'(\Omega)\) in an algebra. An algebra over a field is a vector space over the same field with an additional binary operation called multiplication, which is associative and distributes over addition.

Schwartz’ “impossibility” result is supposed to show that such an embedding is impossible given certain conditions. Multiplying a smooth function by a distribution is always well-defined in Schwartz’ theory of distributions.[5] Schwartz’ impossibility result shows that there is no associative product of two distributions extending the well-defined product of a distribution and a smooth function. The basic idea of Schwartz’ proof is to consider the algebra of all continuous functions on \(\mathbb{R}\) with pointwise addition, pointwise multiplication, and the zero function as the additive identity, and then to consider an algebra \(\mathcal{A}\) that contains all of the continuous functions as elements. Most presentations of Schwartz’ result show that \(D'(\Omega)\) cannot be embedded in \(\mathcal{A}\).[6] Assume that the multiplication product \(\circ\) of \(\mathcal{A}\) is associative, i.e., \(f \circ (g \circ h) = (f \circ g) \circ h\), and coincides with the pointwise multiplication of the algebra of continuous functions, i.e., \((f \circ g)(x) = f(x)g(x)\), where the constant function \(1\) that assigns every element in \(\mathbb{R}\) the value one is the identity element of the algebra, i.e., for all \(f \in \mathcal{A}\), \(f \circ 1 = f = 1 \circ f\).

A quick example shows that this produces a contradiction. A property of the Dirac delta function is that \(x\delta(x) = 0\).[7] The Cauchy Principal Value distribution \(\text{p.v.}\left( \frac{1}{x} \right)\) when multiplied by \(x\) is \(1\), i.e., \(\text{p.v.}\left( \frac{1}{x} \right)x = 1\). If there is an associative multiplication of distributions that extends the multiplication of smooth functions by distributions, then there is a contradiction (see Alvarez, pp. 102–103 and Oberguggenberger, pp. 26–27). Since \(x\delta(x) = 0\), \(0 = \text{p.v.}\left( \frac{1}{x} \right) \circ \left( x\delta(x) \right)\). By associativity of multiplication,

\[\begin{align} 0 &= \text{p.v.}\left( \frac{1}{x} \right) \circ \left( x\delta(x) \right) \\ &= \left(\text{p.v.}\left( \frac{1}{x} \right)x \right) \circ \delta(x) \\ &= 1 \circ \delta(x) \\ &= \delta(x), \end{align}\]

so \(\delta(x) = 0\). But \(\delta(x) \neq 0\).

One of the important properties of distributions involves differentiation, so it is desirable to define a similar operator \(D\) for \(\mathcal{A}\) which coincides with the derivative defined on continuously differentiable functions, so that, e.g., \(Dx = 1\). \(D\) also satisfies Leibniz’ rule: \(D(fg) = (Df)g + f(Dg)\).[8] Schwartz observed that it is not possible to have an associative multiplication operation that also satisfies Leibniz’ rule and coexists with the Dirac delta function. Here is an example of this incompatibility (see Nedeljkov et al. 1998, p. 3). Let \(H \in \mathcal{A}\) have the property that \(H \circ H = H\). Then \(H\) behaves as a constant, i.e., \(DH = 0\). If \(H(x)\) is taken to be the Heaviside step function, then \(DH(x) = 0\). But in distribution theory \(DH(x) = \delta(x)\), and combining both results yields \(\delta(x) = 0\).
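The step from \(H \circ H = H\) to \(DH = 0\) uses only Leibniz’ rule (assuming \(\circ\) is commutative, as pointwise multiplication is): applying \(D\) to \(H \circ H = H\) and to \(H \circ H \circ H = H\) gives

\[ 2H \circ DH = DH, \qquad 3H \circ DH = DH, \]

where \(H \circ H = H\) is used to simplify the second equation; subtracting the two equations yields \(H \circ DH = 0\), and substituting back into the first gives \(DH = 0\).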

However, Oberguggenberger (1992, pp. 28–29) and Colombeau (1992, p. 8) note that Schwartz’ result does not depend on the multiplication of distributions. Rather, what the proof really shows is that it is “impossible” to multiply and differentiate continuous functions and have one singular object, the Dirac delta function, as an element in \(\mathcal{A}\). As Oberguggenberger (1992, p. 28) writes, “Interpreted more favorably, the result just says that in associative algebras of generalized functions, multiplication and differentiation cannot simultaneously extend the corresponding classical operations unrestrictedly.” Nedeljkov et al. (1998, p. 3) write, “there is no way to define a ‘reasonable’ product on all of \(D'\) which still has values in \(D'\).” Based on Oberguggenberger (1992), Grosser et al. (2001, p. 3) suggest that there are three ways to escape Schwartz’ impossibility result: (1) regular intrinsic operations, (2) irregular intrinsic operations, and (3) extrinsic products and algebras containing the distributions. In (1), the multiplication of distributions is limited to a subspace of \(D'\) where multiplication is defined classically; however, multiplication defined in this way does not apply to all of \(D'\). In (2), a product is assigned to \(D'\) but only for certain pairs of distributions; there are many ways to select the pairs, though the resulting multiplication will usually not be continuous or associative. Both (1) and (2) have the disadvantage of product operations that are not defined on all of \(D'\).

3.3.2 Colombeau Algebras

To get around an “impossibility” proof, one standard strategy is to modify or remove one or more of the assumptions within the proof.[9] Many of the assumptions in Schwartz’ proof seem natural, so a conservative approach would be to modify just one. Colombeau (1984, 1992) developed an associative algebra of generalized functions which contains the distributions but does not preserve the product of continuous functions.

A Colombeau algebra (see Colombeau (1992), p. 2 and chapter 8) is an associative differential algebra of generalized functions \(\mathcal{G}(\Omega)\). The algebra of all infinitely differentiable functions on \(\Omega\), i.e., \(C^{\infty}(\Omega)\), is also a differential algebra. How does \(D'(\Omega)\) fit into this picture, since it is not an algebra? \(C^{\infty}(\Omega)\), \(D'(\Omega)\), and \(\mathcal{G}(\Omega)\) are all vector spaces and their sets of elements have the following relationships: \(C^{\infty}(\Omega) \subset D'(\Omega) \subset \mathcal{G}(\Omega)\). \(\mathcal{G}(\Omega)\) induces on \(D'(\Omega)\) its addition, scalar multiplication, and differentiation properties, though not its multiplication. \(\mathcal{G}(\Omega)\) induces on \(C^{\infty}(\Omega)\) all of the usual properties that \(C^{\infty}(\Omega)\) already has, including multiplication; thus \(C^{\infty}(\Omega)\) is a subalgebra of \(\mathcal{G}(\Omega)\). Two distributions in \(D'(\Omega)\) can be multiplied using the multiplication operation in \(\mathcal{G}(\Omega)\), but the result may not be a distribution! Rather, the multiplication of two distributions will be an element of \(\mathcal{G}(\Omega)\) (i.e., a generalized function) but not necessarily an element of \(D'(\Omega)\) (i.e., a distribution).
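The flavor of this can be conveyed with a toy model (a caricature only, not Colombeau's actual construction, which is sketched in the following paragraphs): represent a generalized function by a family of smooth regularizations indexed by a parameter \(\epsilon > 0\), and multiply families pointwise. The square of a delta family is then a perfectly good family, even though it converges to no distribution as \(\epsilon \rightarrow 0\):

```python
import math

def delta_family(eps):
    # Gaussian regularization of the Dirac delta at scale eps
    return lambda x: math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi))

def product(f, g):
    # Multiplication of epsilon-families is just pointwise multiplication
    return lambda x: f(x) * g(x)

for eps in (0.1, 0.01, 0.001):
    d = delta_family(eps)
    d_squared = product(d, d)  # "delta squared" as a family
    # d_squared(0) = 1 / (pi * eps**2): finite for each eps, divergent as eps -> 0
    print(eps, d_squared(0.0))
```

The product lives happily at the level of families; it simply fails to correspond to any distribution in the \(\epsilon \rightarrow 0\) limit, which is why the result of multiplying two distributions may leave \(D'(\Omega)\).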

However, Colombeau algebras \(\mathcal{G}(\Omega)\) do not contain the algebra of continuous functions \(C(\Omega)\) as a subalgebra. In other words, the multiplication operation of a Colombeau algebra does not extend the multiplication operation on continuous functions, which Schwartz assumed for the extended algebra \(\mathcal{A}\). If two continuous functions were elements of \(\mathcal{G}(\Omega)\) and were multiplied together using the multiplication operator of \(\mathcal{G}(\Omega)\), the result need not give the same answer as the multiplication of those two continuous functions in \(C(\Omega)\). The difference between the two multiplications is “infinitesimal” in that there is no difference between the results provided that the product is not multiplied by some “infinite quantity” like \(\delta(0)\). That is allowed in \(\mathcal{G}(\Omega)\), but it is not allowed in \(C(\Omega)\). Thus, the multiplication operator in \(\mathcal{G}(\Omega)\), when restricted to \(C(\Omega)\), will give the same results as classical analysis for \(C(\Omega)\). Following Colombeau (1992, section 8.1), distributions are linear maps from the test functions \(D(\Omega)\) to \(\mathbb{R}\) or \(\mathbb{C}\). The product of distributions looks like a nonlinear map from the test functions \(D(\Omega)\) to \(\mathbb{R}\) or \(\mathbb{C}\). Consider \(f_{1},f_{2} \in C^{\infty}(\Omega)\). Suppose they are considered as distributions. For a test function \(\phi \in D(\Omega)\), each of \(f_{1},f_{2}\) would map it to some real (or complex) number via \(\int_{\Omega} f_{1}(x)\phi(x)dx\) and \(\int_{\Omega} f_{2}(x)\phi(x)dx\). Assuming it made sense to take the product \(f_{1}f_{2}\) as distributions, their product would be a mapping from a test function \(\phi \in D(\Omega)\) to the real or complex number given by \(\int_{\Omega}f_{1}(x)\phi(x)dx\ \int_{\Omega}f_{2}(x)\phi(x)dx\). If we instead form their classical pointwise product as elements of \(C(\Omega)\), the corresponding distribution maps a test function \(\phi \in D(\Omega)\) to \(\int_{\Omega} f_{1}(x)f_{2}(x)\phi(x)dx\). In general, \(\int_{\Omega}f_{1}(x)\phi(x)dx\ \int_{\Omega}f_{2}(x)\phi(x)dx\) will give a different result than \(\int_{\Omega} f_{1}(x)f_{2}(x)\phi(x)dx\). However, the two products become identical if an ideal \(\mathcal{I}\) is quotiented out, i.e.,

\[\int_{\Omega}f_{1}(x)\phi(x)dx\ \int_{\Omega}f_{2}(x)\phi(x)dx+\mathcal{I} =\int_{\Omega}f_{1}(x)f_{2}(x)\phi(x)dx+\mathcal{I}.\]
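The mismatch between the two candidate products is easy to exhibit: take \(f_{1} = f_{2} = x\) and a Gaussian test function \(\phi\); the product of the two integrals vanishes by symmetry, while the integral of the pointwise product does not. A numerical sketch (the midpoint-rule grid and the interval are arbitrary choices):

```python
import math

def integral(h, lo=-10.0, hi=10.0, n=100_000):
    # Midpoint-rule integral of h over [lo, hi]
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * step
        total += h(x)
    return total * step

phi = lambda x: math.exp(-x * x)  # a rapidly decaying test function
f1 = f2 = (lambda x: x)

product_of_integrals = integral(lambda x: f1(x) * phi(x)) * integral(lambda x: f2(x) * phi(x))
integral_of_product = integral(lambda x: f1(x) * f2(x) * phi(x))

print(product_of_integrals)  # near 0: the integrand x*exp(-x**2) is odd
print(integral_of_product)   # near sqrt(pi)/2: the integrand x**2*exp(-x**2) is not
```

So the two notions of “product” genuinely disagree as maps on test functions, which is why they can only be identified modulo an ideal.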

Roughly, the algebra of generalized functions must at least accomplish two tasks: (1) \(D'(\Omega)\) is embedded in it and (2) there is an ideal \(\mathcal{I}\) of it such that the two products above are equal. To accomplish (1), consider the set of all infinitely differentiable functions on the space of test functions, \(C^{\infty}(D(\Omega))\). Recall that \(D(\Omega)\) (or \(C_{c}^{\infty}(\Omega)\)) is the set of all infinitely differentiable functions with compact support in \(\Omega\), that \(D(\Omega) \subset D'(\Omega)\), and that an element of \(D'(\Omega)\) is a mapping from \(D(\Omega)\) to a real or complex number. An element of \(C^{\infty}(D(\Omega))\) is likewise a mapping from \(D(\Omega)\) to a real or complex number. When \(C^{\infty}(D(\Omega))\) is given vector space operations, multiplication, and partial derivatives, it is a differential algebra. Suppose there is a collection of distributions

\[T_{1}(\phi),T_{2}(\phi),\ldots,T_{m}(\phi) \in D'(\Omega),\]

there could be functions

\[{f\left( T_{1}(\phi),\ T_{2}(\phi),\ldots,T_{m}(\phi) \right),\ g(T_{1}(\phi),\ T_{2}(\phi),\ldots,T_{m}(\phi)) \in C}^{\infty}(D(\Omega))\]

such that the product of \(f\) and \(g\) is well-defined. This would seem to be big enough to cover all the nonlinear functions of distributions. While there are mathematical niceties to work out, it should be plausible that \(D'(\Omega)\) can be embedded in the differential algebra \(C^{\infty}(D(\Omega))\), which accomplishes task (1). For more details see chapter 8 of Colombeau (1992).

In preparation for finding an ideal that will accomplish (2), we are going to cut down the size of \(C^{\infty}(D(\Omega))\) by constructing a subalgebra of “moderate” functions \(f \in C_{M}^{\infty}(D(\Omega))\), which is to say that for any test function \(\phi \in D(\Omega)\), the infinitely differentiable function \(f\) and the partial derivatives of \(f\) are bounded above by some constant multiplied by \(\left( \frac{1}{\epsilon} \right)^{N}\) as \(\epsilon \rightarrow 0\). It can be proven that the distributions \(D'(\Omega) \subset C_{M}^{\infty}(D(\Omega))\). Lastly, we identify two moderate functions whenever their difference, together with all of its derivatives, is “negligible,” i.e., vanishes faster than any power of \(\epsilon\) as \(\epsilon \rightarrow 0\); the set of negligible functions is denoted \(\mathcal{N}(D(\Omega))\), and \(\mathcal{N}\left( D(\Omega) \right)\) is an ideal of \(C_{M}^{\infty}\left( D(\Omega) \right)\). The difference

\[\int_{\Omega}f_{1}(x)\phi(x)dx\ \int_{\Omega}f_{2}(x)\phi(x)dx - \int_{\Omega}f_{1}(x)f_{2}(x)\phi(x)dx\]

belongs to \(\mathcal{N}\left( D(\Omega) \right)\), hence both products are equivalent, which accomplishes task (2). A (special) Colombeau algebra \(\mathcal{G}^{s}(\Omega)\) is defined as:[10]

\[\mathcal{G}^{s}(\Omega) = \frac{C_{M}^{\infty}(D(\Omega))}{\mathcal{N(}D(\Omega))}. \]
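One common way of spelling out the moderateness and negligibility conditions entering this quotient (as in the special algebra of Grosser et al. 2001) is in terms of a regularization parameter \(\epsilon\): for representatives \(f_{\epsilon}\), one requires, for each compact \(K \subset \Omega\) and each derivative \(\partial^{\alpha}\), as \(\epsilon \rightarrow 0\),

\[ \text{moderate: } \exists N \;\; \sup_{x \in K} \left| \partial^{\alpha} f_{\epsilon}(x) \right| = O\left(\epsilon^{-N}\right), \qquad \text{negligible: } \forall q \;\; \sup_{x \in K} \left| \partial^{\alpha} f_{\epsilon}(x) \right| = O\left(\epsilon^{q}\right). \]

The quotient of the moderate families by the negligible ones then yields the algebra of generalized functions.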

A key point about \(\mathcal{G}^{s}(\Omega)\) is that it preserves the multiplication of infinitely differentiable functions, i.e., the product on \(C^{\infty}(\Omega)\), but it does not preserve the product of continuous functions, i.e., the product on \(C(\Omega)\), which was an assumption in Schwartz’ impossibility result.

3.3.3 Applications of Colombeau Algebras in Physics

Colombeau algebras have been applied in several areas of physics. Colombeau (1992) provides applications in elasticity and elastoplasticity (characterizing shock waves and numerically simulating collisions) and in acoustics (sound propagation in a medium with discontinuous characteristics). In those cases, Colombeau algebras provide tools for analyzing nonlinear partial differential equations with singular coefficients and for obtaining numerical solutions.

Another area of application could be perturbative quantum field theory. There are terms in perturbative expansions in quantum field theory that can involve time-ordered products of distributions, which can lead to divergences in calculations. Epstein and Glaser developed causal perturbative quantum field theory to deal with this problem by requiring that distributions fulfill a causality requirement (for an overview of this approach see Scharf (2014)). This is similar in spirit to the first response to Schwartz’ impossibility result involving regular intrinsic operations where the product of distributions is restricted to a subset of distributions satisfying some condition. In Wightman’s axiomatic quantum field theory (see section 4.2 below), the fields are operator valued tempered distributions and the calculation of vacuum expectation values (n-point functions) involves products of these tempered distributions, so being able to multiply distributions is crucial. Colombeau has used these algebras to mathematically elucidate the Heisenberg-Pauli canonical formalism of quantum fields, which is developed further in later publications including Colombeau (2007), Colombeau et al. (2007), and Colombeau and Gsponer (2008).

Colombeau algebras have been used more recently to address issues in the general theory of relativity and in quantum mechanics. Grosser et al. (2001) show how to create diffeomorphism invariant Colombeau algebras for both general and special relativity on differentiable manifolds. This “nonlinear distributional geometry” provides a rigorous mathematical framework for nonlinear global analysis in the presence of singularities. In the context of nonrelativistic quantum mechanics, Colombeau algebras have been used to provide a solution to a problem that cannot be solved in either a standard Hilbert space or a rigged Hilbert space. The angle and angular momentum operators do not commute, so it is not possible to make precise simultaneous measurements of this pair of observables on a planar rotator. This leads to the question as to whether it is possible to find minimum uncertainty functions for the planar rotator. Fuss and Filinkov (2014) note that it was shown in Holevo (2011) that such states do not exist in a standard Hilbert space. They then demonstrate that such states cannot exist in a rigged Hilbert space either, but that they do exist in the state space of a Colombeau algebra on the unit circle.

In Colombeau et al. (2008), the authors note in closing their discussion: “A comprehensive approach should discuss many pending problems related to the properties of the Green’s functions, the definition of asymptotic states, the derivation of closed-form and perturbative solutions, renormalization, etc. Moreover, the implications of nonlinear generalized functions should be related to the numerous efforts that have been made to give an axiomatic foundation to quantum field theory.” There have been no publications to date that fulfill this promissory note to relate Colombeau algebras with the rigged Hilbert space framework for axiomatic QFT or the algebra of observables of algebraic QFT.

4. Mathematical Rigor: Two Paths

4.1 Algebraic Quantum Field Theory

In 1943, Gelfand and Neumark published an important paper on a class of normed rings, which are now known as abstract \(C^*\)-algebras. Their paper was influenced by Murray and von Neumann’s work on rings of operators, which was discussed in the previous section. In their paper, Gelfand and Neumark focus attention on abstract normed \(*\)-rings. They show that any \(C^*\)-algebra can be given a concrete representation in a Hilbert space (which need not be separable). That is to say, there is an isomorphic mapping of the elements of a \(C^*\)-algebra into the set of bounded operators of the Hilbert space. Four years later, Segal (1947a) published a paper that served to complete the work of Gelfand and Neumark by specifying the definitive procedure for constructing concrete (Hilbert space) representations of an abstract \(C^*\)-algebra. It is called the GNS construction (after Gelfand, Neumark, and Segal). That same year, Segal (1947b) published an algebraic formulation of quantum mechanics, which was substantially influenced by (though deviating somewhat from) von Neumann’s (1963, Vol. III, No. 9) algebraic formulation of quantum mechanics, which is cited in the previous section. It is worth noting that although \(C^*\)-algebras satisfy Segal’s postulates, the algebra that is specified by his postulates is a more general structure known as a Segal algebra. Every \(C^*\)-algebra is a Segal algebra, but the converse is false since Segal’s postulates do not require an adjoint operation to be defined. If a Segal algebra is isomorphic to the set of all self-adjoint elements of a \(C^*\)-algebra, then it is a special or exceptional Segal algebra. Although the mathematical theory of Segal algebras has been fairly well developed, a \(C^*\)-algebra is the most important type of algebra that satisfies Segal’s postulates.
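The GNS construction can be made concrete in the simplest case: the \(C^*\)-algebra of all \(2 \times 2\) complex matrices with a state given by a density matrix. The following sketch (the density matrix and sample matrices are illustrative choices, and the finite-dimensional setting sidesteps all analytic subtleties) builds the Hilbert space from the algebra itself and represents each element by left multiplication:

```python
import numpy as np

# C*-algebra: all 2x2 complex matrices. State: omega(A) = Tr(rho A),
# with an illustrative faithful density matrix rho.
rho = np.diag([0.7, 0.3]).astype(complex)
def omega(A):
    return np.trace(rho @ A)

# Basis of the algebra: the four matrix units E_11, E_12, E_21, E_22
E = []
for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    M = np.zeros((2, 2), dtype=complex)
    M[i, j] = 1.0
    E.append(M)

# GNS inner product on the algebra itself: <A, B> = omega(A* B).
# Because omega is faithful, there are no null vectors to quotient out.
G = np.array([[omega(a.conj().T @ b) for b in E] for a in E])

def vec(A):
    """Coordinates of A in the basis E (row-major flattening)."""
    return A.reshape(4)

def pi(A):
    """GNS representative of A: left multiplication on the algebra."""
    return np.column_stack([vec(A @ b) for b in E])

def inner(u, v):
    return u.conj() @ G @ v

Omega = vec(np.eye(2, dtype=complex))   # cyclic vector: the unit of the algebra
A = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=complex)
B = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

print(inner(Omega, pi(A) @ Omega), omega(A))   # the state recovered as a vector state
print(np.allclose(pi(A @ B), pi(A) @ pi(B)))   # pi is a homomorphism
```

The two checks exhibit the defining features of the construction: \(\omega(A) = \langle \Omega, \pi(A)\Omega \rangle\), and \(\pi\) respects the algebraic operations.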

The algebraic formulations of quantum mechanics that were developed by von Neumann and Segal did not change the way that quantum mechanics was done. Nevertheless, they did have a substantial impact in two related contexts: QFT and quantum statistical mechanics. The key difference leading to the impact has to do with the domain of applicability. The domain of quantum mechanics consists of finite quantum systems, meaning quantum systems that have a finite number of degrees of freedom; in QFT and quantum statistical mechanics, by contrast, the systems of special interest – i.e., quantum fields and particle systems in the thermodynamic limit, respectively – are infinite quantum systems, meaning quantum systems that have an infinite number of degrees of freedom. Dirac (1927), reprinted in Schwinger (1958), was the first to recognize the importance of infinite quantum systems for QFT.

Segal (1959, p. 5) was the first to suggest that the beauty and power of the algebraic approach become evident when working with an infinite quantum system. The key advantage of the algebraic approach, according to Segal (1959, pp. 5–6), is that one may work in the abstract algebraic setting where it is possible to obtain interacting fields from free fields by an automorphism on the algebra, one that need not be unitarily implementable. Segal notes (1959, p. 6) that von Neumann (1937) had a similar idea (that field dynamics are to be expressed as an automorphism on the algebra) in an unpublished manuscript. Segal notes this advantage in response to a result obtained by Haag (1955), that field theory representations of free fields are unitarily inequivalent to representations of interacting fields. Haag mentions that von Neumann (1938) first discovered ‘different’ (unitarily inequivalent) representations much earlier. A different way of approaching unitarily inequivalent representations, by contrast with Segal’s approach, was later presented by Haag and Kastler (1964), who argued that unitarily inequivalent representations are physically equivalent. Their notion of physical equivalence was based on Fell’s mathematical idea of weak equivalence (Fell 1960).

After indicating important similarities between his and von Neumann’s approaches to infinite quantum systems, Segal draws an important contrast that serves to give the advantage to his approach over von Neumann’s. The key mathematical difference, according to Segal, is that von Neumann was working with a weakly closed ring of operators (meaning that the ring of operators is closed with respect to the weak operator topology), whereas Segal is working with a uniformly closed ring of operators (closed with respect to the uniform topology). This difference is crucial because it has the following interpretive significance, which rests on operational considerations:

The present intuitive idea is roughly that the only measurable field-theoretic variables are those that can be expressed in terms of a finite number of canonical operators, or uniformly approximated by such; the technical basis is a uniformly closed ring (more exactly, an abstract \(C^*\)-algebra). The crucial difference between the two varieties of approximation arises from the fact that, in general, weak approximation has only analytical significance, while uniform approximation may be defined operationally, two observables being close if the maximum (spectral) value of the difference is small (Segal 1959, p. 7).

Initially, it appeared that Segal’s assessment of the relative merits of von Neumann algebras and \(C^*\)-algebras with respect to physics was substantiated by a seminal paper (Haag and Kastler 1964). Among other things, Haag and Kastler introduced the key axioms of the algebraic approach to QFT. They also argued that unitarily inequivalent representations are “physically equivalent” to each other. However, the use of physical equivalence to show that unitarily inequivalent representations are not physically significant has been challenged; see Kronz and Lupher (2005), Lupher (2018), and Ruetsche (2011). The prominent role of type-III factor von Neumann algebras within the algebraic approach to quantum statistical mechanics and QFT raises further doubts about Segal’s assessment.

The algebraic approach has proven most effective in quantum statistical mechanics. It is extremely useful for characterizing many important macroscopic quantum effects including crystallization, ferromagnetism, superfluidity, structural phase transition, Bose-Einstein condensation, and superconductivity. A good introductory presentation is Sewell (1986), and for a more advanced discussion see Bratteli and Robinson (1979, 1981). In algebraic quantum statistical mechanics, an infinite quantum system is defined by specifying an abstract algebra of observables. A particular state may then be used to specify a concrete representation of the algebra as a set of bounded operators in a Hilbert space. Among the most important types of states that are considered in algebraic statistical mechanics are the equilibrium states, which are often referred to as “KMS states” (since they were first introduced by the physicists Kubo, Martin, and Schwinger). There is a continuum of KMS states since there is at least one KMS state for each possible temperature value \(\tau\) of the system, for \(0\le \tau \le +\infty\). Given an automorphism group, each KMS state corresponds to a representation of the algebra of observables that defines the system, and each of these representations is unitarily inequivalent to any other. It turns out that each representation that corresponds to a KMS state is a factor: if \(\tau = 0\) then it is a type-I factor, if \(\tau = +\infty\) then it is a type-II factor, and if \(0\lt \tau \lt +\infty\) then it is a type-III factor. Thus, type-III factors play a predominant role in algebraic quantum statistical mechanics.

In algebraic QFT, an algebra of observables is associated with bounded regions of Minkowski spacetime (and unbounded regions including all of spacetime by way of certain limiting operations) that are required to satisfy standard axioms of local structure: isotony, locality, covariance, additivity, positive spectrum, and a unique invariant vacuum state. The resulting set of algebras on Minkowski spacetime that satisfy these axioms is referred to as the net of local algebras. It has been shown that special subsets of the net of local algebras – those corresponding to various types of unbounded spacetime regions such as tubes, monotones (a tube that extends infinitely in one direction only), and wedges – are type-III factors. Of particular interest for the foundations of physics are the algebras that are associated with bounded spacetime regions, such as a double cone (the finite region of intersection of a forward and a backward light cone). As a result of work done over the last thirty years, local algebras of relativistic QFT appear to be type-III von Neumann algebras; see Halvorson (2007, pp. 749–752) for more details.

One important area for interpretive investigation is the existence of a continuum of unitarily inequivalent representations of an algebra of observables. Attitudes towards unitarily inequivalent representations differ drastically in the philosophical literature. In Wallace (2006), unitarily inequivalent representations are not considered a foundational problem for QFT, while in Ruetsche (2011), Lupher (2018), and Kronz and Lupher (2005) they are considered physically significant.

4.2 Wightman’s Axiomatic Quantum Field Theory

In the early 1950s, theoretical physicists were inspired to axiomatize QFT. One motivation for axiomatizing a theory, not the one for the case now under discussion, is to express the theory in a completely rigorous form in order to standardize the expression of the theory as a mature conceptual edifice. Another motivation, more akin to the case in point, is to embrace a strategic withdrawal to the foundations to determine how renovation should proceed on a structure that is threatening to collapse due to internal inconsistencies. One then looks for existing piles (fundamental postulates) that penetrate through the quagmire to solid rock, and attempts to drive home others at advantageous locations. Properly supported elements of the superstructure (such as the characterization of free fields, dispersion relations, etc.) may then be distinguished from those that are untrustworthy. The latter need not be razed immediately, and may ultimately glean supportive rigging from components not yet constructed. In short, the theoretician hopes that the axiomatization will effectively separate sense from nonsense, and that this will serve to make possible substantial progress towards the development of a mature theory. Grounding in a rigorous mathematical framework can be an important part of the exercise, and that was a key aspect of the axiomatization of QFT by Wightman.

In the mid-1950s, Schwartz’s theory of distributions was used by Wightman (1956) to develop an abstract formulation of QFT, which later came to be known as axiomatic quantum field theory. Mature statements of this formulation are presented in Wightman and Gårding (1964) and in Streater and Wightman (1964). It was further refined in the late 1960s by Bogoliubov, who explicitly placed axiomatic QFT in the rigged Hilbert space framework (Bogoliubov et al. 1975, p. 256). It is by now standard within the axiomatic approach to put forth the following six postulates: spectral condition (there are no negative energies or imaginary masses), vacuum state (it exists and is unique), domain axiom for fields (quantum fields correspond to operator-valued distributions), transformation law (unitary representation in the field-operator (and state) space of the restricted inhomogeneous Lorentz group – “restricted” means inversions are excluded, and “inhomogeneous” means that translations are included), local commutativity (field measurements at spacelike separated regions do not disturb one another), asymptotic completeness (the scattering matrix is unitary – this assumption is sometimes weakened to cyclicity of the vacuum state with respect to the polynomial algebra of free fields). Rigged Hilbert space entered the axiomatic framework by way of the domain axiom, so this axiom will be discussed in more detail below.

In classical physics, a field is characterized as a scalar- (or vector- or tensor-) valued function \(\phi(x)\) on a domain that corresponds to some subset of spacetime points. In QFT, a field is characterized by means of an operator rather than a function. A field operator may be obtained from a classical field function by quantizing the function in the canonical manner – see Mandl (1959, pp. 1–17). For convenience, the field operator associated with \(\phi(x)\) is denoted below by the same expression (since the discussion below only concerns field operators). Field operators that are relevant for QFT are too singular to be regarded as realistic, so they are smoothed out over their respective domains using elements of a space of well-behaved functions known as test functions. There are many different test-function spaces (Gelfand and Shilov 1977, Chapter 4). At first, the test-function space of choice for axiomatic QFT was the Schwartz space \(\Sigma\), the space of functions whose elements have partial derivatives of all orders at each point and such that each function and its derivatives decrease faster than \(x^{-n}\) for any \(n\in N\) as \(x\rightarrow \infty\). It was later determined that some realistic models require the use of other test-function spaces. The smoothed field operators \(\phi[f]\) for \(f \in \Sigma\) are known as quantum field operators, and they are defined as follows

\[ \phi[f] = \int d^4 x f(x)\phi(x). \]

The integral (over the domain of the field operator) of the product of the test function \(f(x)\) and the field operator \(\phi(x)\) serves to “smooth out” the field operator over its domain; a more colloquial description is that the field is “smeared out” over space or spacetime. It is postulated within the axiomatic approach that a quantum field operator \(\phi[f]\) may be represented as an unbounded operator on a separable Hilbert space \(\Eta\), and that \(\{\phi[f]: f\in \Sigma \}\) (the set of smoothed field operators associated with \(\phi(x))\) has a dense domain \(\Omega\) in \(\Eta\). The smoothed field operators are often referred to as operator-valued distributions, and this means that for every \(\Phi,\Psi \in \Omega\) there is an element of the space of distributions \(\Sigma^x\), the topological dual of \(\Sigma\), that may be equated to the expression \(\langle \Phi {\mid} \phi[\ ]{\mid}\Psi\rangle\). If \(\Omega'\) denotes the set of functions obtained by applying all polynomials of elements of \(\{\phi[f]: f\in \Sigma \}\) onto the unique vacuum state, then the axioms mentioned above entail that \(\Omega'\) is dense in \(\Eta\) (asymptotic completeness) and that \(\Omega'\subset \Omega\) (domain axiom). The elements of \(\Omega\) correspond to possible states of the elements of \(\{\phi[f]: f\in \Sigma \}\). Though only one field has been considered thus far, the formalism is easily generalizable to a countable number of fields with an associated set of countably indexed field operators \(\phi_k (x)\) – cf. (Streater and Wightman 1964).
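The smoothing role of test functions can be illustrated with a classical singular field rather than an operator-valued one: a delta-like family \(\phi_{\epsilon}\) has no finite pointwise limit as \(\epsilon \rightarrow 0\), yet its pairing \(\phi_{\epsilon}[f] = \int f(x)\phi_{\epsilon}(x)dx\) with a fixed test function stays finite and converges. The Gaussian family and test function below are illustrative choices:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.exp(-x**2) * np.cos(x)      # a Schwartz-class test function, f(0) = 1

def delta_family(eps):
    """A delta-like singular 'field' phi_eps; pointwise it diverges as eps -> 0."""
    return np.exp(-(x / eps)**2) / (eps * np.sqrt(np.pi))

mid = x.size // 2                   # index of the point x = 0
for eps in (0.1, 0.01, 0.001):
    phi_eps = delta_family(eps)
    smeared = np.sum(f * phi_eps) * dx   # phi_eps[f] = int f(x) phi_eps(x) dx
    print(f"eps={eps}: phi_eps(0) = {phi_eps[mid]:9.1f}, phi_eps[f] = {smeared:.6f}")
```

The pointwise value at the origin grows without bound while the smeared value converges to \(f(0) = 1\); only the smeared quantities have a limit, which is why the axioms treat \(\phi[f]\) rather than \(\phi(x)\) as the basic object.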

As noted earlier, the appropriateness of the rigged Hilbert space framework enters by way of the domain axiom. Concerning that axiom, Wightman says the following (in the notation introduced above, which differs slightly from that used by Wightman).

At a more advanced stage in the theory it is likely that one would want to introduce a topology into \(\Omega\) such that \(\phi[f]\) becomes a continuous mapping of \(\Omega\) into \(\Omega\). It is likely that this topology has to be rather strong. We want to emphasize that so far we have only required that \(\langle \Phi{\mid}\phi[f]{\mid}\Psi\rangle\) be continuous in \(f\) for \(\Phi ,\Psi\) fixed; continuity in the pair \(\Phi ,\Psi\) cannot be expected before we put a suitable strong topology on \(\Omega\) (Wightman and Gårding 1964, p. 137).

In Bogoliubov et al. (1975, p. 256), a topology is introduced to serve this role, though it is introduced on \(\Omega'\) rather than on \(\Omega\). Shortly thereafter, they assert that it is not hard to show that \(\Omega'\) is a complete nuclear space with respect to this topology. This serves to justify a claim they make earlier in their treatise:

… it is precisely the consideration of the triplet of spaces \(\Omega \subset \Eta \subset \Omega^*\) which give a natural basis for both the construction of a general theory of linear operators and the correct statement of certain problems of quantum field theory (Bogoliubov et al. 1975, p. 34).

Note that they refer to the triplet \(\Omega \subset \Eta \subset \Omega^*\) as a rigged Hilbert space. In the terminology introduced above, they refer in effect to the Gelfand triplet \((\Omega , \Eta , \Omega^x )\) or (equivalently) the associated rigged Hilbert space \((\Omega , \Omega^x)\) .

Finally, it is worth mentioning that the status of the field in algebraic QFT differs from that in Wightman’s axiomatic QFT. In both approaches, a field is an abstract system having an infinite number of degrees of freedom. Sub-atomic quantum particles are field effects that appear in special circumstances. In algebraic QFT, there is a further abstraction: the most fundamental entities are the elements of the algebra of local (and quasi-local) observables, and the field is a derived notion. The term local means bounded within a finite spacetime region, and an observable is not regarded as a property belonging to an entity other than the spacetime region itself. The term quasi-local is used to indicate that we take the union of all bounded spacetime regions. In short, the algebraic approach focuses on local (or quasi-local) observables and treats the notion of a field as a derivative notion; whereas the axiomatic approach (as characterized just above) regards the field concept as the fundamental notion. Indeed, it is common practice for proponents of the algebraic approach to distance themselves from the field notion by referring to their theory as “local quantum physics”. The two approaches are mutually complementary – they have developed in parallel and have influenced each other by analogy (Wightman 1976). For a discussion of the close connections between these two approaches, see Haag (1996, p. 106).

5. Philosophical Issues

5.1 Pragmatics versus Axiomatics

Most physicists use Lagrangian QFT (LQFT) to make predictions that have been experimentally verified with extraordinary precision in some cases. However, LQFT has been described as a “grab bag of conflicting mathematical ideas” that has not provided a sharp mathematical description of what counts as a QFT model (Swanson 2017, pp. 1–2). Those criticisms motivated mathematically inclined physicists to search for a mathematically rigorous formulation of QFT. Axiomatic versions of QFT have been favored by mathematical physicists and most philosophers. With greater mathematical rigor it is possible to prove results about the theoretical structure of QFT independent of any particular Lagrangian. Axiomatic QFT provides clear conceptual frameworks within which precise questions and answers to interpretational issues can be formulated. There are three main axiomatic frameworks for QFT: Wightman QFT, Osterwalder-Schrader QFT, and algebraic QFT. The Wightman axioms use functional analysis and operator algebras; of the three frameworks, Wightman QFT is closest to LQFT since its axioms describe covariant field operators acting on a fixed Hilbert space. The Osterwalder-Schrader axioms use a functional integration approach to QFT. The algebraic QFT axioms use \(C^*\)-algebras to model local observables. However, axiomatic QFT approaches are sorely lacking with regard to building empirically adequate models.

Unlike quantum mechanics, which has a canonical mathematical framework in terms of von Neumann’s Hilbert space formulation, QFT has no canonical mathematical framework. Even though there is a canonical mathematical framework for quantum mechanics, there are many interpretations of that framework, e.g., many-worlds, GRW, Copenhagen, Bohmian, etc. QFT thus has two levels that require interpretation: (1) which QFT framework should be the focus of foundational efforts, if any, and (2) how that preferred framework should be interpreted. Since (1) involves issues about mathematical rigor and pragmatic virtues, it directly bears on the focus of this article. The lack of a canonical formulation of QFT threatens to impede any metaphysical or epistemological lessons that might be learned from QFT.

One view is that these two approaches to QFT, the mathematically rigorous axiomatic approach and the pragmatic / empirically adequate LQFT approach, are rival research programs (see David Wallace (2006, 2011) and Doreen Fraser (2009, 2011)), though Swanson (2017) argues that they are not rival programs. Fraser (2009, 2011) argues that the interpretation of QFT should be based on the mathematically rigorous approach of axiomatic formulations of QFT. By contrast, Wallace (2006, 2011) argues that an interpretation of QFT should be based on LQFT. (Wallace, in 2006, calls his preferred QFT framework conventional QFT (CQFT), but changes his terminology to LQFT in Wallace 2011). Swanson (2017) and Egg, Lam, and Oldofredi (2017) are good overviews of the debate between Fraser and Wallace (for an extended analysis see James Fraser 2016). The debate covers many different philosophical topics in QFT, which makes it more challenging to pin down exactly what is essential to the arguments for both sides (for one view of what is essential for the debate, see Egg, Lam, and Oldofredi 2017). One issue is the role of internal consistency established by mathematical rigor versus empirical adequacy. Wallace argues that LQFT is empirically adequate since it can describe the forces of the Standard Model. LQFT has a collection of calculational techniques including perturbation theory, path integrals, and renormalization group methods. One criticism of LQFT is that the calculational techniques it uses are not mathematically rigorous. Wallace argues that renormalization group methods put perturbative QFT, an approach within LQFT, on mathematically rigorous ground and remove the main motivation for axiomatic QFT.

5.1.1 Perturbative Quantum Field Theory

What follows is a rough overview of perturbative QFT (see James Fraser 2016 for more details). Since exactly solvable free QFT models are more mathematically tractable than interacting QFT models, perturbative QFT treats interactions as perturbations to the free Lagrangian, assuming weak coupling. For strongly coupled theories, such as quantum chromodynamics at low energies, that idealization fails. Using perturbation theory, approximate solutions for interacting QFT models can be calculated by expanding S-matrix elements in a power series in terms of a coupling parameter. However, the higher order terms will often contain divergent integrals. Typically, renormalization of the higher order terms is required to get finite predictions. Two sources of divergent integrals are infrared (long distance, low energy) and ultraviolet (short distance, high energy) divergences. Infrared divergences are often handled by imposing a long distance cutoff or putting a small non-zero lower limit on the integral over momentum. A sharp cutoff at low momentum is equivalent to putting the theory in a finite volume box. Imposing asymptotic boundary conditions and restricting the observables to long distance “friendly” observables also help with infrared divergences. Ultraviolet divergences are often handled by imposing a momentum cutoff to remove high momentum modes of a theory. That is equivalent to freezing out variations in the fields at arbitrarily short length scales. Putting the system on a lattice with some finite spacing can also help deal with the high momentum modes. Dimensional regularization, where the integral measure is redefined to range over a fractional number of dimensions, can help with both infrared and ultraviolet divergences. The last step in renormalization is to remove the cutoffs by taking the continuum limit (i.e., removing the high momentum cutoff) and the infinite volume limit (i.e., removing the low momentum cutoff). The hope is that the limits are well-defined and that there are finite expressions of the series at each order.
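The logic of a momentum cutoff followed by subtraction can be seen in a toy integral (not a genuine QFT amplitude): \(I(\Lambda, m) = \int_0^{\Lambda} k\, dk/(k^2 + m^2)\) diverges logarithmically as the cutoff \(\Lambda\) is removed, but subtracting the same integral evaluated at a reference mass \(\mu\) (a crude “counterterm”) leaves a finite, cutoff-independent remainder \(\frac{1}{2}\ln(\mu^2/m^2)\). A sketch under these illustrative assumptions:

```python
import math

def I(cutoff, m, n=200000):
    """Toy 'loop integral' int_0^cutoff k dk / (k^2 + m^2), evaluated by the
    midpoint rule; it grows like ln(cutoff) as the cutoff is removed."""
    h = cutoff / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        total += k / (k * k + m * m) * h
    return total

m, mu = 1.0, 2.0                     # mass and reference scale (illustrative values)
for cutoff in (10.0, 100.0, 1000.0):
    bare = I(cutoff, m)
    subtracted = bare - I(cutoff, mu)   # subtract the 'counterterm' integral
    print(f"cutoff={cutoff:7.1f}: bare = {bare:.4f}, subtracted = {subtracted:.6f}")
```

The bare integral keeps growing with the cutoff, while the subtracted combination settles toward \(\frac{1}{2}\ln(\mu^2/m^2) = \ln 2 \approx 0.6931\), a cutoff-independent quantity.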

James Fraser (2016) identifies three problems for perturbative QFT. (1) The rigor problem: perturbative QFT is not mathematically rigorous which makes it difficult to analyze and interpret. (2) The consistency problem: perturbative calculations rest on the interaction picture existing, but Haag’s theorem seems to show that the interaction picture does not exist. (3) The justification problem: renormalization lacks physical motivation and appears ad hoc. James Fraser argues that (1) and (2) do not pose severe problems for perturbative QFT because it is not attempting to build continuum QFT models. It is building approximate physical quantities – not mathematical structures that are to be interpreted as physical systems.

Baker (2016) and Swanson (2017) note that LQFT makes false or unproven assumptions such as the convergence of certain infinite sums in perturbation theory. Dyson (1952) gives a heuristic argument that quantum electrodynamic perturbation series do not converge. Baker and Swanson also argue that the use of long distance cutoffs is at odds with cosmological theory and astronomical observations, which suggest that the universe is spatially infinite. Even in the weak coupling limit where perturbation theory can be formally applied, it is not clear when perturbative QFT gives an accurate approximation of the underlying physics. In the interacting \(\phi^4\) theory, when the Minkowski spacetime dimension is less than 4, the theory is nontrivial, but when the dimension is greater than 4, the renormalized perturbation series is asymptotic to a free field theory even though it appears to describe nontrivial interactions. When there are 4 dimensions, the theory is also trivial if additional technical assumptions hold (see Swanson 2017 (p. 3) for more details).

5.1.2 Path Integrals in Quantum Field Theory

Another area where questions of mathematical rigor arise within perturbative QFT is the use of path integrals. The S-matrix power series expansion contains integrals over momentum space and this is where path integrals / Feynman diagrams have been helpful for making calculations. The key concept is the partition function \(Z\), which is defined as a functional integral involving the action, which is itself an integral of the Lagrangian. The following details come mainly from Hancox-Li (2017). More specifically, the action is a functional of quantum fields. The functional integral over the action ranges over all possible combinations of the quantum fields’ values over spacetime. Informally, the sum is being taken over all possible field configurations. As Swanson (2017) notes, the path integral requires choosing a measure over an infinite dimensional path space, which is only mathematically well-defined in special cases. For example, if the system is formulated on a hypercubic lattice, then the measure can be defined (see section 1.2 of James Fraser 2016). Another way of having a well-defined measure is to restrict attention to a finite dimensional subspace. But if functions are allowed to vary arbitrarily on short length scales, then the integral ceases to be well-defined (Wallace 2006, p. 42). All of the correlation functions (i.e., vacuum state expectation values of the fields at different spacetime points) can be derived from the partition function \(Z\). So, given \(Z\), all empirical quantities associated with the Lagrangian can be calculated, e.g., scattering cross-sections. Finding \(Z\) amounts to a solution of LQFT. \(Z\) can be expanded in a Taylor series in the coupling constant. When this is done, two types of divergences can occur: (1) individual terms of the perturbation series can diverge and/or (2) the perturbation series itself is divergent, though the series may be an asymptotic series.
To deal with (1), physicists employ two procedures (Hancox-Li 2017, pp. 344–345): (i) regularization, which reduces the number of degrees of freedom via dimensional regularization, momentum cutoffs, or a lattice formulation, and (ii) renormalization, which adds counterterms to compensate for the regularization in (i). But this construction is purely formal and not mathematically well-defined: the rules used to manipulate the Lagrangian, and hence the partition function, are not well-defined.
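The cutoff-and-counterterm pattern in (i)–(ii) can be illustrated with a deliberately simple toy integral (an illustrative sketch only, not an actual Feynman integral of LQFT): the integral below diverges logarithmically as the momentum cutoff is removed, but subtracting a cutoff-dependent counterterm fixed at a reference scale leaves a finite remainder.

```python
import math

# Toy stand-in for a UV-divergent loop integral (not a real QED diagram):
# regulated(L) = ∫_0^L k / (k^2 + m^2) dk = (1/2) ln((L^2 + m^2) / m^2),
# which diverges logarithmically as the cutoff L is removed.
def regulated(cutoff, m=1.0):
    return 0.5 * math.log((cutoff**2 + m**2) / m**2)

# Counterterm: the cutoff-dependent piece, fixed at a reference scale mu.
def counterterm(cutoff, mu=1.0):
    return 0.5 * math.log(cutoff**2 / mu**2)

cutoffs = (1e2, 1e4, 1e6)
# The regulated integral grows without bound as the cutoff increases...
values = [regulated(L) for L in cutoffs]
# ...but the renormalized combination converges as the cutoff is removed.
renormalized = [regulated(L) - counterterm(L) for L in cutoffs]
print(values)
print(renormalized)
```

The renormalized values tend to a finite limit (here \(\tfrac{1}{2}\ln(\mu^2/m^2)\)), mirroring how the counterterm in (ii) absorbs the divergence introduced in (i).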

5.1.3 Renormalization Group Techniques

Wallace (2011) argues that renormalization group techniques have overcome the mathematical deficiencies of older renormalization techniques (for more details on the renormalization group, see Butterfield and Bouatta 2015, James Fraser 2016, and Hancox-Li 2015a, 2015b, 2017). According to Wallace, renormalization group methods put LQFT on the same level of mathematical rigor as other areas of theoretical physics; they provide a solid theoretical framework that is explanatorily rich in particle physics and condensed matter physics, so the worries motivating axiomatic QFT have been resolved. Renormalization group techniques presuppose that QFT will fail at some short length scale, but the empirical content of LQFT is largely insensitive to the details at such scales. Doreen Fraser (2011) argues that renormalization group methods help articulate the empirical content of QFT, but that the renormalization group has no significance for the theoretical content of QFT insofar as it does not tell us whether we should focus on LQFT or AQFT. James Fraser (2016) and Hancox-Li (2015b) argue that the renormalization group does more than deliver empirical predictions in QFT: it gives us methods for studying the behavior of physical systems at different energy scales, namely how properties of QFT models depend, or do not depend, on small scale structure, and it provides a non-perturbative explanation of the success of perturbative QFT. Hancox-Li (2015b) discusses how mathematicians working in constructive QFT use non-perturbative approximations with well-controlled error bounds to prove the existence or non-existence of ultraviolet fixed points. Hancox-Li argues that the renormalization group explains perturbative renormalization non-perturbatively; it can tell us whether certain Lagrangians have an ultraviolet limit that satisfies the axioms a QFT should satisfy.
Thus, the use of the renormalization group in constructive QFT can provide additional dynamical information (e.g., whether a certain dynamics can occur in continuous spacetime) that a purely axiomatic approach does not provide.
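The coarse-graining idea behind the renormalization group can be illustrated with a standard textbook example from statistical mechanics rather than QFT (an analogue chosen for transparency, not a QFT calculation): for the one-dimensional Ising chain, summing out every other spin yields an exact recursion for the coupling, and iterating it drives the coupling to a fixed point, showing how microscopic details are washed out at larger scales.

```python
import math

def decimate(K):
    """One real-space renormalization step for the 1D Ising chain:
    tracing out every other spin gives the exact recursion
    K' = (1/2) ln cosh(2K)."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.0            # dimensionless nearest-neighbor coupling J/kT
flow = [K]
for _ in range(6):
    K = decimate(K)
    flow.append(K)

# The coupling flows monotonically toward the trivial fixed point K* = 0:
# under repeated coarse-graining the small-scale structure becomes irrelevant.
print(flow)
```

Each step halves the resolution while preserving the long-distance physics exactly, which is the sense in which renormalization group methods relate descriptions at different length scales.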

5.2 Middle Grounds

Egg, Lam, and Oldofredi (2017) argue that the main disagreement between Doreen Fraser and David Wallace is over the very definition of QFT. Fraser takes QFT to be the union of quantum theory and special relativity. If QFT = QM + SR, as Fraser maintains, then LQFT fails to satisfy that criterion, since it employs cutoffs that violate Poincaré covariance. For Wallace, the violation of Poincaré covariance is not as worrisome: QFT is not a truly fundamental theory, since gravity is absent, and Wallace is more interested in what QFT’s approximate truth tells us about the world. LQFT gives us an effective ontology. The renormalization group tells us that QFT cannot be trusted in the high energy regimes where quantum gravity can be expected to apply, i.e., at the Planck length scale, where gravitational effects cannot be ignored. The violation of Poincaré covariance via cutoffs may not amount to much if the fundamental quantum theory of gravity imposes some real cutoff, according to Wallace. There are, however, other options to consider.

5.2.1 Pluralistic Approaches

Some philosophers have rejected the seemingly either-or nature of the debate between Wallace and Fraser and embraced more pluralistic views, on which different formulations of QFT might be appropriate for different philosophical questions. Baker (2016) advocates trusting AQFT or LQFT in domains of inquiry where their respective idealizations are unproblematic. For example, if the domain to be interpreted is the Standard Model, then LQFT is the appropriate framework. Swanson (2017) analyzes LQFT, AQFT, and Wightman QFT and argues that the three approaches are complementary and have no deep incompatibilities. LQFT supplies various powerful predictive tools and explanatory schemas: it can account for gauge theories, the Standard Model of particle physics, the weak and strong nuclear forces, and the electromagnetic force. However, its collection of calculational techniques is not entirely mathematically well-defined. LQFT provides QFT theories at only certain length scales and cannot make use of unitarily inequivalent representations, since its cutoffs leave only finitely many degrees of freedom, rendering all representations unitarily equivalent by the Stone-von Neumann theorem. Axiomatic QFT is supposed to provide a rigorous description of fundamental QFT at all length scales, but that conflicts with the effective field theory viewpoint on which QFT is only defined at certain lengths. Yet if axiomatic QFT captures what all QFTs have in common, then effective field theories should be captured by it as well. Axiomatic QFT gives a precise regimentation of LQFT, but it is unclear whether axiomatic QFT is fully faithful to the LQFT picture. Within the axiomatic approach, Wightman QFT has many sophisticated tools for building concrete models of QFT, in addition to rigorously proving structural results like the PCT theorem and the spin-statistics theorem. But Wightman QFT relies on localized gauge-dependent field operators that do not directly represent physical properties.
AQFT might provide a more physically transparent, gauge-free description of QFT. It has topological tools to define global quantities like temperature, energy, charge, and particle number, which make use of unitarily inequivalent representations. But AQFT has difficulty constructing models. While LQFT is more mathematically amorphous, there are recent algebraic constructions of low dimensional interacting models with no known Lagrangian, which suggest that AQFT is more general than LQFT (Swanson 2017, p. 5). However, LQFT provides constructive QFT with guidance on correctly building models corresponding to the Lagrangians that particle physicists use with great empirical success (Hancox-Li 2017, p. 353).

5.2.2 Constructive Quantum Field Theory

Constructive QFT is an attempt to mediate between LQFT and axiomatic QFT by rigorously constructing specific interacting models of QFT. The nontrivial solutions it constructs are supposed to correspond to Lagrangians that particle physicists use; this ensures that the various axiomatic systems have a physical connection to the world via the empirical success of LQFT. While constructive QFT has accomplished this for some models of dimension less than 4, it has not yet been accomplished for a 4 dimensional Lagrangian that particle physicists use. Constructive QFT tries to construct the functional integral measures for path integrals by shifting from Minkowski spacetime to Euclidean spacetime via a Wick rotation (what follows is based on section four of Hancox-Li 2017). In Euclidean field theory, the Schwinger functions, which are defined in terms of \(Z\), must satisfy the Osterwalder-Schrader axioms; the measure in \(Z\) is a Gaussian measure on the space of tempered distributions, the dual of the Schwartz space of rapidly decreasing functions. The Osterwalder-Schrader axioms are related to the Wightman axioms by the Osterwalder-Schrader Reconstruction Theorem, which states that any set of functions satisfying the Osterwalder-Schrader axioms determines a unique Wightman model whose Schwinger functions form that set. The theorem allows constructive field theorists to exploit the advantages of Euclidean space for defining a measure while ensuring that they are constructing models that exist in Minkowski spacetime. It still has to be verified that the solution corresponds to the renormalized perturbation series that physicists derive for the corresponding Lagrangian in LQFT. The challenge is to translate something that is not mathematically well-defined into something that is, while showing that the “solutions” of LQFT can be reproduced within a framework consistent with a set of axioms.
This is crucial since, as Swanson (2017) points out, it is unclear whether perturbation theory is an accurate guide for the underlying physics described by LQFT. This leads Hancox-Li (2017) to argue that mathematically unrigorous LQFT is relevant to the rigorous program of constructive QFT in building rigorous interacting models of QFT. Those models correspond to the Lagrangians of interest to particle physicists. Hence, LQFT can inform the theoretical content of QFT.
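The Euclidean strategy just described can be summarized schematically (a standard textbook presentation of the outline above, not a rendering of any particular source):

```latex
% Wick rotation: analytically continue time, t \to -i\tau, so that the
% oscillatory Minkowski weight becomes a damped Euclidean weight:
e^{iS_M[\phi]} \;\longrightarrow\; e^{-S_E[\phi]} .
% Schwinger functions: moments of the (formal) Euclidean measure,
S_n(x_1,\ldots,x_n)
  \;=\; \frac{1}{Z}\int \phi(x_1)\cdots\phi(x_n)\,
        e^{-S_E[\phi]}\,\mathcal{D}\phi ,
\qquad
Z \;=\; \int e^{-S_E[\phi]}\,\mathcal{D}\phi .
% Osterwalder-Schrader reconstruction: if the S_n satisfy the
% Osterwalder-Schrader axioms (Euclidean invariance, reflection
% positivity, symmetry, ...), they arise by analytic continuation from
% the Wightman functions of a unique Wightman model in Minkowski
% spacetime.
```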

Another tool of constructive QFT is the theory of asymptotic series, which can tell us which function the perturbative series is asymptotic to, something perturbative QFT on its own cannot do. Constructive QFT tries to determine properties of non-perturbative solutions to the equations of motion which guarantee that certain methods of summing asymptotic expansions will lead to a unique solution (see Hancox-Li 2017, pp. 349–350, for more details). Is the rigorously defined partition function \(Z\) asymptotic to the renormalized perturbative series? Roughly, a series is asymptotic to a function when, at each fixed order, the difference between the function and the partial sum up to that order is small, on the order of the first omitted term, as the expansion parameter goes to zero. But many different functions can have the same asymptotic expansion. Ideally, we want there to be a unique function, because then there is a unique non-perturbative solution. The concept of strong asymptoticity requires that the difference between the function and each order of the series be smaller than what asymptoticity alone requires. A strongly asymptotic series uniquely determines a function: if there is a strongly asymptotic series, then the function can be uniquely reconstructed from the series by Borel summation. The Borel transform of the series is obtained by dividing the coefficient of each term by the factorial of that term’s order; integrating the resulting function against a suitable exponential kernel then recovers the exact function. In constructive QFT, the goal is to associate a unique function with a renormalized perturbation series, and some form of Borel summability is the main candidate so far, though the Borel transform cannot remove large-order divergences.
The asymptotic behavior of the renormalized perturbation series can be extremely sensitive to the choice of regularization, and that choice can render the series asymptotic to a free field theory even if it appears to describe nontrivial interactions (see Swanson 2017, p. 11, for more details).
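The divergent-but-asymptotic behavior discussed above can be seen concretely in a zero-dimensional toy “path integral”, a standard classroom analogue (with no spacetime, none of LQFT’s measure-theoretic difficulties arise): the perturbation series for the toy partition function has factorially growing coefficients, so its partial sums first approach the exact answer and then blow up, which is precisely the profile of an asymptotic series with an optimal truncation order.

```python
import math

def exact_Z(g, xmax=10.0, steps=20000):
    """Toy partition function Z(g) = (2*pi)^(-1/2) ∫ exp(-x^2/2 - g*x^4) dx,
    computed by the trapezoid rule: the zero-dimensional analogue of a
    phi^4 functional integral, where the 'measure' is unproblematic."""
    h = 2.0 * xmax / steps
    total = 0.0
    for i in range(steps + 1):
        x = -xmax + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-x * x / 2.0 - g * x ** 4)
    return total * h / math.sqrt(2.0 * math.pi)

def series_term(n, g):
    """n-th term of the perturbation series: (-g)^n (4n-1)!! / n!,
    from the Gaussian moment <x^(4n)> = (4n-1)!!. The coefficients grow
    factorially, so the series has zero radius of convergence."""
    double_fact = 1
    for k in range(1, 4 * n, 2):
        double_fact *= k
    return (-g) ** n * double_fact / math.factorial(n)

g = 0.02
Z = exact_Z(g)
partial, errors = 0.0, []
for n in range(12):
    partial += series_term(n, g)
    errors.append(abs(partial - Z))

# Errors shrink up to an optimal truncation order, then grow without bound.
print(errors)
```

Because this toy series is alternating with Borel-summable growth, the exact \(Z(g)\) can in fact be recovered from the divergent coefficients, which is the behavior constructive QFT hopes to establish for renormalized perturbation series.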


  • Alvarez, J., 2020, “A Mathematical Presentation of Laurent Schwartz’s Distributions”, Surveys in Mathematics and its Applications, 15: 1–137.
  • Antoniou, I. and Prigogine, I., 1993, “Intrinsic Irreversibility and Integrability of Dynamics”, Physica A, 192: 443–464.
  • Baker, D., 2016, “Philosophy of Quantum Field Theory”, Oxford Handbooks Online, doi: 10.1093/oxfordhb/9780199935314.013.33.
  • Birkhoff, G., and von Neumann, J., 1936, “The Logic of Quantum Mechanics”, Annals of Mathematics, 37: 823–843.
  • Berezanskii, J. M., 1968, Expansions in Eigenfunctions of Selfadjoint Operators, Providence, RI: American Mathematical Society, Translations of Mathematical Monographs, 17. [First published in Russian in 1965.]
  • Bogoliubov, N. N., Logunov, A. A., and Todorov, I. T., 1975, Introduction to Axiomatic Quantum Field Theory, Reading, Massachusetts: The Benjamin/Cummings Publishing Company, Inc. [First published in Russian in 1969.]
  • Böhm, A., 1966, “Rigged Hilbert Space and Mathematical Description of Physical Systems”, Physica A, 236: 485–549.
  • Böhm, A. and Gadella, M., 1989, Dirac Kets, Gamow Vectors and Gel’fand Triplets, New York: Springer-Verlag.
  • Böhm, A., Maxson, S., Loewe, M. and Gadella, M., 1997, “Quantum Mechanical Irreversibility”, in Lectures in Theoretical Physics, 9A: Mathematical Methods of Theoretical Physics, New York: Wiley.
  • Bratteli, O. and Robinson, D.W., 1979–1981, Operator Algebras and Quantum Statistical Mechanics, Volumes 1–2, New York: Springer-Verlag.
  • Brunetti, R., Dütsch, M., Fredenhagen, K., 2009, “Perturbative Algebraic Quantum Field Theory and the Renormalization Groups”, Advances in Theoretical and Mathematical Physics, 13 (5): 1541–1599.
  • Butterfield, J., Bouatta, N., 2015, “Renormalization for philosophers”, In: Bigaj, T., Wüthrich, C. (eds.) Metaphysics in Contemporary Physics, 104: 437–485.
  • Castagnino, M. and Gadella, M., 2003, “The role of self-induced decoherence in the problem of the classical limit of quantum mechanics,” Foundations of Physics, 36(6): 920–952.
  • Colombeau, J., 1992, Multiplication of Distributions, Berlin: Springer-Verlag.
  • –––, 2007, “Mathematical Problems on Generalized Functions and the Canonical Hamiltonian Formalism”, 15 pp. eprint arXiv:0708.3425.
  • Colombeau, J., and Gsponer, A., 2008, “The Heisenberg-Pauli Canonical Formalism of Quantum Field Theory in the Rigorous Setting of Nonlinear Generalized Functions (Part I)”, 107 pp. eprint arXiv:0807.0289v2.
  • Colombeau, J.F., Gsponer, A. and Perrot, B., 2008, “Nonlinear generalized functions and the Heisenberg-Pauli foundations of Quantum Field Theory”, 20 pp. eprint arXiv:0705.2396.
  • Connes, A., 1994, Noncommutative Geometry, San Diego: Academic Press.
  • DeWitt-Morette, C., Maheshwari, A. and Nelson, B., 1979, “Path Integration in Non-Relativistic Quantum Mechanics”, Physics Reports, 50C: 255–372.
  • Dirac, P. A. M., 1927, “The Quantum Theory of the Emission and Absorption of Radiation”, Proceedings of the Royal Society of London, Series A, 114: 243–265. [It is reprinted in (Schwinger 1958).]
  • –––, 1930, The Principles of Quantum Mechanics, Oxford: Clarendon Press.
  • –––, 1933, “The Lagrangian in Quantum Mechanics”, Physikalische Zeitschrift der Sowietunion, 3: 64–72.
  • –––, 1939, “A New Notation for Quantum Mechanics”, Proceedings of the Cambridge Philosophical Society, 35: 416–418.
  • –––, 1943, “Quantum Electrodynamics”, Communications of the Dublin Institute for Advanced Studies, A1: 1–36.
  • Dixmier, J., 1981, Von Neumann Algebras, Amsterdam: North-Holland Publishing Company. [First published in French in 1957: Les Algèbres d’Opérateurs dans l’Espace Hilbertien, Paris: Gauthier-Villars.]
  • Dyson, F., 1952, “Divergence of perturbation theory in quantum electrodynamics”, Physical Review, 85: 631–632.
  • Egg, M., Lam, V., Oldofredi, A., 2017, “Particles, cutoffs, and inequivalent representations”, Foundations of Physics, 47 (3): 453–466.
  • Fell, J. M. G., 1960, “The Dual Spaces of  \(C^*\)-Algebras”, Transactions of the American Mathematical Society, 94: 365–403.
  • Feynman, R. P., 1948, “Space-Time Approach to Non-Relativistic Quantum Mechanics”, Reviews of Modern Physics, 20: 367–387. [It is reprinted in (Schwinger 1958).]
  • Fleming, G., 2002, “Comments on Paul Teller’s Book ‘An Interpretive Introduction to Quantum Field Theory’”, in M. Kuhlmann, H. Lyre and A. Wayne (eds.), Ontological Aspects of Quantum Field Theory, River Edge, NJ: World Scientific: 135–144.
  • Franssens, G., 2013, “On the impossibility of the convolution of distributions”, CUBO A Mathematical Journal 14(2): 71–77.
  • Fraser, D., 2009, “Quantum Field Theory: Undetermination, Inconsistency, and Idealization”, Philosophy of Science, 76: 536–567.
  • –––, 2011, “How to Take Particle Physics Seriously: A Further Defence of Axiomatic Quantum Field Theory”, Studies in History and Philosophy of Modern Physics, 42: 126–135.
  • Fraser, J., 2016, What is Quantum Field Theory?, Ph.D. Dissertation, University of Leeds.
  • Fuss, I. G. and Filinkov, A., 2014, “An Introduction to Generalised Functions in Periodic Quantum Theory”, 21 pp., eprint arXiv:1406.3436.
  • Gelfand, I. and Neumark, M., 1943, “On the Imbedding of Normed Rings into the Ring of Operators in Hilbert Space”, Recueil Mathématique [Matematicheskii Sbornik] Nouvelle Série, 12 [54]: 197–213. [Reprinted in \(C^*\)-algebras: 1943–1993, in the series Contemporary Mathematics, 167,  Providence, R.I. : American Mathematical Society, 1994.]
  • Gelfand, I. and Shilov, G. E., 1977, Generalized Functions, Volume 2, New York: Academic Press. [First published in Russian in 1958.]
  • –––, 1977, Generalized Functions, Volume 3, New York: Academic Press. [First published in Russian in 1958.]
  • Gelfand, I. and Vilenkin, N. Ya., 1964, Generalized Functions, Volume 4, New York: Academic Press. [First published in Russian in 1961.]
  • Grosser, M., Kunzinger, M., Oberguggenberger, M., 2001, Geometric Theory of Generalized Functions with Applications to General Relativity, Kluwer Academic Publishers.
  • Grothendieck, A., 1955, “Produits Tensoriels Topologiques et Espaces Nucléaires”, Memoirs of the American Mathematical Society, 16: 1–140.
  • Haag, R. and Kastler, D. 1964, “An Algebraic Approach to Quantum Field Theory”, Journal of Mathematical Physics, 5: 848–861.
  • Haag, R., 1955, “On Quantum Field Theories”, Danske Videnskabernes Selskab Matematisk-Fysiske Meddelelser, 29 (12): 1–37.
  • –––, 1996, Local Quantum Physics, second revised edition, Berlin: Springer-Verlag.
  • Halvorson, H., 2007, “Algebraic Quantum Field Theory”, in J. Butterfield and J. Earman (eds.), Philosophy of Physics, Amsterdam: Elsevier: 731–922.
  • Hancox-Li, L., 2015a, Moving Beyond “Theory T”: The Case of Quantum Field Theory, Ph.D. Dissertation, University of Pittsburgh.
  • –––, 2015b, “Coarse-Graining as a Route to Microscopic Physics: The Renormalization Group in Quantum Field Theory”, Philosophy of Science, 82 (5): 1211–1223.
  • –––, 2017, “Solutions in Constructive Field Theory”, Philosophy of Science, 84 (2): 335–358.
  • Holevo, A. S., 2011, Probabilistic and Statistical Aspects of Quantum Theory, Pisa: Scuola Normale Superiore.
  • Holland, S. S. Jr., 1970, “The Current Interest in Orthomodular Lattices”, in Trends in Lattice Theory, J. C. Abbott (ed.), New York: Van Nostrand: 41–116. [Reprinted in The Logico-Algebraic Approach to Quantum Mechanics, Vol. 1, C. A. Hooker (ed.), New York: Academic Press, 1972: 437–496]
  • Horuzhy, S. S., 1990, Introduction to Algebraic Quantum Field Theory, Dordrecht: Kluwer Academic Publishers.
  • König, H., 1953, “Neue Begründung der Theorie der ‘Distributionen’ von L. Schwartz”, Mathematische Nachrichten, 9(3): 129–148.
  • –––, 1955, “Multiplikation von Distributionen. I,” Mathematische Annalen 128: 420–452.
  • Kronz, F. M., 1998, “Nonseparability and Quantum Chaos”, Philosophy of Science, 65: 50–75.
  • –––, 1999, “Bohm’s Ontological Interpretation and Its Relation to Three Formulations of Quantum Mechanics”, Synthese, 117: 31–52.
  • –––, 2000, “A Model of a Chaotic Open Quantum System”, Proceedings of the 1998 Biennial Meeting of the Philosophy of Science Association: 446–453.
  • Kronz, F., and Lupher, T., 2005, “Unitarily Inequivalent Representations in Algebraic Quantum Theory”, International Journal of Theoretical Physics, 44 (8): 1239–1258.
  • Loomis, L., 1955, “The Lattice-Theoretic Background of the Dimension Theory of Operator Algebras”, Memoirs of the American Mathematical Society, 18: 1–36.
  • Lupher, T., 2018, “The Limits of Physical Equivalence in Algebraic Quantum Field Theory”, British Journal for the Philosophy of Science, 69 (2): 553–576.
  • Mandl, F., 1959, Introduction to Quantum Field Theory, New York: Wiley.
  • Nagel, B., 1989, “Introduction to Rigged Hilbert Spaces”, in E. Brändas and N. Elander (eds), Resonances (Springer Lecture Notes in Physics: Volume 325), Berlin: Springer: 1–10.
  • Nagy, K. L., 1966, State Vector Spaces with Indefinite Metric in Quantum Field Theory, Groningen: P. Noordhoff Ltd.
  • Nedeljkov, M., Pilipović, S., Scarpalezos, D., 1998, Linear Theory of Colombeau’s Generalized Functions, Addison Wesley Longman.
  • Oberguggenberger, M., 1992, Multiplication of Distributions and Applications to Partial Differential Equations, Longman Higher Education.
  • Pavičić, M., 1992, “Bibliography on Quantum Logics and Related Structures”, International Journal of Theoretical Physics, 31: 373–461.
  • Pauli, W., 1943, “On Dirac’s New Method of Field Quantizations”, Reviews of Modern Physics, 15: 175–207.
  • Rédei, M., 1998, Quantum Logic in Algebraic Approach, Dordrecht: Kluwer Academic Publishers.
  • Rédei, M. and Stöltzner, M. (eds), 2001, John von Neumann and the Foundations of Quantum Physics, Vol. 8, Dordrecht: Kluwer Academic Publishers.
  • Rivers, R. J., 1987, Path Integral Methods in Quantum Field Theory, Cambridge: Cambridge University Press.
  • Roberts, J. E., 1966, “The Dirac Bra and Ket Formalism”, Journal of Mathematical Physics, 7: 1097–1104.
  • Ruetsche, L., 2003, “A Matter of Degree: Putting Unitary Inequivalence to Work”, Philosophy of Science, 70: 1329–1342.
  • –––, 2011, Interpreting Quantum Theories, Oxford: Oxford University Press.
  • Salmhofer, M., 2007, Renormalization: An Introduction, Berlin: Springer.
  • Scharf, G., 2014, Finite Quantum Electrodynamics – The Causal Approach (3rd edition), Dover Publications.
  • Schrödinger, E., 1926, “On the Relation of the Heisenberg-Born-Jordan Quantum Mechanics and Mine”, Annalen der Physik, 79: 734–756.
  • –––, 1928, Collected Papers on Wave Mechanics, London: Blackie & Son.
  • Schwartz, L., 1945, “Généralisation de la Notion de Fonction, de Dérivation, de Transformation de Fourier et Applications Mathématiques et Physiques”, Annales de l’Université de Grenoble, 21: 57–74.
  • –––, 1951–1952, Théorie des Distributions, Publications de l’Institut de Mathématique de l’Université de Strasbourg, Volumes 9–10, Paris: Hermann.
  • –––, 1954, “Sur l’impossibilité de la multiplication des distributions”, Comptes Rendus de l’Académie des Sciences, 239: 847–848.
  • Schwinger, J. S., 1958, Selected Papers on Quantum Electrodynamics, New York: Dover.
  • Segal, I. E., 1947a, “Irreducible Representations of Operator Algebras”, Bulletin of the American Mathematical Society, 53: 73–88.
  • –––, 1947b, “Postulates for General Quantum Mechanics”, Annals of Mathematics, 48: 930–948.
  • –––, 1959, “Foundations of the Theory of Dynamical Systems of Infinitely Many Degrees of Freedom I”, Danske Videnskabernes Selskab Matematisk-Fysiske Meddelelser, 31 (12): 1–39.
  • Steinmann, O., 2000, Perturbative Quantum Electrodynamics and Axiomatic Field Theory, Berlin, Heidelberg: Springer.
  • Sewell, G.L., 1986, Quantum Theory of Collective Phenomena, Oxford: Oxford University Press.
  • Streater, R. F. and Wightman, A. S., 1964, PCT, Spin and Statistics, and All That, New York: W. A. Benjamin.
  • Sunder, V. S., 1987, An Invitation to von Neumann Algebras, New York: Springer-Verlag.
  • Swanson, N., 2014, Modular Theory and Spacetime Structure in QFT, Ph.D. Dissertation, Princeton University.
  • –––, 2017, “A philosopher’s guide to the foundations of quantum field theory”, Philosophy Compass, 12 (5): e12414.
  • Treves, F., 1967, Topological Vector Spaces, Distributions and Kernels, Academic Press.
  • von Mises, R., 1981, Probability, Statistics and Truth, second revised English edition, New York: Dover; first published in German, Wahrscheinlichkeit, Statistik und Wahrheit, Berlin: Springer, 1928.
  • von Neumann, J., 1937, “Quantum Mechanics of Infinite Systems”, first published in (Rédei and Stöltzner 2001, 249–268). [A mimeographed version of a lecture given at Pauli’s seminar held at the Institute for Advanced Study in 1937, John von Neumann Archive, Library of Congress, Washington, D.C.]
  • –––, 1938, “On Infinite Direct Products”, Compositio Mathematica, 6: 1–77. [Reprinted in von Neumann 1961–1963, Vol. III).]
  • –––, 1955, Mathematical Foundations of Quantum Mechanics, Princeton, NJ: Princeton University Press. [First published in German in 1932: Mathematische Grundlagen der Quantenmechanik, Berlin: Springer.]
  • –––, 1961–1963, Collected Works, 6 volumes, A. H. Taub (ed.), New York: Pergamon Press.
  • –––, 1981, Continuous geometries with a transition probability, Halperin, I. (ed.), Providence: Memoirs of the American Mathematical Society.
  • –––, 2005, John von Neumann: Selected Letters, M. Rédei (ed.), Providence: American Mathematical Society.
  • Waerden, B. L. van der, (ed.), 1967, Sources of Quantum Mechanics, Amsterdam: North Holland Publishing Company.
  • Wallace, D., 2006, “In Defense of Naiveté: The Conceptual Status of Lagrangian Quantum Field Theory”, Synthese, 151: 33–80.
  • –––, 2011, “Taking Particle Physics Seriously: A Critique of the Algebraic Approach to Quantum Field Theory”, Studies in History and Philosophy of Modern Physics, 42: 116–125.
  • Weinberg, S., 1995, The Quantum Theory of Fields, New York: Cambridge University Press.
  • Wightman, A. S., 1956, “Quantum Field Theory in Terms of Vacuum Expectation Values”, Physical Review, 101: 860–866.
  • –––, 1976, “Hilbert’s Sixth Problem: Mathematical Treatment of the Axioms of Physics”, Proceedings of Symposia in Pure Mathematics, 28: 147–240.
  • Wightman, A.S. and Gårding, L., 1964, “Fields as Operator-Valued Distributions in Relativistic Quantum Theory”, Arkiv för Fysik, 28: 129–184.

Other Internet Resources

[Please contact the author with suggestions.]

Copyright © 2024 by
Fred Kronz <fkronz@nsf.gov>
Tracy Lupher <lupherta@jmu.edu>
