# George Boole

*First published Wed Apr 21, 2010; substantive revision Wed Apr 18, 2018*

George Boole (1815–1864) was an English mathematician and a
founder of the algebraic tradition in logic. He worked as a
schoolmaster in England and from 1849 until his death as professor of
mathematics at Queen’s University, Cork, Ireland. He revolutionized
logic by applying methods from the then-emerging field of symbolic
algebra to logic. Where traditional (Aristotelian) logic relied on
cataloging the valid syllogisms of various simple forms, Boole’s
method provided general algorithms in an algebraic language which
applied to an infinite variety of arguments of arbitrary
complexity. These results appeared in two major works,
*The Mathematical Analysis of Logic* (1847)
and
*The Laws of Thought* (1854).

- 1. Life and Work
- 2. The Context and Background of Boole’s Work In Logic
- 3. The Mathematical Analysis of Logic (1847)
- 3.1 Boole’s Version Of Aristotelian Logic
- 3.2 Class Symbols and Elective Symbols
- 3.3 Operations and Laws for Elective Symbols
- 3.4 Common Algebra
- 3.5 Impact of the Index Law
- 3.6 Equational Expressions of Categorical Propositions
- 3.7 Hypothetical Syllogisms
- 3.8 General Theorems of Boole’s Algebra in *MAL*

- 4. The Laws of Thought (1854)
- 5. Later Developments
- 6. Boole’s Methods
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries

## 1. Life and Work

George Boole was born November 2, 1815 in Lincoln, Lincolnshire, England, into a family of modest means, with a father who was evidently more of a good companion than a good breadwinner. His father was a shoemaker whose real passion was being a devoted dilettante in the realm of science and technology, one who enjoyed participating in the Lincoln Mechanics’ Institution; this was essentially a community social club promoting reading, discussions, and lectures regarding science. It was founded in 1833, and in 1834 Boole’s father became the curator of its library. This love of learning was clearly inherited by Boole. Without the benefit of an elite schooling, but with a supportive family and access to excellent books, in particular from Sir Edward Bromhead, FRS, who lived only a few miles from Lincoln, Boole was able to essentially teach himself foreign languages and advanced mathematics.

Starting at the age of 16 it was necessary for Boole to find gainful employment, since his father was no longer capable of providing for the family. After 3 years working as a teacher in private schools, Boole decided, at the age of 19, to open his own small school in Lincoln. He would be a schoolmaster for the next 15 years, until 1849 when he became a professor at the newly opened Queen’s University in Cork, Ireland. With heavy responsibilities for his parents and siblings, it is remarkable that he nonetheless found time during the years as a schoolmaster to continue his own education and to start a program of research, primarily on differential equations and the calculus of variations connected with the works of Laplace and Lagrange (which he studied in the original French).

There is a widespread belief that Boole was primarily a
logician—in reality he became a recognized mathematician well
before he had penned a single word about logic, all the while running
his private school to care for his parents and siblings. Boole’s
ability to read French, German and Italian put him in a good position
to start serious mathematical studies when, at the age of 16, he read
Lacroix’s *Calcul Différentiel*, a gift from his friend
Reverend G.S. Dickson of Lincoln. Seven years later, in 1838, he would
write his first mathematical paper (although not the first to be
published), “On certain theorems in the calculus of
variations,” focusing on improving results he had read in
Lagrange’s *Méchanique Analytique.*

In early 1839 Boole travelled to Cambridge to meet with the young
mathematician Duncan F. Gregory (1813–1844), the editor
of the *Cambridge Mathematical Journal*
(*CMJ*)—Gregory had founded this journal in 1837 and
edited it until his health failed in 1843 (he died in early 1844, at
the age of 30). Gregory, though only 2 years beyond his degree in
1839, became an important mentor to Boole. With Gregory’s support,
which included coaching Boole on how to write a mathematical paper,
Boole entered the public arena of mathematical publication in
1841.

Boole’s mathematical publications span the 24 years from 1841 to 1864, the year he died from pneumonia. Breaking these 24 years into three segments, the first 6 years (1841–1846), the second 8 years (1847–1854), and the last 10 years (1855–1864), we find that his published work on logic was entirely in the middle 8 years.

In his first 6 career years, Boole published 15 mathematical papers,
all but two in the *CMJ* and its 1846 successor, *The
Cambridge and Dublin Mathematical Journal*. He wrote on standard
mathematical topics, mainly differential equations, integration and the
calculus of variations. Boole enjoyed early success in using the new
symbolical method in analysis, a method which took a differential
equation, say:

\[ \frac{d^2 y}{dx^2} - \frac{dy}{dx} - 2y = \cos(x) \]

and wrote it in the form \(\mathrm{Operator}(y) = \cos(x)\). This was (formally) achieved by letting:

\[ D = d/dx,\quad D^2 = d^2/dx^2, \text{ etc.} \]

leading to an expression of the differential equation as:

\[ (D^2 - D - 2) y = \cos(x). \]

Now symbolical algebra came into play by simply treating the operator \(D^2 - D - 2\) as though it were an ordinary polynomial in algebra. Boole’s 1841 paper “On the integration of linear differential equations with constant coefficients” gave a nice improvement to Gregory’s method for solving such differential equations, an improvement based on a standard tool in algebra, the use of partial fractions.
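The operator method can be illustrated with a short SymPy sketch (modern notation, not Boole’s): the inverse operator \(1/(D^2 - D - 2)\) is split by partial fractions, the algebraic step at the heart of Boole’s 1841 improvement, and a particular solution is checked against the original equation. The particular solution written here was worked out by the method of undetermined coefficients; only the differential equation itself comes from the text.

```python
# Sketch of the "separation of symbols" method using SymPy.
import sympy as sp

x = sp.symbols('x')
D = sp.symbols('D')  # the operator d/dx, manipulated as if it were a number

# The algebraic heart of the method: partial fractions of the inverse
# operator 1/(D^2 - D - 2) = 1/((D - 2)(D + 1)).
print(sp.apart(1 / (D**2 - D - 2), D))

# A particular solution produced by the operator method; check it
# directly against  y'' - y' - 2y = cos(x).
y_p = -(3 * sp.cos(x) + sp.sin(x)) / 10
residual = sp.diff(y_p, x, 2) - sp.diff(y_p, x) - 2 * y_p - sp.cos(x)
print(sp.simplify(residual))  # 0, so y_p solves the equation
```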

In 1841 Boole also published his first paper on invariants, a paper that would strongly influence Eisenstein, Cayley, and Sylvester to develop the subject. Arthur Cayley (1821–1895), the future Sadlerian Professor in Cambridge and one of the most prolific mathematicians in history, wrote his first letter to Boole in 1844, complimenting him on his excellent work on invariants. He became a close personal friend, one who would go to Lincoln to visit and stay with Boole in the years before Boole moved to Cork, Ireland. In 1842 Boole started a correspondence with Augustus De Morgan (1806–1871) that initiated another lifetime friendship.

In 1843 the schoolmaster Boole finished a lengthy paper on
differential equations, combining an exponential substitution and
variation of parameters with the separation of symbols method. The
paper was too long for the *CMJ*—Gregory, and later De
Morgan, encouraged him to submit it to the Royal Society. The first
referee rejected Boole’s paper, but the second recommended it for the
Gold Medal for the best mathematical paper written in the years
1841–1844, and this recommendation was accepted. In 1844 the
Royal Society published Boole’s paper and awarded him the Gold
Medal—the first Gold Medal awarded by the Society to a mathematician.
The next year Boole read a paper at the annual meeting of the British
Association for the Advancement of Science at Cambridge in June 1845.
This led to new contacts and friends, in particular William Thomson
(1824–1907), the future Lord Kelvin.

Not long after starting to publish papers, Boole was eager to find a way to become affiliated with an institution of higher learning. He considered attending Cambridge University to obtain a degree, but was counselled that fulfilling the various requirements would likely seriously interfere with his research program, not to mention the problems of obtaining financing. Finally, in 1849, he obtained a professorship in a new university opening in Cork, Ireland. In the years he was a professor in Cork (1849–1864) he would occasionally inquire about the possibility of a position back in England.

The 8 year stretch from 1847 to 1854 starts and ends with Boole’s
two books on mathematical logic. In addition Boole published 24 more
papers on traditional mathematics during this period, while only one
paper was written on logic, that being in 1848. He was awarded an
honorary LL.D. degree by the University of Dublin in 1851, and this was
the title that he used beside his name in his 1854 book on logic.
Boole’s 1847 book, *Mathematical Analysis of Logic*, will be
referred to as *MAL*; the 1854 book, *Laws of Thought*,
as *LT*.

During the last 10 years of his career, from 1855 to 1864, Boole published 17 papers on mathematics and two mathematics books, one on differential equations and one on difference equations. Both books were highly regarded, and used for instruction at Cambridge. Also during this time significant honors came in:

- 1857 — Fellowship of the Royal Society
- 1858 — Honorary Member of the Cambridge Philosophical Society
- 1859 — Degree of DCL, honoris causa, from Oxford

Unfortunately his keen sense of duty led to his walking through a rainstorm in late 1864, and then lecturing in wet clothes. Not long afterwards, on December 8, 1864 in Ballintemple, County Cork, Ireland, he died of pneumonia, at the age of 49. Another paper on mathematics and a revised book on differential equations, giving considerable attention to singular solutions, were published post mortem.

The reader interested in an excellent and thorough account of
Boole’s personal life is referred to Desmond MacHale’s *George
Boole, His Life and Work*, 1985/2014, a source to which this article is
indebted.

- 1815 — Birth in Lincoln, England
- 1830 — His translation of a Greek poem printed in a local paper
- 1831 — Reads Lacroix’s *Calcul Différentiel*

Schoolmaster

- 1834 — Opens his own school
- 1835 — Gives public address on Newton’s achievements
- 1838 — Writes first mathematics paper
- 1839 — Visits Cambridge to meet Duncan Gregory, editor of the *Cambridge Mathematical Journal* (*CMJ*)
- 1841 — First four mathematical publications (all in the *CMJ*)
- 1842 — Initiates correspondence with Augustus De Morgan — they become lifelong friends
- 1844 — Correspondence with Cayley starts (initiated by Cayley) — they become lifelong friends
- 1844 — Gold Medal from the Royal Society for a paper on differential equations
- 1845 — Gives talk at the Annual Meeting of the British Association for the Advancement of Science, and meets William Thomson (later Lord Kelvin) — they become lifelong friends
- 1847 — Publishes *Mathematical Analysis of Logic*
- 1848 — Publishes his only paper on the algebra of logic

Professor of Mathematics

- 1849 — Accepts position as (the first) Professor of Mathematics at the new Queen’s University in Cork, Ireland
- 1851 — Honorary Degree, LL.D., from Trinity College, Dublin
- 1854 — Publishes *Laws of Thought*
- 1855 — Marriage to Mary Everest, niece of George Everest, Surveyor-General of India after whom Mt. Everest is named
- 1856 — Birth of Mary Ellen Boole
- 1857 — Elected to the Royal Society
- 1858 — Birth of Margaret Boole
- 1859 — Publishes *Differential Equations*; used as a textbook at Cambridge
- 1860 — Birth of Alicia Boole, who will coin the word “polytope”
- 1860 — Publishes *Difference Equations*; used as a textbook at Cambridge
- 1862 — Birth of Lucy Everest Boole
- 1864 — Birth of daughter Ethel Lilian Boole, who would write *The Gadfly*, an extraordinarily popular book in Russia after the 1917 revolution
- 1864 — Death from pneumonia, Cork, Ireland

## 2. The Context and Background of Boole’s Work In Logic

To understand how Boole developed, in such a short time, his
impressive algebra of logic, it is useful to understand the broad
outlines of the work on the foundations of algebra that had been
undertaken by mathematicians affiliated with Cambridge University in
the 1800s prior to the beginning of Boole’s mathematical publishing
career. An excellent reference for further reading connected to this
section is the annotated sourcebook *From Kant to Hilbert*,
1996, by William Ewald, which contains a complete copy of Boole’s
*Mathematical Analysis of Logic*.

The 19th century opened in England with mathematics in the doldrums. The English mathematicians had feuded with the continental mathematicians over the issues of priority in the development of the calculus, resulting in the English following Newton’s notation, and those on the continent following that of Leibniz. One of the obstacles to overcome in updating English mathematics was the fact that the great developments of algebra and analysis had been built on dubious foundations, and there were English mathematicians who were quite vocal about these shortcomings. In ordinary algebra, it was the use of negative numbers and imaginary numbers that caused concern.

The first major attempt among the English to clear up the foundation
problems of algebra was the *Treatise on Algebra*, 1830, by
George Peacock (1791–1858). A second edition appeared as two
volumes, 1842/1845. He divided the subject into two parts, the first
part being *arithmetical algebra*, the algebra of the positive
numbers (which did not permit operations like subtraction in cases
where the answer would not be a positive number). The second part was
*symbolical algebra*, which was governed not by a specific
interpretation, as was the case for arithmetical algebra, but solely
by laws. In symbolical algebra there were no restrictions on using
subtraction, etc.

The terminology of algebra was somewhat different in the 19th century
from what is used today. In particular they did not use the word
“variable”; the letter \(x\) in an expression like
\(2x + 5\) was called a *symbol*, hence the name
“symbolical algebra”. In this article a prefix will
sometimes be added, as in *number symbol* or *class
symbol*, to emphasize the intended interpretation of a symbol.

Peacock believed that in order for symbolical algebra to be a useful
subject its laws had to be closely related to those of arithmetical
algebra. In this connection he introduced his *principle of the
permanence of equivalent forms*, a principle connecting results in
arithmetical algebra to those in symbolical algebra. This principle has
two parts:

- *General results in arithmetical algebra belong to the laws of symbolical algebra.*
- *Whenever an interpretation of a result of symbolical algebra made sense in the setting of arithmetical algebra, the result would give a correct result in arithmetic.*

A fascinating use of algebra was introduced in 1814 by
François-Joseph Servois (1776–1847) when he tackled
differential equations by separating the differential operator part
from the subject function part, as described in an example given
above. This application of algebra captured the interest of Gregory
who published a number of papers on the method of the *separation
of symbols*, that is, the separation into operators and objects,
in the *CMJ*. He also wrote on the foundation of algebra, and
it was Gregory’s foundation that Boole embraced, almost verbatim.
Gregory had abandoned Peacock’s principle of the permanence of
equivalent forms in favor of three simple laws, one of which Boole
regarded as merely a notation convention. Unfortunately these laws
fell far short of what is required to justify even some of the most
elementary results in algebra, like those involving subtraction.

In “On the foundation of
algebra,” 1839, the first of four papers on this
topic by De Morgan that appeared in the *Transactions of the Cambridge
Philosophical Society*, one finds a tribute to the separation of
symbols in algebra, and the claim that modern algebraists usually
regard the symbols as denoting operators (e.g., the derivative
operation) instead of objects like numbers. The footnote:

“Professor Peacock is the first, I believe, who distinctly set forth the difference between what I have called the technical [syntactic] and the logical [semantic] branches of algebra”

credits Peacock with being the first to separate (what are now called) the syntactic and the semantic aspects of algebra. In the second foundations paper (in 1841) De Morgan proposed what he considered to be a complete set of eight rules for working with symbolical algebra.

## 3. The Mathematical Analysis of Logic (1847)

Boole’s path to logic fame started in a curious way. In early 1847 he
was stimulated to launch his investigations into logic by a trivial
but very public dispute between De Morgan and the Scottish philosopher
Sir William Hamilton (1788–1856)—not to be confused with
his contemporary the Irish mathematician Sir William Rowan Hamilton
(1805–1865). This dispute revolved around who deserved credit
for the idea of quantifying the predicate (e.g., “All \(A\)
is all \(B\),” “All \(A\) is some
\(B\),” etc.). Within a few months Boole had written his
82 page monograph, *Mathematical Analysis of Logic*, giving an
algebraic approach to Aristotelian logic, then looking briefly at the
general theory. (Some say that this monograph and De Morgan’s book
*Formal Logic* appeared on the same day in November 1847.)

Although Boole’s algebra of logic is not the Boolean algebra of power
sets \(P(U)\) with the operations of union, intersection
and complement, nonetheless the goal of the two algebras is the same,
namely to provide an equational logic for the calculus of classes and
propositional logic. The name “Boolean algebra” was
introduced by Charles Sanders Peirce (1839–1914) and adopted by
his friend, the Harvard philosopher Josiah Royce (1855–1916)
around 1900, then by Royce’s students and other Harvard
mathematicians, and eventually the world. It essentially referred to
the modern version of the algebra of logic, introduced in 1864 by
William Stanley Jevons (1835–1882), a version that Boole had
rejected in their correspondence—see Section 5.1. For this
reason the word “Boolean” will not be used in this article
to describe the algebra of logic that Boole actually created; instead
the name *Boole’s algebra* will be used.

In *MAL*, and more so in *LT*, Boole was interested in the insights
that his algebra of logic gave to the inner workings of the mind. This pursuit
has met with little favor, and will not be discussed in this article.

### 3.1 Boole’s Version Of Aristotelian Logic

In pages 15–59, a little more than half of the 82 pages in
*MAL*, Boole focused on a slight generalization of Aristotelian
logic, namely augmenting its four types of categorical propositions by
permitting the subject and/or predicate to be of the form
not-\(X\). In the chapter on conversions, such as Conversion by
Limitation—All \(X\) is \(Y\), therefore Some \(Y\) is
\(X\)—Boole found the Aristotelian classification defective in
that it did not treat contraries, such as not-\(X\), on the same
footing as the named classes \(X, Y, Z\), etc. For example, he wanted
to be able to convert “No \(X\) is \(Y\)” into “All
\(Y\) is not-\(X\)”. (*MAL*, p. 28)

With his extended version of Aristotelian logic in mind (where
contraries enjoy equal billing), he gave (*MAL*, p. 30) a set
of three transformation rules which allowed one to construct all valid
two-line categorical arguments (providing you accepted the unwritten
convention that simple names like \(X\), and perhaps
not-\(X\), denoted non-empty classes).

Regarding syllogisms, Boole did not care for the Aristotelian classification into Figures and Moods as it seemed rather arbitrary (and not well-suited to the algebraic setting). In particular he did not like the requirement that the predicate of the conclusion had to be the major term in the premises.

It is somewhat curious that when it came to analyzing categorical syllogisms, it was only in the conclusion that he permitted his generalized categorical propositions to appear. Among the vast possibilities for hypothetical syllogisms, the ones that he discussed were standard, with one new example added.

### 3.2 Class Symbols and Elective Symbols

The “Introduction” chapter starts with Boole reviewing the
symbolical method. The second chapter, “First
Principles”, lets the symbol 1 represent the universe which
“comprehends every conceivable class of objects, whether
existing or not.” Capital letters \(X, Y,
Z,\ldots\) denoted classes. Then, no doubt heavily
influenced by his very successful work using algebraic techniques on
differential operators, and consistent with De Morgan’s 1839 assertion
that algebraists preferred interpreting symbols as operators, Boole
introduced the elective symbol \(x\) corresponding to the class
\(X\), the elective symbol \(y\) corresponding to
\(Y\), etc. The *elective symbols* denoted elective
operators—for example the elective operator “red”
when applied to a class would elect (select) the red items in the
class. (One can simply replace the elective symbols by their
corresponding class symbols and have the interpretation used in
*LT* in 1854.)

### 3.3 Operations and Laws for Elective Symbols

The first operation Boole introduced was the *multiplication*
\(xy\) of elective symbols. The standard notation
\(xy\) for multiplication also had a standard meaning
for operators (for example, differential operators), namely one
applied \(y\) to an object and then \(x\) is applied to the
result. (In modern terminology, this is the *composition* of
the two operators.) Thus, as pointed out by Theodore Hailperin
(1916–2014) in his insightful book (1976/1986) on *MAL*
and *LT*, it seems likely that this established notation
convention handed Boole his interpretation of the multiplication of
elective symbols as composition of operators.

When one switches to using classes instead of elective operators, as
in *LT*, the corresponding multiplication of two classes
results in their intersection—that is, one has \(xy = z\) if and
only if \(XY = Z\), where \(XY\) is the intersection of \(X\) and
\(Y\).

The first law in *MAL* was the *distributive law*

\[ x(u + v) = xu + xv, \]

where Boole said that \(u+v\) corresponded to dividing
a class into two parts. This was the first mention of addition in *MAL*.
From *LT* one can determine the proper interpretation of the addition of
elective operators in
*MAL*:

\((x + y)(Z)\) is the union of \(x(Z)\) and \(y(Z)\) provided \(X\) and \(Y\) are disjoint classes; otherwise \(x + y\) is not defined.

Thus addition is a partial operation on elective operators. Likewise one finds that subtraction is defined by:

\((x - y)(Z)\) is \(x(Z) \smallsetminus y(Z)\) provided \(Y\) is contained in \(X\); otherwise \(x - y\) is not defined.

Thus subtraction is also a partial operation on elective operators.
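These two partial operations can be modeled directly with finite sets standing in for classes. The following is a minimal sketch (the function names `boole_add` and `boole_sub` are ours, purely illustrative): addition is defined only for disjoint classes, subtraction only when the second class is contained in the first.

```python
# Boole's partial operations on classes, modeled with Python sets.
def boole_add(X, Y):
    """x + y: union of disjoint classes; undefined otherwise."""
    if X & Y:
        raise ValueError("x + y is undefined: classes are not disjoint")
    return X | Y

def boole_sub(X, Y):
    """x - y: remove Y from X when Y is contained in X; undefined otherwise."""
    if not Y <= X:
        raise ValueError("x - y is undefined: Y is not contained in X")
    return X - Y

print(boole_add({1, 2}, {3}))     # {1, 2, 3}
print(boole_sub({1, 2, 3}, {3}))  # {1, 2}
```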

Boole added (*MAL*, p. 17) the *commutative law* \(xy =
yx\) and the *index law* \(x^n = x\)—in *LT* the
latter would be replaced by the *law of duality* \(x^2 = x\)
(called the *idempotent law* in 1870 by the Harvard
mathematician Benjamin Peirce (1809–1880), in another
context).

After stating the above distributive and commutative laws, Boole
believed he was entitled to fully employ the ordinary algebra of his
time, saying (*MAL*, p. 18) that

“all the processes of Common Algebra are applicable to the present system”,

and indeed in addition to the usual algebra of polynomials one sees
power series and Lagrange multipliers in *MAL*.

Boole went beyond the foundations of symbolical algebra that Gregory had used in 1840—he added De Morgan’s 1841 single rule of inference, that equivalent operations performed upon equivalent subjects produce equivalent results.

### 3.4 Common Algebra

It is likely more difficult for the modern reader to come to grips with the idea that Boole’s algebra is based on ordinary algebra than would have been the case with Boole’s contemporaries—the modern reader has been exposed to modern Boolean algebra (and perhaps Boolean rings). In the mid 1800s the word “algebra” meant, for most mathematicians, simply the algebra of numbers. Boole’s algebra was mainly concerned with polynomials with integer coefficients, and with their values when the variables were restricted to taking on only the values 0 and 1. To put the reader in the proper frame of mind, some of the key polynomials in Boole’s work, along with their values on \(\{0,1\}\), are presented in the following table:

| \(x\) | \(y\) | \(1 - x\) | \(x - x^2\) | \(xy\) | \(x + y\) | \(x - y\) | \(x + y - xy\) | \(x + y - 2xy\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 0 | 0 | 1 | 2 | 0 | 1 | 0 |
| 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 0 | 0 | 1 | \(-1\) | 1 | 1 |
| 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |

Note that all of the polynomials \(p(x,y)\) in the above
table, except for addition and subtraction, take values in \(\{0,1\}\) when
the variables take values in \(\{0,1\}\). Such polynomials are called
*switching functions* in computer science and electrical
engineering, and as functions on \(\{0,1\}\) they are idempotent, that is,
\(p^2 = p\). The switching functions are exactly
the idempotent polynomials in Boole’s algebra.
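The table and the closing claim can be checked mechanically. The short Python sketch below (ours, not from the text) evaluates each polynomial at every assignment in \(\{0,1\}^2\) and tests which are 0/1-valued (switching) and which are idempotent as functions on \(\{0,1\}\); the two properties coincide, as stated above.

```python
# Brute-force check of the table: which polynomials are switching
# functions, and which are idempotent as functions on {0,1}?
from itertools import product

polys = {
    "1 - x":       lambda x, y: 1 - x,
    "x - x^2":     lambda x, y: x - x**2,
    "xy":          lambda x, y: x * y,
    "x + y":       lambda x, y: x + y,
    "x - y":       lambda x, y: x - y,
    "x + y - xy":  lambda x, y: x + y - x * y,
    "x + y - 2xy": lambda x, y: x + y - 2 * x * y,
}

for name, p in polys.items():
    # rows in the table's order: (1,1), (1,0), (0,1), (0,0)
    values = [p(x, y) for x, y in product((1, 0), repeat=2)]
    is_switching = all(v in (0, 1) for v in values)
    is_idempotent = all(v * v == v for v in values)  # p^2 = p on {0,1}
    print(f"{name:12s} values={values} switching={is_switching} idempotent={is_idempotent}")
```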

### 3.5 Impact of the Index Law

Boole’s three laws for his algebra of logic are woefully inadequate
for what follows in *MAL*. The reader will, for the most part,
be well served by assuming that Boole is doing ordinary polynomial
algebra augmented by the assumption that any power
\(x^n\) of an elective symbol \(x\) can
be replaced by \(x\). Indeed one can safely assume that any
polynomial equation \(p = q\) that holds in the integers is valid
in Boole’s algebra. Also any equational argument

\[ p_1 = q_1, \ldots, p_n = q_n \;\therefore\; p = q \]

that holds in the integers is valid in Boole’s algebra.

[A
note of caution: the argument “\(x^2 = x \therefore x = 1\) or
\(x = 0\)” is valid in the integers, but it is *not* an
equational argument since the conclusion is a disjunction of
equations, not a single equation.]

In Boole’s algebra, any polynomial \(p(x)\) in one variable can be reduced to a linear polynomial \(ax + b\) since one has

\[\begin{align} a_n x^n + \cdots + a_1 x + a_0 &= a_n x + \cdots + a_1 x + a_0 \\ &= (a_n + \cdots + a_1)x + a_0. \end{align}\]

Likewise any polynomial \(p(x, y)\) can be expressed as \(axy + bx + cy + d\). Etc.
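The reduction can be carried out mechanically: replacing every higher power of \(x\) by \(x\) is the same as taking the polynomial remainder modulo \(x^2 - x\). A brief SymPy illustration (the example polynomial is our own):

```python
# Linear reduction in Boole's algebra: x^n -> x is remainder mod x^2 - x.
import sympy as sp

x = sp.symbols('x')
p = 3*x**4 - 5*x**3 + 2*x**2 + x + 7   # an arbitrary example polynomial
reduced = sp.rem(p, x**2 - x, x)
print(reduced)  # x + 7, i.e. (3 - 5 + 2 + 1)x + 7, matching (a_n + ... + a_1)x + a_0
```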

However Boole was much more interested in the fact that \(ax + b\) can be written as a linear combination of \(x\) and \(1-x\), namely

\[ ax + b = (a + b)x + b(1-x). \]
This gives his *Expansion Theorem* in one variable:

\[ p(x) = p(1)\,x + p(0)\,(1 - x). \]

The Expansion Theorem for polynomials in two variables is

\[\begin{align} p(x,y) =& p(1,1)xy + p(1,0)x(1-y)\ + \\ & p(0,1) (1 - x)y + p(0,0) (1 - x)(1 - y). \end{align}\]

For example,

\[\begin{align} x + y &= 2xy + x(1-y) + (1-x)y \\ x - y &= x(1-y) - (1-x)y. \end{align}\]
The expressions \(xy, \ldots, (1 - x)(1 - y)\), are called
the *constituents* of \(p(x,y)\)—it would be better to
call them the constituents of the variables \(x, y\)—and the
coefficients \(p(1,1), \ldots, p(0,0)\) are the *moduli* of
\(p(x,y)\).

Similar results hold for polynomials in any number of variables
(*MAL*, pp. 62–64). In Boole’s algebra there are three
important facts about the constituents for a given list of
variables:

- each constituent is idempotent,
- the product of two distinct constituents is 0,
- the sum of all the constituents is 1.
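These three facts, together with the two-variable Expansion Theorem, are easy to verify mechanically. The following SymPy sketch (ours, not from the text) works modulo the idempotency of \(x\) and \(y\), again implemented as remainders modulo \(x^2 - x\) and \(y^2 - y\):

```python
# Verify the two-variable Expansion Theorem and the three constituent facts.
import sympy as sp

x, y = sp.symbols('x y')

def reduce_idem(p):
    """Reduce a polynomial using x^2 -> x and y^2 -> y."""
    p = sp.expand(p)
    p = sp.rem(p, x**2 - x, x)
    return sp.expand(sp.rem(p, y**2 - y, y))

constituents = [x*y, x*(1 - y), (1 - x)*y, (1 - x)*(1 - y)]

# Each constituent is idempotent; distinct products vanish; the sum is 1.
assert all(reduce_idem(c**2 - c) == 0 for c in constituents)
assert all(reduce_idem(c * d) == 0
           for i, c in enumerate(constituents)
           for d in constituents[i + 1:])
assert sp.expand(sum(constituents)) == 1

# Expansion Theorem applied to p(x, y) = x + y.
p = lambda a, b: a + b
expansion = (p(1, 1)*x*y + p(1, 0)*x*(1 - y)
             + p(0, 1)*(1 - x)*y + p(0, 0)*(1 - x)*(1 - y))
print(sp.expand(expansion))  # x + y, as in the example above
```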

The index law, \(x^n = x\), was different from Boole’s two fundamental laws for the common algebra—it only applied to the individual elective symbols, not in general to compound terms that one could build from these symbols. For example, one does not in general have \((x + y)^2 = x + y\) in Boole’s system since, by ordinary algebra with idempotent class symbols, this would imply \(2xy = 0\), and then \(xy = 0\), which would force \(x\) and \(y\) to represent disjoint classes. But it is not the case that every pair of classes is disjoint.

Keeping the laws and valid equational arguments from the algebra of
numbers, augmented by the index law, forces addition \(x + y\) to
be *undefined* unless the classes \(X\) and \(Y\) are
disjoint. The only place where Boole wrote down the argument showing
that addition must be a partial operation was in his unpublished
Nachlass—see *Boole: Selected Manuscripts …*, 1997,
edited by Ivor Grattan-Guinness and Gérard Bornet, pp. 91, 92.

### 3.6 Equational Expressions of Categorical Propositions

In the chapter “Of Expression and Interpretation”, Boole
said that necessarily the class not-\(X\) is expressed by
\(1-x\). This is the first appearance of *subtraction*
in *MAL*. Boole’s initial equational expressions of the
Aristotelian categorical propositions as elective equations
(*MAL*, pp. 21,22) will be called his
*primary* expressions. Then in the next several pages he adds
supplementary expressions; of these the main ones will be called the
*secondary* expressions.

| Propositions | Primary Expressions | Secondary Expressions |
| --- | --- | --- |
| All \(X\) is \(Y\) | \(x = xy\) | \(x = vy\) |
| No \(X\) is \(Y\) | \(xy = 0\) | \(x = v(1-y)\) |
| Some \(X\) is \(Y\) | \(v = xy\) | \(vx = vy\) |
| Some \(X\) is not \(Y\) | \(v = x(1 - y)\) | \(vx = v(1 - y)\) |

The first primary expression given was for “All \(X\) is
\(Y\)”, an equation which he then converted into
\(x(1-y) = 0\). This was the first appearance of
0 in *MAL*. It was not introduced as the symbol for the empty
class—indeed the empty class does not appear in *MAL*.
Evidently “\(= 0\)” performed the role of a predicate in
*MAL*, with an equation \(E = 0\) asserting that the class
denoted by \(E\) simply did not exist. (In *LT*, the
empty class was introduced, and denoted by 0.)

Boole emphasized that when a premise about \(X\) and \(Y\) is translated into an equation involving \(x, y\) and \(v\), the symbol \(v\) expressed “some”, but only in the context in which it appeared in the premise. For example, “Some \(X\) is \(Y\)” has the primary translation \(v = xy\), which implies the secondary translation \(vx = vy\). This could also be read as “Some \(X\) is \(Y\)”. Another consequence of \(v = xy\) is \(v(1-x) = v(1-y)\). However it was not permitted to read this as “Some not-\(X\) is not-\(Y\)” since \(v\) did not appear with \(1-x\) or \(1-y\) in the premise. Boole’s use of \(v\) in the equational expression of propositions has been a long-standing bone of contention.

The simple algebra and considerable detail in this part of
*MAL* can be appealing to the new reader, but there are
complications that need to be dealt with. Does “Some
not-\(X\) is \(Y\)” follow from “All
not-\(X\) is \(Y\)”? There is a lack of clarity on
when to use the primary and secondary equations when analyzing
syllogisms, and with what one is permitted to do to derive \(0 = 0\) as a
marker that the premises being considered do not belong to a valid
syllogism.

Syllogistic reasoning is just an exercise in *elimination*,
namely the middle term is eliminated from the premises to give the
conclusion. Elimination was a standard topic in the theory of
equations, and Boole borrowed a simple elimination result regarding
two equations to use in his algebra of logic—if the premises of
a syllogism involved the classes \(X, Y\), and
\(Z\), and one wanted to eliminate the middle term \(Y\),
then Boole put the equations for the two premises in the form

\[ ay + b = 0, \qquad cy + d = 0, \]

where \(y\) does not appear in the coefficients \(a, b, c, d\).
The result of eliminating \(y\) in ordinary algebra gives the
equation

\[ ad - bc = 0, \]
and this is what Boole used in *MAL*. Unfortunately this is a
weak elimination result for Boole’s algebra. One finds, using the
improved reduction and elimination theorems of *LT*, that the
best possible result of elimination is

\[ (b^2 + d^2)\bigl((a+b)^2 + (c+d)^2\bigr) = 0. \]

(Boole never pointed out this defect in *MAL*.)
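As a concrete illustration (our encoding, not an example in *MAL*), the syllogism Barbara fits this scheme. “All \(X\) is \(Y\)” is \(x(1-y) = 0\), i.e. \(ay + b = 0\) with \(a = -x\), \(b = x\); “All \(Y\) is \(Z\)” is \(y(1-z) = 0\), i.e. \(cy + d = 0\) with \(c = 1-z\), \(d = 0\). The eliminant \(ad - bc = -x(1-z)\), and setting it to 0 gives the conclusion “All \(X\) is \(Z\)”. A brute-force check over 0/1 values:

```python
# Boole's MAL elimination applied to Barbara, checked over all 0/1 values.
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    a, b = -x, x        # All X is Y:  x(1 - y) = 0  as  a*y + b = 0
    c, d = 1 - z, 0     # All Y is Z:  y(1 - z) = 0  as  c*y + d = 0
    premises_hold = (a*y + b == 0) and (c*y + d == 0)
    eliminant = a*d - b*c          # = -x*(1 - z)
    if premises_hold:
        assert eliminant == 0
        assert x * (1 - z) == 0    # the conclusion: All X is Z
print("ad - bc = 0 yields All X is Z from Barbara's premises")
```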

The primary equational expressions were not sufficient to derive all
of the desired syllogisms. For example, in the cases where the
premises had primary expressions \(ay = 0\) and \(cy = 0\),
elimination gave \(0 = 0\), even though Aristotelian logic might demand a
non-trivial conclusion. Boole introduced the alternative equational
expressions (see *MAL*, p. 32) of categorical propositions to
be able to derive all of the valid Aristotelian syllogisms. With this
convention, of using alternative expressions when needed, it turned
out that the premises that only led to \(0 = 0\) were *among*
those which did not belong to a valid syllogism. Boole did not offer
an algebraic way to completely determine which premises could be
completed to valid syllogisms.

Toward the end of the chapter on categorical syllogisms there is a
long footnote (*MAL*, pp. 42–45) to support a claim
(*MAL*, pp. 42, 43) that secondary translations alone are
sufficient for the analysis of [his generalization of] Aristotelian
categorical logic. The footnote loses much of its force because the
results it presents depend heavily on the weak elimination theorem
being best possible, which is not the case. In the Postscript he says
that using only the secondary translations is altogether superior to
what was presented in the main text.

Boole would use only the secondary translations of *MAL* in
*LT*, but in *LT* the reader will no longer find a
leisurely and detailed treatment of Aristotelian logic. Indeed the
discussion of Aristotelian logic is delayed until the last chapter on
logic, namely Chapter XV, and in this chapter it is presented in such
a compressed form, using such long equations, that the reader is not
likely to want to check that Boole’s analysis is correct.

### 3.7 Hypothetical Syllogisms

Boole analyzed the seven *hypothetical syllogisms* that were
standard in Aristotelian logic, from the Constructive and Destructive
Conditionals to the Complex Destructive Dilemma. Letting capital
letters \(X, Y, \ldots\) represent categorical propositions, the
*hypothetical propositions* traditionally involved in
hypothetical syllogisms were in one of the forms “\(X\) is
true”, “\(X\) is false”, “If \(X\)
then \(Y\)”, “\(X\) or \(Y\) or …”,
“\(X\) and \(Y\) and …”. At the end of the
chapter on hypothetical syllogisms he noted that it was easy to create
new ones, and one could enrich the collection by using mixed
hypothetical propositions such as “If \(X\) is true, then
either \(Y\) is true, or \(Z\) is true.”

One sees that Boole is taking first steps towards the general notion of a propositional formula \(\Phi(X,Y, \ldots)\), but he never reached our modern approach using a recursive definition, an approach which is essential to being able to do inductive proofs on the set of propositional formulas.

Most important in this chapter was Boole’s claim that his algebra of
logic for categorical propositions was equally suited to the study of
hypothetical syllogisms. This was based on adopting the standard
reduction of hypothetical propositions to propositions about classes
by letting the *hypothetical universe*, also denoted by 1, be
the collection of all *cases and conjunctures of circumstances*
(which was usually abbreviated to just the word *cases*).
Evidently his notion of a “case” was an assignment of
truth values to the propositional variables.

This brings up the question of whether or not his hypothetical
universe depended on the variables being considered in an
argument—if so then for \(n\) variables the universe would
have \(2^n\) cases. However he makes the remark
(*MAL*, p. 50) that “the extent of the hypothetical
Universe does not at all depend upon the number of circumstances which
are taken into account”. In this context
“circumstances” means propositional variables; one still
has the question of what he means by cases. A modern solution would be
to use the collection of all mappings from the set of propositional
variables to the set \(\{\rT, \rF\}\).

For \(X\) a categorical proposition Boole let \(x\) be the elective operator that selects the cases for which \(X\) is true. Consider the hypothetical proposition “If \(X\) then \(Y\)”, where \(X, Y\) are categorical propositions. A natural conversion of this hypothetical proposition into a categorical proposition would be “All \(Cases(X)\) are \(Cases(Y)\)”, where \(Cases(X)\) is the class of all cases for which \(X\) is true, etc. The equational translation would be \(xy = x\).

The hypothetical proposition “\(X\) or \(Y\)”, with the
“or” being inclusive, can be expressed by “Every
case is in \(Cases(X)\) or in \(Cases(Y)\) or in both”, but this
is not in the form of a categorical proposition. Boole says
the *universe of a categorical proposition* has two
cases, *true* and *false*. To find an equational
expression for a hypothetical proposition Boole resorts to a near
relative of truth tables (*MAL*, p. 50). To each case, that is,
assignment of truth values to \(X\) and \(Y\), he associates an
elective expression as follows:

| \(Cases(x)\) | \(Cases(y)\) | Elective Expressions |
| --- | --- | --- |
| \(\rT\) | \(\rT\) | \(xy\) |
| \(\rT\) | \(\rF\) | \(x(1- y)\) |
| \(\rF\) | \(\rT\) | \((1 - x)y\) |
| \(\rF\) | \(\rF\) | \((1 - x)(1 - y)\) |

These elective expressions are, of course, the *constituents*
of the elective operators \(x\), \(y\).

Boole translates a propositional formula \(\Phi(X,Y, \ldots)\) into an elective expression \(\phi(x,y, \ldots)\) by ascertaining all the distinct cases (assignments of truth values) which imply the formula, and summing their corresponding elective expressions. The elective equation for \(\Phi(X,Y, \ldots)\) is then \(\phi(x,y, \ldots) = 1\).

The elective expression for “\(X\) or \(Y\)”, with “or” inclusive, is the sum of the elective expressions for the truth assignments to \(X\) and \(Y\) for which “\(X\) or \(Y\)” holds, that is, the sum of the first three elective expressions in the above table, namely \(xy + x(1 - y) + (1 - x)y\), which simplifies to \(x + y - xy\). The elective equation of the assertion “\(X\) or \(Y\)” is \(x + y - xy = 1\).

Boole did not have the modern view that a propositional formula can be considered a function on \(\{\rT, \rF\}\), taking values in \(\{\rT, \rF\}\). The function viewpoint gives us an algorithm to determine which constituents are to be summed to give the desired elective expression, namely those constituents associated with the cases for which the propositional formula has the value \(\rT\). Applying this to the propositional formula “\(X\) implies \(Y\)” gives the following:

| \(Cases(X)\) | \(Cases(Y)\) | Value of \(X \rightarrow Y\) | Elective Expressions |
| --- | --- | --- | --- |
| \(\rT\) | \(\rT\) | \(\rT\) | \(xy\) |
| \(\rT\) | \(\rF\) | \(\rF\) | ––– |
| \(\rF\) | \(\rT\) | \(\rT\) | \((1 - x)y\) |
| \(\rF\) | \(\rF\) | \(\rT\) | \((1 - x)(1 - y)\) |

Thus the elective expression for “\(X\) implies \(Y\)” is \(xy + (1 - x) y + (1 - x)(1 - y)\), which simplifies to \(1 - x + xy\).
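The summing-of-constituents procedure is easy to mechanize. In the Python sketch below (the helper name `elective` is ours), the elective expression of a two-variable propositional formula is built by summing the constituents of the truth assignments that make the formula true, and the results are checked against Boole's simplified forms \(1 - x + xy\) for implication and \(x + y - xy\) for inclusive "or" on 0/1 values:

```python
from itertools import product

def elective(formula):
    """Return the elective expression of a two-variable formula as a
    function of 0/1 values x, y: the sum of the constituents of the
    truth assignments (cases) for which the formula is true."""
    def expression(x, y):
        total = 0
        for tx, ty in product((True, False), repeat=2):
            if formula(tx, ty):
                total += (x if tx else 1 - x) * (y if ty else 1 - y)
        return total
    return expression

implies = elective(lambda X, Y: (not X) or Y)
inclusive_or = elective(lambda X, Y: X or Y)

# Agreement with the simplified polynomial forms on idempotent values:
implies_ok = all(implies(x, y) == 1 - x + x * y
                 for x in (0, 1) for y in (0, 1))
or_ok = all(inclusive_or(x, y) == x + y - x * y
            for x in (0, 1) for y in (0, 1))
```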

By not viewing propositional formulas as functions on \(\{\rT, \rF\}\), Boole missed out on being the inventor of truth tables. His algebraic method of analyzing hypothetical syllogisms was to transform each of the hypothetical premises into an elective equation, and then apply his algebra of logic (which was developed for categorical propositions). For example, the premises “\(X\) or \(Y\)” and “not-\(X\)” are expressed by “\(x + y - xy = 1\)” and “\(x = 0\)”. From these it immediately follows that “\(y = 1\)”, giving the conclusion “\(Y\)”, that is, if “\(X\) or \(Y\)” and “not-\(X\)” are true, then “\(Y\) is true”.

Boole’s assumption that \(x\) selected the cases for which \(X\) is true leads to some confusion. In the above example, the premise “\(x = 0\)” apparently says that \(X\) is false in all cases, and the conclusion “\(y = 1\)” says that \(Y\) is true in all cases. But the meaning of the argument “\(X\) or \(Y\), not-\(X \therefore Y\)” is that any case which makes the premises true also makes the conclusion true.

The confusion is cleared up by adopting De Morgan’s 1847 concept
of the *universe of discourse* (as Boole did
in *LT*). Namely, given premises \(\Phi , \Psi, \ldots\), the
universe of discourse is chosen to be the collection of cases for
which the premises hold. In *LT* Boole abandoned the use of
cases specified by assignments of truth-values to the variables, and
instead associated with a proposition the *time* during which
it was true, noting that to use “cases” one needed to
define the notion of a case, which he evidently was unable to do in a
satisfactory manner.

Boole only considered rather simple hypothetical propositions on the
grounds these were the only ones encountered in common usage (see
*LT*, p. 172). His algebraic approach to propositional logic
is easily extended to all propositional formulas as follows. For
\(\Phi\) a propositional formula the associated elective function
\(\Phi^*\) is defined recursively as follows:

- \(0^* = 0\); \(1^* = 1\); \(X^* = x\);
- \((\text{not-}\Phi)^* = 1 - \Phi^*\);
- \((\Phi \text{ and } \Psi)^* = \Phi^* \cdot \Psi^*\);
- \((\Phi \text{ or } \Psi)^* = \Phi^* + \Psi^* - \Phi^* \cdot \Psi^*\), where “or” is inclusive;
- \((\Phi \text{ or } \Psi)^* = \Phi^* + \Psi^* - 2\Phi^* \cdot \Psi^*\), where “or” is exclusive;
- \((\Phi \text{ implies } \Psi)^* = 1 - \Phi^* + \Phi^* \cdot \Psi^*\);
- \((\Phi \text{ iff } \Psi)^* = \Phi^* \cdot \Psi^* + (1 - \Phi^*) \cdot(1 - \Psi^*)\).

Then one has:

- \(\Phi\) is a tautology iff \(\Phi^* = 1\) is valid in Boole’s algebra.
- \(\Phi_1\), ... , \(\Phi_k \therefore \Phi\) is valid in
propositional logic iff

\(\Phi_{1}^* = 1, \ldots , \Phi_{k}^* = 1 \therefore \Phi^* = 1\) is valid in Boole’s algebra.
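This recursive translation is straightforward to implement. In the Python sketch below (the representation and names are ours), a formula is a 0/1-valued function of an assignment, the connectives are exactly the elective operations listed above, and a formula is declared a tautology when its translation evaluates to 1 under every 0/1 assignment:

```python
from itertools import product

# Boole-style elective operations on 0/1-valued formulas.
def NOT(p):        return lambda e: 1 - p(e)
def AND(p, q):     return lambda e: p(e) * q(e)
def OR(p, q):      return lambda e: p(e) + q(e) - p(e) * q(e)      # inclusive
def XOR(p, q):     return lambda e: p(e) + q(e) - 2 * p(e) * q(e)  # exclusive
def IMPLIES(p, q): return lambda e: 1 - p(e) + p(e) * q(e)
def IFF(p, q):     return lambda e: p(e) * q(e) + (1 - p(e)) * (1 - q(e))
def VAR(name):     return lambda e: e[name]

def tautology(p, names):
    """Phi is a tautology iff Phi* = 1 under every 0/1 assignment."""
    return all(p(dict(zip(names, vals))) == 1
               for vals in product((0, 1), repeat=len(names)))

X, Y = VAR('X'), VAR('Y')
t1 = tautology(IMPLIES(X, IMPLIES(Y, X)), ['X', 'Y'])   # a tautology
t2 = tautology(OR(X, Y), ['X', 'Y'])                    # not a tautology
```

Here `t1` comes out true and `t2` false, illustrating the first bulleted claim; validity of an argument is checked the same way, by requiring the conclusion's translation to equal 1 whenever each premise's translation does.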

This looks quite different from modern propositional logic where one takes a few tautologies, such as \(X \rightarrow(Y \rightarrow X)\), as axioms, and inference rules such as modus ponens to form a deductive system.

This translation, from \(\Phi\) to \(\Phi^*\), viewed as mapping
expressions in modern Boolean algebra to polynomials, would be
presented in a 1933 paper of Hassler Whitney (1907–1989), with
the objective of showing that one does not need to learn the algebra
of logic [modern Boolean algebra] to verify the equational laws and
equational arguments of Boolean algebra—they can be translated
into the ordinary algebra with which one is familiar. Howard Aiken
(1900–1973), Director of the Harvard Computation Laboratory,
would use such translations of logical functions into ordinary algebra
in his 1951 book *Synthesis of Electronic Computing and Control
Circuits*, specifically stating that he preferred Boole’s
numerical function approach to that of Boolean algebra or
propositional logic.

### 3.8 General Theorems of Boole’s Algebra in *MAL*

Beginning with the chapter “Properties of Elective
Functions”, Boole developed general theorems for working with
equations in his algebra of logic—the Expansion Theorem and the
properties of constituents are discussed in this chapter. His proof
of the one-variable case of the Expansion Theorem (*MAL*,
p. 60) is rather strange—there is no need to take a power series
expansion of a polynomial. Otherwise his proof is correct. From the
Expansion Theorem and the properties of constituents he shows that the
moduli of the sum/difference/product of two elective functions are
the sums/differences/products of the corresponding moduli of the two
functions.

The Expansion Theorem is used (*MAL*, p. 61) to prove an
important result, that \(p(x)\) and \(q(x)\) are equivalent in
Boole’s algebra if and only if corresponding *moduli*
are the same, that is, \(p(1) = q(1)\) and \(p(0) = q(0)\). This
result generalizes to functions of several variables. It will not be
stated as such in *LT*, but will be absorbed in the much more
general (if somewhat opaquely stated) result that will be called the
Rule of 0 and 1.
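In the one-variable case this equivalence test is a two-point check. A small Python illustration (the function name is ours):

```python
def equivalent(p, q):
    """One-variable elective functions are equivalent in Boole's algebra
    iff their corresponding moduli agree: p(0) == q(0) and p(1) == q(1)."""
    return p(0) == q(0) and p(1) == q(1)

idempotent = equivalent(lambda x: x * x, lambda x: x)   # the index law x^2 = x
doubling = equivalent(lambda x: x + x, lambda x: x)     # fails: modulus 2 vs 1
```

The failure of \(x + x = x\) at the modulus \(x = 1\) is precisely the point at issue in the Boole/Jevons dispute discussed in Section 5.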

An elective function \(p(x, y, \ldots)\) is
*interpretable* in Boole’s algebra whenever it is
defined. For example, \(1 + 1 + x\) is not interpretable (for any class
\(X\)), \(x + y\) is only interpretable for \(X\) and \(Y\) disjoint
classes, and \(xy\) is totally (always) interpretable. An elective
equation \(p = q\) is interpretable whenever both sides are
interpretable. A *constituent equation* is an elective equation
of the form \(r = 0\), where \(r\) is a constituent. Constituent
equations are totally interpretable. Boole shows (*MAL*, p. 64)
that every elective equation \(p = 0\) is equivalent to the collection
of constituent equations \(r = 0\) where the modulus (coefficient) of
\(r\) in the expansion of \(p\) is not zero, and thus *every
elective equation is interpretable*. Furthermore this leads
(*MAL*, p. 65) to the fact that \(p = 0\) is equivalent to the
equation \(q = 0\) where \(q\) is the sum of the constituents in the
expansion of \(p\) whose modulus is non-zero. As examples, consider
the equations \(x + y = 0\) and \(x - y = 0\). The following table
gives the constituents and moduli of their expansions:

| \(x\) | \(y\) | constituents | \(x + y\) | \(x - y\) |
| --- | --- | --- | --- | --- |
| 1 | 1 | \(xy\) | 2 | 0 |
| 1 | 0 | \(x(1 - y)\) | 1 | 1 |
| 0 | 1 | \((1 - x)y\) | 1 | \(-1\) |
| 0 | 0 | \((1 - x)(1 - y)\) | 0 | 0 |

Thus \(x + y = 0\) is equivalent to the collection of constituent equations

\[ xy = 0,\ x(1 - y) = 0,\ (1 - x)y = 0 \]as well as the single equation

\[ xy + x(1 - y) + (1 - x)y = 0, \]and \(x - y = 0\) is equivalent to the collection of constituent equations

\[ x(1 - y) = 0,\ (1 - x)y = 0 \]as well as the single equation

\[ x(1 - y) + (1 - x)y = 0. \]
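The expansion into constituents and moduli is mechanical. The Python sketch below (the function names are ours) recomputes the moduli in the table above and checks that \(x + y = 0\) has the same 0/1 solutions as the sum of its nonzero-modulus constituents set equal to 0:

```python
from itertools import product

def moduli(p, n):
    """Map each 0/1 corner (indexing a constituent) to its modulus,
    the value of p at that corner."""
    return {vals: p(*vals) for vals in product((1, 0), repeat=n)}

# Moduli of x + y, matching the table: 2 at xy, 1 at x(1-y) and (1-x)y,
# 0 at (1-x)(1-y).
m = moduli(lambda x, y: x + y, 2)

def same_solutions(p, q, n):
    """p = 0 and q = 0 have the same 0/1 solutions."""
    return all((p(*v) == 0) == (q(*v) == 0)
               for v in product((0, 1), repeat=n))

# x + y = 0 is equivalent to the sum of its nonzero-modulus constituents = 0:
equiv = same_solutions(lambda x, y: x + y,
                       lambda x, y: x*y + x*(1 - y) + (1 - x)*y, 2)
```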
It was natural for Boole to want to solve equations in his algebra of
logic since this had been a main goal of ordinary algebra, and had led
to many difficult questions (e.g., how to solve a fifth-degree
equation). Fortunately for Boole, the situation in his algebra of
logic was much simpler—he could always solve an equation, and
finding the solution was important to applications of his system, to
derive conclusions in logic. An equation was solved in part by using
formal expansion after performing formal division, and then decoding the
fractional coefficients.

This *Solution Theorem* was the result of which he was the most
proud—it described how to solve an elective equation for one of
its symbols in terms of the others, often introducing constraint
equations on the independent variables, and it is this that Boole
claimed (in the Introduction chapter of *MAL*, p. 9) would
offer “the means of a perfect analysis of any conceivable system
of propositions, …”. In *LT* Boole would continue
to regard this tool as the highlight of his work.

Boole’s final example (*MAL*, p. 78), solving three equations
in three unknowns for one of the unknowns in terms of the other two,
used a well known technique for handling side conditions in analysis
called Lagrange Multipliers—this method (which reduced the three
equations in the example to a single equation in five unknowns)
reappears in *LT* (p. 117), but is only used in a single
example. It is superseded by the sum of squares reduction
(*LT*, p. 121) which does not introduce new variables. Power
series had not been completely abandoned in *LT*—they
appeared in *LT*, but only in a footnote (*LT*, p. 72).
Using the Reduction and Elimination Theorems in *LT* one
discovers that Boole’s constraint equations (3) (*MAL*, p. 80)
for his three equation example are much too weak—each of the
products should be 0, and there are additional constraint equations.

*MAL* shows more clearly than *LT* how closely Boole’s
algebra of logic is based on the common algebra plus idempotent class
symbols. The Elimination Theorem that he simply borrowed from algebra
turned out to be weaker than what his algebra offered, and his method
of reducing equations to a single equation was clumsier than the main
one used in *LT*, but the Expansion Theorem and Solution
Theorem were the same. One sees that *MAL* contained not only
the basic outline for *LT*, but also some parts fully
developed. Much of *LT* would be devoted to clarifying and
correcting what was said in *MAL*, and providing more
substantial applications, the main one being his considerable work in
probability theory.

## 4. The Laws of Thought (1854)

Boole’s second logic book, *An Investigation of The Laws of Thought
on which are founded the Mathematical Theories of Logic and
Probabilities*, published in 1854, was an effort to correct and
perfect his 1847 book on logic. The second half of this 424 page book
presented probability theory as an excellent topic to illustrate the
power of his algebra of logic. Boole discussed the theoretical
possibility of using probability theory (enhanced by his algebra of
logic) to uncover fundamental laws governing society by analyzing large
quantities of social data by large numbers of (human) computers.

Boole said that he would use letters like \(x\) to represent classes,
although later he would also use capital letters like
\(V\). The *universe* was a class, denoted by 1; and there was
a class described as having “no beings”, denoted by 0,
which we call the *empty class*. The operation of
*multiplication* was defined to be intersection, and this led
to his first law, \(xy = yx\). Next (some pages later) he gave the
idempotent law \(x^2 = x\). *Addition* was introduced as
aggregation when the classes were disjoint. He stated the commutative
law for addition, \(x + y = y + x\), and the distributive law \(z(x +
y) = zx + zy\). Then followed \(x - y = - y + x\) and \(z(x - y) = zx
- zy\). The associative laws for addition and multiplication were
conspicuously absent. A possible reason for this omission was that he
worked with the standard algebra of
*polynomials*, where the parentheses involved in the
associative laws are absent, instead of the *terms* which are
fundamental to modern logic.

Boole seems to justify his choice of laws on the basis that they are valid where defined. This does not guarantee the compatibility of the axioms with the algebraic structures since the equation \((x+y)^2 = x+y\) is certainly valid where defined, namely when \(xy = 0\), but adding this to Boole’s axioms leads to the theorem \(xy = 0\), that is, any two classes are disjoint, which is not the case. Working with partial algebras has its subtleties.

One might expect that Boole was building toward an axiomatic
foundation for his algebra of logic, just as in *MAL*,
evidently having realized that the three laws in *MAL* were not
enough. Indeed he did discuss the rules of inference, that adding or
subtracting equals from equals gives equals, and multiplying equals by
equals gives equals. But then the development of an axiomatic
approach came to an abrupt halt. There was no discussion as to whether
the stated axioms (which he called *laws*) and rules (which he
called *axioms*) were sufficient to construct his algebra of
logic. (They were not.) Instead he simply and briefly, with
remarkably little fanfare, presented a radically new foundation for
his algebra of logic (*LT*, pp. 37, 38).

He said that since the only idempotent numbers were 0 and 1, this
suggested that the correct algebra to use for logic would be the
common algebra of the ordinary numbers modified by restricting the
symbols to the values 0 and 1. He stated what, in this article, is
called *The Rule of 0 and 1*, that a law or argument held in
logic iff after being translated into equational form it held in
common algebra with this 0,1-restriction on the possible
interpretations (i.e., values) of the symbols. Boole would use this
Rule to justify his main theorems (Expansion, Reduction, Elimination),
and for no other purpose. The main theorems in turn yielded Boole’s
General Method for discovering the strongest possible consequences of
propositional premises under certain desired constraints (such as
eliminating some of the variables).
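As a concrete illustration (our own, not one of Boole's worked examples), the Rule of 0 and 1 can be applied directly to an equational argument such as the syllogism Barbara: All \(X\) is \(Y\) (\(xy = x\)) and All \(Y\) is \(Z\) (\(yz = y\)) entail All \(X\) is \(Z\) (\(xz = x\)):

```python
from itertools import product

def valid(premises, conclusion, n):
    """Rule of 0 and 1: an equational argument is valid iff every 0/1
    assignment satisfying all the premise equations also satisfies
    the conclusion equation."""
    return all(conclusion(*v)
               for v in product((0, 1), repeat=n)
               if all(p(*v) for p in premises))

barbara = valid(
    [lambda x, y, z: x * y == x,    # All X is Y
     lambda x, y, z: y * z == y],   # All Y is Z
    lambda x, y, z: x * z == x,     # All X is Z
    n=3)

bad = valid(
    [lambda x, y, z: x * y == x],   # All X is Y
    lambda x, y, z: y * x == y,     # All Y is X -- does not follow
    n=3)
```

Here `barbara` comes out true and `bad` false. (Note this check uses modern semantics, in which class symbols may denote the empty class; see the discussion of Aristotelian semantics in Section 5.)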

In Chapter V he discussed the role of *uninterpretables* in his
work; as a (partial) justification for the use of uninterpretable
steps in symbolic algebra he pointed to the well known use of
\(\sqrt{-1}\). Unfortunately his *Principles of Symbolical
Reasoning* do not, in general, apply to partial algebras, that is,
where some of the operations are only partially defined, such as
addition and subtraction in Boole’s algebra. Nonetheless it turns out
that they do apply to his algebra of logic. In succeeding chapters he
gave the Expansion Theorem, the new full-strength Elimination Theorem,
an improved Reduction Theorem, and the use of division to solve an
equation.

After many examples and results for special cases of solving
equations, Boole turned to the topic of the interpretability of a
logical function. Boole had already stated that *every equation is
interpretable* (by converting an equation into a collection of
constituent equations). However terms need not be interpretable, e.g.,
\(1+1\) is not interpretable. Working with the modern notion of terms,
one can recursively define the domain of interpretability of a
term. For example, \((x+y) - z\) has a different domain of
interpretability than the equivalent term \(x + (y -z)\). The first is
interpretable if and only if \(x\) and \(y\) are disjoint classes, and
\(z\) is contained in the union of \(x\) and \(y\). The second is
interpretable if and only if \(z\) is contained in \(y\), and \(x\)
and \(y \smallsetminus z\) are disjoint. Both terms are equivalent to the
same polynomial \(x + y - z\), leaving Boole with the problem of
determining when a polynomial \(p\) is interpretable. Eventually he
comes to the conclusion that the condition for a polynomial to be
equivalent to a (totally) interpretable elective function is that it
satisfy \(p^2 = p\), in which case it is equivalent to a sum of
distinct constituents, namely those belonging to the non-vanishing
moduli of \(p\). Of course a polynomial is idempotent if and only if
all of its moduli are idempotent, that is, they are in \(\{0, 1\}\),
in which case the expansion of the polynomial is a sum of distinct
constituents (or it is 0).

Boole’s chapter on secondary propositions is essentially the same as
in *MAL* except that he changed from using “the cases
when \(X\) is true” to “the times when \(X\) is
true”. In Chapter XIII Boole selected some well-known arguments
of Clarke and Spinoza, on the nature of an eternal being, to put under
the magnifying glass of his algebra of logic, starting with the
comment (*LT*, p. 185):

“2. The chief practical difficulty of this inquiry will consist, not in the application of the method to the premises once determined, but in ascertaining what the premises are.”

One conclusion was (*LT*, p. 216):

“19. It is not possible, I think, to rise from the perusal of the arguments of Clarke and Spinoza without a deep conviction of the futility of all endeavours to establish, entirely a priori, the existence of an Infinite Being, His attributes, and His relation to the universe.”

In the final chapter on logic, chapter XV, Boole presented his
analysis of the conversions and syllogisms of Aristotelian logic.
He now considered this ancient logic to be a weak, fragmented attempt at a
logical system.
This much neglected chapter is quite interesting
because it is the *only* chapter where he analyzed particular
propositions, making essential use of additional letters like
“\(v\)” to encode “some”.
This is also the chapter
where he detailed (unfortunately incompletely) the rules for working
with “some”.

Briefly stated, Boole gave the reader a summary of traditional Aristotelian categorical logic, and analyzed some simple examples using ad hoc techniques with his algebra of logic. Then he launched into proving a comprehensive result by applying his General Method to the pair of equations:

\[\begin{align} vx &= v'y \\ wz &= w'y, \end{align}\]noting that the premises of many categorical syllogisms can be put in this form. His goal was to eliminate \(y\) and find expressions for \(x, 1-x\) and \(vx\) in terms of \(z, v, v', w, w'\). This led to three equations involving large algebraic expressions. Boole omitted almost all details of his derivation, but summarized the results in terms of the established results of Aristotelian logic. Then he noted that the remaining categorical syllogisms are such that their premises can be put in the form:

\[\begin{align} vx &= v'y \\ wz &= w'(1-y), \end{align}\]and this led to another triple of large equations.

## 5. Later Developments

### 5.1 Objections to Boole’s Algebra of Logic

Many objections to Boole’s system have been published over the years; three among the most important concern:

- the use of uninterpretable expressions in derivations,
- the treatment of particular propositions by equations, and
- the method of dealing with division.

We look at a different objection, namely at the Boole/Jevons dispute
over adding \(x + x = x\) as a law. In *Laws
of Thought*, p. 66, Boole said:

The expression \(x + y\) seems indeed uninterpretable, unless it be assumed that the things represented by \(x\) and the things represented by \(y\) are entirely separate; that they embrace no individuals in common.

[The following details are from “The development of the theories of mathematical logic and the principles of mathematics, William Stanley Jevons,” by Philip Jourdain, 1914.]

In an 1863 letter to Boole regarding a draft of a commentary on
Boole’s system that Jevons was considering for his forthcoming book
(*Pure Logic*, 1864), Jevons said:

It is surely obvious, however, that \(x+x\) is equivalent only to \(x,\ldots\)

Professor Boole’s notation [process of subtraction] is inconsistent with a self-evident law.

If my view be right, his system will come to be regarded as a most remarkable combination of truth and error.

Boole replied:

Thus the equation \(x + x = 0\) is equivalent to the equation \(x = 0\); but the expression \(x + x\) is not equivalent to the expression \(x\).

Jevons responded by asking if Boole could deny the truth of \(x + x = x\).

Boole, clearly exasperated, replies:

To be explicit, I now, however, reply that it is not true that in Logic \(x + x = x\), though it is true that \(x + x = 0\) is equivalent to \(x = 0\). If I do not write more it is not from any unwillingness to discuss the subject with you, but simply because if we differ on this fundamental point it is impossible that we should agree in others.

Jevons’s final effort to get Boole to understand the issue was:

I do not doubt that it is open to you to hold …[that \(x + x = x\) is not true] according to the laws of your system, and with this explanation your system probably is perfectly consistent with itself … But the question then becomes a wider one—does your system correspond to the Logic of common thought?

Jevons’s new law, \(x + x = x\), resulted from his conviction that “+” should denote what we now call union, where the membership of \(x + y\) is given by an inclusive “or”. Boole simply did not see any way to define \(x + y\) as a class unless \(x\) and \(y\) were disjoint, as already noted.

Various explanations have been given as to why Boole could not comprehend the possibility of Jevons’s suggestion. Boole clearly had the semantic concept of union—he expressed the union of \(x\) and \(y\) as \(x + y(1-x)\), a union of two disjoint classes, and pointed out that the elements of this class are the ones that belong to either \(x\) or \(y\) or both. So how could he so completely fail to see the possibility of taking union for his fundamental operation + instead of his curious partial union operation?

The answer is simple: the law \(x + x = x\) would have destroyed his
ability to fully use *ordinary* algebra: from \(x + x = x\) one
has, by ordinary algebra, \(x = 0\). This would force every class
symbol to denote the empty class. Jevons’s proposed law \(x + x
= x\) was simply not true if one was committed to constructing the
algebra of logic on top of the laws and inference rules of ordinary
algebra. (Boolean rings have all the laws of the integers, but not all
of the inference rules, for example, \(2x = 0\) implies \(x = 0\) does
not hold in Boolean rings. It seems quite possible that Boole found
the simplest way to construct an algebra of logic for classes that
allowed one to use *all* the equations and equational arguments
that were valid for the integers.)

Perhaps it is interesting to note that the title of Jevons’s 1864 book
started out with the words *Pure Logic*, referring to the fact
that his version of the algebra of logic had been cleansed of
connections to the algebra of numbers. The same point would be made in
the introduction to Whitehead and Russell’s *Principia
Mathematica*, that they had adopted the notation of Peano in part
to free their work from such connections.

### 5.2 Modern Reconstruction of Boole’s System

Given the enormous degree of sophistication achieved in modern algebra
in the 20th century, it is rather surprising that a law-preserving
total algebra extension of Boole’s partial algebra of classes did not
appear until Theodore Hailperin’s book of 1976—the delay was
likely caused by readers not believing that Boole was using ordinary
algebra. Hailperin’s extension was to look at labelings of the
universe with integers, that is, each element of the universe is
labeled with an integer. Each labeling of the universe creates a
*signed multi-set* (perhaps one should say *signed
multi-class*) consisting of those labeled elements where the label
is non-zero. For *multi-sets*, whose labels are all
non-negative, one can think of the label of an element as describing
how many copies of the element are in the multi-set. Boole’s classes
correspond to the signed multi-sets where all the labels are 0 or 1
(the elements not in the class have the label 0). The uninterpretable
elements of Boole become interpretable when viewed as signed
multi-sets—they are given by labelings of the universe where
some label is *not* 0 or 1.

To add two signed multi-sets one simply adds the labels on each
element of the universe. Likewise for subtraction and multiplication.
(For the reader familiar with modern abstract algebra, one can take
the extension of Boole’s partial algebra to
be \(Z^U\) where \(Z\) is the ring of
integers, and \(U\) is the universe of discourse.) The signed
multi-sets corresponding to classes are precisely the idempotent
signed multi-sets. It turns out that the laws and principles Boole
was using in his algebra of logic hold for this system. By this means
Boole’s methods are proved to be correct for the algebra of logic
of *universal* propositions. Hailperin’s analysis did not
apply to particular propositions. Frank W. Brown’s paper
“George Boole’s deductive system” (2009) proposes that one can
avoid signed multi-sets by working with the ring of polynomials
\(Z[X]\) modulo a certain ideal.

Boole could not find a translation that worked as cleanly for the particular propositions as for the universal propositions. In 1847 Boole used the following two translations, the second one being a consequence of the first:

Some \(X\)s are \(Y\)s: \(v = xy\), and \(vx = vy\).

He initially used the symbol \(v\) to capture the essence of “some”. Later he used other symbols as well, and he also used \(v\) with other meanings (such as for the coefficients in an expansion). One of the problems with his translation scheme with \(v\) was that at times one needed “margin notes” to keep track of which class(es) the \(v\) was attached to when it was introduced. The rules for translating from equations with \(v\)’s back to particular statements were never clearly formulated. For example, in Chapter XV one sees a derivation of \(x = vv'y\) which is then translated as Some \(X\) is \(Y\). But he had no rules for when a product of \(v\)’s carries the import of “some”. Such problems detract from Boole’s system; his explanations leave doubts as to which procedures are legitimate in his system when dealing with particular statements.

There is one point on which even Hailperin was not faithful to Boole’s
work, namely he used *modern semantics*, where the
symbols \(x, y\), etc., can refer to the empty class as
well as to a non-empty class. With modern semantics one cannot have
the Conversion by Limitation which held in Aristotelian logic: from
All \(X\) is \(Y\) follows Some \(Y\) is \(X\).
In his *Formal Logic* of 1847, De Morgan pointed out that all
writers on logic had assumed that the subject of a universal
proposition was non-empty. The simplest way to deal
with this in an algebra of logic is to restrict class symbols to
represent non-empty classes; and given the interest in liberating the
role of contraries like not-\(x\), perhaps class symbols should
also be restricted to representing non-universe classes. Such a
convention will be called *Aristotelian semantics*. Boole had
evidently followed this Aristotelian convention because he derived all
the Aristotelian results, including Conversion by Limitation. A
proper interpretation (faithful to Boole’s work) of Boole’s system
requires Aristotelian semantics for the class
symbols \(x, y, z,\ldots\) ; unfortunately
it seems that the published literature on Boole’s system has failed to
note this. Authors seem content that Boole’s results,
especially his general theorems, are compatible with the
modern semantics of class symbols.

## 6. Boole’s Methods

While reading through this section, on the technical details of Boole’s methods, the reader may find it useful to consult the supplement of examples from Boole’s two books. These examples have been augmented with comments explaining, in each step of a derivation by Boole, which aspect of his methods is being employed.

### 6.1 The Three Methods of Argument Analysis Used by Boole in *LT*

Boole used three methods to analyze arguments in *LT*:

- The first was the purely ad hoc algebraic manipulation that was used (in conjunction with a weak version of the Elimination Theorem) on the Aristotelian arguments in *MAL*.
- The second, found in section 15 of Chapter II of *LT*, is the method that in this article is called the Rule of 0 and 1.
- The third, the master result that the theorems of *LT* combine to yield, is Boole’s General Method (in this article it will always be referred to using capitalized first letters; Boole just called it “a method”).

When applying the ad hoc method, he used parts of ordinary algebra along with the idempotent law \(x^2 = x\) to manipulate equations. There was no pre-established procedure to follow—success with this method depended on intuitive skills developed through experience.

The second method, the Rule of 0 and 1, is very powerful, but it
depends on being given a collection of premise equations and a
conclusion equation. It is a truth-table-like method (but Boole never
drew a table when applying it) for determining whether the argument is
correct. Boole only used this method to establish the theorems that
justified his General Method, even though it is an excellent tool for
verifying simple arguments like syllogisms. But Boole was more
interested in finding the most general conclusion from given premises,
modulo certain conditions, and aside from his general theorems, showed
no interest in simply verifying logical arguments. The Rule of 0 and
1 is a somewhat shadowy figure in *LT*—it has no name,
and is never referred to by section or page number. A precise version
of Boole’s Rule of 0 and 1 that yields Boole’s results is given in
Burris and Sankappanavar 2013.
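The truth-table-like idea behind the Rule of 0 and 1 can be sketched in a few lines of Python (a modern reconstruction, not Boole's own notation; function names are mine): an argument is accepted if every assignment of 0s and 1s to the class symbols that satisfies the premise equations also satisfies the conclusion equation. Here it verifies the syllogism Barbara.

```python
from itertools import product

def rule_of_0_and_1(premises, conclusion, n):
    """Accept an argument if every 0/1 assignment to the n class
    symbols that makes all premise terms 0 also makes the
    conclusion term 0."""
    for a in product((0, 1), repeat=n):
        if all(p(*a) == 0 for p in premises) and conclusion(*a) != 0:
            return False
    return True

# Barbara: All X is Y, All Y is Z, therefore All X is Z.
all_x_is_y = lambda x, y, z: x * (1 - y)
all_y_is_z = lambda x, y, z: y * (1 - z)
all_x_is_z = lambda x, y, z: x * (1 - z)

print(rule_of_0_and_1([all_x_is_y, all_y_is_z], all_x_is_z, 3))  # True
```

Dropping the second premise makes the argument invalid, and the check reports this as well.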

The third method to analyze arguments was the highlight of Boole’s
work in logic, his General Method (discussed immediately after this).
This is the one he used for all but the simplest examples in
*LT*; for the simplest examples he resorted to the first method
of ad hoc algebraic techniques because, for one skilled in algebraic
manipulations, using them is usually far more efficient than going
through the General Method.

The final version (from *LT*) of his General Method for
analyzing arguments is, briefly stated, to:

- convert (or translate) the propositions into equations,
- apply a prescribed sequence of algebraic processes to the equations, processes which yield desired conclusion equations, and then
- convert the equational conclusions into propositional conclusions, yielding the desired consequences of the original collection of propositions.

With this method Boole had replaced the art of reasoning from premise propositions to conclusion propositions by a routine mechanical algebraic procedure.

In *LT* Boole divided propositions into two kinds, primary
and secondary. These correspond to, but are not exactly the same as,
the Aristotelian division into categorical and hypothetical
propositions. First we discuss his General Method applied to primary
propositions.

### 6.2. Boole’s General Method for Primary Propositions

Boole recognized three forms of primary propositions:

- All \(X\) is \(Y\)
- All \(X\) is all \(Y\)
- Some \(X\) is \(Y\)

These were his version of the Aristotelian categorical propositions, where \(X\) is the subject term and \(Y\) the predicate term. The terms \(X\) and \(Y\) could be complex names, for example, \(X\) could be \(X_1\) or \(X_2\).

**STEP 1**: Names are converted into algebraic terms as
follows:

| Terms | *MAL* | page | *LT* | page |
| --- | --- | --- | --- | --- |
| universe | \(1\) | p.15 | \(1\) | p.48 |
| empty class | ––– | | \(0\) | p.47 |
| not \(X\) | \(1 - x\) | p.20 | \(1 - x\) | p.48 |
| \(X\) and \(Y\) | \(xy\) | p.16 | \(xy\) | p.28 |
| \(X\) or \(Y\) (inclusive) | ––– | | \(x + y(1 - x)\) or \(xy + x(1 - y) + y(1 - x)\) | p.56 |
| \(X\) or \(Y\) (exclusive) | ––– | | \(x(1 - y) + y(1 - x)\) | p.56 |

We will call the letters \(x, y,\ldots\) *class symbols* (as
noted earlier, the algebra of the 1800s did not use the word
*variables*).

**STEP 2**: Having converted names for the terms into algebraic terms,
one then converts the propositions into equations using the
following:

| Primary Propositions | *MAL* (1847) | page | *LT* (1854) | page |
| --- | --- | --- | --- | --- |
| All \(X\) is \(Y\) | \(x(1-y) = 0\) | p.26 | \(x = vy\) | pp.64,152 |
| No \(X\) is \(Y\) | \(xy = 0\) | | (not primary) | |
| All \(X\) is all \(Y\) | (not primary) | | \(x = y\) | |
| Some \(X\) is \(Y\) | \(v = xy\) | | \(vx = vy\) | |
| Some \(X\) is not \(Y\) | \(v = x(1-y)\) | | (not primary) | |

Prior to chapter XV, the one on Aristotelian logic, Boole’s
examples only use universal propositions. (One can speculate that he
had encountered difficulties with particular propositions and avoided
them.) Those of the form “All X is Y” are first expressed
as \(x = vy\), and then \(v\) is promptly eliminated, giving \(x =
xy\). (Similarly if \(X\) is replaced by not-\(X\), etc.) Boole said
this was merely a convenient but unnecessary step. For the examples in
the first fourteen chapters he could simply have used the translation
\(x = xy\), skipping the reference to \(v\). It seems that to simplify
notation he used the *same* letter \(v\) when there were
several universal premises, an incorrect step if one accepts
Boole’s claim that it is not necessary to eliminate the
\(v\)’s immediately. Distinct universal propositions require
different \(v\)’s in their translation. Else one can run into
the following situation. Consider the two premises “All \(X\) is
\(Z\)” and “All \(Y\) is \(Z\)”. Using the same
\(v\) for their equational expressions gives \(x = vz\) and \(y =
vz\), leading to the equation \(x = y\), and then to the false
conclusion \(X\) equals \(Y\). In chapter XV he was careful to use
distinct \(v\)’s for the expressions of distinct premises.

Boole used the four categorical propositions as his primary forms in 1847, but in 1854 he eliminated the negative propositional forms, noting that one could change “not \(Y\)” to “not-\(Y\)”. Thus in 1854 he would express “No \(X\) is \(Y\)” by “All \(X\) is not-\(Y\)”, with the translation \(x = v(1-y)\), and then eliminating \(v\) to obtain

\[ x(1 - (1 - y)) = 0, \]which simplifies to \(xy = 0\).

**STEP 3**: After converting the premises into algebraic form one has a
collection of equations, say

\[ p_1 = q_1, \quad p_2 = q_2, \quad \ldots, \quad p_n = q_n. \]

Express these as equations with 0 on the right side, that is, as

\[ r_1 = 0, \quad r_2 = 0, \quad \ldots, \quad r_n = 0, \]with

\[ r_1 := p_1 - q_1, \quad r_2 := p_2 - q_2, \quad \dots, \quad r_n := p_n - q_n. \]
**STEP 4**: (REDUCTION) [*LT* (p. 121)]

Reduce the system of equations

\[ r_1 = 0, \quad r_2 = 0, \quad \ldots, \quad r_n = 0, \]to a single equation \(r = 0\). Boole had three different methods for doing this—he seemed to have a preference for summing the squares:

\[ r := r_1^2 + \cdots + r_n^2 = 0. \]

Steps 1 through 4 are mandatory in Boole’s General Method. After executing these steps there are various options for continuing, depending on the goal.
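The sum-of-squares reduction can be checked numerically. In the sketch below (a modern reconstruction, with names of my own choosing), the combined term vanishes at a 0/1 assignment exactly when every premise term does, since a sum of squares of real numbers is zero only when each summand is zero.

```python
from itertools import product

def reduce_system(terms):
    """Combine the system terms[i] = 0 into a single equation r = 0
    by summing squares; over the reals, the sum vanishes exactly
    when every summand does."""
    return lambda *args: sum(t(*args) ** 2 for t in terms)

r1 = lambda x, y, z: x * (1 - y)    # All X is Y
r2 = lambda x, y, z: y * (1 - z)    # All Y is Z
r = reduce_system([r1, r2])

# r = 0 precisely where both premises hold:
print(all((r(*a) == 0) == (r1(*a) == 0 and r2(*a) == 0)
          for a in product((0, 1), repeat=3)))  # True
```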

**STEP 5**: (ELIMINATION) [*LT* (p. 101)]

Suppose one wants the most general equational conclusion derived from \(r = 0\) that involves some, but not all, of the class symbols in \(r\). Then one wants to eliminate certain symbols. Suppose \(r\) involves the class symbols

\[ x_1, \ldots, x_j \text{ and } y_1, \ldots, y_k. \]Then one can write \(r\) as \(r(x_1, \ldots, x_j, y_1, \ldots ,y_k)\).

Boole’s procedure to eliminate the symbols \(x_1, \ldots ,x_j\) from

\[ r(x_1, \ldots, x_j, y_1, \ldots, y_k) = 0 \]to obtain

\[ s(y_1, \ldots, y_k) = 0 \]was as follows:

- form all possible expressions \(r(a_1, \ldots, a_j, y_1, \ldots, y_k)\) where \(a_1, \ldots, a_j\) are each either 0 or 1, then
- multiply all of these expressions together to obtain \(s(y_1, \ldots, y_k)\).

For example, eliminating \(x_1, x_2\) from

\[ r(x_1, x_2, y) = 0 \]gives

\[ s(y) = 0 \]where

\[ s(y) := r(0, 0, y) \cdot r(0, 1, y) \cdot r(1, 0, y) \cdot r(1, 1, y). \]
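Elimination is mechanical enough to sketch directly: substitute every combination of 0s and 1s for the symbols to be eliminated and multiply the results. The following Python reconstruction (function names are mine) applies it to the combined premise equation of the syllogism Barbara; each summand there is already 0/1-valued, so the sum-of-squares reduction gives the same values as the plain sum.

```python
from itertools import product

def eliminate(r, n_elim):
    """Boole's elimination of the first n_elim arguments of r:
    multiply together all 0/1 substitutions for them."""
    def s(*ys):
        out = 1
        for a in product((0, 1), repeat=n_elim):
            out *= r(*a, *ys)
        return out
    return s

# Premises of Barbara, with the middle term y listed first:
# r(y, x, z) = x(1-y) + y(1-z)
r = lambda y, x, z: x * (1 - y) + y * (1 - z)

s = eliminate(r, n_elim=1)     # s(x, z) = r(0, x, z) * r(1, x, z)
print([s(x, z) for x, z in product((0, 1), repeat=2)])
# agrees with x*(1-z) on 0/1 inputs, i.e., "All X is Z"
```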
**STEP 6**: (DEVELOPMENT, or EXPANSION)
[*MAL* (p. 60), *LT* (pp. 72, 73)].

Given a term, say \(r(x_1, \ldots, x_j, y_1, \ldots, y_k)\), one can expand the term with respect to a subset of the class symbols. To expand with respect to \(x_1, \ldots, x_j\) gives

\[ r = \text{ sum of the terms } r(a_1, \ldots, a_j, y_1 ,\ldots, y_k) \cdot C(a_1, x_1) \cdots C(a_j, x_j), \]where \(a_1 , \ldots ,a_j\) range over all sequences of 0s and 1s of length \(j\), and where the \(C(a_i, x_i)\) are defined by:

\[ C(1, x_i) := x_i, \text{ and } C(0, x_i) := 1 - x_i. \]

Boole said the products:

\[ C(a_1, x_1) \cdots C(a_j, x_j) \]
were the *constituents* of \(x_1 , \ldots ,x_j\). There are \(2^j\)
different constituents for \(j\) symbols. The regions of a Venn
diagram give a popular way to visualize constituents.
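Development can also be carried out mechanically. The Python sketch below (a modern reconstruction, with names of my own choosing) expands a term over all of its symbols, taking \(r(a_1,\ldots,a_j)\) as the coefficient of each constituent, and checks that the expansion agrees with the original term on 0/1 values.

```python
from itertools import product

def constituent(a, xs):
    """C(a1,x1)...C(aj,xj): the product of xi where ai = 1
    and (1 - xi) where ai = 0."""
    out = 1
    for ai, xi in zip(a, xs):
        out *= xi if ai == 1 else 1 - xi
    return out

def develop(r, j):
    """Expand r over its j symbols; the coefficient of the
    constituent for a = (a1, ..., aj) is r(a1, ..., aj)."""
    coeffs = {a: r(*a) for a in product((0, 1), repeat=j)}
    return lambda *xs: sum(c * constituent(a, xs) for a, c in coeffs.items())

r = lambda x, y: x + y - 3 * x * y       # an arbitrary term
rhat = develop(r, 2)
print(all(r(x, y) == rhat(x, y) for x, y in product((0, 1), repeat=2)))  # True
```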

**STEP 7**: (DIVISION: SOLVING FOR A CLASS SYMBOL)
[*MAL* (p. 73), *LT* (pp. 86, 87)]

Given an equation \(r = 0\), suppose one wants to solve this equation for one of the class symbols, say \(x\), in terms of the other class symbols, say they are \(y_1 , \ldots ,y_k\). To solve:

\[ r(x, y_1 , \ldots ,y_k) = 0 \]for \(x\), first let:

\[\begin{align} N(y_1 , \ldots ,y_k) &= - r(0, y_1 , \ldots ,y_k) \\ D(y_1 , \ldots ,y_k) &= r(1, y_1 , \ldots ,y_k) - r(0, y_1 , \ldots ,y_k). \end{align}\]Then:

\[\tag{*} x = s(y_1 ,\ldots ,y_k) \]where \(s(y_1 ,\ldots ,y_k)\) is:

the sum of all constituents \(C(a_1, y_1) \cdots C(a_k, y_k)\) where \(a_1 , \ldots ,a_k\) range over all sequences of 0s and 1s for which:

\[ N(a_1 , \ldots ,a_k) = D(a_1 , \ldots ,a_k) \ne 0, \]

plus

the sum of all the terms of the form \(V_{a_1 \ldots a_k} \cdot C(a_1, y_1) \cdots C(a_k, y_k)\) for which:

\[ N(a_1 , \ldots ,a_k) = D(a_1 , \ldots ,a_k) = 0. \]

The \(V_{a_1 \ldots a_k}\) are parameters, denoting arbitrary classes (similar to what one sees in the study of linear differential equations, a subject in which Boole was an expert).

To the equation (*) for \(x\) adjoin the side-conditions (that we
will call *constituent equations*)

\[ C(a_1, y_1) \cdots C(a_k, y_k) = 0 \]

whenever

\[ D(a_1 , \ldots ,a_k) \ne N(a_1 , \ldots ,a_k) \ne 0. \]

Note that one is to evaluate the terms:

\[ D(a_1 , \ldots ,a_k) \text{ and } N(a_1 , \ldots ,a_k) \]using ordinary arithmetic. Thus solving an equation \(r = 0\) for a class symbol \(x\) gives an equation

\[ x = s(y_1 ,\ldots ,y_k), \]perhaps with side-condition constituent equations.
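The classification of the coefficients \(N/D\) at each constituent (include it when \(N = D \ne 0\), attach an arbitrary class \(V\) when \(N = D = 0\), omit it when the coefficient is 0, and record a side-condition otherwise) can be sketched as follows. This is a modern reconstruction; the function name and the classification labels are my own.

```python
from itertools import product

def solve_for_x(r, k):
    """Classify the constituents of y1, ..., yk in Boole's solution
    of r(x, y1, ..., yk) = 0 for x (a reconstruction; labels mine)."""
    out = {"include": [], "arbitrary": [], "omit": [], "side_condition": []}
    for a in product((0, 1), repeat=k):
        N = -r(0, *a)                  # numerator at this constituent
        D = r(1, *a) - r(0, *a)        # denominator at this constituent
        if N == D != 0:
            out["include"].append(a)         # coefficient N/D = 1
        elif N == D == 0:
            out["arbitrary"].append(a)       # coefficient 0/0: arbitrary V
        elif N == 0:
            out["omit"].append(a)            # coefficient 0
        else:
            out["side_condition"].append(a)  # constituent must equal 0
    return out

# Solving x = yz for y, i.e., r(y, x, z) = x - yz:
classes = solve_for_x(lambda y, x, z: x - y * z, 2)
print(classes)
# y = xz + V(1-x)(1-z), with side condition x(1-z) = 0
```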

**STEP 8**: (INTERPRETATION) [*MAL* pp. 64–65, *LT*
(Chap. VI, esp. pp. 82–83)]

Suppose the equation \(r(y_1 , \ldots ,y_k) = 0\) has been obtained by Boole’s method from a given collection of premise equations. Then this equation is equivalent to the collection of constituent equations

\[ C(a_1, y_1) \cdots C(a_k, y_k) = 0 \]for which \(r(a_1 , \ldots ,a_k)\) is not 0. A constituent equation merely asserts that a certain intersection of the original classes and their complements is empty. For example,

\[ y_1 (1-y_2)(1-y_3) = 0 \]expresses the proposition “All \(Y_1\) is \(Y_2\) or \(Y_3\),” or equivalently, “All \(Y_1\) and not \(Y_2\) is \(Y_3\).” It is routine to convert constituent equations into propositions.
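Interpretation amounts to reading off the constituents with nonzero coefficient, each of which must denote an empty class. A minimal Python sketch (a reconstruction; the function name is mine):

```python
from itertools import product

def interpret(r, k):
    """The constituent equations equivalent to r(y1, ..., yk) = 0:
    each constituent a with r(a) != 0 denotes an empty class."""
    return [a for a in product((0, 1), repeat=k) if r(*a) != 0]

# x(1 - z) = 0, i.e., "All X is Z": only the constituent x(1-z) is forced empty.
print(interpret(lambda x, z: x * (1 - z), 2))  # [(1, 0)]
```

Applied to the example in the text, \(y_1(1-y_2)(1-y_3) = 0\) yields the single constituent \((1, 0, 0)\), matching “All \(Y_1\) is \(Y_2\) or \(Y_3\)”.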

### 6.3. Boole’s General Method for Secondary Propositions

Secondary propositions were Boole’s version of the propositions that one encounters in the study of hypothetical syllogisms in Aristotelian logic, statements like “If \(X\) or \(Y\) then \(Z\).” The symbols \(X, Y, Z\), etc. of secondary propositions did not refer to classes, but rather they referred to (primary) propositions. In keeping with the incomplete nature of the Aristotelian treatment of hypothetical propositions, Boole did not give a precise description of possible forms for his secondary propositions.

The key (but not original) observation that Boole used was simply that
one can convert secondary propositions into primary propositions. In
*MAL* he adopted the convention found in Whately (1826), that
given a propositional symbol \(X\), the symbol \(x\) will
denote “the cases in which \(X\) is true”, whereas in
*LT* Boole let \(x\) denote “the times for which
\(X\) is true”. With this the secondary proposition
“If \(X\) or \(Y\) then \(Z\)” becomes
simply “All \(x\) or \(y\) is \(z\)”. The
equation \(x = 1\) is the equational translation of
“\(X\) is true” (in all cases, or for all times), and
\(x = 0\) says “\(X\) is false” (in
all cases, or for all times). The concepts of *all cases* and
*all times* depend on the choice of the universe of
discourse.

With this translation scheme it is clear that Boole’s treatment of secondary propositions can be analyzed by the methods he had developed for primary propositions. This was Boole’s propositional logic.
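For instance, modus ponens can be checked with the primary-proposition machinery. The 0/1 check below is a modern sketch in the spirit of Boole's translation scheme, not his own notation:

```python
from itertools import product

# Secondary propositions translated as primary ones:
# "If X then Y" -> x*(1-y) = 0,   "X is true" -> 1 - x = 0.
# Every 0/1 assignment satisfying both premises satisfies 1 - y = 0,
# so "Y is true" follows (modus ponens).
premises = [lambda x, y: x * (1 - y), lambda x, y: 1 - x]
conclusion = lambda x, y: 1 - y

valid = all(conclusion(x, y) == 0
            for x, y in product((0, 1), repeat=2)
            if all(p(x, y) == 0 for p in premises))
print(valid)  # True
```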

Boole worked mainly with Aristotelian propositions in *MAL*,
using the traditional division into categoricals and hypotheticals.
One does not consider “\(X\) and \(Y\),”
“\(X\) or \(Y\),” etc., in categorical
propositions, only in hypothetical propositions. In *LT* this
division was replaced by the similar but more general primary versus
secondary classification, where the subject and predicate were allowed
to become complex names, and the number of propositions in an argument
became unrestricted. With this the parallels between the logic of
primary propositions and that of secondary propositions became clear,
with one notable difference, namely it seems that the secondary
propositions that Boole considered always translated into universal
primary propositions.

| Secondary Propositions | *MAL* (1847) | page | *LT* (1854) | page |
| --- | --- | --- | --- | --- |
| \(X\) is true | \(x = 1\) | p.51 | \(x = 1\) | p.172 |
| \(X\) is false | \(x = 0\) | p.51 | \(x = 0\) | p.172 |
| \(X\) and \(Y\) | \(xy = 1\) | p.51 | \(xy = 1\) | p.172 |
| \(X\) or \(Y\) (inclusive) | \(x + y - xy = 1\) | p.52 | ––– | |
| \(X\) or \(Y\) (exclusive) | \(x - 2xy + y = 1\) | p.53 | \(x(1 - y) + y(1 - x) = 1\) | p.173 |
| If \(X\) then \(Y\) | \(x(1-y) = 0\) | p.54 | \(x = vy\) | p.173 |

## Bibliography

### Primary Literature

- Boole, G., 1841, “Researches on the Theory of Analytical Transformations, with a special application to the Reduction of the General Equation of the Second Order,” *The Cambridge Mathematical Journal*, 2: 64–73.
- –––, 1841, “On Certain Theorems in the Calculus of Variations,” *The Cambridge Mathematical Journal*, 2: 97–102.
- –––, 1841, “On the Integration of Linear Differential Equations with Constant Coefficients,” *The Cambridge Mathematical Journal*, 2: 114–119.
- –––, 1847, *The Mathematical Analysis of Logic, Being an Essay Towards a Calculus of Deductive Reasoning*, originally published in Cambridge by Macmillan, Barclay, & Macmillan; reprinted in Oxford by Basil Blackwell, 1951.
- –––, 1848, “The Calculus of Logic,” *The Cambridge and Dublin Mathematical Journal*, 3: 183–198.
- –––, 1854, *An Investigation of The Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities*, originally published by Macmillan, London; reprint by Dover, 1958.
- –––, 1859, *A Treatise on Differential Equations*, Cambridge: Macmillan.
- –––, 1860, *A Treatise on the Calculus of Finite Differences*, Cambridge: Macmillan.
- De Morgan, A., 1839, “On the foundation of algebra,” *Transactions of the Cambridge Philosophical Society*, VII: 174–187.
- –––, 1841, “On the foundation of algebra, No. II,” *Transactions of the Cambridge Philosophical Society*, VII: 287–300.
- –––, 1847, *Formal Logic: or, the Calculus of Inference, Necessary and Probable*, originally published in London by Taylor and Walton; reprinted in London by The Open Court Company, 1926.
- –––, 1966, *On the Syllogism, and Other Logical Writings*, P. Heath (ed.), New Haven: Yale University Press. (A posthumous collection of De Morgan’s papers on logic.)
- Gregory, D.F., 1839, “Demonstrations in the differential calculus and the calculus of finite differences,” *The Cambridge Mathematical Journal*, Vol. I: 212–222.
- –––, 1839, “I.–On the elementary principles of the application of algebraical symbols to geometry,” *The Cambridge Mathematical Journal*, Vol. II, No. VII: 1–9.
- –––, 1840, “On the real nature of symbolical algebra,” *Transactions of the Royal Society of Edinburgh*, 14: 208–216. Also in [Gregory 1865, pp. 1–13].
- –––, 1865, *The Mathematical Writings of Duncan Farquharson Gregory, M.A.*, W. Walton (ed.), Cambridge, UK: Deighton, Bell.
- Jevons, W.S., 1864, *Pure Logic, or the Logic of Quality apart from Quantity: with Remarks on Boole’s System and on the Relation of Logic and Mathematics*, London: Edward Stanford; reprinted 1971 in *Pure Logic and Other Minor Works*, R. Adamson and H.A. Jevons (eds.), New York: Lennox Hill Pub. & Dist. Co.
- Lacroix, S.F., 1797/1798, *Traité du calcul différentiel et du calcul intégral*, Paris: Chez Courcier.
- Lagrange, J.L., 1797, *Théorie des fonctions analytiques*, Paris: Imprimerie de la République.
- –––, 1788, *Méchanique Analytique*, Paris: Desaint.
- Peacock, G., 1830, *Treatise on Algebra*, 2nd ed., 2 vols., Cambridge: J. & J.J. Deighton, 1842/1845.
- –––, 1833, “Report on the Recent Progress and Present State of certain Branches of Analysis”, in *Report of the Third Meeting of the British Association for the Advancement of Science* held at Cambridge in 1833, pp. 185–352, London: John Murray.
- Schröder, E., 1890–1910, *Algebra der Logik* (Vols. I–III), Leipzig: B.G. Teubner; reprint Chelsea, 1966.
- Whately, R., 1826, *Elements of Logic*, London: J. Mawman.

### Secondary Literature

#### Cited Works

- Boole, G., 1997, *Selected Manuscripts on Logic and its Philosophy* (Science Networks Historical Studies: Volume 20), I. Grattan-Guinness and G. Bornet (eds.), Basel, Boston, and Berlin: Birkhäuser Verlag.
- Brown, F.M., 2009, “George Boole’s deductive system”, *Notre Dame Journal of Formal Logic*, 50: 303–330.
- Burris, S. and Sankappanavar, H.P., 2013, “The Horn theory of Boole’s partial algebras”, *The Bulletin of Symbolic Logic*, 19: 97–105.
- Ewald, W. (ed.), 1996, *From Kant to Hilbert: A Source Book in the History of Mathematics*, 2 volumes, Oxford: Oxford University Press.
- Grattan-Guinness, I., 2001, *The Search for Mathematical Roots*, Princeton, NJ: Princeton University Press.
- Hailperin, T., 1976, *Boole’s Logic and Probability* (Studies in Logic and the Foundations of Mathematics: Volume 85), Amsterdam, New York, Oxford: Elsevier North-Holland; 2nd edition, revised and enlarged, 1986.
- –––, 1981, “Boole’s algebra isn’t Boolean algebra”, *Mathematics Magazine*, 54: 172–184.
- Jourdain, P.E.B., 1914, “The development of the theories of mathematical logic and the principles of mathematics. William Stanley Jevons”, *Quarterly Journal of Pure and Applied Mathematics*, 44: 113–128.
- MacHale, D., 1985, *George Boole, His Life and Work*, Dublin: Boole Press; 2nd edition, 2014, Cork University Press.

#### Other Important Literature

- Aiken, H.A., 1951, *Synthesis of Electronic Computing and Control Circuits*, Cambridge, MA: Harvard University Press.
- Burris, S.N., 2015, “George Boole and Boolean Algebra”, *European Mathematical Society Newsletter*, 98: 27–31.
- Couturat, L., 1905, *L’algèbre de la Logique*, 2nd edition, Paris: Librairie Scientifique et Technique Albert Blanchard; English translation by Lydia G. Robinson, Chicago & London: Open Court Publishing Co., 1914; reprinted by Dover Publications, Mineola, 2006.
- Dummett, M., 1959, “Review of *Studies in Logic and Probability* by George Boole” (R. Rhees (ed.), London: Watts & Co., 1952), *The Journal of Symbolic Logic*, 24: 203–209.
- Frege, G., 1880, “Boole’s logical calculus and the concept-script”, in *Gottlob Frege: Posthumous Writings*, Oxford: Basil Blackwell, 1979; English translation of *Nachgelassene Schriften* (Volume 1), H. Hermes, F. Kambartel, and F. Kaulbach (eds.), Hamburg: Felix Meiner, 1969.
- Kneale, W., and M. Kneale, 1962, *The Development of Logic*, Oxford: The Clarendon Press.
- Lewis, C.I., 1918, *A Survey of Symbolic Logic*, Berkeley: University of California Press; reprinted by Dover Publications, Inc., New York, 1960; see Chap. II, “The Classic, or Boole-Schröder Algebra of Logic”.
- Peirce, C.S., 1880, “On the Algebra of Logic”, *American Journal of Mathematics*, 3: 15–57.
- Smith, G.C., 1983, “Boole’s annotations on *The Mathematical Analysis of Logic*”, *History and Philosophy of Logic*, 4: 27–39.
- Styazhkin, N.I., 1969, *Concise History of Mathematical Logic from Leibniz to Peano*, Cambridge, MA: The MIT Press.
- van Evra, J.W., 1977, “A reassessment of George Boole’s theory of logic”, *Notre Dame Journal of Formal Logic*, 18: 363–377.
- Venn, J., 1894, *Symbolic Logic*, 2nd edition, London: Macmillan; reprinted, revised and rewritten, Bronx: Chelsea Publishing Co., 1971.
- Whitney, H., 1933, “Characteristic functions and the algebra of logic”, *Annals of Mathematics* (Second Series), 34: 405–414.

## Academic Tools

- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.

## Other Internet Resources

- George Boole, The MacTutor History of Mathematics Archive
- Augustus De Morgan, Duncan Farquharson Gregory, William Jevons, George Peacock, Ernst Schröder, The MacTutor History of Mathematics Archive
- Algebraic Logic Group, Alfred Reyni Institute of Mathematics, Hungarian Academy of Sciences
- George Boole 200, maintained at University College Cork.