The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models.
Definition
A Boolean algebra is a six-tuple consisting of a set A, equipped with two binary operations ∧ (called "meet" or "and") and ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold:[2]
a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c (associativity)
a ∨ b = b ∨ a and a ∧ b = b ∧ a (commutativity)
a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a (absorption)
a ∨ 0 = a and a ∧ 1 = a (identity)
a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) (distributivity)
a ∨ ¬a = 1 and a ∧ ¬a = 0 (complements)
Note, however, that the absorption law can be excluded from the set of axioms, as it can be derived from the other axioms.
A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (Some authors require 0 and 1 to be distinct elements in order to exclude this case.)
It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that
a = b ∧ a if and only if a ∨ b = b.
The relation ≤, defined by a ≤ b if and only if these equivalent conditions hold, is a partial order with least element 0 and greatest element 1. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤.
The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique.
The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual.[3]
Examples
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1, and is defined by the rules:
∧   0   1
0   0   0
1   0   1

∨   0   1
0   0   1
1   1   1

a    0   1
¬a   1   0
It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not. Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent.
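Under this reading the tables translate directly into executable code. The following is a minimal Python sketch (the names meet, join, comp, bottom and top are illustrative, not drawn from any library), which verifies the defining axioms by exhaustive enumeration:

```python
from itertools import product

# Two-element Boolean algebra: carrier {False, True} with the usual connectives.
meet = lambda a, b: a and b        # ∧
join = lambda a, b: a or b         # ∨
comp = lambda a: not a             # ¬
bottom, top = False, True          # 0 and 1

def axioms_hold():
    """Check associativity, commutativity, absorption, identity,
    distributivity and complements over all triples of elements."""
    for a, b, c in product((False, True), repeat=3):
        assert join(a, join(b, c)) == join(join(a, b), c)               # associativity of ∨
        assert meet(a, meet(b, c)) == meet(meet(a, b), c)               # associativity of ∧
        assert join(a, b) == join(b, a) and meet(a, b) == meet(b, a)    # commutativity
        assert join(a, meet(a, b)) == a and meet(a, join(a, b)) == a    # absorption
        assert join(a, bottom) == a and meet(a, top) == a               # identity
        assert join(a, meet(b, c)) == meet(join(a, b), join(a, c))      # distributivity of ∨ over ∧
        assert meet(a, join(b, c)) == join(meet(a, b), meet(a, c))      # distributivity of ∧ over ∨
        assert join(a, comp(a)) == top and meet(a, comp(a)) == bottom   # complements
    return True

print(axioms_hold())  # True
```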
The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input-output behavior. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean expression.
The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving finitely many variables holds identically in all Boolean algebras if and only if it holds identically in the two-element Boolean algebra (which can be checked by a trivial brute-force algorithm for small numbers of variables). This can, for example, be used to show that the following laws (consensus theorems) are valid in all Boolean algebras; a brute-force check is sketched below:
(a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c)
(a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c)
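A minimal Python sketch of that brute-force check over the two-element algebra (names are illustrative):

```python
from itertools import product

def consensus_laws_hold():
    # Check both consensus theorems over all assignments of a, b, c in {False, True}.
    for a, b, c in product((False, True), repeat=3):
        lhs1 = (a or b) and ((not a) or c) and (b or c)
        rhs1 = (a or b) and ((not a) or c)
        lhs2 = (a and b) or ((not a) and c) or (b and c)
        rhs2 = (a and b) or ((not a) and c)
        if lhs1 != rhs1 or lhs2 != rhs2:
            return False
    return True

print(consensus_laws_hold())  # True
```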
The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection); the complement ¬a of a subset a is its complement relative to S, namely S ∖ a. The smallest element 0 is the empty set and the largest element 1 is the set S itself.
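As an illustration, here is a small Python sketch of the power-set algebra of a three-element set (the helper power_set and the operation names are assumptions for the example, not part of any standard API); it spot-checks De Morgan's law, a consequence of the axioms:

```python
from itertools import chain, combinations

S = frozenset({1, 2, 3})

def power_set(s):
    """All subsets of s, as frozensets."""
    s = sorted(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

join = lambda a, b: a | b        # ∨ is union
meet = lambda a, b: a & b        # ∧ is intersection
comp = lambda a: S - a           # ¬ is complement relative to S
bottom, top = frozenset(), S     # 0 is the empty set, 1 is S

# Spot-check a consequence of the axioms, De Morgan's law ¬(a ∨ b) = ¬a ∧ ¬b.
elements = power_set(S)
print(all(comp(join(a, b)) == meet(comp(a), comp(b))
          for a in elements for b in elements))  # True
```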
After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms:
∧   0   a   b   1
0   0   0   0   0
a   0   a   0   a
b   0   0   b   b
1   0   a   b   1

∨   0   a   b   1
0   0   a   b   1
a   a   a   1   1
b   b   1   b   1
1   1   1   1   1

x    0   a   b   1
¬x   1   b   a   0
The set of all subsets of S that are either finite or cofinite is a Boolean algebra, an algebra of sets.
Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra (that is, the set of sentences in the propositional calculus modulo logical equivalence). This construction yields a Boolean algebra. It is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra.
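For finitely many sentence symbols the construction can be made concrete: two sentences are identified exactly when they have the same truth table, so the free Boolean algebra on n generators has 2^(2^n) elements, and a truth assignment amounts to evaluating every table at one fixed row. A hedged Python sketch for n = 2 (the helper table is an illustrative name):

```python
from itertools import product

n = 2                                             # two sentence symbols, p and q
rows = list(product((False, True), repeat=n))     # the 2^n truth assignments

def table(formula):
    """Truth table of a formula, given as a Python function of the symbols."""
    return tuple(formula(*row) for row in rows)

# Two syntactically different sentences with the same truth table are the same
# element of the Lindenbaum algebra; here p -> q and not(p and not q).
implies    = lambda p, q: (not p) or q
equivalent = lambda p, q: not (p and (not q))
print(table(implies) == table(equivalent))        # True

# The free Boolean algebra on n generators has 2^(2^n) elements, one per truth table;
# a truth assignment corresponds to evaluating every table at one fixed row.
all_tables = set(product((False, True), repeat=len(rows)))
print(len(all_tables) == 2 ** (2 ** n))           # True
```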
Given any linearly ordered set L with a least element, the interval algebra is the smallest algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval algebras are useful in the study of Lindenbaum-Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra.
For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice. This lattice is a Boolean algebra if and only if n is square-free. The bottom and top elements of this Boolean algebra are the natural numbers 1 and n, respectively. The complement of a is given by n/a. The meet and join of a and b are given by the greatest common divisor (gcd) and the least common multiple (lcm) of a and b, respectively. The ring addition a + b is given by lcm(a,b)/gcd(a,b). For example, n = 30 yields the eight-element Boolean algebra of divisors {1, 2, 3, 5, 6, 10, 15, 30}. As a counterexample, for the non-square-free n = 60, the would-be complement of 30 is 60/30 = 2, but the greatest common divisor of 30 and 2 is 2 rather than the bottom element 1.
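A hedged Python sketch of the n = 30 case and of the failure at n = 60 (the operation names are illustrative):

```python
from math import gcd

n = 30  # square-free: 30 = 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]   # [1, 2, 3, 5, 6, 10, 15, 30]

meet = lambda a, b: gcd(a, b)                 # ∧ is gcd
join = lambda a, b: a * b // gcd(a, b)        # ∨ is lcm
comp = lambda a: n // a                       # ¬a = n/a
bottom, top = 1, n

# Complement laws hold because 30 is square-free: a ∧ ¬a = 1 and a ∨ ¬a = 30.
print(all(meet(a, comp(a)) == bottom and join(a, comp(a)) == top
          for a in divisors))                 # True

# For n = 60 (not square-free) the same recipe fails: gcd(30, 60 // 30) is 2, not 1.
print(gcd(30, 60 // 30))                      # 2
```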
Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the collection of all subsets of X which are both open and closed forms a Boolean algebra with the operations ∨ := ∪ (union) and ∧ := ∩ (intersection).
If R is an arbitrary ring and we define the set of central idempotents by A = { e ∈ R : e² = e, ex = xe, ∀x ∈ R }, then the set A becomes a Boolean algebra with the operations e ∨ f := e + f - ef and e ∧ f := ef.
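For instance, in the commutative ring Z/6Z (where every element is central) the idempotents are 0, 1, 3 and 4, and they form a four-element Boolean algebra under these operations. A hedged Python sketch (illustrative names; the complement used is the standard complementary idempotent 1 - e):

```python
from itertools import product

m = 6  # work in the ring Z/6Z; it is commutative, so every idempotent is central
idempotents = [e for e in range(m) if (e * e) % m == e]   # [0, 1, 3, 4]

join = lambda e, f: (e + f - e * f) % m    # e ∨ f := e + f - ef
meet = lambda e, f: (e * f) % m            # e ∧ f := ef
comp = lambda e: (1 - e) % m               # complementary idempotent 1 - e
bottom, top = 0, 1

# The operations stay inside the set of idempotents and satisfy the complement laws.
closed = all(join(e, f) in idempotents and meet(e, f) in idempotents
             for e, f in product(idempotents, repeat=2))
complements = all(meet(e, comp(e)) == bottom and join(e, comp(e)) == top
                  for e in idempotents)
print(closed and complements)  # True
```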
Homomorphisms and isomorphisms
A homomorphism between two Boolean algebras A and B is a function f : A → B such that for all a, b in A:
f(a ∨ b) = f(a) ∨ f(b),
f(a ∧ b) = f(a) ∧ f(b),
f(0) = 0,
f(1) = 1.
It then follows that f(¬a) = ¬f(a) for all a in A. The class of all Boolean algebras, together with this notion of morphism, forms a full subcategory of the category of lattices.
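For example, if s is a fixed element of a set S, the map sending a subset of S to 1 or 0 according to whether it contains s is a homomorphism from the power-set algebra of S to the two-element Boolean algebra. A brief Python sketch (illustrative names, with S = {1, 2, 3} and s = 2 chosen arbitrarily):

```python
from itertools import chain, combinations

S = frozenset({1, 2, 3})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(sorted(S), r) for r in range(len(S) + 1))]

s = 2                    # a fixed element of S (arbitrary illustrative choice)
f = lambda a: s in a     # f : P(S) -> {False, True}

# Homomorphism conditions: f preserves ∨ (union), ∧ (intersection), 0 (∅) and 1 (S).
preserves_ops = all(f(a | b) == (f(a) or f(b)) and f(a & b) == (f(a) and f(b))
                    for a in subsets for b in subsets)
preserves_bounds = (not f(frozenset())) and f(S)   # f(0) = 0 and f(1) = 1
print(preserves_ops and preserves_bounds)  # True
```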
Every Boolean algebra (A, ∧, ∨) gives rise to a ring (A, +, ·) by defining a + b := (a ∧ ¬b) ∨ (b ∧ ¬a) = (a ∨ b) ∧ ¬(a ∧ b) (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and a · b := a ∧ b. The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the ring is the 1 of the Boolean algebra. This ring has the property that a · a = a for all a in A; rings with this property are called Boolean rings.
Conversely, if a Boolean ring A is given, we can turn it into a Boolean algebra by defining x ∨ y := x + y + (x · y) and x ∧ y := x · y.[4][5]
Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map f : A → B is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are equivalent.[6]
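The two constructions can be seen side by side on an algebra of sets, where ring addition is symmetric difference and multiplication is intersection. A hedged Python sketch (illustrative names) recovers ∨ as x + y + x·y and confirms the idempotence a · a = a:

```python
from itertools import chain, combinations

S = frozenset({1, 2, 3})
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(sorted(S), r) for r in range(len(S) + 1))]

# Ring structure obtained from the algebra of sets.
add = lambda a, b: a ^ b      # a + b := (a ∧ ¬b) ∨ (b ∧ ¬a), i.e. symmetric difference
mul = lambda a, b: a & b      # a · b := a ∧ b

# Recovering the lattice operations from the ring.
join = lambda a, b: add(add(a, b), mul(a, b))   # x ∨ y := x + y + x·y
meet = mul                                      # x ∧ y := x · y

checks = all(join(a, b) == (a | b) and meet(a, b) == (a & b) and mul(a, a) == a
             for a in subsets for b in subsets)
print(checks)  # True
```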
Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every Boolean ring.
More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations between arbitrary Boolean-ring expressions.
Employing the similarity of Boolean rings and Boolean algebras, both algorithms have applications in automated theorem proving.
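Hsiang's procedure is a term-rewriting system and is not reproduced here. For intuition only, a far cruder check also decides such identities: by the two-element criterion quoted above, together with the equivalence of Boolean rings and Boolean algebras, an identity between Boolean-ring expressions holds in every Boolean ring exactly when it holds in the two-element ring, so exhaustive evaluation over {0, 1} suffices. A hedged Python sketch of that brute-force check (not Hsiang's algorithm; names are illustrative):

```python
from itertools import product

def equal_in_all_boolean_rings(e1, e2, num_vars):
    """Decide an identity between Boolean-ring expressions by evaluating both
    sides over {0, 1} with arithmetic taken mod 2. Expressions are given as
    Python functions built from + and *. This is a brute-force check, not
    Hsiang's rewriting procedure."""
    for values in product((0, 1), repeat=num_vars):
        if e1(*values) % 2 != e2(*values) % 2:
            return False
    return True

# Example: x ∨ y written in ring form is x + y + xy; check that ∨ is associative.
lhs = lambda x, y, z: (x + y + x * y) + z + (x + y + x * y) * z
rhs = lambda x, y, z: x + (y + z + y * z) + x * (y + z + y * z)
print(equal_in_all_boolean_rings(lhs, rhs, 3))  # True
```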
An ideal of the Boolean algebra A is a subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, since a ∧ ¬a = 0 ∈ I for every a ∈ A, a prime ideal I contains a or ¬a for every a ∈ A. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a ∉ I and ¬a ∉ I, then the ideal generated by I ∪ {a} (or by I ∪ {¬a}) is a proper ideal strictly containing I, so I is not maximal; therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic notions of prime ideal and maximal ideal in the Boolean ring A.
The dual of an ideal is a filter. A filter of the Boolean algebra A is a subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement that every filter in a Boolean algebra can be extended to an ultrafilter is called the Ultrafilter Theorem and cannot be proved in ZF, if ZF is consistent. Within ZF, it is strictly weaker than the axiom of choice.
The Ultrafilter Theorem has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc.
Representations
It can be shown that every finite Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is a power of two.
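For a concrete instance, the eight-element divisor algebra of 30 from the examples above is isomorphic to the power set of its three atoms 2, 3 and 5. A hedged Python sketch (the map iso and the other names are illustrative):

```python
from math import gcd

n = 30
atoms = [2, 3, 5]  # the atoms of the divisor algebra of 30 are its prime factors
divisors = [d for d in range(1, n + 1) if n % d == 0]

# Isomorphism onto the power set of the atoms: send d to the set of atoms dividing it.
iso = lambda d: frozenset(p for p in atoms if d % p == 0)
lcm = lambda a, b: a * b // gcd(a, b)

# iso is a bijection and turns gcd/lcm into intersection/union.
images = {iso(d) for d in divisors}
print(len(images) == len(divisors) == 2 ** len(atoms))                          # True
print(all(iso(gcd(a, b)) == iso(a) & iso(b) and iso(lcm(a, b)) == iso(a) | iso(b)
          for a in divisors for b in divisors))                                 # True
```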
The first axiomatization of Boolean lattices/algebras in general was given by Alfred North Whitehead in 1898.[7][8]
It included the above axioms and additionally x ∨ 1 = 1 and x ∧ 0 = 0.
In 1904, the American mathematician Edward V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on ∧, ∨ and ¬, even proving the associativity laws.[9]
He also proved that these axioms are independent of each other.[10]
In 1933, Huntington set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation + and a unary functional symbol n, to be read as 'complement', which satisfy the following laws:
1. Commutativity: x + y = y + x.
2. Associativity: (x + y) + z = x + (y + z).
3. Huntington equation: n(n(x) + y) + n(n(x) + n(y)) = x.
Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, namely
4. Robbins equation: n(n(x + y) + n(x + n(y))) = x,
do (1), (2), and (4) form a basis for Boolean algebra? Calling an algebra satisfying (1), (2), and (4) a Robbins algebra, the question then becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. In 1996, William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the automated reasoning program EQP he designed. For a simplification of McCune's proof, see Dahn (1998).
Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, a distributive lattice B is a generalized Boolean lattice if it has a smallest element 0 and for any elements a and b in B such that a ≤ b, there exists an element x such that a ∧ x = 0 and a ∨ x = b. Defining a ∖ b as the unique x such that (a ∧ b) ∨ x = a and (a ∧ b) ∧ x = 0, we say that the structure (B,∧,∨,∖,0) is a generalized Boolean algebra, while (B,∨,0) is a generalized Boolean semilattice. Generalized Boolean lattices are exactly the ideals of Boolean lattices.
A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed subspaces for separable Hilbert spaces.
Brown, Stephen; Vranesic, Zvonko (2002), Fundamentals of Digital Logic with VHDL Design (2nd ed.), McGraw–Hill, ISBN 978-0-07-249938-4. See Section 2.5.
Dahn, B. I. (1998), "Robbins Algebras are Boolean: A Revision of McCune's Computer-Generated Solution of the Robbins Problem", Journal of Algebra, 208 (2): 526–532, doi:10.1006/jabr.1998.7467.
Davey, B. A.; Priestley, H. A. (1990), Introduction to Lattices and Order, Cambridge Mathematical Textbooks, Cambridge University Press.