
Zhegalkin polynomial

From Wikipedia, the free encyclopedia

Zhegalkin (also Žegalkin, Gégalkine or Shegalkin[1]) polynomials (Russian: полиномы Жегалкина), also known as algebraic normal form, are a representation of functions in Boolean algebra. Introduced by the Russian mathematician Ivan Ivanovich Zhegalkin in 1927,[2] they constitute the polynomial ring over the integers modulo 2. The resulting degeneracies of modular arithmetic result in Zhegalkin polynomials being simpler than ordinary polynomials, requiring neither coefficients nor exponents. Coefficients are redundant because 1 is the only nonzero coefficient. Exponents are redundant because in arithmetic mod 2, x² = x. Hence a polynomial such as 3x²y⁵z is congruent to, and can therefore be rewritten as, xyz.

Boolean equivalent

Prior to 1927, Boolean algebra had been considered a calculus of logical values with logical operations of conjunction, disjunction, negation, and so on. Zhegalkin showed that all Boolean operations could be written as ordinary numeric polynomials, representing the false and true values as 0 and 1, the integers mod 2. Logical conjunction is written as xy, and logical exclusive-or as arithmetic addition mod 2 (written here as x ⊕ y to avoid confusion with the common use of + as a synonym for inclusive-or ∨). The logical complement ¬x is then x ⊕ 1. Since ∧ and ¬ form a basis for Boolean algebra, all other logical operations are compositions of these basic operations, and so the polynomials of ordinary algebra can represent all Boolean operations, allowing Boolean reasoning to be performed using elementary algebra.

For example, the Boolean 2-out-of-3 threshold or median operation is written as the Zhegalkin polynomial xy ⊕ yz ⊕ zx.

Formal properties

Formally a Zhegalkin monomial is the product of a finite set of distinct variables (hence square-free), including the empty set whose product is denoted 1. There are 2^n possible Zhegalkin monomials in n variables, since each monomial is fully specified by the presence or absence of each variable. A Zhegalkin polynomial is the sum (exclusive-or) of a set of Zhegalkin monomials, with the empty set denoted by 0. A given monomial's presence or absence in a polynomial corresponds to that monomial's coefficient being 1 or 0 respectively. The Zhegalkin monomials, being linearly independent, span a 2^n-dimensional vector space over the Galois field GF(2) (NB: not GF(2^n), whose multiplication is quite different). The 2^(2^n) vectors of this space, i.e. the linear combinations of those monomials as unit vectors, constitute the Zhegalkin polynomials. The exact agreement with the number of Boolean operations on n variables, which exhaust the n-ary operations on {0,1}, furnishes a direct counting argument for completeness of the Zhegalkin polynomials as a Boolean basis.

This vector space is not equivalent to the free Boolean algebra on n generators because it lacks complementation (bitwise logical negation) as an operation (equivalently, because it lacks the top element as a constant). This is not to say that the space is not closed under complementation or lacks top (the all-ones vector) as an element, but rather that the linear transformations of this and similarly constructed spaces need not preserve complement and top. Those that do preserve them correspond to the Boolean homomorphisms, e.g. there are four linear transformations from the vector space of Zhegalkin polynomials over one variable to that over none, only two of which are Boolean homomorphisms.

Method of computation

Several methods are in general use for computing the Zhegalkin polynomial:

The method of indeterminate coefficients

Using the method of indeterminate coefficients, a linear system consisting of all the tuples of the function and their values is generated. Solving the linear system gives the coefficients of the Zhegalkin polynomial.


Given a Boolean function of the three variables A, B, C, express it as a Zhegalkin polynomial. The function can be expressed as a column vector f of its eight truth-table outputs.

This vector should be the output of left-multiplying a vector of undetermined coefficients c, one coefficient per monomial,

by an 8 × 8 logical matrix which represents the possible values that all the possible conjunctions of A, B, C can take. These possible values are given in the following truth table:

A B C   1  C  B  BC A  AC AB ABC
0 0 0   1  0  0  0  0  0  0  0
0 0 1   1  1  0  0  0  0  0  0
0 1 0   1  0  1  0  0  0  0  0
0 1 1   1  1  1  1  0  0  0  0
1 0 0   1  0  0  0  1  0  0  0
1 0 1   1  1  0  0  1  1  0  0
1 1 0   1  0  1  0  1  0  1  0
1 1 1   1  1  1  1  1  1  1  1

The information in the above truth table can be encoded in the following logical matrix:

where the 'S' here stands for "Sierpiński", as in Sierpiński triangle, and the subscript 3 gives the exponent of its size: S₃ is a 2³ × 2³ matrix.

It can be proven through mathematical induction and block-matrix multiplication that any such "Sierpiński matrix" is its own inverse.[nb 1]

Then the linear system is

S₃ c = f,

which, because S₃ is its own inverse, can be solved for c as c = S₃ f (mod 2); the nonzero entries of c give the monomials of the Zhegalkin polynomial corresponding to the function.
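Because the Sierpiński matrix is its own inverse, the system can be solved by a single matrix–vector product mod 2. The following minimal Python sketch illustrates this; the truth table 10101001 is illustrative (it is the one used in the cellular-automaton example later in this section), and the monomial column order 1, C, B, BC, A, AC, AB, ABC is assumed:

```python
def sierpinski(n):
    """Build the 2^n x 2^n Sierpinski matrix by the block recursion
    S_n = [[S_(n-1), 0], [S_(n-1), S_(n-1)]]; entries are 0/1."""
    S = [[1]]
    for _ in range(n):
        S = [row + [0] * len(row) for row in S] + [row + row for row in S]
    return S

def mat_vec_mod2(matrix, vec):
    """Multiply a 0/1 matrix by a 0/1 vector over GF(2)."""
    return [sum(a * b for a, b in zip(row, vec)) % 2 for row in matrix]

S3 = sierpinski(3)

f = [1, 0, 1, 0, 1, 0, 0, 1]           # truth-table outputs for rows 000..111
c = mat_vec_mod2(S3, f)                # since S3 is self-inverse, c = S3 f solves S3 c = f
assert c == [1, 1, 0, 0, 0, 0, 1, 0]   # nonzero at 000, 001, 110: the polynomial 1 ⊕ C ⊕ AB
assert mat_vec_mod2(S3, c) == f        # self-inverse check
```

The recursion in `sierpinski` mirrors the block structure used in the induction proof of self-inverseness.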

Using the canonical disjunctive normal form

Using this method, the canonical disjunctive normal form (a fully expanded disjunctive normal form) is computed first. Then the negations in this expression are replaced by an equivalent expression using the mod-2 sum of the variable and 1. The disjunction signs are changed to addition mod 2, the parentheses are expanded, and the resulting Boolean expression is simplified. This simplification yields the Zhegalkin polynomial.
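The expansion can be mechanized by representing a polynomial as a set of monomials, each monomial a frozenset of variables: addition is symmetric set difference (duplicate monomials cancel mod 2), and the minterms of a canonical DNF, being mutually exclusive, may simply be XORed together. A sketch with an illustrative function whose minterms are 000, 010, 100 and 111:

```python
def poly_mul(p, q):
    """Multiply two GF(2) polynomials; monomials are frozensets of variables."""
    r = set()
    for m1 in p:
        for m2 in q:
            r ^= {m1 | m2}        # duplicate monomials cancel mod 2
    return r

def minterm_to_poly(variables, bits):
    """Replace each negated literal ¬x by x ⊕ 1, then expand the product."""
    poly = {frozenset()}          # the polynomial 1
    for v, b in zip(variables, bits):
        factor = {frozenset({v})} if b else {frozenset({v}), frozenset()}
        poly = poly_mul(poly, factor)
    return poly

# Canonical DNF with minterms 000, 010, 100, 111 (an illustrative function).
# Minterms are mutually exclusive, so the disjunction becomes a mod-2 sum.
result = set()
for bits in [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]:
    result ^= minterm_to_poly("ABC", bits)

assert result == {frozenset(), frozenset({"C"}), frozenset({"A", "B"})}  # 1 ⊕ C ⊕ AB
```

Taking the union of variable sets in `poly_mul` implements the idempotence x² = x noted in the introduction.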

Using tables

Computing the Zhegalkin polynomial for an example function P by the table method

Let f_0, f_1, …, f_(2^n − 1) be the outputs of a truth table for the function P of n variables, such that the index of the f_i's corresponds to the binary indexing of the minterms.[nb 2] Define a function ζ recursively by:

ζ⁰_i = f_i,   ζ^(k+1)_i = ζ^k_i ⊕ ζ^k_(i+1),   ζ_i = ζ^i_0.

Note that

ζ_i = ⊕_(j=0…i) (C(i, j) mod 2) f_j,

where C(i, j) is the binomial coefficient reduced modulo 2.

Then ζ_i is the i-th coefficient of a Zhegalkin polynomial whose literals in the i-th monomial are the same as the literals in the i-th minterm, except that the negative literals are removed (or replaced by 1).

The ζ-transformation is its own inverse, so the same kind of table can be used to compute the outputs f_0, f_1, …, f_(2^n − 1) given the coefficients ζ_0, ζ_1, …, ζ_(2^n − 1): simply apply the same pairwise-XOR scheme to the sequence of coefficients.

In terms of the table in the figure, copy the outputs of the truth table (in the column labeled P) into the leftmost column of the triangular table. Then successively compute columns from left to right by applying XOR to each pair of vertically adjacent cells in order to fill the cell immediately to the right of the top cell of each pair. When the entire triangular table is filled in then the top row reads out the coefficients of a linear combination which, when simplified (removing the zeroes), yields the Zhegalkin polynomial.

To go from a Zhegalkin polynomial to a truth table, it is possible to fill out the top row of the triangular table with the coefficients of the Zhegalkin polynomial (putting in zeroes for any combinations of positive literals not in the polynomial). Then successively compute rows from top to bottom by applying XOR to each pair of horizontally adjacent cells in order to fill the cell immediately below the leftmost cell of each pair. When the entire triangular table is filled, the leftmost column can be copied to column P of the truth table.

As an aside, this method of calculation corresponds to the method of operation of the elementary cellular automaton called Rule 102. For example, start such a cellular automaton with eight cells set up with the outputs of the truth table (or the coefficients of the canonical disjunctive normal form) of the Boolean expression: 10101001. Then run the cellular automaton for seven more generations while keeping a record of the state of the leftmost cell. The history of this cell then turns out to be: 11000010, which shows the coefficients of the corresponding Zhegalkin polynomial.[3][4]
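The triangular-table computation (equivalently, recording the leftmost cell of the Rule 102 automaton) is only a few lines of code; the assertions below reproduce the 10101001 → 11000010 example:

```python
def zeta(column):
    """Triangular-table method: repeatedly XOR vertically adjacent cells,
    reading off the top cell each time (Rule 102 restricted to the
    leftmost cell)."""
    col, out = list(column), []
    while col:
        out.append(col[0])
        col = [a ^ b for a, b in zip(col, col[1:])]
    return out

outputs = [1, 0, 1, 0, 1, 0, 0, 1]         # truth-table column, rows 000..111
coeffs = zeta(outputs)
assert coeffs == [1, 1, 0, 0, 0, 0, 1, 0]  # i.e. the polynomial 1 ⊕ C ⊕ AB
assert zeta(coeffs) == outputs             # ζ is its own inverse
```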

The Pascal method

Using the Pascal method to compute the Zhegalkin polynomial for a Boolean function. The line in Russian at the bottom reads:
⊕ – bitwise operation "Exclusive OR"

The Pascal method is the most economical in terms of the amount of computation and the most expedient for constructing the Zhegalkin polynomial by hand.

We build a table consisting of 2^N columns and N + 1 rows, where N is the number of variables in the function. In the top row of the table we place the vector of function values, that is, the last column of the truth table.

Each row of the resulting table is divided into blocks (black lines in the figure). In the first line, the block occupies one cell, in the second line — two, in the third — four, in the fourth — eight, and so on. Each block in a certain line, which we will call "lower block", always corresponds to exactly two blocks in the previous line. We will call them "left upper block" and "right upper block".

The construction starts from the second line. The contents of the left upper blocks are transferred without change into the corresponding cells of the lower block (green arrows in the figure). Then, the operation "addition modulo two" is performed bitwise over the right upper and left upper blocks and the result is transferred to the corresponding cells of the right side of the lower block (red arrows in the figure). This operation is performed with all lines from top to bottom and with all blocks in each line. After the construction is completed, the bottom line contains a string of numbers, which are the coefficients of the Zhegalkin polynomial, written in the same sequence as in the triangle method described above.
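In code, the Pascal method is an in-place pass with doubling block sizes. On the illustrative truth table 10101001 it produces the same coefficient string, 11000010, as the triangular method above:

```python
def pascal_method(values):
    """Copy each left upper block unchanged and XOR it into the right
    upper block, doubling the block size at each step (arithmetic mod 2)."""
    a = list(values)
    size = 1
    while size < len(a):
        for start in range(0, len(a), 2 * size):
            for i in range(size):
                a[start + size + i] ^= a[start + i]   # right := left ⊕ right
        size *= 2
    return a

assert pascal_method([1, 0, 1, 0, 1, 0, 0, 1]) == [1, 1, 0, 0, 0, 0, 1, 0]
```

Like the ζ-transformation, this pass is its own inverse, so applying it twice returns the original vector of function values.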

The summation method

Graphic representation of the coefficients of the Zhegalkin polynomial for functions with different numbers of variables.

According to the truth table, it is easy to calculate the individual coefficients of the Zhegalkin polynomial. To do this, sum up modulo 2 the values of the function in those rows of the truth table where variables that are not in the conjunction (that corresponds to the coefficient being calculated) take zero values.

Suppose, for example, that we need to find the coefficient of the xz conjunction for a function f of three variables x, y, z. There is no variable y in this conjunction. Find the input sets in which the variable y takes a zero value. These are the sets 0, 1, 4, 5 (000, 001, 100, 101). Then the coefficient of the conjunction xz is

a_xz = f(0, 0, 0) ⊕ f(0, 0, 1) ⊕ f(1, 0, 0) ⊕ f(1, 0, 1).

Since the conjunction of the constant term contains no variables, every variable must take a zero value, and

a_1 = f(0, 0, 0).

For a term which includes all variables, the sum includes all values of the function:

a_xyz = f(0, 0, 0) ⊕ f(0, 0, 1) ⊕ f(0, 1, 0) ⊕ f(0, 1, 1) ⊕ f(1, 0, 0) ⊕ f(1, 0, 1) ⊕ f(1, 1, 0) ⊕ f(1, 1, 1).
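A direct Python sketch of the summation method (the function f is illustrative; the variable indices 0, 1, 2 stand for x, y, z):

```python
from itertools import product

def anf_coefficient(f, monomial, n):
    """XOR the function values over all rows of the truth table in which
    every variable absent from the monomial is zero."""
    absent = [i for i in range(n) if i not in monomial]
    total = 0
    for bits in product((0, 1), repeat=n):
        if all(bits[i] == 0 for i in absent):
            total ^= f(*bits)
    return total

# Illustrative function: f(x, y, z) = 1 ⊕ z ⊕ xy
f = lambda x, y, z: 1 ^ z ^ (x & y)

assert anf_coefficient(f, {0, 2}, 3) == 0     # coefficient of xz
assert anf_coefficient(f, {2}, 3) == 1        # coefficient of z
assert anf_coefficient(f, {0, 1}, 3) == 1     # coefficient of xy
assert anf_coefficient(f, set(), 3) == 1      # constant term: just f(0, 0, 0)
assert anf_coefficient(f, {0, 1, 2}, 3) == 0  # full term: XOR of all 8 values
```

The recovered nonzero coefficients (1, z, xy) reconstruct exactly the polynomial we started from.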

Let us graphically represent the coefficients of the Zhegalkin polynomial as sums modulo 2 of values of functions at certain points. To do this, we construct a square table T_N, where each column represents the value of the function at one of the points, and each row represents a coefficient of the Zhegalkin polynomial; here N is the number of variables of the function. The point at the intersection of some column and row means that the value of the function at this point is included in the sum for the given coefficient of the polynomial (see figure).

There is a pattern that allows one to obtain the table for a function of N variables from the table for a function of N − 1 variables: the new table is arranged as a 2 × 2 block matrix of copies of the smaller table, with the upper-right block cleared.

Lattice-theoretic interpretation

Consider the columns of a table T_N as corresponding to elements of a Boolean lattice of size 2^N. For each column, express its number M as a binary number; then x ≤ M if and only if x ∨ M = M, where ∨ denotes bitwise OR.

If the rows of the table are numbered, from top to bottom, with the numbers from 0 to 2^N − 1, then the tabular content of row number R is the order ideal generated by element R of the lattice.

Note incidentally that the overall pattern of such a table is that of a Sierpiński triangle, viewed as a logical matrix. Also, the pattern corresponds to an elementary cellular automaton called Rule 60, starting with the leftmost cell set to 1 and all other cells cleared.
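Both observations are easy to check by machine. The sketch below compares the Rule 60 evolution, the order-ideal rule x ∨ M = M, and Pascal's triangle mod 2 (which coincides by Lucas's theorem), under the indexing assumption that row R holds the ideal generated by R:

```python
from math import comb

N = 3
size = 2 ** N

# Rule 60: each new cell is the XOR of itself and its left neighbour.
row = [1] + [0] * (size - 1)      # leftmost cell set to 1, others cleared
pattern = []
for _ in range(size):
    pattern.append(row[:])
    row = [row[i] ^ (row[i - 1] if i > 0 else 0) for i in range(size)]

# Order-ideal rule: entry (R, M) is 1 iff M ≤ R, i.e. M | R == R.
lattice = [[1 if M | R == R else 0 for M in range(size)] for R in range(size)]

# Pascal's triangle mod 2; by Lucas's theorem all three tables coincide.
lucas = [[comb(R, M) % 2 for M in range(size)] for R in range(size)]

assert pattern == lattice == lucas
```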

Using a Karnaugh map

Converting a Karnaugh map to a Zhegalkin polynomial.

The figure shows a function of three variables, P(A, B, C) represented as a Karnaugh map, which the reader may consider as an example of how to convert such maps into Zhegalkin polynomials; the general procedure is given in the following steps:

  • We consider all the cells of the Karnaugh map in ascending order of the number of ones in their codes. For a function of three variables, the sequence of cells is 000–100–010–001–110–101–011–111. Each cell of the Karnaugh map is associated with a member of the Zhegalkin polynomial depending on the positions of the code in which there are ones. For example, cell 111 corresponds to the member ABC, cell 101 corresponds to the member AC, cell 010 corresponds to the member B, and cell 000 corresponds to the member 1.
  • If the cell in question is 0, go to the next cell.
  • If the cell in question is 1, add the corresponding term to the Zhegalkin polynomial, then invert all cells in the Karnaugh map where this term is 1 (i.e., all cells belonging to the ideal generated by this term in the Boolean lattice of monomials), and go to the next cell. For example, if, when examining cell 110, a one appears in it, then the term AB is added to the Zhegalkin polynomial and all cells of the Karnaugh map for which A = 1 and B = 1 are inverted. If a 1 is in cell 000, then the term 1 is added to the Zhegalkin polynomial and the entire Karnaugh map is inverted.
  • The transformation process is complete when, after the latest inversion, all cells of the Karnaugh map are zero or don't-care conditions.
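The procedure above translates directly into code. Cell codes are taken here as integers with A as the most significant bit (an indexing assumption), and the truth table 10101001 is again illustrative:

```python
def karnaugh_to_zhegalkin(truth):
    """Greedy conversion: visit cells in ascending order of the number of
    ones in their codes; whenever a cell holds 1, record its monomial and
    invert every cell in the ideal it generates (every cell whose code
    covers the cell's code)."""
    f = list(truth)
    n = len(f)
    terms = []
    for cell in sorted(range(n), key=lambda c: bin(c).count("1")):
        if f[cell]:
            terms.append(cell)
            for j in range(n):
                if cell & j == cell:   # cell's ones are a subset of j's ones
                    f[j] ^= 1
    assert all(v == 0 for v in f)      # the map is fully cleared at the end
    return terms

# Codes 0b000, 0b001, 0b110 are the monomials 1, C, AB respectively.
assert karnaugh_to_zhegalkin([1, 0, 1, 0, 1, 0, 0, 1]) == [0, 1, 6]  # 1 ⊕ C ⊕ AB
```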

Möbius transformation

The Möbius inversion formula relates the coefficients of a Boolean sum-of-minterms expression and a Zhegalkin polynomial. This is the partial-order version of the Möbius formula, not the number-theoretic one. The Möbius inversion formula for partial orders is:[5]

g(x) = Σ_(y ≤ x) f(y)   if and only if   f(x) = Σ_(y ≤ x) μ(y, x) g(y),

where μ(y, x) = (−1)^(|x| − |y|), |x| being the Hamming distance of x from 0. Since −1 ≡ 1 (mod 2) in the Zhegalkin algebra, the Möbius function collapses to being the constant 1.

The set of divisors of a given number x is also the order ideal generated by that number: D(x) = {y : y ≤ x}. Since summation is modulo 2, the formula can be restated as

g(x) = ⊕_(y ∈ D(x)) f(y).
As an example, consider the three-variable case. The following table shows the divisibility relation:

x divisors of x
000 000
001 000, 001
010 000, 010
011 000, 001, 010, 011
100 000, 100
101 000, 001, 100, 101
110 000, 010, 100, 110
111 000, 001, 010, 011, 100, 101, 110, 111


The above system of equations can be solved for f, and the result can be summarized as being obtainable by exchanging g and f throughout the system.
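Since the Möbius function collapses to the constant 1, the transform is literally "XOR over the divisors", and the exchange of g and f is justified by the transform being an involution, as a short sketch confirms (the vector 10101001 is illustrative):

```python
def boole_moebius(v):
    """g[x] = XOR of v[y] over all y ≤ x in the divisibility order
    (bitwise: y | x == x)."""
    n = len(v)
    out = []
    for x in range(n):
        acc = 0
        for y in range(n):
            if y | x == x:     # y is a "divisor" of x
                acc ^= v[y]
        out.append(acc)
    return out

f = [1, 0, 1, 0, 1, 0, 0, 1]              # minterm coefficients (truth table)
g = boole_moebius(f)                      # Zhegalkin coefficients
assert g == [1, 1, 0, 0, 0, 0, 1, 0]
assert boole_moebius(g) == f              # involution: exchanging g and f is valid
```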

The table below shows the binary numbers along with their associated Zhegalkin monomials and Boolean minterms:

ABC  Zhegalkin monomial  Boolean minterm
000  1                   ¬A ∧ ¬B ∧ ¬C
001  C                   ¬A ∧ ¬B ∧ C
010  B                   ¬A ∧ B ∧ ¬C
011  BC                  ¬A ∧ B ∧ C
100  A                   A ∧ ¬B ∧ ¬C
101  AC                  A ∧ ¬B ∧ C
110  AB                  A ∧ B ∧ ¬C
111  ABC                 A ∧ B ∧ C

The Zhegalkin monomials are naturally ordered by divisibility, whereas the Boolean minterms do not so naturally order themselves; each one represents an exclusive eighth of the three-variable Venn diagram. The ordering of the monomials transfers to the bit strings as follows: given a pair of bit triplets x and y, x ≤ y if and only if x ∨ y = y, where ∨ is bitwise OR.

The correspondence between a three-variable Boolean sum-of-minterms and a Zhegalkin polynomial is then obtained by substituting the monomials and minterms of the table into the system of equations above.

The system of equations may be summarized as a logical matrix equation, which N. J. Wildberger calls a Boole–Möbius transformation.

The transformation can also be carried out in "XOR spreadsheet" form, going in the direction of g to f.

Related work

In 1927, the same year as Zhegalkin's paper,[2] the American mathematician Eric Temple Bell published a sophisticated arithmetization of Boolean algebra based on Richard Dedekind's ideal theory and general modular arithmetic (as opposed to arithmetic mod 2).[6] The much simpler arithmetic character of Zhegalkin polynomials was first noticed in the West, independently (communication between Soviet and Western mathematicians being very limited in that era), by the American mathematician Marshall Stone in 1936.[7] While writing up his celebrated Stone duality theorem, Stone observed that the supposedly loose analogy between Boolean algebras and rings could in fact be formulated as an exact equivalence holding for both finite and infinite algebras, which led him to substantially reorganize his paper over the next few years.

See also


  1. ^ As base case, S₀ = (1), the 1 × 1 identity matrix, so S₀S₀ = I₁, where I_m is here taken to denote the identity matrix of size m. The inductive assumption is S_(n−1)S_(n−1) = I_(2^(n−1)). Then the inductive step is:
    S_nS_n = [[S_(n−1), 0], [S_(n−1), S_(n−1)]] [[S_(n−1), 0], [S_(n−1), S_(n−1)]] = [[S_(n−1)S_(n−1), 0], [S_(n−1)S_(n−1) ⊕ S_(n−1)S_(n−1), S_(n−1)S_(n−1)]] = [[I, 0], [0, I]] = I_(2^n),
    since the two copies of S_(n−1)S_(n−1) in the lower-left block cancel modulo 2. Equivalently, S_n = S₁ ⊗ S_(n−1), where ⊗ denotes the Kronecker product, or, in terms of the Kronecker product: S_n = S₁^(⊗n), the n-th Kronecker power of S₁ = [[1, 0], [1, 1]].
  2. ^ A minterm is the Boolean counterpart of a Zhegalkin monomial. For an n-variable context, there are 2^n Zhegalkin monomials and 2^n Boolean minterms as well. A minterm for an n-variable context consists of an AND-product of n literals, each literal either being a variable in the context or the NOT-negation of such a variable. Moreover, for each variable in the context there must appear exactly one corresponding literal in each minterm (either the assertion or the negation of that variable). A truth table for a Boolean function of n variables has exactly 2^n rows, the inputs of each row corresponding naturally to a minterm whose context is the set of independent variables of that Boolean function. (Each 0-input corresponds to a negated variable; each 1-input corresponds to an asserted variable.)
        Any Boolean expression may be converted to sum-of-minterms form by repeatedly distributing AND with respect to OR and NOT with respect to AND or OR (through the De Morgan identities), and cancelling out double negations (cf. negation normal form); then, when a sum-of-products has been obtained, products with missing literals are multiplied by instances of the law of excluded middle that contain the missing literals; then, lastly, AND is distributed with respect to OR again.
        Note that there is a formal correspondence, for a given context, between Zhegalkin monomials and Boolean minterms. However, the correspondence is not logical equivalence. For example, for the context {A, B, C}, there is a formal correspondence between the Zhegalkin monomial AB and the Boolean minterm A ∧ B ∧ ¬C, but they are not logically equivalent. (For more of this example, see the second table in the section "Möbius transformation". The same set of bitstrings is used to index both the set of Boolean minterms and the set of Zhegalkin monomials.)


  1. ^ Steinbach, Bernd [in German]; Posthoff, Christian (2009). "Preface". Written at Freiberg, Germany. Logic Functions and Equations - Examples and Exercises (1st ed.). Dordrecht, Netherlands: Springer Science + Business Media B. V. p. xv. ISBN 978-1-4020-9594-8. LCCN 2008941076.
  2. ^ a b Жега́лкин [Zhegalkin], Ива́н Ива́нович [Ivan Ivanovich] (1927). "O Tekhnyke Vychyslenyi Predlozhenyi v Symbolytscheskoi Logykye" О технике вычислений предложений в символической логике [On the technique of calculating propositions in symbolic logic (Sur le calcul des propositions dans la logique symbolique)]. Matematicheskii Sbornik (in Russian and French). 34 (1). Moscow, Russia: 9–28. Mi msb7433. Archived from the original on 2017-10-12. Retrieved 2017-10-12.
  3. ^ Suprun [Супрун], Valeriy P. [Валерий Павлович] (1987). "Tablichnyy metod polinomial'nogo razlozheniya bulevykh funktsiy" Табличный метод полиномиального разложения булевых функций [The tabular method of polynomial decomposition of Boolean functions]. Kibernetika [Кибернетика] (Cybernetics) (in Russian) (1): 116–117.
  4. ^ Suprun [Супрун], Valeriy P. [Валерий Павлович] (2017). "Osnovy teorii bulevykh funktsiy" Основы теории булевых функций [Fundamentals of the theory of Boolean functions]. М.: Lenand [Ленанд] / URSS (in Russian): 208.
  5. ^ "Möbius inversion". Encyclopedia of Mathematics. 2021-02-17 [2011-02-07]. Archived from the original on 2020-07-16. Retrieved 2021-03-27.
  6. ^ Bell, Eric Temple (1927). "Arithmetic of Logic". Transactions of the American Mathematical Society. 29 (3): 597–611. doi:10.2307/1989098. JSTOR 1989098.
  7. ^ Stone, Marshall (1936). "The Theory of Representations for Boolean Algebras". Transactions of the American Mathematical Society. 40 (1): 37–111. doi:10.2307/1989664. ISSN 0002-9947. JSTOR 1989664.

Further reading