# Zhegalkin polynomial


Zhegalkin (also Žegalkin, Gégalkine or Shegalkin) polynomials form one of many possible representations of the operations of Boolean algebra. Introduced by the Russian mathematician Ivan Ivanovich Zhegalkin in 1927, they are the polynomial ring over the integers modulo 2. The resulting degeneracies of modular arithmetic result in Zhegalkin polynomials being simpler than ordinary polynomials, requiring neither coefficients nor exponents. Coefficients are redundant because 1 is the only nonzero coefficient. Exponents are redundant because in arithmetic mod 2, $x^{2}=x$. Hence a polynomial such as $3x^{2}y^{5}z$ is congruent to, and can therefore be rewritten as, $xyz$.

## Boolean equivalent

Prior to 1927 Boolean algebra had been considered a calculus of logical values with logical operations of conjunction, disjunction, negation, etc. Zhegalkin showed that all Boolean operations could be written as ordinary numeric polynomials, thinking of the logical constants 0 and 1 as integers mod 2. The logical operation of conjunction is realized as the arithmetic operation of multiplication xy, and logical exclusive-or as arithmetic addition mod 2 (written here as x⊕y to avoid confusion with the common use of + as a synonym for inclusive-or ∨). Logical complement ¬x is then derived from 1 and ⊕ as x⊕1. Since ∧ and ¬ form a sufficient basis for the whole of Boolean algebra, meaning that all other logical operations are obtainable as composites of these basic operations, it follows that the polynomials of ordinary algebra can represent all Boolean operations, allowing Boolean reasoning to be performed reliably by appealing to the familiar laws of elementary algebra, without the distracting differences from high school algebra that arise when disjunction is used in place of addition mod 2.

An example application is the representation of the Boolean 2-out-of-3 threshold or median operation as the Zhegalkin polynomial $xy\oplus yz\oplus zx$, which is 1 when at least two of the variables are 1 and 0 otherwise.
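This can be verified exhaustively; the following minimal Python sketch (the helper names are hypothetical, not from the article) compares the polynomial against the direct threshold definition on all eight inputs:

```python
from itertools import product

def median_zhegalkin(x, y, z):
    # 2-out-of-3 median as the Zhegalkin polynomial xy ⊕ yz ⊕ zx
    return (x & y) ^ (y & z) ^ (z & x)

def median_threshold(x, y, z):
    # Direct definition: 1 when at least two of the inputs are 1
    return 1 if x + y + z >= 2 else 0

# The two definitions agree on all eight input combinations.
assert all(median_zhegalkin(*v) == median_threshold(*v)
           for v in product((0, 1), repeat=3))
```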

## Formal properties

Formally a Zhegalkin monomial is the product of a finite set of distinct variables (hence square-free), including the empty set, whose product is denoted 1. There are $2^{n}$ possible Zhegalkin monomials in n variables, since each monomial is fully specified by the presence or absence of each variable. A Zhegalkin polynomial is the sum (exclusive-or) of a set of Zhegalkin monomials, with the empty set denoted by 0. A given monomial's presence or absence in a polynomial corresponds to that monomial's coefficient being 1 or 0 respectively. The Zhegalkin monomials, being linearly independent, span a $2^{n}$-dimensional vector space over the Galois field GF(2) (NB: not GF($2^{n}$), whose multiplication is quite different). The $2^{2^{n}}$ vectors of this space, i.e. the linear combinations of those monomials as unit vectors, constitute the Zhegalkin polynomials. Since this count exactly matches the number of Boolean operations on n variables, which exhaust the n-ary operations on {0,1}, the agreement furnishes a direct counting argument for the completeness of the Zhegalkin polynomials as a Boolean basis.

This vector space is not equivalent to the free Boolean algebra on n generators because it lacks complementation (bitwise logical negation) as an operation (equivalently, because it lacks the top element as a constant). This is not to say that the space is not closed under complementation or lacks top (the all-ones vector) as an element, but rather that the linear transformations of this and similarly constructed spaces need not preserve complement and top. Those that do preserve them correspond to the Boolean homomorphisms, e.g. there are four linear transformations from the vector space of Zhegalkin polynomials over one variable to that over none, only two of which are Boolean homomorphisms.

## Method of computation

Several methods are in general use for computing the Zhegalkin polynomial:

• The method of indeterminate coefficients
• Constructing the canonical disjunctive normal form
• Using tables
• The Pascal method
• The summation method
• Using a Karnaugh map

### The method of indeterminate coefficients

Using the method of indeterminate coefficients, a linear system is generated that relates the unknown coefficients to the function's value on each input tuple. Solving the linear system over GF(2) gives the coefficients of the Zhegalkin polynomial.

#### Example

Suppose we wish to express the Boolean function $f(A,B,C)={\bar {A}}{\bar {B}}{\bar {C}}+{\bar {A}}B{\bar {C}}+A{\bar {B}}{\bar {C}}+ABC$ as a Zhegalkin polynomial. This function can be expressed as a column vector

${\vec {f}}={\begin{pmatrix}1\\0\\1\\0\\1\\0\\0\\1\end{pmatrix}}$ .

This vector should be the output of left-multiplying a vector of undetermined coefficients

${\vec {c}}={\begin{pmatrix}c_{0}\\c_{1}\\c_{2}\\c_{3}\\c_{4}\\c_{5}\\c_{6}\\c_{7}\end{pmatrix}}$ by an 8×8 logical matrix which represents the possible values that all the possible conjunctions of A, B, C can take. These possible values are given in the following truth table:

${\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|}A&B&C&\;\;\;\;\;\;\;\;&1&C&B&BC&A&AC&AB&ABC\\\hline 0&0&0&&1&0&0&0&0&0&0&0\\0&0&1&&1&1&0&0&0&0&0&0\\0&1&0&&1&0&1&0&0&0&0&0\\0&1&1&&1&1&1&1&0&0&0&0\\1&0&0&&1&0&0&0&1&0&0&0\\1&0&1&&1&1&0&0&1&1&0&0\\1&1&0&&1&0&1&0&1&0&1&0\\1&1&1&&1&1&1&1&1&1&1&1\end{array}}$ .

The information in the above truth table can be encoded in the following logical matrix:

$S_{3}={\begin{pmatrix}1&0&0&0&0&0&0&0\\1&1&0&0&0&0&0&0\\1&0&1&0&0&0&0&0\\1&1&1&1&0&0&0&0\\1&0&0&0&1&0&0&0\\1&1&0&0&1&1&0&0\\1&0&1&0&1&0&1&0\\1&1&1&1&1&1&1&1\end{pmatrix}}$ where the 'S' here stands for "Sierpiński", as in Sierpiński triangle, and the subscript 3 gives the exponent of its size: $2^{3}\times 2^{3}$ .

It can be proven through mathematical induction and block-matrix multiplication that any such "Sierpiński matrix" $S_{n}$ is its own inverse.[note 1]

Then the linear system is

$S_{3}{\vec {c}}={\vec {f}}$ which can be solved for ${\vec {c}}$ :

${\vec {c}}=S_{3}^{-1}{\vec {f}}=S_{3}{\vec {f}}={\begin{pmatrix}1&0&0&0&0&0&0&0\\1&1&0&0&0&0&0&0\\1&0&1&0&0&0&0&0\\1&1&1&1&0&0&0&0\\1&0&0&0&1&0&0&0\\1&1&0&0&1&1&0&0\\1&0&1&0&1&0&1&0\\1&1&1&1&1&1&1&1\end{pmatrix}}{\begin{pmatrix}1\\0\\1\\0\\1\\0\\0\\1\end{pmatrix}}={\begin{pmatrix}1\\1\\1\oplus 1\\1\oplus 1\\1\oplus 1\\1\oplus 1\\1\oplus 1\oplus 1\\1\oplus 1\oplus 1\oplus 1\end{pmatrix}}={\begin{pmatrix}1\\1\\0\\0\\0\\0\\1\\0\end{pmatrix}}$ ,

and the Zhegalkin polynomial corresponding to ${\vec {c}}$ is $1\oplus C\oplus AB$ .
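The computation above can be sketched in Python without any matrix library; the helper names (`sierpinski`, `mat_vec_mod2`) are hypothetical, and the code uses the observation that entry (i, j) of $S_n$ is 1 exactly when the binary digits of j form a submask of those of i:

```python
def sierpinski(n):
    """Build the 2^n x 2^n lower-triangular "Sierpinski" matrix S_n over GF(2).
    Entry (i, j) is 1 exactly when the binary digits of j are a submask of i."""
    size = 2 ** n
    return [[1 if (i | j) == i else 0 for j in range(size)] for i in range(size)]

def mat_vec_mod2(S, v):
    """Matrix-vector product over GF(2)."""
    return [sum(S[i][j] & v[j] for j in range(len(v))) % 2 for i in range(len(v))]

S3 = sierpinski(3)
f = [1, 0, 1, 0, 1, 0, 0, 1]    # truth-table column of f from the example
c = mat_vec_mod2(S3, f)         # coefficient vector of the Zhegalkin polynomial
print(c)                        # [1, 1, 0, 0, 0, 0, 1, 0], i.e. 1 ⊕ C ⊕ AB
# S_n is its own inverse over GF(2): applying it again recovers f.
assert mat_vec_mod2(S3, c) == f
```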

### Using the canonical disjunctive normal form

Using this method, the canonical disjunctive normal form (a fully expanded disjunctive normal form) is computed first. Then the negations in this expression are replaced by an equivalent expression using the mod 2 sum of the variable and 1. The disjunction signs are changed to addition mod 2, the brackets are opened, and the resulting Boolean expression is simplified. This simplification results in the Zhegalkin polynomial.
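As a small worked example (XNOR; not from the original article), the canonical disjunctive normal form ${\bar {x}}{\bar {y}}\vee xy$ becomes

$({x\oplus 1})({y\oplus 1})\oplus xy=xy\oplus x\oplus y\oplus 1\oplus xy=x\oplus y\oplus 1$ ,

where the disjunction could be replaced by $\oplus$ because the two minterms are never simultaneously 1.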

### Using tables

Let $c_{0},...,c_{2^{n}-1}$ be the outputs of a truth table for the function P of n variables, such that the index of $c_{i}$ corresponds to the binary representation of the input tuple (the standard minterm ordering). Define a function ζ recursively by:

$\zeta (c_{i}):=c_{i}$

$\zeta (c_{0},...,c_{k}):=\zeta (c_{0},...,c_{k-1})\oplus \zeta (c_{1},...,c_{k})$ .

Note that

$\zeta (c_{0},...,c_{m})=\bigoplus _{k=0}^{m}{m \choose k}_{2}c_{k}$ where ${m \choose k}_{2}$ is the binomial coefficient reduced modulo 2.

Then

$g_{i}=\zeta (c_{0},...,c_{i})$

is the $i$th coefficient of a Zhegalkin polynomial whose literals in the $i$th monomial are the same as the literals in the $i$th minterm, except that the negated literals are removed (or replaced by 1).

The ζ-transformation is its own inverse, so the same kind of table can be used to compute the coefficients $c_{0},...,c_{2^{n}-1}$ given the coefficients $g_{0},...,g_{2^{n}-1}$ . Just let

$c_{i}=\zeta (g_{0},...,g_{i})$ .

In terms of the table in the figure, copy the outputs of the truth table (in the column labeled P) into the leftmost column of the triangular table. Then successively compute columns from left to right by applying XOR to each pair of vertically adjacent cells in order to fill the cell immediately to the right of the top cell of each pair. When the entire triangular table is filled in then the top row reads out the coefficients of a linear combination which, when simplified (removing the zeroes), yields the Zhegalkin polynomial.

To go from a Zhegalkin polynomial to a truth table, one can fill out the top row of the triangular table with the coefficients of the Zhegalkin polynomial (putting in zeroes for any combinations of positive literals not in the polynomial). Then successively compute rows from top to bottom by applying XOR to each pair of horizontally adjacent cells in order to fill the cell immediately below the left cell of each pair. When the entire triangular table is filled, the leftmost column can be copied to column P of the truth table.

As an aside, note that this method of calculation corresponds to the method of operation of the elementary cellular automaton called Rule 102. For example, start such a cellular automaton with eight cells set up with the outputs of the truth table (or the coefficients of the canonical disjunctive normal form) of the Boolean expression: 10101001. Then run the cellular automaton for seven more generations while keeping a record of the state of the leftmost cell. The history of this cell then turns out to be: 11000010, which shows the coefficients of the corresponding Zhegalkin polynomial. 
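The triangle computation (and its reading as the history of Rule 102's leftmost cell) can be sketched in Python; `zeta_triangle` is a hypothetical helper name operating on the truth-table column:

```python
def zeta_triangle(column):
    """Triangle (zeta) transform: record the leftmost cell, then replace the
    column by the XOR of each pair of adjacent cells, until it is exhausted."""
    coeffs = []
    col = list(column)
    while col:
        coeffs.append(col[0])
        col = [a ^ b for a, b in zip(col, col[1:])]
    return coeffs

truth = [1, 0, 1, 0, 1, 0, 0, 1]   # truth-table outputs 10101001 from the text
g = zeta_triangle(truth)
print(g)                           # [1, 1, 0, 0, 0, 0, 1, 0], i.e. 11000010
# The zeta-transform is its own inverse: applying it again recovers the table.
assert zeta_triangle(g) == truth
```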

### The Pascal method

[Figure: Using the Pascal method to compute the Zhegalkin polynomial for the Boolean function ${\bar {a}}{\bar {b}}{\bar {c}}+{\bar {a}}b{\bar {c}}+{\bar {a}}bc+ab{\bar {c}}$. The legend at the bottom (in Russian in the original) reads: $\oplus$ – the bitwise "exclusive OR" operation.]

The Pascal method is the most economical in terms of the amount of computation, and the most convenient for constructing the Zhegalkin polynomial by hand.

We build a table consisting of $2^{N}$ columns and $N+1$ rows, where N is the number of variables in the function. In the top row of the table we place the vector of function values, that is, the last column of the truth table.

Each row of the resulting table is divided into blocks (black lines in the figure). In the first line, the block occupies one cell, in the second line — two, in the third — four, in the fourth — eight, and so on. Each block in a certain line, which we will call "lower block", always corresponds to exactly two blocks in the previous line. We will call them "left upper block" and "right upper block".

The construction starts from the second line. The contents of the left upper blocks are transferred without change into the corresponding cells of the lower block (green arrows in the figure). Then, the operation "addition modulo two" is performed bitwise over the right upper and left upper blocks and the result is transferred to the corresponding cells of the right side of the lower block (red arrows in the figure). This operation is performed with all lines from top to bottom and with all blocks in each line. After the construction is completed, the bottom line contains a string of numbers, which are the coefficients of the Zhegalkin polynomial, written in the same sequence as in the triangle method described above.
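The row-by-row block procedure can be sketched as an in-place butterfly in Python (`pascal_method` is a hypothetical helper name): at each pass the block size doubles, the left half of every block is kept unchanged (the green arrows), and it is XORed into the right half (the red arrows):

```python
def pascal_method(values):
    """Pascal-method table as an in-place butterfly over the value vector."""
    v = list(values)
    size = 1
    while size < len(v):
        for start in range(0, len(v), 2 * size):
            for i in range(start, start + size):
                v[i + size] ^= v[i]   # right half ^= left half
        size *= 2
    return v

# Truth-table column of the earlier example f(A, B, C):
print(pascal_method([1, 0, 1, 0, 1, 0, 0, 1]))   # [1, 1, 0, 0, 0, 0, 1, 0]
```

The coefficients come out in the same sequence as in the triangle method, and, like it, the transform is its own inverse.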

### The summation method

According to the truth table, it is easy to calculate the individual coefficients of the Zhegalkin polynomial. To do this, sum up modulo 2 the values of the function in those rows of the truth table where variables that are not in the conjunction (that corresponds to the coefficient being calculated) take zero values.

Suppose, for example, that we need to find the coefficient of the xz conjunction for the function of three variables $f(x,y,z)$ . There is no variable y in this conjunction. Find the input sets in which the variable y takes a zero value. These are the sets 0, 1, 4, 5 (000, 001, 100, 101). Then the coefficient at conjunction xz is

$a_{5}=f_{0}\oplus f_{1}\oplus f_{4}\oplus f_{5}=f(0,0,0)\oplus f(0,0,1)\oplus f(1,0,0)\oplus f(1,0,1)$ .

Since the constant term contains no variables,

$a_{0}=f_{0}$ .

For a term which includes all variables, the sum includes all values of the function:

$a_{2^{N}-1}=f_{0}\oplus f_{1}\oplus f_{2}\oplus ...\oplus f_{2^{N}-2}\oplus f_{2^{N}-1}$ .

Let us graphically represent the coefficients of the Zhegalkin polynomial as sums modulo 2 of values of the function at certain points. To do this, we construct a square table, where each column represents the value of the function at one of the points, and each row represents a coefficient of the Zhegalkin polynomial. A dot at the intersection of a column and a row means that the value of the function at this point is included in the sum for the given coefficient of the polynomial (see figure). We call this table $T_{N}$, where N is the number of variables of the function.
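The summation method can be sketched in Python (`coefficient` is a hypothetical helper; the truth table is that of the earlier example $f(A,B,C)$):

```python
from itertools import product

def coefficient(f, monomial_vars, n):
    """Summation method: the Zhegalkin coefficient of a monomial is the XOR
    of f over all inputs where every variable NOT in the monomial is 0.
    f: truth table indexed by the binary input value;
    monomial_vars: positions (0 = most significant) of variables in the monomial."""
    acc = 0
    for bits in product((0, 1), repeat=n):
        if all(bits[i] == 0 for i in range(n) if i not in monomial_vars):
            acc ^= f[int("".join(map(str, bits)), 2)]
    return acc

f = [1, 0, 1, 0, 1, 0, 0, 1]       # truth table of the earlier example f(A, B, C)
print(coefficient(f, {0, 2}, 3))   # coefficient of AC -> 0
print(coefficient(f, {0, 1}, 3))   # coefficient of AB -> 1
print(coefficient(f, set(), 3))    # constant term     -> 1
```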

There is a pattern that allows one to obtain the table for a function of $N+1$ variables from the table for a function of N variables. The new table $T_{N+1}$ is arranged as a 2 × 2 matrix of $T_{N}$ tables, with the upper right block of the matrix cleared.

#### Lattice-theoretic interpretation

Consider the columns of a table $T_{N}$ as corresponding to elements of a Boolean lattice of size $2^{N}$ . For each column $f_{M}$ express number M as a binary number $M_{2}$ , then $f_{M}\leq f_{K}$ if and only if $M_{2}\vee K_{2}=K_{2}$ , where $\vee$ denotes bitwise OR.

If the rows of table $T_{N}$ are numbered, from top to bottom, with the numbers from 0 to $2^{N}-1$ , then the tabular content of row number R is the ideal generated by element $f_{R}$ of the lattice.
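Under the stated conventions (rows and columns 0-indexed, variables read as binary digits), the ideal description can be checked directly in Python: row R of $T_{N}$ marks exactly the columns M with $M\vee R=R$, and applying those rows to the truth-table column of the earlier example reproduces its Zhegalkin coefficients:

```python
N = 3
# Row R of T_N marks the columns in the ideal generated by f_R:
# column M is marked iff M OR R == R (M's binary digits are a submask of R's).
T = [[1 if (M | R) == R else 0 for M in range(2 ** N)] for R in range(2 ** N)]

f = [1, 0, 1, 0, 1, 0, 0, 1]   # truth table of the earlier example
coeffs = [sum(T[R][M] & f[M] for M in range(2 ** N)) % 2 for R in range(2 ** N)]
print(coeffs)                   # [1, 1, 0, 0, 0, 0, 1, 0]
```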

Note incidentally that the overall pattern of a table $T_{N}$ is that of a Sierpiński triangle, viewed as a logical matrix. The pattern also corresponds to the elementary cellular automaton called Rule 60, started with the leftmost cell set to 1 and all other cells cleared.

### Using a Karnaugh map

The figure shows a function of three variables, P(A, B, C) represented as a Karnaugh map, which the reader may consider as an example of how to convert such maps into Zhegalkin polynomials; the general procedure is given in the following steps:

• We consider all the cells of the Karnaugh map in ascending order of the number of ones in their codes. For the function of three variables, the sequence of cells will be 000–100–010–001–110–101–011–111. Each cell of the Karnaugh map is associated with a member of the Zhegalkin polynomial depending on the positions of the code in which there are ones. For example, cell 111 corresponds to the member ABC, cell 101 corresponds to the member AC, cell 010 corresponds to the member B, and cell 000 corresponds to the member 1.
• If the cell in question is 0, go to the next cell.
• If the cell in question is 1, add the corresponding term to the Zhegalkin polynomial, then invert all cells in the Karnaugh map where this term is 1 (or, in a Boolean lattice of monomials, all cells belonging to the ideal generated by this term), and go to the next cell. For example, if, when examining cell 110, a one appears in it, then the term AB is added to the Zhegalkin polynomial and all cells of the Karnaugh map for which A = 1 and B = 1 are inverted. If a one is in cell 000, then the term 1 is added to the Zhegalkin polynomial and the entire Karnaugh map is inverted.
• The transformation process can be considered complete when, after the next inversion, all cells of the Karnaugh map become zero.
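The same greedy procedure can be run directly on a truth table rather than a map; a minimal sketch (`zhegalkin_greedy` is a hypothetical helper), applied to the earlier example $f(A,B,C)$:

```python
from itertools import product

def zhegalkin_greedy(f, n):
    """Visit cells in order of increasing number of ones; whenever a cell
    holds 1, record its monomial and invert every cell where that term is 1
    (every cell whose bits are a supermask of the recorded cell's bits)."""
    table = dict(zip(product((0, 1), repeat=n), f))
    monomials = []
    for cell in sorted(table, key=sum):
        if table[cell]:
            monomials.append(cell)
            for other in table:
                if all(o >= c for o, c in zip(other, cell)):
                    table[other] ^= 1
    return monomials

f = [1, 0, 1, 0, 1, 0, 0, 1]    # truth table of the earlier example f(A, B, C)
terms = zhegalkin_greedy(f, 3)
print(terms)   # [(0, 0, 0), (0, 0, 1), (1, 1, 0)], i.e. 1 ⊕ C ⊕ AB
```

At the end all cells are zero, and the recorded monomials form the (unique) Zhegalkin polynomial, matching the result of the other methods.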

## Related work

In the same year as Zhegalkin's paper (1927) the American mathematician Eric Temple Bell published a sophisticated arithmetization of Boolean algebra based on Richard Dedekind's ideal theory and general modular arithmetic (as opposed to arithmetic mod 2). The much simpler arithmetic character of Zhegalkin polynomials was first noticed in the West (independently, since communication between Soviet and Western mathematicians was very limited in that era) by the American mathematician Marshall Stone in 1936, when he observed, while writing up his celebrated Stone duality theorem, that the supposedly loose analogy between Boolean algebras and rings could in fact be formulated as an exact equivalence holding for both finite and infinite algebras, leading him to substantially reorganize his paper.