Fourier–Motzkin elimination

From Wikipedia, the free encyclopedia

Fourier–Motzkin elimination, also known as the FME method, is a mathematical algorithm for eliminating variables from a system of linear inequalities. The reduced system has the same real solutions over the remaining variables.

The algorithm is named after Joseph Fourier and Theodore Motzkin.


Elimination[edit]

The elimination of a set of variables, say V, from a system of relations (here linear inequalities) refers to the creation of another system of the same sort, but without the variables in V, such that both systems have the same solutions over the remaining variables.

If all variables are eliminated from a system of linear inequalities, then one obtains a system of constant inequalities. It is then trivial to decide whether the resulting system is true or false. It is true if and only if the original system has solutions. As a consequence, elimination of all variables can be used to detect whether a system of inequalities has solutions or not.

Consider a system S of n inequalities with r variables x_1 to x_r, with x_r the variable to be eliminated. The linear inequalities in the system can be grouped into three classes depending on the sign (positive, negative or null) of the coefficient of x_r.

  • those inequalities that are of the form x_r ≥ A_i(x_1, ..., x_{r−1}); denote these by x_r ≥ A_i, for i ranging from 1 to n_A, where n_A is the number of such inequalities;
  • those inequalities that are of the form x_r ≤ B_j(x_1, ..., x_{r−1}); denote these by x_r ≤ B_j, for j ranging from 1 to n_B, where n_B is the number of such inequalities;
  • those inequalities in which x_r plays no role, grouped into a single conjunction φ.

The original system is thus equivalent to

  max(A_1, ..., A_{n_A}) ≤ x_r ≤ min(B_1, ..., B_{n_B}) ∧ φ.

Elimination consists in producing a system equivalent to ∃x_r S. Obviously, this formula is equivalent to

  max(A_1, ..., A_{n_A}) ≤ min(B_1, ..., B_{n_B}) ∧ φ.


The inequality

  max(A_1, ..., A_{n_A}) ≤ min(B_1, ..., B_{n_B})

is equivalent to the n_A·n_B inequalities A_i ≤ B_j, for 1 ≤ i ≤ n_A and 1 ≤ j ≤ n_B.

We have therefore transformed the original system into another system where x_r is eliminated. Note that the output system contains the inequalities of φ plus n_A·n_B new inequalities. In particular, if n_A = n_B = n/2, then the number of output inequalities is n²/4.
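
The grouping-and-pairing step above can be sketched in Python. This is an illustrative sketch, not a reference implementation; the names `eliminate` and `is_satisfiable` and the row representation `(coeffs, b)`, meaning coeffs·x ≤ b, are assumptions of this example:

```python
def eliminate(inequalities, j):
    """One Fourier–Motzkin step: eliminate variable j.

    Each inequality is (coeffs, b), encoding sum(coeffs[k] * x[k]) <= b.
    Rows with a negative j-coefficient are lower bounds on x_j (the A_i),
    rows with a positive one are upper bounds (the B_j); the rest is phi.
    """
    lower, upper, phi = [], [], []
    for row in inequalities:
        c = row[0][j]
        (upper if c > 0 else lower if c < 0 else phi).append(row)
    out = list(phi)
    for cu, bu in upper:            # pair every A_i <= x_j with x_j <= B_j
        for cl, bl in lower:
            lam, mu = -cl[j], cu[j]            # positive multipliers that
            coeffs = [lam * u + mu * l         # cancel the j-coefficient
                      for u, l in zip(cu, cl)]
            out.append((coeffs, lam * bu + mu * bl))
    return out

def is_satisfiable(inequalities, nvars):
    """Eliminate every variable; the constant system that remains is
    true iff every right-hand side is non-negative (0 <= b)."""
    for j in range(nvars):
        inequalities = eliminate(inequalities, j)
    return all(b >= 0 for _, b in inequalities)
```

For example, eliminating y (index 1) from {x + y ≤ 4, y ≥ 0, x ≥ 1} leaves {x ≥ 1, x ≤ 4}, while eliminating everything from {x ≤ 0, x ≥ 1} leaves the false constant inequality 0 ≤ −1, so `is_satisfiable` reports the system unsolvable.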


Complexity[edit]

Running an elimination step over n inequalities can result in at most n²/4 inequalities in the output, thus running d successive steps can result in at most 4(n/4)^(2^d) inequalities, a double exponential complexity. This is due to the algorithm producing many unnecessary constraints (constraints that are implied by other constraints). The number of necessary constraints grows as a single exponential.[1] Unnecessary constraints may be detected using linear programming.

Imbert's acceleration theorems[edit]

Two "acceleration" theorems due to Imbert[2] permit the elimination of redundant inequalities based solely on syntactic properties of the formula derivation tree, thus curtailing the need to solve linear programs or compute matrix ranks.

Define the history H_i of an inequality i as the set of indexes of inequalities from the initial system S used to produce i. Thus, H_i = {i} for inequalities i of the initial system. When a new inequality k is added by combining inequalities i and j (to eliminate a variable), its history is constructed as H_k = H_i ∪ H_j.

Suppose that the k variables O_k = {x_r, ..., x_{r−k+1}} have been officially eliminated. Each inequality i partitions the set O_k into E_i ∪ I_i ∪ R_i:

  • E_i, the set of effectively eliminated variables, i.e. on purpose. A variable x_j is in the set as soon as at least one inequality in the history H_i of i results from the elimination of x_j.
  • I_i, the set of implicitly eliminated variables, i.e. by accident. A variable x_j is implicitly eliminated when it appears in at least one inequality of H_i, but appears neither in inequality i nor in E_i.
  • R_i, all remaining variables.

A non-redundant inequality has the property that its history is minimal.[3]

Theorem (Imbert's first acceleration theorem). If the history H_i of an inequality i is minimal, then 1 + |E_i| ≤ |H_i| ≤ 1 + |E_i ∪ I_i|.

An inequality that does not satisfy these bounds is necessarily redundant, and can be removed from the system without changing its solution set.

The second acceleration theorem detects minimal history sets:

Theorem (Imbert's second acceleration theorem). If the inequality i is such that 1 + |E_i| = |H_i|, then H_i is minimal.

This theorem provides a quick detection criterion and is used in practice to avoid more costly checks, such as those based on matrix ranks. See the reference for implementation details.[3]
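
As a small illustration of how these criteria can be applied in code (the function names and the set-based representation are assumptions of this sketch, not Imbert's notation):

```python
def child_history(h_i, h_j):
    # Combining inequalities i and j to eliminate a variable yields a
    # child whose history is the union of the parents' histories.
    return h_i | h_j

def passes_first_theorem(history, E, I):
    """Imbert's first bound: a minimal history satisfies
    1 + |E| <= |H| <= 1 + |E ∪ I|; violating it proves redundancy."""
    return 1 + len(E) <= len(history) <= 1 + len(E | I)

def surely_minimal(history, E):
    """Imbert's second theorem: |H| = 1 + |E| guarantees minimality,
    so no further (e.g. rank-based) check is needed."""
    return len(history) == 1 + len(E)
```

For instance, a child of initial rows 1 and 3 produced by eliminating one variable has H = {1, 3}, E of size one, and empty I, so it passes both checks; an inequality whose history contains four rows while only one variable was effectively eliminated and none implicitly fails the upper bound and can be discarded as redundant.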

Applications in Information Theory[edit]

Information-theoretic achievability proofs result in conditions under which the existence of a well-performing coding scheme is guaranteed. These conditions are often described by a linear system of inequalities. The variables of the system include both the transmission rates (that are part of the problem's formulation) and additional auxiliary rates used in the design of the scheme. Commonly, one aims to describe the fundamental limits of communication in terms of the problem's parameters only. This gives rise to the need to eliminate the aforementioned auxiliary rates, which is executed via Fourier–Motzkin elimination. However, the elimination process results in a new system that possibly contains more inequalities than the original. Yet, often some of the inequalities in the reduced system are redundant. Redundancy may be implied by other inequalities or by inequalities in information theory (a.k.a. Shannon-type inequalities). A recently developed open-source software for MATLAB[4] performs the elimination while identifying and removing redundant inequalities. Consequently, the software outputs a simplified system (without redundancies) that involves the communication rates only.

A redundant constraint can be identified by solving a linear program as follows. Given a system of linear constraints, if the i-th inequality is satisfied for every solution of all the other inequalities, then it is redundant. Similarly, Shannon-type inequalities (STIs) are inequalities that are implied by the non-negativity of information-theoretic measures and the basic identities they satisfy. For instance, the STI I(X;Y) ≤ H(X) is a consequence of the identity I(X;Y) = H(X) − H(X|Y) and the non-negativity of conditional entropy, i.e., H(X|Y) ≥ 0. Shannon-type inequalities define a cone in R^(2^n − 1), where n is the number of random variables appearing in the involved information measures. Consequently, any STI can be proven via linear programming by checking if it is implied by the basic identities and non-negativity constraints. The described algorithm first performs Fourier–Motzkin elimination to remove the auxiliary rates. Then, it imposes the information-theoretic non-negativity constraints on the reduced output system and removes redundant inequalities.
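
The redundancy test above is usually run with an LP solver. As a dependency-free sketch (the names `eliminate` and `is_redundant` are illustrative and not taken from the software in [4]), the same maximization can itself be carried out by Fourier–Motzkin elimination: introduce an auxiliary variable t with t ≤ a·x, eliminate the original variables, and compare the surviving upper bounds on t with b:

```python
from fractions import Fraction

def eliminate(inequalities, j):
    # One Fourier–Motzkin step over rows (coeffs, b) meaning coeffs·x <= b.
    lower, upper, rest = [], [], []
    for row in inequalities:
        c = row[0][j]
        (upper if c > 0 else lower if c < 0 else rest).append(row)
    out = list(rest)
    for cu, bu in upper:
        for cl, bl in lower:
            lam, mu = -cl[j], cu[j]   # positive multipliers cancelling x_j
            out.append(([lam * u + mu * l for u, l in zip(cu, cl)],
                        lam * bu + mu * bl))
    return out

def is_redundant(system, i):
    """Is inequality i (a·x <= b, integer data) implied by the others?

    Appends a variable t with t - a·x <= 0, eliminates all original
    variables, and checks whether every upper bound on t (i.e. on the
    maximum of a·x over the other constraints) is at most b.
    """
    a, b = system[i]
    ext = [(list(c) + [0], rb) for k, (c, rb) in enumerate(system) if k != i]
    ext.append(([-ak for ak in a] + [1], 0))   # t - a·x <= 0
    for j in range(len(a)):
        ext = eliminate(ext, j)
    # Remaining rows constrain t alone (last coordinate) or are constants.
    if any(c[-1] == 0 and rb < 0 for c, rb in ext):
        return True        # the other inequalities are already infeasible
    ubs = [Fraction(rb, c[-1]) for c, rb in ext if c[-1] > 0]
    return bool(ubs) and min(ubs) <= b
```

On the system {x ≤ 1, x ≤ 2, x ≥ 0}, the inequality x ≤ 2 is flagged redundant (it is implied by x ≤ 1), while x ≤ 1 is not.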

See also[edit]

  • Real closed field – the cylindrical algebraic decomposition algorithm performs quantifier elimination over polynomial inequalities, not just linear.


  1. ^ David Monniaux, Quantifier elimination by lazy model enumeration, Computer aided verification (CAV) 2010.
  2. ^ Jean-Louis Imbert, About Redundant Inequalities Generated by Fourier's Algorithm, Artificial Intelligence IV: Methodology, Systems, Applications, 1990.
  3. ^ a b Jean-Louis Imbert, Fourier Elimination: Which to Choose?.
  4. ^ Gattegno, Ido B.; Goldfeld, Ziv; Permuter, Haim H. (2015-09-25). "Fourier-Motzkin Elimination Software for Information Theoretic Inequalities". arXiv:1610.03990.
