# Farkas' lemma

In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas.[1] Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization (alternatively, mathematical programming). It is used, among other things, in the proof of the Karush–Kuhn–Tucker theorem in nonlinear programming.[2]

Generalizations of Farkas' lemma concern the solvability of systems of convex inequalities,[3] i.e., infinite systems of linear inequalities. Farkas' lemma belongs to a class of statements called "theorems of the alternative": theorems stating that exactly one of two systems has a solution.

## Statement of the lemma

There are a number of slightly different (but equivalent) formulations of the lemma in the literature. The one given here is due to Gale, Kuhn and Tucker (1951).[4]

Farkas' lemma —  Let ${\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}$ and ${\displaystyle \mathbf {b} \in \mathbb {R} ^{m}}$ . Then exactly one of the following two statements is true:

1. There exists an ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$ such that ${\displaystyle \mathbf {Ax} =\mathbf {b} }$ and ${\displaystyle \mathbf {x} \geq 0}$.
2. There exists a ${\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}$ such that ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} \geq 0}$ and ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} <0}$.

Here, the notation ${\displaystyle \mathbf {x} \geq 0}$ means that all components of the vector ${\displaystyle \mathbf {x} }$ are nonnegative.
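The mutual exclusivity of the two statements has a one-line algebraic core: if ${\displaystyle \mathbf {b} =\mathbf {Ax} }$ with ${\displaystyle \mathbf {x} \geq 0}$ and ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} \geq 0}$, then ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} =\mathbf {x} ^{\mathsf {T}}(\mathbf {A} ^{\mathsf {T}}\mathbf {y} )\geq 0}$, so statement 2 cannot also hold. A randomized sanity check of this argument (a sketch using numpy; the dimensions and random data are arbitrary, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

# If b = Ax with x >= 0, then no y can satisfy A^T y >= 0 together
# with b^T y < 0, since b^T y = (Ax)^T y = x^T (A^T y) >= 0.
for _ in range(1000):
    A = rng.normal(size=(3, 4))
    x = rng.uniform(0, 1, size=4)        # x >= 0
    b = A @ x                            # so b lies in the cone C(A)
    y = rng.normal(size=3)
    if np.all(A.T @ y >= 0):             # candidate certificate for statement 2...
        assert b @ y >= -1e-9            # ...must fail b^T y < 0
print("no counterexample found")
```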

## Example

Let m = n = 2, A = [6, 4; 3, 0], and b = [b1, b2]. The lemma says that exactly one of the following two statements must be true (depending on b1 and b2):

1. There exist x1 ≥ 0, x2 ≥ 0 such that 6 x1 + 4 x2 = b1 and 3 x1 = b2, or
2. There exist y1, y2 such that 6 y1 + 3 y2 ≥ 0, 4 y1 ≥ 0, and b1 y1 + b2 y2 < 0.

Here is a proof of the lemma in this special case:

• If b2 ≥ 0 and b1 − 2b2 ≥ 0, then option 1 is true, since the solution of the linear equations is x1 = b2/3 and x2 = (b1 − 2b2)/4. Option 2 is false: since y1 ≥ 0 (from 4 y1 ≥ 0) and b1 ≥ 2b2, we have b1 y1 + b2 y2 ≥ 2 b2 y1 + b2 y2 = b2 (6 y1 + 3 y2) / 3, and the right-hand side is nonnegative, so the left-hand side cannot be negative.
• Otherwise, option 1 is false, since the unique solution of the linear equations has a negative component. But in this case, option 2 is true:
• If b2 < 0, then we can take e.g. y1 = 0 and y2 = 1.
• If b1 − 2b2 < 0, write b1 = 2b2 − B for some B > 0; then b1 y1 + b2 y2 = 2 b2 y1 + b2 y2 − B y1 = b2 (6 y1 + 3 y2) / 3 − B y1. Thus we can take, for example, y1 = 1, y2 = −2, which gives b1 y1 + b2 y2 = −B < 0.
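The case analysis above can be verified numerically (a minimal numpy check; the specific values of b are chosen here for illustration):

```python
import numpy as np

A = np.array([[6.0, 4.0],
              [3.0, 0.0]])

# Case b2 >= 0 and b1 - 2 b2 >= 0: option 1 holds.
b = np.array([10.0, 3.0])                        # b2 = 3, b1 - 2 b2 = 4
x = np.array([b[1] / 3, (b[0] - 2 * b[1]) / 4])  # x1 = b2/3, x2 = (b1 - 2 b2)/4
assert np.allclose(A @ x, b) and np.all(x >= 0)

# Case b1 - 2 b2 < 0: option 2 holds with y = (1, -2).
b = np.array([4.0, 3.0])                         # b1 - 2 b2 = -2 < 0
y = np.array([1.0, -2.0])
assert np.all(A.T @ y >= 0) and b @ y < 0
print("both cases check out")
```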

## Geometric interpretation

Denote the convex cone generated by the columns of ${\displaystyle \mathbf {A} }$ by ${\displaystyle C(\mathbf {A} )=\{\mathbf {A} \mathbf {x} \mid \mathbf {x} \geq 0\}}$. Then ${\displaystyle C(\mathbf {A} )}$ is a closed convex cone. The vector ${\displaystyle \mathbf {x} }$ proves that ${\displaystyle \mathbf {b} }$ lies in ${\displaystyle C(\mathbf {A} )}$, while the vector ${\displaystyle \mathbf {y} }$ gives a linear functional that separates ${\displaystyle \mathbf {b} }$ from ${\displaystyle C(\mathbf {A} )}$.

Let ${\displaystyle \mathbf {a} _{1},\dots ,\mathbf {a} _{n}\in \mathbb {R} ^{m}}$ denote the columns of ${\displaystyle \mathbf {A} }$. In terms of these vectors, Farkas' lemma states that exactly one of the following two statements is true:

1. There exist coefficients ${\displaystyle x_{1},\dots ,x_{n}\in \mathbb {R} ,\ x_{1},\dots ,x_{n}\geq 0}$, such that ${\displaystyle \mathbf {b} =x_{1}\mathbf {a} _{1}+\dots +x_{n}\mathbf {a} _{n}}$, i.e., b lies in the cone of A.
2. There exists a vector ${\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}$ such that ${\displaystyle \mathbf {a} _{i}^{\mathsf {T}}\mathbf {y} \geq 0}$ for ${\displaystyle i=1,\dots ,n}$, and ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} <0}$, i.e., there is a hyperplane through the origin, separating the vector b from the cone of A.

The vectors ${\displaystyle x_{1}\mathbf {a} _{1}+\dots +x_{n}\mathbf {a} _{n}}$ with nonnegative coefficients constitute the convex cone of the set ${\displaystyle \{\mathbf {a} _{1},\dots ,\mathbf {a} _{n}\}}$, so the first statement says that ${\displaystyle \mathbf {b} }$ is in this cone.

The second statement says that there exists a vector ${\displaystyle \mathbf {y} }$ such that the angle of ${\displaystyle \mathbf {y} }$ with the vectors ${\displaystyle \mathbf {a} _{i}}$ is at most 90°, while the angle of ${\displaystyle \mathbf {y} }$ with the vector ${\displaystyle \mathbf {b} }$ is more than 90°. The hyperplane normal to this vector has the vectors ${\displaystyle \mathbf {a} _{i}}$ on one side and the vector ${\displaystyle \mathbf {b} }$ on the other side. Hence, this hyperplane separates the vectors in the cone of ${\displaystyle \{\mathbf {a} _{1},\dots ,\mathbf {a} _{n}\}}$ and the vector ${\displaystyle \mathbf {b} }$.

For example, let m = n = 2, a1 = (1, 0)T, and a2 = (1, 1)T. The convex cone spanned by a1 and a2 is a wedge-shaped slice of the first quadrant in the xy plane. Now, suppose b = (0, 1)T. Certainly, b is not in this cone, so there must be a separating hyperplane. Let y = (1, −1)T. Then a1 · y = 1, a2 · y = 0, and b · y = −1. Hence, the hyperplane with normal y indeed separates the cone generated by a1 and a2 from b.
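The dot products in this example are easy to confirm (a minimal numpy check of the separating hyperplane):

```python
import numpy as np

a1 = np.array([1.0, 0.0])
a2 = np.array([1.0, 1.0])
b  = np.array([0.0, 1.0])
y  = np.array([1.0, -1.0])

# y makes an angle of at most 90 degrees with a1 and a2...
assert a1 @ y >= 0 and a2 @ y >= 0
# ...and more than 90 degrees with b, so the hyperplane normal to y
# separates b from the cone generated by a1 and a2.
assert b @ y < 0
print("separating hyperplane verified")
```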

## Logic interpretation

A particularly suggestive and easy-to-remember version is the following: if a set of inequalities has no solution, then a contradiction can be produced from it by a linear combination with nonnegative coefficients. In formulas: if ${\displaystyle Ax\leq b}$ is unsolvable, then ${\displaystyle y^{\mathsf {T}}A=0}$, ${\displaystyle y^{\mathsf {T}}b=-1}$, ${\displaystyle y\geq 0}$ has a solution.[5] Note that ${\displaystyle y^{\mathsf {T}}A}$ is a combination of the left-hand sides and ${\displaystyle y^{\mathsf {T}}b}$ a combination of the right-hand sides of the inequalities. Since the nonnegative combination produces a zero vector on the left and a −1 on the right, the contradiction is apparent.

Thus, Farkas' lemma can be viewed as a theorem of logical completeness: ${\displaystyle Ax\leq b}$ is a set of "axioms", the linear combinations are the "derivation rules", and the lemma says that, if the set of axioms is inconsistent, then it can be refuted using the derivation rules.[6]:92–94
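As a concrete instance of this reading (a hypothetical one-variable system chosen for illustration, not from the source): the pair of inequalities x ≤ 1 and −x ≤ −2 (i.e., x ≥ 2) is unsolvable, and summing them with coefficients y = (1, 1) derives the contradiction 0 ≤ −1:

```python
import numpy as np

# The unsolvable system  x <= 1,  -x <= -2  (i.e. x >= 2):
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])

# Adding the two inequalities with nonnegative coefficients y = (1, 1)
# yields 0 <= -1, an explicit contradiction.
y = np.array([1.0, 1.0])
assert np.all(y >= 0)
assert np.allclose(y @ A, 0)     # zero vector on the left
assert y @ b == -1.0             # -1 on the right
print("contradiction derived")
```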

## Variants

Farkas' lemma has several variants with different sign constraints (the first is the original version):[6]:92

• Either the system ${\displaystyle \mathbf {Ax} =\mathbf {b} }$ has a solution with ${\displaystyle \mathbf {x} \geq 0}$ , or the system ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} \geq 0}$ has a solution with ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} <0}$.
• Either the system ${\displaystyle \mathbf {Ax} \leq \mathbf {b} }$ has a solution with ${\displaystyle \mathbf {x} \geq 0}$ , or the system ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} \geq 0}$ has a solution with ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} <0}$ and ${\displaystyle \mathbf {y} \geq 0}$.
• Either the system ${\displaystyle \mathbf {Ax} \leq \mathbf {b} }$ has a solution with ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$ , or the system ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} =0}$ has a solution with ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} <0}$ and ${\displaystyle \mathbf {y} \geq 0}$.
• Either the system ${\displaystyle \mathbf {Ax} =\mathbf {b} }$ has a solution with ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$ , or the system ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} =0}$ has a solution with ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} \neq 0}$.

The last variant is mentioned for completeness; it is not actually a "Farkas lemma", since it contains only equalities. Its proof is a simple exercise in linear algebra.
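The second variant, for instance, can be illustrated with a one-dimensional system (a hypothetical example; A and b are chosen for illustration): Ax ≤ b with x ≥ 0 asks for x ≥ 0 with x ≤ −1, which is impossible, and y = (1) is a certificate of the alternative:

```python
import numpy as np

# Second variant:  Ax <= b, x >= 0   versus   A^T y >= 0, b^T y < 0, y >= 0.
A = np.array([[1.0]])
b = np.array([-1.0])

# x >= 0 with x <= -1 is impossible; y = (1,) certifies infeasibility:
y = np.array([1.0])
assert np.all(A.T @ y >= 0) and b @ y < 0 and np.all(y >= 0)
print("variant certificate verified")
```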

## Generalizations

Generalized Farkas' lemma —  Let ${\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}$ and ${\displaystyle \mathbf {b} \in \mathbb {R} ^{m}}$, let ${\displaystyle \mathbf {S} }$ be a closed convex cone in ${\displaystyle \mathbb {R} ^{n}}$, and let ${\displaystyle \mathbf {S} ^{*}=\{\mathbf {z} \in \mathbb {R} ^{n}\mid \mathbf {z} ^{\mathsf {T}}\mathbf {x} \geq 0,\forall \mathbf {x} \in \mathbf {S} \}}$ denote the dual cone of ${\displaystyle \mathbf {S} }$. If the convex cone ${\displaystyle C(\mathbf {A} )=\{\mathbf {A} \mathbf {x} \mid \mathbf {x} \in \mathbf {S} \}}$ is closed, then exactly one of the following two statements is true:

1. There exists an ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$ such that ${\displaystyle \mathbf {Ax} =\mathbf {b} }$ and ${\displaystyle \mathbf {x} \in \mathbf {S} }$.
2. There exists a ${\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}$ such that ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} \in \mathbf {S} ^{*}}$ and ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} <0}$.

Generalized Farkas' lemma can be interpreted geometrically as follows: either a vector is in a given closed convex cone, or there exists a hyperplane separating the vector from the cone; there are no other possibilities. The closedness condition is necessary; see Separation theorem I in the hyperplane separation theorem. For the original Farkas' lemma, ${\displaystyle \mathbf {S} }$ is the nonnegative orthant ${\displaystyle \mathbb {R} _{+}^{n}}$, so the closedness condition holds automatically. Indeed, for any polyhedral convex cone, i.e., one of the form ${\displaystyle \mathbf {S} =\{\mathbf {B} \mathbf {x} \mid \mathbf {x} \in \mathbb {R} _{+}^{k}\}}$ for some ${\displaystyle \mathbf {B} \in \mathbb {R} ^{n\times k}}$, the closedness condition holds automatically. In convex optimization, various kinds of constraint qualifications, e.g. Slater's condition, are responsible for the closedness of the underlying convex cone ${\displaystyle C(\mathbf {A} )}$.

By setting ${\displaystyle \mathbf {S} =\mathbb {R} ^{n}}$, whose dual cone is ${\displaystyle \mathbf {S} ^{*}=\{0\}}$, in the generalized Farkas' lemma, we obtain the following corollary about the solvability of a finite system of linear equalities:

Corollary —  Let ${\displaystyle \mathbf {A} \in \mathbb {R} ^{m\times n}}$ and ${\displaystyle \mathbf {b} \in \mathbb {R} ^{m}}$. Then exactly one of the following two statements is true:

1. There exists an ${\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}$ such that ${\displaystyle \mathbf {Ax} =\mathbf {b} }$.
2. There exists a ${\displaystyle \mathbf {y} \in \mathbb {R} ^{m}}$ such that ${\displaystyle \mathbf {A} ^{\mathsf {T}}\mathbf {y} =0}$ and ${\displaystyle \mathbf {b} ^{\mathsf {T}}\mathbf {y} \neq 0}$.
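This corollary is the familiar dichotomy between the column space of A and the left null space of A. A small numpy illustration with a rank-one matrix (values chosen for illustration, not from the source):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # rank 1; column space = span{(1, 2)}

# b in the column space: statement 1 holds.
b = np.array([3.0, 6.0])         # 3 * (1, 2)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x, b)

# b outside the column space: y in the left null space with b^T y != 0.
b = np.array([1.0, 0.0])
y = np.array([2.0, -1.0])        # A^T y = 0
assert np.allclose(A.T @ y, 0)
assert b @ y != 0                # b^T y = 2
print("corollary illustrated")
```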

## Further implications

Farkas' lemma can be varied to obtain many further theorems of the alternative by simple modifications, such as Gordan's theorem: either ${\displaystyle Ax<0}$ has a solution x, or ${\displaystyle A^{\mathsf {T}}y=0}$ has a nonzero solution y with y ≥ 0.

Common applications of Farkas' lemma include proving the strong duality theorem associated with linear programming, basic results in game theory, and the Karush–Kuhn–Tucker conditions. An extension of Farkas' lemma can be used to analyze the strong duality conditions for, and construct the dual of, a semidefinite program. It is sufficient to prove the existence of the Kuhn–Tucker conditions using the Fredholm alternative, but for the condition to be necessary, one must apply von Neumann's minimax theorem to show that the equations derived by Cauchy are not violated.

## Notes

1. ^ Farkas, Julius (Gyula) (1902), "Theorie der Einfachen Ungleichungen", Journal für die Reine und Angewandte Mathematik, 124 (124): 1–27, doi:10.1515/crll.1902.124.1
2. ^ Takayama, Akira (1985). Mathematical Economics (2nd ed.). New York: Cambridge University Press. p. 48. ISBN 0-521-31498-4.
3. ^ Dinh, N.; Jeyakumar, V. (2014), "Farkas' lemma: three decades of generalizations for mathematical optimization", TOP, 22 (1): 1–22, doi:10.1007/s11750-014-0319-y
4. ^ Gale, David; Kuhn, Harold; Tucker, Albert W. (1951), "Linear Programming and the Theory of Games - Chapter XII" (PDF), in Koopmans (ed.), Activity Analysis of Production and Allocation, Wiley. See Lemma 1 on page 318.
5. ^ Boyd, Stephen P.; Vandenberghe, Lieven (2004), "Section 5.8.3" (pdf), Convex Optimization, Cambridge University Press, ISBN 978-0-521-83378-3, retrieved October 15, 2011.
6. ^ a b Gärtner, Bernd; Matoušek, Jiří (2006). Understanding and Using Linear Programming. Berlin: Springer. ISBN 3-540-30697-8. Pages 81–104.