# Linear complementarity problem

In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.[1][2][3]

## Formulation

Given a real matrix M and vector q, the linear complementarity problem LCP(q, M) seeks vectors z and w which satisfy the following constraints:

• ${\displaystyle w,z\geqslant 0,}$ (that is, each component of these two vectors is non-negative)
• ${\displaystyle z^{T}w=0}$ or equivalently ${\displaystyle \sum \nolimits _{i}w_{i}z_{i}=0.}$ This is the complementarity condition, since it implies that, for all ${\displaystyle i}$, at most one of ${\displaystyle w_{i}}$ and ${\displaystyle z_{i}}$ can be positive.
• ${\displaystyle w=Mz+q}$

A sufficient condition for existence and uniqueness of a solution to this problem is that M be symmetric positive-definite. If LCP(q, M) has a solution for every q, then M is a Q-matrix; if LCP(q, M) has a unique solution for every q, then M is a P-matrix. Both of these characterizations are necessary as well as sufficient.[4]

The vector w is a slack variable,[5] and so is generally discarded after z is found. As such, the problem can also be formulated as:

• ${\displaystyle Mz+q\geqslant 0}$
• ${\displaystyle z\geqslant 0}$
• ${\displaystyle z^{\mathrm {T} }(Mz+q)=0}$ (the complementarity condition)
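The three conditions above are straightforward to verify numerically. The following sketch (the matrix, vector, and candidate solution are hypothetical values chosen for illustration) checks whether a given z solves LCP(q, M):

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check the LCP conditions: z >= 0, w = Mz + q >= 0, and z^T w = 0."""
    w = M @ z + q
    return (
        np.all(z >= -tol)          # non-negativity of z
        and np.all(w >= -tol)      # non-negativity of w
        and abs(z @ w) <= tol      # complementarity condition
    )

# Hypothetical 2x2 instance:
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z = np.array([4.0 / 3.0, 7.0 / 3.0])   # candidate solution for this instance
print(is_lcp_solution(M, q, z))
```

Here M z + q evaluates to the zero vector, so both non-negativity and complementarity hold.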

Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function

${\displaystyle f(z)=z^{T}(Mz+q)}$

subject to the constraints

${\displaystyle {Mz}+q\geqslant 0}$
${\displaystyle z\geqslant 0}$

These constraints ensure that f is always non-negative. The minimum of f is 0 at z if and only if z solves the linear complementarity problem.
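This can be seen on a small hypothetical instance: on the feasible set, both factors of each term z_i (Mz + q)_i are non-negative, so f is non-negative, and f vanishes exactly at a solution of the LCP:

```python
import numpy as np

def f(z, M, q):
    """Quadratic objective f(z) = z^T (Mz + q)."""
    return z @ (M @ z + q)

# Hypothetical instance (values chosen for the example):
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-5.0, -6.0])

z_star = np.array([4.0 / 3.0, 7.0 / 3.0])  # solves this LCP(q, M): M z + q = 0
z_feas = np.array([3.0, 3.0])              # feasible but not complementary

print(f(z_star, M, q), f(z_feas, M, q))    # zero at the solution, positive elsewhere
```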

If M is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of Dantzig's simplex algorithm, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice.
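None of these pivoting methods is reproduced here, but for tiny instances the combinatorial structure they exploit can be made explicit by brute force: enumerate the 2^n complementary index sets, fixing w_i = 0 on the chosen set and z_i = 0 elsewhere, and keep any candidate that is non-negative. This is an exponential-time illustration only, not Lemke's algorithm:

```python
import itertools
import numpy as np

def solve_lcp_bruteforce(M, q, tol=1e-9):
    """Naive LCP solver: try every complementary index set alpha,
    setting w_i = 0 for i in alpha and z_i = 0 for i not in alpha.
    Exponential in n -- for illustration on tiny instances only."""
    n = len(q)
    for alpha in itertools.product([False, True], repeat=n):
        z = np.zeros(n)
        idx = [i for i in range(n) if alpha[i]]
        if idx:
            # On alpha, w = 0 forces M[alpha, alpha] z_alpha = -q_alpha
            try:
                z[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue  # singular principal submatrix; skip this index set
        w = M @ z + q
        if np.all(z >= -tol) and np.all(w >= -tol):
            return z  # complementarity holds by construction
    return None

# Hypothetical instance:
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
print(solve_lcp_bruteforce(M, q))
```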

Also, a quadratic-programming problem stated as minimizing ${\displaystyle f(x)=c^{T}x+{\tfrac {1}{2}}x^{T}Qx}$ subject to ${\displaystyle Ax\geqslant b}$ and ${\displaystyle x\geqslant 0}$, with Q symmetric, is equivalent to solving the LCP with

${\displaystyle q={\begin{bmatrix}c\\-b\end{bmatrix}},\qquad M={\begin{bmatrix}Q&-A^{T}\\A&0\end{bmatrix}}}$

This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as:

${\displaystyle {\begin{cases}v=Qx-A^{T}{\lambda }+c\\s=Ax-b\\x,{\lambda },v,s\geqslant 0\\x^{T}v+{\lambda }^{T}s=0\end{cases}}}$

with v the Lagrange multipliers on the non-negativity constraints, λ the multipliers on the inequality constraints, and s the slack variables for the inequality constraints. The fourth condition expresses the complementarity between each group of variables (x, s) and the corresponding group of KKT multipliers (v, λ). In that case,

${\displaystyle z={\begin{bmatrix}x\\\lambda \end{bmatrix}},\qquad w={\begin{bmatrix}v\\s\end{bmatrix}}}$
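As a concrete illustration, the block construction above can be assembled numerically. The QP data below are hypothetical, chosen only for the example:

```python
import numpy as np

# Hypothetical QP: minimize c^T x + 0.5 x^T Q x  s.t.  A x >= b, x >= 0
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-8.0, -6.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

m = A.shape[0]                       # number of inequality constraints

# LCP data per the block formulas above: q = [c; -b], M = [[Q, -A^T], [A, 0]]
q_lcp = np.concatenate([c, -b])
M_lcp = np.block([[Q, -A.T],
                  [A, np.zeros((m, m))]])
print(M_lcp)
print(q_lcp)
```

Solving this LCP(q_lcp, M_lcp) yields z = [x; λ] and w = [v; s] satisfying the KKT system above.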

If the non-negativity constraint on x is relaxed, the dimensionality of the LCP can be reduced to the number of inequalities, as long as Q is non-singular (which is guaranteed if it is positive definite). The multipliers v are no longer present, and the first KKT condition can be rewritten as:

${\displaystyle Qx=A^{T}{\lambda }-c}$

or:

${\displaystyle x=Q^{-1}(A^{T}{\lambda }-c)}$

Pre-multiplying both sides by A and subtracting b, we obtain:

${\displaystyle Ax-b=AQ^{-1}(A^{T}{\lambda }-c)-b\,}$

The left side, due to the second KKT condition, is s. Substituting and reordering:

${\displaystyle s=(AQ^{-1}A^{T}){\lambda }+(-AQ^{-1}c-b)\,}$

Now defining

${\displaystyle {\begin{aligned}M&:=(AQ^{-1}A^{T})\\q&:=(-AQ^{-1}c-b)\end{aligned}}}$

we have an LCP, due to the relation of complementarity between the slack variables s and their Lagrange multipliers λ. Once we solve it, we may obtain the value of x from λ through the first KKT condition.
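A numeric sketch of this reduction, using hypothetical data with a single inequality so that the resulting one-dimensional LCP can be solved in closed form:

```python
import numpy as np

# Hypothetical QP: minimize c^T x + 0.5 x^T Q x  s.t.  A x >= b, x free
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-8.0, -6.0])
A = np.array([[1.0, 1.0]])
b = np.array([4.0])

Qinv = np.linalg.inv(Q)
M = A @ Qinv @ A.T            # reduced LCP matrix (1x1 here)
q = -A @ Qinv @ c - b         # reduced LCP vector

# With a single inequality the LCP s = M*lam + q is scalar, so
# lam = max(0, -q/M); valid only in this 1-D case, with M > 0.
lam = np.maximum(0.0, -q / np.diag(M))

# Recover x from the first KKT condition: x = Q^{-1}(A^T lam - c)
x = Qinv @ (A.T @ lam - c)
print(x, lam)
```

For these data the constraint is active (A x = b), so λ is strictly positive and the slack s is zero, as complementarity requires.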

Finally, it is also possible to handle additional equality constraints:

${\displaystyle A_{eq}x=b_{eq}}$

This introduces a vector of Lagrange multipliers μ, with the same dimension as ${\displaystyle b_{eq}}$.

It is easy to verify that the M and q for the LCP system ${\displaystyle s=M{\lambda }+q}$ are now expressed as:

${\displaystyle {\begin{aligned}M&:={\begin{bmatrix}A&0\end{bmatrix}}{\begin{bmatrix}Q&A_{eq}^{T}\\-A_{eq}&0\end{bmatrix}}^{-1}{\begin{bmatrix}A^{T}\\0\end{bmatrix}}\\q&:=-{\begin{bmatrix}A&0\end{bmatrix}}{\begin{bmatrix}Q&A_{eq}^{T}\\-A_{eq}&0\end{bmatrix}}^{-1}{\begin{bmatrix}c\\b_{eq}\end{bmatrix}}-b\end{aligned}}}$

From λ we can now recover the values of both x and the Lagrange multiplier of equalities μ:

${\displaystyle {\begin{bmatrix}x\\\mu \end{bmatrix}}={\begin{bmatrix}Q&A_{eq}^{T}\\-A_{eq}&0\end{bmatrix}}^{-1}{\begin{bmatrix}A^{T}\lambda -c\\-b_{eq}\end{bmatrix}}}$
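The equality-constrained construction can be sketched end to end with hypothetical data, again using a single inequality so the LCP is scalar:

```python
import numpy as np

# Hypothetical QP: one inequality A x >= b and one equality Aeq x = beq
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-8.0, -6.0])
A = np.array([[1.0, 0.0]]);   b = np.array([2.0])
Aeq = np.array([[1.0, 1.0]]); beq = np.array([3.0])

n, m, p = 2, 1, 1                    # variables, inequalities, equalities

# Block matrix K = [[Q, Aeq^T], [-Aeq, 0]] from the formulas above
K = np.block([[Q, Aeq.T],
              [-Aeq, np.zeros((p, p))]])
Kinv = np.linalg.inv(K)
A0 = np.hstack([A, np.zeros((m, p))])            # the block [A 0]

M = A0 @ Kinv @ np.vstack([A.T, np.zeros((p, m))])
q = -A0 @ Kinv @ np.concatenate([c, beq]) - b

# One inequality => scalar LCP s = M*lam + q: lam = max(0, -q/M)
lam = np.maximum(0.0, -q / np.diag(M))

# Recover x and the equality multiplier mu from lambda
xmu = Kinv @ np.concatenate([A.T @ lam - c, -beq])
x, mu = xmu[:n], xmu[n:]
print(x, mu, lam)
```

For these data both constraints are active at the solution, so λ is strictly positive and the recovered x satisfies A x = b and Aeq x = beq exactly.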

In fact, most QP solvers work on the LCP formulation, including the interior point method, principal/complementarity pivoting, and active set methods.[1][2] LCP problems can also be solved by the criss-cross algorithm;[6][7][8][9] conversely, for linear complementarity problems the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix.[8][9] A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive.[8][9][10] Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory.[11][12][13]