# Linear complementarity problem


In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.[1][2][3]

## Formulation

Given a real matrix M and vector q, the linear complementarity problem seeks vectors z and w which satisfy the following constraints:

• ${w} = {Mz} + {q} \,$
• ${w} \ge 0, {z} \ge 0\,$ (that is, each component of these two vectors is non-negative)
• ${w}_i {z}_i = 0\,$ for all i. (The complementarity condition)

A sufficient condition for existence and uniqueness of a solution to this problem is that M be symmetric positive-definite.

The vector ${w}\,$ is a slack variable,[4] and so is generally discarded after ${z}\,$ is found. As such, the problem can also be formulated as:

• ${Mz}+{q} \ge {0}\,$
• ${z} \ge {0}\,$
• ${z}^{\mathrm{T}}({Mz}+{q}) = 0\,$ (the complementarity condition)
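For small problems, the definition above can be applied directly by enumerating the complementary bases: each index $i$ has either $z_i = 0$ or $w_i = 0$, so there are at most $2^n$ candidate sign patterns. The following NumPy sketch (the function name is my own; the exponential cost makes this illustrative only) tries them all:

```python
import numpy as np
from itertools import product

def solve_lcp_bruteforce(M, q, tol=1e-9):
    """Find z, w with w = M z + q, w >= 0, z >= 0, w_i z_i = 0
    by trying all 2^n complementary bases (exponential; small n only)."""
    n = len(q)
    for basic in product([False, True], repeat=n):
        z = np.zeros(n)
        idx = [i for i in range(n) if basic[i]]   # indices forced to w_i = 0
        if idx:
            try:
                # On idx, 0 = M[idx, idx] z_idx + q_idx (z is zero elsewhere)
                z[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue
        w = M @ z + q
        if (z >= -tol).all() and (w >= -tol).all():
            return z, w   # complementarity holds by construction
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite -> unique solution
q = np.array([-5.0, -6.0])
z, w = solve_lcp_bruteforce(M, q)        # z = [4/3, 7/3], w = [0, 0]
```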

## Convex quadratic minimization: minimum conditions

Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function

$f({z}) = {z}^{\mathrm{T}}({Mz}+{q})\,$

subject to the constraints

${Mz}+{q} \ge {0}\,$
${z} \ge {0}\,$

These constraints ensure that f is always non-negative. The minimum of f is 0 at z if and only if z solves the linear complementarity problem.

If M is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice.
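For symmetric positive-definite M, a simple iterative scheme such as projected Gauss–Seidel is also widely used in computational mechanics. A minimal NumPy sketch (the function name `lcp_pgs` and the iteration count are my own choices):

```python
import numpy as np

def lcp_pgs(M, q, sweeps=200):
    """Projected Gauss-Seidel for the LCP: converges when M is
    symmetric positive definite."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            # residual of w_i = (M z + q)_i with the z_i term removed
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            # clamp to the non-negativity constraint z_i >= 0
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-5.0, -6.0])
z = lcp_pgs(M, q)          # converges to [4/3, 7/3]
w = M @ z + q              # w >= 0 and z_i w_i = 0 hold at the limit
```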

A quadratic-programming problem, stated as minimizing $f({x})={c}^T{x}+\frac{1}{2}{x}^T{Qx}\,$ subject to ${Ax} \geq {b} \,$ and ${x} \ge {0}\,$ with ${Q}\,$ symmetric, is equivalent to solving the LCP with

• ${q} = \left[\begin{array}{c}{c}\\-{b}\end{array}\right]\,$
• ${M} = \left[\begin{array}{cc} {Q} & -{A}^{T}\\ {A} & 0\end{array}\right]\,$

This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as:

• ${v} = {Q} {x} - {A}^{T} {\lambda} + {c}\,$
• ${s} = {A} {x} - {b}\,$
• ${x}, {\lambda}, {v}, {s} \ge {0}\,$
• ${x}^{T}{v} + {\lambda}^{T}{s} = {0}\,$

where ${v} \,$ denotes the Lagrange multipliers on the non-negativity constraints, ${\lambda} \,$ the multipliers on the inequality constraints, and ${s} \,$ the slack variables for the inequality constraints. The fourth condition expresses the complementarity of each group of variables (${x}, {s}\,$) with its corresponding set of KKT multipliers (${v}, {\lambda}\,$).

In that case,

${z} = \left[\begin{array}{c}{x}\\ {\lambda}\end{array}\right]\,$
${w} = \left[\begin{array}{c}{v}\\ {s}\end{array}\right]\,$
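As an illustration of this correspondence (the numbers below are my own toy example, not from the source): for $Q = I$, ${c} = (-1,-1)^T$, and the single constraint $x_1 + x_2 \ge 3$, the QP optimum is ${x} = (1.5, 1.5)^T$ with multiplier $\lambda = 0.5$, and the $({z}, {w})$ pair built from the blocks above satisfies the LCP:

```python
import numpy as np

# Toy QP: minimize -x1 - x2 + 0.5*(x1^2 + x2^2)  s.t.  x1 + x2 >= 3, x >= 0
Q = np.eye(2)
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

# LCP blocks from the text: M = [[Q, -A^T], [A, 0]], q = [c; -b]
M = np.block([[Q, -A.T], [A, np.zeros((1, 1))]])
q = np.concatenate([c, -b])

# Known QP solution and multiplier: x = (1.5, 1.5), lambda = 0.5
x_star = np.array([1.5, 1.5])
lam = np.array([0.5])
z = np.concatenate([x_star, lam])

w = M @ z + q                 # equals [v; s] from the KKT conditions
assert np.allclose(w, 0.0)    # both v and s vanish at this solution
assert (z >= 0).all() and abs(z @ w) < 1e-12
```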

If the non-negativity constraint on ${x}\,$ is relaxed, the dimensionality of the LCP can be reduced to the number of inequality constraints, provided ${Q}\,$ is non-singular (which is guaranteed if it is positive definite). The multipliers ${v}\,$ are no longer present, and the first KKT condition can be rewritten as:

• ${Q} {x} = {A}^{T} {\lambda} - {c}\,$

or:

${x} = {Q}^{-1}({A}^{T} {\lambda} - {c})\,$

Pre-multiplying both sides by ${A}\,$ and subtracting ${b}\,$, we obtain:

${A} {x} - {b} = {A} {Q}^{-1}({A}^{T} {\lambda} - {c}) -{b} \,$

The left-hand side, by the second KKT condition, is ${s}\,$. Substituting and reordering:

${s} = ({A} {Q}^{-1} {A}^{T}) {\lambda} + (- {A} {Q}^{-1} {c} - {b} )\,$

Defining now ${M} \,\overset{\underset{\mathrm{def}}{}}{=}\, ({A} {Q}^{-1} {A}^{T})\,$ and ${q} \,\overset{\underset{\mathrm{def}}{}}{=}\, (- {A} {Q}^{-1} {c} - {b})\,$, we obtain an LCP, owing to the complementarity between the slack variables ${s}\,$ and their Lagrange multipliers ${\lambda}\,$. Once it is solved, the value of ${x}\,$ may be recovered from ${\lambda}\,$ through the first KKT condition.
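Continuing the same toy QP but without the constraint ${x} \ge {0}$ (again my own illustrative numbers): with $Q = I$ and a single inequality, the reduced LCP is one-dimensional and can be solved by inspection, after which ${x}$ is recovered from ${\lambda}$:

```python
import numpy as np

# Toy QP without x >= 0: minimize -x1 - x2 + 0.5*||x||^2  s.t.  x1 + x2 >= 3
Q = np.eye(2)
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])

Qinv = np.linalg.inv(Q)
M = A @ Qinv @ A.T            # here the 1x1 matrix [[2]]
q = -A @ Qinv @ c - b         # here [-1]

# One-dimensional LCP s = M*lam + q: since M > 0, lam = max(0, -q/M)
lam = np.maximum(0.0, -q / M[0, 0])
s = M @ lam + q

# Recover x from the first KKT condition: x = Q^{-1}(A^T lam - c)
x = Qinv @ (A.T @ lam - c)    # -> [1.5, 1.5], with lam = 0.5
```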

Finally, it is also possible to handle additional equality constraints:

${A}_{eq}{x} = {b}_{eq} \,$

This introduces a vector of Lagrange multipliers ${\mu}\,$, with the same dimension as ${b}_{eq}\,$.

It is easy to verify that the ${M}\,$ and ${q}\,$ for the LCP system ${s} = {M} {\lambda} + {q} \,$ are now expressed as:

${M} ~\overset{\underset{\mathrm{def}}{}}{=}~ \left(\left[\begin{array}{cc}{A} & {0}\end{array}\right] \left[\begin{array}{cc} {Q} & {A}_{eq}^{T}\\ -{A}_{eq} & {0}\end{array}\right]^{-1} \left[\begin{array}{cc}{A}^{T} \\ {0}\end{array}\right]\right)\,$
${q} ~\overset{\underset{\mathrm{def}}{}}{=}~ \left(- \left[\begin{array}{cc}{A} & {0}\end{array}\right] \left[\begin{array}{cc} {Q} & {A}_{eq}^{T}\\ -{A}_{eq} & {0}\end{array}\right]^{-1} \left[\begin{array}{c}{c}\\ {b}_{eq}\end{array}\right] - {b}\right)\,$

From ${\lambda}\,$ we can now recover the values of both ${x}\,$ and the Lagrange multiplier of equalities ${\mu}\,$:

$\left[\begin{array}{c}{x}\\ {\mu}\end{array}\right] = \left[\begin{array}{cc} {Q} & {A}_{eq}^{T}\\ -{A}_{eq} & {0}\end{array}\right]^{-1} \left[\begin{array}{c} {A}^{T}{\lambda}-{c}\\-{b}_{eq}\end{array}\right] \,$
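With the same toy data plus the equality constraint $x_1 = x_2$ (again my own illustrative numbers), the block formulas above produce a one-dimensional LCP, and the final expression recovers both ${x}$ and ${\mu}$:

```python
import numpy as np

# Toy QP: minimize -x1 - x2 + 0.5*||x||^2  s.t.  x1 + x2 >= 3,  x1 = x2
Q = np.eye(2)
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]]);   b = np.array([3.0])
A_eq = np.array([[1.0, -1.0]]); b_eq = np.array([0.0])

# KKT block matrix K = [[Q, A_eq^T], [-A_eq, 0]]
K = np.block([[Q, A_eq.T], [-A_eq, np.zeros((1, 1))]])
A0 = np.hstack([A, np.zeros((1, 1))])        # the padded [A 0] block

# M and q from the block formulas in the text
M = A0 @ np.linalg.solve(K, np.vstack([A.T, np.zeros((1, 1))]))
q = -A0 @ np.linalg.solve(K, np.concatenate([c, b_eq])) - b

# One-dimensional LCP again: since M > 0, lam = max(0, -q/M)
lam = np.maximum(0.0, -q / M[0, 0])

# Recover x and mu from the block system
x_mu = np.linalg.solve(K, np.concatenate([A.T @ lam - c, -b_eq]))
x, mu = x_mu[:2], x_mu[2:]                   # x = [1.5, 1.5], mu = 0
```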

In fact, most QP solvers work on the LCP formulation, including the interior-point method, principal/complementarity pivoting, and active-set methods.[1][2] LCPs can also be solved by the criss-cross algorithm;[5][6][7][8] for linear complementarity problems, however, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix.[7][8] A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive.[7][8][9] Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory.[10][11][12]

## Notes

1. ^ a b
2. ^ a b
3. ^ R. W. Cottle and G. B. Dantzig. Complementary pivot theory of mathematical programming. Linear Algebra and its Applications, 1:103-125, 1968.
4. ^ Taylor, Joshua Adam (2015), Convex Optimization of Power Systems, Cambridge University Press, p. 172, ISBN 9781107076877.
5. ^
6. ^
7. ^ a b c den Hertog, D.; Roos, C.; Terlaky, T. (1 July 1993). "The linear complementarity problem, sufficient matrices, and the criss-cross method" (PDF). Linear Algebra and its Applications 187: 1–14. doi:10.1016/0024-3795(93)90124-7.
8. ^ a b c Csizmadia, Zsolt; Illés, Tibor (2006). "New criss-cross type algorithms for linear complementarity problems with sufficient matrices" (PDF). Optimization Methods and Software 21 (2): 247–266. doi:10.1080/10556780500095009.
9. ^ Cottle, R. W.; Pang, J.-S.; Venkateswaran, V. (March–April 1989). "Sufficient matrices and the linear complementarity problem". Linear Algebra and its Applications. 114–115: 231–249. doi:10.1016/0024-3795(89)90463-1. MR 986877.
10. ^
11. ^ Terlaky & Zhang (1993): Terlaky, Tamás; Zhang, Shu Zhong (1993). "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research. Degeneracy in optimization problems (Springer Netherlands). 46–47 (1): 203–233. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. CiteSeerX: 10.1.1.36.7658.
12. ^ Björner, Anders; Las Vergnas, Michel; Sturmfels, Bernd; White, Neil; Ziegler, Günter (1999). "10 Linear programming". Oriented Matroids. Cambridge University Press. pp. 417–479. doi:10.1017/CBO9780511586507. ISBN 978-0-521-77750-6. MR 1744046.

## Further reading

• R. W. Cottle and G. B. Dantzig. Complementary pivot theory of mathematical programming. Linear Algebra and its Applications, 1:103-125, 1968.
• Terlaky, Tamás; Zhang, Shu Zhong (1993). "Pivot rules for linear programming: A Survey on recent theoretical developments". Annals of Operations Research. Degeneracy in optimization problems (Springer Netherlands). 46–47 (1): 203–233. doi:10.1007/BF02096264. ISSN 0254-5330. MR 1260019. CiteSeerX: 10.1.1.36.7658.