# Interior-point method

*Figure: Example search for a solution. Blue lines show constraints; red points show iterated solutions.*

Interior-point methods (also referred to as barrier methods or IPMs) are a class of algorithms for solving linear and nonlinear convex optimization problems.

An interior point method was discovered by Soviet mathematician I. I. Dikin in 1967 and reinvented in the U.S. in the mid-1980s. In 1984, Narendra Karmarkar developed a method for linear programming, Karmarkar's algorithm, which runs in provably polynomial time and is also very efficient in practice. It enabled solutions of linear programming problems that were beyond the capabilities of the simplex method. In contrast to the simplex method, which moves along the boundary of the feasible region, it reaches an optimal solution by traversing the interior of the feasible region. The method can be generalized to convex programming based on a self-concordant barrier function used to encode the convex set.

Any convex optimization problem can be transformed into minimizing (or maximizing) a linear function over a convex set by converting to the epigraph form. The idea of encoding the feasible set using a barrier and designing barrier methods was studied by Anthony V. Fiacco, Garth P. McCormick, and others in the early 1960s. These ideas were mainly developed for general nonlinear programming, but they were later abandoned due to the presence of more competitive methods for this class of problems (e.g. sequential quadratic programming).

Yurii Nesterov and Arkadi Nemirovski came up with a special class of such barriers that can be used to encode any convex set. They guarantee that the number of iterations of the algorithm is bounded by a polynomial in the dimension and accuracy of the solution.

Karmarkar's breakthrough revitalized the study of interior-point methods and barrier problems, showing that it was possible to create an algorithm for linear programming characterized by polynomial complexity and, moreover, that was competitive with the simplex method. Already Khachiyan's ellipsoid method was a polynomial-time algorithm; however, it was too slow to be of practical interest.

The class of primal-dual path-following interior-point methods is considered the most successful. Mehrotra's predictor–corrector algorithm provides the basis for most implementations of this class of methods.

## Primal-dual interior-point method for nonlinear optimization

The primal-dual method's idea is easy to demonstrate for constrained nonlinear optimization. For simplicity, consider the all-inequality version of a nonlinear optimization problem:

minimize $f(x)$ subject to $c_{i}(x)\geq 0~{\text{for}}~i=1,\ldots ,m,~x\in \mathbb {R} ^{n},\quad (1)$ where $f:\mathbb {R} ^{n}\to \mathbb {R}$ and $c_{i}:\mathbb {R} ^{n}\to \mathbb {R}$. This inequality-constrained optimization problem is then solved by converting it into an unconstrained objective function whose minimum we hope to find efficiently. Specifically, the logarithmic barrier function associated with (1) is

$B(x,\mu )=f(x)-\mu \sum _{i=1}^{m}\log(c_{i}(x)).\quad (2)$ Here $\mu$ is a small positive scalar, sometimes called the "barrier parameter". As $\mu$ converges to zero, the minimum of $B(x,\mu )$ should converge to a solution of (1).
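As a concrete illustration, the following sketch evaluates the barrier (2) for a one-variable toy problem and shows its minimizer approaching the constrained optimum as $\mu \to 0$. The objective, constraint, and function names are illustrative assumptions, not from the text:

```python
import math

# Toy problem (an illustrative assumption, not from the text):
#   minimize f(x) = (x + 1)^2   subject to c(x) = x >= 0.
# The constrained minimum is x* = 0.

def B(x, mu):
    """Logarithmic barrier function (2) for the toy problem."""
    return (x + 1.0) ** 2 - mu * math.log(x)

def barrier_minimizer(mu):
    """Minimize B(., mu) in closed form: B'(x) = 2(x + 1) - mu/x = 0
    gives 2x^2 + 2x - mu = 0; take the positive root."""
    return (-1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

# The minimizers trace the "central path" toward x* = 0 as mu -> 0.
for mu in (1.0, 0.1, 0.01, 0.001):
    print(f"mu = {mu:6.3f}   argmin B = {barrier_minimizer(mu):.6f}")
```

For this toy problem the barrier subproblem happens to be solvable in closed form; in general each subproblem is itself minimized iteratively, e.g. by Newton's method.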

The gradient of the barrier function is $g_{b}(x,\mu ):=\nabla B(x,\mu )=g(x)-\mu \sum _{i=1}^{m}{\frac {1}{c_{i}(x)}}\nabla c_{i}(x),\quad (3)$ where $g(x):=\nabla f(x)$ is the gradient of the original function $f(x)$, and $\nabla c_{i}$ is the gradient of $c_{i}$.
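The formula (3) can be sanity-checked against a finite-difference approximation on a small example; the objective and constraint below are illustrative assumptions, not from the text:

```python
import math

# Finite-difference check of the barrier gradient formula (3) on a toy
# problem (f(x) = (x + 1)^2, c(x) = x >= 0 -- an illustrative assumption).

def B(x, mu):
    return (x + 1.0) ** 2 - mu * math.log(x)

def grad_B(x, mu):
    # g(x) - mu * (1 / c(x)) * c'(x), with g(x) = 2(x + 1) and c'(x) = 1
    return 2.0 * (x + 1.0) - mu / x

x, mu, h = 0.5, 0.1, 1e-6
fd = (B(x + h, mu) - B(x - h, mu)) / (2.0 * h)  # central difference
print(abs(fd - grad_B(x, mu)))  # agreement up to O(h^2) plus roundoff
```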

In addition to the original ("primal") variable $x$, we introduce a Lagrange multiplier-inspired dual variable $\lambda \in \mathbb {R} ^{m}$ and require that $c_{i}(x)\lambda _{i}=\mu ,\quad \forall i=1,\ldots ,m.\quad (4)$ Equation (4) is sometimes called the "perturbed complementarity" condition, for its resemblance to "complementary slackness" in the KKT conditions.

We try to find those $(x_{\mu },\lambda _{\mu })$ for which the gradient of the barrier function is zero.

Applying (4) to (3), we get an equation for the gradient:

$g-A^{T}\lambda =0,\quad (5)$ where the matrix $A$ is the Jacobian of the constraints $c(x)$ .

The intuition behind (5) is that the gradient of $f(x)$ should lie in the subspace spanned by the constraints' gradients. The "perturbed complementarity" with small $\mu$ (4) can be understood as the condition that the solution should either lie near the boundary $c_{i}(x)=0$ , or that the projection of the gradient $g$ on the constraint component $c_{i}(x)$ normal should be almost zero.

Applying Newton's method to (4) and (5), we get an equation for $(x,\lambda )$ update $(p_{x},p_{\lambda })$ :

${\begin{pmatrix}W&-A^{T}\\\Lambda A&C\end{pmatrix}}{\begin{pmatrix}p_{x}\\p_{\lambda }\end{pmatrix}}={\begin{pmatrix}-g+A^{T}\lambda \\\mu 1-C\lambda \end{pmatrix}},$ where $W$ is the Hessian matrix of $B(x,\mu )$ , $\Lambda$ is a diagonal matrix of $\lambda$ , and $C$ is a diagonal matrix with $C_{ii}=c_{i}(x)$ .
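For intuition, one such Newton step can be sketched for a one-variable, one-constraint toy problem (the problem and all names are illustrative assumptions, not from the text); with $n=m=1$ the block system reduces to a 2×2 linear solve:

```python
import math

# One primal-dual Newton step for the toy problem (an illustrative
# assumption): minimize (x + 1)^2 subject to c(x) = x >= 0.
# With n = m = 1 the block system is 2x2 and solvable by Cramer's rule.

def newton_step(x, lam, mu):
    g = 2.0 * (x + 1.0)        # gradient of f
    W = 2.0 + mu / x ** 2      # Hessian of B(x, mu): f''(x) + mu / c(x)^2
    A = 1.0                    # Jacobian of c(x) = x
    r1 = -g + A * lam          # first right-hand side: -g + A^T lambda
    r2 = mu - x * lam          # second right-hand side: mu 1 - C lambda
    det = W * x + lam * A * A  # determinant of [[W, -A], [lam*A, x]]
    p_x = (r1 * x + A * r2) / det
    p_lam = (W * r2 - lam * A * r1) / det
    return p_x, p_lam
```

The returned $(p_{x},p_{\lambda })$ satisfies the linearized equations exactly; practical solvers assemble and factorize the same block system at scale.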

Because of (1) and (4), the condition

$\lambda \geq 0$ should be enforced at each step. This can be done by choosing an appropriate step size $\alpha$:

$(x,\lambda )\to (x+\alpha p_{x},\lambda +\alpha p_{\lambda }).$

*Figure: Trajectory of the iterates of $x$ using the interior-point method.*