Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization which may be considered a quasi-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable.

SQP methods solve a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. If the problem is unconstrained, then the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions, or Karush–Kuhn–Tucker conditions, of the problem.

## Algorithm basics

Consider a nonlinear programming problem of the form:

${\begin{array}{rl}\min \limits _{x}&f(x)\\{\mbox{subject to}}&h(x)\geq 0\\&g(x)=0.\end{array}}$

The Lagrangian for this problem is

${\mathcal {L}}(x,\lambda ,\sigma )=f(x)+\lambda h(x)+\sigma g(x),$

where $\lambda$ and $\sigma$ are Lagrange multipliers.

Newton's method searches for a stationary point of this Lagrangian, i.e. a solution of $\nabla {\mathcal {L}}=0$, by iterating the following equation:

${\begin{bmatrix}x_{k+1}\\\lambda _{k+1}\\\sigma _{k+1}\end{bmatrix}}={\begin{bmatrix}x_{k}\\\lambda _{k}\\\sigma _{k}\end{bmatrix}}-\underbrace {{\begin{bmatrix}\nabla _{xx}^{2}{\mathcal {L}}&\nabla h&\nabla g\\\nabla h^{T}&0&0\\\nabla g^{T}&0&0\end{bmatrix}}^{-1}} _{\nabla ^{2}{\mathcal {L}}}\underbrace {\begin{bmatrix}\nabla f+\lambda _{k}\nabla h+\sigma _{k}\nabla g\\h\\g\end{bmatrix}} _{\nabla {\mathcal {L}}}$ .
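As a quick numerical sanity check of the residual $\nabla {\mathcal {L}}$ on the right-hand side above, it vanishes at a constrained optimum. The following sketch uses a hypothetical toy problem (not from the source): $f(x)=x_{1}^{2}+x_{2}^{2}$ with the single equality constraint $g(x)=x_{1}+x_{2}-1=0$, whose optimum is $x^{*}=(1/2,1/2)$ with multiplier $\sigma =-1$ under this sign convention.

```python
import numpy as np

# Hypothetical toy problem: f(x) = x1^2 + x2^2,
# one equality constraint g(x) = x1 + x2 - 1 = 0 (no inequalities).
f_grad = lambda x: 2.0 * x
g = lambda x: x[0] + x[1] - 1.0
g_grad = lambda x: np.array([1.0, 1.0])

# At the optimum x* = (1/2, 1/2) with sigma = -1, the stationarity
# part of the residual, grad f + sigma * grad g, is zero, and the
# constraint part g(x*) is zero as well.
x_star, sigma = np.array([0.5, 0.5]), -1.0
residual = np.concatenate([f_grad(x_star) + sigma * g_grad(x_star),
                           [g(x_star)]])
print(np.allclose(residual, 0.0))  # True
```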

However, because the matrix $\nabla ^{2}{\mathcal {L}}$ is generally singular (and therefore non-invertible), the Newton step $d_{k}=-\left(\nabla ^{2}{\mathcal {L}}\right)^{-1}\nabla {\mathcal {L}}$ cannot be calculated directly. Instead, the basic sequential quadratic programming algorithm defines an appropriate search direction $d_{k}$ at an iterate $(x_{k},\lambda _{k},\sigma _{k})$ as a solution of the quadratic programming subproblem

${\begin{array}{rl}\min \limits _{d}&f(x_{k})+\nabla f(x_{k})^{T}d+{\tfrac {1}{2}}d^{T}\nabla _{xx}^{2}{\mathcal {L}}(x_{k},\lambda _{k},\sigma _{k})d\\\mathrm {s.t.} &h(x_{k})+\nabla h(x_{k})^{T}d\geq 0\\&g(x_{k})+\nabla g(x_{k})^{T}d=0.\end{array}}$

Note that the constant term $f(x_{k})$ may be left out of the objective, since it does not affect the minimizer under the $\min \limits _{d}$ operator.

The SQP algorithm starts by choosing the initial iterate $(x_{0},\lambda _{0},\sigma _{0})$ and calculating $\nabla ^{2}{\mathcal {L}}(x_{0},\lambda _{0},\sigma _{0})$ and $\nabla {\mathcal {L}}(x_{0},\lambda _{0},\sigma _{0})$ . The QP subproblem is then built and solved to obtain the step direction $d_{0}$ , which is used to update the parent problem iterate via $\left[x_{k+1},\lambda _{k+1},\sigma _{k+1}\right]^{T}=\left[x_{k},\lambda _{k},\sigma _{k}\right]^{T}+d_{k}$ . This process is repeated for $k=0,1,2,\ldots$ until the parent problem satisfies a convergence test.
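The iteration just described can be sketched in code for an equality-constrained toy problem (hypothetical, not from the source): minimize $f(x)=(x_{1}-2)^{2}+x_{2}^{2}$ subject to $g(x)=x_{1}^{2}+x_{2}^{2}-1=0$, i.e. find the point on the unit circle closest to $(2,0)$. With only equality constraints, each QP subproblem is solved exactly by one linear KKT system, so the loop below is both the SQP iteration and the Newton iteration on the optimality conditions.

```python
import numpy as np

# Hypothetical toy problem: minimize (x1 - 2)^2 + x2^2
# subject to x1^2 + x2^2 - 1 = 0 (closest point on the unit circle
# to (2, 0); the solution is x* = (1, 0) with multiplier sigma = 1).
f_grad = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * x[1]])
f_hess = 2.0 * np.eye(2)
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
g_grad = lambda x: 2.0 * x
g_hess = 2.0 * np.eye(2)

x, sigma = np.array([0.5, 0.0]), 0.0  # initial iterate (x0, sigma0)
for k in range(50):
    grad_L = f_grad(x) + sigma * g_grad(x)  # stationarity residual
    if np.linalg.norm(grad_L) + abs(g(x)) < 1e-10:
        break  # convergence test on the parent problem
    H = f_hess + sigma * g_hess             # Hessian of the Lagrangian
    a = g_grad(x)                           # constraint gradient
    # Linear KKT system for the step d = (dx, dsigma).
    KKT = np.block([[H, a[:, None]], [a[None, :], np.zeros((1, 1))]])
    d = np.linalg.solve(KKT, -np.concatenate([grad_L, [g(x)]]))
    x, sigma = x + d[:2], sigma + d[2]

print(x, sigma)  # x ≈ [1, 0], sigma ≈ 1
```

A practical implementation would also safeguard the step (line search or trust region) and replace the exact Hessian with a quasi-Newton approximation, but those refinements are omitted here for brevity.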

## Implementations

SQP methods have been implemented in well-known numerical environments such as MATLAB and GNU Octave, and in numerous software libraries, both open-source and commercial:

• SciPy (the de facto standard for scientific Python) provides the scipy.optimize.minimize(method='SLSQP') solver.
• NLopt (C/C++ implementation, with numerous interfaces including Julia, Python, R, MATLAB/Octave), implemented by Dieter Kraft as part of a package for optimal control, and modified by S. G. Johnson.
• LabVIEW
• KNITRO (C, C++, C#, Java, Python, Julia, Fortran)
• NPSOL (Fortran)
• SNOPT (Fortran)
• NLPQL (Fortran)
• MATLAB
• SuanShu (Java)
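As an illustration of the SciPy entry above, here is a minimal usage sketch of the SLSQP solver on a made-up problem: minimize $x_{1}^{2}+x_{2}^{2}$ subject to $x_{1}+x_{2}=1$ and $x_{1}\geq 1/4$. Note that SciPy's inequality constraints use the same $h(x)\geq 0$ convention as this article.

```python
import numpy as np
from scipy.optimize import minimize

# Made-up example problem: minimize x1^2 + x2^2
# subject to x1 + x2 = 1 (equality) and x1 - 0.25 >= 0 (inequality).
res = minimize(
    fun=lambda x: x[0] ** 2 + x[1] ** 2,
    x0=np.array([2.0, 0.0]),
    method="SLSQP",
    constraints=[
        {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0},
        {"type": "ineq", "fun": lambda x: x[0] - 0.25},  # h(x) >= 0 form
    ],
)
print(res.success, res.x)  # res.x ≈ [0.5, 0.5]
```

The inequality constraint is inactive at the solution $(1/2,1/2)$, so the result coincides with the equality-constrained minimizer.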