Sequential quadratic programming (SQP) is an iterative method for nonlinear optimization. SQP methods are used on problems for which the objective function and the constraints are twice continuously differentiable.

SQP methods solve a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. If the problem is unconstrained, then the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions, or Karush–Kuhn–Tucker conditions, of the problem. SQP methods have been implemented in many packages, including NPSOL, SNOPT, NLPQL, OPSYC, OPTIMA, MATLAB, GNU Octave and SciPy.
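As a usage sketch, SciPy exposes an SQP variant through `scipy.optimize.minimize` with `method="SLSQP"`. The toy problem below (minimizing $x_1^2 + x_2^2$ subject to one equality and one inequality constraint) is a made-up illustration; note that SciPy expects inequality constraints in the same $b(x) \ge 0$ form used later in this article:

```python
from scipy.optimize import minimize

# Illustrative problem: min x1^2 + x2^2
#   subject to  x1 + x2 - 1 = 0   (equality)
#               x1 - 0.2   >= 0   (inequality, b(x) >= 0 convention)
res = minimize(
    lambda x: x[0] ** 2 + x[1] ** 2,
    x0=[0.0, 0.0],
    method="SLSQP",
    constraints=[
        {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0},
        {"type": "ineq", "fun": lambda x: x[0] - 0.2},
    ],
)
# The minimizer on the line x1 + x2 = 1 is (0.5, 0.5),
# which already satisfies the inequality.
```

Here `res.x` holds the computed minimizer and `res.success` reports convergence of the SQP iterations.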

## Algorithm basics

Consider a nonlinear programming problem of the form:

$\begin{array}{rl} \min\limits_{x} & f(x) \\ \mbox{s.t.} & b(x) \ge 0 \\ & c(x) = 0. \end{array}$

The Lagrangian for this problem is

$\mathcal{L}(x,\lambda,\sigma) = f(x) - \lambda^T b(x) - \sigma^T c(x),$

where $\lambda$ and $\sigma$ are Lagrange multipliers. At an iterate $x_k$, a basic sequential quadratic programming algorithm defines an appropriate search direction $d_k$ as a solution to the quadratic programming subproblem

$\begin{array}{rl} \min\limits_{d} & f(x_k) + \nabla f(x_k)^Td + \tfrac{1}{2} d^T \nabla_{xx}^2 \mathcal{L}(x_k,\lambda_k,\sigma_k) d \\ \mathrm{s.t.} & b(x_k) + \nabla b(x_k)^Td \ge 0 \\ & c(x_k) + \nabla c(x_k)^T d = 0. \end{array}$

Note that the constant term $f(x_k)$ in the objective above may be omitted, since it does not affect the minimizer.
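For a problem with only equality constraints, the QP subproblem above reduces to a linear system: the KKT conditions of the subproblem couple the step $d$ with the updated multipliers. The sketch below (a minimal illustration with NumPy, on an assumed toy problem $\min x_1^4 + x_2^2$ subject to $x_1^2 + x_2^2 = 1$) solves that KKT system at each iterate:

```python
import numpy as np

# Toy problem (assumed for illustration):
#   minimize f(x) = x1^4 + x2^2   subject to   c(x) = x1^2 + x2^2 - 1 = 0
f = lambda x: x[0] ** 4 + x[1] ** 2
grad_f = lambda x: np.array([4 * x[0] ** 3, 2 * x[1]])
c = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
jac_c = lambda x: np.array([[2 * x[0], 2 * x[1]]])  # Jacobian of c

def hess_L(x, sigma):
    # Hessian of the Lagrangian L = f - sigma^T c
    Hf = np.diag([12 * x[0] ** 2, 2.0])
    Hc = np.diag([2.0, 2.0])
    return Hf - sigma[0] * Hc

x, sigma = np.array([0.9, 0.5]), np.array([0.0])
n, m = 2, 1
for _ in range(10):
    H, J = hess_L(x, sigma), jac_c(x)
    # KKT system of the QP subproblem:
    #   [ H  -J^T ] [ d     ]   [ -grad f(x_k) ]
    #   [ J   0   ] [ sigma ] = [ -c(x_k)      ]
    K = np.block([[H, -J.T], [J, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_f(x), -c(x)])
    sol = np.linalg.solve(K, rhs)
    d, sigma = sol[:n], sol[n:]
    x = x + d
```

For this problem the iterates converge to $x^* = (1/\sqrt{2},\, 1/\sqrt{2})$ with multiplier $\sigma^* = 1$; with inequality constraints present, the subproblem is a genuine QP and requires a QP solver rather than a single linear solve.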