Active-set method

In mathematical optimization, a problem is defined using an objective function to minimize or maximize, and a set of constraints

$g_1(x) \ge 0, \dots, g_k(x) \ge 0$

that define the feasible region, that is, the set of all $x$ to search for the optimal solution. Given a point $x$ in the feasible region, a constraint

$g_i(x) \ge 0$

is called active at $x$ if $g_i(x) = 0$ and inactive at $x$ if $g_i(x) > 0$. Equality constraints are always active. The active set at $x$ is made up of those constraints $g_i(x)$ that are active at the current point (Nocedal & Wright 2006, p. 308).
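
For example (a minimal numerical sketch; the two constraints below are invented for illustration and are not from the article), a constraint's activity at a point can be checked simply by evaluating it there:

import numpy as np

# Hypothetical constraints g_i(x) >= 0, chosen only to illustrate the definition:
#   g1(x) = x1            (active wherever x1 = 0)
#   g2(x) = 1 - x1 - x2   (active on the line x1 + x2 = 1)
def g(x):
    return np.array([x[0], 1.0 - x[0] - x[1]])

x = np.array([0.0, 0.5])                 # feasible, since g(x) = [0.0, 0.5] >= 0
active = [i for i, gi in enumerate(g(x)) if abs(gi) <= 1e-12]
print(active)                            # -> [0]: only g1 is active at this point

Here the active set at $x = (0, 0.5)$ is $\{g_1\}$: the point lies on the boundary defined by $g_1$ but strictly inside the region allowed by $g_2$.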

The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of the optimization. For example, in solving a linear programming problem, the active set gives the hyperplanes that intersect at the solution point. In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimate of the active set gives a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
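
To make the linear-programming case concrete, the sketch below (the particular LP and the use of SciPy's linprog are assumptions made for illustration, not part of the article) solves a small LP and then reads off which inequality constraints have zero slack, i.e. are active, at the returned vertex:

import numpy as np
from scipy.optimize import linprog

# Hypothetical LP: maximize x1 + 2*x2 (minimize the negative) subject to
#   x1 + x2 <= 4,  x1 <= 3,  x2 <= 3,  x1 >= 0,  x2 >= 0.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [1.0, 0.0],
                 [0.0, 1.0]])
b_ub = np.array([4.0, 3.0, 3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

# Constraints with (near) zero slack are active; their hyperplanes intersect
# at the optimal vertex, here x* = (1, 3).
slack = b_ub - A_ub @ res.x
active = np.where(slack <= 1e-9)[0]
print(res.x, active)                     # -> [1. 3.] and indices [0 2]

The two active hyperplanes $x_1 + x_2 = 4$ and $x_2 = 3$ meet exactly at the optimal vertex, which is the behaviour described above.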

Active set methods

In general, an active-set algorithm has the following structure:

Find a feasible starting point
repeat until "optimal enough"
    solve the equality problem defined by the active set (approximately)
    compute the Lagrange multipliers of the active set
    remove a subset of the constraints with negative Lagrange multipliers
    search for infeasible constraints
end repeat
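
As a concrete (and simplified) instance of this outline, the following sketch implements a primal active-set iteration for a convex quadratic program, minimizing $\tfrac{1}{2}x^T G x + c^T x$ subject to $Ax \ge b$ (cf. Nocedal & Wright 2006, pp. 467–480). The function name, the NumPy-based KKT solve, the small demo problem, and the assumptions that $G$ is symmetric positive definite, the starting point is feasible, and the working-set gradients remain linearly independent are all choices made for this illustration rather than details fixed by the article.

import numpy as np

def active_set_qp(G, c, A, b, x0, W0=(), tol=1e-9, max_iter=100):
    """Primal active-set sketch for:  minimize 0.5*x@G@x + c@x  s.t.  A@x >= b.
    Assumes G symmetric positive definite, x0 feasible, and linearly
    independent working-set gradients (illustrative, not production code)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    W = set(W0)                                  # working set: constraints treated as equalities
    for _ in range(max_iter):
        Wl = sorted(W)
        g = G @ x + c                            # gradient of the objective at x
        if Wl:
            # Equality-constrained subproblem via its KKT system:
            #   [G   A_W^T] [p]   [-g]
            #   [A_W   0  ] [v] = [ 0],  with multipliers lam = -v
            Aw = A[Wl]
            m = len(Wl)
            K = np.block([[G, Aw.T], [Aw, np.zeros((m, m))]])
            sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
            p, lam = sol[:n], -sol[n:]
        else:
            p, lam = np.linalg.solve(G, -g), np.empty(0)
        if np.linalg.norm(p) < tol:
            # No progress on the current working set: inspect the multipliers.
            if lam.size == 0 or lam.min() >= -tol:
                return x, W                      # all multipliers nonnegative: KKT point
            W.remove(Wl[int(np.argmin(lam))])    # drop the most negative multiplier
        else:
            # Longest step along p that keeps A@x >= b; note any blocking constraint.
            alpha, blocking = 1.0, None
            for i in range(A.shape[0]):
                if i not in W and A[i] @ p < -tol:
                    ratio = (A[i] @ x - b[i]) / (-(A[i] @ p))
                    if ratio < alpha:
                        alpha, blocking = ratio, i
            x = x + alpha * p
            if blocking is not None:
                W.add(blocking)                  # the blocking constraint becomes active
    return x, W

# Demo on a small two-variable QP (values chosen for illustration only):
# minimize (x1 - 1)^2 + (x2 - 2.5)^2 over a polygon; the optimum is (1.4, 1.7).
G = 2.0 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[1.0, -2.0], [-1.0, -2.0], [-1.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([-2.0, -6.0, -2.0, 0.0, 0.0])
x_opt, W_opt = active_set_qp(G, c, A, b, x0=[2.0, 0.0], W0=(2, 4))
print(x_opt, W_opt)                              # -> approximately [1.4 1.7] with working set {0}

On the demo problem the iterates drop and add working-set constraints a few times before stopping at approximately (1.4, 1.7), where only the first inequality is active; a production implementation would add safeguards for degeneracy, infeasible starting points, and indefinite Hessians.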

Methods that can be described as active set methods include:[1]

References

  1. Nocedal & Wright 2006, pp. 467–480.

Bibliography

  • Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics. Vol. 3. Berlin: Heldermann Verlag. pp. xlviii+629. ISBN 3-88538-403-5. MR 0949214.
  • Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-30303-1.