# Perturbation function

In mathematical optimization, the perturbation function is any function that relates a primal problem to its dual problem. The name comes from the fact that any such function defines a perturbation of the initial problem; in many cases this takes the form of shifting the constraints.[1]

In some texts the value function is called the perturbation function, and the perturbation function is called the bifunction.[2]

## Definition

Given two dual pairs of separated locally convex spaces ${\displaystyle \left(X,X^{*}\right)}$ and ${\displaystyle \left(Y,Y^{*}\right)}$ and a function ${\displaystyle f:X\to \mathbb {R} \cup \{+\infty \}}$, we can define the primal problem by

${\displaystyle \inf _{x\in X}f(x).\,}$

If there are constraint conditions, these can be built into the function ${\displaystyle f}$ by letting ${\displaystyle f\leftarrow f+I_{\mathrm {constraints} }}$ where ${\displaystyle I}$ is the indicator function. Then ${\displaystyle F:X\times Y\to \mathbb {R} \cup \{+\infty \}}$ is a perturbation function if and only if ${\displaystyle F(x,0)=f(x)}$.[1][3]
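As a concrete illustration, consider a hypothetical one-dimensional instance (not taken from the references): minimizing f(x) = x² subject to x ≥ 1. The constraint is folded into the objective with an indicator function, and shifting the constraint by y yields a perturbation function satisfying F(x, 0) = f(x):

```python
# Hypothetical example: minimize f(x) = x^2 subject to x >= 1.
# The constraint is built into f via an indicator function, and the
# perturbation shifts the constraint to x >= 1 - y.
INF = float("inf")

def f(x):
    # objective with the constraint built in: f + I_constraints
    return x**2 if x >= 1 else INF

def F(x, y):
    # perturbation function: the constraint set is shifted by y
    return x**2 if x >= 1 - y else INF

# F(., 0) recovers the original constrained objective everywhere
assert all(F(x, 0) == f(x) for x in [-2.0, 0.5, 1.0, 3.0])
```

Choosing y > 0 relaxes the constraint (so F(x, y) ≤ F(x, 0)), while y < 0 tightens it.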

## Use in duality

The duality gap is the difference between the right- and left-hand sides of the inequality

${\displaystyle \sup _{y^{*}\in Y^{*}}-F^{*}(0,y^{*})\leq \inf _{x\in X}F(x,0),}$

where ${\displaystyle F^{*}}$ is the convex conjugate in both variables.[3][4]

For any choice of perturbation function F, weak duality holds. There are a number of conditions which, if satisfied, imply strong duality.[3] For instance, if F is proper, jointly convex, and lower semi-continuous with ${\displaystyle 0\in \operatorname {core} (\operatorname {Pr} _{Y}(\operatorname {dom} F))}$ (where ${\displaystyle \operatorname {core} }$ is the algebraic interior and ${\displaystyle \operatorname {Pr} _{Y}}$ is the projection onto Y defined by ${\displaystyle \operatorname {Pr} _{Y}(x,y)=y}$), and X and Y are Fréchet spaces, then strong duality holds.[1]
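Both sides of the inequality can be approximated numerically on a hypothetical instance (minimize x² subject to x ≥ 1, with the constraint shifted to x ≥ 1 − y; this toy problem is an assumption for illustration, not from the references). The conjugate F* is approximated by brute-force grid search, so the values are only approximate:

```python
# Toy instance: f(x) = x^2, constraint x >= 1,
# perturbation F(x, y) = x^2 + indicator(x >= 1 - y).
INF = float("inf")

def F(x, y):
    return x**2 if x >= 1 - y else INF

def grid(a, b, n):
    return [a + (b - a) * i / n for i in range(n + 1)]

xs = grid(-3.0, 3.0, 200)   # crude discretisation of X
ys = grid(-3.0, 3.0, 200)   # crude discretisation of Y

def F_conj(y_star):
    # F*(0, y*) = sup_{x, y} (y* * y - F(x, y)), approximated on the grid
    return max(y_star * y - F(x, y) for x in xs for y in ys)

primal = min(F(x, 0.0) for x in xs)                     # inf_x F(x, 0)
dual = max(-F_conj(ls) for ls in grid(-4.0, 0.0, 40))   # sup_{y*} -F*(0, y*)

# weak duality: the dual value never exceeds the primal value
assert dual <= primal + 1e-9
```

Here the true primal and dual values are both 1 (attained at x = 1 and y* = −2), so the duality gap is zero up to discretisation error.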

## Examples

### Lagrangian

Let ${\displaystyle (X,X^{*})}$ and ${\displaystyle (Y,Y^{*})}$ be dual pairs. Given a primal problem (minimize f(x)) and a related perturbation function F(x,y), the Lagrangian ${\displaystyle L:X\times Y^{*}\to \mathbb {R} \cup \{\pm \infty \}}$ is the negative conjugate of F with respect to y (i.e., the concave conjugate). That is, the Lagrangian is defined by

${\displaystyle L(x,y^{*})=\inf _{y\in Y}\left\{F(x,y)-y^{*}(y)\right\}.}$

In particular, the weak duality min–max relation can be shown to be

${\displaystyle \sup _{y^{*}\in Y^{*}}-F^{*}(0,y^{*})=\sup _{y^{*}\in Y^{*}}\inf _{x\in X}L(x,y^{*})\leq \inf _{x\in X}\sup _{y^{*}\in Y^{*}}L(x,y^{*})=\inf _{x\in X}F(x,0).}$
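The inequality in this chain can be sanity-checked numerically on a hypothetical instance (F(x, y) = x² + I(x ≥ 1 − y), an assumed toy problem), with the Lagrangian obtained by brute-force minimisation over a grid of y values:

```python
INF = float("inf")

def F(x, y):
    # toy perturbation function: f(x) = x^2, constraint shifted to x >= 1 - y
    return x**2 if x >= 1 - y else INF

def grid(a, b, n):
    return [a + (b - a) * i / n for i in range(n + 1)]

xs = grid(-2.0, 3.0, 100)
ys = grid(-3.0, 3.0, 120)
ystars = grid(-4.0, 0.0, 40)

def L(x, y_star):
    # Lagrangian: inf over y of F(x, y) - y* * y, approximated on the grid
    return min(F(x, y) - y_star * y for y in ys)

sup_inf = max(min(L(x, ls) for x in xs) for ls in ystars)
inf_sup = min(max(L(x, ls) for ls in ystars) for x in xs)

# the min-max inequality (weak duality) holds on the grid
assert sup_inf <= inf_sup + 1e-9
```

On this instance both values come out to 1, matching the common optimal value of the primal and dual problems.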

If the primal problem is given by

${\displaystyle \inf _{x:g(x)\leq 0}f(x)=\inf _{x\in X}{\tilde {f}}(x),}$

where ${\displaystyle {\tilde {f}}(x)=f(x)+I_{\mathbb {R} _{+}^{d}}(-g(x))}$, and the perturbed problem is given by

${\displaystyle \inf _{x:g(x)\leq y}f(x)}$

then the perturbation function is

${\displaystyle F(x,y)=f(x)+I_{\mathbb {R} _{+}^{d}}(y-g(x))}$.

The connection to Lagrangian duality can then be seen directly: computing the infimum over y in the definition of L gives

${\displaystyle L(x,y^{*})={\begin{cases}f(x)-y^{*}(g(x))&{\text{if }}-y^{*}\in \mathbb {R} _{+}^{d}\\-\infty &{\text{else,}}\end{cases}}}$

and the substitution ${\displaystyle \lambda =-y^{*}\in \mathbb {R} _{+}^{d}}$ recovers the classical Lagrangian ${\displaystyle f(x)+\lambda ^{\top }g(x)}$.
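On a hypothetical instance with d = 1, f(x) = x², and g(x) = 1 − x (so g(x) ≤ 0 means x ≥ 1), the infimum defining L is attained at y = g(x) whenever −y* ∈ R₊, and a brute-force computation of inf_y {F(x, y) − y*(y)} matches f(x) − y*(g(x)) exactly; for other y* the infimum diverges to −∞:

```python
# Hypothetical instance: f(x) = x^2, g(x) = 1 - x,
# F(x, y) = f(x) + indicator(y >= g(x)).
INF = float("inf")

def f(x):
    return x**2

def g(x):
    return 1.0 - x

def F(x, y):
    return f(x) if y >= g(x) else INF

ys = [-3.0 + 0.25 * i for i in range(25)]   # y grid: -3.0, -2.75, ..., 3.0

def L(x, y_star):
    # inf over y of F(x, y) - y* * y; for y* <= 0 the infimum is
    # attained at the grid point y = g(x)
    return min(F(x, y) - y_star * y for y in ys)

# for -y* in R_+ the Lagrangian reduces to f(x) - y*(g(x)),
# i.e. the classical f(x) + lambda * g(x) with lambda = -y* >= 0
for x in [-2.0, 0.0, 1.0, 2.5]:
    for y_star in [-3.0, -1.5, 0.0]:
        assert abs(L(x, y_star) - (f(x) - y_star * g(x))) < 1e-12
```

The grids here are aligned so that g(x) is always a grid point, which makes the comparison exact rather than approximate.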

### Fenchel duality

Main article: Fenchel duality

Let ${\displaystyle (X,X^{*})}$ and ${\displaystyle (Y,Y^{*})}$ be dual pairs. Assume there exists a linear map ${\displaystyle T:X\to Y}$ with adjoint operator ${\displaystyle T^{*}:Y^{*}\to X^{*}}$. Assume the primal objective function ${\displaystyle f(x)}$ (including the constraints by way of the indicator function) can be written as ${\displaystyle f(x)=J(x,Tx)}$, where ${\displaystyle J:X\times Y\to \mathbb {R} \cup \{+\infty \}}$. Then the perturbation function is given by

${\displaystyle F(x,y)=J(x,Tx-y)}$.

In particular if the primal objective is ${\displaystyle f(x)+g(Tx)}$ then the perturbation function is given by ${\displaystyle F(x,y)=f(x)+g(Tx-y)}$, which is the traditional definition of Fenchel duality.[5]
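A minimal scalar sketch (with assumed concrete choices X = Y = ℝ, Tx = 2x, f(x) = x², and g(z) = |z − 1|, none of which come from the references) shows that F(x, y) = f(x) + g(Tx − y) perturbs only the argument of g and reduces to the primal objective at y = 0:

```python
# Hypothetical scalar instance of the Fenchel setup: T is the linear map
# x -> 2x, and the primal objective is f(x) + g(Tx).
def f(x):
    return x**2

def g(z):
    return abs(z - 1.0)

def T(x):
    return 2.0 * x

def F(x, y):
    # Fenchel perturbation: only the argument of g is shifted by y
    return f(x) + g(T(x) - y)

# F(., 0) is the primal objective, so F is a valid perturbation function
assert all(F(x, 0.0) == f(x) + g(T(x)) for x in [-1.0, 0.0, 0.7, 2.0])
```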

## References

1. ^ a b c Radu Ioan Boţ; Gert Wanka; Sorin-Mihai Grad (2009). Duality in Vector Optimization. Springer. ISBN 978-3-642-02885-4.
2. ^ J. P. Ponstein (2004). Approaches to the Theory of Optimization. Cambridge University Press. ISBN 978-0-521-60491-8.
3. ^ a b c Zălinescu, C. (2002). Convex Analysis in General Vector Spaces. River Edge, NJ: World Scientific Publishing Co., Inc. pp. 106–113. ISBN 981-238-067-1. MR 1921556.
4. ^ Ernö Robert Csetnek (2010). Overcoming the failure of the classical generalized interior-point regularity conditions in convex optimization. Applications of the duality theory to enlargements of maximal monotone operators. Logos Verlag Berlin GmbH. ISBN 978-3-8325-2503-3.
5. ^ Radu Ioan Boţ (2010). Conjugate Duality in Convex Optimization. Springer. p. 68. ISBN 978-3-642-04899-9.