# Danskin's theorem

In convex analysis, Danskin's theorem is a result that describes the derivatives of a function of the form

${\displaystyle f(x)=\max _{z\in Z}\phi (x,z).}$

The theorem has applications in optimization, where it is sometimes used to solve minimax problems. The original theorem, given by J. M. Danskin in his 1967 monograph,[1] provides a formula for the directional derivative of the maximum of a (not necessarily convex) directionally differentiable function.

An extension to more general conditions was proven in 1971 by Dimitri Bertsekas.

## Statement

The following version is proven in *Nonlinear Programming* (1991).[2] Suppose ${\displaystyle \phi (x,z)}$ is a continuous function of two arguments,

${\displaystyle \phi :\mathbb {R} ^{n}\times Z\to \mathbb {R} }$
where ${\displaystyle Z\subset \mathbb {R} ^{m}}$ is a compact set. Further assume that ${\displaystyle \phi (x,z)}$ is convex in ${\displaystyle x}$ for every ${\displaystyle z\in Z.}$

Under these conditions, Danskin's theorem provides conclusions regarding the convexity and differentiability of the function

${\displaystyle f(x)=\max _{z\in Z}\phi (x,z).}$
To state these results, we define the set of maximizing points ${\displaystyle Z_{0}(x)}$ as
${\displaystyle Z_{0}(x)=\left\{{\overline {z}}:\phi (x,{\overline {z}})=\max _{z\in Z}\phi (x,z)\right\}.}$

Danskin's theorem then provides the following results.

**Convexity**

${\displaystyle f(x)}$ is convex.

**Directional derivatives**

The directional derivative of ${\displaystyle f(x)}$ in the direction ${\displaystyle y}$, denoted ${\displaystyle D_{y}f(x),}$ is given by
${\displaystyle D_{y}f(x)=\max _{z\in Z_{0}(x)}\phi '(x,z;y),}$
where ${\displaystyle \phi '(x,z;y)}$ is the directional derivative of the function ${\displaystyle \phi (\cdot ,z)}$ at ${\displaystyle x}$ in the direction ${\displaystyle y.}$

**Derivative**

${\displaystyle f(x)}$ is differentiable at ${\displaystyle x}$ if ${\displaystyle Z_{0}(x)}$ consists of a single element ${\displaystyle {\overline {z}}}$. In this case, the derivative of ${\displaystyle f(x)}$ (or the gradient of ${\displaystyle f(x)}$ if ${\displaystyle x}$ is a vector) is given by
${\displaystyle {\frac {\partial f}{\partial x}}={\frac {\partial \phi (x,{\overline {z}})}{\partial x}}.}$

**Subdifferential**

If ${\displaystyle \phi (x,z)}$ is differentiable with respect to ${\displaystyle x}$ for all ${\displaystyle z\in Z,}$ and if ${\displaystyle \partial \phi /\partial x}$ is continuous with respect to ${\displaystyle z}$ for all ${\displaystyle x}$, then the subdifferential of ${\displaystyle f(x)}$ is given by
${\displaystyle \partial f(x)=\mathrm {conv} \left\{{\frac {\partial \phi (x,z)}{\partial x}}:z\in Z_{0}(x)\right\},}$
where ${\displaystyle \mathrm {conv} }$ denotes the convex hull operation.
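The derivative formula can be checked numerically on a small example (a sketch added here for illustration, not part of the original statement): take ${\displaystyle \phi (x,z)=xz}$ with ${\displaystyle Z=[-1,1]}$, so that ${\displaystyle f(x)=|x|}$. For ${\displaystyle x\neq 0}$ the maximizer ${\displaystyle {\overline {z}}=\operatorname {sign}(x)}$ is unique, and the theorem predicts ${\displaystyle f'(x)=\operatorname {sign}(x)}$. The compact set ${\displaystyle Z}$ is approximated by a fine grid.

```python
import numpy as np

# phi(x, z) = x * z over Z = [-1, 1], so f(x) = max_z x*z = |x|.
# For x != 0 the maximizer z_bar = sign(x) is unique, and Danskin's
# theorem gives f'(x) = d/dx phi(x, z_bar) = z_bar.
GRID = np.linspace(-1.0, 1.0, 2001)  # discretization of the compact set Z

def f(x):
    """f(x) = max over the grid of phi(x, z) = x * z."""
    return np.max(x * GRID)

x = 0.7
z_bar = GRID[np.argmax(x * GRID)]   # unique maximizing point: sign(0.7) = 1

danskin_grad = z_bar                # theorem's prediction: d phi / d x = z_bar
h = 1e-6
numeric_grad = (f(x + h) - f(x - h)) / (2 * h)  # central finite difference

print(z_bar, danskin_grad, numeric_grad)  # all three are 1.0 (up to 1e-6)
```

At the kink ${\displaystyle x=0}$ the maximizer is not unique and ${\displaystyle f}$ is not differentiable; there the subdifferential formula above applies instead.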

## Extension

The 1971 Ph.D. thesis by Bertsekas (Proposition A.22)[3] proves a more general result, which does not require that ${\displaystyle \phi (\cdot ,z)}$ be differentiable. Instead, it assumes that ${\displaystyle \phi (\cdot ,z)}$ is an extended real-valued closed proper convex function for each ${\displaystyle z}$ in the compact set ${\displaystyle Z,}$ that ${\displaystyle \operatorname {int} (\operatorname {dom} (f)),}$ the interior of the effective domain of ${\displaystyle f,}$ is nonempty, and that ${\displaystyle \phi }$ is continuous on the set ${\displaystyle \operatorname {int} (\operatorname {dom} (f))\times Z.}$ Then for all ${\displaystyle x}$ in ${\displaystyle \operatorname {int} (\operatorname {dom} (f)),}$ the subdifferential of ${\displaystyle f}$ at ${\displaystyle x}$ is given by

${\displaystyle \partial f(x)=\operatorname {conv} \left\{\partial \phi (x,z):z\in Z_{0}(x)\right\}}$
where ${\displaystyle \partial \phi (x,z)}$ is the subdifferential of ${\displaystyle \phi (\cdot ,z)}$ at ${\displaystyle x}$ for any ${\displaystyle z}$ in ${\displaystyle Z.}$
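As a simple illustration of this formula (an example added here for concreteness, not part of Bertsekas's statement), take ${\displaystyle \phi (x,z)=xz}$ with ${\displaystyle Z=[-1,1],}$ so that ${\displaystyle f(x)=\max _{z\in [-1,1]}xz=|x|.}$ At ${\displaystyle x=0}$ every ${\displaystyle z\in [-1,1]}$ attains the maximum, so ${\displaystyle Z_{0}(0)=[-1,1]}$ and ${\displaystyle \partial \phi (0,z)=\{z\},}$ giving

${\displaystyle \partial f(0)=\operatorname {conv} \left\{z:z\in [-1,1]\right\}=[-1,1],}$

which agrees with the well-known subdifferential of the absolute value function at the origin.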