# LogSumExp

The LogSumExp (LSE) function is a smooth maximum, that is, a smooth approximation to the maximum function, used mainly in machine learning algorithms.[1] It is defined as the logarithm of the sum of the exponentials of its arguments:

${\displaystyle \mathrm {LSE} (x_{1},\dots ,x_{n})=\log \left(\exp(x_{1})+\cdots +\exp(x_{n})\right)}$

In tropical analysis, this is the sum in the log semiring.

## Properties

The LogSumExp function's domain is ${\displaystyle \mathbb {R} ^{n}}$, the real coordinate space, and its range is ${\displaystyle \mathbb {R} }$, the real line. The approximation to the maximum improves as the arguments grow larger or as the gap between the largest argument and the rest widens. The LogSumExp function is convex, and is strictly monotonically increasing everywhere in its domain[2] (but not strictly convex everywhere[3]).

LSE is a smooth maximum because, applying the tangent line approximation ${\displaystyle \log(X+a)\approx \log X+a/X,}$ if one term, ${\displaystyle x_{j},}$ is much larger than the rest, the correction term is small because it has ${\displaystyle \exp x_{j}}$ in the denominator, and one gets:

${\textstyle \mathrm {LSE} (\mathbf {x} )=\log {\bigl (}\sum _{i}\exp x_{i}{\bigr )}\approx \log(\exp x_{j})+{\bigl (}\sum _{i\neq j}\exp x_{i}{\bigr )}/\exp x_{j}\approx x_{j}=\max _{i}x_{i}.}$
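As a quick numeric sanity check of this approximation (a minimal sketch, not from the article; the name `lse` is illustrative):

```python
import math

def lse(xs):
    # Direct definition: log of the sum of exponentials.
    return math.log(sum(math.exp(x) for x in xs))

# When one argument dominates, LSE is very close to the maximum.
print(lse([10.0, 0.0, -5.0]))   # close to max = 10
print(lse([100.0, 1.0, 2.0]))   # essentially 100
```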

Indeed, there are the following tight bounds (if ${\displaystyle n>1}$, otherwise the first inequality is not strict):

${\displaystyle \max {\{x_{1},\dots ,x_{n}\}}<\mathrm {LSE} (x_{1},\dots ,x_{n})\leq \max {\{x_{1},\dots ,x_{n}\}}+\log(n).}$

The upper bound is equality if and only if all ${\displaystyle x_{i}}$ are equal.

This is because, for positive numbers ${\textstyle a_{1},\dots ,a_{n}}$, a sum is at most ${\displaystyle n}$ times its largest term, ${\textstyle \sum _{i=1}^{n}a_{i}\leq n\cdot \max _{i}a_{i}}$, and a sum of positive numbers is at least any single term, including the maximum, ${\textstyle \sum _{i=1}^{n}a_{i}\geq \max _{i}a_{i}}$; the latter inequality is strict if ${\displaystyle n>1}$ (since positive terms are being added to the maximum). Applying this to ${\displaystyle a_{i}=\exp(x_{i})}$ and taking logarithms, one gets:

${\displaystyle {\begin{aligned}\max {\{x_{1},\dots ,x_{n}\}}&=\log \left(\exp \left(\max _{i}x_{i}\right)\right)\\&\leq \log \left(\exp(x_{1})+\cdots +\exp(x_{n})\right)\\&\leq \log \left(n\cdot \exp \left(\max _{i}x_{i}\right)\right)\\&=\max {\{x_{1},\dots ,x_{n}\}}+\log(n).\end{aligned}}}$

The lower bound is attained only for ${\displaystyle n=1}$; for ${\displaystyle n>1}$ it is strict, though it is approached as all but one of the arguments tend to negative infinity.
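These bounds can be verified numerically; the following is an illustrative check under the direct definition (the name `lse` is an assumption, not a library function):

```python
import math

def lse(xs):
    # Direct definition: log of the sum of exponentials.
    return math.log(sum(math.exp(x) for x in xs))

xs = [0.5, -1.2, 3.0, 3.0]
m, n = max(xs), len(xs)
# Tight bounds: max < LSE <= max + log(n) for n > 1.
assert m < lse(xs) <= m + math.log(n)

# The upper bound is attained exactly when all arguments are equal.
ys = [2.0] * 4
assert abs(lse(ys) - (max(ys) + math.log(4))) < 1e-12
```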

Writing ${\displaystyle \mathbf {x} =(x_{1},\dots ,x_{n}),}$ the partial derivatives are:

${\displaystyle {\frac {\partial }{\partial x_{i}}}\mathrm {LSE} (\mathbf {x} )={\frac {\exp x_{i}}{\sum _{j}\exp {x_{j}}}}.}$

This can be calculated via logarithmic differentiation.

Collecting the partial derivatives into the gradient vector yields the softmax function, the multivariable analog of the logistic function.
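The gradient identity can be checked against finite differences; a minimal sketch (the names `lse` and `softmax` are illustrative, not from a particular library):

```python
import math

def lse(xs):
    return math.log(sum(math.exp(x) for x in xs))

def softmax(xs):
    # Each component is exp(x_i) divided by the sum of all exponentials.
    s = sum(math.exp(x) for x in xs)
    return [math.exp(x) / s for x in xs]

xs = [1.0, 2.0, -0.5]
h = 1e-6
for i, g in enumerate(softmax(xs)):
    bumped = list(xs)
    bumped[i] += h
    # Forward-difference estimate of the i-th partial derivative of LSE.
    numeric = (lse(bumped) - lse(xs)) / h
    assert abs(numeric - g) < 1e-4
```

The softmax components sum to one, as expected for the gradient of a function that increases by exactly ${\displaystyle t}$ when ${\displaystyle t}$ is added to every argument.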

## Log-sum-exp trick for log-domain calculations

The LSE function is often encountered when the usual arithmetic computations are performed on a logarithmic scale, as in log probability.

Just as multiplication in the linear domain becomes simple addition in the log domain, addition in the linear domain becomes LSE in the log domain.

A common purpose of using log-domain computations is to increase accuracy and avoid underflow and overflow problems when very small or very large numbers are represented directly (i.e. in a linear domain) using limited-precision floating-point numbers.

Unfortunately, evaluating LSE directly can itself cause overflow or underflow. The following mathematically equivalent formula avoids these problems and should be used instead (especially when the accuracy of the above 'max' approximation is not sufficient); many math libraries, such as IT++, provide a default LSE routine that uses this formula internally:

${\displaystyle \mathrm {LSE} (x_{1},\dots ,x_{n})=x^{*}+\log \left(\exp(x_{1}-x^{*})+\cdots +\exp(x_{n}-x^{*})\right),}$

where ${\displaystyle x^{*}=\max {\{x_{1},\dots ,x_{n}\}}.}$
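A sketch of the trick in code (illustrative names; not a library implementation). Naive evaluation overflows for large arguments, while the shifted form does not:

```python
import math

def lse_naive(xs):
    # Direct definition; math.exp(1000.0) raises OverflowError.
    return math.log(sum(math.exp(x) for x in xs))

def lse_stable(xs):
    # Shift by the maximum so the largest exponentiated term is exp(0) = 1;
    # all other exponents are negative, so nothing can overflow.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

big = [1000.0, 999.0]
# lse_naive(big) would raise OverflowError, but:
print(lse_stable(big))  # about 1000 + log(1 + e^-1)
```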

## A strictly convex log-sum-exp type function

LSE is convex but not strictly convex. We can define a strictly convex log-sum-exp type function[4] by adding an extra argument set to zero:

${\displaystyle \mathrm {LSE} _{0}^{+}(x_{1},\dots ,x_{n})=\mathrm {LSE} (0,x_{1},\dots ,x_{n})}$

This function is a proper Bregman generator (strictly convex and differentiable). It is encountered in machine learning, for example, as the cumulant of the multinomial/binomial family.
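A small illustration of the difference (a sketch; the names `lse` and `lse0` are assumptions): LSE is affine along the all-ones direction, which is exactly where strict convexity fails, and the fixed zero argument of ${\displaystyle \mathrm {LSE} _{0}^{+}}$ removes that flat direction.

```python
import math

def lse(xs):
    return math.log(sum(math.exp(x) for x in xs))

def lse0(xs):
    # LSE_0^+: prepend a fixed zero argument, as in the definition above.
    return lse([0.0] + list(xs))

x = [1.0, 2.0]
# LSE is affine along the all-ones direction: LSE(x + t*1) = LSE(x) + t.
assert abs(lse([v + 1.0 for v in x]) - (lse(x) + 1.0)) < 1e-12
# LSE_0^+ is not translation-invariant, so this flat direction is gone.
assert lse0([v + 1.0 for v in x]) < lse0(x) + 1.0

# With a single argument, LSE_0^+ reduces to softplus, log(1 + e^x),
# the cumulant of the binomial family.
print(lse0([3.0]))
```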