# LogSumExp

The LogSumExp (LSE) function is a smooth approximation to the maximum function, mainly used by machine learning algorithms[1]. It's defined as the logarithm of the sum of the exponentials of the arguments:

${\displaystyle LSE(x_{1},\dots ,x_{n})=\log \left(\exp(x_{1})+\cdots +\exp(x_{n})\right)}$

The domain of the LogSumExp function is ${\displaystyle \mathbb {R} ^{n}}$, the real coordinate space, and its codomain is ${\displaystyle \mathbb {R} }$, the real line. The approximation of the maximum improves as the gap between the largest argument and the others grows. The LogSumExp function is convex, and is strictly monotonically increasing everywhere in its domain[2] (but not strictly convex everywhere[3]).

Conversely, when encountered directly, LSE can be well approximated by ${\displaystyle \max {\{x_{1},\dots ,x_{n}\}}}$, owing to the following tight bounds.

${\displaystyle \max {\{x_{1},\dots ,x_{n}\}}\leq LSE(x_{1},\dots ,x_{n})\leq \max {\{x_{1},\dots ,x_{n}\}}+\log(n)}$

The lower bound is approached when all but one of the arguments tend to negative infinity, and the upper bound is attained when all the arguments are equal.
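These bounds can be checked numerically. The sketch below uses a direct (naive) evaluation of the definition; the helper name `logsumexp` is chosen here for illustration.

```python
import math

def logsumexp(xs):
    # Direct definition: log(exp(x1) + ... + exp(xn)).
    # Fine for moderate inputs; see the shifted formula below for
    # a numerically robust version.
    return math.log(sum(math.exp(x) for x in xs))

xs = [1.0, 2.0, 3.0]
lse = logsumexp(xs)
m = max(xs)

# The tight bounds  max <= LSE <= max + log(n)  hold:
assert m <= lse <= m + math.log(len(xs))
```

When all arguments are equal, `logsumexp([c] * n)` evaluates to exactly `c + log(n)`, matching the upper bound.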

## Log-sum-exp trick for log-domain calculations

The LSE function is often encountered when the usual arithmetic computations are performed in log-domain or log-scale.

Just as multiplication in linear scale becomes simple addition in log scale, addition in linear scale becomes the LSE in the log domain.
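This correspondence can be illustrated with two small numbers held in log scale (the variable names here are for illustration only):

```python
import math

a, b = 0.3, 0.5
la, lb = math.log(a), math.log(b)   # log-domain representations

# Multiplication in linear scale is addition in log scale:
assert math.isclose(math.log(a * b), la + lb)

# Addition in linear scale is LSE in log scale:
lse = math.log(math.exp(la) + math.exp(lb))
assert math.isclose(math.log(a + b), lse)
```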

A common purpose of log-domain computations is to increase accuracy and to avoid underflow and overflow problems that arise when very small or very large numbers are represented directly (i.e. in the linear domain) using limited-precision floating-point numbers.

Unfortunately, using LSE directly in this case can itself cause overflow/underflow problems, so the following equivalent formula should be used instead (especially when the accuracy of the 'max' approximation above is not sufficient). Accordingly, many math libraries, such as IT++, provide a routine for LSE and use this formula internally.

${\displaystyle LSE(x_{1},\dots ,x_{n})=x^{*}+\log \left(\exp(x_{1}-x^{*})+\cdots +\exp(x_{n}-x^{*})\right)}$

where ${\displaystyle x^{*}=\max {\{x_{1},\dots ,x_{n}\}}}$
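A minimal sketch of the shifted formula (the function name `logsumexp_stable` is an assumption, not a particular library's API):

```python
import math

def logsumexp_stable(xs):
    # Shift every argument by the maximum so the largest exponent is
    # exp(0) = 1, which cannot overflow; exponentials of very negative
    # shifted values underflow harmlessly to 0.
    x_star = max(xs)
    return x_star + math.log(sum(math.exp(x - x_star) for x in xs))

# Naive evaluation of log(exp(1000) + exp(1000)) overflows in double
# precision, but the shifted form gives the exact answer:
print(logsumexp_stable([1000.0, 1000.0]))  # 1000 + log(2) ≈ 1000.6931
```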

## A strictly convex log-sum-exp type function

LSE is convex but not strictly convex. We can define a strictly convex log-sum-exp type function[4] by adding an extra argument set to zero:

${\displaystyle LSE_{0}^{+}(x_{1},...,x_{n})=LSE(0,x_{1},...,x_{n})}$

This function is a proper Bregman generator (strictly convex and differentiable). It arises in machine learning, for example as the cumulant function of the multinomial/binomial family.
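A small sketch of this variant, reusing the max-shift for stability (the function name `lse0_plus` is illustrative):

```python
import math

def lse0_plus(xs):
    # LSE with an extra argument fixed at zero: log(1 + sum exp(xi)),
    # computed with the max-shift for numerical stability.
    x_star = max(0.0, *xs)
    return x_star + math.log(
        math.exp(-x_star) + sum(math.exp(x - x_star) for x in xs)
    )

# For a single argument this reduces to log(1 + e^x), the softplus
# function, which is the cumulant function of the binomial/Bernoulli
# family mentioned above.
assert math.isclose(lse0_plus([0.0]), math.log(2.0))
```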