In mathematics, in particular probability theory and related fields, the softmax function is a generalization of the logistic function that maps a K-dimensional vector of arbitrary real values to a K-dimensional vector of values in the range (0, 1), defined as:

$$\sigma(\mathbf{z})_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}, \qquad j = 1, \dots, K.$$
Since the output vector sums to one and all its elements are strictly between zero and one, they represent a categorical probability distribution. For this reason, the softmax function is used in various probabilistic multiclass classification methods including multinomial logistic regression, multiclass linear discriminant analysis, naive Bayes classifiers and neural networks. Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the j-th class given a sample vector x is:

$$P(y = j \mid \mathbf{x}) = \frac{e^{\mathbf{x}^{\mathsf{T}} \mathbf{w}_j}}{\sum_{k=1}^{K} e^{\mathbf{x}^{\mathsf{T}} \mathbf{w}_k}}.$$
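The definition translates directly into code. The following NumPy sketch (the function name and the max-subtraction step are illustrative choices, not part of the definition) subtracts the largest entry before exponentiating; the factor cancels in the ratio, so the result is unchanged, but overflow for large inputs is avoided:

```python
import numpy as np

def softmax(z):
    """Map a K-dimensional vector of reals to a categorical distribution."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())  # shifting by the max leaves the ratio unchanged
    return e / e.sum()

p = softmax([1.0, 2.0, 3.0])
print(p)          # [0.09003057 0.24472847 0.66524096]
print(p.sum())    # 1.0
```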
Artificial neural networks
In neural network simulations, the softmax function is often implemented at the final layer of a network used for classification. Such networks are then trained under a log loss (or cross-entropy) regime, giving a non-linear variant of multinomial logistic regression.
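To illustrate this pairing, here is a minimal sketch (the three-class example and helper names are illustrative) of a softmax output layer under cross-entropy loss. A convenient consequence of the pairing is that the gradient of the loss with respect to the input logits reduces to the predicted distribution minus the one-hot target:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, y):
    # y is a one-hot target vector; log loss of the predicted distribution p
    return -np.sum(y * np.log(p))

z = np.array([2.0, -1.0, 0.5])   # logits from the final linear layer
y = np.array([0.0, 1.0, 0.0])    # one-hot target

p = softmax(z)
# For softmax followed by cross-entropy, dL/dz simplifies to p - y.
grad = p - y

# Check against a central-difference numerical gradient.
eps = 1e-6
num = np.array([
    (cross_entropy(softmax(z + eps * np.eye(3)[k]), y)
     - cross_entropy(softmax(z - eps * np.eye(3)[k]), y)) / (2 * eps)
    for k in range(3)
])
print(np.allclose(grad, num, atol=1e-6))  # True
```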
Since the function maps a vector and a specific index i to a real value, the derivative needs to take the index into account:

$$\frac{\partial}{\partial q_k}\,\sigma(\mathbf{q})_i = \sigma(\mathbf{q})_i \left(\delta_{ik} - \sigma(\mathbf{q})_k\right),$$

where $\delta_{ik}$ is the Kronecker delta (1 if $i = k$, 0 otherwise).
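A sketch of this Jacobian in NumPy (the helper names are illustrative). Each row sums to zero: perturbing one input reshuffles probability mass among the outputs, but the outputs still sum to one:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def softmax_jacobian(z):
    """J[i, k] = d softmax(z)_i / d z_k = s_i * (delta_ik - s_k)."""
    s = softmax(z)
    return np.diag(s) - np.outer(s, s)

z = np.array([0.3, -1.2, 2.0])
J = softmax_jacobian(z)
print(np.allclose(J.sum(axis=1), 0.0))  # True: rows sum to zero
```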
See Multinomial logit for a probability model which uses the softmax activation function.
Reinforcement learning
In the field of reinforcement learning, a softmax function can be used to convert values into action probabilities. The function commonly used is:

$$P(a) = \frac{e^{q(a)/\tau}}{\sum_{i=1}^{n} e^{q(i)/\tau}},$$

where the action value $q(a)$ corresponds to the expected reward of following action a and $\tau$ is called a temperature parameter (in allusion to chemical kinetics). For high temperatures ($\tau \to \infty$), all actions have nearly the same probability; the lower the temperature, the more the expected rewards affect the probability. For a low temperature ($\tau \to 0^{+}$), the probability of the action with the highest expected reward tends to 1.
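A short sketch of softmax action selection at different temperatures (the function name and the action-value numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_action_probs(q, tau):
    """P(a) = exp(q[a]/tau) / sum_i exp(q[i]/tau), for temperature tau > 0."""
    z = np.asarray(q, dtype=float) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

q = np.array([1.0, 2.0, 3.0])   # estimated action values
for tau in (100.0, 1.0, 0.01):
    print(tau, np.round(softmax_action_probs(q, tau), 3))
# tau = 100  -> nearly uniform probabilities
# tau = 1    -> ordinary softmax of the action values
# tau = 0.01 -> essentially greedy: almost all mass on the best action

# Sampling an action from the resulting distribution:
a = rng.choice(len(q), p=softmax_action_probs(q, 1.0))
```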
Smooth approximation of maximum
When parameterized by some real constant $\alpha$, the following formulation becomes a smooth, differentiable approximation of the maximum function for large positive $\alpha$:

$$\mathcal{S}_\alpha(x_1, \dots, x_n) = \frac{\sum_{i=1}^{n} x_i\, e^{\alpha x_i}}{\sum_{i=1}^{n} e^{\alpha x_i}}.$$
$\mathcal{S}_\alpha$ has the following properties:
- $\mathcal{S}_\alpha \to \max$ as $\alpha \to \infty$
- $\mathcal{S}_0$ is the average of its inputs
- $\mathcal{S}_\alpha \to \min$ as $\alpha \to -\infty$
The gradient of $\mathcal{S}_\alpha$ is given by:

$$\nabla_{x_i}\,\mathcal{S}_\alpha = \frac{e^{\alpha x_i}}{\sum_{j=1}^{n} e^{\alpha x_j}} \left[\,1 + \alpha\left(x_i - \mathcal{S}_\alpha(x_1, \dots, x_n)\right)\right],$$
which makes the softmax function useful for optimization techniques that use gradient descent.
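A sketch of the approximation (the name smooth_max is illustrative), checking the three properties listed above numerically:

```python
import numpy as np

def smooth_max(x, alpha):
    """S_alpha(x) = sum_i x_i e^{alpha x_i} / sum_i e^{alpha x_i}."""
    x = np.asarray(x, dtype=float)
    w = np.exp(alpha * (x - x.max()))   # shift for numerical stability
    return np.sum(x * w) / np.sum(w)

x = [1.0, 2.0, 3.0]
print(smooth_max(x, 0.0))     # 2.0   (the mean of the inputs)
print(smooth_max(x, 10.0))    # ~3.0  (approaches the maximum)
print(smooth_max(x, -10.0))   # ~1.0  (approaches the minimum)
```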
Normalization
The softmax function is also used to normalize data that is positively skewed and contains many values near zero. It takes a variable such as revenue or age and transforms the values to a scale from zero to one. This kind of transformation is especially useful when the data spans many orders of magnitude.
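One simple variant of such a transformation, sketched below as an assumption rather than a fixed recipe (Pyle describes a closely related scheme), z-scores the variable and then applies the logistic function, so values near the mean map almost linearly while extremes are compressed into (0, 1):

```python
import numpy as np

def softmax_normalize(x):
    """Squash a positively skewed variable into (0, 1).

    Minimal sketch: standardize to z-scores, then apply the logistic
    function. Outliers are pulled in rather than stretching the scale.
    """
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return 1.0 / (1.0 + np.exp(-z))

revenue = np.array([10.0, 12.0, 9.0, 11.0, 10_000.0])  # one extreme value
print(np.round(softmax_normalize(revenue), 3))
# [0.378 0.378 0.377 0.378 0.881] -- the outlier is compressed
# instead of dominating the whole scale.
```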
References
- Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. pp. 206–209.
- ai-faq: "What is a softmax activation function?"
- Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: The MIT Press. See the section "Softmax Action Selection".
- Pyle, Dorian (1999). Data Preparation for Data Mining. Morgan Kaufmann. pp. 271–274, 355–359.