# Regularization (mathematics)

In mathematics, statistics, and particularly the fields of machine learning and inverse problems, regularization refers to a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting. This information usually takes the form of a penalty for complexity, such as restrictions on smoothness or bounds on the vector space norm.

A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.

The same idea arose in many fields of science. For example, the least-squares method can be viewed as a very simple form of regularization. A simple form of regularization applied to integral equations, generally termed Tikhonov regularization after Andrey Nikolayevich Tikhonov, is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular.
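The Tikhonov trade-off can be written as a penalized least-squares problem. In the display below (notation chosen here for illustration), $A$ is the linear operator, $b$ the observed data, and $\lambda > 0$ the regularization weight:

$$\hat{x} = \arg\min_{x} \; \|Ax - b\|_2^2 + \lambda \|x\|_2^2$$

The first term rewards fidelity to the data, the second penalizes solutions with large norm, and $\lambda$ sets the trade-off between the two.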

## Regularization in statistics and machine learning

In statistics and machine learning, regularization is used to prevent overfitting. Typical examples of regularization in statistical machine learning include ridge regression, the lasso, and the L2-norm penalty in support vector machines.
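Ridge regression admits a closed-form solution, which makes the effect of the penalty easy to see. The sketch below (synthetic data and names are illustrative) compares the penalized and unpenalized coefficient vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.0])
y = X @ beta_true + 0.5 * rng.normal(size=n)

def ridge(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta_ols = ridge(X, y, 0.0)     # lam = 0 recovers ordinary least squares
beta_ridge = ridge(X, y, 10.0)  # the L2 penalty shrinks coefficients toward zero

# The penalized solution has a strictly smaller norm than the unpenalized one.
print(np.linalg.norm(beta_ridge) < np.linalg.norm(beta_ols))
```

Shrinking the coefficient norm is exactly the "bound on the vector space norm" kind of penalty described above.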

Regularization methods are also used for model selection, where they work by implicitly or explicitly penalizing models based on the number of their parameters. For example, Bayesian learning methods make use of a prior probability that (usually) gives lower probability to more complex models. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC). Alternative methods of controlling overfitting not involving regularization include cross-validation.
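For a Gaussian linear model, AIC and BIC can both be computed from the residual sum of squares; the sketch below uses the common RSS-based forms (an assumption; additive constants are dropped):

```python
import numpy as np

def aic_bic(rss, n, k):
    """AIC and BIC for a Gaussian linear model, up to an additive constant.

    rss: residual sum of squares, n: sample size, k: number of parameters.
    """
    ll_term = n * np.log(rss / n)   # proportional to -2 * max log-likelihood
    aic = ll_term + 2 * k           # AIC: penalty of 2 per parameter
    bic = ll_term + k * np.log(n)   # BIC: penalty of ln(n) per parameter
    return aic, bic

# Once n exceeds e^2 (about 7.4), ln(n) > 2, so BIC penalizes extra
# parameters more heavily than AIC and tends to select sparser models.
aic, bic = aic_bic(rss=12.0, n=100, k=3)
```

Both criteria instantiate the "fit measure plus entropy measure" pattern tabulated below for the linear model.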

Regularization can also be used to fine-tune model complexity through an augmented error function evaluated with cross-validation. As model complexity increases, the training error keeps decreasing while the validation error levels off and remains roughly constant. Regularization adds a second term to the error function that penalizes more complex models, which tend to have higher variance; the penalty therefore grows as model complexity increases.[1]
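The augmented-error idea can be sketched by scoring several penalty weights on a held-out validation set and keeping the one with the lowest validation error (the data, grid, and split here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 10))
beta = np.zeros(10)
beta[:3] = [2.0, -1.0, 0.5]           # only 3 of 10 features matter
y = X @ beta + rng.normal(size=80)

X_train, X_val = X[:60], X[60:]
y_train, y_val = y[:60], y[60:]

def ridge_fit(X, y, lam):
    # Minimizes the augmented error ||y - X b||^2 + lam * ||b||^2.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Pick the penalty weight that minimizes error on the validation set.
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
val_err = {lam: np.mean((X_val @ ridge_fit(X_train, y_train, lam) - y_val) ** 2)
           for lam in grid}
best_lam = min(val_err, key=val_err.get)
```

A full cross-validation would average this score over several train/validation splits rather than using a single split.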

Examples of applications of different regularization methods to the linear model are:

| Model | Fit measure | Entropy measure |
|-------|-------------|-----------------|
| AIC/BIC | $\Vert Y-X\beta\Vert_2$ | $\Vert\beta\Vert_0$ |
| Ridge regression | $\Vert Y-X\beta\Vert_2$ | $\Vert\beta\Vert_2$ |
| Lasso[2] | $\Vert Y-X\beta\Vert_2$ | $\Vert\beta\Vert_1$ |
| Basis pursuit denoising | $\Vert Y-X\beta\Vert_2$ | $\lambda\Vert\beta\Vert_1$ |
| Rudin-Osher-Fatemi model (TV) | $\Vert Y-X\beta\Vert_2$ | $\lambda\Vert\nabla\beta\Vert_1$ |
| Potts model | $\Vert Y-X\beta\Vert_2$ | $\lambda\Vert\nabla\beta\Vert_0$ |
| RLAD[3] | $\Vert Y-X\beta\Vert_1$ | $\Vert\beta\Vert_1$ |
| Dantzig Selector[4] | $\Vert X^\top (Y-X\beta)\Vert_\infty$ | $\Vert\beta\Vert_1$ |

A combination of the LASSO and ridge regression methods is elastic net regularization.
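One way to see how the elastic net combines the two penalties is through its proximal operator, which soft-thresholds (the lasso part, inducing sparsity) and then shrinks (the ridge part). This is a sketch of that single step, not a full elastic net solver:

```python
import numpy as np

def elastic_net_prox(v, lam1, lam2):
    """Proximal operator of lam1 * ||b||_1 + (lam2 / 2) * ||b||_2^2.

    Soft-threshold by lam1, then shrink by 1 / (1 + lam2).
    """
    soft = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)  # lasso part
    return soft / (1.0 + lam2)                             # ridge part

v = np.array([3.0, -0.5, 1.2])
print(elastic_net_prox(v, lam1=1.0, lam2=1.0))  # -> [1.0, 0.0, 0.1]
```

Entries smaller than `lam1` in magnitude are zeroed out exactly, while the surviving entries are shrunk toward zero, combining the qualitative behavior of both penalties.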