In statistics and applications of statistics, normalization can have a range of meanings. In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted values into alignment. In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution. A different approach to normalization of probability distributions is quantile normalization, where the quantiles of the different measures are brought into alignment.
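The idea behind quantile normalization can be sketched as follows: each value is replaced by the mean, across datasets, of the values at the same rank, so that every dataset ends up with exactly the same empirical distribution. A minimal illustration (the data and the `quantile_normalize` helper are hypothetical, and ties are handled naively by rank order):

```python
import numpy as np

def quantile_normalize(data):
    """Quantile-normalize the columns of a 2-D array so that every
    column shares the same empirical distribution: the mean of the
    sorted columns."""
    data = np.asarray(data, dtype=float)
    # Rank of each entry within its own column (0 = smallest).
    ranks = np.argsort(np.argsort(data, axis=0), axis=0)
    # Reference distribution: mean of the k-th smallest values across columns.
    reference = np.sort(data, axis=0).mean(axis=1)
    return reference[ranks]

scores = np.array([[5.0, 4.0, 3.0],
                   [2.0, 1.0, 4.0],
                   [3.0, 4.0, 6.0],
                   [4.0, 2.0, 8.0]])
normalized = quantile_normalize(scores)
# After normalization, every column contains the same set of values,
# so the columns' quantiles are in exact alignment.
```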
In another usage in statistics, normalization refers to the creation of shifted and scaled versions of statistics, where the intention is that these normalized values allow the comparison of corresponding normalized values for different datasets in a way that eliminates the effects of certain gross influences, as in an anomaly time series. Some types of normalization involve only a rescaling, to arrive at values relative to some size variable. In terms of levels of measurement, such ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios).
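As a concrete illustration of how shifting and scaling eliminates a gross influence, two records of the same quantity on different measurement scales become identical once each is standardized against its own mean and standard deviation (the data below are made up):

```python
import numpy as np

# The same readings on two different scales (an affine change of units).
celsius = np.array([10.0, 12.0, 15.0, 11.0, 9.0])
fahrenheit = celsius * 9 / 5 + 32

def standardize(x):
    """Shift and scale to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std()

a = standardize(celsius)
b = standardize(fahrenheit)
# a and b are identical: the effect of the scale change is eliminated.
```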
In theoretical statistics, parametric normalization can often lead to pivotal quantities – functions whose sampling distribution does not depend on the parameters – and to ancillary statistics – pivotal quantities that can be computed from observations, without knowing parameters.
There are different types of normalizations in statistics – nondimensional ratios of errors, residuals, means and standard deviations, which are hence scale invariant – some of which may be summarized as follows. Note that in terms of levels of measurement, these ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios).
| Name | Formula | Use |
|------|---------|-----|
| Standard score | (X − μ)/σ | Normalizing errors when population parameters are known. Works well for populations that are normally distributed. |
| Student's t-statistic | (X̄ − μ₀)/(s/√n) | The departure of the estimated value of a parameter from its hypothesized value, normalized by its standard error. |
| Studentized residual | ε̂ᵢ/σ̂ᵢ | Normalizing residuals when parameters are estimated, particularly across different data points in regression analysis. |
| Standardized moment | μₖ/σᵏ | Normalizing moments, using the standard deviation σ as a measure of scale. |
| Coefficient of variation | σ/μ | Normalizing dispersion, using the mean as a measure of scale, particularly for positive distributions such as the exponential and Poisson distributions. |
| Min-max feature scaling | X′ = (X − X_min)/(X_max − X_min) | Feature scaling brings all values into the range [0, 1]; this is also called unity-based normalization. It can be generalized to restrict the values to any range [a, b] via X′ = a + (X − X_min)(b − a)/(X_max − X_min). |
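The two entries above most common in practice, the standard score and min-max feature scaling, can be sketched numerically (illustrative data; the generalization to an arbitrary range [a, b] follows the standard formula):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

# Standard score: z = (x - mu) / sigma.
z = (x - x.mean()) / x.std()

# Min-max scaling into [0, 1] (unity-based normalization).
unit = (x - x.min()) / (x.max() - x.min())

# Generalization to an arbitrary range [a, b].
a, b = -1.0, 1.0
scaled = a + (x - x.min()) * (b - a) / (x.max() - x.min())
```

The standardized values have zero mean and unit standard deviation, while the min-max results map the smallest value to the lower bound and the largest to the upper bound.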
Note that some other ratios, such as the variance-to-mean ratio σ²/μ, are also used for normalization but are not nondimensional: the units do not cancel, so the ratio has units and is not scale-invariant.
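The contrast can be checked numerically: under a change of units (multiplying the data by a constant), the variance-to-mean ratio changes, while the dimensionless coefficient of variation does not (illustrative data):

```python
import numpy as np

counts = np.array([3.0, 7.0, 4.0, 6.0])

def vmr(x):
    """Variance-to-mean ratio, sigma^2 / mu: carries the units of x."""
    return x.var() / x.mean()

def cv(x):
    """Coefficient of variation, sigma / mu: dimensionless."""
    return x.std() / x.mean()

k = 10.0  # a change of units scales the VMR by k but leaves the CV alone
```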
Other non-dimensional normalizations that can be used with no assumptions on the distribution include:
- Assignment of percentiles. This is common on standardized tests. See also quantile normalization.
- Normalization by adding and/or multiplying by constants so that values fall in a fixed range such as [0, 1] or, for a probability density function, so that the total probability integrates to 1. The latter has applications in fields such as physical chemistry, where |ψ|² is normalized so that the total probability equals one.
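A sketch of the density case: dividing a sampled, unnormalized function by its integral so that the result integrates to one, as is done for probability densities and for |ψ|². The grid and function here are illustrative:

```python
import numpy as np

# An unnormalized, Gaussian-shaped function sampled on a grid.
x = np.linspace(-5.0, 5.0, 10001)
dx = x[1] - x[0]
unnormalized = np.exp(-x**2)

# Simple Riemann-sum estimate of the integral; dividing by it yields
# a function whose total integral is (approximately) 1.
area = (unnormalized * dx).sum()
pdf = unnormalized / area
```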