Feature scaling

Feature scaling is a method used to standardize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.

Motivation

Since the range of values of raw data varies widely, in some machine learning algorithms objective functions will not work properly without normalization. For example, many classifiers calculate the distance between two points by the Euclidean distance. If one of the features has a broad range of values, the distance will be governed by this particular feature. Therefore, the range of all features should be normalized so that each feature contributes approximately proportionately to the final distance.

Methods

Rescaling

The simplest method is rescaling the range of features so that all features lie in a common range such as [0, 1] or [−1, 1]. Selecting the target range depends on the nature of the data. The general formula is given as:

:<math>x' = \frac{x - \min(x)}{\max(x) - \min(x)}</math>

where <math>x</math> is an original value and <math>x'</math> is the normalized value. For example, suppose that we have the students' weight data, and the students' weights span [160 pounds, 200 pounds]. To rescale this data, we first subtract 160 from each student's weight and divide the result by 40 (the difference between the maximum and minimum weights).
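
A minimal Python sketch of this rescaling (assuming NumPy; the weight values are illustrative, not from any real dataset):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative student weights in pounds, spanning [160, 200].
weights = np.array([160.0, 170.0, 185.0, 200.0])

# Rescale to [0, 1]: subtract the minimum, divide by the range.
rescaled = (weights - weights.min()) / (weights.max() - weights.min())

print(rescaled)  # [0.    0.25  0.625 1.   ]
</syntaxhighlight>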

Standardization

In machine learning, we can handle various types of data, e.g., audio signals and pixel values for image data, and this data can include multiple dimensions. Feature standardization makes the values of each feature in the data have zero mean and unit variance. This method is widely used for normalization in many machine learning algorithms (e.g., support vector machines, logistic regression, and neural networks). In general, we first calculate the mean and standard deviation for each feature, then subtract the mean from each value of that feature and divide the result by the feature's standard deviation.
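
Written as a formula, with <math>\bar{x}</math> the mean of a feature and <math>\sigma</math> its standard deviation, the procedure described above is:

:<math>x' = \frac{x - \bar{x}}{\sigma}</math>

A minimal Python sketch (assuming NumPy; the toy matrix, with samples as rows and features as columns, is illustrative only):

<syntaxhighlight lang="python">
import numpy as np

# Toy data matrix: rows are samples, columns are features (assumed values).
X = np.array([[160.0, 5.2],
              [170.0, 5.8],
              [185.0, 6.0],
              [200.0, 6.6]])

# Subtract each feature's mean, then divide by its standard deviation.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))  # approximately [0. 0.]
print(X_std.std(axis=0))   # [1. 1.]
</syntaxhighlight>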

Scaling to unit length

Another option that is widely used in machine learning is to scale the components of a feature vector such that the complete vector has length one. This usually means dividing each component by the Euclidean length (L2 norm) of the vector. In some applications (e.g., histogram features) it can be more practical to use the L1 norm (i.e., Manhattan or city-block length) of the feature vector:

:<math>x' = \frac{x}{\|x\|}</math>

This is especially important if, in the following learning steps, a scalar metric is used as a distance measure.
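
A minimal Python sketch of both variants (assuming NumPy; the example vector is illustrative):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative feature vector (assumed values).
x = np.array([3.0, 4.0])

# Scale to unit Euclidean (L2) length: divide by ||x||_2 = 5.
x_l2 = x / np.linalg.norm(x)         # [0.6, 0.8], Euclidean length 1

# Alternative for histogram-like features: unit L1 length (components sum to 1).
x_l1 = x / np.linalg.norm(x, ord=1)  # [3/7, 4/7]
</syntaxhighlight>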

Application

In gradient descent, feature scaling can improve the convergence speed of the algorithm. In support vector machines,[1] it can reduce the time required to find support vectors and helps place the data points properly in the space of the kernel function.
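
As a sketch of the gradient-descent point, the toy example below (assuming NumPy; the data, learning rate, and step count are arbitrary illustrative choices) fits a least-squares model with plain batch gradient descent. A learning rate that diverges on the raw, differently scaled features converges once the features are standardized:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 0 varies over about one unit, feature 1 over about a thousand.
X = np.column_stack([rng.random(100) - 0.5,
                     1000.0 * (rng.random(100) - 0.5)])
y = 2.0 * X[:, 0] + 0.003 * X[:, 1] + rng.normal(0.0, 0.01, size=100)
y = y - y.mean()  # center the target so no intercept term is needed

def gd_final_loss(X, y, lr, steps=500):
    """Plain batch gradient descent on mean squared error; returns final loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
        if np.abs(w).max() > 1e100:  # step size too large for this feature scale
            return float("inf")      # report divergence
    return float(np.mean((X @ w - y) ** 2))

# Standardize each feature to zero mean and unit variance.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# With the same learning rate, descent diverges on the raw features
# but converges once both features are on a comparable scale.
print("raw features:   ", gd_final_loss(X, y, lr=0.01))      # inf (diverged)
print("scaled features:", gd_final_loss(X_std, y, lr=0.01))  # near the noise level
</syntaxhighlight>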

References

  1. ^ Juszczak, P. (2002). "Feature scaling in support vector data descriptions". Proc. 8th Annu. Conf. Adv. School Comput. Imaging: 95–10.
  • S. Theodoridis, K. Koutroumbas (2008). Pattern Recognition, 4th edition. Academic Press. ISBN 978-1-59749-272-0.