# Oversampling and undersampling in data analysis

Oversampling and undersampling in data analysis are techniques used to adjust the class distribution of a data set (i.e. the ratio between the different classes/categories represented).

Oversampling and undersampling are opposite and roughly equivalent techniques. They both involve using a bias to select more samples from one class than from another.

The usual reason for oversampling is to correct for a bias in the original dataset. One scenario where it is useful is when training a classifier using labelled training data from a biased source, since labelled training data is valuable but often comes from un-representative sources.

For example, suppose we have a sample of 1,000 people of which 66.7% are male (667 male, 333 female). We know the general population is 50% female, and we may wish to adjust our dataset to reflect this. Simple oversampling duplicates each female example once, producing an approximately balanced dataset of 1,333 samples in which roughly 50% are female. Simple undersampling instead drops male samples at random until the classes are the same size, giving a balanced dataset of 666 samples, again 50% female.
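The two simple strategies above can be sketched in a few lines of Python. This is a minimal illustration using a toy list of labels matching the numbers in the example; the variable names are ours, not from any library:

```python
import random

random.seed(0)

# Toy dataset mirroring the example: 667 "male" and 333 "female" samples.
data = ["male"] * 667 + ["female"] * 333

# Simple oversampling: duplicate every minority-class sample once.
females = [x for x in data if x == "female"]
oversampled = data + females  # 1333 samples, roughly 50% female

# Simple undersampling: randomly drop majority-class samples until the
# classes are the same size.
males = [x for x in data if x == "male"]
undersampled = random.sample(males, len(females)) + females  # 666 samples, 50% female

print(len(oversampled), oversampled.count("female"))    # 1333, 666
print(len(undersampled), undersampled.count("female"))  # 666, 333
```

Note that oversampling by duplication adds no new information, while undersampling discards potentially useful majority-class samples; the choice between them often depends on how much data is available.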

There are also more complex oversampling techniques, including the creation of artificial data points.[1]

## Oversampling techniques for classification problems

### SMOTE

There are a number of methods available to oversample a dataset used in a typical classification problem (using a classification algorithm to classify a set of images, given a labelled training set of images). The most common technique is known as SMOTE: Synthetic Minority Over-sampling Technique.[2] To illustrate how this technique works, consider some training data which has s samples and f features in its feature space; for simplicity, assume these features are continuous. As an example, consider a dataset of birds in which the minority class we want to oversample is described by beak length, wingspan, and weight (all continuous). To oversample, take a sample from the minority class and consider its k nearest neighbors (in feature space). To create a synthetic data point, take the vector between one of those k neighbors and the current data point, multiply this vector by a random number x which lies between 0 and 1, and add it to the current data point to obtain the new, synthetic data point.
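The interpolation step above can be sketched as follows. This is a minimal, brute-force version of the SMOTE recipe, assuming Euclidean distance; `smote_sample` and the toy bird measurements are illustrative, not part of any library:

```python
import numpy as np

rng = np.random.default_rng(42)

def smote_sample(minority, k=5, n_new=100):
    """Generate n_new synthetic minority samples: pick a random minority
    point, pick one of its k nearest neighbors, and interpolate a random
    fraction of the way toward it."""
    minority = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        p = minority[i]
        # Distances from p to every other minority point (exclude p itself).
        d = np.linalg.norm(minority - p, axis=1)
        d[i] = np.inf
        neighbors = np.argsort(d)[:k]      # indices of the k nearest neighbors
        q = minority[rng.choice(neighbors)]
        x = rng.random()                   # random number in [0, 1)
        synthetic.append(p + x * (q - p))  # point on the segment from p to q
    return np.array(synthetic)

# Minority class: birds described by beak length (cm), wingspan (cm), weight (g).
birds = [[3.1, 25.0, 180.0], [2.9, 24.0, 175.0], [3.3, 27.0, 190.0],
         [3.0, 26.0, 185.0], [3.2, 25.5, 182.0], [2.8, 23.5, 170.0]]
new_points = smote_sample(birds, k=3, n_new=10)
print(new_points.shape)  # (10, 3)
```

Because each synthetic point lies on a line segment between two existing minority samples, every generated feature value stays within the range already observed for that feature.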

The adaptive synthetic sampling approach, or ADASYN algorithm,[3] builds on the methodology of SMOTE, by shifting the importance of the classification boundary to those minority classes which are difficult to learn. ADASYN uses a weighted distribution for different minority class examples according to their level of difficulty in learning, where more synthetic data is generated for minority class examples that are harder to learn.
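The difficulty weighting at the heart of ADASYN can be sketched as follows: for each minority sample, the fraction of majority-class points among its k nearest neighbors serves as its "hardness", and these values are normalized into the distribution that decides how much synthetic data each sample receives. This is a simplified illustration assuming Euclidean distance; `adasyn_weights` is our own helper name:

```python
import numpy as np

def adasyn_weights(minority, majority, k=5):
    """Compute ADASYN's per-sample generation weights: minority points with
    more majority-class neighbors are harder to learn and get more
    synthetic data."""
    minority = np.asarray(minority, dtype=float)
    majority = np.asarray(majority, dtype=float)
    everything = np.vstack([minority, majority])
    labels = np.array([0] * len(minority) + [1] * len(majority))  # 1 = majority
    ratios = []
    for i, p in enumerate(minority):
        d = np.linalg.norm(everything - p, axis=1)
        d[i] = np.inf                            # exclude the point itself
        neighbors = np.argsort(d)[:k]
        ratios.append(labels[neighbors].mean())  # fraction of majority neighbors
    ratios = np.array(ratios)
    if ratios.sum() == 0:
        # No minority point borders the majority class: fall back to uniform.
        return np.full(len(minority), 1.0 / len(minority))
    return ratios / ratios.sum()                 # normalized weighted distribution
```

A minority point deep inside majority territory thus receives a large weight, while one surrounded by its own class receives little or none; the synthetic points themselves are then generated by SMOTE-style interpolation in proportion to these weights.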

## Undersampling techniques for classification problems

### Cluster centroids

Cluster centroids is a method that replaces clusters of majority-class samples with the cluster centroids of a K-means algorithm, where the number of clusters is set by the desired level of undersampling.
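A minimal sketch of this idea, using a small hand-rolled K-means so the example is self-contained (in practice one would use a library implementation; `cluster_centroid_undersample` is an illustrative name):

```python
import numpy as np

def cluster_centroid_undersample(majority, n_clusters, n_iter=50, seed=0):
    """Undersample the majority class by replacing it with the centroids of
    a K-means clustering; n_clusters sets the level of undersampling."""
    rng = np.random.default_rng(seed)
    X = np.asarray(majority, dtype=float)
    # Initialize centroids as a random subset of the points.
    centroids = X[rng.choice(len(X), size=n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(n_clusters):
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(axis=0)
    return centroids

# Six majority samples in two well-separated groups, reduced to two centroids.
majority = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
            [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]]
reduced = cluster_centroid_undersample(majority, n_clusters=2)
print(reduced.shape)  # (2, 2)
```

Unlike random undersampling, the retained points are synthetic summaries of the majority class, so its overall geometry is preserved even at aggressive reduction ratios.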

### Tomek links

Tomek links remove unwanted overlap between classes: majority-class samples are removed until all minimally distanced nearest-neighbor pairs are of the same class. A Tomek link is defined as follows: given an instance pair ${\displaystyle (x_{i},x_{j})}$, where ${\displaystyle x_{i}\in S_{\min },x_{j}\in S_{\max }}$ and ${\displaystyle d(x_{i},x_{j})}$ is the distance between ${\displaystyle x_{i}}$ and ${\displaystyle x_{j}}$, the pair ${\displaystyle (x_{i},x_{j})}$ is called a Tomek link if there is no instance ${\displaystyle x_{k}}$ such that ${\displaystyle d(x_{i},x_{k})<d(x_{i},x_{j})}$ or ${\displaystyle d(x_{j},x_{k})<d(x_{i},x_{j})}$. In this way, if two instances form a Tomek link then either one of these instances is noise or both are near a class border. Thus, one can use Tomek links to clean up overlap between classes. By removing overlapping examples, one can establish well-defined clusters in the training set, which can lead to improved classification performance.
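The definition above is equivalent to requiring that the two instances be each other's nearest neighbor while belonging to opposite classes, which makes detection straightforward to sketch (a brute-force illustration assuming Euclidean distance; `tomek_links` is our own helper name):

```python
import numpy as np

def tomek_links(X, y):
    """Find Tomek links: pairs (i, j) of opposite-class points that are
    each other's nearest neighbor, i.e. no third point is closer to
    either of them than they are to each other."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Pairwise distance matrix, with the diagonal masked out.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nearest = d.argmin(axis=1)  # index of each point's nearest neighbor
    links = []
    for i, j in enumerate(nearest):
        # Mutual nearest neighbors with different labels form a Tomek link.
        if nearest[j] == i and y[i] != y[j] and i < j:
            links.append((i, j))
    return links

# Two close opposite-class points form a link; the distant same-class
# points do not.
X = [[0.0, 0.0], [0.5, 0.0], [5.0, 5.0], [10.0, 10.0]]
y = [0, 1, 0, 0]
print(tomek_links(X, y))  # [(0, 1)]
```

Undersampling then removes the majority-class member of each link (or, in a cleaning variant, both members).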