Margin classifier


In machine learning (ML), a margin classifier is a type of classification model that can report, for each data sample, an associated distance from the decision boundary. For instance, if a linear classifier is used, a sample's distance (typically Euclidean, though others may be used) from the separating hyperplane is the margin of that sample.
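As a concrete illustration of the linear case, the following minimal Python sketch computes the signed distance of a few samples from a separating hyperplane $w \cdot x + b = 0$; the weight vector, bias, and data values are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Hypothetical linear classifier: decision boundary is w . x + b = 0
# (w, b, and the data below are illustrative values).
w = np.array([2.0, -1.0])
b = 0.5

X = np.array([[1.0, 0.0],
              [0.0, 3.0],
              [-1.0, -1.0]])
y = np.array([1, -1, -1])  # class labels in {-1, +1}

# Signed Euclidean distance of each sample from the separating hyperplane.
distances = (X @ w + b) / np.linalg.norm(w)

# Geometric margin: positive when a sample lies on the side matching its label.
margins = y * distances
print(margins)
```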

The notion of margin is important in several ML classification algorithms, as it can be used to bound the generalization error of these classifiers. These bounds are frequently shown using the VC dimension. The generalization error bounds for boosting algorithms and support vector machines are particularly prominent.

Margin for boosting algorithms


The margin for an iterative boosting algorithm given a dataset with two classes can be defined as follows: the classifier is given a sample pair $(x, y)$, where $x \in X$ is a domain space and $y \in Y = \{-1, +1\}$ is the sample's label. The algorithm then selects a classifier $h_j \in C$ at each iteration $j$, where $C$ is a space of possible classifiers that predict real values. This hypothesis is then weighted by $\alpha_j \in \mathbb{R}$ as selected by the boosting algorithm. At iteration $t$, the margin of a sample $x$ can thus be defined as

$$\frac{y \sum_{j=1}^{t} \alpha_j h_j(x)}{\sum_{j=1}^{t} |\alpha_j|}.$$

By this definition, the margin is positive if the sample is labeled correctly, or negative if the sample is labeled incorrectly.
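The following is a minimal Python sketch of this definition; the decision stumps, their weights, and the sample values are illustrative assumptions used only to show the computation.

```python
import numpy as np

def boosting_margin(alphas, hypotheses, x, y):
    """Normalized margin of a sample (x, y) under a weighted ensemble.

    `alphas` are the hypothesis weights chosen by the boosting algorithm and
    `hypotheses` are callables returning real-valued predictions; both are
    illustrative stand-ins for the quantities in the definition above.
    """
    scores = np.array([h(x) for h in hypotheses])
    alphas = np.asarray(alphas, dtype=float)
    return y * np.dot(alphas, scores) / np.sum(np.abs(alphas))

# Toy ensemble of three decision stumps on a one-dimensional input.
hypotheses = [
    lambda x: 1.0 if x > 0 else -1.0,
    lambda x: 1.0 if x > 2 else -1.0,
    lambda x: -1.0,
]
alphas = [0.7, 0.4, 0.1]

print(boosting_margin(alphas, hypotheses, x=1.5, y=+1))  # positive: labeled correctly
print(boosting_margin(alphas, hypotheses, x=1.5, y=-1))  # negative: labeled incorrectly
```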

This definition may be modified and is not the only way to define the margin for boosting algorithms. However, there are reasons why this definition may be appealing.[1]

Examples of margin-based algorithms


Many classifiers can give an associated margin for each sample. However, only some classifiers make use of this margin information while learning from a dataset.

Many boosting algorithms rely on the notion of a margin to assign weights to samples. If a convex loss is used (as in AdaBoost or LogitBoost, for instance), then a sample with a higher margin receives no more weight than a sample with a lower margin. This leads the boosting algorithm to concentrate weight on low-margin samples. In non-convex algorithms (e.g., BrownBoost), the margin still dictates a sample's weighting, though the weighting is non-monotone with respect to the margin.
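To illustrate the monotone relationship under a convex loss, the sketch below computes AdaBoost-style sample weights proportional to the exponential loss $\exp(-\text{margin})$; the margin values are illustrative assumptions, not output of any particular run.

```python
import numpy as np

# Illustrative unnormalized margins y * f(x) for a handful of samples
# (larger means a more confidently correct prediction).
margins = np.array([-1.5, -0.2, 0.0, 0.4, 2.0])

# AdaBoost's exponential loss yields sample weights proportional to exp(-margin),
# so low-margin (misclassified or barely correct) samples get the most weight.
weights = np.exp(-margins)
weights /= weights.sum()

for m, w in zip(margins, weights):
    print(f"margin {m:+.1f} -> relative weight {w:.3f}")
```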

Generalization error bounds


One theoretical motivation behind margin classifiers is that their generalization error may be bounded in terms of parameters of the algorithm and a margin term. An example of such a bound is for the AdaBoost algorithm.[1] Let $S$ be a set of $m$ data points, sampled independently at random from a distribution $D$. Assume the VC-dimension of the underlying base classifier is $d$ and $m \geq d \geq 1$. Then, with probability $1 - \delta$, we have the bound:[citation needed]

$$P_D\!\left(\frac{y \sum_j \alpha_j h_j(x)}{\sum_j |\alpha_j|} \leq 0\right) \leq P_S\!\left(\frac{y \sum_j \alpha_j h_j(x)}{\sum_j |\alpha_j|} \leq \theta\right) + O\!\left(\frac{1}{\sqrt{m}}\sqrt{\frac{d \log^2(m/d)}{\theta^2} + \log\frac{1}{\delta}}\right)$$

for all $\theta > 0$.
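The empirical probability $P_S(\cdot \leq \theta)$ appearing on the right-hand side can be estimated from a training set as the fraction of samples whose normalized margin is at most $\theta$. The sketch below does this for illustrative margin values; the function name and numbers are assumptions, not taken from the reference.

```python
import numpy as np

def empirical_margin_term(normalized_margins, theta):
    """Fraction of training samples with normalized margin <= theta,
    i.e. the empirical probability in the right-hand side of the bound."""
    normalized_margins = np.asarray(normalized_margins)
    return float(np.mean(normalized_margins <= theta))

# Illustrative normalized margins y * f(x) / sum|alpha| for a small training set.
margins = np.array([0.05, 0.12, 0.30, 0.45, -0.10, 0.60, 0.22, 0.08])

for theta in (0.1, 0.25, 0.5):
    print(f"theta = {theta}: empirical term = {empirical_margin_term(margins, theta):.3f}")
```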

References

  1. Schapire, Robert E.; Freund, Yoav; Bartlett, Peter; Lee, Wee Sun (1998). "Boosting the margin: A new explanation for the effectiveness of voting methods". The Annals of Statistics, 26(5):1651–1686.