The '''relevance vector machine''' ('''RVM''') is a [[machine learning]] technique that uses [[Bayesian theory]] to obtain sparse solutions for [[regression]] and [[Statistical classification|classification]]. The RVM has an identical functional form to the [[support vector machine]], but provides probabilistic classification.

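In Tipping's original formulation, the prediction of the model is the same kernel expansion used by the SVM,

:<math>y(\mathbf{x}) = \sum_{i=1}^{N} w_i K(\mathbf{x}, \mathbf{x}_i) + w_0,</math>

but each weight <math>w_i</math> is given a zero-mean Gaussian prior with its own precision hyperparameter <math>\alpha_i</math>. Maximizing the marginal likelihood over these precisions drives most of them to infinity, pruning the corresponding terms from the expansion; the few training points whose terms survive are the "relevance vectors".
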
Compared to the SVM, the Bayesian formulation avoids the set of free parameters that the SVM has, which usually require cross-validation-based post-optimization. However, RVMs use a gradient-ascent learning method and are therefore at risk of converging to local minima, unlike the standard [[SMO]]-based algorithms employed by [[SVM]]s, which are guaranteed to find a global optimum.

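The local-minima risk stems from the objective itself: the marginal likelihood maximized over the per-weight precisions is non-convex. The sketch below illustrates RVM regression using Tipping-style fixed-point re-estimation updates rather than plain gradient ascent; the RBF kernel, the pruning threshold, and all names here are illustrative assumptions, not a standard API.

<syntaxhighlight lang="python">
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of X1 and X2.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rvm_regression(X, t, gamma=1.0, n_iter=200, prune_at=1e9):
    """Sparse Bayesian regression via Tipping-style re-estimation (sketch)."""
    N = X.shape[0]
    # Design matrix: a bias column plus one kernel column per training point.
    Phi = np.hstack([np.ones((N, 1)), rbf_kernel(X, X, gamma)])
    alpha = np.ones(Phi.shape[1])   # per-weight prior precisions
    beta = 1.0 / np.var(t)          # noise precision, 1/sigma^2
    keep = np.arange(Phi.shape[1])  # original indices of unpruned columns
    for _ in range(n_iter):
        # Posterior over the retained weights: covariance Sigma and mean mu.
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # Fixed-point hyperparameter updates.
        g = 1.0 - alpha * np.diag(Sigma)  # "well-determinedness" factors
        alpha = g / (mu ** 2 + 1e-12)
        beta = max(N - g.sum(), 1e-6) / (((t - Phi @ mu) ** 2).sum() + 1e-12)
        # Prune basis functions whose precision diverges (weight pinned at zero).
        mask = alpha < prune_at
        alpha, keep, Phi = alpha[mask], keep[mask], Phi[:, mask]
    return mu[mask], keep, beta

# Toy demo: noisy samples of sinc(x); most basis functions are pruned away.
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(50, 1))
t = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(50)
mu, keep, beta = rvm_regression(X, t, gamma=0.5)
print(len(keep), "of 51 basis functions retained")
</syntaxhighlight>

Column 0 of the design matrix is the bias, so an index <math>i > 0</math> in <code>keep</code> corresponds to training point <math>i - 1</math>; because the objective is non-convex, different hyperparameter initializations can settle on different sparse solutions.
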
==External links==
