In mathematics, a relevance vector machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification.[1] A greedy optimisation procedure, and thus a faster version, was subsequently developed.[2][3] The RVM has an identical functional form to the support vector machine, but provides probabilistic classification.
It is actually equivalent to a Gaussian process model with covariance function:

k(\mathbf{x}, \mathbf{x}') = \sum_{j=1}^{N} \frac{1}{\alpha_j} \varphi(\mathbf{x}, \mathbf{x}_j)\, \varphi(\mathbf{x}', \mathbf{x}_j)

where \varphi is the kernel function (usually Gaussian), \alpha_j are the variances of the prior on the weight vector, and \mathbf{x}_1, \ldots, \mathbf{x}_N are the input vectors of the training set.[4]
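The covariance above can be sketched directly in code. The following is a minimal illustration, not a library implementation; the function names and the choice of a Gaussian kernel width are assumptions made for the example.

```python
import numpy as np

def phi(x, xj, gamma=1.0):
    # Gaussian (RBF) kernel basis function centred on training input xj.
    # The width gamma is an illustrative choice.
    return np.exp(-gamma * np.sum((x - xj) ** 2))

def rvm_covariance(x, x_prime, X_train, alpha, gamma=1.0):
    # k(x, x') = sum_j (1/alpha_j) * phi(x, x_j) * phi(x', x_j),
    # where alpha[j] is the precision of the prior on weight j.
    return sum(
        (1.0 / alpha[j]) * phi(x, X_train[j], gamma) * phi(x_prime, X_train[j], gamma)
        for j in range(len(X_train))
    )

X_train = np.array([[0.0], [1.0], [2.0]])
# A very large alpha effectively prunes the corresponding basis function,
# which is how the RVM obtains sparsity.
alpha = np.array([1.0, 10.0, 1e6])
k = rvm_covariance(np.array([0.5]), np.array([0.5]), X_train, alpha)
```

Note that the third training point contributes almost nothing to k because its prior precision alpha is huge; in a trained RVM, only the inputs whose weights survive this pruning are the "relevance vectors".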
Compared to that of support vector machines (SVMs), the Bayesian formulation of the RVM avoids the set of free parameters of the SVM (which usually require cross-validation-based post-optimization). However, RVMs use an expectation–maximization (EM)-like learning method and are therefore at risk of local minima. This is unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum (of the convex problem).
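The EM-like learning alluded to above can be sketched for the regression case: alternately compute the posterior over the weights given the current hyperparameters, then re-estimate the hyperparameters. This is a rough sketch of Tipping-style iterative updates under assumed data and basis functions; a real implementation would also prune basis functions whose precision diverges.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 30)
t = np.sinc(2 * X) + 0.05 * rng.standard_normal(30)  # illustrative noisy target

# Design matrix of Gaussian basis functions centred on the training inputs.
Phi = np.exp(-10.0 * (X[:, None] - X[None, :]) ** 2)
N, M = Phi.shape

alpha = np.ones(M)  # precisions of the zero-mean weight priors
beta = 100.0        # noise precision

for _ in range(100):
    # Posterior over weights given current hyperparameters (E-step-like).
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ t
    # Re-estimate hyperparameters (M-step-like).
    gamma = np.clip(1.0 - alpha * np.diag(Sigma), 0.0, 1.0)  # clip for numerical safety
    alpha = gamma / (mu ** 2 + 1e-12)
    beta = (N - gamma.sum()) / np.sum((t - Phi @ mu) ** 2)

# Most precisions diverge, so only a few "relevance vectors" keep weight.
relevant = np.flatnonzero(alpha < 1e4)
```

Each iterate maximizes the marginal likelihood locally, which is why the procedure can stall in a local minimum, in contrast to the convex SMO problem solved for SVMs.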
The relevance vector machine was patented in the United States by Microsoft (patent expired September 4, 2019).[5]