Relevance vector machine

2018-08-24 13:59:09     Category: Machine Learning

Compared to the SVM, the Bayesian formulation of the RVM avoids the set of free parameters that the SVM has, which usually require cross-validation-based post-optimization. However, RVMs use an Expectation–Maximization (EM)-like learning method and are therefore at risk of converging to a local minimum, unlike the standard SMO-based algorithms employed by SVMs, which are guaranteed to find a global optimum.[citation needed]

References

• Tipping, Michael E. Sparse Bayesian Learning and the Relevance Vector Machine. Journal of Machine Learning Research. 2001, 1: 211–244. doi:10.1162/15324430152748236.

Software

• dlib C++ Library
• The Kernel-Machine Library

External links

• Tipping's webpage on Sparse Bayesian Models and the RVM
• A Tutorial on RVM by Tristan Fletcher