In this paper, a sparse learning algorithm, the probabilistic classification vector machine (PCVM), is proposed. We analyze relevance vector machines (RVMs) for classification problems and observe that adopting the same prior for different classes may lead to unstable solutions. To tackle this problem, a signed and truncated Gaussian prior is adopted over every weight in PCVMs, where the sign of the prior is determined by the class label, i.e., +1 or -1. The truncated Gaussian prior not only restricts the sign of the weights but also leads to a sparse estimate of the weight vector, thereby controlling the complexity of the model. In PCVMs, the kernel parameters can be optimized simultaneously within the training algorithm. The performance of PCVMs is extensively evaluated on four synthetic data sets and 13 benchmark data sets using three performance metrics: error rate (ERR), area under the receiver operating characteristic curve (AUC), and root mean squared error (RMSE). We compare PCVMs with soft-margin support vector machines (SVMSoft), hard-margin support vector machines (SVMHard), SVMs with the kernel parameters optimized by PCVMs (SVMPCVM), relevance vector machines (RVMs), and several other baseline classifiers. Using the 5 × 2 cross-validation F test (five replications of twofold cross-validation) over single data sets and the Friedman test with the corresponding post-hoc tests over multiple data sets, we find that PCVMs outperform the other algorithms, including SVMSoft, SVMHard, RVM, and SVMPCVM, on most of the data sets under all three metrics, especially under AUC. Our results also reveal that SVMPCVM performs slightly better than SVMSoft, implying that the parameter optimization algorithm in PCVMs is superior to cross validation in terms of both performance and computational complexity. Finally, we discuss the advantages of the PCVM formulation using maximum a posteriori (MAP) analysis and margin analysis, which explain its empirical success.