Combining information from multiple heterogeneous data sources can aid protein-protein interaction (PPI) prediction. This information can be arranged into a feature vector for classification. However, missing values in the data can degrade prediction accuracy. Boosting has emerged as a powerful tool for feature selection and classification. Bayesian methods have traditionally been used to cope with missing data, with boosting applied to the outputs of Bayesian classifiers. We explore a variation of AdaBoost that handles missing values within the boosting algorithm itself, without any density estimation step. Experiments on a publicly available PPI dataset suggest that this simpler and mathematically coherent approach may be more accurate.
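The abstract does not spell out the mechanism, but one natural reading, in the spirit of confidence-rated boosting, is to let each weak learner abstain (output 0) on examples whose feature is missing, so a missing value neither casts a vote nor changes that example's weight. The sketch below illustrates such an abstaining-stump AdaBoost under that assumption; the NaN encoding of missing values, the function names, and the exhaustive stump search are all hypothetical choices for illustration, not the authors' implementation.

import numpy as np

def fit_abstaining_adaboost(X, y, n_rounds=50):
    # Hypothetical sketch: confidence-rated AdaBoost in which a decision
    # stump abstains (outputs 0) whenever its feature is missing (np.nan),
    # so missing values never force a vote or a weight update.
    # X: (n, d) array with np.nan for missing entries; y: labels in {-1, +1}.
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # example weights, uniform at the start
    ensemble = []                      # tuples (feature, threshold, polarity, alpha)

    def stump_outputs(j, t, s):
        h = np.zeros(n)                # abstain (0) by default
        seen = ~np.isnan(X[:, j])
        h[seen] = s * np.where(X[seen, j] <= t, 1.0, -1.0)
        return h

    for _ in range(n_rounds):
        best, best_r = None, 0.0
        for j in range(d):             # exhaustive search over stumps
            for t in np.unique(X[~np.isnan(X[:, j]), j]):
                for s in (1.0, -1.0):
                    r = np.sum(w * y * stump_outputs(j, t, s))  # weighted edge
                    if best is None or abs(r) > abs(best_r):
                        best, best_r = (j, t, s), r
        j, t, s = best
        r = np.clip(best_r, -1 + 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 + r) / (1 - r))   # confidence-rated weight
        ensemble.append((j, t, s, alpha))
        # Reweight: abstaining examples (h == 0) keep their weight unchanged.
        w *= np.exp(-alpha * y * stump_outputs(j, t, s))
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    score = np.zeros(X.shape[0])
    for j, t, s, alpha in ensemble:
        seen = ~np.isnan(X[:, j])
        h = np.zeros(X.shape[0])
        h[seen] = s * np.where(X[seen, j] <= t, 1.0, -1.0)
        score += alpha * h
    return np.sign(score)

Under this reading, an example whose informative features are missing keeps its weight until a round actually observes it, which is one sense in which missing values can be handled at the level of the boosting algorithm itself rather than imputed beforehand.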