We have found one reason why AdaBoost tends not to perform well on gene expression data, and we have identified simple modifications that improve its ability to find accurate class prediction rules. These modifications appear to be needed especially when there is a strong association between expression profiles and class designations. Cross-validation analysis of six microarray datasets with different characteristics suggests that, suitably modified, boosting provides competitive classification accuracy in general.

Sometimes the goal of a microarray analysis is to find a class prediction rule that is not only accurate but also depends on the expression levels of only a few genes. Because boosting seeks out genes that are complementary sources of evidence for the correct classification of a tissue sample, it appears especially useful for such gene-efficient class prediction. This is particularly true when there is a strong association between expression profiles and class designations, as is often the case, for example, when comparing tumor and normal samples.
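The gene-efficient behavior described above can be illustrated with a minimal sketch of AdaBoost over one-level decision trees (stumps): each round selects a single gene (feature) and threshold, so the number of distinct genes in the final rule is bounded by the number of rounds. The toy data, round count, and helper names below are illustrative assumptions, not the authors' implementation or modifications.

```python
import numpy as np

def train_stump(X, y, w):
    """Return the single-feature threshold rule (feature, threshold,
    polarity, weighted error) minimizing weighted 0/1 error."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def adaboost(X, y, rounds=5):
    """AdaBoost with stumps; y in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        j, t, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)                      # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)             # upweight misclassified samples
        w /= w.sum()
        model.append((j, t, pol, alpha))
    return model

def predict(model, X):
    score = np.zeros(len(X))
    for j, t, pol, alpha in model:
        score += alpha * np.where(pol * (X[:, j] - t) > 0, 1, -1)
    return np.sign(score)

# Toy "expression matrix": 6 samples, one informative feature.
X = np.array([[0.1], [0.2], [0.3], [0.8], [0.9], [1.0]])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost(X, y, rounds=3)
genes_used = {j for j, _, _, _ in model}           # distinct genes in the rule
```

Counting `genes_used` after training is one way to quantify gene efficiency: an accurate ensemble that touches few distinct features corresponds to a prediction rule depending on few genes.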