Given a set of models and some training data, we would like to find the model that best describes the data. Finding the model with the lowest generalization error is a computationally expensive process, especially if the number of testing points is high or if the number of models is large. Optimization techniques such as hill climbing or genetic algorithms are helpful but can end up with a model that is arbitrarily worse than the best one, or cannot be used because there is no distance metric on the space of discrete models. In this paper we develop a technique called "racing" that tests the set of models in parallel, quickly discards those models that are clearly inferior, and concentrates the computational effort on differentiating among the better models. Racing is especially suitable for selecting among lazy learners, since training requires negligible expense and incremental testing using leave-one-out cross-validation is efficient. We use racing to select among various lazy learning algorithms and to find relevant features in applications ranging from robot juggling to lesion detection in MRI scans.
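The elimination step described above can be sketched with a Hoeffding-style confidence interval: after n test points, each surviving model's true error lies within eps of its empirical error with high probability, so any model whose lower bound exceeds the best model's upper bound can be discarded. The sketch below is a minimal illustration under simplifying assumptions (bounded per-point loss, a fixed per-step confidence delta rather than the union-bound correction a rigorous treatment would use); the function and variable names are illustrative, not from the paper.

```python
import math
import random

def hoeffding_race(models, examples, loss, delta=0.05, loss_bound=1.0):
    """Race candidate models in parallel over test points, discarding a
    model once its Hoeffding lower bound on mean loss exceeds the best
    model's upper bound. Returns the surviving models' empirical errors.

    Simplification: delta is applied per model per step, without the
    union-bound correction used in a rigorous analysis."""
    alive = {name: 0.0 for name in models}  # cumulative loss per model
    n = 0
    for example in examples:
        n += 1
        for name in alive:
            alive[name] += loss(models[name], example)
        # Hoeffding half-width: the true mean loss is within eps of the
        # empirical mean with probability >= 1 - delta.
        eps = loss_bound * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        best_upper = min(total / n for total in alive.values()) + eps
        alive = {name: total for name, total in alive.items()
                 if total / n - eps <= best_upper}
        if len(alive) == 1:
            break  # a clear winner: stop spending test points
    return {name: total / n for name, total in alive.items()}

# Illustrative usage: race three threshold classifiers on synthetic data
# where the true label is 1 iff x > 0.5 (all names here are hypothetical).
rng = random.Random(0)
examples = [(x, 1 if x > 0.5 else 0)
            for x in (rng.random() for _ in range(500))]
models = {
    "thresh05": lambda x: 1 if x > 0.5 else 0,  # matches the true rule
    "thresh07": lambda x: 1 if x > 0.7 else 0,  # ~20% error
    "always0": lambda x: 0,                     # ~50% error
}
zero_one = lambda model, ex: 0 if model(ex[0]) == ex[1] else 1
result = hoeffding_race(models, examples, zero_one)
```

Because the interval width shrinks as O(1/sqrt(n)), the obviously bad model is eliminated after a few dozen test points while the close contenders keep racing, which is exactly where the computational savings come from.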