Sampling is a popular way of scaling machine learning algorithms to large datasets; the key question is how many samples are needed. Adaptive stopping algorithms monitor performance online and can stop early, saving valuable computation. We consider problems where probabilistic guarantees are required and show how the recently introduced empirical Bernstein bounds can be used to design efficient stopping rules. We give upper bounds on the sample complexity of the new rules, together with empirical results on model selection and on boosting in the filtering setting.
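To make the idea concrete, below is a minimal sketch of an adaptive stopping rule built on an empirical Bernstein bound. It is an illustration under our own assumptions, not the paper's exact algorithm: it stops once the bound certifies the running mean to absolute precision `eps`, and it splits the failure probability `delta` across steps via `delta_t = delta / (t * (t + 1))` (a standard union-bound trick); the function and parameter names are invented for this sketch.

```python
import math
import random

def eb_stop(sample, delta=0.05, eps=0.05, value_range=1.0, max_n=1_000_000):
    """Estimate the mean of sample() (values assumed to lie in an interval of
    width value_range) to absolute precision eps, with probability >= 1 - delta,
    stopping adaptively via an empirical Bernstein confidence bound."""
    total = 0.0
    total_sq = 0.0
    t = 0
    while t < max_n:
        x = sample()
        t += 1
        total += x
        total_sq += x * x
        if t < 2:
            continue  # need at least two samples for an empirical variance
        mean = total / t
        var = max(0.0, total_sq / t - mean * mean)  # empirical (biased) variance
        # Per-step confidence: sum over t of delta / (t * (t + 1)) <= delta.
        log_term = math.log(3.0 / (delta / (t * (t + 1))))
        # Empirical Bernstein deviation bound: variance term + range term.
        bound = math.sqrt(2.0 * var * log_term / t) \
                + 3.0 * value_range * log_term / t
        if bound <= eps:  # mean is within eps of the true mean, w.h.p.
            return mean, t
    return total / t, t

# Usage: estimate the mean of a Bernoulli(0.5) stream.
rng = random.Random(0)
mean, n = eb_stop(lambda: float(rng.random() < 0.5))
```

Because the variance term shrinks like `sqrt(var / t)`, low-variance streams stop much earlier than the worst-case (Hoeffding-style) sample size would suggest, which is the practical appeal of empirical Bernstein stopping.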