Gaining degrees of freedom in subsymbolic learning
Theoretical Computer Science
We deal with a special class of games against nature corresponding to subsymbolic learning problems in which we know a local descent direction in the error landscape but not the amount gained at each step of the learning procedure. Namely, Alice and Bob play a game where each player's probability of victory grows monotonically, by unknown amounts, with the resources he or she employs. For a fixed effort on Alice's part, Bob increases his resources on the basis of the outcomes of the individual contests (victory, tie, or defeat). Quite unlike the usual aims in game theory, his goal is to stop as soon as the defeat probability falls below a given threshold with high confidence. We adopt this game policy as an archetypal remedy to the general overtraining threat of learning algorithms: we recast the original game in a computational learning framework analogous to the Probably Approximately Correct formulation. There, a careful use of a special inferential mechanism (known as the twisting argument) highlights statistics that are relevant for managing different trade-offs between observability and controllability of the defeat probability. With similar statistics we discuss an analogous trade-off underlying the stopping criterion of subsymbolic learning procedures. In conclusion, we propose a principled stopping rule based solely on the behavior of the training session, hence without diverting examples into a test set.
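The game policy sketched in the abstract can be illustrated with a toy simulation. Everything below is an assumption made for illustration, not the paper's construction: `defeat_prob` is a hypothetical monotone model of the contest (the paper treats its increments as unknown), and the stopping test uses a standard Hoeffding confidence bound in place of the twisting argument developed in the paper.

```python
import math
import random

def defeat_prob(resources, alice_effort=5.0):
    # Hypothetical monotone model of the contest: Bob's defeat probability
    # shrinks as his resources grow, by amounts he never observes directly.
    return math.exp(-resources / alice_effort)

def play_until_confident(threshold=0.1, delta=0.05, step=1.0, seed=0):
    """Toy sketch of the game policy: after each defeat Bob commits more
    resources; he stops as soon as a Hoeffding upper bound on the defeat
    probability falls below `threshold` with confidence 1 - delta.
    Because each resource increase changes the contest distribution, the
    defeat-free streak is restarted after every defeat, so the i.i.d.
    assumption behind the bound holds for the current resource level."""
    rng = random.Random(seed)
    resources, streak = 0.0, 0
    while True:
        if rng.random() < defeat_prob(resources):
            resources += step   # defeat: escalate and restart the count
            streak = 0
            continue
        streak += 1
        # Hoeffding bound: after `streak` defeat-free contests, the true
        # defeat probability exceeds `upper` with probability at most delta.
        upper = math.sqrt(math.log(1.0 / delta) / (2.0 * streak))
        if upper < threshold:
            return resources, streak, upper
```

For example, `play_until_confident(threshold=0.1, delta=0.05)` keeps escalating until 150 consecutive defeat-free contests are observed, the smallest streak for which the bound drops below 0.1. The analogy to early stopping is that the decision uses only statistics of the training session itself, with no held-out test set.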