A new polynomial-time algorithm for linear programming
Combinatorica
The design and analysis of coalesced hashing
Application of a general learning algorithm to the control of robotic manipulators
International Journal of Robotics Research
Automatic Pattern Recognition: A Study of the Probability of Error
IEEE Transactions on Pattern Analysis and Machine Intelligence
Fast learning in multi-resolution hierarchies
Advances in neural information processing systems 1
Equivalence of models for polynomial learnability
COLT '88 Proceedings of the first annual workshop on Computational learning theory
Planning and control
Vector quantization and signal compression
ε-approximations with minimum packing constraint violation (extended abstract)
STOC '92 Proceedings of the twenty-fourth annual ACM symposium on Theory of computing
Brains, Behavior and Robotics
Computers and Intractability: A Guide to the Theory of NP-Completeness
Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering
A Theory of Networks for Approximation and Learning
A generalization of Sauer's lemma
Variable rate vector quantization of images
Estimation of Dependences Based on Empirical Data: Springer Series in Statistics (Springer Series in Statistics)
Analysis of greedy expert hiring and an application to memory-based learning (extended abstract)
COLT '96 Proceedings of the ninth annual conference on Computational learning theory
A memory-based learning system is an extended memory-management system that decomposes the input space, either statically or dynamically, into subregions for the purpose of storing and retrieving functional information. The main generalization techniques employed by memory-based learning systems are nearest-neighbor search, space-decomposition techniques, and clustering. Research on memory-based learning is still at an early stage; in particular, there are very few rigorous theoretical results on memory requirements, sample size, expected performance, and computational complexity. In this paper, we propose a model for memory-based learning and use it to analyze several methods (ε-covering, hashing, clustering, tree-structured clustering, and receptive fields) for learning smooth functions. The sample size and system complexity are derived for each method. Our model is built upon the generalized PAC learning model of Haussler and is closely related to the method of vector quantization in data compression. Our main result is that memory-based learning systems built with the new clustering algorithms of [LiVb] can PAC-learn in polynomial time using only polynomial storage in typical situations.
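The abstract's combination of ε-covering with nearest-neighbor retrieval can be illustrated with a minimal sketch (not the paper's algorithm; the class name, `eps` parameter, and storage policy are illustrative assumptions): store a training point only when no stored prototype already lies within ε of it, so memory grows like the size of an ε-cover of the input region, and answer queries by nearest-neighbor lookup over the stored prototypes.

```python
import math


class EpsilonCoverLearner:
    """Hypothetical sketch of a memory-based learner: the stored
    prototypes form an eps-cover of the observed inputs, and queries
    generalize by nearest-neighbor search over that memory."""

    def __init__(self, eps):
        self.eps = eps
        self.memory = []  # list of (x, y) pairs, x a tuple of floats

    @staticmethod
    def _dist(a, b):
        return math.dist(a, b)

    def learn(self, x, y):
        # Store (x, y) only if x is not already covered by a stored
        # prototype within distance eps; this bounds memory by the
        # eps-covering number of the input region.
        if all(self._dist(x, xs) > self.eps for xs, _ in self.memory):
            self.memory.append((x, y))

    def predict(self, x):
        # Nearest-neighbor generalization over stored prototypes.
        _, y = min(self.memory, key=lambda p: self._dist(x, p[0]))
        return y


# Learn a smooth function f(x) = x^2 on [0, 1] from a dense sample;
# only O(1/eps) prototypes are retained.
learner = EpsilonCoverLearner(eps=0.1)
for i in range(101):
    x = i / 100
    learner.learn((x,), x * x)
```

For a Lipschitz-smooth target, the prediction error of such a scheme is controlled by ε times the Lipschitz constant, which is the intuition behind the sample-size and storage bounds the paper derives for the ε-covering method.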