Communications of the ACM
We study the phenomenon of cognitive learning from an algorithmic standpoint. How does the brain effectively learn concepts from a small number of examples despite the fact that each example contains a huge amount of information? We provide a novel algorithmic analysis via a model of robust concept learning (closely related to "margin classifiers"), and show that a relatively small number of examples are sufficient to learn rich concept classes. The new algorithms have several advantages: they are faster, conceptually simpler, and resistant to low levels of noise. For example, a robust half-space can be learned in linear time using only a constant number of training examples, regardless of the number of attributes. A general (algorithmic) consequence of the model, that "more robust concepts are easier to learn", is supported by a multitude of psychological studies.
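The claim about robust half-spaces can be illustrated with a small sketch. The idea is that a random Gaussian projection approximately preserves inner products (Johnson-Lindenstrauss), so a concept with a large margin in a high-dimensional attribute space survives projection to a much smaller space, where the classic perceptron learns it with a mistake bound that depends only on the margin, not on the original dimension. All parameters below (dimensions, sample size, margin) are hypothetical choices for the demonstration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: ell unit-norm examples in n dimensions, labeled by a
# robust half-space through the origin with margin `margin` (all assumed values).
n, ell, k, margin = 1000, 100, 200, 0.3

w = rng.normal(size=n)
w /= np.linalg.norm(w)
y = rng.choice([-1.0, 1.0], size=ell)

# Build each example as margin * y_i * w plus a unit component orthogonal
# to w, so that x_i . w = margin * y_i exactly and ||x_i|| = 1.
U = rng.normal(size=(ell, n))
U -= (U @ w)[:, None] * w
U /= np.linalg.norm(U, axis=1, keepdims=True)
X = margin * y[:, None] * w + np.sqrt(1 - margin**2) * U

# Random projection: a k x n Gaussian matrix scaled by 1/sqrt(k)
# approximately preserves inner products with high probability.
R = rng.normal(size=(k, n)) / np.sqrt(k)
Xp = X @ R.T

def perceptron(X, y, epochs=100):
    """Classic perceptron; converges if the data are linearly separable,
    with mistakes bounded by (radius / margin)^2, independent of n."""
    v = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ v) <= 0:
                v += yi * xi
                mistakes += 1
        if mistakes == 0:
            break
    return v

v = perceptron(Xp, y)
acc = np.mean(np.sign(Xp @ v) == y)
```

Because the projected margin stays close to 0.3 while each example keeps roughly unit length, the perceptron's work in the k-dimensional space is governed by the margin alone; the original dimension n enters only through the cost of applying the projection, which is how the linear-time bound in the abstract arises.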