We study the phenomenon of cognitive learning from an algorithmic standpoint. How does the brain effectively learn concepts from a small number of examples, despite the fact that each example contains a huge amount of information? We provide a novel analysis for a model of robust concept learning (closely related to "margin classifiers"), and show that a relatively small number of examples suffices to learn rich concept classes (including threshold functions, Boolean formulae, and polynomial surfaces). As a result, we obtain simple, intuitive proofs of the generalization bounds for Support Vector Machines. In addition, the new algorithms have several advantages: they are faster, conceptually simpler, and highly resistant to noise. For example, a robust half-space can be PAC-learned in linear time using only a constant number of training examples, regardless of the number of attributes. A general (algorithmic) consequence of the model, that "more robust concepts are easier to learn," is supported by a multitude of psychological studies.
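The core mechanism behind results of this kind can be illustrated concretely: if a half-space separates the data with margin ℓ, a random projection to roughly O(log m / ℓ²) dimensions approximately preserves all margins (by the Johnson-Lindenstrauss property), after which a standard perceptron converges in O(1/ℓ²) updates, independent of the ambient dimension. The following is a minimal sketch of that pipeline on synthetic data, not the paper's actual algorithm; the dimensions, margin value, and data construction are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data separated by a robust (large-margin) half-space in high dimension.
n, m, margin = 10_000, 200, 0.2            # ambient dim, examples, target margin
w_true = rng.normal(size=n)
w_true /= np.linalg.norm(w_true)
X = rng.normal(size=(m, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.where(X @ w_true >= 0, 1, -1)
# Push each point away from the boundary so every example has margin >= ~margin.
X += np.outer(y * margin, w_true)
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Random projection: k = O(log m / margin^2) dimensions approximately
# preserve all inner products with w_true, hence the margins.
k = int(np.ceil(np.log(m) / margin**2))
R = rng.normal(size=(n, k)) / np.sqrt(k)
Xp = X @ R                                 # (m, k) with k << n

# Perceptron in the projected space; on margin-separated data it makes
# O(1/margin^2) mistakes, regardless of the original number of attributes n.
w = np.zeros(k)
for _ in range(100):
    mistakes = 0
    for xi, yi in zip(Xp, y):
        if yi * (w @ xi) <= 0:             # misclassified: update
            w += yi * xi
            mistakes += 1
    if mistakes == 0:
        break

train_acc = np.mean(np.sign(Xp @ w) == y)
```

Note that the expensive objects (the n-dimensional examples) are touched only once, during projection; all learning happens in k ≈ log m / ℓ² dimensions, which is the sense in which robustness (a large margin ℓ) makes the concept easier to learn.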