The Strength of Weak Learnability. Machine Learning.
From on-line to batch learning. COLT '89 Proceedings of the second annual workshop on Computational learning theory.
A training algorithm for optimal margin classifiers. COLT '92 Proceedings of the fifth annual workshop on Computational learning theory.
Machine Learning.
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences - Special issue: 26th annual ACM symposium on the theory of computing & STOC'94, May 23–25, 1994, and second annual European conference on computational learning theory (EuroCOLT'95), March 13–15, 1995.
Approximate nearest neighbors: towards removing the curse of dimensionality. STOC '98 Proceedings of the thirtieth annual ACM symposium on Theory of computing.
Generalization performance of support vector machines and other pattern classifiers. Advances in kernel methods.
Clustering for edge-cost minimization (extended abstract). STOC '00 Proceedings of the thirty-second annual ACM symposium on Theory of computing.
Large Margin Classification Using the Perceptron Algorithm. Machine Learning - The Eleventh Annual Conference on Computational Learning Theory.
Efficient Search for Approximate Nearest Neighbor in High Dimensional Spaces. SIAM Journal on Computing.
An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures & Algorithms.
Random Projection: A New Approach to VLSI Layout. FOCS '98 Proceedings of the 39th Annual Symposium on Foundations of Computer Science.
An Algorithmic Theory of Learning: Robust Concepts and Random Projection. FOCS '99 Proceedings of the 40th Annual Symposium on Foundations of Computer Science.
Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences - Special issue on PODS 2001.
Experiments with random projections for machine learning. Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining.
Perceptrons: An Introduction to Computational Geometry.
Experiments with random projection. UAI'00 Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence.
A PAC-Style model for learning from labeled and unlabeled data. COLT'05 Proceedings of the 18th annual conference on Learning Theory.
Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory.
An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks.
Very high accuracy and fast dependency parsing is not a contradiction. COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics.
Informed ways of improving data-driven dependency parsing for German. COLING '10 Proceedings of the 23rd International Conference on Computational Linguistics: Posters.
Approximate convex hulls family for one-class classification. MCS'11 Proceedings of the 10th international conference on Multiple classifier systems.
Spam detection using Random Boost. Pattern Recognition Letters.
Reference point transformation for visualisation. AusDM '09 Proceedings of the Eighth Australasian Data Mining Conference - Volume 101.
Approximate polytope ensemble for one-class classification. Pattern Recognition.
Algorithms and hardness results for parallel large margin learning. The Journal of Machine Learning Research.
Random projection is a simple technique that has found a number of applications in algorithm design. In the context of machine learning, it can provide insight into questions such as “why is a learning problem easier if the data is separable by a large margin?” and “in what sense is choosing a kernel much like choosing a set of features?” This talk is intended to provide an introduction to random projection and to survey some simple learning algorithms and other applications to learning based on it. I will also discuss how, given a kernel as a black-box function, we can use various forms of random projection to extract an explicit small feature space that captures much of what the kernel is doing. This talk is based in large part on joint work with Nina Balcan and Santosh Vempala [BB05, BBV04].
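As a small illustration of the technique the abstract refers to (a sketch of my own, not code from the talk): projecting points through a random Gaussian matrix scaled by 1/sqrt(k) approximately preserves all pairwise Euclidean distances, in the spirit of the Johnson-Lindenstrauss lemma.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20, 1000, 400          # n points in d dimensions, projected down to k

# Original high-dimensional data (arbitrary example points).
X = rng.normal(size=(n, d))

# Random projection matrix with i.i.d. N(0, 1/k) entries, so that
# E[||R x||^2] = ||x||^2 for every fixed vector x.
R = rng.normal(scale=1.0 / np.sqrt(k), size=(k, d))
Y = X @ R.T                      # projected data, shape (n, k)

def max_distortion(A, B):
    """Worst relative change in pairwise distance between A and its image B."""
    worst = 0.0
    for i in range(len(A)):
        for j in range(i + 1, len(A)):
            orig = np.linalg.norm(A[i] - A[j])
            proj = np.linalg.norm(B[i] - B[j])
            worst = max(worst, abs(proj / orig - 1.0))
    return worst

print(max_distortion(X, Y))      # small: distances are roughly preserved
```

With k = 400 the worst-case distortion over all pairs is typically only a few percent, even though the dimension dropped from 1000 to 400; this is the kind of dimension reduction that underlies the margin and kernel results surveyed in the talk.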