Random projection, margins, kernels, and feature-selection

  • Author: Avrim Blum
  • Affiliation: Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA
  • Venue: SLSFS'05, Proceedings of the 2005 International Conference on Subspace, Latent Structure and Feature Selection
  • Year: 2005


Abstract

Random projection is a simple technique that has had a number of applications in algorithm design. In the context of machine learning, it can provide insight into questions such as “why is a learning problem easier if data is separable by a large margin?” and “in what sense is choosing a kernel much like choosing a set of features?” This talk is intended to provide an introduction to random projection and to survey some simple learning algorithms and other applications of it to learning. I will also discuss how, given a kernel as a black-box function, we can use various forms of random projection to extract an explicit small feature space that captures much of what the kernel is doing. This talk is based in large part on joint work with Nina Balcan and Santosh Vempala [BB05, BBV04].
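
To make the two ideas in the abstract concrete, the following is a minimal Python sketch (the function names, the Gaussian kernel, and the dimensions are illustrative choices, not the exact constructions analyzed in [BB05, BBV04]): a Johnson-Lindenstrauss-style random projection that approximately preserves lengths and margins, and a mapping of each example x to the vector of kernel values (K(x, u_1), ..., K(x, u_d)) against a small sample of points, which yields an explicit low-dimensional feature space from a black-box kernel.

    import numpy as np

    # Johnson-Lindenstrauss-style random projection: multiply by a random
    # Gaussian matrix scaled so squared lengths (and hence margins) are
    # preserved in expectation.
    def random_project(X, d, seed=0):
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        A = rng.standard_normal((n_features, d)) / np.sqrt(d)
        return X @ A

    # Turning a black-box kernel into explicit features: given sample points
    # u_1..u_d, map each example x to (K(x, u_1), ..., K(x, u_d)).
    def kernel_features(K, X, landmarks):
        return np.array([[K(x, u) for u in landmarks] for x in X])

    # Tiny usage example with a Gaussian (RBF) kernel.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((100, 50))        # 100 examples, 50 dims
        X_low = random_project(X, d=10)           # projected to 10 dims
        rbf = lambda x, u: np.exp(-np.dot(x - u, x - u))
        landmarks = X[rng.choice(len(X), size=10, replace=False)]
        Phi = kernel_features(rbf, X, landmarks)  # explicit 10-dim feature map
        print(X_low.shape, Phi.shape)

In this sketch the projection dimension d and the number of landmark points are fixed at 10 purely for illustration; the point of the margin and kernel results surveyed in the talk is that a dimension depending on the margin (rather than on the original dimensionality) suffices.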