Fourier kernel learning

  • Authors:
  • Eduard Gabriel Băzăvan;Fuxin Li;Cristian Sminchisescu

  • Affiliations:
  • Institute of Mathematics of the Romanian Academy, Romania; College of Computing, Georgia Institute of Technology; Faculty of Mathematics and Natural Science, University of Bonn, Germany and Institute of Mathematics of the Romanian Academy, Romania

  • Venue:
  • ECCV'12 Proceedings of the 12th European Conference on Computer Vision - Volume Part II
  • Year:
  • 2012

Abstract

Approximations based on random Fourier embeddings have recently emerged as an efficient and formally consistent methodology to design large-scale kernel machines [23]. By expressing the kernel as a Fourier expansion, features are generated from a finite set of random basis projections, sampled from the Fourier transform of the kernel, whose inner products are Monte Carlo approximations of the original non-linear model. Based on the observation that different kernel-induced Fourier sampling distributions correspond to different kernel parameters, we show that a scalable optimization process in the Fourier domain can be used to identify the frequency bands that are useful for prediction on training data. This approach allows us to design a family of linear prediction models in which the kernel hyper-parameters and the weights of the feature vectors are learned jointly. Under this methodology, we recover efficient and scalable linear reformulations for both single and multiple kernel learning. Experiments show that our linear models produce fast and accurate predictors on complex datasets such as the Visual Object Challenge 2011 and ImageNet ILSVRC 2011.
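
As a rough illustration of the random Fourier embedding the abstract builds on (the standard construction of [23], not the paper's learning procedure), the sketch below maps data to random cosine features for a Gaussian kernel and checks that their inner products approximate the exact kernel value. The function name `random_fourier_features` and the parameters `sigma` and `n_features` are illustrative choices, not the authors' code.

```python
import numpy as np

def random_fourier_features(X, n_features=500, sigma=1.0, seed=None):
    """Map X (n_samples x d) to random Fourier features whose inner products
    approximate the Gaussian kernel exp(-||x - y||^2 / (2 * sigma^2)).
    Frequencies W are sampled from the kernel's Fourier transform, which for
    the Gaussian kernel is itself Gaussian with standard deviation 1/sigma."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Monte Carlo check: feature inner product vs. the exact kernel value.
rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
sigma = 1.5
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
Z = random_fourier_features(np.vstack([x, y]), n_features=20000, sigma=sigma, seed=1)
print(exact, Z[0] @ Z[1])  # the two values should be close
```

In this construction, rescaling the sampling distribution of `W` is equivalent to changing the kernel bandwidth `sigma`; this is the correspondence between Fourier sampling distributions and kernel parameters that the abstract refers to, which the paper exploits to learn kernel hyper-parameters and linear prediction weights jointly.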