Efficient SVM training using low-rank kernel representations. The Journal of Machine Learning Research.
RCV1: A New Benchmark Collection for Text Categorization Research. The Journal of Machine Learning Research.
Predictive low-rank decomposition for kernel methods. ICML '05: Proceedings of the 22nd International Conference on Machine Learning.
On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning. The Journal of Machine Learning Research.
The Pyramid Match Kernel: Efficient Learning with Sets of Features. The Journal of Machine Learning Research.
Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. Proceedings of the 24th International Conference on Machine Learning.
LabelMe: A Database and Web-Based Tool for Image Annotation. International Journal of Computer Vision.
LIBLINEAR: A Library for Large Linear Classification. The Journal of Machine Learning Research.
Support vector machines for histogram-based image classification. IEEE Transactions on Neural Networks.
Object Recognition by Sequential Figure-Ground Ranking. International Journal of Computer Vision.
ECCV'12: Proceedings of the 12th European Conference on Computer Vision, Volume Part II.
Probabilistic Joint Image Segmentation and Labeling by Figure-Ground Composition. International Journal of Computer Vision.
Approximations based on random Fourier features have recently emerged as an efficient and elegant methodology for designing large-scale kernel machines [4]. By expressing the kernel as a Fourier expansion, features can be generated from a finite set of random basis projections whose inner products are Monte Carlo approximations of the original kernel. However, the original Fourier features apply only to translation-invariant kernels and are not well suited to histogram features, which are always non-negative. This paper extends the concept of translation-invariance and the random Fourier feature methodology to arbitrary, locally compact Abelian groups. Based on empirical observations drawn from the exponentiated χ² kernel, the state of the art for histogram descriptors, we propose a new group, the skewed-multiplicative group, and design translation-invariant kernels on it. Experiments show that the proposed kernels outperform other kernels that can be similarly approximated. In a semantic segmentation experiment on the PASCAL VOC 2009 dataset, the approximation allows us to train large-scale learning machines more than two orders of magnitude faster than previous nonlinear SVMs.
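To make the random Fourier feature recipe concrete, the following is a minimal Python/NumPy sketch, not the authors' implementation. It assumes a skewed chi-squared kernel of the form k(x, y) = Π_i 2·sqrt((x_i+c)(y_i+c)) / (x_i + y_i + 2c), which becomes translation-invariant (a product of sech((u_i − v_i)/2) factors) after the log map u = log(x + c); frequencies are then drawn from the matching sech spectral density by inverse-CDF sampling, and inner products of the cosine features approximate the kernel in expectation. The function name and parameters (skewed_chi2_features, D, c) are illustrative, not from the paper.

```python
import numpy as np

def skewed_chi2_features(X, D, c=1.0, seed=None):
    """Sketch of random Fourier features for a skewed chi-squared kernel.

    X : (n, d) array of non-negative histogram features.
    D : number of random features.
    c : skewness offset (hypothetical default).
    Returns Z of shape (n, D) with Z(x) @ Z(y) ≈ k(x, y).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Inverse-CDF sampling from the spectral density p(w) = sech(pi * w).
    u = rng.uniform(size=(d, D))
    W = np.log(np.tan(np.pi / 2.0 * u)) / np.pi
    # Random phases, as in the standard random Fourier feature construction.
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    # Log map moves the kernel onto a translation-invariant (additive) group.
    Z = np.cos(np.log(X + c) @ W + b)
    return np.sqrt(2.0 / D) * Z
```

Under these assumptions, the explicit features Z can be fed directly to a linear SVM solver (e.g., LIBLINEAR or Pegasos from the references above), which is what yields the reported speedups over training a nonlinear SVM on the exact kernel matrix.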