We present how to perform exact large-scale multi-class Gaussian process classification with parameterized histogram intersection kernels. In contrast to previous approaches, we use a full Bayesian model without any sparse approximation techniques, which allows for learning in sub-quadratic time and classification in constant time. To handle the additional model flexibility induced by parameterized kernels, our approach optimizes the kernel parameters on large-scale training data. A key ingredient of this optimization is a new efficient upper bound on the negative Gaussian process log-likelihood. Experiments on image categorization tasks show substantial performance gains from the flexible kernels, with learning completed within a few minutes and classification performed in microseconds on databases where exact Gaussian process inference was not possible before.
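To make the model in the abstract concrete, the following is a minimal sketch of multi-class Gaussian process label regression with a parameterized histogram intersection kernel. It is not the authors' algorithm: the function names (hik, gp_fit, gp_predict), the per-dimension exponent parameter beta, and the one-vs-all label regression setup are illustrative assumptions, and the sketch uses a naive O(n^2) kernel and O(n^3) Cholesky solve rather than the paper's sub-quadratic learning, constant-time classification, or the log-likelihood upper bound used for hyperparameter optimization.

```python
# Sketch only: naive GP label regression with a parameterized histogram
# intersection kernel (HIK). All names and the beta parameterization are
# assumptions for illustration, not the paper's implementation.
import numpy as np

def hik(X1, X2, beta=1.0):
    """Parameterized HIK: K(x, x') = sum_d min(x_d^beta, x'_d^beta)."""
    A = X1[:, None, :] ** beta   # shape (n1, 1, D)
    B = X2[None, :, :] ** beta   # shape (1, n2, D)
    return np.minimum(A, B).sum(axis=2)

def gp_fit(X, y, beta=1.0, noise=0.1):
    """Solve (K + sigma^2 I) alpha = Y for one-vs-all labels Y (one column per class)."""
    n, C = len(X), int(y.max()) + 1
    Y = -np.ones((n, C))
    Y[np.arange(n), y] = 1.0                      # one-vs-all regression targets
    K = hik(X, X, beta) + noise**2 * np.eye(n)    # regularized Gram matrix
    L = np.linalg.cholesky(K)                     # O(n^3); the paper avoids this cost
    return np.linalg.solve(L.T, np.linalg.solve(L, Y))

def gp_predict(X_train, alpha, X_test, beta=1.0):
    """Predictive means per class; the argmax is the multi-class decision."""
    return np.argmax(hik(X_test, X_train, beta) @ alpha, axis=1)

# Usage with random normalized histograms as stand-in features.
rng = np.random.default_rng(0)
X = rng.random((200, 32))
X /= X.sum(axis=1, keepdims=True)
y = rng.integers(0, 3, size=200)
alpha = gp_fit(X, y, beta=0.8)
print(gp_predict(X, alpha, X[:5], beta=0.8))
```

In this sketch the kernel parameter beta is simply fixed; the abstract's point is that such parameters can instead be optimized on large training sets by minimizing an efficient upper bound on the negative Gaussian process log-likelihood.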