The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic that selects points to minimize the posterior entropy. This paper extends the IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we apply the IVM to a block-diagonal covariance matrix to enable “learning to learn” from related tasks. Third, we modify the IVM to incorporate prior knowledge from known invariances. All three extensions are tested on artificial and real data.
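To make the selection heuristic concrete, here is a minimal NumPy sketch of IVM-style greedy point selection for GP regression, assuming a squared-exponential kernel and a homoscedastic Gaussian noise model. The names `rbf_kernel` and `ivm_select` are illustrative, not from the paper, and the sketch covers only the base IVM, not the three extensions described above.

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between row sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def ivm_select(X, y, d, noise_var=0.1):
    """Greedy IVM-style selection of d active points for GP regression.

    Assumed density filtering with Gaussian noise: including point i
    reduces the posterior entropy by 0.5 * log(1 + sig_i / noise_var),
    which is monotone in the posterior variance sig_i, so each step
    simply picks the candidate with the largest remaining variance.
    Returns active-set indices plus posterior mean/variance at all points.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X)
    mu = np.zeros(n)              # posterior mean at every point
    sig = np.diag(K).copy()       # posterior variance at every point
    M = np.zeros((d, n))          # rows: rank-one update vectors
    active, candidates = [], set(range(n))

    for k in range(d):
        i = max(candidates, key=lambda j: sig[j])   # max entropy reduction
        nu = 1.0 / (noise_var + sig[i])             # ADF precision for point i
        g = nu * (y[i] - mu[i])                     # ADF mean-update coefficient
        s = K[:, i] - M[:k].T @ M[:k, i]            # column i of current posterior cov
        mu = mu + g * s                             # rank-one mean update
        sig = sig - nu * s**2                       # rank-one variance update
        M[k] = np.sqrt(nu) * s
        active.append(i)
        candidates.discard(i)
    return np.array(active), mu, sig

# Hypothetical usage on synthetic 1-D regression data.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
active, mu, sig = ivm_select(X, y, d=20, noise_var=0.01)
```

Under this reading, the multi-task extension in the abstract amounts to running the same procedure with K replaced by a block-diagonal matrix whose blocks are per-task kernels, so the entropy heuristic allocates active points across the related tasks.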