We present a supervised learning algorithm for estimating generic input-output relations in a real-time, online fashion. The proposed method is based on a generalized expectation-maximization approach that fits an infinite mixture of linear experts (IMLE) to an online stream of data samples. This probabilistic model, while not fully Bayesian, can efficiently choose the number of experts allocated to the mixture, thereby controlling the complexity of the resulting model. The result is an incremental, online, and localized learning algorithm that performs nonlinear regression on multivariate inputs and outputs, approximating the target function by a linear relation within each expert's input domain and allocating new experts as needed. A distinctive feature of the proposed method is its ability to learn multivalued functions: one-to-many mappings that naturally arise in some robotic and computer vision learning domains. This is achieved through a Bayesian generative model for the predictions provided by each of the mixture's experts; as a consequence, the method can directly provide both forward and inverse relations from the same learned mixture model. We conduct an extensive set of experiments to evaluate the performance of the proposed algorithm. The results show that it outperforms state-of-the-art online function approximation algorithms in single-valued regression, while demonstrating good estimation capabilities in a multivalued function approximation context.
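To make the mixture-of-linear-experts idea concrete, the sketch below illustrates only the single-valued prediction step: each expert holds an input center, an activation width, and a local linear map, and the prediction is a responsibility-weighted blend of the experts' linear outputs. This is a minimal, hypothetical illustration, not the IMLE algorithm itself; the actual method learns all of these parameters online via generalized EM, allocates new experts on the fly, and handles multivalued prediction through a Bayesian generative model, none of which is shown here. All names (`predict`, `centers`, `widths`, `A`, `b`) are illustrative.

```python
import numpy as np

def predict(x, centers, widths, A, b):
    """Blend local linear predictions y ~ A_j x + b_j, weighted by
    Gaussian activations around each expert's input center.

    Hypothetical sketch: isotropic Gaussian activations stand in for
    the expert responsibilities a full mixture model would compute.
    """
    # Unnormalized responsibility of each expert for input x.
    w = np.array([np.exp(-np.sum((x - mu) ** 2) / (2.0 * s ** 2))
                  for mu, s in zip(centers, widths)])
    w /= w.sum()
    # Each expert's local linear prediction.
    preds = np.array([Aj @ x + bj for Aj, bj in zip(A, b)])
    # Convex combination of the experts' outputs.
    return w @ preds
```

Near an expert's center the blend reduces to that expert's local line, so the global approximation is nonlinear even though each expert is linear.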