In this paper we propose a Gaussian-kernel-based online kernel density estimator that can be used for online probability density estimation and online learning. Our approach builds a Gaussian mixture model of the observed data and supports online adaptation from both positive and negative examples. Adaptation from negative examples is realized through a novel concept of unlearning in mixture models. The complexity of the mixture is kept low by a novel compression algorithm. In contrast to existing approaches, ours requires no fine-tuning of parameters for a specific application, assumes no specific form of the target distribution, and places no temporal constraints on the observed data. The strength of the proposed approach is demonstrated on examples of online estimation of complex distributions, an example of unlearning, and interactive learning of basic visual concepts.
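To make the core idea concrete, the following is a minimal 1-D sketch of online kernel density estimation with budget-driven mixture compression: every observation adds one Gaussian kernel, and once the component count exceeds a budget, the two closest components are merged into a single moment-matched Gaussian. This is an illustrative stand-in with simplifying assumptions (fixed bandwidth, 1-D data, nearest-mean merging), not the paper's actual algorithm; the class name `OnlineKDE` and all parameters are hypothetical.

```python
import math

class OnlineKDE:
    """Simplified 1-D online KDE: each sample adds a Gaussian
    component; a moment-preserving merge step keeps the mixture
    under a fixed component budget."""

    def __init__(self, bandwidth=0.3, max_components=8):
        self.h2 = bandwidth ** 2          # fixed kernel variance (assumption)
        self.max_components = max_components
        self.comps = []                   # list of (weight, mean, variance)
        self.n = 0                        # number of samples seen

    def observe(self, x):
        self.n += 1
        # Down-weight existing components, then add the new kernel
        # so that all weights still sum to 1.
        self.comps = [((self.n - 1) / self.n * w, m, v)
                      for w, m, v in self.comps]
        self.comps.append((1.0 / self.n, x, self.h2))
        while len(self.comps) > self.max_components:
            self._merge_closest()

    def _merge_closest(self):
        # Find the pair of components with the closest means and
        # replace them by a single moment-matched Gaussian.
        i, j = min(
            ((a, b) for a in range(len(self.comps))
             for b in range(a + 1, len(self.comps))),
            key=lambda p: abs(self.comps[p[0]][1] - self.comps[p[1]][1]),
        )
        (w1, m1, v1), (w2, m2, v2) = self.comps[i], self.comps[j]
        w = w1 + w2
        m = (w1 * m1 + w2 * m2) / w
        # Merged variance preserves the pair's second moment.
        v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
        self.comps = [c for k, c in enumerate(self.comps) if k not in (i, j)]
        self.comps.append((w, m, v))

    def pdf(self, x):
        return sum(
            w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
            for w, m, v in self.comps
        )

kde = OnlineKDE()
for s in [0.0, 0.1, -0.1, 2.0, 2.1, 1.9, 0.05, 2.05, 0.0, 2.0]:
    kde.observe(s)
# Density should be higher near the two sample clusters than between them.
print(kde.pdf(0.0) > kde.pdf(1.0) and kde.pdf(2.0) > kde.pdf(1.0))
```

The merge step is what keeps the model's complexity bounded as data streams in; a richer compression criterion (as proposed in the paper) would also control the approximation error introduced by each merge. Unlearning from negative examples would additionally require subtracting weighted components, which this sketch omits.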