We present probabilistic models suitable for class conditional density estimation that can be regarded as shared kernel models, where sharing means that each kernel may contribute to the estimation of the conditional densities of all classes. We first propose a model that constitutes an adaptation of the classical radial basis function (RBF) network (with full sharing of kernels among classes) in which the outputs represent class conditional densities. At the opposite extreme lies the separate mixtures model, where the density of each class is estimated using a separate mixture density (no sharing of kernels among classes). We then present a general model that allows intermediate cases to be expressed: the degree of kernel sharing is specified through an additional model parameter, and the general model encompasses both of the above models as special cases. In all proposed models, training is treated as a maximum likelihood problem, and expectation-maximization (EM) algorithms are derived for adjusting the model parameters.
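To make the fully shared variant concrete, the following is a minimal NumPy sketch of EM for the RBF-style model with all kernels shared among classes; it is not code from the paper. It assumes isotropic Gaussian kernels and integer class labels 0..K-1, and every name (gaussian, em_shared_kernels, etc.) is illustrative. Each class density is p(x|k) = sum_j pi[k, j] N(x; mu[j], var[j]): the kernel parameters mu and var are common to all classes, while only the mixing weights pi[k, j] are class specific.

import numpy as np

def gaussian(X, mu, var):
    # Isotropic Gaussian density N(x; mu, var * I), evaluated for all rows of X.
    d = X.shape[1]
    sq = np.sum((X - mu) ** 2, axis=1)
    return np.exp(-0.5 * sq / var) / (2 * np.pi * var) ** (d / 2)

def em_shared_kernels(X, y, n_kernels, n_iter=100, seed=0):
    # EM for p(x|k) = sum_j pi[k, j] * N(x; mu[j], var[j]) with kernels
    # fully shared among classes (illustrative sketch, not the paper's code).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes = np.unique(y)
    K, M = len(classes), n_kernels

    # Initialise kernel centres at random data points, variances at data scale.
    mu = X[rng.choice(n, M, replace=False)]
    var = np.full(M, X.var())
    pi = np.full((K, M), 1.0 / M)  # class-specific mixing weights

    for _ in range(n_iter):
        # E-step: responsibility of kernel j for point i, computed with the
        # mixing weights of that point's own class.
        phi = np.stack([gaussian(X, mu[j], var[j]) for j in range(M)], axis=1)
        weighted = pi[y] * phi
        resp = weighted / weighted.sum(axis=1, keepdims=True)

        # M-step: shared kernel parameters pool responsibilities over all
        # classes; mixing weights renormalise within each class.
        nj = resp.sum(axis=0)
        mu = (resp.T @ X) / nj[:, None]
        for j in range(M):
            sq = np.sum((X - mu[j]) ** 2, axis=1)
            var[j] = np.sum(resp[:, j] * sq) / (nj[j] * d)
        for k in classes:
            mask = y == k
            pi[k] = resp[mask].sum(axis=0) / mask.sum()

    return mu, var, pi

Classification with the fitted model follows Bayes' rule: score a new point by P(k) * sum_j pi[k, j] * gaussian(x, mu[j], var[j]) for each class k (with P(k) estimated from class frequencies) and pick the largest. The separate mixtures model corresponds to restricting each class's nonzero mixing weights to a disjoint subset of kernels, and the paper's general model interpolates between these two extremes.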