Many pattern recognition systems need to estimate an underlying probability density function (pdf). Mixture models are commonly used for this purpose: the underlying pdf is estimated as a finite mixture of component distributions. The basic computational element of a density mixture model is a component with a nonlinear mapping function that takes part in the mixing. Selecting an optimal set of components is important for an efficient and accurate estimate of the underlying pdf. Previous work has commonly estimated the underlying pdf from the information contained in the patterns alone. In this paper, mutual information theory is employed to measure whether two components are statistically dependent. If a component has small mutual information with the other components, it is statistically independent of them; it therefore makes a significant contribution to the system pdf and should not be removed. If, however, a component has large mutual information, it is unlikely to be statistically independent of the other components and may be removed without significant damage to the estimated pdf. Iteratively removing the components with the largest mutual information yields a density mixture model with an optimal structure that closely approximates the true pdf.
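The pruning idea above can be illustrated with a minimal sketch. The following is not the authors' implementation: it fits a deliberately over-sized 1-D Gaussian mixture by standard EM, then uses an assumed proxy for inter-component dependence — the empirical mutual information between binarized responsibility indicators (a component "fires" on a sample when its responsibility exceeds 1/K). The pair of components with the largest mutual information is the most redundant and is the natural candidate for removal.

```python
import numpy as np

rng = np.random.default_rng(0)
# Data from two well-separated true clusters; we over-fit with K = 4 components.
X = np.concatenate([rng.normal(-3.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])

def em_gmm_1d(x, k, iters=50, seed=0):
    """Plain EM for a 1-D Gaussian mixture; returns weights, means, variances,
    and the n-by-k responsibility matrix."""
    r0 = np.random.default_rng(seed)
    mu = r0.choice(x, k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log responsibilities, stabilized before exponentiation.
        d = x[:, None] - mu[None, :]
        logp = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances (with a floor for stability).
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk,
                         1e-6)
    return w, mu, var, r

def pairwise_mi(binary):
    """Empirical mutual information (nats) between columns of a 0/1 matrix."""
    n, k = binary.shape
    mi = np.zeros((k, k))
    for j in range(k):
        for l in range(j + 1, k):
            m = 0.0
            for a in (0, 1):
                for b in (0, 1):
                    p_ab = np.mean((binary[:, j] == a) & (binary[:, l] == b))
                    if p_ab > 0:
                        p_a = np.mean(binary[:, j] == a)
                        p_b = np.mean(binary[:, l] == b)
                        m += p_ab * np.log(p_ab / (p_a * p_b))
            mi[j, l] = mi[l, j] = m
    return mi

w, mu, var, resp = em_gmm_1d(X, k=4)
fires = resp > 1.0 / 4           # assumed binarization of responsibilities
mi = pairwise_mi(fires)
j, l = np.unravel_index(np.argmax(mi), mi.shape)
print(f"most redundant pair: ({j}, {l}), MI = {mi[j, l]:.3f}")
```

In a full pruning loop one would remove one component of the highest-MI pair, refit by EM, and repeat until the remaining components are close to mutually independent; the 1/K threshold and the binarized-responsibility proxy are illustrative choices, not the paper's exact criterion.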