This paper presents an extension of the naive Bayesian classifier, called the "homologous naive Bayes" (HNB), which is applied to the problem of text-independent, closed-set speaker recognition. Unlike the standard naive Bayes, HNB can take advantage of the prior information that a sequence of input feature vectors belongs to the same unknown class. We refer to such a sequence as a homologous set, which is naturally available in speaker recognition. We empirically compare HNB with the Gaussian mixture model (GMM), the most widely used approach to speaker recognition. The results show that, despite its simplicity, HNB achieves comparable classification accuracy for up to a hundred speakers while requiring far fewer resources, in terms of both time and code size, for training and classification.
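To make the central idea concrete, the sketch below shows one way a naive Bayes classifier with per-feature Gaussian likelihoods could score an entire homologous set jointly: instead of labeling each feature vector separately, the per-vector log-likelihoods are summed before taking the argmax over speakers. This is only an illustrative reading of the abstract under stated assumptions; the class name HomologousNaiveBayes, the Gaussian likelihood model, and all parameter choices are hypothetical and not taken from the paper.

```python
import numpy as np

class HomologousNaiveBayes:
    """Illustrative sketch (not the paper's implementation): naive Bayes with
    per-feature Gaussian likelihoods that classifies a homologous set, i.e. a
    sequence of feature vectors known to share one unknown class."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        # Per-class, per-feature mean and variance (naive independence assumption).
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes_])
        return self

    def _log_likelihood(self, X):
        # Returns an (n_vectors, n_classes) matrix of summed per-feature
        # Gaussian log densities.
        cols = []
        for mu, var in zip(self.means_, self.vars_):
            cols.append(np.sum(-0.5 * (np.log(2 * np.pi * var)
                                       + (X - mu) ** 2 / var), axis=1))
        return np.column_stack(cols)

    def predict_homologous(self, X):
        # Standard naive Bayes would label each vector on its own; here the
        # per-vector log-likelihoods are summed because every vector in the
        # homologous set is assumed to come from the same (unknown) speaker.
        joint = np.log(self.priors_) + self._log_likelihood(X).sum(axis=0)
        return self.classes_[np.argmax(joint)]

# Hypothetical usage: 200 training frames with 12 features from 4 speakers,
# then one utterance of 50 frames classified as a single homologous set.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 12))
y_train = rng.integers(0, 4, size=200)
hnb = HomologousNaiveBayes().fit(X_train, y_train)
speaker = hnb.predict_homologous(rng.normal(size=(50, 12)))
```

Summing log-likelihoods over the set is what lets the classifier exploit the prior knowledge described in the abstract; with a single vector per set the rule reduces to ordinary naive Bayes.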