Nonnegative matrix factorization via generalized product rule and its application for classification
Proceedings of the 10th International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA 2012)
In this paper, generalized statistical independence is proposed from the viewpoint of a generalized multiplication characterized by a monotonically increasing function and its inverse, and it is implemented in naive Bayes models. The paper also proposes an estimation method that directly uses empirical marginal distributions to retain simplicity of calculation. This method can be interpreted as optimizing a rough approximation of a Bregman divergence, and is therefore expected to have a degree of robustness. The effectiveness of the proposed models is demonstrated by numerical experiments on several benchmark data sets.
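The abstract does not spell out the construction, so the following is only a minimal sketch of the idea, assuming the generalized product of nonnegative values takes the form f^-1(f(x) + f(y)) for a monotonically increasing generator f (with f = log recovering the ordinary product). The deformed (q-)logarithm used as f, the class GeneralizedNaiveBayes, the parameter Q, and the smoothing scheme are illustrative assumptions, not the paper's definitions.

import numpy as np

Q = 0.9  # illustrative deformation parameter; the product approaches the ordinary one as Q -> 1


def q_log(x, q=Q):
    # Deformed logarithm ln_q(x) = (x^(1-q) - 1) / (1 - q); tends to ln(x) as q -> 1.
    return (np.power(x, 1.0 - q) - 1.0) / (1.0 - q)


def q_exp(u, q=Q):
    # Inverse of q_log: exp_q(u) = [1 + (1 - q) u]^(1 / (1 - q)), clipped at zero.
    return np.power(np.maximum(1.0 + (1.0 - q) * u, 0.0), 1.0 / (1.0 - q))


def gen_product(values, q=Q):
    # Generalized product: apply the generator, sum, and map back with its inverse.
    return q_exp(np.sum(q_log(np.asarray(values, dtype=float), q)), q)


class GeneralizedNaiveBayes:
    # Naive-Bayes-style classifier for discrete features: class scores combine
    # empirical (smoothed) marginal likelihoods with gen_product instead of
    # the ordinary product.

    def fit(self, X, y, smoothing=1.0):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        self.marginals_ = []  # one dict per (class, feature): value -> probability
        for c in self.classes_:
            Xc = X[y == c]
            per_feature = []
            for j in range(X.shape[1]):
                vals, counts = np.unique(Xc[:, j], return_counts=True)
                probs = (counts + smoothing) / (counts.sum() + smoothing * len(vals))
                per_feature.append(dict(zip(vals, probs)))
            self.marginals_.append(per_feature)
        return self

    def predict(self, X, eps=1e-9):
        preds = []
        for x in X:
            scores = []
            for ci in range(len(self.classes_)):
                likes = [self.marginals_[ci][j].get(x[j], eps) for j in range(len(x))]
                scores.append(self.priors_[ci] * gen_product(likes))
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)

For example, GeneralizedNaiveBayes().fit(X, y).predict(X) returns one label per row of a discrete feature matrix X; choosing Q close to 1 reduces the scoring rule to an ordinary naive Bayes over the empirical marginals.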