This paper extends the framework of independent component analysis (ICA) to supervised learning. The key idea is to find a representation of the input variables that is conditionally independent given the output. Such a representation is useful for naive Bayes learning, which has been reported to perform as well as more sophisticated methods. The learning algorithm is derived from a criterion similar to that of ICA; two-dimensional (joint) entropy plays the central role here, as one-dimensional entropy does in ICA.
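
To make this concrete, here is a hedged sketch of one plausible form of the criterion, under the assumption (not stated in the abstract) of a linear, invertible transform $s = Wx$ with output variable $y$. Conditional independence of the components $s_i$ given $y$ can be measured by the conditional mutual information, which decomposes as

$$ I(s_1, \dots, s_d \mid y) \;=\; \sum_{i=1}^{d} H(s_i, y) \;-\; \log\lvert\det W\rvert \;+\; C, $$

where $C = -(d-1)H(y) - H(x, y)$ does not depend on $W$. Minimizing the sum of two-dimensional entropies $H(s_i, y)$ together with the log-determinant term therefore drives the components toward conditional independence given $y$, just as the standard ICA objective $\sum_i H(s_i) - \log\lvert\det W\rvert$ drives them toward unconditional independence. This is a sketch of the kind of objective the abstract describes, not necessarily the paper's exact derivation.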