We aim to extend AdaBoost to U-Boost, in the paradigm of building a stronger classification machine from a set of weak learning machines. A geometric understanding of the Bregman divergence defined by a generic convex function U leads to the U-Boost method, in the framework of information geometry extended to the space of finite measures over a label set. We propose two versions of the U-Boost learning algorithm, according to whether or not the domain is restricted to the space of probability functions. At each sequential step, we observe that the two adjacent classifiers and the initial classifier are associated with a right triangle measured via the Bregman divergence, called the Pythagorean relation. This yields a mild convergence property of the U-Boost algorithm, analogous to that of the expectation-maximization algorithm. Statistical discussions of consistency and robustness elucidate the properties of the U-Boost methods under a stochastic assumption on the training data.
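To make the geometric statement concrete, the following is a minimal worked sketch in our own notation (the pointwise form; the paper works with finite measures over a label set, so a sum or integral over labels is implicit, and the exact conventions may differ):

% Bregman divergence generated by a differentiable convex function U
D_U(p, q) \;=\; U(p) - U(q) - U'(q)\,(p - q) \;\ge\; 0 .

% Pythagorean relation: when q is the U-projection of p onto a flat
% submanifold containing r, the divergence decomposes as in a right
% triangle, which is what drives the monotone decrease at each
% boosting step:
D_U(p, r) \;=\; D_U(p, q) + D_U(q, r) .

% For example, U(t) = e^t induces the exponential-loss geometry of
% AdaBoost, while U(t) = log(1 + e^t) induces a logistic-type
% geometry (as in LogitBoost-style methods).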
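The special case U(t) = e^t reduces the U-Boost recipe to discrete AdaBoost, which the following Python sketch illustrates (our own illustration, not the paper's reference code; the names fit_stump, u_boost_exp, and predict are ours). Each round fits a decision stump to weighted data, then reweights multiplicatively; the normalization line corresponds to the version of the algorithm restricted to the space of probability functions, one of the two versions mentioned above.

# Minimal sketch: U-Boost with U(t) = exp(t), i.e., discrete AdaBoost.
import numpy as np

def fit_stump(X, y, w):
    """Best threshold stump under weights w; labels y in {-1, +1}."""
    best = (np.inf, 0, 0.0, 1)  # (weighted error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, j] <= thr, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, thr, pol)
    return best

def u_boost_exp(X, y, rounds=20):
    """AdaBoost viewed as U-Boost with U(t) = exp(t)."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # initial measure (uniform)
    ensemble = []
    for _ in range(rounds):
        err, j, thr, pol = fit_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # coefficient of the new weak learner
        pred = pol * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)          # multiplicative (exponential) update
        w /= w.sum()                            # normalized (probability-function) version
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    F = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        F += alpha * pol * np.where(X[:, j] <= thr, 1, -1)
    return np.sign(F)

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
model = u_boost_exp(X, y)
print("training accuracy:", np.mean(predict(model, X) == y))

Swapping the exponential update for one derived from a bounded convex U is what gives the robust variants against mislabeling their insensitivity to large negative margins.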