Ensemble-teacher learning through a perceptron rule with a margin
ICANN'11: Proceedings of the 21st International Conference on Artificial Neural Networks - Volume Part I
Ensemble learning improves the performance of a learning machine by taking a majority vote over many weak learners. As an alternative, Miyoshi and Okada proposed ensemble-teacher learning, in which a student learns from many quasi-optimal teachers. When a linear perceptron is used, the student comes to perform better than the quasi-optimal teachers. When a non-linear perceptron is used, a Hebbian rule remains effective, but a perceptron rule is not, and the student cannot outperform the quasi-optimal teachers. In this paper, we analyze ensemble-teacher learning and explain why a perceptron rule is not effective in this setting, and we propose a method to overcome this problem.
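As a rough illustration of the setting (not the authors' actual model or analysis), the following sketch simulates ensemble-teacher learning with a Hebbian rule: a student perceptron learns from K "quasi-optimal teachers", here modeled as noisy unit-norm copies of a true teacher, picking one teacher at random per example. The dimensions, noise model, learning rate, and update count are all illustrative assumptions; typically the student's direction ends up closer to the true teacher than an average individual teacher, because the teachers' noise averages out.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000       # input dimension (assumed value)
K = 10         # number of quasi-optimal teachers (assumed value)
steps = 50000  # number of Hebbian updates (assumed value)
eta = 0.1      # learning rate (assumed value)

# True teacher B and K quasi-optimal teachers: noisy unit-norm copies of B.
B = rng.standard_normal(N)
B /= np.linalg.norm(B)
teachers = [B + 0.5 * rng.standard_normal(N) / np.sqrt(N) for _ in range(K)]
teachers = [t / np.linalg.norm(t) for t in teachers]

def overlap(w):
    """Cosine similarity between w and the true teacher B."""
    n = np.linalg.norm(w)
    return 0.0 if n == 0.0 else float(B @ w) / n

J = np.zeros(N)  # student weights
for _ in range(steps):
    x = rng.standard_normal(N) / np.sqrt(N)  # random input, |x| ~ 1
    t = teachers[rng.integers(K)]            # one randomly chosen teacher per step
    J += eta * np.sign(t @ x) * x            # Hebbian rule: move toward teacher's output

student = overlap(J)
teacher_mean = float(np.mean([overlap(t) for t in teachers]))
print(f"mean teacher overlap: {teacher_mean:.3f}")
print(f"student overlap:      {student:.3f}")
```

In this toy setup the Hebbian student exceeds the mean teacher overlap; the paper's point is that a plain perceptron rule (updating only on disagreement with the sampled teacher) does not achieve this with a non-linear perceptron, motivating the proposed margin-based rule.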