On-line ensemble-teacher learning through a perceptron rule with a margin
ICANN'10: Proceedings of the 20th International Conference on Artificial Neural Networks, Part III
In ensemble-teacher learning, a student learns from a teacher selected at random from a pool of many quasi-optimal teachers, and after learning the student outperforms the quasi-optimal teachers themselves. When a Hebbian rule is used, student performance improves as the number of quasi-optimal teachers grows; a plain perceptron rule, however, yields no such improvement. We previously proposed a novel ensemble-teacher learning scheme that uses a perceptron rule with a margin. This rule is mid-way between a Hebbian rule and a perceptron rule: with an infinite margin every example triggers an update, recovering the Hebbian rule, while with zero margin only misclassified examples do, recovering the perceptron rule. Computer simulations showed that the perceptron rule with a margin does improve student performance. In this paper, we provide theoretical support for the proposed method using statistical-mechanics methods.
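The learning scheme described above can be sketched in a few lines. The following is a minimal simulation, not the paper's exact protocol: the dimension `N`, pool size `K`, margin `C`, learning rate `eta`, the way the quasi-optimal teachers are generated as noisy copies of a true teacher, and all numerical values are illustrative assumptions. At each step the student sees a random input, receives the label of one randomly chosen teacher, and applies the perceptron rule with a margin, updating whenever its own local field on the teacher's side of the boundary falls below the margin.

```python
import numpy as np

# Illustrative sketch of ensemble-teacher learning with a perceptron
# rule with a margin. All parameter values are assumptions, not the
# paper's settings.

rng = np.random.default_rng(0)

N = 500      # input dimension (assumed)
K = 10       # number of quasi-optimal teachers in the pool (assumed)
C = 1.0      # margin: C -> infinity recovers the Hebbian rule,
             # C = 0 recovers the plain perceptron rule
eta = 1.0    # learning rate (assumed)
steps = 20000

# True teacher A, and quasi-optimal teachers B_k built as noisy copies
# of A (one simple way to model a pool of imperfect teachers).
A = rng.standard_normal(N)
A *= np.sqrt(N) / np.linalg.norm(A)
B = A + 0.5 * rng.standard_normal((K, N))
B *= np.sqrt(N) / np.linalg.norm(B, axis=1, keepdims=True)

J = np.zeros(N)  # student weight vector, initialized at zero

def overlap(J, A):
    """Direction cosine between student and true teacher."""
    nJ = np.linalg.norm(J)
    return float(J @ A / (nJ * np.linalg.norm(A))) if nJ > 0 else 0.0

for _ in range(steps):
    x = rng.standard_normal(N) / np.sqrt(N)  # random input, |x| ~ 1
    k = rng.integers(K)                      # pick one teacher at random
    v = np.sign(B[k] @ x)                    # selected teacher's label
    u = J @ x                                # student's local field
    # Perceptron rule with a margin: update not only on disagreement
    # (u * v < 0) but whenever the signed field is below the margin C.
    if u * v < C:
        J += eta * v * x

print(f"student-teacher overlap after learning: {overlap(J, A):.2f}")
```

With `C = 0` the condition `u * v < C` fires only on misclassified inputs (the plain perceptron rule), and for very large `C` it fires on every input (the Hebbian rule); intermediate margins interpolate between the two, which is the regime the paper analyzes.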