We propose an online learning algorithm to tackle the problem of learning under limited computational resources, in a teacher-student scenario, over multiple visual cues. For each cue, we train an online learning algorithm that sacrifices some performance in favor of bounded memory growth and fast updates of the solution. We then recover performance by combining the multiple cues in the online setting. To this end, we use a two-layer structure. In the first layer, a budget online learning algorithm is trained for each single cue, so that each classifier provides confidence scores for the target categories. On top of these classifiers, a linear online learning algorithm learns the combination of the cues. As in standard online learning setups, learning takes place in rounds: on each round, a new hypothesis is estimated as a function of the previous one. We test our algorithm in two student-teacher experimental scenarios, and in both cases the results show that the algorithm learns the new concepts in real time and generalizes well.
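The two-layer structure described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual algorithm: the class names `BudgetPerceptron` and `TwoLayerOnline`, the Gaussian kernel, and the simple drop-oldest budget rule are all assumptions chosen for brevity (the first layer stands in for any bounded-memory kernel learner, the second layer for any linear online combiner).

```python
import numpy as np

class BudgetPerceptron:
    """Hypothetical first-layer learner: a kernel Perceptron with a fixed
    support-set budget. When the budget is full, the oldest support vector
    is discarded, which bounds memory growth and keeps updates fast."""
    def __init__(self, budget=50, gamma=1.0):
        self.budget = budget
        self.gamma = gamma
        self.sv = []      # stored support vectors
        self.alpha = []   # their signed weights

    def _kernel(self, a, b):
        # Gaussian kernel (an assumption; any Mercer kernel would do)
        return np.exp(-self.gamma * np.sum((a - b) ** 2))

    def score(self, x):
        # Confidence score for the positive class on this cue
        return sum(a * self._kernel(v, x) for v, a in zip(self.sv, self.alpha))

    def update(self, x, y):
        if y * self.score(x) <= 0:          # mistake-driven update
            self.sv.append(x)
            self.alpha.append(float(y))
            if len(self.sv) > self.budget:  # enforce the budget
                self.sv.pop(0)
                self.alpha.pop(0)

class TwoLayerOnline:
    """First layer: one budget learner per visual cue.
    Second layer: a linear Perceptron over the per-cue confidence scores."""
    def __init__(self, n_cues, budget=50):
        self.cues = [BudgetPerceptron(budget) for _ in range(n_cues)]
        self.w = np.zeros(n_cues)           # cue-combination weights

    def predict(self, xs):
        # xs: one feature vector per cue
        conf = np.array([c.score(x) for c, x in zip(self.cues, xs)])
        return (np.sign(self.w @ conf) or 1.0), conf

    def round(self, xs, y):
        # One online round: predict, then update both layers from (xs, y)
        yhat, conf = self.predict(xs)
        if y * (self.w @ conf) <= 0:
            self.w += y * conf              # linear update on the combiner
        for c, x in zip(self.cues, xs):     # each cue learner sees its own view
            c.update(x, y)
        return yhat
```

For binary labels `y ∈ {-1, +1}`, calling `round` once per incoming example gives the hypothesis-per-round behavior described in the abstract; the multiclass case would keep one confidence score per category instead of one per cue.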