We design algorithms for two online variance minimization problems. Specifically, in every trial $t$ our algorithms receive a covariance matrix $\mathcal{C}_t$ and try to select a parameter vector $\boldsymbol{w}_t$ such that the total variance over a sequence of trials, $\sum_t \boldsymbol{w}_t^{\top}\mathcal{C}_t\boldsymbol{w}_t$, is not much larger than the total variance of the best parameter vector $\boldsymbol{u}$ chosen in hindsight. Two parameter spaces are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second leads to an online calculation of the eigenvector with minimum eigenvalue. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by the relative entropy. In the second case the algorithm maintains a mixture of unit vectors, represented as a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each case we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
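Both updates have simple closed forms, so a minimal sketch may help fix ideas. The code below assumes a fixed learning rate `eta` and symmetric positive semidefinite covariance matrices; the function names (`eg_update`, `meg_update`) and the demo loop are illustrative choices, not taken from the paper. EG multiplies each simplex weight by the exponentiated negative gradient of the variance loss $\boldsymbol{w}^{\top}\mathcal{C}\boldsymbol{w}$ (whose gradient is $2\mathcal{C}\boldsymbol{w}$) and renormalizes; the matrix version does the analogous update in the matrix-log domain and renormalizes to unit trace.

```python
# Hedged sketch of the two updates described in the abstract.
# eta is an assumed fixed learning rate; C_t is assumed symmetric PSD.
import numpy as np

def eg_update(w, C, eta):
    """Exponentiated Gradient step on the probability simplex.

    The per-trial loss is w^T C w with gradient 2 C w; EG multiplies each
    weight by exp(-eta * gradient) and renormalizes to sum to one.
    """
    v = w * np.exp(-eta * 2.0 * (C @ w))
    return v / v.sum()

def meg_update(W, C, eta):
    """Matrix Exponentiated Gradient step on density matrices.

    Mirrors EG with the quantum relative entropy as the divergence: shift
    the matrix logarithm of W by -eta * C, exponentiate back, and
    renormalize to unit trace. Computed via eigendecompositions, since
    both W and C are symmetric.
    """
    # Matrix logarithm of the current density matrix (eigenvalues are
    # clamped away from zero to keep the log finite).
    lam, U = np.linalg.eigh(W)
    logW = U @ np.diag(np.log(np.maximum(lam, 1e-12))) @ U.T
    # Exponentiate the shifted log-parameters.
    mu, V = np.linalg.eigh(logW - eta * C)
    W_new = V @ np.diag(np.exp(mu)) @ V.T
    return W_new / np.trace(W_new)

# Illustrative online loop over random covariance matrices.
rng = np.random.default_rng(0)
d = 5
w = np.full(d, 1.0 / d)   # uniform start on the simplex
W = np.eye(d) / d          # maximally mixed density matrix
for _ in range(100):
    A = rng.standard_normal((d, d))
    C = A @ A.T / d        # random PSD covariance for this trial
    w = eg_update(w, C, eta=0.1)
    W = meg_update(W, C, eta=0.1)
```

In the density-matrix case, reading off the eigenvector of $W$ with the largest eigenvalue after many trials gives the online estimate of the minimum-eigenvalue direction, since directions with low accumulated variance receive the most weight under the update.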