In this thesis we study algorithms for online convex optimization and their relation to approximate optimization.

In the first part, we propose a new algorithm for a general online optimization framework called online convex optimization. Whereas previous efficient algorithms are mostly gradient-descent based, the new algorithm is inspired by the Newton-Raphson method for convex optimization, and is hence called ONLINE NEWTON STEP. We prove that in certain scenarios ONLINE NEWTON STEP guarantees logarithmic regret, as opposed to the polynomial bounds achieved by previous algorithms. The analysis is based on new insights concerning the natural "follow-the-leader" method for online optimization, and answers some open problems regarding the latter. One application is the portfolio management problem, for which we describe experimental results over real market data.

In the second part of the thesis, we describe a general scheme for utilizing online game playing algorithms to obtain efficient algorithms for offline optimization. Using new and old online convex optimization algorithms we show how to derive the following: (1) approximation algorithms for convex programming with linear dependence on the approximation guarantee; (2) fast algorithms for approximate semidefinite programming; (3) efficient algorithms for haplotype frequency estimation.
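To make the first part concrete, the following is a minimal sketch of an Online Newton Step-style update in the unconstrained case: the matrix `A` accumulates outer products of observed gradients and serves as a surrogate for second-order information, and each iterate takes a Newton-like step preconditioned by `A`. The function names and parameters (`grad_fn`, `gamma`, `eps`) are illustrative choices, not the thesis's notation, and the projection onto the feasible convex set (done in the `A`-induced norm in the full algorithm) is omitted here for brevity.

```python
import numpy as np

def online_newton_step(grad_fn, x0, T, gamma=0.5, eps=1.0):
    """Sketch of an ONS-style loop (unconstrained; projection step omitted).

    grad_fn(x, t) returns the gradient of the round-t loss at x.
    gamma and eps are illustrative constants; the theory prescribes
    values depending on the diameter and gradient bounds.
    """
    d = len(x0)
    A = eps * np.eye(d)          # regularized "Hessian surrogate"
    x = np.asarray(x0, dtype=float).copy()
    iterates = []
    for t in range(T):
        iterates.append(x.copy())
        g = grad_fn(x, t)
        A += np.outer(g, g)      # accumulate gradient outer products
        # Newton-like step: solve A * step = g instead of inverting A
        x = x - (1.0 / gamma) * np.linalg.solve(A, g)
    return iterates, x

# Toy usage: repeated quadratic loss f(x) = ||x - c||^2, so the
# iterates should drift toward c as the accumulated curvature grows.
c = np.array([0.0])
_, x_final = online_newton_step(lambda x, t: 2.0 * (x - c),
                                x0=np.array([1.0]), T=50)
```

Because `A` only grows, the effective step size shrinks over time, which is the mechanism behind the logarithmic regret bound for exp-concave losses.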