Developing Learning Algorithms via Optimized Discretization of Continuous Dynamical Systems

  • Authors:
  • Qing Tao; Zhengya Sun; Kang Kong

  • Affiliations:
  • Institute of Automation, Chinese Academy of Sciences, Beijing, China (Qing Tao, Zhengya Sun); New Star Research Institute of Applied Technology, Hefei, China (Kang Kong)

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2012

Abstract

Most existing numerical optimization methods are based on discretizations of ordinary differential equations. To solve convex and smooth optimization problems arising in machine learning, this paper develops efficient batch and online algorithms based on a new principle: the optimized discretization of continuous dynamical systems (ODCDS). First, a projected gradient dynamical system for batch learning is introduced; its Lyapunov stability and monotonicity guarantee the accuracy of the discretization-based optimizer and the applicability of a line search strategy. Furthermore, under mild assumptions, a new online learning algorithm achieving regret $O(\sqrt{T})$ or $O(\log T)$ is obtained. Thanks to the line search strategy, the proposed batch ODCDS is insensitive to the choice of step size and decreases the objective faster. With only a small number of line search steps, the proposed stochastic algorithm attains sufficient stability and approximate optimality. Experimental results confirm the theoretical analysis and demonstrate the efficiency of the algorithms.
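
To make the construction concrete, here is a minimal Python sketch of the two ingredients the abstract describes: a forward-Euler discretization of a projected gradient flow with Armijo-style backtracking (batch), and a generic online projected gradient baseline with decaying steps. All names (`odcds_batch`, `project`, `online_projected_gradient`), the l2-ball feasible set, and the specific step-size rules are illustrative assumptions, not details taken from the paper.

```python
import numpy as np


def project(x, radius=1.0):
    """Euclidean projection onto an l2 ball (a simple stand-in feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def odcds_batch(f, grad, x0, step=1.0, beta=0.5, c=1e-4, iters=100):
    """Forward-Euler discretization of the projected gradient flow
        dx/dt = P(x - grad f(x)) - x,
    with Armijo-style backtracking on the step size. Illustrative only;
    the paper's exact discretization and line search rule may differ."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = project(x - grad(x)) - x          # vector field of the dynamics
        t = step
        # Shrink t until sufficient decrease holds; since d is a descent
        # direction (grad(x) @ d <= -||d||^2), this loop terminates.
        while f(x + t * d) > f(x) + c * t * (grad(x) @ d):
            t *= beta
        x = x + t * d
    return x


def online_projected_gradient(grads, x0, eta0=1.0):
    """Generic online projected gradient with eta_t = eta0 / sqrt(t), which
    attains O(sqrt(T)) regret for convex losses (and O(log T) with
    eta_t ~ 1/t under strong convexity); shown as a baseline for the
    paper's online ODCDS, not as its exact algorithm."""
    x = np.asarray(x0, dtype=float)
    for t, g in enumerate(grads, start=1):
        x = project(x - (eta0 / np.sqrt(t)) * g(x))
    return x


if __name__ == "__main__":
    # Toy batch problem: minimize ||x - b||^2 over the unit l2 ball.
    b = np.array([2.0, -1.0])
    f = lambda x: float(np.sum((x - b) ** 2))
    grad = lambda x: 2.0 * (x - b)
    x_star = odcds_batch(f, grad, np.zeros(2))
    print(x_star)  # close to b / ||b||, the projection of b onto the ball
```

Because the backtracking loop adapts the step at every iteration, the initial step size mainly affects the number of line search trials rather than convergence, which is consistent with the step-size insensitivity claimed in the abstract.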