Risk bounds of learning processes for Lévy processes

  • Authors:
  • Chao Zhang; Dacheng Tao

  • Affiliations:
  • Center for Evolutionary Medicine and Informatics, Biodesign Institute, Arizona State University, Tempe, AZ; Centre for Quantum Computation & Intelligent Systems, FEIT, University of Technology, Sydney, NSW, Australia

  • Venue:
  • The Journal of Machine Learning Research
  • Year:
  • 2013

Abstract

Lévy processes are a class of stochastic processes that includes, for example, Poisson processes and Brownian motions, and they play an important role in stochastic modeling and machine learning. It is therefore essential to study risk bounds of the learning process for time-dependent samples drawn from a Lévy process (briefly, the learning process for a Lévy process). Notably, the samples in this learning process are not independent and identically distributed (i.i.d.), so results from traditional statistical learning theory are not applicable (or at least cannot be applied directly), because they are obtained under the sample-i.i.d. assumption. In this paper, we study risk bounds of the learning process for time-dependent samples drawn from a Lévy process and then analyze the asymptotic behavior of the learning process. In particular, we first develop deviation inequalities and a symmetrization inequality for the learning process. Using the resulting inequalities, we then obtain risk bounds based on the covering number. Finally, based on these risk bounds, we study the asymptotic convergence and the rate of convergence of the learning process for a Lévy process, and we compare our results with the related results obtained under the sample-i.i.d. assumption.
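
For orientation, covering-number risk bounds in the classical i.i.d. setting typically take the following schematic form (this is a generic illustration for comparison, not the paper's Lévy-process result; the constants c_1, c_2, c_3 are absolute constants that vary across textbook statements): for a function class \mathcal{F} with |f| \le M for all f \in \mathcal{F}, N i.i.d. samples z_1, \dots, z_N, and any \xi > 0,

\Pr\Big\{ \sup_{f \in \mathcal{F}} \Big| \mathbb{E} f - \frac{1}{N} \sum_{n=1}^{N} f(z_n) \Big| > \xi \Big\} \;\le\; c_1 \, \mathcal{N}\big(\mathcal{F}, \xi/c_2, N\big) \, \exp\!\Big( -\frac{c_3 \, N \xi^2}{M^2} \Big),

where \mathcal{N}(\mathcal{F}, \xi, N) denotes a covering number of \mathcal{F} at radius \xi. The paper establishes bounds of a comparable flavor for time-dependent samples from a Lévy process, where the i.i.d. concentration step is replaced by the deviation and symmetrization inequalities developed for that setting.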