Lévy processes are a class of stochastic processes that includes, for example, Poisson processes and Brownian motion, and they play an important role in stochastic processes and machine learning. It is therefore essential to study risk bounds of the learning process for time-dependent samples drawn from a Lévy process (briefly, the learning process for a Lévy process). Notably, the samples in this learning process are not independent and identically distributed (i.i.d.), so the results of traditional statistical learning theory are not applicable (or at least cannot be applied directly), because they are obtained under the assumption of i.i.d. samples. In this paper, we study risk bounds of the learning process for time-dependent samples drawn from a Lévy process and then analyze the asymptotic behavior of the learning process. In particular, we first develop deviation inequalities and a symmetrization inequality for the learning process. Using these inequalities, we then obtain risk bounds based on the covering number. Finally, based on the resulting risk bounds, we study the asymptotic convergence and the rate of convergence of the learning process for a Lévy process, and we compare our results with the related results obtained under the i.i.d.-sample assumption.
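To make the non-i.i.d. setting concrete: the abstract names Poisson processes and Brownian motion as canonical Lévy processes, and a sum of the two is again a Lévy process. The sketch below (function name, parameters, and values are illustrative, not from the paper) samples a path of such a jump-diffusion and shows why the sampled values are time-dependent rather than i.i.d.

```python
import numpy as np

def levy_path(n_steps, dt=0.01, sigma=1.0, jump_rate=5.0, jump_scale=1.0, seed=0):
    """Sample a path of a simple Lévy process: Brownian motion plus
    compound Poisson jumps with Gaussian jump sizes (a jump-diffusion)."""
    rng = np.random.default_rng(seed)
    # Brownian (Gaussian) increments: stationary and independent.
    gauss = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    # Compound Poisson increments: a Poisson number of jumps per step;
    # the sum of N i.i.d. N(0, s^2) jump sizes is distributed as N(0, N * s^2).
    n_jumps = rng.poisson(jump_rate * dt, size=n_steps)
    jumps = rng.normal(0.0, jump_scale * np.sqrt(n_jumps))
    # Cumulative sums give the observed values X(t_1), ..., X(t_n): each X(t_k)
    # depends on all earlier increments, so the samples handed to the learner
    # are time-dependent and not i.i.d., unlike the classical setting.
    return np.cumsum(gauss + jumps)

path = levy_path(1000)
print(path.shape)  # (1000,)
```

The increments of the process are i.i.d., but learning-theoretic risk bounds here are stated for the path values themselves, which is exactly where the i.i.d.-based tools (e.g. classical symmetrization) stop applying directly.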