Reduced-Size Kernel Models for Nonlinear Hybrid System Identification
IEEE Transactions on Neural Networks - Part 2
This paper deals with the identification of hybrid systems switching between nonlinear subsystems of unknown structure and focuses on the connections with a family of machine learning algorithms known as support vector machines. In particular, we consider a recent approach to nonlinear hybrid system identification based on a convex relaxation of a sparse optimization problem. In this approach, the submodels are iteratively estimated one by one by maximizing the sparsity of the corresponding error vector. We extend this approach in several ways. First, we relax the sparsity condition by introducing robust sparsity, which can be optimized through the minimization of a modified l1-norm or, equivalently, of the ε-insensitive loss function. Then, we show that, depending on the choice of regularizer, the method is equivalent to different forms of support vector regression. More precisely, the submodels can be estimated by iteratively solving a classical support vector regression problem, in which the sparsity of support vectors relates to the sparsity of the error vector in the considered hybrid system identification framework. This allows us to extend theoretical results as well as efficient optimization algorithms from the field of machine learning to the hybrid system framework.
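The iterative scheme described above — estimating submodels one by one, where data points fitted within the ε-insensitive tube are attributed to the current submodel and the rest are carried over to the next iteration — can be sketched in a few lines. This is an illustrative approximation only, not the authors' exact algorithm: the synthetic two-mode data, the use of scikit-learn's `SVR`, the number of submodels, and all parameter values (`C`, `epsilon`, the RBF kernel) are assumptions.

```python
# Hedged sketch: iteratively fit an eps-insensitive SVR and attribute points
# inside the tube to the current submodel, mimicking the robust-sparsity idea.
# Synthetic data and all hyperparameters are assumptions for illustration.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Assumed switched nonlinear system with two modes
x = rng.uniform(-2, 2, size=200)
mode = rng.integers(0, 2, size=200)
y = np.where(mode == 0, np.sin(2 * x), 0.5 * x**2 - 1)
y += 0.05 * rng.standard_normal(200)

eps = 0.15                      # width of the eps-insensitive tube
remaining = np.arange(len(x))   # indices not yet attributed to a submodel
submodels = []

for _ in range(2):              # assumed number of submodels
    svr = SVR(kernel="rbf", C=100.0, epsilon=eps)
    svr.fit(x[remaining, None], y[remaining])
    err = np.abs(svr.predict(x[remaining, None]) - y[remaining])
    submodels.append(svr)
    # points inside the tube are "explained"; estimate the next submodel
    # on the residual set, whose error vector the method keeps sparse
    remaining = remaining[err > eps]

print(len(submodels), "submodels;", len(remaining), "points left unattributed")
```

In the paper's framework, the sparsity of the support vectors of each SVR problem corresponds to the sparsity of the error vector of the associated submodel; the sketch above only mirrors the data-attribution loop, not the convex relaxation itself.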