In this paper, new learning methods tolerant to imprecision are introduced and applied to fuzzy modeling based on the Takagi-Sugeno-Kang (TSK) fuzzy system. Fuzzy modeling has an intrinsic inconsistency: it performs reasoning tolerant to imprecision, yet its learning methods have zero tolerance to imprecision. The proposed methods remove this inconsistency, in which zero-tolerance learning is used to obtain a fuzzy model that is itself tolerant to imprecision. These methods may be called ε-insensitive learning, or ε-learning, because the ε-insensitive loss function is used to fit the fuzzy model to real data. This leads to a weighted, or "fuzzified," version of Vapnik's support vector regression machine. Two approaches to solving the ε-insensitive learning problem are introduced. The first leads to a quadratic programming problem with bound constraints and a single linear equality constraint; the second leads to solving a system of linear inequalities. Computationally efficient numerical methods are proposed for both. ε-insensitive learning yields a model with minimal Vapnik-Chervonenkis dimension, which improves the model's generalization ability and its robustness to outliers. Finally, numerical examples are given to demonstrate the validity of the introduced methods.
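The core idea — a weighted ε-insensitive loss that charges nothing for residuals inside a ±ε tube — can be sketched in Python. This is a minimal subgradient-descent illustration for a linear consequent, not the paper's quadratic-programming or inequality-system solvers; the function names and parameters here are hypothetical:

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.1, weights=None):
    """Weighted epsilon-insensitive loss: residuals within +/-eps cost nothing;
    larger residuals are penalized linearly, scaled by per-sample weights."""
    err = np.maximum(np.abs(y_true - y_pred) - eps, 0.0)
    if weights is None:
        weights = np.ones_like(err)
    return np.sum(weights * err)

def fit_linear_eps(X, y, eps=0.1, weights=None, lr=0.05, n_iter=5000):
    """Fit y ~ X @ w + b by subgradient descent on the weighted
    eps-insensitive loss (an illustrative simplification of solving
    the SVR-style quadratic program)."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    if weights is None:
        weights = np.ones(n)
    for _ in range(n_iter):
        r = X @ w + b - y
        # Subgradient is zero inside the tube, +/-1 (times weight) outside.
        g = np.where(np.abs(r) > eps, np.sign(r), 0.0) * weights
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

# Noisy linear data: points inside the eps-tube do not pull on the fit,
# so mild noise and a few outliers have limited influence.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = 2.0 * x[:, 0] + 1.0 + rng.uniform(-0.05, 0.05, size=50)
w, b = fit_linear_eps(x, y, eps=0.1)
```

The per-sample `weights` are where the "fuzzified" aspect enters: membership degrees from the antecedent fuzzy sets can weight each datum's contribution to a local consequent model.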