This paper presents an optimal training subset for support vector regression (SVR) in a deregulated power market. Training on this subset has a distinct advantage over SVR trained on the full set: it avoids the O(N^2) memory complexity of large samples and mitigates over-fitting when regressing unbalanced data. To compute the proposed optimal training subset, an approximate convex optimization framework is constructed by coupling a penalty term on the size of the subset with the mean absolute percentage error (MAPE) of the prediction over the full training set. A dedicated method for finding an approximate solution of this objective function is also introduced; it extracts the maximum information from the full training set and increases overall prediction accuracy. The applicability and superiority of the presented algorithm are demonstrated in experiments on half-hourly electric load data (48 data points per day) from New South Wales under three different sample sizes. In particular, the benefit of the developed methods for large data sets is shown by significantly reduced CPU running time.
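The objective described above, a full-set MAPE term plus a penalty on the subset size, can be sketched as a greedy forward-selection procedure. This is only an illustrative sketch, not the paper's algorithm: the exact penalty form, the solver, and the regressor are assumptions here, and a simple 1-nearest-neighbour predictor stands in for the SVR so the example stays dependency-free.

```python
def mape(y_true, y_pred):
    # Mean absolute percentage error over the full training set
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def predict_1nn(subset, x):
    # Placeholder predictor: 1-nearest neighbour on the chosen subset.
    # (The paper trains an SVR on the subset; 1-NN keeps this sketch self-contained.)
    _, ys = min(subset, key=lambda p: abs(p[0] - x))
    return ys

def objective(subset, data, lam):
    # Assumed objective form: J(S) = MAPE over the FULL set + lam * |S| / N
    preds = [predict_1nn(subset, x) for x, _ in data]
    return mape([y for _, y in data], preds) + lam * len(subset) / len(data)

def greedy_subset(data, lam=0.05):
    # Greedy forward selection: repeatedly add the point that most reduces J(S),
    # stopping when no single addition improves the objective.
    remaining = list(data)
    subset = [remaining.pop(0)]  # seed with one point
    best = objective(subset, data, lam)
    improved = True
    while improved and remaining:
        improved = False
        scores = [(objective(subset + [p], data, lam), i)
                  for i, p in enumerate(remaining)]
        j, i = min(scores)
        if j < best:
            best, improved = j, True
            subset.append(remaining.pop(i))
    return subset, best
```

The size penalty `lam` trades prediction accuracy on the full set against subset compactness: a larger `lam` yields a smaller training subset and hence lower training cost, mirroring the memory and CPU-time savings the paper reports.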