An incremental approach to support vector machine learning
ISNN'12 Proceedings of the 9th international conference on Advances in Neural Networks - Volume Part I
Incremental learning has been widely studied in the machine learning literature to handle tasks where the learning environment changes steadily or training samples arrive one after another over time. The Support Vector Machine has been used successfully in pattern recognition and function estimation. To tackle incremental learning problems that involve new features, this paper proposes an incremental feature learning algorithm based on the Least Squares Support Vector Machine (LS-SVM). In this algorithm, the features of newly arrived samples consist of two parts: the already existing features and the new features. Using the historical structural parameters trained on the existing features, the algorithm trains only the new features with the LS-SVM. Experiments show that the algorithm has two notable properties. First, different kernel functions can be chosen for the existing features and for the new features according to the distribution of the samples, so the algorithm is better suited to classification tasks that cannot be solved well with a single kernel function. Second, training time and memory usage are reduced, because the algorithm fully reuses the structural parameters of previously trained classifiers and trains only the new features with the LS-SVM. Experiments on several UCI datasets demonstrate that the algorithm trains faster than the standard LS-SVM while achieving comparable or better classification performance.
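The LS-SVM classifier that the algorithm builds on replaces the SVM's quadratic program with a single linear system. The sketch below, in Python with NumPy, shows a minimal LS-SVM trainer together with a combined kernel that applies a linear kernel to the existing features and an RBF kernel to the new features. The function names and the simple kernel-sum combination are illustrative assumptions for exposition; they are not the paper's exact incremental update of the structural parameters.

```python
import numpy as np

def linear_kernel(A, B):
    # Plain inner-product kernel for the existing-feature block
    return A @ B.T

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian RBF kernel for the new-feature block
    d = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * d)

def combined_kernel(Xo1, Xn1, Xo2, Xn2, gamma=0.5):
    # Different kernels per feature block, combined by summation
    # (a common, hypothetical choice; the paper may combine them differently)
    return linear_kernel(Xo1, Xo2) + rbf_kernel(Xn1, Xn2, gamma)

def lssvm_train(K, y, C=10.0):
    """Solve the LS-SVM linear system for the bias b and multipliers alpha.

    The system is  [ 0   y^T          ] [b]     [0]
                   [ y   Omega + I/C  ] [alpha] [1]
    with Omega_ij = y_i * y_j * K_ij.
    """
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * K
    M = np.block([[np.zeros((1, 1)), y[None, :].astype(float)],
                  [y[:, None].astype(float), Omega + np.eye(n) / C]])
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]  # b, alpha

def lssvm_predict(K_test, y, alpha, b):
    # Decision function: sign(sum_i alpha_i y_i K(x, x_i) + b)
    return np.sign(K_test @ (alpha * y) + b)
```

Because training reduces to one linear solve, reusing previously computed structure for the existing features and solving only for the new-feature contribution is what keeps the incremental variant cheap relative to retraining from scratch.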