Incremental learning of neural networks has attracted much interest in recent years owing to its applicability to large-scale data sets and to distributed learning scenarios. Nonstationary learning has likewise emerged as a subarea of machine learning, motivated by the difficulties classical methods face under data set shift. In this paper we present an algorithm for training single-layer neural networks with nonlinear output functions that accounts for incremental, nonstationary, and distributed learning scenarios. We further show that introducing a regularization term into the proposed model is equivalent to choosing a particular initialization of the devised training algorithm, which may be suitable for real-time systems that must operate under noisy conditions. In addition, the algorithm includes several previous models as special cases and can be used as a building block for more complex models, such as multilayer perceptrons, extending their capacity to incremental, nonstationary, and distributed learning paradigms. The proposed algorithm is tested on standard data sets and compared with previous approaches, demonstrating its higher accuracy.
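To make the regularization-as-initialization claim concrete, the following is the standard recursive least-squares form of that equivalence; this is a generic sketch, and the paper's derivation for nonlinear output functions may differ. With inputs $x_i$, targets $y_i$, and penalty weight $\varepsilon > 0$, the ridge-regularized problem

\[ \min_{w} \; \sum_{i=1}^{t} \left( y_i - w^\top x_i \right)^2 + \varepsilon \lVert w \rVert^2 \]

has the closed-form solution

\[ w_t = \Big( \sum_{i=1}^{t} x_i x_i^\top + \varepsilon I \Big)^{-1} \sum_{i=1}^{t} x_i y_i , \]

which is exactly what the recursion $A_t = A_{t-1} + x_t x_t^\top$, $b_t = b_{t-1} + x_t y_t$, $w_t = A_t^{-1} b_t$ produces when initialized with $A_0 = \varepsilon I$ and $b_0 = 0$: the penalty term and the initialization are interchangeable.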
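A minimal code sketch of the same idea, assuming a generic sufficient-statistics formulation (the class name, forgetting factor, and merge rule are illustrative assumptions, not the paper's implementation): accumulating A and b gives incremental learning, an exponential forgetting factor is one common way to track nonstationarity, and summing local statistics merges models trained on distributed data.

    import numpy as np

    class IncrementalLinearUnit:
        # Hypothetical illustration of an incremental, mergeable,
        # ridge-regularized least-squares trainer for one linear unit;
        # not the paper's actual algorithm.

        def __init__(self, n_inputs, reg=1e-3):
            self.reg = reg                           # ridge penalty weight
            self.A = np.zeros((n_inputs, n_inputs))  # running sum of x x^T
            self.b = np.zeros(n_inputs)              # running sum of y x

        def partial_fit(self, X, y, forget=1.0):
            # Incremental: only the sufficient statistics (A, b) are kept,
            # so past samples never need to be stored or revisited.
            # forget < 1 exponentially down-weights old data, a common
            # heuristic for tracking nonstationary (drifting) targets.
            self.A = forget * self.A + X.T @ X
            self.b = forget * self.b + X.T @ y

        def merge(self, other):
            # Distributed: nodes fit local data independently, then combine
            # exactly by summing their sufficient statistics.
            self.A += other.A
            self.b += other.b

        def weights(self):
            # Solving (A + reg*I) w = b is ridge regression; starting the
            # recursion from A_0 = reg*I instead yields the same solution,
            # mirroring the regularization-as-initialization equivalence.
            return np.linalg.solve(self.A + self.reg * np.eye(len(self.b)),
                                   self.b)

    # Usage on a synthetic stream of mini-batches:
    rng = np.random.default_rng(0)
    unit = IncrementalLinearUnit(n_inputs=5)
    for _ in range(10):
        X = rng.standard_normal((32, 5))
        unit.partial_fit(X, X @ np.arange(5.0))
    print(unit.weights())  # close to [0, 1, 2, 3, 4]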