Ensemble learning has gained considerable attention in tasks including regression, classification, and clustering. AdaBoost and Bagging are two popular approaches for training such models. The former provides accurate estimations in regression settings but is computationally expensive because of its inherently sequential structure, while the latter is less accurate but highly efficient. A general drawback of ensemble algorithms is the high computational cost of the training stage. To address this issue, we propose a parallel implementation of the Resampling Local Negative Correlation (RLNC) algorithm for training a neural network ensemble, aiming for accuracy competitive with AdaBoost and efficiency comparable to Bagging. We evaluate our approach on both synthetic and real regression datasets from the UCI and StatLib repositories. In particular, our fine-grained parallel approach achieves a satisfactory balance between accuracy and parallel efficiency.
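
The abstract does not spell out the RLNC update rule, so the following is a minimal sketch under stated assumptions: it uses the classic negative correlation learning gradient, (f_i - y) - lambda * (f_i - fbar), combined with a per-member bootstrap resample, which is the general idea behind resampling-based local negative correlation. All names (MLP, train_ncl_ensemble), hyperparameters, and the thread-per-member parallelization below are illustrative choices, not the authors' implementation; in particular, this sketch parallelizes coarsely over ensemble members, whereas the paper describes a fine-grained scheme.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

class MLP:
    # One-hidden-layer regression network trained by plain gradient descent.
    def __init__(self, n_in, n_hidden, rng):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)  # cache activations for step()
        return self.h @ self.w2 + self.b2

    def step(self, X, err, lr):
        # err holds dLoss/df per sample; take one gradient-descent step.
        n = len(X)
        dh = np.outer(err, self.w2) * (1.0 - self.h ** 2)
        self.w2 -= lr * (self.h.T @ err / n)
        self.b2 -= lr * err.mean()
        self.W1 -= lr * (X.T @ dh / n)
        self.b1 -= lr * dh.mean(axis=0)

def train_ncl_ensemble(X, y, n_members=8, n_hidden=10, lam=0.5,
                       lr=0.05, epochs=200, workers=4, seed=0):
    # Illustrative hyperparameters, not values from the paper.
    rng = np.random.default_rng(seed)
    nets = [MLP(X.shape[1], n_hidden, rng) for _ in range(n_members)]
    # "Resampling": each member trains on its own bootstrap sample.
    boot = [rng.integers(0, len(X), len(X)) for _ in range(n_members)]

    def update(i, fbar):
        Xi, yi = X[boot[i]], y[boot[i]]
        fi = nets[i].forward(Xi)
        # Negative-correlation gradient, evaluated on the member's resample:
        # (f_i - y) - lam * (f_i - fbar); lam = 0 recovers independent bagging.
        nets[i].step(Xi, (fi - yi) - lam * (fi - fbar[boot[i]]), lr)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(epochs):
            # The penalty couples members through the ensemble mean, so the
            # mean is recomputed serially, then member updates run in parallel.
            fbar = np.mean([net.forward(X) for net in nets], axis=0)
            list(pool.map(update, range(n_members), [fbar] * n_members))

    return nets, lambda Xq: np.mean([net.forward(Xq) for net in nets], axis=0)

# Hypothetical usage on a toy regression problem:
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
nets, predict = train_ncl_ensemble(X, y)
print("train MSE:", np.mean((predict(X) - y) ** 2))

Threads are used here because NumPy releases the GIL inside its kernels; for larger ensembles or cluster settings, processes or MPI (as in the MPI-based systems the paper builds on) would be the natural substitute, and a fine-grained variant would additionally parallelize the linear algebra inside each member's update.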