Extreme Learning Machine (ELM) assigns its input weights and biases randomly, which inevitably introduces stochastic behavior and can reduce generalization performance. In this paper, we propose a meta-learning model of ELM, called Meta-ELM. The Meta-ELM architecture consists of several base ELMs and one top ELM, so learning proceeds in two stages. First, each base ELM is trained on a subset of the training data. Then, the top ELM is learned with the base ELMs as its hidden nodes. Theoretical analysis and experimental results on several artificial and benchmark regression datasets show that the proposed Meta-ELM model is feasible and effective.
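The abstract does not specify implementation details (activation function, number of hidden nodes, how the data subsets are formed, or how the output weights are solved), so the following is only a minimal sketch of the two-stage idea, assuming tanh activations, a Moore-Penrose pseudoinverse least-squares solution for the output weights, and hypothetical helper names (train_elm, train_meta_elm):

import numpy as np

def train_elm(X, y, n_hidden, rng):
    # Single ELM: random input weights and biases, analytic output weights.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y      # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def train_meta_elm(X, y, n_base=5, n_hidden=20, seed=0):
    # Stage 1: train each base ELM on a subset of the training data.
    rng = np.random.default_rng(seed)
    subsets = np.array_split(rng.permutation(len(X)), n_base)
    base = [train_elm(X[idx], y[idx], n_hidden, rng) for idx in subsets]
    # Stage 2: treat the base-ELM outputs as hidden nodes of the top ELM
    # and solve for the top-layer weights by least squares.
    H_top = np.column_stack([elm_predict(X, W, b, beta) for W, b, beta in base])
    alpha = np.linalg.pinv(H_top) @ y
    return base, alpha

def meta_elm_predict(X, base, alpha):
    H_top = np.column_stack([elm_predict(X, W, b, beta) for W, b, beta in base])
    return H_top @ alpha

# Toy regression check on a sinc-like curve.
X = np.linspace(-10, 10, 400).reshape(-1, 1)
y = np.sinc(X).ravel()
base, alpha = train_meta_elm(X, y)
print(np.mean((meta_elm_predict(X, base, alpha) - y) ** 2))

In this sketch the base ELMs are fixed after stage one; only the top-layer weights alpha are fitted over their outputs, mirroring the two-stage procedure described in the abstract.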