Traditional supervised learning learns from whatever training examples are given to it. This is dramatically different from human learning: humans learn simple examples before conquering hard ones, in order to minimize effort. Effort can equate to energy consumption, and it is important for machine learning modules to use minimal energy in real-world deployments. In this paper, we propose a novel, simple, and effective machine learning paradigm that explicitly exploits this simple-to-complex (S2C) human learning strategy, and we implement it efficiently on top of C4.5. Experimental results show that S2C has several distinctive advantages over the original C4.5. First, S2C takes much less effort to learn the training examples than C4.5, which selects examples randomly. Second, with minimal effort, the learning process is much more stable. Finally, even though S2C only locally updates the model with minimal effort, it is as accurate as the global learner C4.5. Applications of this simple-to-complex learning strategy to real-world learning tasks, especially cognitive learning tasks, should be fruitful.
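To make the S2C idea concrete, the sketch below orders training examples from simple to complex and trains on growing prefixes of that ordering. It is an assumed illustration, not the paper's implementation: the "simplicity" proxy (distance to an example's own class centroid) is our assumption, and scikit-learn's CART tree stands in for C4.5, which is retrained at each stage because CART has no local-update API.

```python
# Hedged sketch of simple-to-complex (S2C) training.
# Assumptions (not from the paper): simplicity = closeness to the
# example's own class centroid; CART replaces C4.5.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def s2c_order(X, y):
    """Return indices sorted from simple to complex examples."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    # One centroid per class; distance to it serves as a difficulty score.
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    difficulty = np.array(
        [np.linalg.norm(x - centroids[c]) for x, c in zip(X, y)]
    )
    return np.argsort(difficulty)  # easiest (closest to centroid) first

def s2c_train(X, y, n_stages=3):
    """Train on growing simple-to-complex prefixes of the data."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    order = s2c_order(X, y)
    model = DecisionTreeClassifier(random_state=0)
    for stage in range(1, n_stages + 1):
        # Cumulative prefix: easy examples first, hard ones added later.
        upto = max(2, len(order) * stage // n_stages)
        model.fit(X[order[:upto]], y[order[:upto]])
    return model
```

A true S2C learner would update only the parts of the tree affected by the newly added hard examples; the cumulative retraining here only reproduces the simple-to-complex presentation order, not the local-update efficiency claimed in the paper.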