Learning in open-ended dynamic distributed environments
Eighteenth National Conference on Artificial Intelligence
We propose a theoretical framework for the specification and analysis of a class of learning problems that arise in open-ended environments containing multiple, distributed, dynamic data and knowledge sources. We introduce a family of learning operators that permits precise specification of some existing solutions and facilitates the design and analysis of new algorithms for this class of problems. We identify properties of instance representations, hypothesis representations, and learning operators that make exact learning possible in some settings, and we explore relationships between models of learning that use different subsets of the proposed operators under certain assumptions.
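To make the abstract's notion of learning operators concrete, here is a minimal sketch, not taken from the paper itself: the operator names (`learn`, `refine`, `combine`) and the toy hypothesis representation (label counts, i.e., a sufficient statistic) are illustrative assumptions. It shows one setting in which exact learning from distributed sources is possible, because combining per-site hypotheses yields the same result as learning from the pooled data.

```python
from typing import Dict, List, Tuple

# Hypothetical representations (assumptions for illustration only):
# an instance is a (features, label) pair; a hypothesis is a label-count
# dictionary, which acts as a sufficient statistic for the data seen.
Instance = Tuple[Tuple[float, ...], str]
Hypothesis = Dict[str, int]

def learn(data: List[Instance]) -> Hypothesis:
    """Batch learning operator: induce a hypothesis from one data source."""
    h: Hypothesis = {}
    for _, label in data:
        h[label] = h.get(label, 0) + 1
    return h

def refine(h: Hypothesis, new_data: List[Instance]) -> Hypothesis:
    """Incremental operator: update a hypothesis as a dynamic source
    yields new instances, without revisiting earlier data."""
    out = dict(h)
    for _, label in new_data:
        out[label] = out.get(label, 0) + 1
    return out

def combine(hs: List[Hypothesis]) -> Hypothesis:
    """Composition operator: merge hypotheses learned at distributed sites."""
    out: Hypothesis = {}
    for h in hs:
        for label, count in h.items():
            out[label] = out.get(label, 0) + count
    return out

# Exact learning in this toy setting: learning from the union of all
# sources equals combining the hypotheses learned at each site.
site_a: List[Instance] = [((1.0,), "pos"), ((0.5,), "neg")]
site_b: List[Instance] = [((0.2,), "pos")]
assert combine([learn(site_a), learn(site_b)]) == learn(site_a + site_b)
assert refine(learn(site_a), site_b) == learn(site_a + site_b)
```

With richer hypothesis representations (e.g., decision trees), this equality generally fails, which is why the choice of instance and hypothesis representations matters for exactness.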