We propose a theoretical framework for the specification and analysis of a class of learning problems that arise in open-ended environments containing multiple, distributed, dynamic data and knowledge sources. We introduce a family of learning operators that permit precise specification of existing solutions and facilitate the design and analysis of new algorithms for this class of problems. We state properties of instance representations, hypothesis representations, and learning operators that make exact learning possible in certain settings. We also explore relationships among models of learning that use different subsets of the proposed operators under certain assumptions.