Proceedings of the Seventh International Conference on Machine Learning (1990)
Similarity, typicality, and categorization
Similarity and analogical reasoning
Intelligence without representation
Artificial Intelligence
Learning to Perceive and Act by Trial and Error
Machine Learning
Automatic programming of behavior-based robots using reinforcement learning
Artificial Intelligence
Practical Issues in Temporal Difference Learning
Machine Learning
A connectionist model for commonsense reasoning incorporating rules and similarities
Knowledge Acquisition
Integrating rules and connectionism for robust commonsense reasoning
The evolution of strategies for multiagent environments
Adaptive Behavior
Extracting Refined Rules from Knowledge-Based Neural Networks
Machine Learning
Incorporating advice into agents that learn from reinforcements
AAAI '94 Proceedings of the twelfth national conference on Artificial intelligence (vol. 1)
Reinforcement learning of non-Markov decision processes
Artificial Intelligence - Special volume on computational research on interaction and agency, part 2
Robust reasoning: integrating rule-based and similarity-based reasoning
Artificial Intelligence
Learning in the presence of concept drift and hidden contexts
Machine Learning
Machine Learning
Learning, action and consciousness: a hybrid approach toward modelling consciousness
Neural Networks - 1997 special issue on neural networks for consciousness
Reinforcement learning with hierarchies of machines
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
Multi-time models for temporally abstract planning
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
Some experiments with a hybrid model for learning sequential decision making
Information Sciences—Informatics and Computer Science: An International Journal
The Architecture of Cognition
Neuro-Dynamic Programming
Computational Architectures Integrating Neural and Symbolic Processes: A Perspective on the State of the Art
Soar Papers: Research on Integrated Intelligence
Learning Logical Definitions from Relations
Machine Learning
Incremental Induction of Decision Trees
Machine Learning
Learning to Predict by the Methods of Temporal Differences
Machine Learning
Discovery as Autonomous Learning from the Environment
Machine Learning
Experiments with Incremental Concept Formation: UNIMEM
Machine Learning
Knowledge Acquisition Via Incremental Conceptual Clustering
Machine Learning
Hierarchical Explanation-Based Reinforcement Learning
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
An integrated framework for learning and reasoning
Journal of Artificial Intelligence Research
Reinforcement learning: a survey
Journal of Artificial Intelligence Research
Autonomous learning of sequential tasks: experiments and analyses
IEEE Transactions on Neural Networks
Knowledge extraction from reinforcement learning
New learning paradigms in soft computing
Neural Networks and Structured Knowledge: Rule Extraction and Applications
Applied Intelligence
Representation of procedural knowledge of an intelligent agent using a novel cognitive memory model
KES'05 Proceedings of the 9th international conference on Knowledge-Based Intelligent Information and Engineering Systems - Volume Part I
In developing autonomous agents, one usually emphasizes only (situated) procedural knowledge, ignoring more explicit declarative knowledge. On the other hand, in developing symbolic reasoning models, one usually emphasizes only declarative knowledge, ignoring procedural knowledge. In contrast, we have developed a learning model, CLARION, which is a hybrid connectionist model consisting of both localist and distributed representations, based on the two-level approach proposed in [40]. CLARION learns and utilizes both procedural and declarative knowledge, tapping into the synergy of the two types of processes, and enables an agent to learn in situated contexts and generalize the resulting knowledge to different scenarios. It unifies connectionist, reinforcement, and symbolic learning in a synergistic way to perform on-line, bottom-up learning. This summary paper presents one version of the architecture and some results of the experiments.
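The two-level idea in the abstract can be sketched in a few lines of code. The sketch below is illustrative only, not the actual CLARION implementation: it stands in a tabular Q-learner for the bottom-level distributed network, and promotes rewarded state-action pairs into explicit top-level rules as a stand-in for CLARION's rule extraction. All class and method names (`TwoLevelAgent`, `choose`, `learn`) are hypothetical.

```python
import random


class TwoLevelAgent:
    """Minimal sketch of a CLARION-style two-level agent.

    Bottom level: implicit procedural knowledge, here a tabular
    Q-learner (the actual model uses a distributed neural network).
    Top level: explicit declarative rules, extracted bottom-up from
    rewarded bottom-level decisions.
    """

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}       # bottom level: (state, action) -> value
        self.rules = {}   # top level: state -> action

    def q_value(self, state, action):
        return self.q.get((state, action), 0.0)

    def choose(self, state):
        # The explicit top level fires first if a rule covers the state.
        if state in self.rules:
            return self.rules[state]
        # Otherwise fall back to the implicit, epsilon-greedy bottom level.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q_value(state, a))

    def learn(self, state, action, reward, next_state):
        # Bottom-level temporal-difference (Q-learning) update.
        best_next = max(self.q_value(next_state, a) for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] = self.q_value(state, action) + \
            self.alpha * (td_target - self.q_value(state, action))
        # Bottom-up rule extraction (crudely simplified): a rewarded
        # step is promoted into an explicit rule at the top level.
        if reward > 0:
            self.rules[state] = action
```

A single rewarded transition, e.g. `agent.learn("s0", "right", 1.0, "s1")`, both nudges the bottom-level value of `("s0", "right")` upward and installs the explicit rule `"s0" -> "right"`, so subsequent calls to `agent.choose("s0")` act from the declarative level without consulting the Q-values.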