We present a connectionist cognitive model of complex problem solving that integrates cognitive biases with distance-based and environmental rewards under a temporal-difference learning mechanism. The model is tested against experimental data obtained in a well-defined, planning-intensive problem. We show that incorporating cognitive biases (symmetry and simplicity) into a temporal-difference learning rule (SARSA) increases model adequacy: the solution space explored by biased models better fits observed human solutions. While learning from explicit rewards alone is intrinsically slow, adding distance-based rewards, a measure of closeness to the goal, to the learning rule significantly accelerates learning. Finally, the model correctly predicts that explicit rewards have little impact on problem solvers' ability to discover optimal solutions.
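The reward combination described above can be sketched in a minimal form. This is not the paper's model: it assumes a toy one-dimensional task instead of the planning problem studied, illustrative constants (shaping weight 0.1, learning rate 0.2), and omits the cognitive biases entirely. It only illustrates how a tabular SARSA update can sum an explicit environmental reward with a distance-based closeness-to-goal term, and why the latter accelerates learning when explicit rewards are sparse:

```python
import random

def sarsa(n_states=10, episodes=300, alpha=0.2, gamma=0.95, epsilon=0.1,
          distance_reward=True, seed=0):
    """Tabular SARSA on a 1-D chain: start at state 0, goal at n_states - 1.

    The explicit reward is 1 at the goal and 0 elsewhere.  When
    distance_reward is True, a small bonus proportional to the decrease in
    distance-to-goal is added (a stand-in for the distance-based reward).
    Returns the number of steps taken in each episode.
    """
    rng = random.Random(seed)
    goal, acts = n_states - 1, (-1, +1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in acts}

    def choose(s):
        if rng.random() < epsilon:
            return rng.choice(acts)
        best = max(Q[(s, a)] for a in acts)   # greedy with random tie-break
        return rng.choice([a for a in acts if Q[(s, a)] == best])

    steps_per_episode = []
    for _ in range(episodes):
        s, a, steps = 0, choose(0), 0
        while s != goal and steps < 10 * n_states:
            s2 = min(max(s + a, 0), goal)
            r = 1.0 if s2 == goal else 0.0    # explicit (environmental) reward
            if distance_reward:               # closeness-to-goal term
                r += 0.1 * ((goal - s) - (goal - s2))
            a2 = choose(s2)
            # SARSA update: on-policy temporal-difference learning
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a, steps = s2, a2, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode

shaped = sarsa(distance_reward=True)
unshaped = sarsa(distance_reward=False)
```

With only the sparse goal reward, early episodes are essentially a random walk, so value estimates propagate back from the goal slowly; the distance-based term provides a learning signal on every step, which is the acceleration effect the abstract reports.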