A model of intentional actions is presented through the operation of two connected neural networks. A deterministic causal recurrent network relates a random initial state to an ordered final state. A perceptron-like, feed-forward network provides a memory mechanism that links the final states to the original initial states. An unsupervised learning mechanism selects which final states are defined as goals, to be retrieved together with the initial states leading to them. Causal sequences of states are thereby transformed into procedures directed towards the achievement of goals. We propose a mechanism through which goals and their achievement in goal-directed actions can be emergent properties of self-organizing networks not initially endowed with intentionality. This allows for a monist, non-mentalist description that does not need to resort to intentional mental states as causes of intentional actions. Cognitive, neurophysiological and philosophical implications are discussed.
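A minimal sketch of the two-network arrangement described above, under assumptions not specified in the abstract: binary ±1 units, a fixed random weight matrix for the deterministic recurrent dynamics, synchronous sign updates carrying a random initial state to a settled final state, and a simple Hebbian outer-product store as the perceptron-like feed-forward memory that links a final state back to the initial state that produced it. All sizes, update rules, and function names here are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # illustrative number of binary (+1/-1) units

# Deterministic causal recurrent network: fixed random weights.
# Synchronous sign updates drive a random initial state toward
# an ordered final state (an attractor, when the dynamics settle).
W = rng.standard_normal((N, N))

def run_to_final(state, max_steps=50):
    for _ in range(max_steps):
        nxt = np.sign(W @ state)
        nxt[nxt == 0] = 1.0  # break ties deterministically
        if np.array_equal(nxt, state):
            break
        state = nxt
    return state

# Perceptron-like feed-forward memory: Hebbian outer-product
# association from final states back to initial states.
M = np.zeros((N, N))

def store(initial, final):
    global M
    M += np.outer(initial, final) / N

def recall(final):
    out = np.sign(M @ final)
    out[out == 0] = 1.0
    return out

# One causal episode: random initial state -> deterministic final
# state, then memorize the (final -> initial) link so the goal
# state can retrieve the initial state leading to it.
x0 = rng.choice([-1.0, 1.0], size=N)
xf = run_to_final(x0.copy())
store(x0, xf)
print(np.array_equal(recall(xf), x0))  # True for a single stored pair
```

With a single stored pair the recall is exact, since `M @ xf = x0 * (xf . xf) / N = x0`; with many stored pairs the outer-product memory degrades gracefully, which is the usual trade-off of Hebbian association.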