This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution to the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end-effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control.
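The core computation the abstract describes — mapping a spatial direction vector into a motor direction vector for a redundant arm, with the transform acquired from motor babbling — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the neural learning is replaced by an on-line least-squares fit of the local direction map from small "babbled" joint perturbations, and the 3-link planar arm, link lengths, gains, and sample counts are all illustrative assumptions.

```python
import numpy as np

# Hypothetical 3-link planar arm: 3 joint angles controlling a 2-D
# end effector, so the arm is redundant (more DOF than task dimensions),
# mirroring the motor equivalence setting in the abstract.
LINKS = np.array([1.0, 0.8, 0.6])  # illustrative link lengths

def fk(theta):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    angles = np.cumsum(theta)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def babble_local_map(theta, n_samples=50, eps=1e-3, seed=0):
    """Estimate the local spatial-to-motor transform from 'babbled'
    joint perturbations and the spatial displacements they produce.
    (A least-squares stand-in for the model's adaptive learning.)"""
    rng = np.random.default_rng(seed)
    d_theta = rng.normal(scale=eps, size=(n_samples, 3))
    d_x = np.array([fk(theta + d) - fk(theta) for d in d_theta])
    # Solve d_x ~= d_theta @ J.T for the local Jacobian J (2x3).
    J_T, *_ = np.linalg.lstsq(d_theta, d_x, rcond=None)
    return J_T.T

def reach(theta, target, steps=200, gain=0.2):
    """Step the end effector toward the target: compute the spatial
    direction vector, convert it to joint rotations (the motor
    direction vector), and update the arm configuration."""
    for _ in range(steps):
        direction = target - fk(theta)      # spatial direction vector
        if np.linalg.norm(direction) < 1e-3:
            break
        J = babble_local_map(theta)
        # Motor direction vector: joint rotations moving the end
        # effector along the desired spatial direction.
        theta = theta + gain * (np.linalg.pinv(J) @ direction)
    return theta

theta0 = np.array([0.3, 0.4, 0.5])
target = np.array([1.2, 0.9])
theta_final = reach(theta0, target)
```

Because the redundancy is resolved locally at each step rather than by replaying a stored trajectory, the same loop handles clamped joints or altered geometry simply by re-estimating the local map — a rough analogue of the tool-use and perturbation compensations reported for the model.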