To make progress in understanding human visuomotor behavior, we will need to understand its basic components at an abstract level. One way to achieve such an understanding is to create a model of a human that is complex enough to generate such behaviors. Recent technological advances allow progress in this direction: graphics models that simulate extensive human capabilities can serve as platforms for developing synthetic models of visuomotor behavior. Currently, such models capture only a small portion of a full behavioral repertoire, but for the behaviors they do model, they can describe complete visuomotor subsystems at a useful level of detail. The value in doing so is that the body's elaborate visuomotor structures greatly simplify the specification of the abstract behaviors that guide them. The net result is that one is essentially faced with proposing an embodied “operating system” model for picking the right set of abstract behaviors at each instant. This paper outlines one such model. A centerpiece of the model uses vision to aid the behavior that has the most to gain from taking environmental measurements. Preliminary tests of the model against human performance in realistic VR environments show that the main features of the model appear in human behavior.
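The gaze-arbitration idea at the centerpiece of the model — give the single visual measurement to whichever concurrent behavior has the most to gain from it — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the behavior names, the linear growth of uncertainty, and the loss rates are all assumptions introduced here for demonstration.

```python
class Behavior:
    """Hypothetical micro-behavior that tracks how uncertain it is
    about the part of the world it cares about."""

    def __init__(self, name, loss_rate):
        self.name = name
        self.uncertainty = 1.0   # grows while this behavior goes unobserved
        self.loss_rate = loss_rate

    def expected_loss(self):
        # Expected cost of acting on a stale state estimate
        # (illustrative: linear in uncertainty).
        return self.loss_rate * self.uncertainty

    def observe(self):
        # A visual measurement resets this behavior's uncertainty.
        self.uncertainty = 0.0

    def drift(self, dt=1.0):
        # Without a measurement, state uncertainty accumulates.
        self.uncertainty += dt


def allocate_gaze(behaviors):
    """Assign the one available visual measurement to the behavior
    with the largest expected loss from its current uncertainty."""
    target = max(behaviors, key=lambda b: b.expected_loss())
    target.observe()
    return target.name


# Three hypothetical concurrent behaviors with different stakes.
behaviors = [Behavior("avoid-obstacles", 2.0),
             Behavior("follow-path", 1.0),
             Behavior("pick-up-litter", 0.5)]

schedule = []
for _ in range(4):
    for b in behaviors:
        b.drift()
    schedule.append(allocate_gaze(behaviors))

print(schedule)
# → ['avoid-obstacles', 'follow-path', 'avoid-obstacles', 'pick-up-litter']
```

Note the emergent round-robin weighted by stakes: the high-cost behavior is serviced most often, while the low-cost one is attended to only once its accumulated uncertainty makes it the biggest potential loser — the scheduling pattern the abstract attributes to the "operating system" layer.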