An important goal in studying both human and artificial intelligence is to understand how a natural or artificial learning system deals with the uncertainty and ambiguity of the real world. For a natural intelligence system such as a human toddler, the relevant aspects of a learning environment are only those that make contact with the learner's sensory system. In real-world interactions, what the child perceives depends critically on the child's own actions, as these actions bring information into and out of the sensory field. The present analyses show how, in the case of a toddler playing with toys, these perception-action loops may simplify the learning environment by selecting relevant information and filtering out irrelevant information.

This paper reports new findings from a novel method that describes the visual learning environment from a young child's point of view and measures the visual information a child perceives in real-time toy play with a parent. The main results are: 1) what the child perceives depends primarily on the child's own actions, but also on the social partner's actions; 2) manual actions, in particular, play a critical role in creating visual experiences in which one object dominates; 3) this selection and filtering of visual objects through the child's actions yields more constrained and cleaner input that seems likely to facilitate cognitive learning processes. These findings have broad implications for how one studies and thinks about human and artificial learning systems.