Spatial scaffolding is a naturally occurring human teaching behavior in which teachers use their bodies to spatially structure the learning environment and direct the learner's attention. Robotic systems can take advantage of these simple, highly reliable spatial scaffolding cues to learn from human teachers. We present an integrated robotic architecture that combines social attention and machine learning components to learn tasks effectively from natural spatial scaffolding interactions with human teachers. We evaluate the performance of this architecture against human learning data drawn from a novel study of embodied cues in human task learning and teaching behavior. The evaluation provides quantitative evidence for the utility of spatial scaffolding to learning systems, and it supported the construction of a novel, interactive demonstration of a humanoid robot taking advantage of spatial scaffolding cues to learn from natural human teaching behavior.