Integrating active perception with an autonomous robot architecture
AGENTS '98 Proceedings of the second international conference on Autonomous agents
As autonomous robots become increasingly adept at simple tasks such as moving from place to place and picking up and delivering objects, developing natural interfaces for controlling them has emerged as an important area of robotics research. In the context of building a robot "waiter", we demonstrate the use of the Perseus architecture for gesture recognition, teamed with the Animate Agent architecture for tightly coupled perception and action. Of particular significance is the ease with which this task was implemented using the architectures and routines we had already created for other tasks.
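To make the perception-action coupling concrete, the following is a minimal hypothetical sketch of a gesture-driven control loop in the spirit described above: a Perseus-style recognizer produces symbolic gesture labels, and an Animate-Agent-style executor dispatches registered skills in response. All class names, method signatures, and gesture labels here are illustrative assumptions, not APIs from the paper.

```python
# Hypothetical sketch: coupling a gesture recognizer to robot actions.
# Names and structure are illustrative, not taken from the paper.

from typing import Callable, Dict


class GestureRecognizer:
    """Stand-in for a Perseus-style visual routine: maps sensor
    frames to symbolic gesture labels (e.g. 'point', 'wave')."""

    def recognize(self, frame: dict) -> str:
        # A real system would run visual routines over camera images;
        # here we simply read a label planted in a fake frame.
        return frame.get("gesture", "none")


class AnimateAgent:
    """Stand-in executor: dispatches reactive skills keyed by percepts,
    mimicking tightly coupled perception and action."""

    def __init__(self) -> None:
        self.skills: Dict[str, Callable[[], str]] = {}

    def register(self, gesture: str, skill: Callable[[], str]) -> None:
        self.skills[gesture] = skill

    def act(self, gesture: str) -> str:
        # Unrecognized gestures fall through to an idle behavior.
        return self.skills.get(gesture, lambda: "idle")()


# Reuse of previously built routines amounts to registering them
# as skills for new gestures.
agent = AnimateAgent()
agent.register("point", lambda: "drive-to-indicated-object")
agent.register("wave", lambda: "approach-customer")

recognizer = GestureRecognizer()
frames = [{"gesture": "wave"}, {"gesture": "point"}, {}]
actions = [agent.act(recognizer.recognize(f)) for f in frames]
print(actions)  # ['approach-customer', 'drive-to-indicated-object', 'idle']
```

The design point illustrated is the one the abstract emphasizes: because skills are registered independently of the recognizer, routines built for earlier tasks can be reused for a new task simply by binding them to new gestures.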