Human-Machine Collaborative Systems for Microsurgical Applications
International Journal of Robotics Research
We present the results of using Hidden Markov Models (HMMs) for automatic segmentation and recognition of user motions. Previous work on recognizing user intent through human-machine interfaces has used task-level HMMs with a single hidden state for each sub-task. In contrast, many speech recognition systems employ HMMs at the phoneme level and use a network of HMMs to model words. Analogously, we use multi-state, continuous HMMs to model action at the "gesteme" level, and a network of HMMs to describe a task or activity. As a result, we are able to create a "task language" that is used to model and segment two different tasks performed with a human-machine cooperative manipulation system. Tests were performed using force and position data recorded from an instrument held simultaneously by a robot and a human operator. Experimental results show a recognition accuracy exceeding 85%. The resulting information could be used for intelligent command of virtual and teleoperated environments, and for implementation of contextually appropriate virtual fixtures that provide dynamic operator assistance during complex tasks.
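To illustrate the idea of decoding a gesteme sequence from sensor data, the following is a minimal sketch of Viterbi decoding over a toy HMM. The states, observation labels, and all probabilities here are hypothetical placeholders for illustration only; the paper uses multi-state continuous HMMs trained on real force/position data, whereas this toy uses a discrete, hand-specified model.

```python
import math

# Toy "task language": hidden states are hypothetical gestemes; observations
# are coarse-quantized force readings. All labels and probabilities below are
# illustrative assumptions, not the paper's learned parameters.
states = ["approach", "contact", "retract"]

start_p = {"approach": 0.8, "contact": 0.1, "retract": 0.1}
trans_p = {
    "approach": {"approach": 0.7, "contact": 0.3, "retract": 0.0},
    "contact":  {"approach": 0.0, "contact": 0.7, "retract": 0.3},
    "retract":  {"approach": 0.0, "contact": 0.0, "retract": 1.0},
}
emit_p = {
    "approach": {"low_force": 0.9, "high_force": 0.1},
    "contact":  {"low_force": 0.2, "high_force": 0.8},
    "retract":  {"low_force": 0.9, "high_force": 0.1},
}

def viterbi(observations):
    """Return the most likely gesteme sequence for a list of observations."""
    def log(p):
        return math.log(p) if p > 0 else float("-inf")

    # Initialize with the start distribution and first emission.
    v = [{s: log(start_p[s]) + log(emit_p[s][observations[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        v.append({})
        back.append({})
        for s in states:
            # Pick the predecessor state maximizing the path probability.
            best_prev = max(states, key=lambda p: v[t - 1][p] + log(trans_p[p][s]))
            v[t][s] = (v[t - 1][best_prev] + log(trans_p[best_prev][s])
                       + log(emit_p[s][observations[t]]))
            back[t][s] = best_prev
    # Backtrack from the best final state to recover the segmentation.
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

obs = ["low_force", "low_force", "high_force", "high_force", "low_force"]
print(viterbi(obs))
# → ['approach', 'approach', 'contact', 'contact', 'retract']
```

Each decoded state label segments the observation stream into contiguous gesteme episodes; chaining several such HMMs into a network is what lets a full task be modeled as a sequence of gestemes.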