This paper describes the role and use of an explicit task representation in applications where humans interact with non-traditional computer environments using gestures. The focus lies on training and assistance applications whose training objectives include implicit knowledge, e.g., motor skills. On the one hand, such applications require a clear and transparent description of what has to be done during the interaction; on the other hand, they are highly interactive and multimodal. Human-computer interaction is therefore modelled top-down as a collaboration in which each participant pursues an individual goal stipulated by a task. Bottom-up, gesture recognition determines the user's actions by processing the continuous data streams from the environment. The recognized gesture or action is interpreted as the user's intention and evaluated within the collaboration, allowing the system to reason about how best to provide guidance at that point. A vertical prototype combining a haptic virtual environment with a knowledge-based reasoning system is discussed, and the evolution of the task-based collaboration is demonstrated.
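To make the described coupling of bottom-up gesture recognition and top-down task representation concrete, the following is a minimal sketch, not the authors' implementation: a toy recognizer classifies actions from a continuous sensor stream, and an explicit task model evaluates each recognized action against the current task step to decide what guidance to give. All names (TaskStep, TaskCollaboration, recognize) and the threshold-based classifier are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, List

# --- Top-down: explicit task representation ---

@dataclass
class TaskStep:
    expected_action: str   # action the task prescribes at this step
    guidance: str          # hint given when the user's action deviates

class TaskCollaboration:
    """Evaluates recognized user actions against an explicit task model."""

    def __init__(self, steps: List[TaskStep]):
        self.steps = steps
        self.current = 0

    def evaluate(self, action: str) -> str:
        if self.current >= len(self.steps):
            return "Task already completed."
        step = self.steps[self.current]
        if action == step.expected_action:
            self.current += 1  # user's intention matches the task: advance
            return f"Step '{action}' done ({self.current}/{len(self.steps)})."
        # Deviation: reason about guidance from the task representation.
        return f"Expected '{step.expected_action}'. Hint: {step.guidance}"

# --- Bottom-up: gesture recognition over a continuous data stream ---
# Toy classifier over (grip_force, velocity) samples; a real system would
# segment and classify haptic/tracking data with trained models.

def recognize(samples: Iterable[tuple]) -> Iterator[str]:
    for grip_force, velocity in samples:
        if grip_force > 0.8 and velocity < 0.1:
            yield "grasp"
        elif grip_force < 0.2:
            yield "release"
        elif velocity > 0.5:
            yield "move"

# --- Coupling: recognized actions drive the task-based collaboration ---
if __name__ == "__main__":
    task = TaskCollaboration([
        TaskStep("grasp", "Close your hand firmly around the tool."),
        TaskStep("move", "Guide the tool toward the target."),
        TaskStep("release", "Open your hand to place the tool."),
    ])
    stream = [(0.9, 0.05), (0.9, 0.7), (0.1, 0.0)]  # grasp, move, release
    for action in recognize(stream):
        print(task.evaluate(action))
```

The point of the sketch is the separation the abstract describes: the recognizer never knows about goals, and the task model never touches raw data; only the recognized action, read as the user's intention, crosses the boundary and is evaluated against the task to produce guidance.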