Nowadays, the claim that a human-computer interface is user-friendly must be supported by a formal usability experiment. Due to their inherent complexity, this is particularly true for multimodal interfaces. For such rich user interfaces there is little support for automated testing and observation, so much of the preparation for a formal evaluation is spent adapting the program code itself. Based on NiMMiT, a high-level notation for describing and automatically executing multimodal interaction techniques, we propose in this paper an easy way for the interaction designer to collect and log data related to a user experiment. Inserting 'probes' and 'filters' into NiMMiT interaction diagrams is considerably more efficient than editing the code of the interaction technique itself. We clarify our approach as applied during a concrete user experiment.
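The core idea of probes and filters can be illustrated with a minimal sketch. This is a hypothetical illustration only, not NiMMiT's actual notation or API: a `Probe` observes events flowing through an interaction technique and logs those that pass its filter predicate, so the technique's own code needs no instrumentation. The class names, the `Event` structure, and the event names (`"move"`, `"select"`) are all assumptions made for the example.

```python
# Hypothetical sketch of the probe/filter idea (not NiMMiT's actual API).
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    name: str
    payload: dict

@dataclass
class Probe:
    """Logs every event accepted by its filter, without altering the event."""
    accept: Callable[[Event], bool]          # the 'filter'
    log: List[Event] = field(default_factory=list)

    def observe(self, event: Event) -> Event:
        if self.accept(event):
            self.log.append(event)
        return event                          # pass through unchanged

class InteractionTechnique:
    """Minimal event pipeline standing in for an interaction diagram."""
    def __init__(self):
        self.probes: List[Probe] = []

    def attach(self, probe: Probe) -> None:
        self.probes.append(probe)

    def dispatch(self, event: Event) -> None:
        for probe in self.probes:
            probe.observe(event)
        # ...the interaction technique itself would handle the event here

# Usage: log only object-selection events during an experiment.
technique = InteractionTechnique()
selection_probe = Probe(accept=lambda e: e.name == "select")
technique.attach(selection_probe)

technique.dispatch(Event("move", {"pos": (0, 1, 2)}))
technique.dispatch(Event("select", {"object": "cube"}))
print(len(selection_probe.log))  # -> 1
```

The design choice mirrors the paper's claim: the logging concern (the probe) is attached to the diagram from outside, so changing what is measured never requires touching the interaction technique's code.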