There is a need for computer-aided design tools that support rapid, conceptual-level design. In this paper we explore and evaluate how intuitive speech and multitouch input can be combined in a multimodal interface for conceptual 3D modeling. Our system, MozArt, builds on a user's innate abilities, speaking and touching, and has a toolbar- and button-free interface for creating and interacting with computer graphics models. We briefly cover the hardware and software technology behind MozArt, and present a pilot study with first-time CAD users comparing our multimodal system against a conventional multitouch modeling interface. While a larger study is required to obtain a statistically significant comparison of the efficiency and accuracy of the two interfaces, a majority of the participants preferred the multimodal interface over the multitouch one. We summarize lessons learned and discuss directions for future research.
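The core idea, combining a spoken command with concurrent touch input to resolve what the user means, can be sketched as follows. This is a hypothetical illustration, not the paper's actual architecture: the class and method names (`MultimodalFuser`, `on_touch`, `on_speech`) and the fusion policy (pair a speech command with the most recent touch location) are assumptions for the sake of the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of speech + touch fusion; names and policy are
# illustrative, not taken from the MozArt paper.

@dataclass
class TouchEvent:
    x: float  # normalized screen coordinates
    y: float

@dataclass
class SpeechCommand:
    verb: str  # e.g. "create", "move", "delete"
    obj: str   # e.g. "cube", "cylinder"

class MultimodalFuser:
    """Pairs a recognized speech command with recent touch context."""

    def __init__(self) -> None:
        self.touches: List[TouchEvent] = []

    def on_touch(self, event: TouchEvent) -> None:
        # Buffer touch events until a speech command arrives.
        self.touches.append(event)

    def on_speech(self, cmd: SpeechCommand) -> Tuple[str, str, Tuple[float, float]]:
        # Resolve the deictic target using the most recent touch location,
        # then clear the buffer for the next interaction.
        if not self.touches:
            raise ValueError("speech command arrived with no touch context")
        t = self.touches[-1]
        self.touches.clear()
        return (cmd.verb, cmd.obj, (t.x, t.y))

fuser = MultimodalFuser()
fuser.on_touch(TouchEvent(0.4, 0.7))
action = fuser.on_speech(SpeechCommand("create", "cube"))
# action == ("create", "cube", (0.4, 0.7))
```

A real system would add time-window alignment (discarding stale touches) and ambiguity handling when multiple touches precede a command; this sketch only shows why removing toolbars is possible when speech supplies the verb and touch supplies the location.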