Designers routinely explain their designs to one another using sketches and verbal descriptions of behavior, both of which can be understood long before the device has been fully specified. Current design tools, however, fail almost completely to support this kind of interaction: they force designers to specify the details of a design, and typically require that this be done by navigating a forest of menus and dialog boxes rather than by directly describing behaviors with sketches and verbal explanations. We have created a prototype system, called ASSISTANCE, capable of interpreting multimodal explanations of simple 2-D kinematic devices. The program generates a model of the events, and of the causal relationships between those events, that have been described via hand-drawn sketches, sketched annotations, and verbal descriptions. Our goal is to make the designer's interaction with the computer more like interacting with another designer. This requires the ability not only to understand physical devices but also to understand the means by which explanations of those devices are conveyed.
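The causal model described above can be sketched as a simple event graph: events (each arriving through some input modality) connected by cause-effect links. The class names, modality labels, and example events below are illustrative assumptions for exposition, not the system's actual internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """An event mentioned in the designer's explanation."""
    name: str
    modality: str  # hypothetical labels: "sketch", "annotation", or "speech"

@dataclass
class CausalModel:
    """Events plus directed cause-effect links between them."""
    events: dict = field(default_factory=dict)
    causes: list = field(default_factory=list)  # (cause_name, effect_name) pairs

    def add_event(self, name: str, modality: str) -> None:
        self.events[name] = Event(name, modality)

    def add_cause(self, cause: str, effect: str) -> None:
        # Record that one described event brings about another.
        self.causes.append((cause, effect))

    def effects_of(self, name: str) -> list:
        # All events this event directly causes.
        return [e for c, e in self.causes if c == name]

# Hypothetical fragment of an explanation of a 2-D kinematic device:
model = CausalModel()
model.add_event("ball_released", "speech")
model.add_event("ball_rolls_down_ramp", "sketch")
model.add_cause("ball_released", "ball_rolls_down_ramp")
print(model.effects_of("ball_released"))  # ['ball_rolls_down_ramp']
```

Keeping the modality attached to each event reflects the multimodal setting: the same causal structure may be assembled from a mix of drawn and spoken input.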