Mechanical design tools would be considerably more useful if we could interact with them the way human designers communicate design ideas to one another, i.e., using crude sketches and informal speech. Those crude sketches frequently contain pen strokes of two distinct sorts: one type portrays device structure, while the other denotes gestures, such as arrows used to indicate motion. We report here on techniques we developed that use information from both sketch and speech to distinguish gesture strokes from non-gesture strokes, a critical first step in understanding a sketch of a device. We collected and analyzed unconstrained device descriptions, which revealed six common types of gestures. Guided by this knowledge, we developed a classifier that uses both sketch and speech features to distinguish gesture strokes from non-gesture strokes. Experiments with our techniques indicate that the sketch and speech modalities alone produce equivalent classification accuracy, but combining them produces higher accuracy.
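The abstract describes combining the two modalities at the feature level to classify each stroke. A minimal sketch of that idea in Python follows; note the feature names, the synthetic data, and the choice of AdaBoost via scikit-learn are all assumptions for illustration, since the paper's actual feature set and classifier are not given in this abstract:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical setup: each pen stroke is described by sketch features
# (e.g., arc length, curvature, drawing speed) and speech features
# (e.g., whether a motion word such as "rotates" or "pushes" was spoken
# near the stroke in time). The values below are synthetic stand-ins,
# not the paper's data.
rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, n)  # 1 = gesture stroke, 0 = non-gesture

# Synthetic sketch features: gestures drawn to be slightly separable.
sketch = rng.normal(loc=labels[:, None] * 0.8, scale=1.0, size=(n, 3))

# Synthetic speech features: gestures co-occur with motion words more often.
speech = rng.normal(loc=labels[:, None] * 0.8, scale=1.0, size=(n, 2))

# Early (feature-level) fusion: concatenate the two feature vectors.
combined = np.hstack([sketch, speech])

for name, X in [("sketch only", sketch),
                ("speech only", speech),
                ("combined", combined)]:
    acc = cross_val_score(AdaBoostClassifier(), X, labels, cv=5).mean()
    print(f"{name:12s} accuracy: {acc:.3f}")
```

Concatenating feature vectors before training (early fusion) is only one simple way to combine modalities; the pattern the abstract reports, where each modality alone performs comparably but the combination does better, is what such a comparison is meant to surface.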