Lecturers, presenters, and meeting participants often say what they publicly handwrite. In this paper, we report on three empirical explorations of such multimodal redundancy: during whiteboard presentations, during a spontaneous brainstorming meeting, and during the informal annotation and discussion of photographs. We show that redundantly presented words, compared to other words used during a presentation or meeting, tend to be topic-specific and thus are likely to be out-of-vocabulary. We also show that they have significantly higher tf-idf (term frequency-inverse document frequency) weights than other words, which we argue supports the hypothesis that they are dialogue-critical words. We frame the import of these empirical findings by describing SHACER, our recently introduced Speech and HAndwriting reCognizER, which can combine information from instances of redundant handwriting and speech to dynamically learn new vocabulary.
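The tf-idf weighting referred to above can be sketched as follows. This is a minimal illustration of the standard tf-idf formula (raw term count times the log of inverse document frequency), not the exact variant or corpus used in the study; the function name and the toy transcripts are hypothetical.

```python
import math
from collections import Counter

def tfidf_weights(documents):
    """Compute tf-idf weights for each word in each document.

    documents: list of token lists, e.g. one list per meeting or
    presentation transcript. Returns one dict per document mapping
    word -> tf-idf weight.
    """
    n_docs = len(documents)
    # Document frequency: number of transcripts containing each word.
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    weights = []
    for doc in documents:
        tf = Counter(doc)  # raw term frequency within this transcript
        weights.append({
            word: count * math.log(n_docs / df[word])
            for word, count in tf.items()
        })
    return weights
```

Under this weighting, a word that occurs repeatedly in one transcript but rarely across the collection (the profile of a topic-specific, redundantly presented term) receives a high weight, while a word common to every transcript scores zero.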