In this paper we present the findings of three longitudinal case studies that test a new method for conducting multimodal analysis of human behavior. The method engages the researcher integrally in the analysis process, allowing them to guide the identification and discovery of relevant behavior instances within multimodal data. The case studies yielded two analysis strategies: Single-Focus Hypothesis Testing and Multi-Focus Hypothesis Testing. Each proved beneficial to multimodal analysis, supporting either a single focused deep analysis or analysis across multiple angles in unison, and together they exemplified how challenging questions about multimodal datasets can be answered. We describe the new method and present the case studies' findings, detailing how the method supports multimodal analysis and opens the door to a new breed of analysis methods. Two of the three case studies produced publishable results for the respective participants.