The study of embodied communication requires access to multiple data sources, such as multistream video and audio, together with derived data and metadata such as gesture, head, posture, facial expression, and gaze information. The common element running through these data is the co-temporality of the multiple modes of behavior. In this paper, we present the multimedia Visualization for Situated Temporal Analysis (MacVisSTA) system for the analysis of multimodal human communication through video, audio, speech transcriptions, and gesture and head orientation data. The system uses a multiple-linked-representation strategy in which the different representations are linked by the current time focus. In this framework, the display components associated with the disparate data types are kept in synchrony, each component serving as both a controller of the system and a display. The user can therefore analyze and manipulate the data from different analytical viewpoints (e.g., through the time-synchronized speech transcription or through motion segments of interest). MacVisSTA supports analysis of the synchronized data at varying timescales. It provides an annotation interface that permits users to code the data into 'music-score' objects and to make and organize multimedia observations about the data. MacVisSTA thus integrates flexible visualization and annotation within a single framework. An XML database manager has been created for the storage and search of annotation data. We compare the system with other existing annotation tools with respect to functionality and interface design. The software runs on Macintosh OS X systems.
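The linked-representation strategy described above — every display component observes a shared time focus and can also drive it — can be sketched as a small observer pattern. This is a hypothetical illustration, not the actual MacVisSTA implementation; all class and method names here are invented for the sketch.

```python
class TimeFocus:
    """Shared time cursor linking all display components (hypothetical sketch)."""

    def __init__(self):
        self._time = 0.0
        self._views = []

    def register(self, view):
        """Add a display component that should stay in sync with the focus."""
        self._views.append(view)

    def set_time(self, t, source=None):
        """Move the focus; notify every view except the one acting as controller."""
        self._time = t
        for view in self._views:
            if view is not source:
                view.display(t)


class TranscriptView:
    """One component: both a display (scrolls to the focus time)
    and a controller (clicking a word moves the focus)."""

    def __init__(self, focus):
        self.focus = focus
        self.shown = None  # last time this view was asked to display
        focus.register(self)

    def display(self, t):
        # In a real system: scroll the transcript to the utterance at time t.
        self.shown = t

    def click_word(self, t):
        # Acting as controller: the user clicks a word time-stamped at t,
        # which repositions every other synchronized view.
        self.focus.set_time(t, source=self)
```

Under this scheme a click in any one view (transcript, video timeline, motion-segment display) repositions all the others, which matches the abstract's point that each component is simultaneously a controller and a display.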