Multimedia meeting collections, composed of unedited audio and video streams, handwritten notes, slides, and electronic documents that jointly constitute a raw record of complex human interaction processes in the workplace, have attracted interest for several reasons: the increasing feasibility of recording them in large quantities, the opportunities for information access and retrieval applications opened up by the automatic extraction of relevant meeting information, and the challenges that extracting semantic information from real human activities entails. In this paper, we present a succinct overview of recent approaches in this field, largely informed by our own experience. We first review existing and potential needs of users of multimedia meeting information systems. We then summarize recent work in the research areas that address some of these requirements. Finally, we describe in more detail our work on the automatic analysis of human interaction patterns from audio-visual sensors, and discuss open issues in this domain.