Video cut editing rule based on participants' gaze in multiparty conversation
Proceedings of the eleventh ACM international conference on Multimedia (MULTIMEDIA '03)
This paper presents a video cut editing rule based on participants' gaze, aimed at establishing editing rules that accurately and clearly convey the flow of multiparty conversations to viewers. Demand is growing for ways to effectively archive meetings and teleconferences in order to facilitate human communication. Conventional systems use fixed-viewpoint cameras and simple camera selection based on cues such as participants' utterances. However, these systems fail to convey sufficient nonverbal information about the participants and the flow of conversation. On the basis of participants' gaze behavior in multiparty conversation, we propose a new video cut editing rule that selects shots by majority decision over participants' gaze directions. We then present experiments comparing the proposed method with conventional visual representations. We conclude that the proposed method more successfully conveys 1) who is talking to whom and 2) hearers' responses to speakers, both of which are crucial pieces of information that allow viewers to understand the flow of conversation.
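The core of the proposed rule is a majority decision over participants' gaze directions: the shot cuts to whichever participant most of the others are looking at. The sketch below illustrates that idea; the function name, data representation, and tie-breaking fallback to the current speaker are assumptions for illustration, not details specified in the abstract.

```python
from collections import Counter

def select_camera_target(gaze_targets, speaker):
    """Pick which participant's shot to cut to.

    gaze_targets: mapping of participant -> participant they are
        looking at (None when the gaze falls elsewhere).
    speaker: current speaker, used as a fallback (an assumed policy).

    Returns the participant receiving the majority of gazes; falls
    back to the speaker on a tie or when no one is being gazed at.
    """
    votes = Counter(t for t in gaze_targets.values() if t is not None)
    if not votes:
        return speaker
    ranked = votes.most_common(2)
    # Require a strict winner; a tie falls back to the speaker.
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return speaker
    return ranked[0][0]

# Example: A and C look at B, D looks at A, B is speaking.
# B receives the most gazes, so the shot cuts to B.
shot = select_camera_target({"A": "B", "C": "B", "D": "A"}, speaker="B")
```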