Look who's talking: the GAZE groupware system
CHI 98 Conference Summary on Human Factors in Computing Systems
Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Using group history to identify character-directed utterances in multi-child interactions
SIGDIAL '12 Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Managing chaos: models of turn-taking in character-multichild interactions
Proceedings of the 15th ACM International Conference on Multimodal Interaction
This paper reports on automatic prediction of dialog acts and address types in three-party conversations. In multi-party interaction, dialog structure becomes more complex than in the one-to-one case because an utterance can have more than one hearer. To cope with this, our framework predicts dialog acts and address types simultaneously. Prediction accuracy for dialog act labels reached 68.5% when both context and address types were taken into account. A CART decision tree analysis was also applied to identify features useful for predicting these labels.