This paper investigates the extent to which participants in spontaneously occurring interactions can be recognised automatically from shape descriptions of their bodily behaviours. For this purpose, we apply classification algorithms to an annotated corpus of Danish dyadic and triadic conversations. The bodily behaviours considered are head movements, facial expressions and hand gestures. Although the dataset is of limited size, the classification results are promising, especially for hand gestures, indicating large variation in people's bodily behaviours even though the participants form a homogeneous group in terms of gender, age and social background. The results are not only interesting from a theoretical point of view; they are also relevant for video indexing and search, computer games and other applications involving multimodal interaction.
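To make the setup concrete, the following is a minimal sketch of participant identification from categorical shape annotations, using a simple nearest-neighbour classifier over attribute overlap. The attribute names, toy annotation events and participant labels below are illustrative assumptions, not the paper's actual annotation scheme or classifiers.

```python
# Hedged sketch: recognising a participant from categorical shape
# annotations of bodily-behaviour events (hypothetical feature set).
from collections import Counter

# Toy training data: each event is a dict of categorical shape attributes
# paired with the (assumed) participant label.
samples = [
    ({"behaviour": "head", "shape": "nod", "repetition": "single"}, "A"),
    ({"behaviour": "head", "shape": "shake", "repetition": "repeated"}, "B"),
    ({"behaviour": "hand", "shape": "point", "repetition": "single"}, "A"),
    ({"behaviour": "hand", "shape": "wave", "repetition": "repeated"}, "B"),
    ({"behaviour": "face", "shape": "smile", "repetition": "single"}, "A"),
    ({"behaviour": "face", "shape": "frown", "repetition": "single"}, "B"),
]

def overlap(a, b):
    """Number of categorical attributes on which two events agree."""
    return sum(1 for k in a if b.get(k) == a[k])

def predict(event, k=3):
    """k-nearest-neighbour vote: rank training events by attribute
    overlap with the query and take a majority vote over the top k."""
    ranked = sorted(samples, key=lambda s: overlap(event, s[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Classify an unseen behaviour description.
print(predict({"behaviour": "hand", "shape": "point", "repetition": "single"}))
# → A
```

In practice one would one-hot encode such categorical annotations and train a stronger classifier (e.g. a support vector machine) with cross-validation on the annotated corpus; the sketch only illustrates how shape descriptions can serve as identifying features.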