The ability to understand and manage the social signals of a person we are communicating with is the core of social intelligence, a facet of human intelligence that has been argued to be indispensable, and perhaps the most important, for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence, namely the ability to recognize human social signals and social behaviours such as politeness and disagreement, in order to become more effective and more efficient. Although everyone understands the importance of social signals in everyday situations, and despite recent advances in machine analysis of relevant behavioural cues such as blinks, smiles, crossed arms, and laughter, the design and development of automated systems for Social Signal Processing (SSP) remain difficult. This paper surveys past efforts to solve these problems by computer, summarizes the relevant findings in social psychology, and proposes a set of recommendations for enabling the development of the next generation of socially aware computing.