In hospital intensive care units, it has recently been shown that substantial improvements in patient outcomes can be gained when medical staff periodically monitor patient pain levels. However, given the workload and stress that the staff are already under, this type of monitoring has been difficult to sustain, so an automatic solution could be an ideal remedy. An automatic facial expression system is an achievable approach, as pain can be described via a number of facial action units (AUs). To facilitate this work, the "University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database" was collected, containing video of the faces of participants who were suffering from shoulder pain while they performed a series of range-of-motion tests. Each frame of this data was AU-coded by certified FACS coders, and self-report and observer measures were also taken at the sequence level. To promote and facilitate research into pain and to augment current datasets, we have made a portion of this database publicly available: 200 sequences across 25 subjects, containing more than 48,000 coded frames of spontaneous facial expressions with 66-point AAM-tracked facial feature landmarks. In addition to describing the data distribution, we give baseline pain and AU detection results on a frame-by-frame basis at the binary level (i.e., AU vs. no-AU and pain vs. no-pain) using our AAM/SVM system. A further contribution is sequence-level classification of pain intensity using facial expressions and 3D head pose changes.
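The frame-level binary baseline described above (AAM features fed to an SVM, pain vs. no-pain per frame) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses scikit-learn's `SVC` and stands in synthetic data for the 66-point (x, y) AAM landmark features that the database actually provides.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for AAM features: 66 (x, y) landmarks per frame,
# flattened into a 132-dimensional vector. In practice these would be
# the tracked landmarks (or derived shape/appearance parameters).
n_frames = 400
X = rng.normal(size=(n_frames, 66 * 2))
# Toy binary labels (pain vs. no-pain) made linearly dependent on the
# features purely so the sketch has something learnable.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Linear SVM making the per-frame binary decision.
clf = SVC(kernel="linear").fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"frame-level accuracy: {accuracy_score(y_test, pred):.2f}")
```

The same pattern applies to each AU vs. no-AU detector: one binary SVM per action unit, evaluated frame by frame.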