Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility
This work demonstrates a visual-to-auditory sensory substitution device (SSD) called Facial Expression Perception through Sound (FEPS). It is designed to enable visually impaired people to participate more effectively in social communication by perceiving their interlocutor's facial expressions. Whereas earlier SSDs provided feedback on inferred emotions, this system responds directly to facial movements. This approach avoids the complexities of expression-to-emotion mapping, the problem of capturing the multitude of possible emotions that arise from a limited set of facial movements, and the difficulty of correctly predicting emotions given the lack of ground-truth data. A usability study confirmed that users can understand facial expressions through the system.
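The core design choice, sonifying facial movements directly rather than classified emotions, can be sketched as follows. This is a minimal illustration only: the movement names and the audio mapping below are assumptions for the sake of the example, not the actual parameters used by FEPS.

```python
def movements_to_audio(movements):
    """Map normalized facial-movement intensities (0.0-1.0) to audio parameters.

    `movements` is a dict of hypothetical movement names (e.g. "eyebrow_raise").
    No emotion classification happens anywhere: each movement drives an audio
    parameter directly, sidestepping expression-to-emotion mapping entirely.
    """
    base_pitch_hz = 220.0
    params = {}
    # Eyebrow raise shifts pitch upward, up to one octave at full intensity.
    params["pitch_hz"] = base_pitch_hz * (1.0 + movements.get("eyebrow_raise", 0.0))
    # Mouth opening controls loudness, kept above a small audible floor.
    params["gain"] = 0.2 + 0.8 * movements.get("mouth_open", 0.0)
    # Lip-corner pull (smile) controls stereo panning.
    params["pan"] = movements.get("smile", 0.0)
    return params

# A half-strength eyebrow raise with a slightly open mouth:
example = movements_to_audio({"eyebrow_raise": 0.5, "mouth_open": 0.25})
```

The listener then learns the movement-to-sound mapping itself, rather than trusting a classifier's guess at the underlying emotion.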