Chat with illustration: a chat system with visual aids
Proceedings of the 4th International Conference on Internet Multimedia Computing and Service
Nowadays, most online instant messaging tools, such as Live Messenger, Google Talk, Yahoo Messenger, and ICQ, enable people to communicate with each other anytime and anywhere. However, it remains difficult for people who speak different native languages and cannot understand each other to communicate smoothly, and even more so for users with hearing impairments. Moreover, users' hands are usually tied up with the keyboard and mouse while typing messages. To address these drawbacks, we design a Kinect-based Visual Communication System (KVCS) with the following features: (1) a Kinect-based sign language recognition module that enables deaf and mute users to chat; (2) a Kinect-based expression recognition module that enriches the online chatting experience; (3) a Kinect-based speech recognition module that frees users' hands while chatting; and (4) a cross-media multi-lingual visualized translation module that helps users grasp the meaning of the conversation more easily. Experiments on our system demonstrate that KVCS provides powerful and efficient communication and a pleasant user experience.
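The abstract describes four Kinect-driven modules feeding a shared chat pipeline. As a minimal sketch of how such modules might be composed, the following Python stub routes each input modality (skeleton, face, audio) to its recognizer and passes the recognized text through the visualized translator. All class and method names here are illustrative assumptions, not the authors' actual implementation, and the recognizers are stubs that return fixed strings.

```python
# Hypothetical composition of the four KVCS modules; every name below
# is an assumption made for illustration, not the paper's real API.

class SignLanguageRecognizer:
    def process(self, skeleton_frame):
        # Would map a Kinect skeleton sequence to a sign; stubbed here.
        return "hello"

class ExpressionRecognizer:
    def process(self, face_frame):
        # Would classify a facial expression from the Kinect face frame.
        return "smile"

class SpeechRecognizer:
    def process(self, audio_frame):
        # Would transcribe the Kinect microphone-array audio.
        return "how are you"

class VisualTranslator:
    def translate(self, text, target_lang):
        # Would translate the text and attach an illustrative image;
        # here it just tags the text and returns a placeholder image id.
        return (f"[{target_lang}] {text}", f"<img:{text}>")

class KVCSPipeline:
    """Routes each Kinect modality to its recognition module, then
    feeds the recognized text through the visualized translator."""

    def __init__(self, target_lang="en"):
        self.sign = SignLanguageRecognizer()
        self.face = ExpressionRecognizer()
        self.speech = SpeechRecognizer()
        self.translator = VisualTranslator()
        self.target_lang = target_lang

    def handle(self, skeleton=None, face=None, audio=None):
        messages = []
        if skeleton is not None:
            messages.append(self.sign.process(skeleton))
        if audio is not None:
            messages.append(self.speech.process(audio))
        emotion = self.face.process(face) if face is not None else None
        text = " ".join(messages)
        translated, illustration = self.translator.translate(
            text, self.target_lang)
        return {"text": translated,
                "illustration": illustration,
                "emotion": emotion}

pipeline = KVCSPipeline(target_lang="zh")
out = pipeline.handle(skeleton=object(), audio=object())
print(out["text"])  # tagged, "translated" chat message
```

The point of the sketch is the routing: each modality is optional per chat turn, so a hands-free user can rely on speech alone while a deaf user relies on the sign module, with the translation step shared by both.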