Pattern classification: a unified view of statistical and neural approaches
Architecture of Multi-modal Dialogue System
TDS '00 Proceedings of the Third International Workshop on Text, Speech and Dialogue
Pattern Classification (2nd Edition)
Applied Pattern Recognition
A multi-agent modal language for concurrency with non-communicating agents
CEEMAS'03 Proceedings of the 3rd Central and Eastern European conference on Multi-agent systems
Development of multi-modal interfaces in multi-device environments
INTERACT'05 Proceedings of the 2005 IFIP TC13 international conference on Human-Computer Interaction
Hand gesture recognition system using fuzzy algorithm and RDBMS for post PC
FSKD'05 Proceedings of the Second international conference on Fuzzy Systems and Knowledge Discovery - Volume Part II
Intelligent Information System Based on a Speech Web Using Fuzzy Association Rule Mining
APCHI '08 Proceedings of the 8th Asia-Pacific conference on Computer-Human Interaction
Mobile Web 2.0-Oriented Five Senses Multimedia Technology with LBS-Based Intelligent Agent
UIC '09 Proceedings of the 6th International Conference on Ubiquitous Intelligence and Computing
WiBro Net.-Based Five Senses Multimedia Technology Using Mobile Mash-Up
ICCSA '09 Proceedings of the International Conference on Computational Science and Its Applications: Part II
In this study, we propose and implement a Multi-Modal Sentential Dialog System (MMSDS) that integrates two sensory channels, speech and haptic information, based on ubiquitous computing and the WWW, for clear communication. The importance and necessity of MMSDS for HCI are as follows: 1) it enables more interactive and natural communication between hearing-impaired and hearing people without special learning or training; 2) because it recognizes sentential Korean Standard Sign Language (KSSL) expressed through speech and haptics and translates the recognition results into synthetic speech and visual illustration in real time, it can provide a wider range of personalized and differentiated information to them more effectively; and 3) above all, users are not constrained by the limitations of any particular interaction mode at a given moment, because the system guarantees the mobility of the WPS (Wearable Personal Station for the post-PC era) with a built-in sentential sign-language recognizer. In the experimental results, while the average recognition rate of a uni-modal recognizer was 93.1% using KSSL only and 95.5% using speech only, the advanced MMSDS achieved an average recognition rate of 96.1% over 32 sentential KSSL recognition models.
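The abstract describes combining two uni-modal recognizers (sign language and speech) into a multi-modal system whose accuracy exceeds either channel alone. The paper does not specify the fusion method; the sketch below illustrates one common approach, weighted late fusion of per-sentence class posteriors, with hypothetical class labels, weights, and scores chosen only for illustration.

```python
# Hypothetical late-fusion sketch: combine per-sentence class probabilities
# from a sign-language (KSSL) recognizer and a speech recognizer via a
# weighted sum. Labels, weights, and scores are illustrative, not from
# the paper, which does not describe its actual fusion scheme.

def fuse(sign_probs, speech_probs, w_sign=0.45, w_speech=0.55):
    """Weighted-sum fusion of two uni-modal posterior dictionaries.

    Returns the top-scoring class label and the full fused score table.
    """
    classes = set(sign_probs) | set(speech_probs)
    fused = {
        c: w_sign * sign_probs.get(c, 0.0) + w_speech * speech_probs.get(c, 0.0)
        for c in classes
    }
    best = max(fused, key=fused.get)
    return best, fused

# Example posteriors for three candidate sentences (made-up values):
sign = {"hello": 0.6, "thanks": 0.3, "goodbye": 0.1}
speech = {"hello": 0.5, "thanks": 0.4, "goodbye": 0.1}
label, scores = fuse(sign, speech)
print(label)  # → hello
```

Even this simple scheme shows why fusion can beat either channel: when the modalities agree, their evidence reinforces, and when one channel is ambiguous, the other can break the tie.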