Integrated natural spoken dialogue system of Jijo-2 mobile robot for office services
AAAI '99/IAAI '99 Proceedings of the Sixteenth National Conference on Artificial Intelligence and the Eleventh Innovative Applications of Artificial Intelligence Conference
Speaking Autonomous Intelligent Devices
AICS '02 Proceedings of the 13th Irish International Conference on Artificial Intelligence and Cognitive Science
Learning words from sights and sounds: a computational model
Cognitive Science
A review of speech-based bimodal recognition
IEEE Transactions on Multimedia
AF-APL – bridging principles and practice in agent oriented languages
ProMAS'04 Proceedings of the Second International Conference on Programming Multi-Agent Systems
Context-sensitive ASR for controlling the navigation of mobile robots
SBIA'12 Proceedings of the 21st Brazilian Conference on Advances in Artificial Intelligence
In many real-world environments, Automatic Speech Recognition (ASR) technologies fail to provide adequate performance for applications such as human-robot dialog. Despite substantial evidence that human speech recognition proceeds top-down as well as bottom-up, ASR systems typically fail to capitalize on this, relying instead on a purely statistical, bottom-up methodology. In this paper we advocate a knowledge-based approach to improving ASR in domains such as mobile robotics. A simple implementation is presented, which uses the visual recognition of objects in a robot's environment to increase the probability that words and sentences related to these objects will be recognized.
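The mechanism described in the abstract lends itself to a brief illustration. The Python sketch below is not the paper's implementation: the OBJECT_VOCAB table, the rescore_nbest function, and the boost parameter are all illustrative assumptions. It shows one simple way that visually detected objects could raise the effective probability of related words, here by rescoring an ASR N-best list with a log-domain bonus for words associated with currently visible objects.

```python
# Hypothetical sketch (not the authors' code): re-rank an ASR N-best list
# so that words associated with visually recognized objects are favored.

from math import log

# Assumed mapping from recognizable objects to related vocabulary.
OBJECT_VOCAB = {
    "door": {"door", "open", "close", "enter"},
    "printer": {"printer", "print", "paper"},
    "chair": {"chair", "sit", "seat"},
}

def rescore_nbest(nbest, detected_objects, boost=2.0):
    """Re-rank ASR hypotheses: each (text, log_score) pair receives a
    bonus of log(boost) for every word related to a visible object.

    nbest            -- list of (hypothesis_text, log_score) pairs
    detected_objects -- object labels reported by the vision system
    boost            -- multiplicative prior boost for context words (> 1)
    """
    context_words = set()
    for obj in detected_objects:
        context_words |= OBJECT_VOCAB.get(obj, set())

    def adjusted(hyp):
        text, score = hyp
        bonus = sum(log(boost) for w in text.lower().split() if w in context_words)
        return score + bonus

    return sorted(nbest, key=adjusted, reverse=True)

if __name__ == "__main__":
    # The recognizer is acoustically unsure between two similar sentences;
    # seeing a printer tips the decision toward the printer-related reading.
    nbest = [
        ("please point the paper", -12.1),
        ("please print the paper", -12.3),
    ]
    print(rescore_nbest(nbest, detected_objects=["printer"]))
```

In this toy run the acoustically second-best hypothesis wins once the vision context is applied, which is the behavior the abstract describes: knowledge about the visible scene biases recognition toward contextually plausible words.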