In this paper we present a sound-source model for localising and tracking an acoustic source of interest in the azimuth plane within acoustically cluttered environments, intended for a mobile service robot. The model is a hybrid architecture that combines band-pass filtering, cross-correlation and recurrent neural networks, yielding a robotic model accurate and robust enough to perform in acoustic clutter. It was developed with both processing power and physical robot size in mind, so it can be deployed on a wide variety of robotic systems where power consumption and size are limitations. The design takes its inspiration from the central auditory system (CAS) of the mammalian brain. We describe experimental results for the proposed model, including a methodology for testing sound-source localisation systems, and report the system's performance both in restricted test environments and under real-world conditions. The results show how a hybrid architecture of band-pass filtering, cross-correlation and recurrent neural networks can produce a robust, accurate and fast sound-source localisation model for a mobile robot.
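The cross-correlation stage of such a pipeline can be illustrated with a minimal sketch: estimating the time difference of arrival (TDOA) between two microphones and mapping it to an azimuth angle with the far-field arcsine model. This is not the paper's implementation; the function name, microphone spacing, and sign convention below are illustrative assumptions.

```python
import numpy as np

def estimate_azimuth(left, right, fs, mic_distance, c=343.0):
    """Estimate source azimuth (degrees) from a two-microphone recording
    via cross-correlation TDOA (illustrative sketch, not the paper's model).

    Sign convention: a negative angle means the source is toward the
    left microphone (the right channel arrives later).
    """
    # Full cross-correlation; the peak index encodes the inter-channel lag.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # lag in samples
    tdoa = lag / fs                           # lag in seconds
    # Far-field model: tdoa = (d / c) * sin(azimuth). Clamp to the
    # physically possible range before inverting with arcsin.
    s = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Synthetic check: delay the right channel as a source ~30 degrees toward
# the left microphone would (assumed 16 kHz rate, 20 cm mic spacing).
fs, d = 16000, 0.2
delay = int(round(fs * d * np.sin(np.radians(30.0)) / 343.0))
sig = np.random.default_rng(0).standard_normal(4096)
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
az = estimate_azimuth(left, right, fs, d)  # roughly -30 degrees
```

In a real system this raw estimate is coarse and noisy, which is one motivation for the paper's additional stages: band-pass filtering narrows attention to the frequency band of interest, and a recurrent network smooths and tracks the estimate over time.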