Social Interaction of Humanoid Robot Based on Audio-Visual Tracking
IEA/AIE '02 Proceedings of the 15th international conference on Industrial and engineering applications of artificial intelligence and expert systems: developments in applied artificial intelligence
Mobile robots with auditory perception usually adopt the "stop-perceive-act" principle to avoid the sounds they make while moving, such as motor noise and noise from bumpy roads. Although this principle reduces the complexity of auditory processing for mobile robots, it also restricts their auditory capabilities. In this paper, combined sound and visual tracking is investigated to attain robust object tracking, with each modality compensating for the drawbacks of the other: visual tracking may fail under occlusion, while sound tracking may yield ambiguous localization due to the nature of auditory processing. For this purpose, we present an active audition system for a humanoid robot. The audition system of an intelligent humanoid requires localization of sound sources and identification of the meaning of each sound in the auditory scene. The active audition reported in this paper focuses on improving sound source tracking by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, the humanoid SIG actively moves its head to improve localization by aligning its microphone axis orthogonal to the direction of the sound source and by capturing possible sound sources with vision. The system adaptively cancels motor noise using motor control signals. Experimental results demonstrate the effectiveness and robustness of the combined sound and visual tracking.
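The abstract gives no implementation detail, but the head movement it describes exploits a standard property of two-microphone localization: the interaural time difference (ITD) is most sensitive to azimuth when the source is broadside to the microphone pair, so turning the head toward the source sharpens the estimate. Below is a minimal, hypothetical Python sketch of that idea, not the paper's actual system; the names and parameters (estimate_azimuth, mic_distance, gain) are illustrative assumptions, and the sign convention is arbitrary.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def estimate_azimuth(left, right, fs, mic_distance):
    """Estimate source azimuth (radians, 0 = straight ahead, positive
    toward the left microphone) from the ITD between two channels.

    Assumes the far-field model ITD = (mic_distance / c) * sin(azimuth):
    cross-correlate the channels, take the lag of the peak, invert the model.
    """
    corr = np.correlate(right, left, mode="full")
    lag_samples = np.argmax(corr) - (len(left) - 1)
    itd = lag_samples / fs
    max_itd = mic_distance / SPEED_OF_SOUND
    # Clamp before arcsin so numerical noise cannot leave its domain.
    itd = np.clip(itd, -max_itd, max_itd)
    return float(np.arcsin(itd / max_itd))

def head_turn_command(azimuth, gain=0.5):
    """Proportional head-rotation command toward the estimated source.

    Facing the source keeps it broadside to the microphone pair, where
    d(ITD)/d(azimuth) is largest and localization is least ambiguous.
    """
    return gain * azimuth

# Example: a source 30 degrees toward the left, 0.18 m microphone baseline.
fs = 16000
delay = int(round(0.18 * np.sin(np.radians(30)) / SPEED_OF_SOUND * fs))
signal = np.random.default_rng(0).standard_normal(fs)
left, right = signal, np.roll(signal, delay)  # right channel arrives later
print(np.degrees(estimate_azimuth(left, right, fs, 0.18)))
```

The example prints roughly 28.5 degrees rather than 30 because the delay is quantized to whole samples; in practice sub-sample interpolation of the correlation peak, and fusion with the visual estimate as the abstract describes, would tighten the result.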