Neural network-based head pose estimation and multi-view fusion
CLEAR'06 Proceedings of the 1st international evaluation conference on Classification of events, activities and relationships
Gaze direction is an important communicative cue. To use this cue in human-robot interaction, software is needed that can estimate head pose. We designed an application that produces a good estimate of head pose and, unlike earlier neural network approaches, works under non-optimal lighting conditions. Initial results show that the approach, which uses multiple networks trained on datasets with differing lighting conditions, gives a good estimate of head pose and performs well in poor lighting. The solution is not yet optimal: smart selection rules that take lighting conditions into account would let us choose the neural networks trained on images with similar lighting. This research will allow us to use head orientation cues in human-robot interaction with low-resolution cameras and in poor lighting conditions, enabling the robot to react in a timely way to the dynamic communicative cues used by humans.
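The selection idea described above — routing an input image to the network trained under the most similar lighting — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the brightness descriptor, the `networks` dictionary keyed by representative training brightness, and the stand-in estimator functions are all hypothetical.

```python
import numpy as np

def mean_brightness(image):
    """Crude lighting descriptor: mean pixel intensity of the face crop."""
    return float(np.mean(image))

def select_network(image, networks):
    """Pick the estimator whose training-set brightness is closest to the input's.

    `networks` maps a representative training brightness (0-255) to a
    pose-estimation function returning a yaw angle in degrees.
    """
    brightness = mean_brightness(image)
    key = min(networks, key=lambda train_b: abs(train_b - brightness))
    return networks[key]

def estimate_head_pose(image, networks):
    """Estimate head pose with the network best matching the lighting."""
    net = select_network(image, networks)
    return net(image)

# Stand-in "networks": in practice these would be neural networks trained
# on dark, normally lit, and bright image sets respectively.
networks = {
    40: lambda img: 0.0,    # trained on dark images
    128: lambda img: 10.0,  # trained on normal lighting
    220: lambda img: 20.0,  # trained on bright images
}

dark_face = np.full((32, 32), 35, dtype=np.uint8)
print(estimate_head_pose(dark_face, networks))  # routed to the dark-trained network
```

In a real system, the lighting descriptor would likely need to be more robust than mean intensity (e.g., a histogram-based measure), but the routing structure stays the same.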