We propose a semantic representation and a Bayesian localization model for robots, built from spatial relations among objects that can be acquired with a single consumer-grade camera and odometry. We first propose a semantic representation that can be shared by humans and robots. This representation consists of perceived objects, their spatial relationships, and qualitatively defined odometry-based metric distances; we refer to it as a topological-semantic distance map. To support this representation, we develop a Bayesian localization model that estimates a robot's location accurately enough for navigation in an indoor environment. Extensive localization experiments in an indoor environment show that our Bayesian localization technique using a topological-semantic distance map is valid in the sense that localization accuracy improves whenever objects and their spatial relationships are detected and instantiated.
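To illustrate the flavor of such a model, the following is a minimal sketch (not the paper's implementation) of a discrete Bayes filter over topological nodes, where each node is annotated with the objects expected there. The map, object names, and detection probabilities are illustrative assumptions.

```python
# Hypothetical map: node -> set of objects expected to be visible there.
semantic_map = {
    "hallway": {"door", "extinguisher"},
    "kitchen": {"table", "sink"},
    "office":  {"desk", "door"},
}

P_DETECT = 0.8   # assumed P(object detected | object present at node)
P_FALSE  = 0.05  # assumed P(object detected | object absent at node)

def update(belief, detected_objects):
    """One Bayesian measurement update from a set of detected objects."""
    all_objects = set().union(*semantic_map.values())
    posterior = {}
    for node, prior in belief.items():
        likelihood = 1.0
        for obj in all_objects:
            present = obj in semantic_map[node]
            seen = obj in detected_objects
            if seen:
                likelihood *= P_DETECT if present else P_FALSE
            else:
                likelihood *= (1 - P_DETECT) if present else (1 - P_FALSE)
        posterior[node] = prior * likelihood
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items()}

# Start from a uniform prior; detecting a desk and a door should
# concentrate the belief on the "office" node.
belief = {n: 1 / len(semantic_map) for n in semantic_map}
belief = update(belief, {"desk", "door"})
```

This captures only the discrete, per-node update; the paper's model additionally exploits spatial relationships among objects and odometry-based distances between nodes.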