In robot navigation, one fundamental issue is reconstructing the positions of landmarks or vision sensors located around the robot. This paper proposes a method for reconstructing the qualitative positions of multiple vision sensors from qualitative information observed by the sensors themselves, namely the motion directions of moving objects. The process iterates the following steps: (1) observing the motion directions of moving objects from the vision sensors, (2) classifying the vision sensors into spatially classified pairs, (3) acquiring three-point constraints, and (4) propagating the constraints. The method has been evaluated in simulation.
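The iterative process above can be sketched in code. The sketch below is a minimal illustration, not the paper's algorithm: the sensor layout is hypothetical, and the seed three-point (left/right) constraints are read directly from ground-truth positions via an orientation test, standing in for steps (1)-(3), which the paper derives from observed motion directions. Step (4) is illustrated by closing the constraint set under the exact permutation rules of the left/right relation.

```python
def side(a, b, c):
    """Orientation predicate: +1 if c lies left of the directed line a->b,
    -1 if right, 0 if collinear (sign of the 2D cross product)."""
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return 1 if cross > 0 else -1 if cross < 0 else 0

# Hypothetical sensor layout (ground truth, unknown to the method itself).
sensors = {"s1": (0, 0), "s2": (4, 0), "s3": (2, 3), "s4": (5, 4)}

def observe_constraints(sensors, triples):
    """Stand-in for steps (1)-(3): here the left/right relation of each
    sensor triple is read from ground truth; the paper acquires it from
    the motion directions of moving objects."""
    return {(a, b, c): side(sensors[a], sensors[b], sensors[c])
            for (a, b, c) in triples}

def propagate(constraints):
    """Step (4): close the constraint set under the permutation identities
    left(A,B,C) = left(B,C,A) = left(C,A,B) = -left(B,A,C)."""
    closed = dict(constraints)
    changed = True
    while changed:
        changed = False
        for (a, b, c), s in list(closed.items()):
            derived = (((b, c, a), s), ((c, a, b), s),       # cyclic: same sign
                       ((b, a, c), -s), ((a, c, b), -s),     # swaps: flipped sign
                       ((c, b, a), -s))
            for triple, value in derived:
                if triple not in closed:
                    closed[triple] = value
                    changed = True
    return closed

seed = observe_constraints(sensors, [("s1", "s2", "s3"), ("s1", "s2", "s4")])
closed = propagate(seed)
```

From two seed triples, propagation yields all twelve oriented permutations, so a query such as "is s1 left of the line s3->s2?" can be answered without a new observation. The actual method interleaves this propagation with further observations until the qualitative positions of all sensors are determined.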