Interpreting spatial relations between objects is essential to many applications, such as robotics, video surveillance, spatial reasoning, and scene understanding. However, most existing models of spatial logic are two-dimensional. With advances in sensing technology, inexpensive depth sensors have become widely available, making 3D scene reconstruction practical in a variety of application scenarios. In this paper, we propose a 3D spatial logic and algorithms for interpreting spatial relationships among objects in 3D space. More specifically, these techniques are developed for LVDBMS (Live Video DataBase Management System), a generic platform for live video computing. We extend the original directional relationships to 3D directional relationships, and introduce a simple yet effective way to build 3D object models from depth-sensor data. We also propose a highly accurate and efficient algorithm that computes the spatial relationship between two objects by sampling the entire space as seen from the reference object. Experimental results on a real indoor scene and an RGB-D dataset demonstrate the effectiveness of our techniques.
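To make the sampling idea concrete, the following is a minimal sketch (not the paper's actual algorithm, whose details are not given here) of estimating the dominant 3D directional relation between two objects represented as point clouds: point pairs are sampled from the reference and target objects, and each displacement vector votes for the axis-aligned direction in which it is largest. The function name, label set, and camera-style axis convention (y up, z into the scene) are all illustrative assumptions.

```python
import numpy as np

def directional_relation(ref_pts, tgt_pts, n_samples=500, seed=0):
    """Estimate the dominant 3D directional relation of a target object
    relative to a reference object, both given as (N, 3) point clouds.

    Samples point pairs (one point from each cloud), computes the
    displacement vectors, and lets each pair vote for the direction
    of its largest-magnitude component.
    """
    rng = np.random.default_rng(seed)
    a = ref_pts[rng.integers(len(ref_pts), size=n_samples)]
    b = tgt_pts[rng.integers(len(tgt_pts), size=n_samples)]
    d = b - a  # displacement from reference to target

    # For each sampled pair, find the axis with the largest displacement
    # and the sign of the displacement along that axis.
    axis = np.argmax(np.abs(d), axis=1)
    sign = np.sign(d[np.arange(n_samples), axis])

    # Assumed axis convention: x right, y up, z away from the camera.
    labels = {(0, 1): "right", (0, -1): "left",
              (1, 1): "above", (1, -1): "below",
              (2, 1): "behind", (2, -1): "front"}

    votes = {}
    for ax, s in zip(axis, sign):
        if s == 0:
            continue
        key = labels[(int(ax), int(s))]
        votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get)
```

In practice, a sampling-based scheme like this trades exactness for speed: accuracy improves with `n_samples`, and richer fuzzy-membership outputs could be derived from the full vote histogram rather than the single winning label.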