Classification of remote sensing images and range data is normally done in 2D space, because most sensors capture the surface of the earth from a close-to-vertical direction, so vertical structures, e.g. building façades, are not visible. However, when the objects of interest are photographed from off-nadir directions, as in oblique airborne images, the question arises of how to classify such scenes efficiently. In this paper a study on classification in 3D object space is presented: image features from individual oblique airborne images, and 3D geometric features derived from matching across those images, are projected onto voxels, which are then segmented and classified. The study area is Port-au-Prince (Haiti), where images were acquired after the earthquake in January 2010. Results show that combining image evidence from multiple views, as realized by the projection into object space, makes the classification more accurate than single-image classification.
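The central idea, fusing per-image evidence by projecting features onto a shared voxel grid in object space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the uniform voxel size, and the choice of averaging features from all observations of a voxel are assumptions for the sake of the example.

```python
import numpy as np

def project_to_voxels(points, features, voxel_size):
    """Quantize 3D points to voxel indices and average the image-derived
    feature vectors of all observations that fall into the same voxel.

    points   : (n, 3) array of 3D object-space coordinates
    features : (n, d) array, one feature vector per observation
    Returns (voxel indices, mean feature per voxel).
    """
    # Integer voxel index for each observed 3D point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group observations by voxel: unique rows + inverse mapping.
    keys, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = np.asarray(inverse).ravel()
    # Accumulate feature sums and observation counts per voxel.
    summed = np.zeros((len(keys), features.shape[1]))
    counts = np.zeros(len(keys))
    np.add.at(summed, inverse, features)
    np.add.at(counts, inverse, 1.0)
    # Combining evidence: observations of the same voxel from
    # different images are merged into one mean feature vector.
    return keys, summed / counts[:, None]
```

Observations of the same surface patch seen in two different oblique images land in the same voxel, so their (possibly conflicting) image features are merged before segmentation and classification, which is the multi-view effect the abstract credits for the accuracy gain.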