Objects of interest in an image have recently attracted considerable research attention, since they enable efficient object-based image matching and help bridge the semantic gap between users' high-level concepts and low-level image features. In this paper, we introduce a computational approach that classifies an object of interest as either natural or man-made, which is of great value for semantic indexing applications over very large image databases. Through analysis of Gabor filtering results for many object images, we first show that the Gabor energy maps of man-made objects tend to exhibit dominant orientations. We then propose the sum of Gabor orientation energy differences as a classification measure, which achieves a classification accuracy of 82.9% in a test with 2,600 object images.
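The abstract does not spell out how the orientation energies are computed or how their differences are summed, but the general idea can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the kernel size, frequency, number of orientations (four), and the normalized pairwise-difference formula are all assumptions made for the sake of the example. An image with a dominant orientation (man-made-like) should score higher than an isotropic texture (natural-like).

```python
import numpy as np

def gabor_kernel(size=15, freq=0.1, theta=0.0, sigma=3.0):
    """Complex Gabor kernel: Gaussian envelope times an oriented sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

def fft_convolve(img, ker):
    """Full linear 2-D convolution via the FFT (pure NumPy)."""
    shape = (img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1)
    return np.fft.ifft2(np.fft.fft2(img, shape) * np.fft.fft2(ker, shape))

def orientation_energies(img, n_orient=4, freq=0.1):
    """Total Gabor energy per orientation (0, 45, 90, 135 deg for n_orient=4)."""
    thetas = np.pi * np.arange(n_orient) / n_orient
    return np.array([
        np.sum(np.abs(fft_convolve(img, gabor_kernel(freq=freq, theta=t)))**2)
        for t in thetas
    ])

def orientation_difference_measure(img):
    """Sum of pairwise orientation-energy differences, normalized by total
    energy (an assumed form of the paper's measure)."""
    e = orientation_energies(img)
    diffs = sum(abs(e[i] - e[j])
                for i in range(len(e)) for j in range(i + 1, len(e)))
    return diffs / (e.sum() + 1e-12)

# Synthetic examples: oriented stripes (man-made-like, one dominant
# orientation) versus white noise (natural-like, isotropic texture).
rng = np.random.default_rng(0)
stripes = np.tile(np.sin(2 * np.pi * 0.1 * np.arange(64)), (64, 1))
noise = rng.standard_normal((64, 64))
```

With these definitions, `orientation_difference_measure(stripes)` is much larger than `orientation_difference_measure(noise)`, so thresholding the measure separates the two texture types in this toy setting.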