Hierarchical Discriminant Regression
IEEE Transactions on Pattern Analysis and Machine Intelligence
CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Face recognition: A literature survey
ACM Computing Surveys (CSUR)
Negative Samples Analysis in Relevance Feedback
IEEE Transactions on Knowledge and Data Engineering
Biased discriminant Euclidean embedding for content-based image retrieval
IEEE Transactions on Image Processing
A new recognition method for natural images
WSEAS Transactions on Computers
PCA plus LDA on wavelet co-occurrence histogram features: application to CBIR
MIWAI'11 Proceedings of the 5th international conference on Multi-Disciplinary Trends in Artificial Intelligence
The method we have been using is based on our Self-Organizing Hierarchical Optimal Subspace Learning and Inference Framework (SHOSLIF). It uses the theory of linear discriminant projection for automatic optimal feature selection at each internal node of a Space-Tessellation Tree. In this paper, we present our recent study of the applicability of the approach to variability in position, size, and 3D orientation. The work presented here requires "well-framed" images as input for recognition; by well-framed images we mean that only a relatively small variation in the size, position, and orientation of the objects in the input images is allowed. We report experimental results that show the performance difference between the subspaces of linear discriminant analysis and principal component analysis, and the effect of using a tree as opposed to a flat eigenspace.
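To make the contrast between the two subspaces concrete, the following is a minimal sketch of the standard constructions the abstract compares: a PCA basis from the total scatter of the data, and Fisher linear discriminant directions that maximize between-class scatter relative to within-class scatter. This is a generic NumPy illustration, not the SHOSLIF tree implementation itself; function names and the small regularization term are our own assumptions.

```python
import numpy as np

def pca_projection(X, k):
    """Top-k principal components (rows of X are samples).

    PCA maximizes total scatter regardless of class labels.
    """
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the eigenvectors
    # of the total scatter matrix, ordered by decreasing eigenvalue.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T  # (d, k) projection basis

def lda_projection(X, y, k):
    """Top-k Fisher discriminant directions.

    Solves the eigenproblem of Sw^{-1} Sb, where Sw and Sb are the
    within-class and between-class scatter matrices.
    """
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Small ridge term (an assumption here) keeps Sw invertible when
    # the sample count is low relative to the dimensionality.
    evals, evecs = np.linalg.eig(np.linalg.inv(Sw + 1e-6 * np.eye(d)) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:k]].real
```

For class labels, the discriminant directions concentrate class separation in far fewer dimensions than PCA, which is the motivation for using them at each internal node of the tree rather than a single flat eigenspace.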