Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps obtained by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA), a classical linear technique that projects the data along the directions of maximal variance. When the high-dimensional data lie on a low-dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and, more crucially, is defined everywhere in the ambient space rather than only on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure inferred on the data points; LPP finds a projection that respects this graph structure. We have applied our algorithms to several real-world applications, e.g. face analysis and document representation.
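In practice, the variational problem above reduces to a generalized eigenvalue problem built from a nearest-neighbor graph over the data. The following is a minimal sketch of that pipeline, not the thesis's reference implementation; the function name `lpp` and the parameters `k` (neighborhood size) and `t` (heat-kernel width) are illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, k=5, t=1.0):
    """Sketch of Locality Preserving Projections.

    X : (n_samples, n_features) data matrix (rows are points).
    Returns a (n_features, n_components) projection matrix A,
    so that the embedding is Y = X @ A.
    """
    n = X.shape[0]
    # 1. Adjacency graph: connect each point to its k nearest neighbors,
    #    weighting edges with the heat kernel exp(-||xi - xj||^2 / t).
    D2 = cdist(X, X, 'sqeuclidean')
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:k + 1]        # skip the point itself
        W[i, idx] = np.exp(-D2[i, idx] / t)
    W = np.maximum(W, W.T)                      # symmetrize the graph
    # 2. Degree matrix D and graph Laplacian L = D - W.
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W
    # 3. Generalized eigenproblem (row-sample convention):
    #    X^T L X a = lambda X^T D X a; the projections are the
    #    eigenvectors with the smallest eigenvalues.
    lhs = X.T @ L @ X
    rhs = X.T @ Dg @ X + 1e-9 * np.eye(X.shape[1])  # tiny ridge for stability
    vals, vecs = eigh(lhs, rhs)                 # eigenvalues in ascending order
    return vecs[:, :n_components]
```

Because the map is linear, projecting a previously unseen point is just a matrix product with `A`, which is exactly the property that distinguishes LPP from Laplacian Eigenmaps.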