Linear discriminant analysis (LDA) is a popular method for dimensionality reduction that preserves class separability. The projection vectors are commonly obtained by maximizing the between-class covariance while simultaneously minimizing the within-class covariance. LDA can be performed either in the original input space or in the reproducing kernel Hilbert space (RKHS) into which the data points are mapped, the latter leading to kernel discriminant analysis (KDA). When the data are highly nonlinearly distributed, KDA can achieve better performance than LDA. However, computing the projective functions in KDA involves an eigen-decomposition of the kernel matrix, which is very expensive when there are many training samples. In this paper, we present a new algorithm for kernel discriminant analysis, called Spectral Regression Kernel Discriminant Analysis (SRKDA). Using spectral graph analysis, SRKDA casts discriminant analysis into a regression framework, which facilitates both efficient computation and the use of regularization techniques. Specifically, SRKDA only needs to solve a set of regularized regression problems, with no eigenvector computation involved, yielding a large saving in computational cost. The new formulation also makes it easy to develop an incremental version of the algorithm that fully reuses the computational results obtained on the existing training samples. Moreover, it is easy to produce sparse projections (Sparse KDA) with an L1-norm regularizer. Extensive experiments on spoken letter, handwritten digit image, and face image data demonstrate the effectiveness and efficiency of the proposed algorithm.
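The core computational idea above — replacing the eigen-decomposition of the kernel matrix with a set of regularized regression problems whose responses come from class information — can be sketched as follows. This is a minimal illustrative sketch, not the paper's reference implementation: the function names (`srkda_fit`, `srkda_project`), the RBF kernel choice, and the use of orthonormalized, centered class indicators as the spectral regression targets are assumptions made here for concreteness.

```python
import numpy as np

def srkda_fit(X, y, gamma=1.0, delta=0.01):
    """Sketch of spectral-regression-style KDA (hypothetical helper).

    Rather than eigen-decomposing the n x n kernel matrix, solve one
    regularized kernel regression problem per response vector:
        (K + delta * I) alpha = q
    """
    n = X.shape[0]
    # RBF kernel matrix (one common choice; the framework is kernel-agnostic)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    classes = np.unique(y)
    # Spectral targets: for the supervised class graph, the useful
    # eigenvectors span the space of class-indicator vectors, so use
    # centered, orthonormalized indicators as regression responses.
    Y = np.stack([(y == c).astype(float) for c in classes], axis=1)
    Y = Y - Y.mean(axis=0)            # remove the trivial all-ones direction
    Q, _ = np.linalg.qr(Y)            # orthonormal response vectors
    Q = Q[:, : len(classes) - 1]      # c - 1 informative responses

    # Regularized regression in place of eigen-decomposition:
    # a single linear solve with multiple right-hand sides.
    A = np.linalg.solve(K + delta * np.eye(n), Q)
    return A, K

def srkda_project(A, K_new):
    """Project samples given their kernel values against the training set."""
    return K_new @ A
```

For c classes this yields c - 1 projective functions, mirroring the dimensionality of KDA/LDA embeddings; the regularizer `delta` plays the role of the ridge penalty that the regression framework makes straightforward to incorporate.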