Feature extraction is an effective tool in data mining and machine learning, and many feature extraction methods have been investigated recently. However, few of them produce orthogonal components. Non-orthogonal components distort the metric structure of the original data space and contain redundant information. In this paper, we propose a feature extraction method, named incremental orthogonal basis analysis (IOBA), to address these challenges. First, IOBA learns components that are orthogonal not only theoretically but also numerically. Second, we propose an innovative training-data selection scheme that helps IOBA extract numerically orthogonal components from the training patterns. Third, a self-adaptive threshold technique frees IOBA from requiring any prior knowledge of the number of components. Moreover, since IOBA solves no eigenvalue or eigenvector problems, it both avoids heavy computational loads and sidesteps ill-conditioning. Experimental results demonstrate the efficiency of the proposed IOBA.
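The abstract does not give the algorithm itself, but the behavior it describes — incrementally accumulating numerically orthogonal components, choosing their number with a self-adaptive threshold, and never forming an eigenproblem — can be illustrated with a minimal sketch. The sketch below uses repeated Gram–Schmidt deflation: a training pattern whose residual (after projection onto the current basis) is large relative to its own norm contributes a new basis vector. The function name, the relative-threshold rule, and the single re-orthogonalization pass are illustrative assumptions, not the authors' actual IOBA procedure.

```python
import numpy as np

def incremental_orthogonal_basis(X, rel_threshold=0.1):
    """Incrementally build an orthonormal basis from the rows of X.

    Assumed stand-in for IOBA-style extraction: a sample is accepted as a
    new component only when its residual, after removing the parts already
    spanned by the basis, exceeds rel_threshold times the sample's norm.
    The number of components is thus chosen adaptively, and no eigenvalue
    or eigenvector problem is ever solved.
    """
    basis = []  # list of orthonormal 1-D arrays
    for x in X:
        x = np.asarray(x, dtype=float)
        r = x.copy()
        for q in basis:
            r -= np.dot(q, x) * q   # deflate components along existing basis
        for q in basis:
            r -= np.dot(q, r) * q   # one re-orthogonalization pass for
                                    # numerical (not just theoretical) orthogonality
        nrm = np.linalg.norm(r)
        if nrm > rel_threshold * max(np.linalg.norm(x), 1e-12):
            basis.append(r / nrm)   # residual is significant: keep it
    return np.array(basis)

# Usage: data lying exactly in a 2-D subspace of R^4.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((50, 2))
directions = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0]])
X = coeffs @ directions
Q = incremental_orthogonal_basis(X)
```

Here `Q` ends up with exactly two rows, and `Q @ Q.T` is the identity to machine precision, illustrating the two properties the abstract emphasizes: the component count is discovered from the data, and the components are numerically orthogonal.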