Extracting features from high-dimensional data is a critically important task for pattern recognition and machine learning applications. High-dimensional data typically have many more variables than observations and often contain significant noise, missing components, or outliers. Features extracted from such data should be discriminative, sparse, and able to capture the essential characteristics of the data. In this paper, we present a method for constructing multivariate features and then classifying the data into the proper classes. The resulting small subset of features is nearly optimal in the sense of Greenshtein's persistence; however, the estimated feature weights may be biased. We take a systematic approach to correcting these biases. For large-scale problems, we use conjugate gradient-based primal-dual interior-point techniques. We apply our procedure to microarray gene analysis, and experimental results confirm the effectiveness of our method.
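As a rough illustration of the sparse-selection-then-debiasing idea described above (a generic two-stage sketch, not the paper's actual algorithm), one can select a small feature subset with an l1 penalty and then refit an unpenalized model on the chosen features to reduce the shrinkage bias in their weights. All names and parameter values below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)

# High-dimensional setting: far more variables (p) than observations (n),
# with a small true support of k features.
n, p, k = 50, 200, 5
X = rng.standard_normal((n, p))
true_coef = np.zeros(p)
true_coef[:k] = 3.0
y = X @ true_coef + 0.5 * rng.standard_normal(n)

# Stage 1: the l1-penalized fit selects a sparse feature subset, but the
# shrinkage penalty biases the surviving weights toward zero.
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

# Stage 2: refit ordinary least squares on the selected subset only,
# which removes the l1 shrinkage bias from the retained weights.
ols = LinearRegression().fit(X[:, selected], y)
debiased = np.zeros(p)
debiased[selected] = ols.coef_
```

The refit step is one simple way to correct the bias of penalized estimates; the paper's own correction is more systematic, and for truly large problems the optimization would be handled by interior-point solvers rather than coordinate descent.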