Support vector machines (SVMs) and regularized least squares (RLS) are two recent, promising techniques for classification. SVMs implement the structural risk minimization principle and use the kernel trick to extend the approach to the non-linear case. RLS, by contrast, minimizes a regularized functional directly in a reproducing kernel Hilbert space defined by a kernel. While both methods have a sound mathematical foundation, RLS is strikingly simple to implement, whereas SVMs generally yield sparse solutions. Moreover, the performance of SVMs has been well documented empirically, while far less is known about RLS in practice. This paper applies both techniques to a collection of data sets and presents results demonstrating virtually identical performance by the two methods.
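To make the comparison concrete, the following is a minimal sketch, not the paper's original experiments, of how one might contrast the two classifiers on a single data set. It assumes scikit-learn is available and uses a synthetic data set in place of the paper's benchmark collection; RLS is realized here as kernel ridge regression on ±1 labels, with the sign of the prediction as the class decision.

```python
# Minimal sketch (assumption: scikit-learn; synthetic data stands in for
# the paper's benchmark collection) comparing an RBF-kernel SVM with
# regularized least squares on the same train/test split.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# SVM: hinge loss with margin maximization, RBF kernel.
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
svm_acc = accuracy_score(y_test, svm.predict(X_test))

# RLS: squared loss on +/-1 labels in the same RKHS; classify by sign.
rls = KernelRidge(kernel="rbf", alpha=1.0).fit(X_train, 2 * y_train - 1)
rls_acc = accuracy_score(y_test, (rls.predict(X_test) > 0).astype(int))

print(f"SVM accuracy: {svm_acc:.3f}")
print(f"RLS accuracy: {rls_acc:.3f}")
```

Note the asymmetry the abstract points to: the SVM solution is sparse (only support vectors carry nonzero coefficients), while the RLS solution is dense but reduces to solving a single regularized linear system.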