Similarity matrices generated in many applications may not be positive semidefinite, and hence cannot be used directly within the kernel machine framework. In this paper, we study the problem of training support vector machines with an indefinite kernel. We consider a regularized SVM formulation in which the indefinite kernel matrix is treated as a noisy observation of some unknown positive semidefinite matrix (the proxy kernel), and the support vectors and the proxy kernel are computed simultaneously. We propose a semi-infinite quadratically constrained linear program formulation for this optimization, which can be solved iteratively to obtain a globally optimal solution. We further propose an additional pruning strategy that significantly improves the efficiency of the algorithm while retaining its convergence property. In addition, we show a close relationship between the proposed formulation and multiple kernel learning. Experiments on a collection of benchmark data sets demonstrate the efficiency and effectiveness of the proposed algorithm.
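The paper optimizes the proxy kernel jointly with the SVM. As a simpler, standalone illustration of the underlying idea, the sketch below replaces an indefinite similarity matrix with its nearest positive semidefinite counterpart by clipping negative eigenvalues (a common one-shot heuristic, not the paper's iterative algorithm; the sample matrix is hypothetical):

```python
import numpy as np

def psd_proxy(K, tol=0.0):
    """Return a PSD proxy for an indefinite similarity matrix
    by clipping its negative eigenvalues (spectrum clip)."""
    K = (K + K.T) / 2.0               # enforce symmetry
    w, V = np.linalg.eigh(K)          # eigendecomposition
    w = np.clip(w, tol, None)         # zero out negative eigenvalues
    return (V * w) @ V.T              # reassemble the matrix

# A small hypothetical similarity matrix; its smallest
# eigenvalue is negative, so it is indefinite.
K = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])
K_psd = psd_proxy(K)
```

The resulting `K_psd` is a valid kernel matrix and can be fed to any kernel machine that accepts a precomputed kernel; the joint formulation in the paper instead regularizes the distance between the proxy and the observed matrix while training the SVM.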