In pattern classification, two-way data (feature matrices) must be handled efficiently while preserving their two-way structure, such as spatio-temporal relationships. A classifier for feature matrices is generally formulated as a sum of bilinear forms, which together define a weight matrix. The rank of this matrix, i.e., the number of bilinear forms, should be low for both generalization performance and computational cost. To this end, we propose a low-rank bilinear classifier based on efficient optimization. The classifier is learned by minimizing the trace norm of its weight matrix, which promotes rank reduction, and thus an efficient classifier, without imposing any hard constraint on the rank. We formulate the optimization problem in a tractable convex form and propose a procedure that solves it efficiently and attains the global optimum. In addition, through a kernel-based extension of the bilinear method, we derive a novel multiple kernel learning (MKL) scheme, called heterogeneous MKL. It combines inter-kernels between heterogeneous types of features with the ordinary kernels within homogeneous features into a new discriminative kernel in a unified manner via the bilinear model. In experiments on various classification problems using feature arrays, co-occurrence feature matrices, and multiple kernels, the proposed method exhibits favorable performance compared to other methods.
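To make the core idea concrete: a sum of K bilinear forms, sum_k u_k^T X v_k, equals tr(W^T X) with W = sum_k u_k v_k^T of rank at most K, so penalizing the trace norm (the sum of singular values) of W softly encourages few bilinear forms. The sketch below is not the authors' algorithm; it is a minimal illustration of trace-norm-regularized bilinear classification via proximal gradient descent, where the proximal operator of the trace norm is singular value soft-thresholding. The squared-hinge surrogate loss, the helper names (svt, train_bilinear), and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def svt(W, tau):
    # Singular value soft-thresholding: prox of tau * trace norm.
    # This is an assumed standard choice, not the paper's exact solver.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def train_bilinear(Xs, ys, lam=0.1, step=0.01, iters=500):
    # Proximal gradient for: mean squared-hinge loss + lam * ||W||_*.
    # Xs: sequence of d1 x d2 feature matrices; ys: labels in {-1, +1}.
    d1, d2 = Xs[0].shape
    W, b = np.zeros((d1, d2)), 0.0
    n = len(Xs)
    for _ in range(iters):
        gW, gb = np.zeros_like(W), 0.0
        for X, y in zip(Xs, ys):
            margin = y * (np.sum(W * X) + b)      # y * (tr(W^T X) + b)
            if margin < 1.0:                      # squared-hinge gradient
                gW += -2.0 * (1.0 - margin) * y * X
                gb += -2.0 * (1.0 - margin) * y
        W = svt(W - step * gW / n, step * lam)    # gradient step + prox
        b -= step * gb / n
    return W, b

# Toy usage: random 8x6 feature matrices, labels from a planted rank-1 W.
rng = np.random.default_rng(0)
W_true = np.outer(rng.standard_normal(8), rng.standard_normal(6))
Xs = [rng.standard_normal((8, 6)) for _ in range(200)]
ys = np.sign([np.sum(W_true * X) for X in Xs])
W, b = train_bilinear(Xs, ys)
print("learned rank:", np.linalg.matrix_rank(W, tol=1e-3))
```

With a planted low-rank ground truth, the soft-thresholding step typically drives the learned W toward low rank, mirroring the abstract's point that the trace norm reduces rank without a hard rank constraint.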