In recent years, more and more high-throughput data sources useful for protein complex prediction have become available (e.g., gene sequence, mRNA expression, and interactions). The integration of these different data sources can be challenging. Recently, it has been recognized that kernel-based classifiers are well suited for this task. However, the different kernels (data sources) are often combined using equal weights. Although several methods have been developed to optimize kernel weights, no large-scale example of an improvement in classifier performance has been shown yet. In this work, we employ an evolutionary algorithm to determine weights for a larger set of kernels by optimizing a criterion based on the area under the ROC curve. We show that setting the right kernel weights can indeed improve performance. We compare this to the existing kernel weight optimization methods (i.e., (regularized) optimization of the SVM criterion or aligning the kernel with an ideal kernel) and find that these do not result in a significant performance improvement and can even cause a decrease in performance. Results also show that an expert approach of assigning high weights to features with high individual performance is not necessarily the best strategy.
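The core idea above, combining base kernels as K = Σ_i w_i·K_i and tuning the weights w with an evolutionary algorithm against an AUC-based criterion, can be sketched in a few lines. Everything below is illustrative, not the authors' actual setup: the synthetic data, the three RBF bandwidths, the kernel nearest-mean scorer (a lightweight stand-in for an SVM), and the simple (1+1)-evolution strategy are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, gamma):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def auc(scores, labels):
    # Area under the ROC curve via the Mann-Whitney rank-sum statistic.
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy two-class data standing in for the heterogeneous data sources.
X = rng.normal(size=(80, 5))
y = np.array([1] * 40 + [0] * 40)
X[y == 1] += 0.7
idx = rng.permutation(80)
train, val = idx[:60], idx[60:]

# Hypothetical base kernels: one RBF kernel per bandwidth.
gammas = [0.01, 0.1, 1.0]
K_val = [rbf_kernel(X[val], X[train], g) for g in gammas]

def fitness(w):
    # Validation AUC of a kernel nearest-mean scorer under the
    # combined kernel sum_i w_i K_i (a stand-in for the SVM).
    K = sum(wi * Ki for wi, Ki in zip(w, K_val))
    pos, neg = y[train] == 1, y[train] == 0
    scores = K[:, pos].mean(axis=1) - K[:, neg].mean(axis=1)
    return auc(scores, y[val])

# (1+1)-evolution strategy over the kernel weight simplex.
w = np.ones(len(gammas)) / len(gammas)
best = fitness(w)
for _ in range(200):
    cand = np.clip(w + rng.normal(scale=0.1, size=w.shape), 0.0, None)
    if cand.sum() == 0:
        continue
    cand /= cand.sum()       # keep weights non-negative and summing to one
    f = fitness(cand)
    if f >= best:            # accept only if validation AUC does not drop
        w, best = cand, f
```

In a setting closer to the paper's, the combined kernel matrix would be passed to an SVM trained on a precomputed kernel, and the AUC criterion would be estimated by cross-validation rather than a single held-out split.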