Supervised examples and prior knowledge expressed by propositions have been profitably integrated into kernel machines so as to improve the performance of classifiers in different real-world contexts. In this paper, using arguments from variational calculus, a novel representer theorem is proposed which optimally solves a more general form of the associated regularization problem. In particular, it is shown that the solution is based on box kernels, which arise from combining classic kernels with the constraints expressed in terms of propositions. The effectiveness of this new representation is evaluated on real-world problems of medical diagnosis and image categorization.
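To give a rough intuition of the idea, the sketch below builds a box-like kernel by averaging a classic Gaussian kernel over an axis-aligned box that encodes a propositional constraint (e.g. "feature 1 in [0, 1] and feature 2 in [2, 3]"). This is only an illustration under that assumption, using Monte Carlo averaging; the function names (`rbf_kernel`, `box_kernel`) and the specific construction are hypothetical and are not claimed to match the closed-form box kernels derived in the paper.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Classic Gaussian (RBF) kernel between two points."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def box_kernel(x, box_lo, box_hi, gamma=1.0, n_samples=2000, seed=0):
    """Illustrative 'box kernel': the RBF kernel averaged (by Monte Carlo)
    over the axis-aligned box [box_lo, box_hi] representing a proposition.
    A sketch of the general idea, not the construction used in the paper.
    """
    rng = np.random.default_rng(seed)
    # Sample points uniformly inside the box and average the kernel values.
    samples = rng.uniform(box_lo, box_hi, size=(n_samples, len(box_lo)))
    return np.mean([rbf_kernel(x, z, gamma) for z in samples])

# Example: the rule "x1 in [0, 1] and x2 in [2, 3]" becomes a box,
# and its similarity to a supervised example x is a single kernel value.
x = np.array([0.5, 2.5])
print(box_kernel(x, np.array([0.0, 2.0]), np.array([1.0, 3.0])))
```

In this reading, a proposition contributes to the decision function through kernel evaluations against its box region, in the same way a supervised example contributes through pointwise kernel evaluations.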