In this brief, prior knowledge over general nonlinear sets is incorporated into nonlinear kernel classification problems as linear constraints in a linear program. These linear constraints are imposed at arbitrary points, not necessarily where the prior knowledge is given. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on publicly available classification data sets, including a cancer prognosis data set. Nonlinear kernel classifiers for these data sets exhibit marked improvements upon the introduction of nonlinear prior knowledge compared to nonlinear kernel classifiers that do not utilize such knowledge.
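To make the formulation concrete, the following is a minimal sketch of a 1-norm linear-programming kernel classifier in which prior knowledge is imposed as linear constraints at chosen points. It is an illustration under simplifying assumptions, not the paper's exact formulation: it uses a Gaussian kernel, `scipy.optimize.linprog` as the LP solver, and hard (rather than slack-penalized) knowledge constraints; all function names and parameters (`lp_kernel_classifier`, `nu`, `mu`) are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def gaussian_kernel(X, Y, mu=1.0):
    # K[i, j] = exp(-mu * ||X_i - Y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-mu * d2)

def lp_kernel_classifier(A, d, B=None, d_B=None, nu=1.0, mu=1.0):
    """1-norm LP kernel classifier (sketch).

    A, d  : training points (m x n) and their +/-1 labels.
    B, d_B: optional knowledge points and the labels the prior
            knowledge prescribes there, enforced as hard linear
            constraints in the same LP (a simplification of the
            paper's penalized knowledge constraints).
    """
    m = A.shape[0]
    K = gaussian_kernel(A, A, mu)
    # LP variables, in order: u (m), s (m) with s >= |u| for the
    # 1-norm term, gamma (1), and slacks y (m).
    c = np.concatenate([np.zeros(m), nu * np.ones(m), [0.0], np.ones(m)])
    A_ub, b_ub = [], []
    D = np.diag(d)
    # Margin constraints D(K u - e*gamma) + y >= e, rewritten as
    # -D K u + d*gamma - y <= -e.
    A_ub.append(np.hstack([-D @ K, np.zeros((m, m)), d[:, None], -np.eye(m)]))
    b_ub.append(-np.ones(m))
    # |u| <= s, split into u - s <= 0 and -u - s <= 0.
    A_ub.append(np.hstack([np.eye(m), -np.eye(m), np.zeros((m, m + 1))]))
    b_ub.append(np.zeros(m))
    A_ub.append(np.hstack([-np.eye(m), -np.eye(m), np.zeros((m, m + 1))]))
    b_ub.append(np.zeros(m))
    if B is not None:
        # Knowledge constraints at arbitrary points B_i:
        # d_B_i * (K(B_i, A) u - gamma) >= 1.
        k = len(B)
        KB = gaussian_kernel(B, A, mu)
        A_ub.append(np.hstack([-np.diag(d_B) @ KB, np.zeros((k, m)),
                               d_B[:, None], np.zeros((k, m))]))
        b_ub.append(-np.ones(k))
    bounds = ([(None, None)] * m + [(0, None)] * m
              + [(None, None)] + [(0, None)] * m)
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=bounds, method="highs")
    u, gamma = res.x[:m], res.x[2 * m]
    # Classifier: sign(K(x, A) u - gamma)
    return lambda X: np.sign(gaussian_kernel(X, A, mu) @ u - gamma)
```

Because the knowledge constraints are linear in `u` and `gamma`, adding them does not change the problem class: the classifier with or without prior knowledge is obtained from a single linear program, which is the practical payoff of converting the nonlinear implications into linear inequalities.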