The generalization error is a widely used performance measure in the analysis of adaptive learning systems, and it generally depends critically on the knowledge the system is given about the problem it is trying to learn. In this paper we examine to what extent an increase in that knowledge necessarily reduces the generalization error. Using the standard definition of the generalization error, we present simple cases in which the intuitive idea of "reducivity" (that more knowledge improves generalization) does not hold. Under a simple approximation, however, we find conditions under which "reducivity" is satisfied. Finally, we calculate the effect of a specific constraint on the generalization error of the linear perceptron, in which the signs of the weight components are fixed. This restriction yields a significant improvement in generalization performance.
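The sign-constrained perceptron can be illustrated with a small teacher-student simulation. The sketch below is not the paper's exact calculation (the abstract does not specify the training procedure); it assumes a Gaussian-input teacher-student setup, a minimum-norm least-squares student as the unconstrained baseline, and projected gradient descent that clips each weight back to the teacher's known sign as the constrained student. All names (`w_teacher`, `gen_error`, the learning rate, and the problem sizes) are illustrative choices, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test = 20, 15, 2000

# Teacher: a linear perceptron with random weights; its sign pattern
# plays the role of the "prior knowledge" given to the constrained student.
w_teacher = rng.standard_normal(d)
signs = np.sign(w_teacher)

def make_data(n):
    X = rng.standard_normal((n, d))
    return X, X @ w_teacher

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Unconstrained student: minimum-norm least squares
# (the problem is under-determined since n_train < d).
w_free = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]

# Sign-constrained student: projected gradient descent on squared error,
# clipping each component back to the teacher's known sign after each step.
w_con = signs * np.abs(rng.standard_normal(d))
lr = 0.01
for _ in range(2000):
    grad = X_tr.T @ (X_tr @ w_con - y_tr) / n_train
    w_con -= lr * grad
    w_con = signs * np.clip(signs * w_con, 0.0, None)  # enforce sign constraint

def gen_error(w):
    """Empirical generalization error: mean squared error on fresh test data."""
    return np.mean((X_te @ w - y_te) ** 2)

print(f"unconstrained generalization error:    {gen_error(w_free):.3f}")
print(f"sign-constrained generalization error: {gen_error(w_con):.3f}")
```

With few training examples relative to the dimension, the sign constraint shrinks the set of candidate solutions, so the constrained student typically (though not on every random draw) generalizes better, in the spirit of the improvement the paper reports.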