On the Bayes fusion of visual features
Image and Vision Computing
Invariant features or operators are often used to shield the recognition process from the effect of "nuisance" parameters such as rotations, foreshortening, or illumination changes. From an information-theoretic point of view, imposing invariance results in reduced, rather than improved, system performance. When the training sample is small, however, the situation is reversed, and invariant operators may in fact lower the misclassification rate. We propose an analysis of this behavior based on the bias-variance dilemma and present experimental results that confirm our theoretical expectations. In addition, we introduce the concept of "randomized invariants" for training, which can be used to mitigate the effect of small sample size.
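The analysis rests on the bias-variance dilemma. For reference only (the paper's setting is classification, where 0/1-loss analogues of the decomposition apply), the classical squared-loss form for an estimator trained on a random sample D is

  \mathbb{E}_{D}\bigl[(\hat f(x;D)-f(x))^{2}\bigr]
    = \underbrace{\bigl(\mathbb{E}_{D}[\hat f(x;D)]-f(x)\bigr)^{2}}_{\text{bias}^{2}}
    + \underbrace{\mathbb{E}_{D}\bigl[(\hat f(x;D)-\mathbb{E}_{D}[\hat f(x;D)])^{2}\bigr]}_{\text{variance}}

with the intuition that imposing invariance discards information (raising bias) but also shrinks the effective hypothesis space (lowering variance), a trade that can pay off when the training sample is small.

The "randomized invariants" idea can be read as training-time randomization over the nuisance transformation: rather than mapping every image through an invariant operator, each training sample is replicated under random draws of the transformation so the classifier sees the nuisance variability directly. The sketch below assumes in-plane rotation as the nuisance parameter; the function name, its parameters, and the use of scipy are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.ndimage import rotate  # nuisance transformation: in-plane rotation

def randomized_invariants(images, labels, n_copies=5, max_angle=30.0, seed=None):
    """Augment a small training set with randomly rotated copies of each sample.

    Instead of extracting rotation-invariant features, the classifier is
    trained on random instances of the nuisance transformation, targeting
    the variance component of the error in the small-sample regime.
    """
    rng = np.random.default_rng(seed)
    aug_x, aug_y = [], []
    for img, lab in zip(images, labels):
        aug_x.append(img)                      # keep the original sample
        aug_y.append(lab)
        for _ in range(n_copies):              # add randomly transformed copies
            angle = rng.uniform(-max_angle, max_angle)
            aug_x.append(rotate(img, angle, reshape=False, mode="nearest"))
            aug_y.append(lab)
    return np.stack(aug_x), np.asarray(aug_y)

Any off-the-shelf classifier can then be fit on the augmented set, e.g. X_aug, y_aug = randomized_invariants(X_train, y_train), followed by clf.fit(X_aug.reshape(len(X_aug), -1), y_aug).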