Synthesizing Statistical Knowledge from Incomplete Mixed-Mode Data. IEEE Transactions on Pattern Analysis and Machine Intelligence.
What size net gives valid generalization? Neural Computation.
Practical Neural Network Recipes in C++.
Approximation and Estimation Bounds for Artificial Neural Networks. Machine Learning (special issue on computational learning theory).
Neural Networks: A Comprehensive Foundation.
Feature Selection via Discretization. IEEE Transactions on Knowledge and Data Engineering.
Bounds on the number of hidden neurons in three-layer binary neural networks. Neural Networks.
Data Mining and Knowledge Discovery Handbook.
Generalization and Selection of Examples in Feedforward Neural Networks. Neural Computation.
Data Mining: Practical Machine Learning Tools and Techniques, Second Edition (Morgan Kaufmann Series in Data Management Systems).
Optimizing number of hidden neurons in neural networks. AIAP'07: Proceedings of the 25th IASTED International Multi-Conference on Artificial Intelligence and Applications.
Neural network architecture selection: can function complexity help? Neural Processing Letters.
ChiMerge: discretization of numeric attributes. AAAI'92: Proceedings of the Tenth National Conference on Artificial Intelligence.
ISNN'06: Proceedings of the Third International Conference on Advances in Neural Networks, Volume Part I.
Generalization properties of modular networks: implementing the parity function. IEEE Transactions on Neural Networks.
This paper studies the extension of the Generalization Complexity (GC) measure to real-valued input problems. The GC measure, defined in Boolean space, was proposed as a simple tool to estimate the generalization ability that can be obtained when a data set is learned by a neural network. Using two different discretization methods, the real-valued inputs are transformed into binary values, from which the generalization complexity can be computed straightforwardly. The discretization is carried out both with a very simple method based on equal-width intervals (EW) and with a more sophisticated supervised method (the CAIM algorithm) that uses much more information about the data. The relationship between data complexity and the generalization ability obtained is studied, together with the relationship between the best neural network architecture size and complexity.
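The equal-width (EW) step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name is hypothetical, and it assumes that using two intervals per feature reduces each real-valued feature to a single bit, yielding a Boolean data set on which a Boolean-space measure such as GC can then be evaluated.

```python
import numpy as np

def equal_width_binarize(X, n_bins=2):
    """Discretize each real-valued column of X into n_bins equal-width
    intervals, returning integer bin indices in [0, n_bins - 1].
    With n_bins=2 every feature collapses to a single binary value."""
    X = np.asarray(X, dtype=float)
    lo = X.min(axis=0)
    hi = X.max(axis=0)
    width = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    bins = np.floor((X - lo) / width * n_bins).astype(int)
    # The maximum of each column would land in bin n_bins, so clip it back.
    return np.clip(bins, 0, n_bins - 1)

# Three samples with two real-valued features, mapped to binary values.
X = [[0.1, 5.0], [0.9, 1.0], [0.4, 3.0]]
print(equal_width_binarize(X, n_bins=2))  # prints [[0 1] [1 0] [0 1]]
```

A supervised discretizer such as CAIM would instead place the interval boundaries using the class labels, rather than splitting each feature's range uniformly as done here.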