Neural-network construction and selection in nonlinear modeling
IEEE Transactions on Neural Networks
This paper presents a model selection procedure that stresses the importance of classic polynomial models, both as tools for evaluating the complexity of a given modeling problem and for removing non-significant input variables. If the complexity of the problem makes a neural network necessary, the selection among neural candidates can be performed in two phases. In the additive phase, the more important of the two, candidate neural networks with an increasing number of hidden neurons are trained. The addition of hidden neurons is stopped when the effect of round-off errors becomes significant, so that, for instance, confidence intervals can no longer be estimated accurately. This phase yields a set of approved candidate networks. In the subsequent subtractive phase, a selection among the approved networks is performed using statistical Fisher tests. The series of tests starts from a possibly oversized unbiased network (the full network) and ends with the smallest unbiased network whose input variables and hidden neurons all contribute significantly to the regression estimate. The method was successfully tested on the real-world regression problems proposed at the NIPS2000 Unlabeled Data Supervised Learning Competition; two of those problems are included here as illustrative examples.
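The subtractive phase rests on the classical Fisher test for nested least-squares models: the increase in residual sum of squares caused by removing parameters (input variables or hidden neurons) is compared with the residual variance of the larger model. A minimal sketch of that statistic, with illustrative names not taken from the paper:

```python
def fisher_statistic(rss_reduced, rss_full, n_params_reduced, n_params_full, n_samples):
    """F-statistic for comparing a reduced model nested inside a full model.

    rss_reduced, rss_full       : residual sums of squares of the two fitted models
    n_params_reduced, n_params_full : numbers of free parameters of each model
    n_samples                   : number of training examples
    """
    extra = n_params_full - n_params_reduced   # parameters removed in the reduced model
    dof_full = n_samples - n_params_full       # residual degrees of freedom of the full model
    return ((rss_reduced - rss_full) / extra) / (rss_full / dof_full)

# Hypothetical example: a full network with 20 parameters against a reduced
# one with 15, both fitted on 100 examples.
F = fisher_statistic(rss_reduced=12.0, rss_full=10.0,
                     n_params_reduced=15, n_params_full=20, n_samples=100)
```

The reduced model is rejected when `F` exceeds the critical value of the F-distribution with `extra` and `dof_full` degrees of freedom at the chosen risk level (obtained from a table, or e.g. `scipy.stats.f.ppf`).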