Research on improving the performance of feedforward neural networks has concentrated mostly on the optimal setting of initial weights and learning parameters, sophisticated optimization techniques, architecture optimization, and adaptive activation functions. This paper presents an alternative approach in which the neural network dynamically selects training patterns from a candidate training set during training, based on the knowledge the network has already acquired about the target concept. Sensitivity analysis of the neural network output with respect to small input perturbations is used to quantify the informativeness of candidate patterns. Only the most informative patterns, namely those closest to decision boundaries, are selected for training. Experimental results show a significant reduction in training set size without degrading generalization performance or convergence characteristics. This approach to selective learning is then compared to an alternative in which informativeness is measured by the magnitude of the prediction error.
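The selection criterion described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it builds a single-hidden-layer sigmoid network, computes the Jacobian of the output with respect to each input pattern via the chain rule, and keeps only the candidate patterns with the largest sensitivity norm (those nearest a decision boundary, where small input perturbations change the output most). The class and function names (`TinyMLP`, `select_informative`) and the selection fraction are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyMLP:
    """Illustrative single-hidden-layer sigmoid network (untrained here)."""

    def __init__(self, n_in, n_hid, n_out, rng):
        self.W1 = rng.standard_normal((n_in, n_hid)) * 0.5
        self.W2 = rng.standard_normal((n_hid, n_out)) * 0.5

    def forward(self, X):
        self.H = sigmoid(X @ self.W1)        # hidden activations
        self.O = sigmoid(self.H @ self.W2)   # network outputs
        return self.O

    def input_sensitivity(self, X):
        """Frobenius norm of dO/dx for each pattern in X."""
        self.forward(X)
        dO = self.O * (1.0 - self.O)         # sigmoid' at the output layer
        dH = self.H * (1.0 - self.H)         # sigmoid' at the hidden layer
        sens = np.empty(len(X))
        for i in range(len(X)):
            # Jacobian J[k, j] = dO_k/dx_j via the chain rule:
            # J = diag(dO_i) @ W2.T @ diag(dH_i) @ W1.T
            J = (dO[i][:, None] * self.W2.T) @ (dH[i][:, None] * self.W1.T)
            sens[i] = np.linalg.norm(J)
        return sens

def select_informative(net, X_candidates, frac=0.3):
    """Return indices of the `frac` most sensitive candidate patterns."""
    s = net.input_sensitivity(X_candidates)
    k = max(1, int(frac * len(X_candidates)))
    return np.argsort(s)[::-1][:k]           # highest sensitivity first
```

In the selective-learning loop sketched by the paper, a call such as `select_informative(net, candidates)` would run after each training epoch, so the subset presented to the optimizer tracks the network's current decision boundaries rather than being fixed in advance.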