Research on improving the performance of feedforward neural networks has concentrated mostly on optimal settings of initial weights and learning parameters, sophisticated optimization techniques, architecture optimization, and adaptive activation functions. This paper presents an alternative approach in which the neural network dynamically selects training patterns from a candidate training set during training, using the knowledge it has acquired so far about the target concept. Sensitivity analysis of the network output with respect to small input perturbations is used to quantify the informativeness of candidate patterns. Only the most informative patterns, namely those closest to decision boundaries, are selected for training. Experimental results show a significant reduction in training set size without adversely affecting generalization performance or convergence characteristics. This selective-learning approach is then compared with an alternative in which informativeness is measured as the magnitude of the prediction error.
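As a rough illustration of the two selection criteria described above, the sketch below ranks candidate patterns either by the norm of the output-to-input Jacobian (patterns near decision boundaries have highly sensitive outputs) or by prediction-error magnitude. This is a minimal sketch assuming a single-hidden-layer sigmoid MLP; the function names, the 20% selection fraction, and the use of the Frobenius norm are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical sketch of sensitivity-based pattern selection.
# Assumes a single-hidden-layer MLP with sigmoid activations;
# names and parameters are illustrative, not from the paper.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    """Single-hidden-layer MLP: returns hidden activations and outputs."""
    H = sigmoid(X @ W1 + b1)   # (n, hidden)
    Y = sigmoid(H @ W2 + b2)   # (n, out)
    return H, Y

def output_sensitivity(X, W1, b1, W2, b2):
    """Norm of dY/dx per pattern: large values flag patterns whose
    outputs react strongly to small input perturbations, i.e. patterns
    close to a decision boundary."""
    H, Y = forward(X, W1, b1, W2, b2)
    sens = np.empty(X.shape[0])
    for i in range(X.shape[0]):
        # Chain rule: Jacobian of outputs w.r.t. inputs for pattern i.
        dH = (H[i] * (1 - H[i]))[:, None] * W1.T          # (hidden, in)
        dY = (Y[i] * (1 - Y[i]))[:, None] * (W2.T @ dH)   # (out, in)
        sens[i] = np.linalg.norm(dY)
    return sens

def select_informative(X, params, fraction=0.2):
    """Pick the most sensitive `fraction` of the candidate set."""
    sens = output_sensitivity(X, *params)
    k = max(1, int(fraction * len(X)))
    return np.argsort(sens)[-k:]   # indices of the top-k patterns

def select_by_error(X, T, params, fraction=0.2):
    """Alternative criterion: rank patterns by prediction-error magnitude."""
    _, Y = forward(X, *params)
    err = np.linalg.norm(Y - T, axis=1)
    k = max(1, int(fraction * len(X)))
    return np.argsort(err)[-k:]
```

In such a scheme the selection would be repeated each epoch (or at some interval), so that the chosen subset tracks the decision boundaries as they move during training, consistent with the dynamic selection the abstract describes.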