What Inductive Bias Gives Good Neural Network Training Performance?
IJCNN '00: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), Volume 3
The use of prior knowledge to train neural networks for better performance has attracted increased attention. Initial domain theories exist for many machine learning applications, and algorithms for encoding prior knowledge have been constructed for both feedforward and recurrent neural networks. We propose a heuristic for determining the strength of the prior knowledge (inductive bias) for recurrent neural networks whose initial domain knowledge is encoded as a deterministic finite-state automaton (DFA). Our heuristic uses gradient information in weight space, taken in the direction of the prior knowledge, to enhance performance. Tests on known benchmark problems demonstrate that our heuristic reduces training time, on average, by 30% compared to a random choice of the strength of the inductive bias, and that it achieves, on average, near-perfect generalization for that choice of the inductive bias.
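The abstract's idea of using gradient information along the prior-knowledge direction can be illustrated with a toy sketch. This is a hypothetical interpretation, not the paper's exact procedure: the function name `choose_bias_strength`, the candidate-strength grid, and the scoring rule (picking the strength whose loss gradient has the smallest component along the prior-knowledge direction, i.e., the encoding closest to a stationary point along that direction in weight space) are all assumptions for illustration.

```python
import numpy as np

def choose_bias_strength(loss_grad, w_prior, strengths):
    # Hypothetical sketch of the heuristic: evaluate the loss gradient at
    # each candidate encoding H * w_prior and pick the strength H whose
    # gradient component along the (unit) prior-knowledge direction is
    # smallest in magnitude, i.e., the encoding closest to a stationary
    # point of the loss along that direction in weight space.
    unit = w_prior / np.linalg.norm(w_prior)
    scores = [abs(loss_grad(h * w_prior) @ unit) for h in strengths]
    return strengths[int(np.argmin(scores))]

# Toy quadratic loss whose minimum sits at 2 * w_prior, so the heuristic
# should prefer strength H = 2 among the candidates below.
w_prior = np.array([1.0, -1.0, 0.5])
grad = lambda w: w - 2.0 * w_prior   # gradient of 0.5 * ||w - 2*w_prior||^2
H = choose_bias_strength(grad, w_prior, [0.5, 1.0, 2.0, 4.0])
print(H)  # 2.0
```

In a real recurrent network, `w_prior` would be the weight pattern produced by the DFA-encoding algorithm and `loss_grad` the backpropagated gradient of the training loss; the quadratic loss here only stands in to make the sketch self-contained.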