The capacity of the Hopfield associative memory. IEEE Transactions on Information Theory.
On the complexity of loading shallow neural networks. Journal of Complexity, Special Issue on Neural Computation.
Learning from hints in neural networks. Journal of Complexity.
Training a 3-node neural network is NP-complete. COLT '88: Proceedings of the First Annual Workshop on Computational Learning Theory.
Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations.
On the Need for a Neural Abstract Machine. Sequence Learning: Paradigms, Algorithms, and Applications.
The use of neural nets to combine equalization with decoding. ICASSP '93: Proceedings of the 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume I.
The aim of a neural net is to partition the data space into near-optimal decision regions. Learning such a partitioning solely from examples has proven to be a very hard problem (Blum and Rivest 1988; Judd 1988). To remedy this, we use the idea of supplying hints to the network, as discussed by Abu-Mostafa (1990). Hints reduce the solution space and consequently speed up the learning process. Here, the minimum Hamming distance between the patterns serves as the hint. We show how to learn such a hint and how to incorporate it into the learning algorithm, and we suggest modifications to the net structure and its operation that allow for better generalization. The sensitivity to errors in such a hint is studied through simulations.
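The hint described above is the minimum pairwise Hamming distance over the pattern set. As a minimal sketch of how such a quantity could be computed from training examples (the abstract does not give the authors' procedure, and the pattern set below is hypothetical):

```python
from itertools import combinations

def hamming(a, b):
    # Number of positions at which two equal-length binary patterns differ.
    return sum(x != y for x, y in zip(a, b))

def min_hamming_distance(patterns):
    # The candidate hint: the smallest pairwise Hamming distance
    # over all distinct pairs in the pattern set.
    return min(hamming(a, b) for a, b in combinations(patterns, 2))

# Hypothetical 4-bit pattern set, for illustration only.
patterns = [(0, 0, 0, 0), (1, 1, 1, 0), (0, 1, 1, 1)]
print(min_hamming_distance(patterns))  # -> 2
```

A hint of this kind constrains learning: for example, any decoded output that lies closer than half the minimum distance to a stored pattern can be snapped to that pattern, shrinking the space of admissible decision regions.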