Parallel evolutionary training algorithms for "hardware-friendly" neural networks
Natural Computing: an international journal
Evolutionary neural network training algorithms are presented. These algorithms are applied to train neural networks whose weight values are confined to a narrow band of integers: the weights and biases are constrained to the range [-2^(k-1) + 1, 2^(k-1) - 1], for k = 3, 4, 5, so each can be represented by just k bits. Such neural networks are better suited for hardware implementation than their real-weight counterparts. Mathematical operations that are easy to implement in software can be very burdensome, and therefore costly, in hardware. Hardware-friendly algorithms are essential to ensure the functionality and cost effectiveness of the hardware implementation. To this end, in addition to integer weights, the trained neural networks use only threshold activation functions, which makes hardware implementation even easier. These algorithms were designed with the understanding that the resulting integer weights require fewer bits to store and that the digital arithmetic operations between them are easier to implement in hardware. Clearly, if the network is trained in a constrained weight space, smaller weights are found and less memory is required. On the other hand, as we have found here, the network training procedure can be more effective and efficient when larger weights are allowed. Thus, for a given application, a trade-off between effectiveness and memory consumption has to be considered. Our intention is to present results of evolutionary algorithms on this difficult task. Based on applying the proposed class of methods to classical neural network benchmarks, our experience is that these methods are effective and reliable.
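To make the constraint concrete, the following is a minimal sketch (not the paper's actual implementation) of the k-bit weight band, a clipping operator that an evolutionary variation step could use to keep weights inside the band, and a threshold-activation neuron that needs only integer multiplies, adds, and one comparison. All function names are illustrative assumptions.

```python
def weight_band(k):
    # The k-bit integer band [-2**(k-1) + 1, 2**(k-1) - 1] from the abstract;
    # e.g. k = 3 gives [-3, 3], k = 5 gives [-15, 15].
    return -(2 ** (k - 1)) + 1, 2 ** (k - 1) - 1

def clip_weight(w, k):
    # Keep a candidate integer weight inside the band; an evolutionary
    # algorithm could apply this after each mutation/recombination step
    # (one plausible way to enforce the constraint, not the paper's method).
    lo, hi = weight_band(k)
    return max(lo, min(hi, w))

def threshold_neuron(inputs, weights, bias):
    # Threshold activation: fire 1 iff the integer weighted sum is
    # non-negative. Only integer arithmetic and a sign test are needed,
    # which is what makes the network hardware-friendly.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s >= 0 else 0
```

For example, with k = 3 every weight lies in [-3, 3], so a mutated value of 10 is clipped back to 3, and `threshold_neuron([1, 0, 1], [2, -3, 1], -1)` computes the integer sum 2 + 0 + 1 - 1 = 2 and outputs 1.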