Information-Theoretic Competitive Learning with Inverse Euclidean Distance Output Units
Neural Processing Letters
In this paper, we propose a new computational method to accelerate the process of information maximization, together with a new technique, based on the concept of information loss, for extracting important features in input patterns. Information-theoretic competitive learning was proposed to solve fundamental problems of competitive learning, such as the dead-neuron problem, and has many practical applications. However, one of its major problems is that information in competitive units increases slowly, depending on the given problem. To overcome this shortcoming, we propose a new computational method in which maximum information is supposed to be already achieved before learning. This method forces networks to converge much faster. In addition, we propose information loss, defined as the difference in information between an original network and a network with one input unit removed. If the information loss for an input unit is large, that input unit plays a very important role. With forced information and information loss, information-theoretic competitive learning is expected to be applicable to large-scale practical problems.
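The abstract describes two quantities: the information carried by competitive units whose outputs are inverse Euclidean distances, and the information loss obtained by deleting one input unit and re-measuring. The following is a minimal sketch of those quantities, not the authors' implementation; the inverse-distance output form, the uniform input distribution p(s) = 1/S, and the function names are assumptions for illustration.

```python
import numpy as np

def unit_activations(X, W, eps=1e-8):
    """Competitive unit outputs via inverse Euclidean distance,
    normalized to a conditional probability p(j|s) over units.
    X: (S, D) input patterns; W: (M, D) unit weight vectors."""
    # d[s, j] = ||x_s - w_j||; eps avoids division by zero
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    a = 1.0 / (d + eps)
    return a / a.sum(axis=1, keepdims=True)

def unit_information(X, W):
    """Mutual information between inputs and competitive units,
    I = sum_s sum_j p(s) p(j|s) log( p(j|s) / p(j) ),
    assuming a uniform input distribution p(s) = 1/S."""
    p_js = unit_activations(X, W)
    p_j = p_js.mean(axis=0)  # p(j) = sum_s p(s) p(j|s)
    return float(np.mean(np.sum(p_js * np.log(p_js / p_j), axis=1)))

def information_loss(X, W, k):
    """Information loss for input unit k: information of the full
    network minus information of the network with unit k removed."""
    keep = [i for i in range(X.shape[1]) if i != k]
    return unit_information(X, W) - unit_information(X[:, keep], W[:, keep])
```

A large `information_loss(X, W, k)` then marks input unit k as important, in the sense used in the abstract: removing it substantially changes the information attained by the competitive units.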