Information controller to maximize and minimize information
Neural Computation
On different facets of regularization theory
Neural Computation
Information-Theoretic Competitive Learning with Inverse Euclidean Distance Output Units
Neural Processing Letters
Enhancing and Relaxing Competitive Units for Feature Discovery
Neural Processing Letters
Competitive learning by information maximization: eliminating dead neurons in competitive learning
ICANN/ICONIP'03 Proceedings of the 2003 joint international conference on Artificial neural networks and neural information processing
Controlling network complexity in order to prevent overfitting is one of the major problems encountered when using neural network models to extract structure from small data sets. In this paper we present a network architecture designed for use with a cost function that includes a novel complexity penalty term. In this architecture the outputs of the hidden units are strictly positive and sum to one; each output is interpreted as the probability that the current input belongs to a class formed during learning. The penalty term expresses the mutual information between the inputs and the extracted classes. This measure effectively describes the network complexity with respect to the given data in an unsupervised fashion. The efficiency of this architecture and penalty term, when combined with backpropagation training, is demonstrated on a real-world economic time-series forecasting problem. The model was also applied to the benchmark sunspot data and to a synthetic data set from the statistics community.
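As a hedged illustration of the penalty described in the abstract: when the hidden units produce, for each input, a strictly positive class distribution summing to one (e.g. via a softmax), the mutual information between inputs and classes can be estimated from those distributions alone as I(X; C) = H(C) - H(C | X). The sketch below (function name and use of NumPy are illustrative assumptions, not the paper's implementation) computes that estimate from a matrix of per-input class probabilities.

```python
import numpy as np

def mutual_information_penalty(class_probs):
    """Estimate I(X; C) from per-input class probabilities.

    class_probs: (n_samples, n_classes) array; each row is p(c | x),
    strictly positive and summing to one (e.g. softmax outputs).

    I(X; C) = H(C) - H(C | X), where H(C) is the entropy of the
    class marginal averaged over the sample and H(C | X) is the
    mean entropy of the per-input distributions.
    """
    eps = 1e-12  # guard against log(0)
    p_marginal = class_probs.mean(axis=0)                    # p(c)
    h_c = -np.sum(p_marginal * np.log(p_marginal + eps))     # H(C)
    h_c_given_x = -np.mean(
        np.sum(class_probs * np.log(class_probs + eps), axis=1)
    )                                                        # H(C | X)
    return h_c - h_c_given_x
```

The estimate is zero when every input yields the same class distribution (no information extracted) and reaches log(n_classes) when inputs are assigned deterministically and evenly across classes; adding it to the training cost with a weight would penalize (or, with the opposite sign, encourage) class structure in the sense the abstract describes.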