Optimization of entropy with neural networks
Unsupervised Discrimination of Clustered Data via Optimization of Binary Information Gain
Advances in Neural Information Processing Systems 5, [NIPS Conference]
Square Unit Augmented, Radially Extended, Multilayer Perceptrons
Neural Networks: Tricks of the Trade (an outgrowth of a 1996 NIPS workshop)
Recurrent Nets that Time and Count
IJCNN '00: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN'00), Volume 3
Learning to Forget: Continual Prediction with LSTM
Neural Computation
Neural Computation
A Testbed for Neural-Network Models Capable of Integrating Information in Time
Anticipatory Behavior in Adaptive Learning Systems
An unsupervised learning based LSTM model: a new architecture
AMERICAN-MATH'11/CEA'11: Proceedings of the 2011 American Conference on Applied Mathematics and the 5th WSEAS International Conference on Computer Engineering and Applications
While much work has been done on unsupervised learning in feedforward neural network architectures, its potential with (theoretically more powerful) recurrent networks and time-varying inputs has rarely been explored. Here we train Long Short-Term Memory (LSTM) recurrent networks to maximize two information-theoretic objectives for unsupervised learning: Binary Information Gain Optimization (BINGO) and Nonparametric Entropy Optimization (NEO). LSTM learns to discriminate different types of temporal sequences and group them according to a variety of features.
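To make the NEO objective concrete, the sketch below shows a leave-one-out Parzen-window entropy estimate of the kind used in Nonparametric Entropy Optimization: the density at each sample is estimated with a Gaussian kernel over the other samples, and the entropy is the negative mean log-density. This is a minimal illustration under assumed settings (1-D samples, a fixed kernel width `sigma`), not the paper's implementation; in the paper this quantity is computed on LSTM outputs and differentiated through the network.

```python
import numpy as np

def parzen_entropy(x, sigma=0.1):
    """Leave-one-out Parzen-window entropy estimate with a Gaussian kernel.

    x     : 1-D array of samples (e.g. network outputs over a batch)
    sigma : kernel width (an assumed, hand-picked value here)
    """
    n = len(x)
    d = x[:, None] - x[None, :]                           # pairwise differences
    k = np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                              # leave-one-out: exclude self
    p = k.sum(axis=1) / (n - 1)                           # density estimate at each sample
    return -np.mean(np.log(p + 1e-12))                    # empirical entropy

rng = np.random.default_rng(0)
tight = rng.normal(0.0, 0.05, size=200)   # concentrated outputs -> low entropy
spread = rng.normal(0.0, 1.0, size=200)   # dispersed outputs -> high entropy
print(parzen_entropy(tight) < parzen_entropy(spread))
```

Because the estimate is a smooth function of the samples, its gradient with respect to network outputs is well defined, which is what lets an objective like this drive unsupervised training of a recurrent network.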