Learning parameters of linear models in compressed parameter space
ICANN'12: Proceedings of the 22nd International Conference on Artificial Neural Networks and Machine Learning, Part II
We examine two methods for dealing with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the layer's input. Based on this theoretical framework, we use orthogonal functions, and random projections in particular, for compression, and we perform experiments in supervised and reinforcement learning demonstrating that the presented methods significantly reduce training time.
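The stated equivalence can be sketched numerically: if the full weight vector of a linear unit is reconstructed from a small set of compressed parameters via a projection matrix, w = Φα, then the unit's output ⟨Φα, x⟩ equals ⟨α, Φᵀx⟩, i.e. compressing the weights is the same as projecting the input. The following is a minimal NumPy sketch under illustrative assumptions (a random Gaussian Φ stands in for the paper's choice of orthogonal projection; all dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

D, d = 100, 10  # full weight dimension D, compressed parameter dimension d

# Illustrative random projection matrix (columns span the compressed space)
Phi = rng.standard_normal((D, d)) / np.sqrt(d)

alpha = rng.standard_normal(d)   # compressed parameters of one linear unit
x = rng.standard_normal(D)       # input to the layer

# View 1: reconstruct full weights from compressed parameters, then apply
w = Phi @ alpha
y_weights = w @ x

# View 2: project the input into the compressed space, then apply alpha
y_input = alpha @ (Phi.T @ x)

# Both views give the same output up to floating-point error
assert np.allclose(y_weights, y_input)
```

In this view, gradients with respect to α can be obtained by backpropagating through the projected input Φᵀx, so training only d parameters per unit instead of D.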