The balance between computational complexity and architecture bottlenecks the development of Neural Networks (NNs): an architecture that is too large or too small strongly affects both generalization and computational cost. In the past, saliency analysis has been employed to determine the most suitable structure; however, it is time-consuming and its performance is not robust. In this paper, a family of new algorithms for pruning elements (weights and hidden neurons) in Neural Networks is presented, based on Compressive Sampling (CS) theory. The proposed framework makes it possible to locate the significant elements, and hence find a sparse structure, without computing their saliency. Experimental results are presented which demonstrate the effectiveness of the proposed approach.
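The paper's exact pruning algorithms are not reproduced here, but the core idea, selecting the significant hidden elements via a sparse-recovery method rather than per-element saliency scores, can be sketched with orthogonal matching pursuit (OMP), a standard CS recovery algorithm. The function name, toy dimensions, and the choice of OMP below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def omp_select(H, y, k):
    """Greedy orthogonal matching pursuit: pick the k columns of H
    (hidden-unit activations) that best reconstruct the output y,
    returning their indices and the least-squares coefficients.
    Illustrative sketch only, not the paper's exact algorithm."""
    residual = y.copy()
    selected = []
    for _ in range(k):
        # correlate the residual with every column of H
        corr = np.abs(H.T @ residual)
        corr[selected] = -np.inf          # exclude already-chosen units
        selected.append(int(np.argmax(corr)))
        # refit least squares on the chosen columns, update the residual
        coef, *_ = np.linalg.lstsq(H[:, selected], y, rcond=None)
        residual = y - H[:, selected] @ coef
    selected = sorted(selected)
    coef, *_ = np.linalg.lstsq(H[:, selected], y, rcond=None)
    return selected, coef

# Toy example: 6 hidden units, but the target depends on only 2 of them,
# so the other 4 can be pruned without loss.
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 6))     # hidden-layer activations on 100 samples
y = 2.0 * H[:, 1] - 3.0 * H[:, 4]     # output uses units 1 and 4 only
kept, coef = omp_select(H, y, k=2)
print(kept)                           # significant units; prune the rest
```

Here the sparse structure is found by solving a recovery problem over the activations, with no saliency computed for any individual unit, which is the kind of shortcut the abstract describes.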