We discuss pruning as a means of structure learning in independent component analysis (ICA). Learning the structure is attractive both in signal processing and in the analysis of abstract data, where it can aid model interpretation, improve generalization, and reduce computation. We derive the relevant saliency expressions and compare them with magnitude-based pruning and Bayesian sparsification. We show in simulations that pruning can identify underlying structures without prior knowledge of the model's dimensionality. We find that, for ICA, magnitude-based pruning is as efficient as saliency-based and Bayesian methods, for both small and large samples. The Bayesian information criterion (BIC) appears to outperform both AIC and test sets as a tool for determining the optimal dimensionality.
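A minimal sketch of the two ideas named above: magnitude-based pruning (dropping mixing-matrix columns with small norm) and BIC-based selection of the model dimensionality. This is an illustration, not the paper's method: the mixing matrix, threshold, and Gaussian-residual BIC below are assumptions chosen for the example, and a real analysis would obtain the mixing matrix from an ICA estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 observed channels, 4 candidate sources,
# but only the first 2 columns of the true mixing matrix are active.
n_obs, n_src, n_samples = 6, 4, 1000
A_true = np.zeros((n_obs, n_src))
A_true[:, 0] = [1.0, 0.5, -0.3, 0.8, -0.6, 0.4]
A_true[:, 1] = [0.2, -1.0, 0.7, 0.4, 0.9, -0.5]
S = rng.laplace(size=(n_src, n_samples))          # super-Gaussian sources
X = A_true @ S + 0.1 * rng.normal(size=(n_obs, n_samples))

# Stand-in for an ICA estimate of the mixing matrix (truth plus noise).
A_hat = A_true + 0.05 * rng.normal(size=A_true.shape)

def prune_by_magnitude(A, threshold):
    """Drop mixing-matrix columns whose Euclidean norm is below threshold."""
    norms = np.linalg.norm(A, axis=0)
    return A[:, norms >= threshold], norms

def gaussian_bic(X, A):
    """BIC = -2 log L + k log n under an isotropic-Gaussian residual model
    (an illustrative stand-in for the likelihood used in the paper)."""
    n = X.shape[1]
    proj = A @ np.linalg.pinv(A) @ X              # fit within span of A
    sigma2 = np.mean((X - proj) ** 2)
    log_lik = -0.5 * X.size * (np.log(2 * np.pi * sigma2) + 1)
    k = A.size                                    # free mixing parameters
    return -2 * log_lik + k * np.log(n)

A_pruned, norms = prune_by_magnitude(A_hat, threshold=0.5)
print("column norms:", np.round(norms, 2))
print("columns kept:", A_pruned.shape[1])         # the 2 active sources
print("BIC, full model:  ", round(gaussian_bic(X, A_hat)))
print("BIC, pruned model:", round(gaussian_bic(X, A_pruned)))
```

The threshold here is hand-picked; saliency-based pruning would instead rank columns by their estimated effect on the likelihood, and BIC (or AIC, or a test set) would decide where to stop pruning.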