A two-stage clustering-then-ℓ1-optimization approach is often used for sparse component analysis (SCA). The first challenging task in this approach is to estimate the basis matrix by cluster analysis. In this paper, a robust K-hyperline clustering (K-HLC) algorithm is developed for this task. The novelty of our method is that it not only performs hyperline clustering but also detects the number of hidden hyperlines (or sparse components); K-HLC seamlessly integrates hyperline clustering and hyperline-number detection in a single algorithm. In addition, three strategies are proposed to make the procedure robust: (1) reject outliers by overestimating the number of hyperlines; (2) escape from local minima by using a multilayer initialization; and (3) suppress noise by a multilayer K-HLC. With these strategies, the robust K-HLC procedure can be briefly described as follows: first, we overestimate the number of hyperlines; then, a confidence index is computed to evaluate the significance of each hyperline. Subsequently, we determine the number of hyperlines by locating the gap in the sorted confidence indices: hyperlines with large confidence indices are selected in order of rank, and spurious ones with small confidence indices are removed. The high performance of our clustering scheme is illustrated by extensive numerical experiments, including challenging benchmarks such as a very ill-conditioned basis matrix (a Hilbert matrix) and observations contaminated by strong outliers.
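The core clustering stage underlying K-HLC alternates two steps: assign each sample to the hyperline (a line through the origin) with the largest absolute cosine similarity, then re-estimate each hyperline's direction as the principal eigenvector of its cluster's scatter matrix. The following is a minimal sketch of that baseline iteration only, not the paper's robust multilayer procedure with overestimation and confidence indices; the function name and interface are illustrative.

```python
import numpy as np

def k_hyperline(X, k, n_iter=50, seed=0):
    """Fit k unit directions (hyperlines through the origin) to columns of X.

    X : (d, n) data matrix, one sample per column.
    Returns the (d, k) direction matrix W and the (n,) assignment labels.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    # Random unit-norm initial directions.
    W = rng.standard_normal((d, k))
    W /= np.linalg.norm(W, axis=0)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        # Assignment step: each sample joins the hyperline with the
        # largest |inner product| (equivalently, largest |cosine|).
        labels = np.abs(W.T @ X).argmax(axis=0)
        # Update step: each direction becomes the principal eigenvector
        # of its cluster's scatter matrix X_j X_j^T.
        for j in range(k):
            Xj = X[:, labels == j]
            if Xj.shape[1] == 0:
                continue  # empty cluster: keep the previous direction
            _, _, Vt = np.linalg.svd(Xj @ Xj.T)
            W[:, j] = Vt[0]
    return W, labels
```

In the full method sketched in the abstract, one would run this with an overestimated k, score each recovered hyperline with a confidence index (for instance, how dominant the leading eigenvalue of its cluster scatter is — an assumption here, not the paper's exact definition), and keep only the hyperlines above the gap in the sorted indices.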