We consider the problem of convolutive blind source separation of stereo mixtures, in which a pair of microphones records sound sources convolved with the impulse response between each source and sensor. We propose an adaptive stereo basis (ASB) source separation method for such convolutive mixtures, using an adaptive transform whose basis is learned from the stereo mixture pair. The stereo basis vector pairs of the transform are grouped according to the estimated relative delay between the left and right channels of each basis pair, and the sources are then extracted by projecting the transformed signal onto the subspace spanned by each group. The performance of the proposed algorithm is compared with frequency-domain ICA (FD-ICA) and DUET under different reverberation and noise conditions, using both objective distortion measures and formal listening tests. The results indicate that the proposed ASB method is competitive with both algorithms at short and intermediate reverberation times, and offers significantly improved performance at low noise levels and short reverberation times.
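The grouping step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the adaptive basis learning itself is omitted (the basis pairs are taken as given), the delay estimate uses a simple cross-correlation peak, and the grouping uses a 1-D k-means on the estimated delays, which is an assumed stand-in for whatever clustering rule the authors actually use.

```python
import numpy as np

def relative_delay(left_basis, right_basis, max_lag=8):
    """Estimate the relative delay (in samples) of the left basis vector
    with respect to the right one, via the peak of their cross-correlation."""
    xcorr = np.correlate(left_basis, right_basis, mode="full")
    lags = np.arange(-len(right_basis) + 1, len(left_basis))
    # Restrict the search to physically plausible interchannel delays.
    mask = np.abs(lags) <= max_lag
    return lags[mask][np.argmax(np.abs(xcorr[mask]))]

def group_basis_pairs(basis_pairs, n_sources=2, max_lag=8):
    """Group stereo basis vector pairs by estimated interchannel delay,
    using a simple 1-D k-means (hypothetical stand-in for the paper's rule)."""
    delays = np.array(
        [relative_delay(l, r, max_lag) for l, r in basis_pairs], dtype=float
    )
    # 1-D k-means on the scalar delays.
    centers = np.linspace(delays.min(), delays.max(), n_sources)
    for _ in range(20):
        labels = np.argmin(np.abs(delays[:, None] - centers[None, :]), axis=1)
        for k in range(n_sources):
            if np.any(labels == k):
                centers[k] = delays[labels == k].mean()
    return labels, delays
```

Each source would then be reconstructed by keeping only the transform coefficients belonging to one label group and inverting the transform, i.e. projecting onto that group's subspace.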