Training products of experts by minimizing contrastive divergence. Neural Computation.
Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research.
A variational method for learning sparse and overcomplete representations. Neural Computation.
Learning overcomplete representations. Neural Computation.
Some extensions of score matching. Computational Statistics & Data Analysis.
Variational and stochastic inference for Bayesian source separation. Digital Signal Processing.
Modeling and estimation of dependent subspaces with non-radially symmetric and skewed densities. Proceedings of the 7th International Conference on Independent Component Analysis and Signal Separation (ICA 2007).
Conjugate gamma Markov random fields for modelling nonstationary sources. Proceedings of the 7th International Conference on Independent Component Analysis and Signal Separation (ICA 2007).
Wavelet-based statistical signal processing using hidden Markov models. IEEE Transactions on Signal Processing.
A Bayesian approach for blind separation of sparse sources. IEEE Transactions on Audio, Speech, and Language Processing.
Informed source separation through spectrogram coding and data embedding. Signal Processing.
In many audio processing tasks, such as source separation, denoising, or compression, it is crucial to construct realistic and flexible models that capture the physical properties of audio signals. In the Bayesian framework, this can be accomplished through the use of appropriate prior distributions. In this paper, we describe a class of prior models called Gamma Markov random fields (GMRFs) to model the sparsity and the local dependency of the energies (i.e., variances) of time-frequency expansion coefficients. A GMRF model describes a non-normalized joint distribution over unobserved variance variables; given the field, the actual source coefficients are independent. Our construction ensures a positive coupling between the variance variables, so that signal energy changes smoothly over both axes, capturing temporal and spectral continuity. The coupling strength is controlled by a set of hyperparameters. Inference in the overall model is convenient because every variable in the model is conditionally conjugate, but automatic optimization of the hyperparameters is crucial to obtain good fits. The marginal likelihood of the model is not available because of the intractable normalizing constant of GMRFs. In this paper, we optimize the hyperparameters of our GMRF-based audio model using contrastive divergence and compare this method to alternatives such as score matching and pseudolikelihood maximization where applicable. We present the performance of the GMRF models in denoising and single-channel source separation problems in completely blind scenarios, where all the hyperparameters are jointly estimated given only the audio data.
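To make the generative structure concrete, the following is a minimal illustrative sketch, not the paper's exact GMRF construction: it couples variance variables along time with a gamma Markov chain (one chain per frequency band, so only the temporal coupling is shown), and then draws conditionally independent Gaussian coefficients given the variance field. The grid sizes and the shape parameter `a` are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

F, T = 8, 64   # frequency bins, time frames (illustrative sizes)
a = 20.0       # coupling hyperparameter: larger -> smoother energy over time

# Positively coupled variance variables: a gamma Markov chain per band.
# With scale = v[t-1]/a we get E[v_t | v_{t-1}] = v_{t-1}, and the
# conditional variance shrinks as the coupling parameter `a` grows.
v = np.empty((F, T))
v[:, 0] = rng.gamma(shape=1.0, scale=1.0, size=F)
for t in range(1, T):
    v[:, t] = rng.gamma(shape=a, scale=v[:, t - 1] / a)

# Given the variance field, the time-frequency expansion coefficients
# are independent zero-mean Gaussians with local variance v[f, t].
s = rng.standard_normal((F, T)) * np.sqrt(v)
```

Increasing `a` strengthens the positive coupling and yields smoother energy trajectories; the full model in the paper additionally couples variances across the frequency axis.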