We address the minimization of regularized convex cost functions that are customarily used for edge-preserving restoration and reconstruction of signals and images. To accelerate computation, the multiplicative and the additive half-quadratic reformulations of the original cost function were pioneered in Geman and Reynolds [IEEE Trans. Pattern Anal. Machine Intelligence, 14 (1992), pp. 367--383] and Geman and Yang [IEEE Trans. Image Process., 4 (1995), pp. 932--946]. The alternating minimization of the resultant (augmented) cost functions has a simple explicit form. The goal of this paper is to provide a systematic analysis of the convergence rate achieved by these methods. For both the multiplicative and the additive half-quadratic regularizations, we derive upper bounds on their root-convergence factors. The bound for the multiplicative form is always smaller than the bound for the additive form. Experiments show that the number of iterations required for convergence is always smaller for the multiplicative form than for the additive form. However, the computational cost of each iteration is much higher for the multiplicative form. The overall assessment is that minimization using the additive form of half-quadratic regularization is faster than using the multiplicative form; when the additive form is applicable, it is therefore recommended. Extensive experiments demonstrate that, in our MATLAB implementation, both methods are substantially faster (in terms of computational time) than the standard MATLAB Optimization Toolbox routines used in our comparison study.
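To illustrate the alternating scheme the abstract refers to, below is a minimal Python sketch of the *additive* (Geman–Yang) half-quadratic method, not the paper's actual code. It assumes the simplest setting: 1-D denoising (observation operator A = I), a forward-difference operator D, and the edge-preserving potential phi(t) = sqrt(t^2 + eps^2). The function name, parameters, and defaults are illustrative choices. The sketch makes visible why each additive-form iteration is cheap: the auxiliary variable has a closed-form update, and the linear system in the signal update has a fixed matrix, so it can be factored once. In the multiplicative (Geman–Reynolds) form, by contrast, the system matrix depends on the current weights and changes every iteration.

```python
import numpy as np

def hq_additive_denoise(y, beta=0.3, eps=0.05, n_iter=100):
    """Additive (Geman-Yang) half-quadratic denoising of a 1-D signal.

    Minimizes  J(x) = 0.5*||x - y||^2 + beta * sum_i phi((Dx)_i),
    phi(t) = sqrt(t^2 + eps^2), via alternating minimization of the
    augmented cost
      J(x, b) = 0.5*||x - y||^2
                + beta * sum_i [ (a/2)*((Dx)_i - b_i)^2 + psi(b_i) ],
    where a >= sup|phi''| = 1/eps makes the b-step exact.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # forward differences, shape (n-1, n)
    a = 1.0 / eps                         # a >= sup |phi''|
    # Fixed x-step matrix: factor/invert it ONCE -- the key advantage
    # of the additive form (use a Cholesky factorization in practice).
    M_inv = np.linalg.inv(np.eye(n) + beta * a * D.T @ D)
    x = y.copy()
    for _ in range(n_iter):
        t = D @ x
        # b-step: closed-form minimizer b = t - phi'(t)/a, componentwise
        b = t - (t / np.sqrt(t**2 + eps**2)) / a
        # x-step: solve (I + beta*a*D^T D) x = y + beta*a*D^T b
        x = M_inv @ (y + beta * a * D.T @ b)
    return x
```

Each iteration thus costs one componentwise shrinkage-like update plus one back-substitution against a prefactored matrix, which matches the abstract's observation that the additive form has a much lower per-iteration cost than the multiplicative form.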