High-quality non-blind motion deblurring

  • Authors:
  • Chao Wang, LiFeng Sun, ZhuoYuan Chen, ShiQiang Yang, JianWei Zhang

  • Affiliations:
  • Computer Science, Tsinghua Univ., Tsinghua National Laboratory for Information Science and Technology, Key Laboratory of Media and Networking, MOE-Microsoft, China (Chao Wang, LiFeng Sun, ZhuoYuan Chen, ShiQiang Yang); Department of Informatics, Hamburg Univ., Germany (JianWei Zhang)

  • Venue:
  • ICIP '09: Proceedings of the 16th IEEE International Conference on Image Processing
  • Year:
  • 2009

Abstract

Traditional non-blind motion deblurring methods are sensitive to kernel estimation errors and image noise, and therefore suffer from ringing artifacts, amplified image noise, or over-smoothed image details. We introduce a robust non-blind deblurring algorithm that produces high-quality results even on challenging images with noisy kernel estimates. We adopt the Gaussian Scale Mixture Fields of Experts (GSM FoE) model together with a smoothness constraint as the image prior, and use the iteratively re-weighted least-squares (IRLS) algorithm to produce an intermediate result. A residual deconvolution step is then applied to restore the lost image details, and the result is denoised with our std-controlled cross bilateral filter. Experimental results show that our method substantially outperforms previous approaches.
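
The abstract describes a three-stage pipeline: IRLS deconvolution under a learned image prior, residual deconvolution to recover detail, and denoising with a std-controlled cross bilateral filter. The paper itself provides no code, so the Python/NumPy sketch below only illustrates the general shape of the first and last stages under simplifying assumptions: the GSM FoE prior is replaced by a plain sparse-gradient prior, residual deconvolution is omitted, circular boundary handling is assumed, and every function name and parameter value (irls_deconv, cross_bilateral, lam, p, sigma_s, noise_std) is an illustrative choice rather than the authors' actual method.

import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import convolve
from scipy.sparse.linalg import LinearOperator, cg

def conv2_fft(img, psf):
    # Circular 2-D convolution via the FFT, with the (odd-sized) kernel
    # re-centred at the origin so that conv2_fft(x, psf[::-1, ::-1]) acts
    # as the adjoint operator.
    psf_pad = np.zeros_like(img)
    kh, kw = psf.shape
    psf_pad[:kh, :kw] = psf
    psf_pad = np.roll(psf_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(ifft2(fft2(img) * fft2(psf_pad)))

def irls_deconv(blurred, psf, lam=2e-3, p=0.8, n_outer=8, n_cg=30):
    # IRLS deconvolution with a sparse-gradient prior standing in for the
    # GSM FoE model; `blurred` is a float image in [0, 1], `psf` the kernel.
    dx = np.array([[0.0, -1.0, 1.0]])
    dy = dx.T
    x = blurred.copy()
    for _ in range(n_outer):
        # Re-weighting: small gradients get large weights (smoothness),
        # large gradients get small weights (edges are preserved).
        wx = (np.abs(convolve(x, dx, mode='wrap')) + 1e-3) ** (p - 2)
        wy = (np.abs(convolve(x, dy, mode='wrap')) + 1e-3) ** (p - 2)

        def normal_op(v):
            # (K^T K + lam * D^T W D) v, the normal-equation operator.
            im = v.reshape(blurred.shape)
            out = conv2_fft(conv2_fft(im, psf), psf[::-1, ::-1])
            out += lam * convolve(wx * convolve(im, dx, mode='wrap'),
                                  dx[:, ::-1], mode='wrap')
            out += lam * convolve(wy * convolve(im, dy, mode='wrap'),
                                  dy[::-1, :], mode='wrap')
            return out.ravel()

        A = LinearOperator((x.size, x.size), matvec=normal_op)
        b = conv2_fft(blurred, psf[::-1, ::-1]).ravel()   # K^T y
        x, _ = cg(A, b, x0=x.ravel(), maxiter=n_cg)
        x = x.reshape(blurred.shape)
    return x

def cross_bilateral(noisy, guide, sigma_s=2.0, noise_std=0.01, k=2.0):
    # Cross (joint) bilateral filter: spatial weights from pixel distance,
    # range weights from a separate guide image.  Tying the range sigma to
    # an estimate of the noise standard deviation is the "std-controlled"
    # idea; the factor k is an illustrative assumption.
    sigma_r = k * noise_std
    r = int(3 * sigma_s)
    h, w = noisy.shape
    pad_n = np.pad(noisy, r, mode='reflect')
    pad_g = np.pad(guide, r, mode='reflect')
    out = np.zeros_like(noisy)
    norm = np.zeros_like(noisy)
    for oy in range(-r, r + 1):
        for ox in range(-r, r + 1):
            shift_n = pad_n[r + oy: r + oy + h, r + ox: r + ox + w]
            shift_g = pad_g[r + oy: r + oy + h, r + ox: r + ox + w]
            w_s = np.exp(-(ox ** 2 + oy ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-(shift_g - guide) ** 2 / (2 * sigma_r ** 2))
            out += w_s * w_r * shift_n
            norm += w_s * w_r
    return out / norm

In this reading, the per-iteration re-weighting is what lets a quadratic least-squares solver approximate a heavy-tailed prior, and the cross bilateral stage might, for example, use a lightly smoothed version of the deconvolved image as the guide so that noise is suppressed without blurring the recovered edges.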