Image Interpolation Using Enhanced Multiresolution Critical-Point Filters

  • Authors:
  • Kensuke Habuka; Yoshihisa Shinagawa

  • Affiliations:
  • Department of Electrical and Computer Engineering and Beckman Institute, University of Illinois at Urbana-Champaign (both authors)

  • Venue:
  • International Journal of Computer Vision - Special Issue on Computer Vision Research at the Beckman Institute for Advanced Science and Technology
  • Year:
  • 2004

Abstract

There are increasing demands for image interpolation in various fields such as virtual reality, computer animation, and video transmission. The critical-point filters (CPF) we proposed previously enable completely automatic matching of two images. In our previous method, however, computing the optimal parameter values required time-consuming iterative searches. This paper proposes an improved algorithm called the enhanced critical-point filters (ECPF), in which the parameters are computed stably by also using the inverse mapping from the destination image to the source image. Matching images of size 64 × 64 takes only about a second. The precision of the algorithm is also improved by directly handling color images, whereas the previous algorithm took only intensity values into account. We apply ECPF to keyframe interpolation of video sequences. We also apply ECPF to interpolating two different views of an object to generate arbitrary intermediate views. In this case, there are many constraints that can be used to determine the mapping between the images. We propose a method that uses such constraints to improve the accuracy of the mappings. As a result, image-based pseudo-3D models can easily be created from a set of views without any special devices.
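
As background, the sketch below illustrates the kind of multiresolution pyramid on which critical-point filters operate: each coarser level stores four subimages built from 2×2 min/max combinations of the previous level, intended to preserve local minima, maxima, and saddle-like structure. The function name `cpf_pyramid` and the exact filter combinations are illustrative assumptions based on the original CPF formulation; the paper's ECPF contributions (inverse-mapping-based parameter estimation, direct color handling, and view constraints) are not shown.

```python
import numpy as np

def cpf_pyramid(image, levels):
    """Illustrative sketch of a critical-point-filter-style pyramid.

    `image` is assumed to be a square array whose side is a power of two.
    Returns a list of dicts, one per level (finest first), each holding
    four subimages indexed 0..3.
    """
    # Finest level: all four subimages equal the input image.
    pyramids = [{k: image.astype(float) for k in range(4)}]
    for _ in range(levels):
        prev = pyramids[-1]
        cur = {}
        for k in range(4):
            p = prev[k]
            # The four pixels of each 2x2 block at the previous level.
            a = p[0::2, 0::2]; b = p[0::2, 1::2]
            c = p[1::2, 0::2]; d = p[1::2, 1::2]
            if k == 0:    # preserves local minima: min of mins
                cur[k] = np.minimum(np.minimum(a, b), np.minimum(c, d))
            elif k == 1:  # saddle-type filter: max of pairwise mins
                cur[k] = np.maximum(np.minimum(a, b), np.minimum(c, d))
            elif k == 2:  # saddle-type filter: min of pairwise maxes
                cur[k] = np.minimum(np.maximum(a, b), np.maximum(c, d))
            else:         # preserves local maxima: max of maxes
                cur[k] = np.maximum(np.maximum(a, b), np.maximum(c, d))
        pyramids.append(cur)
    return pyramids

# Example: a 64 x 64 image (the size quoted in the abstract) reduced over 6 levels.
pyr = cpf_pyramid(np.random.rand(64, 64), levels=6)
```

Matching would then proceed coarse-to-fine over such pyramids built for both the source and destination images, which is where ECPF's inverse mapping comes into play.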