Image registration using triangular mesh
PCM'04 Proceedings of the 5th Pacific Rim conference on Advances in Multimedia Information Processing - Volume Part I
Constructing a three-dimensional model from two-dimensional images is a long-standing problem in computer vision. Among the many published approaches, ours is specifically designed to construct the depth map of a human face from head movement in a monocular setting. Along with the front-view image of the user, three additional images are captured with the head turned left, up, and right. The objective of our algorithm is to construct the depth map of the front-view image. The head poses of the left-, up-, and right-facing images are calculated with reference to the front image. The depth map is computed over a triangular mesh whose nodes are the feature points at which depth is estimated. Through an image registration process, the feature points on the front-view image are mapped to the other three images. From the head pose and the newly mapped coordinates, we can calculate the depth of each feature point. The depth values calculated from the three images are then combined to give the final depth. In this paper, we assume that the only movement in the scene is the head movement. The result is not as accurate as we expected, and we believe it can be improved.
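The core computation the abstract describes, recovering a feature point's depth from its front-view position, its registered position in a rotated view, and the known head pose, can be sketched as follows. This is a minimal illustration under simplifying assumptions not stated in the paper (an orthographic camera, a pure rotation about the head centre, and head-centred image coordinates); the function names are illustrative, not the authors' implementation.

```python
import numpy as np

def rotation_y(theta):
    """Rotation about the vertical (y) axis, e.g. a head turn to the left or right."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0., s],
                     [ 0., 1., 0.],
                     [-s, 0., c]])

def depth_from_rotation(p_front, p_rot, R):
    """
    Recover the depth z of one feature point, assuming an orthographic
    camera and a pure head rotation R about the head centre.

    p_front : (x, y) of the feature in the front view (head-centred)
    p_rot   : (x', y') of the same feature in the rotated view
    """
    x, y = p_front
    xp, yp = p_rot
    # Orthographic projection of the rotated 3D point (x, y, z):
    #   x' = r11*x + r12*y + r13*z
    #   y' = r21*x + r22*y + r23*z
    # Two linear equations in the single unknown z -> least squares.
    a = np.array([R[0, 2], R[1, 2]])
    b = np.array([xp - R[0, 0] * x - R[0, 1] * y,
                  yp - R[1, 0] * x - R[1, 1] * y])
    return float(a @ b) / float(a @ a)

def combine_depths(depths):
    """Fuse the per-view depth estimates (here: a simple average)."""
    return float(np.mean(depths))
```

In this sketch, applying `depth_from_rotation` to each of the three registered views yields three estimates per mesh node, which `combine_depths` fuses into the final depth value; a weighted combination (e.g. down-weighting views where the rotation barely moves the point) would be a natural refinement.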