Multi-modal user interaction method based on gaze tracking and gesture recognition

  • Authors:
  • Heekyung Lee;Seong Yong Lim;Injae Lee;Jihun Cha;Dong-Chan Cho;Sunyoung Cho

  • Affiliations:
  • Electronics and Telecommunications Research Institute, 218 Gajeong-ro, Yuseong-gu, Daejeon 305-700, Republic of Korea; Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 133-791, Republic of Korea; Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 120-749, Republic of Korea

  • Venue:
  • Signal Processing: Image Communication
  • Year:
  • 2013

Abstract

This paper presents a gaze tracking technology that provides a convenient, human-centric interface for multimedia consumption without any wearable device. It enables a user to interact with multimedia content on a large display at a distance by tracking the user's movement and acquiring high-resolution eye images. The paper also presents a gesture recognition technology for interacting with scene descriptions, i.e., for controlling and rendering scene objects; it is based on a Hidden Markov Model (HMM) and a Conditional Random Field (CRF), using a commercial depth sensor. Finally, the paper shows how these new sensors can be combined with MPEG standards to achieve interoperability among interactive applications, user interaction devices, and users.
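
The abstract names an HMM (alongside a CRF) as the basis of the gesture recognizer. As a rough illustration of the HMM side only, the sketch below classifies a gesture by scoring a quantized motion sequence under one HMM per gesture with the scaled forward algorithm. Everything here is a hypothetical stand-in, not the authors' actual models: the gesture names, the random toy parameters, and the idea of quantizing depth-sensor hand trajectories into a small symbol codebook are all assumptions made for the example.

```python
# Minimal sketch of HMM-based gesture classification. The gesture names,
# toy parameters, and motion-symbol codebook are hypothetical; they are
# not the parameters used in the paper.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (per-step scaling for stability).
    obs: list of observation symbol indices
    pi:  (S,) initial state distribution
    A:   (S, S) transition matrix, A[i, j] = P(state j | state i)
    B:   (S, K) emission matrix, B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

def classify_gesture(obs, models):
    """Pick the gesture whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))

# Toy per-gesture HMMs over a small codebook of quantized hand-motion
# directions (e.g., derived from depth-sensor hand trajectories).
rng = np.random.default_rng(0)

def random_hmm(states=3, symbols=4):
    pi = rng.dirichlet(np.ones(states))
    A = rng.dirichlet(np.ones(states), size=states)   # row-stochastic
    B = rng.dirichlet(np.ones(symbols), size=states)
    return pi, A, B

models = {"swipe_left": random_hmm(), "swipe_right": random_hmm()}
sequence = [0, 1, 1, 2, 3]  # quantized motion symbols over time
print(classify_gesture(sequence, models))
```

In a trained system the per-gesture HMMs would be fit on labeled trajectory data (e.g., via Baum-Welch) rather than drawn at random; the paper's CRF component, which discriminatively labels the sequence, is not shown here.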