Virtual conferencing

  • Authors:
  • A. N. Mortlock, D. Machin, S. McConnell, P. Sheppard

  • Affiliations:
  • BT Laboratories, Martlesham Heath, Ipswich, Suffolk, England IP5 3RE (all authors)

  • Venue:
  • BT Technology Journal
  • Year:
  • 1997


Abstract

Current multi-party video- and audioconferencing systems limit natural communication between participants. People communicate by speech, facial expressions and body gestures. In interactions between three or more people, these communication channels are directed towards particular participants. Spatial proximity and gaze direction are therefore important elements of effective conversational interaction, yet they are largely unsupported in existing conferencing tools. Advanced audioconferencing systems do simulate presence in a shared environment by using ‘virtual humans’ to represent the people taking part in a meeting, but the keyboard and mouse are used to direct conversations to specific people or to change the visual representation to simulate emotion.

This paper describes an experimental implementation of virtual conferencing which uses machine vision to control a realistic virtual human, with the objective of making ‘virtual meetings’ more like physical ones. The computer vision system provides a more natural interface to the environment, while the realistic representation of users, with appropriate facial gestures and upper body movement, gives more natural visual feedback.
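The abstract notes that gaze direction can be used to route conversation towards particular participants. As an illustration only (the paper's actual algorithm is not given here), one minimal sketch is to map the head yaw reported by a vision tracker to whichever seated avatar lies at the nearest bearing; all names and seat angles below are hypothetical.

```python
# Hypothetical sketch, not taken from the paper: turn a tracked head-yaw
# angle into the index of the participant the speaker appears to face,
# so the speaker's avatar can be oriented towards that participant.

def gaze_target(head_yaw_deg, seat_angles_deg):
    """Return the index of the seat whose bearing is closest to the yaw.

    head_yaw_deg    -- head yaw from the vision tracker, in degrees
    seat_angles_deg -- bearing of each other participant's avatar, in degrees
    """
    def angular_diff(a, b):
        # Smallest signed difference between two angles, in [-180, 180).
        return (a - b + 180.0) % 360.0 - 180.0

    return min(
        range(len(seat_angles_deg)),
        key=lambda i: abs(angular_diff(head_yaw_deg, seat_angles_deg[i])),
    )

# Three other participants seated at -40, 0 and +40 degrees around the table.
seats = [-40.0, 0.0, 40.0]
print(gaze_target(-35.0, seats))  # speaker turned left  -> 0
print(gaze_target(5.0, seats))    # roughly straight on  -> 1
```

The wrap-around in `angular_diff` keeps the comparison correct when yaw readings cross the ±180° boundary, which a plain absolute difference would not.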