Intelligent camera control using behavior trees

  • Authors:
  • Daniel Markowitz, Joseph T. Kider, Alexander Shoulson, Norman I. Badler

  • Affiliation:
  • Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA (all authors)

  • Venue:
  • MIG '11: Proceedings of the 4th International Conference on Motion in Games
  • Year:
  • 2011

Abstract

Automatic camera systems typically produce only basic animations for virtual worlds. Users often view environments through two types of cameras: a camera they control manually, or a simple automatic camera that follows their character while minimizing occlusions. Real cinematography features far more variety and produces richer stories: cameras shoot establishing shots, close-ups, tracking shots, and bird's-eye views to enrich a narrative, and techniques such as zoom, focus, and depth of field contribute to framing a particular shot. We present an intelligent camera system that automatically positions, pans, tilts, zooms, and tracks events occurring in real time while obeying traditional standards of cinematography. We design behavior trees that describe how a single intelligent camera behaves, driven by low-level narrative elements assigned by “smart events”. Camera actions are formed by hierarchically arranging behavior sub-trees that encapsulate nodes controlling specific camera semantics. This approach is modular and highly reusable, allowing complex camera styles and transitions to be composed quickly rather than focusing only on visibility. Additionally, our user interface allows a director to provide further camera instructions, such as prioritizing one event over another, drawing a path for the camera to follow, and adjusting camera settings on the fly. We demonstrate our method by placing multiple intelligent cameras in a complex world with several events and storylines, and we illustrate how to produce a well-shot “documentary” of the events, constructed in real time.
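
The abstract's core idea, composing camera actions from behavior sub-trees whose nodes control specific camera semantics, can be sketched in a few lines. The following is a minimal illustration under assumed names: the node types (Sequence, PanToEvent, ZoomTo), the Camera fields, and the tick protocol are stand-ins for the paper's actual design, not a reproduction of it.

```python
# Minimal behavior-tree sketch for camera control. All names here are
# illustrative assumptions, not the authors' implementation.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Camera:
    """Toy camera state; a real system would drive a renderer's view matrix."""
    def __init__(self):
        self.pan = 0.0   # horizontal angle, degrees
        self.zoom = 1.0  # field-of-view multiplier

class Node:
    def tick(self, camera, event):
        raise NotImplementedError

class Sequence(Node):
    """Composite: ticks children in order; stops at the first non-success."""
    def __init__(self, *children):
        self.children = children
    def tick(self, camera, event):
        for child in self.children:
            status = child.tick(camera, event)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class PanToEvent(Node):
    """Leaf: steps the pan angle toward the event until roughly aligned."""
    def __init__(self, step=5.0):
        self.step = step
    def tick(self, camera, event):
        error = event["pan"] - camera.pan
        if abs(error) < self.step:
            camera.pan = event["pan"]
            return Status.SUCCESS
        camera.pan += self.step if error > 0 else -self.step
        return Status.RUNNING

class ZoomTo(Node):
    """Leaf: snaps zoom to a target level (e.g. tight framing for a close-up)."""
    def __init__(self, level):
        self.level = level
    def tick(self, camera, event):
        camera.zoom = self.level
        return Status.SUCCESS

# A "close-up" camera style as a reusable sub-tree: align first, then tighten.
close_up = Sequence(PanToEvent(), ZoomTo(2.5))

camera, event = Camera(), {"pan": 30.0}
while close_up.tick(camera, event) is Status.RUNNING:
    pass  # one tick per frame in a real-time loop
print(camera.pan, camera.zoom)  # 30.0 2.5
```

Because the close-up style is itself a sub-tree, it can be reused or nested under other composites to build shot transitions, which reflects the modularity the abstract claims over visibility-only camera systems.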