VERL: An Ontology Framework for Representing and Annotating Video Events

  • Authors: Alexandre R. J. Francois, Ram Nevatia, Jerry Hobbs, Robert C. Bolles
  • Affiliations: University of Southern California; University of Southern California; Information Sciences Institute, USC; SRI International
  • Venue: IEEE MultiMedia
  • Year: 2005

Abstract

The notion of "events" is extremely important in characterizing the contents of video. An event is typically triggered by some change of state captured in the video, such as when an object starts moving. The ability to reason with events is a critical step toward video understanding. This article describes the findings of a recent workshop series that has produced an ontology framework for representing video events, called the Video Event Representation Language (VERL), and a companion annotation framework, called the Video Event Markup Language (VEML). A key concept in this work is the modeling of events as composable: complex events are constructed from simpler events by operations such as sequencing, iteration, and alternation. The article presents an extensible event and object ontology expressed in VERL and discusses a detailed example of applying VERL and VEML to the description of a "tailgating" event in surveillance video.
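The composition operations named in the abstract (sequencing, iteration, alternation) can be sketched as combinators over primitive events. The sketch below is illustrative only: the class names and the string-based observation stream are assumptions for exposition, not VERL's actual syntax or semantics.

```python
# Hypothetical sketch of composable events in the spirit of VERL
# (not the actual VERL/VEML notation). Each event's matches() returns
# the lengths of prefixes of the observation stream it can account for.

class Primitive:
    """An atomic event that matches one labeled observation."""
    def __init__(self, name):
        self.name = name
    def matches(self, obs):
        return [1] if obs and obs[0] == self.name else []

class Sequence:
    """Sub-events occur one after another (VERL-style sequencing)."""
    def __init__(self, *parts):
        self.parts = parts
    def matches(self, obs):
        lengths = [0]
        for part in self.parts:
            lengths = [n + m for n in lengths for m in part.matches(obs[n:])]
        return lengths

class Alternation:
    """Any one of several sub-events occurs."""
    def __init__(self, *options):
        self.options = options
    def matches(self, obs):
        return [m for opt in self.options for m in opt.matches(obs)]

class Iteration:
    """The sub-event repeats zero or more times."""
    def __init__(self, body):
        self.body = body
    def matches(self, obs):
        lengths, frontier = [0], [0]
        while frontier:
            new = [n + m for n in frontier
                   for m in self.body.matches(obs[n:]) if m > 0]
            frontier = [n for n in new if n not in lengths]
            lengths.extend(frontier)
        return lengths

# A "tailgating"-style composite event built from the combinators:
# approach, then one or more follow steps, then enter.
tailgate = Sequence(Primitive("approach"),
                    Iteration(Primitive("follow")),
                    Primitive("enter"))

trace = ["approach", "follow", "follow", "enter"]
print(len(trace) in tailgate.matches(trace))  # True: the trace realizes the event
```

The point of the sketch is that a complex event like tailgating needs no new primitive machinery: it is assembled from simpler events with a small algebra of operators, which is the composability the article argues for.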