Auto-organized visual perception using distributed camera network

  • Authors:
  • Richard Chang; Sio-Hoi Ieng; Ryad Benosman

  • Affiliations:
  • Institut des Systemes Intelligents et de Robotique (ISIR-CNRS), 4 Place Jussieu, 75252 Paris Cedex 05, France

  • Venue:
  • Robotics and Autonomous Systems
  • Year:
  • 2009

Abstract

Camera networks are complex vision systems that become difficult to control as the number of sensors grows. With classic approaches, each camera must be calibrated and synchronized individually. These tasks are often troublesome because of spatial constraints, and above all because of the amount of information to be processed. Cameras generally observe overlapping areas, producing redundant information that is acquired, transmitted, stored and processed. In this paper we propose a method to segment, cluster and codify the images acquired by the cameras of a network. The images are decomposed sequentially into layers in which redundant information is discarded. Without any calibration operation, each sensor contributes to building a global representation of the entire network environment. The information sent by the network is then represented by a reduced, compact amount of data through a codification process. This framework allows both scene structures and the topology of the network to be retrieved. It can also provide the localization and trajectories of mobile objects. Experiments present practical results for a network of 20 cameras observing a common scene.
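
The redundancy-discarding idea in the abstract can be illustrated in miniature: if each camera summarizes its view as a feature vector, cameras whose vectors are nearly identical are observing overlapping areas, and only one representative per group needs to be transmitted. The sketch below is a hypothetical illustration of that principle, not the authors' algorithm; the feature vectors, the cosine-similarity measure, and the `0.9` threshold are all assumptions chosen for the toy example.

```python
# Illustrative sketch only (not the paper's method): greedily cluster
# camera views by cosine similarity and keep one representative per
# cluster, yielding a reduced, "codified" summary of the network.
from math import sqrt


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def cluster_views(views, threshold=0.9):
    """Greedily group feature vectors whose similarity to an existing
    representative exceeds `threshold`; return the indices of the
    cameras kept as cluster representatives."""
    representatives = []
    for idx, feat in enumerate(views):
        if all(cosine(feat, views[r]) < threshold for r in representatives):
            representatives.append(idx)
    return representatives


# Toy data: cameras 0 and 1 see nearly the same scene; camera 2 differs.
views = [
    [1.0, 0.0, 0.2],
    [0.9, 0.1, 0.2],
    [0.0, 1.0, 0.0],
]
print(cluster_views(views))  # → [0, 2]: camera 1 is redundant with camera 0
```

A real system would of course use richer image descriptors and an incremental, layer-by-layer decomposition as described above; the point here is only that overlap detection lets most of the data be dropped before transmission.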