Designing an Embedded Video Processing Camera Using a 16-bit Microprocessor for a Surveillance System

  • Authors:
  • Koichi Sato
  • Brian L. Evans
  • J. K. Aggarwal

  • Affiliations:
  • Department of Electrical and Computer Engineering and Computer and Vision Research Center, The University of Texas at Austin, Austin, TX 78712, USA
  • Department of Electrical and Computer Engineering and Embedded Signal Processing Laboratory, The University of Texas at Austin, Austin, TX 78712, USA
  • Department of Electrical and Computer Engineering and Computer and Vision Research Center, The University of Texas at Austin, Austin, TX 78712, USA

  • Venue:
  • Journal of VLSI Signal Processing Systems
  • Year:
  • 2006

Abstract

This paper describes the design and implementation of a hybrid intelligent surveillance system that consists of an embedded system and a personal computer (PC)-based system. The embedded system performs some of the image processing tasks and sends the processed data to the PC. The PC tracks persons and recognizes two-person interactions using a grayscale side-view image sequence captured by a stationary camera. Based on our previous research, we explored the optimum division of tasks between the embedded system and the PC, simulated the embedded system using dataflow models in Ptolemy, and prototyped the embedded system in real-time hardware and software using a 16-bit CISC microprocessor. This embedded system processes one 320 × 240 frame in 89 ms, roughly one-third the rate of a 30 Hz video system. In addition, the real-time embedded system prototype uses 5.7 Kbytes of program memory, 854 Kbytes of internal data memory, and 2 Mbytes of external DRAM.
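
The abstract does not spell out which image processing tasks run on the camera side, only that the embedded system preprocesses each 320 × 240 grayscale frame (the reported 89 ms per frame is about 11 frames/s, roughly a third of 30 Hz) and forwards reduced data to the PC tracker. As one way to picture such a split, the following C sketch assumes the embedded side performs background subtraction and sends only a compact foreground summary (bounding box and pixel count) to the PC. The threshold value, the choice of summary, and all names here are illustrative assumptions, not taken from the paper.

```c
/* Hypothetical embedded-side preprocessing: background subtraction on a
 * 320x240 grayscale frame, reduced to a per-frame foreground summary.
 * Threshold and structure are illustrative, not from the paper. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define FRAME_W 320
#define FRAME_H 240
#define DIFF_THRESHOLD 25  /* assumed foreground/background threshold */

/* Per-frame summary that the embedded side would send to the PC. */
typedef struct {
    uint16_t min_x, min_y, max_x, max_y;  /* foreground bounding box */
    uint32_t pixel_count;                 /* number of foreground pixels */
} frame_summary_t;

/* Subtract the stored background from the current frame, threshold the
 * difference, and reduce the foreground to a bounding box and count. */
static frame_summary_t summarize_frame(const uint8_t *frame,
                                       const uint8_t *background)
{
    frame_summary_t s = { FRAME_W, FRAME_H, 0, 0, 0 };  /* empty box */

    for (uint16_t y = 0; y < FRAME_H; y++) {
        for (uint16_t x = 0; x < FRAME_W; x++) {
            int diff = abs((int)frame[y * FRAME_W + x] -
                           (int)background[y * FRAME_W + x]);
            if (diff > DIFF_THRESHOLD) {
                if (x < s.min_x) s.min_x = x;
                if (y < s.min_y) s.min_y = y;
                if (x > s.max_x) s.max_x = x;
                if (y > s.max_y) s.max_y = y;
                s.pixel_count++;
            }
        }
    }
    return s;
}

int main(void)
{
    /* Synthetic test data: flat background with one bright square. */
    static uint8_t background[FRAME_W * FRAME_H];
    static uint8_t frame[FRAME_W * FRAME_H];
    memset(background, 50, sizeof background);
    memcpy(frame, background, sizeof frame);
    for (int y = 100; y < 140; y++)
        for (int x = 160; x < 200; x++)
            frame[y * FRAME_W + x] = 200;

    frame_summary_t s = summarize_frame(frame, background);
    printf("bbox (%u,%u)-(%u,%u), %u foreground pixels\n",
           (unsigned)s.min_x, (unsigned)s.min_y,
           (unsigned)s.max_x, (unsigned)s.max_y,
           (unsigned)s.pixel_count);
    return 0;
}
```

In this kind of split, the bandwidth-heavy pixel work stays on the camera-side processor, while the PC receives only a few bytes per frame for person tracking and interaction recognition; the actual task division reported in the paper was chosen by simulating the embedded system with Ptolemy dataflow models.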