Overtaking Vehicle Detection Using Dynamic and Quasi-Static Background Modeling

  • Authors:
  • Junxian Wang; George Bebis; Ronald Miller

  • Affiliations:
  • University of Nevada, Reno; University of Nevada, Reno; Ford Motor Company

  • Venue:
  • CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops - Volume 03
  • Year:
  • 2005


Abstract

Robust and reliable detection of overtaking vehicles is an important component of any on-board driver assistance system. Optical flow, with the abundant motion information present in image sequences, has been studied extensively for vehicle detection. However, vehicle detection based on dense optical flow is sensitive to shocks and vibrations of the mobile camera and to image outliers caused by illumination changes, and it suffers from high computational complexity. To improve vehicle detection performance and reduce computational complexity, we propose an efficient and robust methodology for overtaking vehicle detection based on homogeneous sparse optical flow and eigenspace modeling. Specifically, our method models the background as dynamic and quasi-static regions. Instead of using dense optical flow to model the dynamic parts of the background, we employ homogeneous sparse optical flow, which makes detection more robust to camera shocks and vibrations. Moreover, to make detection robust to illumination changes, we employ a block-based eigenspace approach to represent quasi-static regions of the background. A region-based hysteresis-thresholding approach, augmented by a localized spatial segmentation procedure, attains a good tradeoff between true detections and false positives. The proposed methodology has been evaluated on challenging traffic scenes and demonstrates good performance.
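
The abstract describes three building blocks: sparse optical flow for the dynamic background, a block-based eigenspace model for the quasi-static background, and region-based hysteresis thresholding. The sketch below is not the authors' implementation; it is a minimal illustration of those ideas using OpenCV's pyramidal Lucas-Kanade tracker as a stand-in for the paper's homogeneous sparse optical flow, with block size, eigenvector count, and thresholds chosen as illustrative assumptions rather than values from the paper.

```python
# Illustrative sketch only; parameters and function names are assumptions,
# not taken from the paper.
import cv2
import numpy as np


def sparse_flow(prev_gray, curr_gray, max_corners=200):
    """Track a sparse set of corner features with pyramidal Lucas-Kanade
    instead of computing dense flow at every pixel."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)


class BlockEigenBackground:
    """Block-based eigenspace background model: each block is represented by
    the leading principal components of its appearance over a training window;
    a large reconstruction error marks the block as foreground."""

    def __init__(self, block=16, n_eig=5):
        self.block, self.n_eig = block, n_eig
        self.mean, self.basis = {}, {}

    def _blocks(self, gray):
        b = self.block
        h, w = gray.shape
        for y in range(0, h - b + 1, b):
            for x in range(0, w - b + 1, b):
                yield (y, x), gray[y:y + b, x:x + b].astype(np.float32).ravel()

    def fit(self, training_frames):
        stacks = {}
        for gray in training_frames:
            for key, vec in self._blocks(gray):
                stacks.setdefault(key, []).append(vec)
        for key, vecs in stacks.items():
            X = np.stack(vecs)                       # frames x pixels
            mu = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
            self.mean[key] = mu
            self.basis[key] = Vt[: self.n_eig]       # leading eigenvectors

    def reconstruction_error(self, gray):
        b = self.block
        err = np.zeros((gray.shape[0] // b, gray.shape[1] // b), np.float32)
        for (y, x), vec in self._blocks(gray):
            d = vec - self.mean[(y, x)]
            proj = self.basis[(y, x)] @ d
            residual = d - self.basis[(y, x)].T @ proj
            err[y // b, x // b] = np.linalg.norm(residual)
        return err


def hysteresis_regions(err, low, high):
    """Region-based hysteresis thresholding: keep connected components of
    blocks above the low threshold only if they contain at least one block
    above the high threshold."""
    weak = (err >= low).astype(np.uint8)
    n, labels = cv2.connectedComponents(weak)
    keep = [l for l in range(1, n) if (err[labels == l] >= high).any()]
    return np.isin(labels, keep)
```

In a typical use of such a pipeline, the eigenspace model would be fit on an initial stretch of frames, the per-block reconstruction error and the sparse flow vectors would then be computed for each incoming frame, and hysteresis thresholding would convert the error map into candidate overtaking-vehicle regions; the paper's localized spatial segmentation step is not reproduced here.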