Real-time natural scene analysis for a blind prosthesis

  • Authors:
  • Michael F. Deering
  • Carter Collins

  • Affiliations:
  • Computer Science Division, Department of EECS, University of California, Berkeley, Berkeley, California
  • Smith-Kettlewell Institute of Visual Sciences, San Francisco, California

  • Venue:
  • IJCAI'81 Proceedings of the 7th international joint conference on Artificial intelligence - Volume 2
  • Year:
  • 1981

Abstract

A real-time computer vision system designed for the limited environment of city sidewalks is presented. This system is part of a prototype mobility aid for the blind. The overall device endeavors to keep blind pedestrians on a safe path down the sidewalk and to warn of upcoming obstacles. The scene-analysis algorithm uses semantic models of the environment to interpret edges in the multi-frame image data as borders of various objects and to assign distance estimates to those objects. The input is a 64 × 64 × 6-bit gray-scale image taken once per second from the vantage point of a pedestrian's shoulder. Along with each image, the three-dimensional transformation of the camera location since the previous frame is assumed to be provided by hardware. After an initial segmentation into edge lines represented as arcs of circles, predictions of edges (generated by analysis of previous frames) are used to identify edges in the current frame. Edges not identified by this process are incorporated into the portion of the three-dimensional world model with which they are most consistent. The induced three-dimensional world model of objects can then be used to provide mobility information to the blind user. The emphasis throughout the system has been on efficiency, and the design trade-offs and techniques used to obtain high processing rates are discussed. Most of the vision system currently runs in real time on a 16-bit microprocessor. Field trials of the complete prototype device will begin soon.
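The predict-then-match step described in the abstract (project previously seen edges forward using the camera motion, identify current-frame edges against those predictions, and route unmatched edges into the world model) might be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the `Arc` fields, the pinhole-style scaling in `predict`, and the pixel tolerance in `match_edges` are all assumptions introduced here for clarity.

```python
import math
from dataclasses import dataclass


@dataclass
class Arc:
    """An edge line represented as an arc of a circle.

    Fields are hypothetical: arc center in image coordinates (cx, cy),
    arc radius in pixels, and an estimated distance to the object in metres.
    """
    cx: float
    cy: float
    radius: float
    distance: float


def predict(arc: Arc, forward_motion: float) -> Arc:
    """Predict where a previously seen arc should appear after the camera
    advances forward_motion metres, using simplified pinhole scaling:
    image features loom (scale up) as their distance shrinks."""
    new_distance = max(arc.distance - forward_motion, 0.1)
    scale = arc.distance / new_distance
    return Arc(arc.cx * scale, arc.cy * scale, arc.radius * scale, new_distance)


def match_edges(predictions, observed, tol=5.0):
    """Greedily identify each observed arc with the nearest unused prediction
    within tol pixels of its center. Returns (matched pairs, unmatched arcs);
    the unmatched arcs would be incorporated into the 3-D world model."""
    free = list(predictions)
    matched, unmatched = [], []
    for obs in observed:
        best = min(
            free,
            key=lambda p: math.hypot(p.cx - obs.cx, p.cy - obs.cy),
            default=None,
        )
        if best is not None and math.hypot(best.cx - obs.cx, best.cy - obs.cy) <= tol:
            matched.append((best, obs))
            free.remove(best)
        else:
            unmatched.append(obs)
    return matched, unmatched
```

A greedy nearest-prediction match like this trades accuracy for speed, in the spirit of the paper's emphasis on efficiency; a real system would also have to account for the full three-dimensional camera transformation rather than forward motion alone.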