Models and algorithms for vision through the atmosphere

  • Authors:
  • Shree K. Nayar; Srinivasa G. Narasimhan


  • Venue:
  • Doctoral dissertation, Columbia University
  • Year:
  • 2004


Abstract

Current vision systems are designed to perform in clear weather. Needless to say, in any outdoor application there is no escape from bad weather. Ultimately, computer vision systems must include mechanisms that enable them to function (even if somewhat less reliably) in the presence of haze, fog, rain, hail and snow. We begin by studying the visual manifestations of different weather conditions. For this, we draw on what is already known about atmospheric optics and identify effects caused by bad weather that can be turned to our advantage; we are interested not only in what bad weather does to vision but also in what it can do for vision. This thesis presents a novel and comprehensive set of models, algorithms and image datasets for better image understanding in bad weather.

The models presented here can be broadly classified into single-scattering and multiple-scattering models. Existing single-scattering models, such as attenuation and airlight, form the basis of three new models: the contrast model, the dichromatic model and the polarization model. Each of these models is suited to different atmospheric and illumination conditions as well as different sensor types. Based on these models, we develop algorithms to recover pertinent scene properties, such as 3D structure and clear-day scene contrasts and colors, from one or more images taken under poor weather conditions.

Next, we present an analytic model for multiple scattering of light in a scattering medium. From a single image of a light source immersed in a medium, interesting properties of the medium can be estimated. If the medium is the atmosphere, the weather condition and visibility can be estimated. These quantities can in turn be used to remove the glows around sources, yielding a clear picture of the scene. Based on these results, the camera serves as a “visual weather meter”. Our analytic model can be used to analyze scattering in virtually any scattering medium, including fluids and tissues. Therefore, in addition to vision in bad weather, our work has implications for real-time rendering of participating media in computer graphics, medical imaging and underwater imaging.

Apart from the models and algorithms, we have acquired an extensive database of images of an outdoor scene, captured almost every hour for nine months. This dataset is the first of its kind and includes high-quality calibrated images captured under a wide variety of weather and illumination conditions across all four seasons. Such a dataset can be used not only as a testbed for validating existing appearance models (including the ones presented in this work) but also to inspire new data-driven models. In addition to computer vision, this dataset could be useful to researchers in other fields such as graphics, image processing, remote sensing and atmospheric sciences. The database is freely distributed for research purposes and can be requested through our web site http://www.cs.columbia.edu/~wild. We believe that this thesis opens new research directions needed for computer vision to be successful outdoors.
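To make the single-scattering framework referred to above concrete, here is a minimal NumPy sketch of the standard attenuation-plus-airlight image-formation model and its direct inversion for contrast restoration. The function names, and the assumption that depth, the scattering coefficient and the horizon brightness are all known, are illustrative; they are not the thesis's exact formulation, which estimates such quantities from one or more weather-degraded images.

```python
import numpy as np

def hazy_image(clear_radiance, depth, beta, airlight):
    """Render a hazy view from clear-day radiance using the standard
    single-scattering model: direct transmission plus airlight.

    clear_radiance : (H, W) or (H, W, 3) clear-day scene radiance
    depth          : (H, W) distance of each scene point from the camera
    beta           : atmospheric scattering coefficient (assumed known here)
    airlight       : horizon (sky) brightness, scalar or per-channel
    """
    t = np.exp(-beta * depth)          # transmission e^(-beta * d)
    if clear_radiance.ndim == 3:
        t = t[..., None]               # broadcast over color channels
    return clear_radiance * t + airlight * (1.0 - t)

def restore_contrast(hazy, depth, beta, airlight):
    """Invert the same model to recover clear-day radiance, assuming depth,
    beta and airlight are given."""
    t = np.exp(-beta * depth)
    if hazy.ndim == 3:
        t = t[..., None]
    t = np.clip(t, 1e-6, 1.0)          # guard against division by ~0 at far depths
    return (hazy - airlight * (1.0 - t)) / t
```

In practice the scattering coefficient, airlight and depth are not given; the contrast, dichromatic and polarization models described in the abstract exploit multiple images, color, or polarization cues to recover them before a restoration of this kind can be applied.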