Vision-based material recognition for automated monitoring of construction progress and generating building information modeling from unordered site image collections

  • Authors:
  • Andrey Dimitrov; Mani Golparvar-Fard

  • Venue:
  • Advanced Engineering Informatics
  • Year:
  • 2014

Abstract

Automatically monitoring construction progress or generating Building Information Models from site image collections - beyond point cloud data - requires semantic information, such as construction materials and inter-connectivity, to be recognized for building elements. In the case of materials, such information can only be derived from the appearance-based data contained in 2D imagery. The state-of-the-art texture recognition algorithms that are often used for recognizing materials are very promising (reaching over 95% average accuracy), yet they have mainly been tested under strictly controlled conditions and often do not perform well on images collected from construction sites (dropping to 70% accuracy and lower). In addition, there is no benchmark that validates their performance under real-world construction site conditions. To overcome these limitations, we propose a new vision-based method for material classification from single images taken under unknown viewpoint and site illumination conditions. In the proposed algorithm, material appearance is modeled by a joint probability distribution of responses from a filter bank and principal Hue-Saturation-Value (HSV) color values, and is classified using a multiple one-vs.-all χ² kernel Support Vector Machine (SVM) classifier. Classification performance is compared with state-of-the-art algorithms from both the computer vision and AEC communities. For experimental studies, a new database containing 20 typical construction materials, with more than 150 images per category, is assembled and used for validation. Overall, an average material classification accuracy of 97.1% is reported for 200 × 200 pixel image patches. In cases where image patches are smaller, our method can synthetically generate additional pixels and maintain accuracy competitive with that reported above (90.8% for 30 × 30 pixel patches). The results show the promise of the proposed method and expose the limitations of state-of-the-art classification algorithms under real-world conditions. The work further defines a new benchmark that can be used to measure the performance of future algorithms.
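As a rough sketch of the pipeline the abstract outlines (filter-bank responses combined with HSV color statistics, classified by a one-vs.-all χ² kernel SVM), the Python example below uses scikit-learn's precomputed chi-squared kernel. This is not the authors' implementation: the specific filter bank, histogram bin counts, kernel gamma, SVM C, and the random placeholder patches are all illustrative assumptions.

import numpy as np
from matplotlib.colors import rgb_to_hsv
from scipy.ndimage import gaussian_filter
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def patch_descriptor(rgb_patch, n_bins=16):
    """Normalized histograms of filter-bank responses and HSV channels:
    a simple stand-in for the joint filter-bank/HSV distribution the
    paper models."""
    gray = rgb_patch.mean(axis=2)
    # Small filter bank: Gaussian smoothing plus x/y first derivatives
    # at two scales (illustrative choice, not the paper's exact bank).
    responses = [gaussian_filter(gray, sigma=s, order=o)
                 for s in (1.0, 2.0)
                 for o in ((0, 0), (0, 1), (1, 0))]
    hsv = rgb_to_hsv(rgb_patch)  # expects float RGB in [0, 1]
    feats = []
    for channel in responses + [hsv[..., k] for k in range(3)]:
        hist, _ = np.histogram(channel, bins=n_bins)
        feats.append(hist / max(hist.sum(), 1))  # chi2 needs non-negative inputs
    return np.concatenate(feats)

# Random placeholder data standing in for labeled 200 x 200 material patches.
rng = np.random.default_rng(0)
train_patches = rng.random((40, 200, 200, 3))
test_patches = rng.random((10, 200, 200, 3))
y_train = rng.integers(0, 4, size=40)  # 4 dummy material classes

X_train = np.stack([patch_descriptor(p) for p in train_patches])
X_test = np.stack([patch_descriptor(p) for p in test_patches])

# Precomputed chi-squared kernel with an explicit one-vs.-all wrapper.
K_train = chi2_kernel(X_train, X_train, gamma=0.5)
K_test = chi2_kernel(X_test, X_train, gamma=0.5)
clf = OneVsRestClassifier(SVC(kernel="precomputed", C=10.0)).fit(K_train, y_train)
predictions = clf.predict(K_test)

A precomputed kernel matrix is used because scikit-learn's SVC has no built-in χ² kernel, and OneVsRestClassifier makes the one-vs.-all decomposition explicit rather than relying on SVC's default one-vs.-one multiclass scheme.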