Supporting knowledge-intensive inspection tasks with application ontologies

  • Authors:
  • Nicole J. J. P. Koenderink; Jan L. Top; Lucas J. van Vliet

  • Affiliations:
  • Agrotechnology & Food Sciences Group, Wageningen UR, Bornsesteeg 59, 6708 GP Wageningen, The Netherlands; Department of Computer Science, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam, The Netherlands; Department of Imaging Science & Technology, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands

  • Venue:
  • International Journal of Human-Computer Studies
  • Year:
  • 2006

Abstract

One of the major challenges in computer vision is to create automated systems that perform tasks with at least the same competence as human experts. For the automated inspection of natural objects in particular, this is not easy to achieve. The task is hampered by large in-class variations, the complex 3D morphology of the objects, and the subtle argumentation used by experts. For example, in our horticultural case we deal with the quality assessment of young tomato plants, which requires experienced specialists. We submit that automating such a task with an explicit model of the objects and their assessment is preferable to a black-box model obtained by modelling input-output relations only. We propose to employ ontologies to represent the geometrical shapes, object parts, and quality classes associated with these explicit models. Our main contribution is the description of a method for developing a white-box computer vision application in which the required expert knowledge is defined by (i) decomposing the task of the inspection system into subtasks and (ii) identifying the algorithms that execute these subtasks. The method describes the interaction between the task decomposition and the required task-specific knowledge, and addresses the delicate balance between general domain knowledge and task-specific details. As a proof of principle of this methodology, we work through a horticultural case study and argue that the method leads to a robust, well-performing, and extendable computer vision system.
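
The abstract does not prescribe a concrete representation language or algorithm set. As an illustrative sketch only, the Python fragment below shows one way the core idea could be encoded: an application ontology of object parts and quality classes (here expressed with the rdflib library, an assumption on our part), plus a task decomposition that maps inspection subtasks to the algorithms that execute them. The namespace, subtask names, and algorithm functions are all hypothetical placeholders, not the authors' implementation.

    # Illustrative sketch only: the paper does not specify a representation language.
    # We assume rdflib for the ontology and plain Python callables for the algorithms.
    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/tomato-inspection#")  # hypothetical namespace

    # Application ontology: object parts and quality classes for young tomato plants.
    g = Graph()
    g.bind("ex", EX)
    g.add((EX.PlantPart, RDF.type, RDFS.Class))
    g.add((EX.Stem, RDFS.subClassOf, EX.PlantPart))
    g.add((EX.Leaf, RDFS.subClassOf, EX.PlantPart))
    g.add((EX.QualityClass, RDF.type, RDFS.Class))
    g.add((EX.FirstGrade, RDF.type, EX.QualityClass))
    g.add((EX.SecondGrade, RDF.type, EX.QualityClass))

    # Task decomposition: each subtask of the inspection task is linked to the
    # image-analysis algorithm that executes it (hypothetical function names).
    def measure_stem_length(image):
        """Placeholder for a stem-length estimation algorithm."""
        raise NotImplementedError

    def count_leaves(image):
        """Placeholder for a leaf-counting algorithm."""
        raise NotImplementedError

    SUBTASKS = {
        EX.MeasureStemLength: measure_stem_length,
        EX.CountLeaves: count_leaves,
    }

In such a white-box setup, the ontology makes the domain concepts and quality criteria explicit and inspectable, while the subtask-to-algorithm mapping keeps the image-processing details replaceable; this mirrors, at a toy scale, the separation between general domain knowledge and task-specific details discussed in the abstract.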