Learning from Inconsistencies in an Integrated Cognitive Architecture

  • Authors:
  • Kai-Uwe Kühnberger; Peter Geibel; Helmar Gust; Ulf Krumnack; Ekaterina Ovchinnikova; Angela Schwering; Tonio Wandmacher

  • Affiliation (all authors):
  • University of Osnabrück, Institute of Cognitive Science, Albrechtstr. 28, 49076 Osnabrück, Germany

  • Venue:
  • Artificial General Intelligence 2008: Proceedings of the First AGI Conference
  • Year:
  • 2008

Abstract

Whereas symbol-based systems, such as deductive reasoning devices, knowledge bases, planning systems, or tools for solving constraint satisfaction problems, presuppose (more or less) the consistency of data and of the results of internal computations, this assumption is far from plausible in real-world applications, particularly if natural agents are taken into account. Furthermore, in complex cognitive systems, which often contain a large number of different modules, inconsistencies can jeopardize the integrity of the whole system. This paper addresses the problem of resolving inconsistencies in hybrid, cognitively inspired systems on both levels: within single processing modules and across the overall system. We propose the hybrid architecture I-Cog as a flexible tool that is explicitly designed to reorganize knowledge continuously and to exploit occurring inconsistencies as a non-classical learning mechanism.
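
The abstract does not specify how I-Cog detects or resolves inconsistencies. The following minimal Python sketch is purely illustrative: it shows the general idea of treating contradictions between modules as a learning event rather than a failure. The class names (`KnowledgeModule`, `InconsistencyDrivenLearner`), the fact representation, and the "prefer the positive assertion" resolution policy are assumptions for this example, not the paper's actual mechanism.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    """A toy propositional fact; hypothetical representation for this sketch."""
    predicate: str
    negated: bool = False

    def conflicts_with(self, other: "Fact") -> bool:
        # Two facts clash when they assert the same predicate with opposite polarity.
        return self.predicate == other.predicate and self.negated != other.negated


class KnowledgeModule:
    """A single processing module holding its own (possibly inconsistent) facts."""

    def __init__(self, name: str):
        self.name = name
        self.facts: set[Fact] = set()

    def assert_fact(self, fact: Fact) -> None:
        self.facts.add(fact)


class InconsistencyDrivenLearner:
    """Toy coordinator: detects contradictions across modules and, instead of
    rejecting the knowledge base, treats each conflict as a learning signal
    that triggers a reorganization step (here simply retraction plus logging)."""

    def __init__(self, modules):
        self.modules = modules
        self.resolved_conflicts = []  # record of conflicts usable as "training data"

    def detect_conflicts(self):
        facts = [(m.name, f) for m in self.modules for f in m.facts]
        conflicts = []
        for i, (src_a, fa) in enumerate(facts):
            for src_b, fb in facts[i + 1:]:
                if fa.conflicts_with(fb):
                    conflicts.append((src_a, fa, src_b, fb))
        return conflicts

    def resolve_and_learn(self):
        for src_a, fa, src_b, fb in self.detect_conflicts():
            # Naive resolution policy (an assumption of this sketch): retract the
            # negated belief and remember the clash. A real architecture would
            # instead revise ontologies, reweight modules, or generalize over
            # repeatedly occurring conflicts.
            loser = fa if fa.negated else fb
            for m in self.modules:
                m.facts.discard(loser)
            self.resolved_conflicts.append((src_a, fa, src_b, fb))
        return self.resolved_conflicts


if __name__ == "__main__":
    perception = KnowledgeModule("perception")
    reasoning = KnowledgeModule("reasoning")
    perception.assert_fact(Fact("flies(tweety)"))
    reasoning.assert_fact(Fact("flies(tweety)", negated=True))  # e.g. tweety is a penguin

    learner = InconsistencyDrivenLearner([perception, reasoning])
    for conflict in learner.resolve_and_learn():
        print("learned from conflict:", conflict)
```

In this sketch the inconsistency is not discarded silently: it is logged in `resolved_conflicts`, mirroring the abstract's point that occurring inconsistencies can drive a non-classical learning process rather than being treated as fatal errors.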