Modular Models of Intelligence – Review, Limitations and Prospects

  • Authors:
  • Amitabha Mukerjee; Amol Dattatraya Mali

  • Affiliations:
  • Center for Robotics, I.I.T. Kanpur, India 208016 (E-mail: amit@iitk.ac.in); Department of Electrical Engineering and Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA (E-mail: mali@miller.cs.uwm.edu)

  • Venue:
  • Artificial Intelligence Review
  • Year:
  • 2002

Abstract

AI applications are increasingly moving to modular agents, i.e., systems that independently handle parts of the problem based on small amounts of locally stored information (Grosz and Davis 1994; Russell and Norvig 1995). Many such agents minimize inter-agent communication by relying on changes in the environment as their cue for action. Some early successes of this model, especially in robotics ("reactive agents"), have led to a debate over this class of models as a whole. One issue that has drawn attention is that of conflicts between such agents. In this work we investigate a cyclic conflict that results in infinite looping between agents and has a severely debilitating effect on performance. We present some new results in the debate, and compare this problem with similar cyclicity observed in planning systems, meta-level planners, distributed agent models and hybrid reactive models. The main results of this work are: (a) the likelihood of such cycles developing increases as the behavior sets become more useful; (b) control methods for avoiding cycles, such as prioritization, are unreliable; and (c) behavior refinement methods that reliably avoid these conflicts (either by refining the stimulus, or by weakening the action) lead to weaker functionality. Finally, we show how attempts to introduce learning into the behavior modules will also increase the likelihood of cycles.
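
To make the cyclic conflict concrete, here is a minimal sketch (not taken from the paper; the behaviors, state encoding, and cycle detector are illustrative assumptions) of two stimulus-driven behavior modules whose actions each re-create the other's triggering stimulus, so a purely reactive control loop bounces between them indefinitely:

```python
# Two hypothetical reactive behaviors for a pick-and-place robot.
# Each behavior fires on a stimulus in the environment and its action
# restores the stimulus of the other behavior, producing the infinite
# loop discussed in the abstract.

def pick_up(state):
    """Fires when a block is on the table; puts it in the hand."""
    if state == "block_on_table":
        return "block_in_hand"      # action's effect on the environment
    return None                     # stimulus absent: behavior is silent

def put_down(state):
    """Fires when the hand holds a block; returns it to the table."""
    if state == "block_in_hand":
        return "block_on_table"     # re-creates pick_up's stimulus -> cycle
    return None

def run(behaviors, state, max_steps=10):
    """Stimulus-driven loop with a fixed priority order and naive cycle check."""
    seen = {state}
    for step in range(max_steps):
        for behavior in behaviors:          # earlier entries have priority
            new_state = behavior(state)
            if new_state is not None:
                print(f"step {step}: {behavior.__name__}: {state} -> {new_state}")
                state = new_state
                break
        else:
            return state                    # no behavior fired: quiescence
        if state in seen:
            print("cycle detected: the behaviors would loop forever")
            return state
        seen.add(state)
    return state

run([pick_up, put_down], "block_on_table")
```

Note that the fixed priority ordering in run() does not break the loop: each behavior still fires whenever its stimulus is present, which echoes result (b) above, that prioritization by itself is an unreliable safeguard against such cycles.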