Evolving and messaging decision-making agents

  • Authors: Edmund S. Yu
  • Affiliations: MNIS-TextWise Labs, 401 S. Salina St., Syracuse, NY
  • Venue: Proceedings of the fifth international conference on Autonomous agents
  • Year: 2001


Abstract

In this paper we describe our neurogenetic approach to developing a multi-agent decision support system that assists users in gathering, merging, analyzing, and using information to assess risks and make recommendations in situations that would otherwise demand tremendous amounts of the users' time and attention. In Phase I of this project, called the EMMA project, we demonstrated the feasibility of a set of solutions to various problems by building an intelligent agent application that makes recommendations in the credit assessment domain using a constrained, static, well-understood collection of training and testing data. More specifically, this application demonstrated:

1) The effectiveness of a hybrid learning scheme that uses neural networks for local learning by the autonomous domain agents, and a genetic algorithm for evolving both the sets of features available to these agents and the agents themselves.

2) The use of a well-defined agent communication language (IBM's Java-based Knowledge Query and Manipulation Language, or JKQML) to coordinate the training and fusing of multiple decision-making domain agents.

3) The effectiveness of a trainable decision-fusion agent for merging the results of multiple decision-making domain agents into coherent recommendations for the user.

4) The use of a constrained natural language interface for accepting directives from the user and for conveying recommendations.

Furthermore, benchmark results show that our EMMA Phase I prototype is comparable to first-class machine learning algorithms in the domain of loan applications and credit-worthiness, as reflected in published results. We have also shown that our neurogenetic learning algorithm has the potential to perform far better than others while using only about half of the input features.
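The hybrid neurogenetic scheme described in point 1 — a genetic algorithm evolving the feature subsets available to learning agents, with each subset scored by training a learner on it — can be sketched roughly as below. This is an illustrative toy, not the paper's implementation: the data is synthetic, all function names are ours, and a simple perceptron stands in for the paper's neural networks.

```python
import random

random.seed(0)

def make_data(n=200, n_features=10):
    """Synthetic credit-style data; only the first three features matter."""
    X, y = [], []
    for _ in range(n):
        row = [random.uniform(-1, 1) for _ in range(n_features)]
        y.append(1 if row[0] + row[1] - row[2] > 0 else 0)
        X.append(row)
    return X, y

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Local learning step (stand-in for a domain agent's neural network)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            err = yi - pred
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def accuracy(w, b, X, y):
    hits = sum((1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0) == yi
               for xi, yi in zip(X, y))
    return hits / len(y)

def select(X, mask):
    """Keep only the columns the binary mask switches on."""
    return [[x for x, m in zip(row, mask) if m] for row in X]

def fitness(mask, Xtr, ytr, Xva, yva):
    """A mask's fitness is the validation accuracy of a learner trained on it."""
    if not any(mask):
        return 0.0
    w, b = train_perceptron(select(Xtr, mask), ytr)
    return accuracy(w, b, select(Xva, mask), yva)

def evolve(Xtr, ytr, Xva, yva, n_features=10, pop=12, gens=15):
    """GA over binary feature masks: elitist selection, crossover, mutation."""
    population = [[random.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, reverse=True,
                        key=lambda m: fitness(m, Xtr, ytr, Xva, yva))
        parents = scored[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, Xtr, ytr, Xva, yva))

X, y = make_data()
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]
best = evolve(Xtr, ytr, Xva, yva)
print("best mask:", best, "val accuracy:", fitness(best, Xtr, ytr, Xva, yva))
```

Because fitness is measured on held-out data, the GA is rewarded for discarding irrelevant features, which mirrors the paper's observation that the evolved agents can perform well with only about half of the input features.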