Designing and automating the quality assessment of a knowledge-based system: The initial Automated academic advisor experience

  • Authors:
  • Nick Cercone;Robert Hadley;Fred Martin;Paul McFetridge;Tomek Strzalkowski

  • Affiliations:
  • Laboratory for Computer and Communications Research, Department of Computing Science, Simon Fraser University, Burnaby, British Columbia, CANADA (all authors)

  • Venue:
  • PKWBS-W'84 Proceedings of the 1984 IEEE conference on Principles of knowledge-based systems
  • Year:
  • 1984

Abstract

The automated academic advisor (AAA), a large practical artificial intelligence system currently under development, is introduced. Two parsers designed for use with the AAA are described. The ATN parser ENGRA is described, with emphasis on several enhancements to traditional ATN parsers incorporated into ENGRA. Our first AAA prototype combines ENGRA with a semantic interpreter (SI) to generate formal SQL queries against the ORACLE relational database for a limited range of queries. We then discuss how the ENGRA/SI prototype was enhanced. SHADOW, a Prolog-based English analyzer which forms the basis of our second prototype, is then described. Informal comparisons are made between the applicability of the two parsers to the AAA, and our initial experience with these two prototypes is discussed. The design of an evaluation subsystem is discussed briefly; using such a system we intend to discover universal techniques of system evaluation which will permit consistency and comparability of evaluation. Our evaluation emphasis is placed on the system's quality assessment rather than the more traditional performance measurement criterion.
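The English-to-SQL pipeline mentioned above (parser output passed to a semantic interpreter that emits an ORACLE query) can be sketched roughly as follows. This is a minimal illustration only: the logical-form shape, table and column names, and the `interpret` function are all invented for this sketch, since the abstract does not give the paper's actual grammar or database schema.

```python
# Hypothetical sketch of an ENGRA/SI-style semantic interpretation step:
# a parsed advising question, reduced to a tiny logical form, is mapped
# onto an SQL query string. All names here are illustrative assumptions.

def interpret(parse):
    """Translate a toy logical form into an SQL string.

    `parse` is a dict such as:
    {"target": "course_no", "table": "prerequisites",
     "filter": ("course", "CMPT 310")}
    """
    column, value = parse["filter"]
    return (f"SELECT {parse['target']} FROM {parse['table']} "
            f"WHERE {column} = '{value}'")

# "Which courses does CMPT 310 require?" -> invented logical form -> SQL
query = interpret({"target": "course_no",
                   "table": "prerequisites",
                   "filter": ("course", "CMPT 310")})
print(query)
```

A real interpreter of this kind would, of course, work from the parser's full syntactic analysis rather than a hand-built dictionary, and would cover quantification, conjunction, and the other constructions the abstract alludes to.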