Experimental Assessment of Accuracy of Automated Knowledge Capture

  • Authors:
  • Susan M. Stevens, J. Chris Forsythe, Robert G. Abbott, Charles J. Gieseler

  • Affiliations:
  • Sandia National Laboratories, Albuquerque, NM 87185, USA (all authors)

  • Venue:
  • FAC '09 Proceedings of the 5th International Conference on Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience: Held as Part of HCI International 2009
  • Year:
  • 2009


Abstract

The U.S. armed services are widely adopting simulation-based training, largely to reduce the costs associated with live training. However, simulation-based training still requires a high, and therefore expensive, instructor-to-student ratio. Intelligent tutoring systems target this need, but they are often associated with high costs for knowledge engineering and implementation. To reduce these costs, we are investigating the use of machine learning to produce models of expert behavior for automated student assessment. A key concern about the expert-modeling approach is whether it can provide accurate assessments on complex tasks of real-world interest. This study evaluates the accuracy of model-based assessments on such a task. We trained employees at Sandia National Laboratories on a Navy simulator and then compared their simulation performance to that of experts using both automated and manual assessment. The automated assessments were comparable to the manual assessments on three metrics.
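One way to picture automated assessment against an expert model is a minimal statistical sketch: fit per-feature statistics to a set of expert runs, then flag any student feature that deviates beyond a tolerance. This is a hypothetical illustration only; the paper does not specify its machine-learning method, and all feature names and values below are invented.

```python
# Hypothetical sketch of model-based assessment: compare a student's
# performance features to statistics fit from expert runs.
# Feature names, values, and the 2-sigma tolerance are illustrative.
from statistics import mean, stdev

def fit_expert_model(expert_runs):
    """Compute per-feature (mean, stdev) from a list of expert run dicts."""
    features = expert_runs[0].keys()
    return {
        f: (mean(r[f] for r in expert_runs), stdev(r[f] for r in expert_runs))
        for f in features
    }

def assess(model, student_run, tolerance=2.0):
    """Flag each feature where the student deviates from the expert mean
    by more than `tolerance` standard deviations."""
    flags = {}
    for f, (mu, sigma) in model.items():
        z = abs(student_run[f] - mu) / sigma if sigma else 0.0
        flags[f] = z > tolerance
    return flags

# Invented expert data for a simulated tracking task.
expert_runs = [
    {"reaction_time_s": 1.2, "track_error_m": 40.0},
    {"reaction_time_s": 1.4, "track_error_m": 55.0},
    {"reaction_time_s": 1.1, "track_error_m": 45.0},
]
model = fit_expert_model(expert_runs)

# A slow student run: reaction time is flagged, tracking error is not.
print(assess(model, {"reaction_time_s": 3.5, "track_error_m": 48.0}))
# → {'reaction_time_s': True, 'track_error_m': False}
```

A real system would operate on richer behavioral traces than two scalar features, but the structure is the same: a model distilled from expert performance serves as the grading standard, replacing a human instructor's judgment for each metric.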