Towards an on-demand peer feedback system for a clinical knowledge base: A case study with order sets

  • Authors:
Nathan C. Hulse; Guilherme Del Fiol; Richard L. Bradshaw; Lorrie K. Roemer; Roberto A. Rocha

  • Affiliations:
Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA and Knowledge Management Team, Intermountain Healthcare, Salt Lake City, UT, USA (Hulse, Del Fiol, Rocha); Knowledge Management Team, Intermountain Healthcare, Salt Lake City, UT, USA (Bradshaw, Roemer)

  • Venue:
  • Journal of Biomedical Informatics
  • Year:
  • 2008

Abstract

Objective: We have developed an automated knowledge base peer feedback system as part of an effort to facilitate the creation and refinement of sound clinical knowledge content within an enterprise-wide knowledge base. The program collects clinical data stored in our Clinical Data Repository during usage of a physician order entry (POE) program. It analyzes usage patterns of order sets relative to their templates and creates a report detailing how each order set template is used in practice, including a set of suggested modifications to the template.

Design: A quantitative analysis was performed to assess the validity of the program's suggested order set template modifications.

Measurements: We collected and deidentified 2951 instances of POE-based orders. From this data set, the program identified 30 different order set templates and generated a feedback report for each, containing a total of 500 suggested modifications. Five order set authors were then asked to 'accept' or 'reject' each suggestion contained in their respective order set templates, and to categorize their rationale for doing so ('clinical relevance' or 'user convenience').

Results: In total, 62% (309/500) of the suggestions were accepted by the clinical content authors. By suggestion type, authors accepted 32% (36/114) of the suggested additions, 74% (123/167) of the suggested pre-selections, 76% (16/25) of the suggested de-selections, and 68% (131/194) of the suggested changes in combo box order.

Conclusion: Overall, the feedback system generated suggestions that were deemed highly acceptable by order set authors. Future refinements and enhancements to the software will add to its utility.
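
The abstract describes four kinds of template suggestions derived from order set usage data: additions, pre-selections, de-selections, and combo box reordering. As an illustration only, the Python sketch below shows one plausible frequency-based heuristic for generating such suggestions from de-identified order instances. The data structures, the 50% threshold, and all names (TemplateItem, OrderSetTemplate, suggest_modifications) are assumptions made for this sketch and do not reproduce the authors' actual implementation or data schemas.

  from collections import Counter
  from dataclasses import dataclass, field

  # Hypothetical, simplified structures; the real CDR and order set
  # schemas are not described at this level of detail in the abstract.

  @dataclass
  class TemplateItem:
      name: str
      preselected: bool = False
      combo_options: list = field(default_factory=list)  # in display order

  @dataclass
  class OrderSetTemplate:
      name: str
      items: dict  # item name -> TemplateItem

  def suggest_modifications(template, instances, threshold=0.5):
      """Compare order instances (dicts of item name -> chosen value)
      against a template and emit suggestions of the four types named
      in the abstract. The thresholds are illustrative assumptions."""
      if not instances:
          return []
      n = len(instances)
      selected = Counter()   # how often each template item was ordered
      added = Counter()      # items ordered ad hoc, outside the template
      combo_choices = {name: Counter() for name in template.items}

      for inst in instances:
          for item_name, value in inst.items():
              if item_name in template.items:
                  selected[item_name] += 1
                  if template.items[item_name].combo_options:
                      combo_choices[item_name][value] += 1
              else:
                  added[item_name] += 1

      suggestions = []
      # Suggest adding items that users frequently order manually.
      for item_name, count in added.items():
          if count / n >= threshold:
              suggestions.append(("add", item_name))
      for item_name, item in template.items.items():
          rate = selected[item_name] / n
          # Suggest pre-selecting items users almost always select.
          if not item.preselected and rate >= threshold:
              suggestions.append(("pre-select", item_name))
          # Suggest de-selecting pre-selected items users rarely keep.
          elif item.preselected and rate < (1 - threshold):
              suggestions.append(("de-select", item_name))
          # Suggest reordering combo box options by observed frequency.
          if item.combo_options:
              observed = [v for v, _ in combo_choices[item_name].most_common()]
              if observed and observed != item.combo_options[:len(observed)]:
                  suggestions.append(("reorder-combo", item_name, observed))
      return suggestions

  if __name__ == "__main__":
      # Hypothetical example: a small admission order set template.
      template = OrderSetTemplate(
          name="Example admission order set",
          items={
              "CBC": TemplateItem("CBC", preselected=True),
              "Blood cultures": TemplateItem("Blood cultures"),
          },
      )
      instances = [
          {"Blood cultures": True, "Chest X-ray": True},
          {"Blood cultures": True, "Chest X-ray": True},
          {"CBC": True},
      ]
      print(suggest_modifications(template, instances))
      # -> add "Chest X-ray", de-select "CBC", pre-select "Blood cultures"

In this toy run, an item ordered ad hoc in most instances becomes an "add" suggestion, a rarely used pre-selected item becomes a "de-select" suggestion, and a frequently chosen optional item becomes a "pre-select" suggestion, mirroring the suggestion categories evaluated in the study.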