A system for medical consultation and education using multimodal human/machine communication

  • Authors:
  • M. Akay; I. Marsic; A. Medl; Guangming Bu

  • Affiliations:
  • Thayer Sch. of Eng., Dartmouth Coll., Hanover, NH, USA

  • Venue:
  • IEEE Transactions on Information Technology in Biomedicine
  • Year:
  • 1998

Abstract

We propose a novel application of an interactive, distributed system to medical consultation and education. The application uses a multiuser, collaborative environment with multimodal human/machine communication in the dimensions of sight, sound, and touch. The experimental setup, consisting of two user stations, and the multimodal interfaces, including sight (eye tracking), sound (automatic speech recognition), and touch (microbeam pen), were tested and evaluated. The system uses a collaborative workspace as a common visualization space. Users communicate with the application through a fusion agent by eye tracking, speech, and microbeam pen. Audio/video teleconferencing is also included so that radiologists can communicate with each other while working on the mammograms. The system has three software agents: a fusion agent, a conversational agent, and an analytic agent. The fusion agent interprets multimodal commands by integrating the multimodal inputs. The conversational agent answers the user's questions, detects human-related or semantic errors, and notifies the user of the results of the image analysis. The analytic agent enhances the digitized images using a wavelet denoising algorithm when requested by the user. To evaluate the system, we used it for medical consultation on mammograms. Results show that relevant information about the regions of interest chosen by the users is extracted automatically and used to enhance the mammograms.
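The abstract states that the analytic agent enhances user-selected regions with wavelet denoising but gives no implementation details. The following is a minimal sketch of one standard variant of that technique, soft-threshold wavelet shrinkage, using the PyWavelets library; the Daubechies-4 basis, the two decomposition levels, the VisuShrink universal threshold, and the function name `wavelet_denoise` are all assumptions for illustration, not the authors' method.

```python
import numpy as np
import pywt

def wavelet_denoise(image: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Denoise a 2-D image by soft-thresholding its wavelet detail coefficients."""
    # Multi-level 2-D discrete wavelet decomposition:
    # coeffs = [cA_n, (cH_n, cV_n, cD_n), ..., (cH_1, cV_1, cD_1)]
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Robust noise estimate from the finest-scale diagonal details
    # (median absolute deviation of cD_1).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    # Universal (VisuShrink) threshold -- one common rule; assumed here.
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))
    # Keep the coarse approximation; soft-threshold every detail band.
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# Example: a synthetic noisy patch standing in for a digitized mammogram ROI.
rng = np.random.default_rng(0)
roi = rng.normal(0.5, 0.1, size=(256, 256))
enhanced = wavelet_denoise(roi)
```

Soft thresholding shrinks small detail coefficients, which mostly carry noise, toward zero while preserving large coefficients that encode edges, which is why wavelet shrinkage is a common choice for enhancing low-contrast structures in mammograms.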