Automatic generation of textual, audio, and animated help in UIDE: the User Interface Design Environment

  • Authors:
  • Piyawadee "Noi" Sukaviriya; Jeyakumar Muthukumarasamy; Anton Spaans; Hans J. J. de Graaff

  • Affiliations:
  • Piyawadee "Noi" Sukaviriya and Jeyakumar Muthukumarasamy: Graphics, Visualization, and Usability Center, Georgia Institute of Technology, Atlanta, GA; Anton Spaans and Hans J. J. de Graaff: Delft University of Technology, Delft, the Netherlands

  • Venue:
  • AVI '94: Proceedings of the Workshop on Advanced Visual Interfaces
  • Year:
  • 1994

Abstract

Research on automatic help generation has failed to keep pace with advances in user interface technology. As users and interfaces become increasingly sophisticated, help information must be presented with a close tie to the current work context. Help research also needs to exploit media technology to convey information to users effectively. Our work on automatic generation of help from user interface specifications attempts to bridge these gaps: between help and the user interface, making help truly sensitive to the interface context, and between the help media and the interface media, making communication more direct and more effective. Our previously reported work emphasized a shared knowledge representation for both the user interface and help, and an architecture for automatic generation of context-sensitive animated help in Smalltalk-80. This paper presents a new integrated architecture in C++ which generates not only animation but also audio as procedural help. The architecture also uses the knowledge representation to automatically provide textual help explaining why an object in an interface is disabled.
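To make the abstract's last point concrete, here is a minimal sketch, not taken from the paper, of how a shared knowledge representation with explicit preconditions could both control whether an interface object is enabled and generate textual help explaining why it is disabled. All names (`Precondition`, `InterfaceAction`, `explainDisabled`) are hypothetical illustrations, not the UIDE API.

```cpp
// Sketch only: a toy model in which interface actions carry preconditions
// over application state, and the same model is reused to generate
// "why is this disabled?" text, in the spirit described in the abstract.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// A precondition pairs a runtime check with a human-readable description,
// so one representation serves both the interface and the help generator.
struct Precondition {
    std::string description;       // e.g. "a document must be open"
    std::function<bool()> holds;   // evaluated against current application state
};

struct InterfaceAction {
    std::string name;              // e.g. "Save"
    std::vector<Precondition> preconditions;

    bool isEnabled() const {
        for (const auto& p : preconditions)
            if (!p.holds()) return false;
        return true;
    }

    // Compose textual help from the failed preconditions.
    std::string explainDisabled() const {
        std::string text = "\"" + name + "\" is disabled because:";
        for (const auto& p : preconditions)
            if (!p.holds()) text += "\n  - " + p.description;
        return text;
    }
};

int main() {
    bool documentOpen = false;     // stand-in for real application state
    InterfaceAction save{
        "Save",
        {{"a document must be open", [&] { return documentOpen; }}}
    };

    if (!save.isEnabled())
        std::cout << save.explainDisabled() << "\n";
    return 0;
}
```

Because the explanation is derived from the same preconditions that disable the widget, the help text stays consistent with the interface state by construction, which is the kind of context sensitivity the abstract argues for.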