Creating user interfaces by demonstration
Metamouse: specifying graphical procedures by example
SIGGRAPH '89 Proceedings of the 16th annual conference on Computer graphics and interactive techniques
EAGER: programming repetitive tasks by example
CHI '91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Demonstrational interfaces: Coming soon?
CHI '91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Accessing information through graphics
ECAI '92 Proceedings of the 10th European conference on Artificial intelligence
A multimodal syntax-directed graph editor
Structure-based editors and environments
Generating Referring Expressions in a Multimodal Environment
Proceedings of the 6th International Workshop on Natural Language Generation: Aspects of Automated Natural Language Generation
EMACS: The Extensible, Customizable, Self-Documenting Display Editor
Programming by example
Commenting on action: continuous linguistic feedback generation
IUI '93 Proceedings of the 1st international conference on Intelligent user interfaces
An approach to natural gesture in virtual environments
ACM Transactions on Computer-Human Interaction (TOCHI) - Special issue on virtual reality software and technology
A Learning Agent that Assists the Browsing of Software Libraries
IEEE Transactions on Software Engineering
An action-inferring facility for a multimodal interface called Edward is described. Based on the actions the user performs, Edward anticipates future actions and offers to perform them automatically. The system uses inductive inference: it generalizes over arguments and results, and detects patterns from a small sequence of user actions, e.g. “copy a Lisp file; change the extension of the original file to .org; put the copy in the backup folder”. Multimodality (in particular, the combination of natural language and simulated pointing gestures) and the reuse of patterns are important new features. Some possibilities and problems of action-inferring interfaces in general are also addressed. Such interfaces are particularly useful for professional users of general-purpose applications, who are often unable to automate repetitive tasks themselves, either because the applications provide no scripting facilities or because the users lack the programming skills.
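The kind of inference described above can be sketched in a few lines. The following is a hypothetical illustration, not Edward's actual algorithm: given a log of (action, argument) events, it checks whether the log repeats with a given period, keeping actions literal and generalizing any argument that varies between repetitions into a variable slot. Function and variable names (`detect_pattern`, `"<VAR>"`) are inventions for this sketch.

```python
def detect_pattern(log, period):
    """Check whether a log of (action, argument) pairs repeats with the
    given period. Actions must match exactly across repetitions; arguments
    that vary are generalized to the placeholder "<VAR>". Returns the
    generalized pattern, or None if the log does not repeat."""
    if len(log) < 2 * period:
        return None  # need at least two repetitions to infer a pattern
    cycles = [log[i:i + period] for i in range(0, len(log) - period + 1, period)]
    cycles = [c for c in cycles if len(c) == period]
    template = []
    for step in range(period):
        actions = {c[step][0] for c in cycles}
        if len(actions) != 1:
            return None  # the action itself differs: no consistent pattern
        args = [c[step][1] for c in cycles]
        # A constant argument stays literal; a varying one becomes a variable.
        arg = args[0] if len(set(args)) == 1 else "<VAR>"
        template.append((actions.pop(), arg))
    return template


# The backup example from the abstract: copy a Lisp file, rename the
# original, move the copy to the backup folder, twice over.
log = [
    ("copy", "report.lisp"), ("rename", "report.org"), ("move", "backup"),
    ("copy", "notes.lisp"), ("rename", "notes.org"), ("move", "backup"),
]
print(detect_pattern(log, 3))
# → [('copy', '<VAR>'), ('rename', '<VAR>'), ('move', 'backup')]
```

Having generalized the filename to a variable while keeping the constant `backup` destination literal, an interface of this kind could offer to apply the pattern to the next file the user touches, which is the anticipation behaviour the abstract describes.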