A system that generates and interprets customized visual languages for given application areas is presented. Generation is highly automated: the user supplies a set of sample visual sentences to the generator, which applies grammar-inference techniques to produce a grammar that generalizes the samples, and which exploits general semantic information about the application area to determine the meaning of visual sentences in the inferred language. The interpreter is modeled as an attribute grammar; a knowledge base, constructed while the system is generated, is consulted to build the meaning of each visual sentence. The architecture of the system and its use in a visual text-editing environment (inspired by the Heidelberg icon set) enhanced with file-management features are reported.
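The interpretation step described above can be illustrated with a minimal sketch: a parse tree of a visual sentence is evaluated bottom-up, synthesizing a meaning attribute at each node by consulting a knowledge base that maps productions to semantic functions. All names here (`KNOWLEDGE_BASE`, the production labels, the example sentence) are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch of attribute-grammar interpretation of a visual
# sentence. Production labels and semantic functions are illustrative.

KNOWLEDGE_BASE = {
    # production label -> semantic function over the children's meanings
    "drop_on_trash": lambda obj, trash: f"remove {obj}",
    "drop_on_folder": lambda obj, folder: f"move {obj} into {folder}",
}

def interpret(node):
    """Synthesize the meaning attribute of a parse-tree node bottom-up."""
    label, children = node
    if not children:
        # Terminal node: an icon denotes itself (e.g., a file name)
        return label
    child_meanings = [interpret(child) for child in children]
    return KNOWLEDGE_BASE[label](*child_meanings)

# A parse tree for a visual sentence: a file icon dropped on the trash icon
tree = ("drop_on_trash", [("report.txt", []), ("trash", [])])
print(interpret(tree))  # -> "remove report.txt"
```

In the generated system, the knowledge base would be built during generation from the semantic information about the application area; here it is written out by hand only to keep the sketch self-contained.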