With the increasing use of 3D displays and input devices, we need to ensure that users of the 3D worlds created with them can easily learn how to operate within those worlds. One way to achieve this is to provide contextual interaction support within the environment itself. In a virtual world where the user is free to move around, and especially when immersed, having to consult a separate manual to work out the next course of action would be poorly received. Rather than relying on manuals separate from the computer system, the system should be able to interrogate itself and tell the user what it can do. For computer systems to do this, we need to move away from defining interaction with an event-based model and instead formally define the interaction dialogue. We have shown how, by using Augmented Transition Networks (ATNs), the user can ask what they can do within the current context. The user can also query the system to find out how to perform a specific task. The help provided can also identify the components within the environment that the user needs to interact with. Further work has begun on examining how the user could adapt the interaction within the system by visualising the ATN.
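To make the two kinds of query concrete, the following is a minimal sketch (not the paper's implementation) of an ATN-style dialogue model whose states and transitions can be inspected at run time to answer "what can I do here?" and "how do I perform task X?". All names, states, and actions in the example are illustrative assumptions.

```python
from collections import deque

class ATN:
    """A simple transition network: states linked by (action, target, component) edges."""

    def __init__(self):
        # transitions[state] -> list of (action, target_state, component)
        self.transitions = {}

    def add_transition(self, state, action, target, component):
        self.transitions.setdefault(state, []).append((action, target, component))

    def available_actions(self, state):
        """Context-sensitive help: actions (and the components they involve) legal in this state."""
        return [(action, component) for action, _, component in self.transitions.get(state, [])]

    def how_to(self, start, goal):
        """Task help: breadth-first search for a shortest action sequence that reaches `goal`."""
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if state == goal:
                return path
            for action, target, component in self.transitions.get(state, []):
                if target not in seen:
                    seen.add(target)
                    queue.append((target, path + [(action, component)]))
        return None  # goal not reachable from the current context


# Hypothetical fragment of an assembly-task dialogue.
atn = ATN()
atn.add_transition("idle", "grasp part", "holding", "part A")
atn.add_transition("holding", "align part", "aligned", "housing")
atn.add_transition("aligned", "fasten bolt", "assembled", "bolt")

print(atn.available_actions("idle"))    # what can the user do now?
print(atn.how_to("idle", "assembled"))  # how can the user complete the assembly?
```

Because the dialogue is held as explicit data rather than scattered event handlers, the same structure that drives the interaction can be queried for help and, in principle, visualised and edited to adapt the interaction.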