While the HCI community has put considerable effort into creating physical interfaces for collaboration, studying multi-user interaction dynamics, and building specific applications to support (and test) these phenomena, it has not addressed the problems that arise when multiple applications share the same interactive space. An ecology of rich interactive programs sharing the same interface raises the question of how to resolve interaction ambiguity across applications while still allowing different programmers the freedom to create rich, unconstrained interaction experiences. This paper describes GestureAgents, a framework demonstrating several techniques for coordinating different applications so that concurrent, multi-user, multi-tasking interaction is possible while gesture ambiguity is still handled across multiple applications.
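A minimal sketch of the kind of cross-application coordination described above, assuming a hypothetical central dispatcher and recognizer interface (the names GestureDispatcher, Recognizer, feed, and dispatch are illustrative and are not the GestureAgents API): recognizers registered by different applications tentatively consume the same stream of touch events, and the dispatcher grants exclusive ownership to at most one of them once the ambiguity resolves.

```python
# Illustrative sketch only: hypothetical names, not the GestureAgents API.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class TouchEvent:
    touch_id: int
    x: float
    y: float
    kind: str  # "down", "move", or "up"


class Recognizer:
    """A per-application gesture recognizer competing for shared input events."""

    def __init__(self, app: str, name: str, accepts: Callable[[list], bool]):
        self.app = app
        self.name = name
        self.accepts = accepts        # decides, from the event history, if the gesture matched
        self.history: list[TouchEvent] = []
        self.alive = True             # still a candidate for the current event stream

    def feed(self, event: TouchEvent) -> Optional[bool]:
        """Return True (confirmed), False (dropped out), or None (still ambiguous)."""
        if not self.alive:
            return False
        self.history.append(event)
        if event.kind == "up":
            # On release, either the gesture matched or this recognizer drops out.
            matched = self.accepts(self.history)
            self.alive = matched
            return matched
        return None


class GestureDispatcher:
    """Routes events to all competing recognizers and grants exclusive ownership."""

    def __init__(self):
        self.recognizers: list[Recognizer] = []

    def register(self, recognizer: Recognizer) -> None:
        self.recognizers.append(recognizer)

    def dispatch(self, event: TouchEvent) -> Optional[Recognizer]:
        winners = [r for r in self.recognizers if r.feed(event) is True]
        if not winners:
            return None
        # Conflict policy (purely illustrative): prefer the recognizer that
        # consumed the longest event history, i.e. the most specific gesture.
        winner = max(winners, key=lambda r: len(r.history))
        for r in self.recognizers:
            if r is not winner:
                r.alive = False       # losers are excluded for this event stream
        return winner


if __name__ == "__main__":
    dispatcher = GestureDispatcher()
    # Two applications register recognizers over the same shared surface.
    dispatcher.register(Recognizer("app_a", "tap", lambda h: len(h) <= 2))
    dispatcher.register(Recognizer("app_b", "drag", lambda h: len(h) > 2))
    stream = [TouchEvent(1, 0.1, 0.1, "down"),
              TouchEvent(1, 0.4, 0.4, "move"),
              TouchEvent(1, 0.7, 0.7, "up")]
    for ev in stream:
        owner = dispatcher.dispatch(ev)
        if owner:
            print(f"{owner.app} wins with gesture '{owner.name}'")
```

In this toy conflict policy the longest-matching recognizer wins; the actual framework would need a richer negotiation between applications, but the sketch shows the basic pattern of tentative, concurrent recognition followed by an exclusive grant.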