The user's intention is reflected not only in the actual input action but also in the actions immediately preceding it. "P-Recognition" recognizes these preceding actions and predicts the user's intention at the moment the actual action starts. This paper tests P-Recognition in a pen-based map navigation interface, where the map is panned by the user's dragging strokes and zoomed when the user encloses a region with a circle. An experiment confirms the feasibility of the proposal: dragging and circling actions are distinguishable before the pen touches the screen, and for some users the intention to write text can also be recognized. This confirms that the user's intention is present in the preceding actions and is therefore detectable.
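The abstract does not spell out the recognizer itself, but the core idea of classifying a pre-touch hover trajectory as a drag or a circling gesture can be sketched with a simple straightness heuristic. The function name, the feature, and the threshold below are illustrative assumptions, not the paper's actual method.

```python
import math

def classify_pretouch(points):
    """Guess whether a pre-touch hover trajectory precedes a drag or a circle.

    points: list of (x, y) pen positions sampled while the pen hovers
    above the screen, before it touches down.  (Hypothetical sketch.)
    """
    if len(points) < 3:
        return "drag"  # too little data; fall back to the common case
    # Path length: sum of distances between consecutive samples.
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    # Straight-line distance from the first sample to the last.
    chord = math.dist(points[0], points[-1])
    # A straight approach (chord close to path) suggests a pan/drag;
    # a curved, looping approach suggests an enclosing circle.
    straightness = chord / path if path > 0 else 1.0
    return "drag" if straightness > 0.6 else "circle"
```

A straight hover such as `[(0, 0), (10, 0), (20, 0)]` is classified as a drag, while a looping one such as `[(0, 0), (10, 10), (20, 0), (10, -10), (1, 0)]` is classified as a circle. A real system in the spirit of the paper would train on recorded hover traces rather than hand-pick a threshold.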