Universal subgoaling and chunking: the automatic generation and learning of goal hierarchies
Extending Fitts' law to two-dimensional tasks
CHI '92 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Perceptual-motor control in human-computer interaction
Cognitive modeling reveals menu search is both random and systematic
Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems
A comparison of rule-based and positionally constant arrangements of computer menu items
CHI '87 Proceedings of the SIGCHI/GI Conference on Human Factors in Computing Systems and Graphics Interface
The Psychology of Human-Computer Interaction
Toward automated exploration of interactive systems
Proceedings of the 7th International Conference on Intelligent User Interfaces
Visual search and mouse-pointing in labeled versus unlabeled two-dimensional visual hierarchies
ACM Transactions on Computer-Human Interaction (TOCHI)
CHI '99 Extended Abstracts on Human Factors in Computing Systems
The human-computer interaction handbook
Dynamic detection of novice vs. skilled use without a task model
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Adaptively shortened pull down menus: location knowledge and selection efficiency
Behaviour & Information Technology
Cognitive strategies for the visual search of hierarchical computer displays
Human-Computer Interaction
Acquisition of Animated and Pop-Up Targets
INTERACT '09 Proceedings of the 12th IFIP TC 13 International Conference on Human-Computer Interaction: Part II
Exploiting the icon arrangement on mobile devices as information source for context-awareness
Proceedings of the 12th International Conference on Human-Computer Interaction with Mobile Devices and Services
This research presents cognitive models of a person selecting an item from a familiar, ordered, pull-down menu. Two different models provide a good fit with human data, and thus two different possible explanations for the low-level cognitive processes involved in the task. Both models assert that people make an initial eye and hand movement to an anticipated target location without waiting for the menu to appear. The first model asserts that a person knows the exact location of the target item before the menu appears, but the model uses nonstandard Fitts' law coefficients to predict mouse pointing time. The second model asserts that a person would only know the approximate location of the target item, and the model uses Fitts' law coefficients better supported by the literature. This research demonstrates that people can develop considerable knowledge of locations in a visual task environment, and that more work regarding Fitts' law is needed.
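The pointing-time predictions discussed above are instances of Fitts' law, commonly written in the Shannon formulation as MT = a + b * log2(D/W + 1), where D is the distance to the target and W its width along the axis of motion. The sketch below shows how such a prediction is computed; the coefficient values are illustrative placeholders, not the coefficients fitted in the models described in the abstract.

```python
import math

def fitts_movement_time(distance, width, a=0.230, b=0.166):
    """Predict movement time in seconds with the Shannon formulation
    of Fitts' law: MT = a + b * log2(D/W + 1).

    The intercept `a` and slope `b` here are hypothetical example
    values; real models fit them to observed pointing data.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Example: a menu item 100 px below the cursor, 20 px tall.
# ID = log2(100/20 + 1) = log2(6) ≈ 2.585 bits
mt = fitts_movement_time(distance=100, width=20)
```

A larger distance or a narrower target raises the index of difficulty, and the predicted time grows logarithmically, which is why the two models in the abstract can fit the same data with different coefficient choices.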