Scientists from many disciplines (including physiology, psychology, and engineering) have worked on modelling visual perception. However, this field has been less extensively studied in the context of computer science: most existing perception models work only for very specific domains, such as menu-searching or icon-searching tasks. We are developing a perception model that works for any application. It takes as input a list of mouse events, a sequence of bitmap images of an interface, and the locations of the different objects in the interface, and produces a sequence of eye movements as output. We have identified a set of features for differentiating among screen objects, and using those features our model has reproduced the results of previous experiments on visual perception in the context of HCI. It can also simulate the effects of different visual impairments on interaction. In this paper we discuss the design and implementation of the model, and two pilot studies that demonstrate it.
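The abstract describes the model's interface: mouse events, a sequence of interface screenshots, and object locations go in; a sequence of eye movements comes out. A minimal sketch of that interface is below. All type and function names here are illustrative assumptions, not taken from the paper, and the fixation logic is a trivial placeholder rather than the authors' perception model.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical input/output types; names are illustrative only.

@dataclass
class MouseEvent:
    time_ms: int
    x: int
    y: int
    kind: str  # e.g. "move", "click"

@dataclass
class ScreenObject:
    label: str
    bounds: Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class Fixation:
    x: int
    y: int
    duration_ms: int

def simulate_scanpath(mouse_events: List[MouseEvent],
                      screenshots: List[bytes],
                      objects: List[ScreenObject]) -> List[Fixation]:
    """Toy stand-in for the perception model: fixate the centre
    of each screen object in turn with a fixed dwell time."""
    return [Fixation(x=o.bounds[0] + o.bounds[2] // 2,
                     y=o.bounds[1] + o.bounds[3] // 2,
                     duration_ms=250)
            for o in objects]
```

A real implementation would weight candidate fixation targets by the visual features the paper identifies and could degrade the screenshot input to simulate visual impairments; this sketch only fixes the shape of the data flowing through the model.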