User modeling is widely used in HCI, but there are very few systematic HCI modeling tools for people with disabilities. We are developing user models to support the design and evaluation of interfaces for people with a wide range of abilities. We present a perception model that works for some kinds of visually impaired users as well as for able-bodied people. The model takes as input a list of mouse events, a sequence of bitmap images of an interface, and the locations of objects in the interface, and produces a sequence of eye movements as output. Our model predicts visual search time for two different visual search tasks with significant accuracy for both able-bodied and visually impaired people.
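To make the model's input/output shape concrete, the following is a minimal sketch of an interface with the same signature as described above: mouse events, interface frames, and object locations in; an ordered fixation sequence out. The greedy nearest-neighbour scan used here is purely a hypothetical placeholder, not the authors' perception model, and all names in it are illustrative assumptions.

```python
import math

def predict_scanpath(mouse_events, frames, object_locations):
    """Illustrative stand-in mirroring the abstract's interface:
    mouse events, bitmap frames, and object locations in;
    a predicted sequence of eye fixations out.

    NOTE: the greedy nearest-neighbour heuristic below is NOT the
    paper's model; it only demonstrates the input/output contract.
    `frames` is accepted but unused in this sketch.
    """
    # Start the scan from the last known mouse position.
    cx, cy = mouse_events[-1]
    remaining = list(object_locations)
    fixations = []
    while remaining:
        # Fixate the nearest unvisited object next (illustrative heuristic).
        nxt = min(remaining, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
        fixations.append(nxt)
        remaining.remove(nxt)
        cx, cy = nxt
    return fixations
```

A caller would pass recorded mouse coordinates and screen-object centroids and receive one fixation per object, which could then be timed against observed search behaviour.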