UAHCI'13 Proceedings of the 7th international conference on Universal Access in Human-Computer Interaction: applications and services for quality of life - Volume Part III
Touch-screens are becoming increasingly ubiquitous. They appeal because they support new forms of human interaction: they can interpret rich gestural input, render flexible user interfaces and enable multi-user interaction. However, the technology also creates new challenges and barriers for users with limited vision or motor abilities. The PhD work described in this paper proposes a technique that combines Shared User Models (SUM) with adaptive interfaces to improve the accessibility of touch-screen devices for people with low levels of vision and motor ability. A SUM, built from an individual's interaction data across multiple applications and devices, is used to infer new knowledge of that individual's abilities and characteristics, without the need for continuous calibration exercises or manual configuration. The approach has been realized as an open-source software framework that supports the creation of applications which use SUM to adapt their interfaces to the needs of individual users.
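The core idea of the abstract can be illustrated with a minimal sketch: a shared user model that pools interaction evidence contributed by several applications, so that any one application can query an inferred ability value instead of running its own calibration exercise. All class, method, and attribute names below are hypothetical illustrations, not the framework's actual API.

```python
from collections import defaultdict

class SharedUserModel:
    """Hypothetical sketch of a shared user model: it aggregates
    interaction evidence from multiple applications and devices
    into one per-user store, from which ability attributes can
    be inferred without per-application calibration."""

    def __init__(self):
        # attribute name -> list of (source_app, value) evidence entries
        self.evidence = defaultdict(list)

    def add_evidence(self, attribute, value, source_app):
        # Each application contributes what it observes during
        # normal use (e.g. sizes of targets the user hits reliably).
        self.evidence[attribute].append((source_app, value))

    def infer(self, attribute, default=None):
        # A simple resolution strategy: average the numeric
        # evidence gathered across all contributing applications.
        values = [v for _, v in self.evidence[attribute]]
        if not values:
            return default
        return sum(values) / len(values)

# An adaptive interface queries the shared model rather than
# asking the user to recalibrate:
model = SharedUserModel()
model.add_evidence("min_touch_target_mm", 12.0, "email_app")
model.add_evidence("min_touch_target_mm", 14.0, "browser")
target = model.infer("min_touch_target_mm")  # 13.0
```

In a real system the resolution step would weigh evidence by recency and source reliability rather than a plain average, but the sketch captures the cross-application sharing that lets the model grow without explicit user configuration.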