Automatically identifying targets users interact with during real world tasks

  • Authors: Amy Hurst, Scott E. Hudson, Jennifer Mankoff
  • Affiliations: Carnegie Mellon University, Pittsburgh, PA, USA (all authors)

  • Venue: Proceedings of the 15th International Conference on Intelligent User Interfaces (IUI)
  • Year: 2010


Abstract

Information about the location and size of the targets that users interact with in real-world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However, this information is incomplete: the APIs do not support all targets found in modern interfaces, and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks into the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89% accuracy. Our hybrid approach outperforms the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, the API by itself correctly identified only 74% of the targets.
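The abstract's core idea is to treat visual cues such as hover highlighting as evidence of a target's extent, and to fall back on (or cross-check against) accessibility-API bounds. The sketch below is a minimal illustration of that idea, not the authors' implementation: it omits the paper's machine-learning component, assumes before/during-hover screenshots readable with OpenCV, and the function names (`bounds_from_hover_diff`, `hybrid_target_bounds`), the threshold values, and the containment sanity check are all invented for illustration.

```python
"""Illustrative sketch (assumptions noted above), not the paper's pipeline:
estimate a clicked target's bounds from the hover-highlight region revealed
by diffing screenshots, and prefer accessibility-API bounds when plausible.
Requires OpenCV (cv2) and NumPy."""
import cv2
import numpy as np


def bounds_from_hover_diff(before, during, click_xy, min_area=25):
    """Estimate target bounds as the smallest changed region containing the
    click point. `before`/`during` are BGR screenshots of the same screen
    region; returns (x, y, w, h) or None."""
    diff = cv2.absdiff(before, during)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)
    # Close small gaps so a highlighted widget forms one connected blob.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cx, cy = click_xy
    best = None
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < min_area:
            continue  # ignore noise (cursor animation, antialiasing)
        if x <= cx <= x + w and y <= cy <= y + h:
            # Prefer the smallest changed region containing the click.
            if best is None or w * h < best[2] * best[3]:
                best = (x, y, w, h)
    return best


def hybrid_target_bounds(api_bounds, before, during, click_xy):
    """Use accessibility-API bounds when they plausibly contain the click;
    otherwise fall back to the vision-based estimate. `api_bounds` is a
    hypothetical (x, y, w, h) tuple, or None if the API reported nothing."""
    if api_bounds is not None:
        x, y, w, h = api_bounds
        cx, cy = click_xy
        if x <= cx <= x + w and y <= cy <= y + h:
            return api_bounds
    return bounds_from_hover_diff(before, during, click_xy)
```

Choosing the smallest changed region that contains the click point is one way to discard unrelated screen changes (tooltips, animations elsewhere) while still capturing the highlighted widget; the paper's actual method additionally uses learned models to decide among such candidate evidence.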