Beyond a visuocentric way of a visual web search clustering engine: the sonification of WhatsOnWeb

  • Authors:
  • Maria Laura Mele;Stefano Federici;Simone Borsci;Giuseppe Liotta

  • Affiliations:
  • ECoNA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, 'Sapienza' University of Rome, IT;ECoNA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, 'Sapienza' University of Rome, IT and Department of Human and Education Sciences, University o ...;ECoNA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, 'Sapienza' University of Rome, IT;Department of Electronic and Information Engineering, University of Perugia, IT

  • Venue:
  • ICCHP'10 Proceedings of the 12th international conference on Computers helping people with special needs: Part I
  • Year:
  • 2010


Abstract

It is widely accepted that spatial representation is processed by an amodal system. Recent studies show that blind subjects outperform sighted people in spatial exploration guided only by auditory cues. Sonification offers an effective means of conveying graphic information, overcoming the digital divide that arises when content is delivered in a visuocentric modality. We present a usability evaluation investigating differences in interaction between blind and sighted users surfing WhatsOnWeb, a search engine that displays information by applying graph-drawing methods to semantically clustered data. We compare the visual presentation of three different layouts with their sonified counterparts, demonstrating both qualitatively and quantitatively that blind and sighted users interact with no significant differences. These results suggest that the digital divide could be reduced by moving beyond the visuocentric approach of commonly adopted visual content representations.