We present PULSE, a mobile application designed to give users a 'vibe': an intrinsic understanding of the people, places and activities around their current location, derived from messages posted to the Twitter social network. We compared two auditory presentations of this vibe. One conveyed message metadata implicitly, by modifying attributes of the spoken messages; the other presented the same metadata through additional auditory cues. We evaluated both techniques in a lab study and a real-world study. The additional auditory cues allowed smaller changes in metadata to be detected more accurately, but were least preferred when PULSE was used in context. Results also showed that PULSE enhanced and shaped users' understanding of their surroundings, with audio presentation allowing a closer coupling of digital data to the physical world.