TapTell: understanding visual intents on-the-go

  • Authors:
  • Ning Zhang; Tao Mei; Xian-Sheng Hua; Ling Guan; Shipeng Li

  • Affiliations:
  • Ryerson University, Toronto, ON, Canada; Microsoft Research Asia, Beijing, China; Microsoft, Redmond, WA, USA; Ryerson University, Toronto, ON, Canada; Microsoft Research Asia, Beijing, China

  • Venue:
  • MM '11: Proceedings of the 19th ACM International Conference on Multimedia
  • Year:
  • 2011

Abstract

This demonstration presents TapTell, a mobile visual recognition and recommendation application on Windows Phone 7. Unlike other mobile visual search systems, which focus solely on the search process, TapTell first discovers and understands users' visual intents through a natural, circle-based interaction called the "O" gesture. The user then performs a Tap action to select the "O"-gestured region. A context-aware visual search mechanism recognizes the intent and associates it with indexed metadata. Finally, the "Tell" action recommends relevant entities using contextual information. The TapTell system has been evaluated in several scenarios on a million-scale image collection.
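The abstract outlines a three-stage Tap-"O"-Tell pipeline: region selection via the "O" gesture, context-aware visual search against indexed metadata, and contextual recommendation. The sketch below illustrates how such a pipeline might be wired together; it is a minimal illustration assuming hypothetical components (extract_o_region, search_index, recommender), not the authors' implementation.

```python
# Hypothetical sketch of the Tap-"O"-Tell pipeline described in the abstract.
# All names and signatures are illustrative assumptions, not the paper's code.

from dataclasses import dataclass

@dataclass
class Context:
    """Contextual signals available on the device (assumed fields)."""
    latitude: float
    longitude: float
    timestamp: float

def extract_o_region(image, gesture_points):
    """Crop the image to the bounding box of the user's "O" gesture."""
    xs = [x for x, _ in gesture_points]
    ys = [y for _, y in gesture_points]
    # PIL-style crop; swap in the imaging API of your platform.
    return image.crop((min(xs), min(ys), max(xs), max(ys)))

def tap_tell(image, gesture_points, context, search_index, recommender):
    # Tap: select the region the user circled with the "O" gesture.
    region = extract_o_region(image, gesture_points)
    # Context-aware visual search: match the region against indexed
    # images and retrieve their associated metadata.
    matches = search_index.query(region, context)
    # Tell: recommend relevant entities (e.g. nearby venues) from the
    # recognized intent plus contextual information.
    return recommender.recommend(matches, context)
```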