Jointly exploiting visual and non-visual information for event-related social media retrieval

  • Authors:
  • Minh-Son Dao;Giulia Boato;Francesco G.B. De Natale;Truc-Vien Nguyen

  • Affiliations:
  • University of Trento, Trento, Italy (all authors)

  • Venue:
  • Proceedings of the 3rd ACM International Conference on Multimedia Retrieval (ICMR '13)
  • Year:
  • 2013

Abstract

In this contribution, we propose a watershed-based method that exploits external data sources together with visual information to detect social events in web multimedia. The idea rests on two main observations: (1) a person cannot be involved in more than one event at the same time, and (2) people tend to use similar annotations for all images associated with the same event. Based on these observations, the metadata is turned into an image in which each row contains all records belonging to one user, sorted by time. Social event detection thus becomes a watershed-based image segmentation problem: markers are generated from (keyword, location, visual) features with the support of external data sources, and the flooding process is carried out using (tag-set, time, visual) features. We test our algorithm on the MediaEval 2012 dataset, both using only external data and additionally introducing visual information.
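The pipeline described in the abstract can be illustrated with a toy sketch. This is not the authors' actual algorithm: the record layout, the Jaccard tag similarity, the time/similarity thresholds, and the union-find "flooding" below are all simplifying assumptions made for illustration of the per-user-row, merge-by-time-and-tags idea.

```python
from collections import defaultdict

def jaccard(a, b):
    """Jaccard similarity between two tag sets (assumed similarity measure)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def detect_events(records, time_window=3600, tag_threshold=0.3):
    """Toy event grouping loosely following the watershed analogy:
    each user's records form one 'row' sorted by time (observation 1:
    a user attends one event at a time), and flooding merges records
    that are close in time and similarly tagged (observation 2:
    same-event photos share annotations).

    records: list of dicts with keys 'user', 'time' (seconds), 'tags' (set).
    Returns a list of events, each a list of record indices.
    """
    # Build per-user rows sorted by time (the rows of the metadata 'image').
    rows = defaultdict(list)
    for i, r in enumerate(records):
        rows[r['user']].append(i)
    for user in rows:
        rows[user].sort(key=lambda i: records[i]['time'])

    # Union-find stands in for the flooding step.
    parent = list(range(len(records)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    # Flood along each row: consecutive records of one user merge
    # when close in time or similar in tags.
    for idxs in rows.values():
        for a, b in zip(idxs, idxs[1:]):
            dt = abs(records[b]['time'] - records[a]['time'])
            sim = jaccard(records[a]['tags'], records[b]['tags'])
            if dt <= time_window or sim >= tag_threshold:
                union(a, b)

    # Flood across rows: records of different users merge only when
    # both close in time and similar in tags.
    n = len(records)
    for a in range(n):
        for b in range(a + 1, n):
            if records[a]['user'] == records[b]['user']:
                continue
            dt = abs(records[a]['time'] - records[b]['time'])
            sim = jaccard(records[a]['tags'], records[b]['tags'])
            if dt <= time_window and sim >= tag_threshold:
                union(a, b)

    events = defaultdict(list)
    for i in range(n):
        events[find(i)].append(i)
    return list(events.values())
```

For instance, two users photographing the same concert within an hour, with overlapping tags, end up in one event, while a distant-in-time beach photo forms its own event.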