Predicting domain adaptivity: redo or recycle?

  • Authors:
  • Ting Yao; Chong-Wah Ngo; Shiai Zhu

  • Affiliations:
  • City University of Hong Kong, Hong Kong (all authors)

  • Venue:
  • Proceedings of the 20th ACM international conference on Multimedia
  • Year:
  • 2012

Abstract

Over the years, academic researchers have contributed various visual concept classifiers. Nevertheless, given a new dataset, most researchers still prefer to develop a large number of classifiers from scratch, despite expensive labeling efforts and limited computing resources. A valid question is why the multimedia community does not "embrace the green" and recycle off-the-shelf classifiers for new datasets. The difficulty originates from the domain gap: many different factors govern the development of a classifier and eventually drive its performance to emphasize certain aspects of a dataset. Reapplying a classifier to an unseen dataset may end up as GIGO (garbage in, garbage out), and the performance could be much worse than developing a new classifier with very few training examples. In this paper, we explore different parameters, including the shift of data distribution and visual and context diversities, that may hinder or otherwise encourage the recycling of old classifiers for a new dataset. In particular, we give empirical insights into when to recycle available resources and when to redo from scratch by completely forgetting the past and training a brand-new classifier. Based on this analysis, we further propose an approach for predicting the negative transfer of a concept classifier to a different domain given the observed parameters. Experimental results show that a prediction accuracy of over 75% can be achieved when transferring concept classifiers learnt from LSCOM (news video domain), ImageNet (Web image domain), and Flickr-SF (weakly tagged Web image domain), respectively, to the TRECVID 2011 dataset (Web video domain).
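The abstract only names the observed parameters and does not specify how the negative-transfer predictor is built. The following minimal Python sketch illustrates one plausible reading of the two-level pipeline, not the authors' actual method: maximum mean discrepancy (MMD) stands in for the distribution-shift measure, mean feature variance is a crude proxy for visual diversity, and a logistic-regression meta-classifier predicts "redo" (negative transfer) versus "recycle". All function names, feature choices, and the toy data are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mmd_rbf(X_src, X_tgt, gamma=None):
    """Biased estimate of squared maximum mean discrepancy (RBF kernel),
    a standard way to quantify distribution shift between two domains."""
    if gamma is None:
        gamma = 1.0 / X_src.shape[1]  # simple bandwidth heuristic
    def k(A, B):
        sq = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
        return np.exp(-gamma * sq)
    return (k(X_src, X_src).mean() + k(X_tgt, X_tgt).mean()
            - 2.0 * k(X_src, X_tgt).mean())

def domain_gap_features(X_src, X_tgt):
    """Assumed 'observed parameters' for one (source classifier, target
    dataset) pair: distribution shift plus visual-diversity proxies."""
    return np.array([
        mmd_rbf(X_src, X_tgt),      # shift of data distribution
        X_src.var(axis=0).mean(),   # visual diversity of source domain
        X_tgt.var(axis=0).mean(),   # visual diversity of target domain
    ])

# Meta-level training set: each example is one past transfer attempt,
# labelled 1 if negative transfer was observed (the recycled classifier
# underperformed one retrained on the target), 0 otherwise. Toy data here;
# the paper's attempts pair LSCOM/ImageNet/Flickr-SF sources with TRECVID.
rng = np.random.default_rng(0)
gap_feats, transfer_failed = [], []
for i in range(20):
    shift = i / 10.0                          # synthetic domain gap
    X_src = rng.normal(0.0, 1.0, (100, 16))
    X_tgt = rng.normal(shift, 1.0, (100, 16))
    gap_feats.append(domain_gap_features(X_src, X_tgt))
    transfer_failed.append(int(shift > 1.0))  # big gap -> "redo"

meta_model = LogisticRegression().fit(np.array(gap_feats), transfer_failed)

# For a new pairing, predict: 1 -> redo from scratch, 0 -> recycle.
X_new_src = rng.normal(0.0, 1.0, (100, 16))
X_new_tgt = rng.normal(1.5, 1.0, (100, 16))
print(meta_model.predict(domain_gap_features(X_new_src, X_new_tgt)[None, :]))
```

Whether these particular measurements match the paper's parameters is an open assumption; the sketch only captures the two-level structure the abstract describes, namely measuring the domain gap first and then learning to predict when a transfer will go negative.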