User preference learning for multimedia personalization in pervasive computing environment

  • Authors:
  • Zhiwen Yu; Daqing Zhang; Xingshe Zhou; Changde Li

  • Affiliations:
  • School of Computer Science, Northwestern Polytechnical University, P.R. China; Context Aware Systems Department, Institute for Infocomm Research, Singapore; School of Computer Science, Northwestern Polytechnical University, P.R. China; School of Computer Science, Northwestern Polytechnical University, P.R. China

  • Venue:
  • KES'05: Proceedings of the 9th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part II
  • Year:
  • 2005

Abstract

The pervasive computing environment and users' demand for multimedia personalization create a need for personalization tools that help people access desired multimedia content anytime, anywhere, on any device. User preference learning plays an important role in multimedia personalization. In this paper, we propose a learning approach for acquiring and updating user preferences for multimedia personalization in a pervasive computing environment. The approach is based on a master-slave architecture, in which the master device is a device with strong capabilities, such as a PC or a TV with an STB (set-top box) or PDR (Personal Digital Recorder), and the slave devices are pervasive terminals with limited resources. Preference learning and updating are performed on the master device using the overall user feedback collected from all devices, as opposed to traditional learning methods that use only the partial feedback available on a single device. The slave devices are responsible for observing user behavior and uploading feedback information to the master device. The master device supports multiple learning methods: explicit input/modification and implicit learning. The implicit user preference learning algorithm, which applies relevance feedback and a Naïve Bayes classifier, is described in detail.
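
The abstract names the technique but not its details. The sketch below is a minimal, hypothetical Python illustration of how implicit relevance feedback and a Naïve Bayes classifier could fit the master-slave split described above: a slave device turns an observed watch ratio into a binary relevance label (the 0.8 threshold and the attribute set genre/actor/language are assumptions, not taken from the paper), and the master device incrementally updates per-attribute counts and scores candidate content by log-odds. It illustrates the general technique, not the authors' actual algorithm.

```python
from collections import defaultdict
import math

# Hypothetical content attributes; the paper's actual feature set is not
# given in the abstract.
ATTRIBUTES = ("genre", "actor", "language")

class NaiveBayesPreferenceLearner:
    """Master-side learner: Naive Bayes over binary relevance labels."""

    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing                    # Laplace smoothing
        self.class_counts = {True: 0, False: 0}       # liked / disliked
        self.value_counts = {True: defaultdict(int),
                             False: defaultdict(int)}
        self.values_seen = defaultdict(set)           # attr -> observed values

    def update(self, content, liked):
        """Incorporate one feedback record uploaded by a slave device."""
        self.class_counts[liked] += 1
        for attr in ATTRIBUTES:
            value = content.get(attr)
            if value is not None:
                self.value_counts[liked][(attr, value)] += 1
                self.values_seen[attr].add(value)

    def score(self, content):
        """Log-odds of 'liked' vs. 'disliked' for a candidate item."""
        total = sum(self.class_counts.values())
        log_odds = 0.0
        for liked, sign in ((True, 1.0), (False, -1.0)):
            prior = (self.class_counts[liked] + self.smoothing) / (
                total + 2 * self.smoothing)
            log_odds += sign * math.log(prior)
            for attr in ATTRIBUTES:
                value = content.get(attr)
                if value is None:
                    continue
                domain = max(len(self.values_seen[attr]), 1)
                count = self.value_counts[liked][(attr, value)]
                likelihood = (count + self.smoothing) / (
                    self.class_counts[liked] + self.smoothing * domain)
                log_odds += sign * math.log(likelihood)
        return log_odds

# Slave-side observation: derive a binary relevance label from viewing
# behavior. The 0.8 watch-ratio threshold is an assumption, not a value
# from the paper.
def feedback_record(content, watched_seconds, duration_seconds):
    return content, (watched_seconds / duration_seconds) >= 0.8

# Example: a slave device observes the user watching most of a news item
# and uploads the record; the master updates the preference model.
learner = NaiveBayesPreferenceLearner()
content, liked = feedback_record(
    {"genre": "news", "language": "en"}, 1500, 1800)
learner.update(content, liked)
print(learner.score({"genre": "news", "language": "en"}))  # > 0 => preferred
```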