Until recently, QoE (Quality of Experience) experiments had to be conducted in academic laboratories; however, with the advent of ubiquitous Internet access, it is now possible to ask an Internet crowd to conduct experiments on their personal computers. Since such a crowd can be quite large, crowdsourcing enables researchers to conduct experiments with a more diverse set of participants at a lower economic cost than would be possible under laboratory conditions. However, because participants carry out experiments without supervision, they may give erroneous feedback perfunctorily, carelessly, or dishonestly, even if they receive a reward for each experiment. In this paper, we propose a crowdsourceable framework to quantify the QoE of multimedia content. The advantages of our framework over traditional MOS (Mean Opinion Score) ratings are: 1) it enables crowdsourcing because it supports systematic verification of participants' inputs; 2) its rating procedure is simpler than that of MOS, so it places less of a burden on participants; and 3) it derives interval-scale scores that enable subsequent quantitative analysis and QoE provisioning. We conducted four case studies, which demonstrate that, with our framework, researchers can outsource QoE evaluation experiments to an Internet crowd without risking the quality of the results, while at the same time obtaining greater participant diversity at a lower monetary cost.
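The abstract does not spell out the scoring model, but one standard way to obtain interval-scale scores from a lightweight rating procedure is to collect paired comparisons and fit a Bradley-Terry model, with intransitive (circular) preference triads serving as a simple verification signal for careless input. The sketch below illustrates that approach under those assumptions; the function names (fit_bradley_terry, circular_triads) and the toy wins matrix are hypothetical and not taken from the paper.

```python
import numpy as np
from itertools import combinations

def fit_bradley_terry(wins, n_iters=200, tol=1e-9):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of times stimulus i was preferred over j.
    Uses Hunter's MM fixed-point update:
        p_i <- W_i / sum_{j != i} n_ij / (p_i + p_j)
    Returns zero-centred log-strengths (interval-scale scores).
    """
    n = wins.shape[0]
    p = np.ones(n)
    total = wins + wins.T          # n_ij: comparisons between i and j
    w = wins.sum(axis=1)           # W_i: total wins of stimulus i
    for _ in range(n_iters):
        denom = total / (p[:, None] + p[None, :])
        np.fill_diagonal(denom, 0.0)
        p_new = w / denom.sum(axis=1)
        p_new /= p_new.sum()       # fix the arbitrary overall scale
        if np.abs(p_new - p).max() < tol:
            p = p_new
            break
        p = p_new
    scores = np.log(p)
    return scores - scores.mean()  # zero-centred interval-scale scores

def circular_triads(beats):
    """Count intransitive triples (i beats j, j beats k, k beats i).

    beats[i][j] is True if i is preferred over j. A high count for one
    participant is a plausible signal of careless or dishonest input.
    """
    count = 0
    for i, j, k in combinations(range(len(beats)), 3):
        if (beats[i][j] and beats[j][k] and beats[k][i]) or \
           (beats[j][i] and beats[k][j] and beats[i][k]):
            count += 1
    return count

# Toy example: 4 encoding settings, aggregated crowd judgments.
wins = np.array([
    [0, 9, 8, 10],
    [3, 0, 7,  9],
    [2, 4, 0,  8],
    [1, 2, 3,  0],
])
beats = wins > wins.T              # majority preference per pair
print("interval-scale scores:", fit_bradley_terry(wins))
print("circular triads:", circular_triads(beats))
```

The MM update used here is a common fixed-point scheme for Bradley-Terry fitting; the zero-centred log-strengths it returns are defined only up to an additive constant, which is what makes them interval-scale rather than ratio-scale scores.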