We consider the problem of evaluating the performance of human contributors on tasks that involve answering a series of questions, each of which has a single correct answer. The answers may not be known a priori. We assert that the value of a contributor's judgments is the amount by which having those judgments reduces the entropy of our uncertainty about the answer. This quantity is the pointwise mutual information between the judgments and the answer. Its expected value is the mutual information between the contributor's judgments and the answer, which can be computed using only the answer prior and the conditional probabilities of the contributor's judgments given a correct answer, without knowing the answers themselves. We also propose using multivariable information measures, such as conditional mutual information, to measure the interactions among contributors' judgments. These metrics have a variety of applications. They can serve as a basis for contributor performance evaluation and incentives. They can measure the efficiency of the judgment collection process. If the collection process allows assignment of contributors to questions, they can also be used to optimize this scheduling.
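As a rough sketch of the expected-value computation described above, the snippet below computes a contributor's mutual information with the answer from only the answer prior and the judgment-given-answer conditionals. The function and variable names (`contributor_mutual_information`, `prior`, `conditional`) are illustrative, not from the paper.

```python
import math

def pointwise_mutual_information(p_j_given_a, p_j):
    """PMI of observing judgment j when the answer is a:
    log2 p(j | a) / p(j), the entropy reduction in bits."""
    return math.log2(p_j_given_a / p_j)

def contributor_mutual_information(prior, conditional):
    """Mutual information I(J; A) between a contributor's judgments
    and the answer: the expected pointwise mutual information.

    prior:       dict answer -> p(a)
    conditional: dict answer -> dict judgment -> p(j | a)
    """
    # Marginal judgment distribution: p(j) = sum_a p(a) p(j | a)
    p_j = {}
    for a, p_a in prior.items():
        for j, p_ja in conditional[a].items():
            p_j[j] = p_j.get(j, 0.0) + p_a * p_ja
    # I(J; A) = sum_{a,j} p(a) p(j|a) log2( p(j|a) / p(j) )
    mi = 0.0
    for a, p_a in prior.items():
        for j, p_ja in conditional[a].items():
            if p_ja > 0:
                mi += p_a * p_ja * pointwise_mutual_information(p_ja, p_j[j])
    return mi

# A perfectly reliable contributor on a uniform binary question
# contributes one full bit; a contributor who guesses uniformly
# at random contributes zero.
uniform_prior = {"A": 0.5, "B": 0.5}
perfect = {"A": {"A": 1.0}, "B": {"B": 1.0}}
random_guess = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.5, "B": 0.5}}
print(contributor_mutual_information(uniform_prior, perfect))       # 1.0
print(contributor_mutual_information(uniform_prior, random_guess))  # 0.0
```

Note that, as the abstract emphasizes, this evaluation never needs the true answers: only the prior and the contributor's estimated confusion behavior enter the computation.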