Telling humans and computers apart automatically. Communications of the ACM.
Verb semantics and lexical selection. ACL '94: Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics.
Asirra: a CAPTCHA that exploits interest-aligned manual image categorization. Proceedings of the 14th ACM Conference on Computer and Communications Security.
TagCaptcha: annotating images with CAPTCHAs. Proceedings of the International Conference on Multimedia.
SeaFish: a game for collaborative and visual image annotation and interlinking. ESWC'11: Proceedings of the 8th Extended Semantic Web Conference on The Semantic Web: Research and Applications, Part II.
Reliability and effectiveness of clickthrough data for automatic image annotation. Multimedia Tools and Applications.
Image retrieval has long been hindered by a fundamental limitation of automatic methods: they cannot reliably extract semantic information from low-level features. As a result, users must formulate awkward and inefficient queries in terms these systems can understand. Humans, on the other hand, can quickly and accurately summarise visual data. This dichotomy, known as the semantic gap, is a fundamental problem in image retrieval. We aim to narrow the semantic gap in a typical retrieval scenario by motivating users to provide semantic image annotations. We propose a system for collecting image annotations based on the need for human verification on the web. Similar in principle to work by von Ahn et al. [2, 3], the idea is to exploit users' need to pass such tests in order to incrementally annotate images.
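The mechanism described above — using human-verification tests to harvest annotations as a side effect — can be sketched as follows. This is a minimal illustration, not the authors' implementation: each challenge mixes images with already-verified labels and one image with an unknown label; labelling the verified images correctly proves the user is human, and their answer for the unknown image is recorded as a candidate annotation, promoted to verified once enough independent users agree. All class and method names here are hypothetical.

```python
import random


class AnnotationCaptcha:
    """Sketch of CAPTCHA-driven incremental image annotation (hypothetical API).

    A challenge contains images whose labels are already verified plus one
    image whose label is unknown. Correctly labelling the verified images
    passes the test; the answer given for the unknown image is counted as
    an annotation vote.
    """

    def __init__(self, agreement_threshold=3):
        self.verified = {}    # image_id -> verified label
        self.candidates = {}  # image_id -> {label: vote count}
        self.threshold = agreement_threshold

    def add_verified(self, image_id, label):
        self.verified[image_id] = label

    def make_challenge(self, unknown_image_id, n_known=2):
        """Mix n_known verified images with one unknown image."""
        known = random.sample(sorted(self.verified), n_known)
        images = known + [unknown_image_id]
        random.shuffle(images)
        return images

    def grade(self, answers):
        """answers: dict mapping image_id -> user-supplied label.

        Returns True if every verified image was labelled correctly.
        Only then are the user's labels for unverified images trusted
        enough to count as annotation votes.
        """
        for image_id, label in answers.items():
            if image_id in self.verified and self.verified[image_id] != label:
                return False  # failed the human-verification part
        for image_id, label in answers.items():
            if image_id not in self.verified:
                votes = self.candidates.setdefault(image_id, {})
                votes[label] = votes.get(label, 0) + 1
                if votes[label] >= self.threshold:
                    # Enough independent users agree: promote to verified.
                    self.verified[image_id] = label
        return True
```

The agreement threshold guards against a single user (or bot that happened to pass) poisoning the annotation store, which mirrors the redundancy used in verification-based labelling systems.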