Poorly maintained sidewalks, missing curb ramps, and other obstacles pose considerable accessibility challenges; however, there are currently few, if any, mechanisms to determine accessible areas of a city a priori. In this paper, we investigate the feasibility of using untrained crowd workers from Amazon Mechanical Turk (turkers) to find, label, and assess sidewalk accessibility problems in Google Street View imagery. We report on two studies: Study 1 examines the feasibility of this labeling task with six dedicated labelers including three wheelchair users; Study 2 investigates the comparative performance of turkers. In all, we collected 13,379 labels and 19,189 verification labels from a total of 402 turkers. We show that turkers are capable of determining the presence of an accessibility problem with 81% accuracy. With simple quality control methods, this number increases to 93%. Our work demonstrates a promising new, highly scalable method for acquiring knowledge about sidewalk accessibility.
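The abstract does not specify the quality control methods used, but a common approach for crowdsourced labeling tasks of this kind is a majority vote over independent verification labels. The sketch below is a minimal illustration under that assumption (the function and data are hypothetical, not the authors' implementation):

```python
from collections import Counter

def majority_vote(verifications):
    """Accept a candidate label only if most verifiers agree it is correct.

    `verifications` is a list of booleans, one per worker who reviewed
    the label (True = "label is correct"). Ties reject the label.
    """
    counts = Counter(verifications)
    return counts[True] > counts[False]

# Hypothetical example: one sidewalk-problem label reviewed by five workers.
votes = [True, True, False, True, False]
print(majority_vote(votes))  # True: 3 of 5 verifiers agreed
```

Filtering labels this way trades recall for precision: labels that fail verification are discarded rather than corrected, which is consistent with the accuracy gain the paper reports from simple quality control.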