Toolscape: enhancing the learning experience of how-to videos
CHI '13 Extended Abstracts on Human Factors in Computing Systems
How-to videos can be valuable for learning, but searching for and following along with them can be difficult. Labeled events, such as the tools used in a how-to video, could improve video indexing, searching, and browsing. We introduce a crowdsourcing annotation tool for Photoshop how-to videos with a three-stage method: (1) gathering timestamps of important events, (2) labeling each event, and (3) capturing how each event affects the task of the tutorial. Our ultimate goal is to generalize this method to other domains of how-to videos. We evaluate the annotation tool with Amazon Mechanical Turk workers to investigate the accuracy, cost, and feasibility of the three-stage method for annotating large numbers of video tutorials. Stages 1 and 3 leave room for improvement, but stage 2 produces accurate labels over 90% of the time using majority voting. We also observed that changes to the instructions and interfaces of each task can significantly improve the accuracy of the results.
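The abstract mentions aggregating stage-2 worker labels by majority voting. As a minimal sketch of that aggregation step (the function name, threshold, and example labels are illustrative assumptions, not the authors' implementation):

```python
from collections import Counter

def majority_label(worker_labels, min_agreement=0.5):
    """Return the label chosen by a strict majority of workers, or None.

    worker_labels: label strings from independent crowd workers for one event.
    min_agreement: fraction of votes the top label must exceed to be accepted.
    """
    if not worker_labels:
        return None
    label, votes = Counter(worker_labels).most_common(1)[0]
    return label if votes / len(worker_labels) > min_agreement else None

# Hypothetical example: five workers label the tool used at one timestamp
# in a Photoshop tutorial; three of five agree, so the label is accepted.
votes = ["Clone Stamp", "Clone Stamp", "Healing Brush", "Clone Stamp", "Lasso"]
print(majority_label(votes))  # Clone Stamp
```

Requiring a strict majority (rather than a plurality) means ambiguous events with split votes return `None` and can be re-posted for more judgments instead of being labeled incorrectly.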