There is widespread agreement in the medical research community that more effective mechanisms for dietary assessment and food journaling are needed to combat obesity and other nutrition-related diseases. However, it is currently not possible to automatically capture and objectively assess an individual's eating behavior. Existing dietary assessment and journaling approaches have several limitations: they place a significant burden on individuals, and they are often not detailed or accurate enough. In this paper, we describe an approach that leverages human computation to identify eating moments in first-person point-of-view images taken with wearable cameras. Recognizing eating moments is a key first step both toward automating dietary assessment and toward building systems that help individuals reflect on their diet. In a feasibility study with 5 participants over 3 days, in which 17,575 images were collected in total, our method recognized eating moments with 89.68% accuracy.
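The abstract does not specify how the crowd labels are combined, but a common human-computation pattern for image-labeling tasks of this kind is to collect redundant labels per image and aggregate them by majority vote, then score the aggregate against ground truth. The sketch below illustrates that pattern only; the function names, labels, and toy data are hypothetical and not taken from the paper.

```python
from collections import Counter


def majority_label(labels):
    """Return the most frequent label among a list of worker votes."""
    return Counter(labels).most_common(1)[0][0]


def aggregate(image_votes):
    """Collapse {image_id: [worker labels]} to {image_id: single label}."""
    return {img: majority_label(votes) for img, votes in image_votes.items()}


def accuracy(predicted, ground_truth):
    """Fraction of images whose aggregated label matches ground truth."""
    correct = sum(predicted[img] == ground_truth[img] for img in ground_truth)
    return correct / len(ground_truth)


# Toy example: three crowdworkers label each first-person image as
# "eating" or "not_eating".
votes = {
    "img_001": ["eating", "eating", "not_eating"],
    "img_002": ["not_eating", "not_eating", "not_eating"],
}
labels = aggregate(votes)          # {"img_001": "eating", "img_002": "not_eating"}
truth = {"img_001": "eating", "img_002": "not_eating"}
print(accuracy(labels, truth))     # 1.0 on this toy data
```

Majority voting is attractive here because individual crowdworkers are noisy, but with an odd number of votes per image the aggregate is well defined and errors from any single worker are outvoted.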