This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects present in it, and have no other knowledge of the objects' appearance or location. The key to our approach is a robust, unsupervised bottom-up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction, and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly supervised learning.
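The Multiple Instance Learning step described above treats each activity sequence as a "bag" labeled only with the names of objects present, and must pick out which candidate segments actually correspond to each object. A minimal sketch of this idea, using a toy diverse-density-style selection criterion over hand-crafted feature vectors (this is an illustrative simplification, not the authors' actual method; the function names and scoring rule are hypothetical):

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def discover_object_instances(bags, labels, obj):
    """Toy MIL instance selection: for one object name, pick from each
    positive bag (a sequence whose label set contains obj) the segment
    feature that is close to some segment in every other positive bag
    and far from all segments in negative bags."""
    pos = [b for b, l in zip(bags, labels) if obj in l]
    neg = [b for b, l in zip(bags, labels) if obj not in l]
    selected = []
    for i, bag in enumerate(pos):
        best, best_score = None, float("-inf")
        for inst in bag:
            # attraction: distance to the nearest segment in each
            # other positive bag (should be small for true instances)
            attract = sum(min(dist2(inst, o) for o in other)
                          for j, other in enumerate(pos) if j != i)
            # repulsion: distance to the nearest segment across
            # negative bags (should be large for true instances)
            repel = min((min(dist2(inst, o) for o in nb) for nb in neg),
                        default=0.0)
            score = repel - attract
            if score > best_score:
                best, best_score = inst, score
        selected.append(best)
    return selected

def centroid(insts):
    """Mean feature vector of the selected instances, usable as a
    simple object model for nearest-centroid detection."""
    n = len(insts)
    return [sum(v[k] for v in insts) / n for k in range(len(insts[0]))]
```

On toy 2-D features where "cup" segments cluster near the origin and background clutter clusters near (5, 5), the selection picks the near-origin segment from each positive sequence; real systems would of course use appearance descriptors of the segmented regions rather than raw coordinates.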