Proceedings of the Workshop on Use of Context in Vision Processing
Traditional methods of object recognition rely on shape and are therefore difficult to apply in cluttered, wide-angle and low-detail views such as surveillance scenes. To address this, a method of indirect object recognition is proposed, in which human activity is used to infer both the location and the identity of objects; no shape analysis is necessary. The concept is dubbed "interaction signatures", since the premise is that a human will interact with an object in ways characteristic of that object's function: for example, a person sits in a chair and drinks from a cup. This human-centred approach means that recognition is possible in low-detail views and is largely invariant to the shape of objects within the same functional class. This paper implements a Bayesian network for classifying region patches with object labels, building upon our previous work in automatically segmenting and recognising a human's interactions with objects. Experiments show that interaction signatures can successfully find and label objects in low-detail views, and are equally effective at recognising test objects that differ markedly in appearance from the training objects.
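The core idea above, inferring an object's label from the interactions a person performs on it, can be sketched as a simple probabilistic classifier. The sketch below is not the paper's Bayesian network; it is a minimal naive-Bayes approximation of the concept, and all interaction verbs, object labels, and training examples are illustrative assumptions.

```python
import math
from collections import defaultdict


class InteractionSignatureClassifier:
    """Toy classifier mapping observed interaction verbs to object labels.

    A hypothetical stand-in for the paper's Bayesian network: each region
    patch is labelled with the object class whose typical interactions
    best explain the interactions observed there.
    """

    def __init__(self):
        self.label_counts = defaultdict(int)                    # P(label) counts
        self.feature_counts = defaultdict(lambda: defaultdict(int))  # P(act|label) counts
        self.vocab = set()                                      # all interaction verbs seen

    def train(self, examples):
        # examples: iterable of (interaction_list, object_label)
        for interactions, label in examples:
            self.label_counts[label] += 1
            for act in interactions:
                self.feature_counts[label][act] += 1
                self.vocab.add(act)

    def classify(self, interactions):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in self.label_counts.items():
            score = math.log(count / total)  # log prior over object labels
            # Laplace smoothing so unseen interactions get nonzero likelihood
            denom = sum(self.feature_counts[label].values()) + len(self.vocab)
            for act in interactions:
                score += math.log((self.feature_counts[label][act] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Illustrative training data: interactions observed at labelled region patches.
train = [
    (["sit", "lean_back"], "chair"),
    (["sit"], "chair"),
    (["drink", "lift"], "cup"),
    (["lift", "drink"], "cup"),
]
clf = InteractionSignatureClassifier()
clf.train(train)
print(clf.classify(["sit"]))            # -> chair
print(clf.classify(["drink", "lift"]))  # -> cup
```

Because classification depends only on the observed interactions, a patch is labelled "chair" whenever people sit there, regardless of the object's actual shape, which mirrors the shape-invariance claim in the abstract.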