We present a novel approach to the creation of user-generated documentary video using a distributed network of sensor-enabled video cameras and wearable on-body sensor devices. The wearable sensors identify the subjects in view of the camera system and label the captured video in real time with human-centric social and physical behavioral information. With these labels, massive amounts of continually recorded video can be browsed, searched, and automatically stitched into cohesive multimedia content. The system enables naturally occurring human behavior to drive and control a multimedia content creation pipeline, producing video output that is understandable, informative, and enjoyable to its human audience. The collected sensor data is further used to enhance the created content, for example by generating or editing the audio score, determining the appropriate pacing of edits, and controlling the length and type of audio and video transitions directly from the content of the captured media. We present the design of the platform, the design of the multimedia content creation application, and evaluation results from several live runs of the complete system.
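The core mechanism described above — attaching time-synchronized sensor-derived behavioral labels to recorded video segments, then selecting and ordering labeled segments into an edit list — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual system: all names (`Segment`, `SensorEvent`, `label_segments`, `stitch`) and the label format are hypothetical, and it assumes sensor and camera clocks are already synchronized.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorEvent:
    """A behavioral observation from a wearable sensor (hypothetical schema)."""
    subject_id: str   # wearer identified by the on-body device
    label: str        # e.g. "speaking", "walking", "face-to-face"
    start: float      # seconds, on a clock shared with the cameras
    end: float

@dataclass
class Segment:
    """A slice of continually recorded video from one camera."""
    camera_id: str
    start: float
    end: float
    labels: List[str] = field(default_factory=list)

def overlaps(a_start: float, a_end: float, b_start: float, b_end: float) -> bool:
    # Two half-open intervals overlap iff each starts before the other ends.
    return a_start < b_end and b_start < a_end

def label_segments(segments: List[Segment], events: List[SensorEvent]) -> List[Segment]:
    """Attach a behavioral label to every segment that overlaps a sensor event."""
    for seg in segments:
        for ev in events:
            if overlaps(seg.start, seg.end, ev.start, ev.end):
                seg.labels.append(f"{ev.subject_id}:{ev.label}")
    return segments

def stitch(segments: List[Segment], wanted_label: str) -> List[Segment]:
    """Select segments carrying the wanted label and order them by start time,
    producing a simple chronological edit list."""
    hits = [s for s in segments if any(wanted_label in lbl for lbl in s.labels)]
    return sorted(hits, key=lambda s: s.start)
```

For example, labeling two camera segments against a single "speaking" event and calling `stitch(segments, "speaking")` returns every overlapping segment in chronological order; a real system would additionally score segments, merge adjacent ones, and hand the edit list to a video renderer.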