The paper deals with the design of a sound recognition system focused on an ultra-low-power hardware implementation in a button-like miniature form factor. We present the results of the first design phase, focused on the selection and experimental evaluation of sound classes and algorithms suitable for low-power realization. We also present the VHDL model of the hardware, showing that our method can be implemented with minimal resources. Our approach is based on spectrum analysis to distinguish between a subset of sound sources with a clear audio signature. It also uses intensity analysis from microphones placed at different locations to correlate the sounds with user activity.
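The spectrum-analysis idea described above can be illustrated with a minimal sketch: extract coarse band energies from a frame's FFT magnitude spectrum and assign the frame to the nearest class centroid. This is an assumed, simplified stand-in for the paper's low-power pipeline (the band count, normalization, and nearest-centroid classifier are illustrative choices, not the authors' actual design):

```python
import numpy as np

def band_energies(frame, n_bands=8):
    """Split the FFT power spectrum of one audio frame into
    n_bands equal-width bands and return the energy per band."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

def classify(frame, centroids, n_bands=8):
    """Nearest-centroid classification on energy-normalized band features."""
    feat = band_energies(frame, n_bands)
    feat = feat / (feat.sum() + 1e-12)  # normalize away overall loudness
    dists = {label: np.linalg.norm(feat - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

# Toy example: separate a low-frequency tone from a high-frequency one.
fs = 8000                       # assumed sampling rate (Hz)
t = np.arange(1024) / fs
low = np.sin(2 * np.pi * 200 * t)
high = np.sin(2 * np.pi * 3000 * t)

centroids = {
    "low": band_energies(low) / band_energies(low).sum(),
    "high": band_energies(high) / band_energies(high).sum(),
}
print(classify(np.sin(2 * np.pi * 250 * t), centroids))  # prints "low"
```

A coarse band-energy representation like this maps naturally onto fixed-point hardware, which is consistent with the paper's goal of a minimal-resource VHDL realization, though the actual feature set used there may differ.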