MyConverse is a personal conversation recogniser and visualiser for smartphones. It uses the smartphone's microphone to continuously recognise the user's conversations during daily life. While it identifies pre-trained speakers, unknown speakers are detected and subsequently enrolled for future identification. Based on this recognition, MyConverse visualises the user's social interactions on the smartphone. An extensive evaluation of the system parameters was carried out on a freely available dataset. In addition, MyConverse was tested in several real-life environments and in a full-day evaluation study. The speaker recognition system reached an identification accuracy of 75% for 24 speakers under meeting-room conditions; in other daily-life situations, MyConverse reached accuracies between 60% and 84%.
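The abstract describes an open-set identification loop: match each audio segment against the enrolled speaker models, and when no model matches well enough, treat the segment as a new speaker and enrol it. A minimal sketch of that control flow is below; it is not MyConverse's actual implementation — the cosine-similarity matching, the feature vectors, and the threshold value are all illustrative assumptions.

```python
# Hedged sketch of open-set speaker identification with enrolment of
# unknown speakers, as described in the abstract. The similarity
# measure (cosine) and threshold (0.8) are assumptions, not values
# taken from the paper.
import math


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def identify(segment, models, threshold=0.8):
    """Return the best-matching enrolled speaker for `segment`, or
    enrol the segment as a new speaker when no match is good enough.

    `models` maps a speaker label to a representative feature vector;
    in a real system this would be a trained speaker model rather
    than a single vector.
    """
    best, best_sim = None, -1.0
    for name, model in models.items():
        sim = cosine(segment, model)
        if sim > best_sim:
            best, best_sim = name, sim
    if best_sim >= threshold:
        return best
    # Unknown speaker: enrol it for future identification.
    new_name = f"speaker_{len(models) + 1}"
    models[new_name] = segment
    return new_name
```

For example, with one enrolled speaker `{"alice": [1.0, 0.0, 0.0]}`, a segment close to that vector is identified as `alice`, while a dissimilar segment is enrolled as a new speaker and recognised on subsequent calls.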