MyConverse: recognising and visualising personal conversations using smartphones

  • Authors:
  • Mirco Rossi; Oliver Amft; Sebastian Feese; Christian Käslin; Gerhard Tröster

  • Affiliations:
  • Wearable Computing Lab., ETH Zurich, Zurich, Switzerland; ACTLab, Signal Processing Systems, TU Eindhoven, Eindhoven, Netherlands; Wearable Computing Lab., ETH Zurich, Zurich, Switzerland; Wearable Computing Lab., ETH Zurich, Zurich, Switzerland; Wearable Computing Lab., ETH Zurich, Zurich, Switzerland

  • Venue:
  • Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication
  • Year:
  • 2013


Abstract

MyConverse is a personal conversation recogniser and visualiser for smartphones. MyConverse uses the smartphone's microphone to continuously recognise the user's conversations during daily life. It identifies pre-trained speakers, while unknown speakers are detected and subsequently trained for future identification. Based on the recognition results, MyConverse visualises the user's social interactions on the smartphone. An extensive system parameter evaluation was performed on a freely available dataset. Additionally, MyConverse was tested in different real-life environments and in a full-day evaluation study. The speaker recognition system reached an identification accuracy of 75% for 24 speakers in meeting-room conditions. In other daily-life situations, MyConverse reached accuracies from 60% to 84%.
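The abstract does not specify the speaker models used, but the open-set behaviour it describes (identify known speakers, enrol unknown ones for future identification) can be sketched minimally. The snippet below is a hypothetical illustration, not the authors' implementation: each speaker is modelled by a running mean of feature vectors, and a distance threshold decides between "known speaker" and "enrol new speaker".

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class OpenSetRecognizer:
    """Minimal open-set speaker identification with on-the-fly enrolment.

    A running mean of feature vectors stands in for the (unspecified)
    speaker models of the paper; `threshold` is an assumed tuning knob.
    """

    def __init__(self, threshold):
        self.threshold = threshold  # max distance to accept a known speaker
        self.models = {}            # speaker id -> (mean vector, sample count)
        self._next_id = 0

    def recognise(self, features):
        # Find the closest enrolled speaker model.
        best_id, best_dist = None, float("inf")
        for sid, (mean, _) in self.models.items():
            d = distance(features, mean)
            if d < best_dist:
                best_id, best_dist = sid, d

        if best_id is None or best_dist > self.threshold:
            # Unknown speaker: enrol a new model for future identification.
            best_id = self._next_id
            self._next_id += 1
            self.models[best_id] = (list(features), 1)
        else:
            # Known speaker: fold the new observation into the running mean.
            mean, n = self.models[best_id]
            mean = [(m * n + f) / (n + 1) for m, f in zip(mean, features)]
            self.models[best_id] = (mean, n + 1)
        return best_id

# Usage sketch with toy 2-D "features":
rec = OpenSetRecognizer(threshold=1.0)
a = rec.recognise([0.0, 0.0])   # first voice seen -> enrolled as speaker 0
b = rec.recognise([0.1, 0.1])   # close to speaker 0 -> identified as 0
c = rec.recognise([5.0, 5.0])   # far from all models -> enrolled as speaker 1
```

In a real system the features would be audio descriptors (e.g. MFCCs) and the models would be statistical (e.g. GMMs); the thresholded nearest-model decision is only the structural idea.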