Speech Detection of Stakeholders' Non-Functional Requirements

  • Authors:
  • Adam Steele, Jason Arnold, Jane Cleland-Huang

  • Affiliations:
  • DePaul University (all authors)

  • Venue:
  • MERE '06: Proceedings of the First International Workshop on Multimedia Requirements Engineering
  • Year:
  • 2006


Abstract

This paper describes an automatic speech recognition technique for capturing the non-functional requirements spoken by stakeholders at open meetings and interviews during the requirements elicitation process. As statements related to system qualities such as security, performance, and portability are often scattered throughout statements of functional need, the ability to "listen in" on a conversation and correctly capture these statements into a single view is very helpful. The approach is intended to enhance, not replace, existing elicitation methods in which stakeholders are asked more directly to describe their needs. Training a speech detection tool to recognize individual users is time-consuming, while speech detection for un-enrolled users is notoriously difficult. Our approach uses a context-free grammar to boost recognition accuracy, to segment the stakeholders' utterances, and finally to classify the recognized statements by quality type. This paper describes preliminary results from experiments with different subjects and then discusses methods for optimizing the recognition and capture of non-functional requirements and contextual domain terms.
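The abstract does not detail how recognized statements are mapped to quality types; the sketch below is a minimal illustration of one plausible keyword-indicator approach, assuming hypothetical category names and indicator terms rather than the paper's actual classification scheme.

```python
# Illustrative sketch: classify recognized stakeholder statements by quality type.
# The categories and indicator terms below are assumptions for demonstration only,
# not the classifier described in the paper.

NFR_INDICATORS = {
    "security": {"secure", "authentication", "encrypt", "access", "password"},
    "performance": {"fast", "response", "latency", "throughput", "seconds"},
    "portability": {"platform", "portable", "browser", "operating system", "device"},
}

def classify_statement(statement: str) -> list:
    """Return the quality types whose indicator terms appear in the statement."""
    text = statement.lower()
    return [quality for quality, terms in NFR_INDICATORS.items()
            if any(term in text for term in terms)]

if __name__ == "__main__":
    # Example utterances as they might be transcribed from an elicitation meeting.
    transcript = [
        "The system must respond to a search within two seconds.",
        "Only authenticated users should be able to access patient records.",
        "We also need the reporting screen to run in any web browser.",
    ]
    for utterance in transcript:
        print(classify_statement(utterance), "<-", utterance)
```

In practice the paper's approach couples such classification with a context-free grammar that constrains recognition and segments utterances, which a simple keyword match like this does not capture.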