Auditory brainstem response classification: A hybrid model using time and frequency features

  • Authors:
  • Robert Davey; Paul McCullagh; Gaye Lightbody; Gerry McAllister

  • Affiliations:
  • Department of Language and Communication Science, City University, Northampton Square, London EC1V 0HB, UK; School of Computing and Mathematics, University of Ulster, Jordanstown, Newtownabbey, Co. Antrim BT37 0QB, UK

  • Venue:
  • Artificial Intelligence in Medicine
  • Year:
  • 2007

Abstract

Objective: The auditory brainstem response (ABR) is an evoked response obtained from brain electrical activity when an auditory stimulus is applied to the ear. An audiologist can determine the hearing threshold by applying stimuli at decreasing intensity levels, and can also diagnose various otological, audiological and neurological abnormalities by examining the morphology of the waveform and the latencies of the individual waves. This is a subjective process requiring considerable expertise. The aim of this research was to develop software classification models to assist the audiologist with automated detection of the ABR waveform and to provide objectivity and consistency in this detection.

Materials and methods: The dataset used in this study consisted of 550 waveforms derived from tests at a range of stimulus levels applied to 85 subjects of varying hearing ability. Each waveform had been classified by a human expert as 'response=Yes' or 'response=No'. Individual software classification models were generated using time, frequency and cross-correlation measures. Classification employed both artificial neural networks (NNs) and the C5.0 decision tree algorithm. Accuracies were validated using six-fold cross-validation and by randomising the training, validation and test datasets.

Results: The result was a two-stage classification process in which strong responses were classified to an accuracy of 95.6% in the first stage. This used a ratio of post-stimulus to pre-stimulus power in the time domain, together with power measures at 200, 500 and 900 Hz in the frequency domain. In the second stage, outputs from the time, frequency and cross-correlation classifiers were combined using the Dempster-Shafer method to produce a hybrid model with an accuracy of 85% (126 repeat waveforms).

Conclusion: By combining the different approaches, a hybrid system was created that emulates the approach used by an audiologist in analysing an ABR waveform. Interpretation did not rely on one particular feature but brought together power and frequency analysis as well as consistency of sub-averages. This provided a system that enhanced robustness to artefacts while maintaining classification accuracy.
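The stage-one features described in the abstract, a post-/pre-stimulus power ratio in the time domain plus spectral power near 200, 500 and 900 Hz, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the band half-width, and the use of a plain FFT periodogram are all assumptions.

```python
import numpy as np

def abr_features(waveform, fs, stim_idx, bands=(200, 500, 900), half_bw=50):
    """Sketch of stage-one ABR features: a post-/pre-stimulus power
    ratio plus narrow-band spectral power at the target frequencies.
    `half_bw` (band half-width in Hz) is an illustrative assumption."""
    pre = waveform[:stim_idx]
    post = waveform[stim_idx:]

    # Time-domain feature: mean post-stimulus power over mean pre-stimulus power.
    power_ratio = np.mean(post ** 2) / np.mean(pre ** 2)

    # Frequency-domain features: periodogram power summed in a narrow
    # band around each target frequency (200, 500, 900 Hz by default).
    spectrum = np.abs(np.fft.rfft(post)) ** 2
    freqs = np.fft.rfftfreq(len(post), d=1.0 / fs)
    band_powers = []
    for f0 in bands:
        mask = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
        band_powers.append(float(spectrum[mask].sum()))
    return power_ratio, band_powers
```

A waveform containing a genuine response should show a power ratio well above 1 and elevated power in the relevant bands, which is what the first-stage classifier thresholds on.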
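The second-stage fusion uses the Dempster-Shafer method to combine the time, frequency and cross-correlation classifier outputs. A minimal sketch over the two-hypothesis frame {response, no response}, with each classifier's output expressed as a mass function including an uncertainty mass on the whole frame ('Theta'), could look like this; the mass values and dictionary representation are illustrative assumptions.

```python
def combine_ds(m1, m2):
    """Dempster's rule of combination for the frame {R, N}:
    'R' = response present, 'N' = no response, 'Theta' = uncertainty
    (mass on the whole frame). Returns the normalised combined masses."""
    # Conflict k: mass assigned to contradictory singleton hypotheses.
    k = m1['R'] * m2['N'] + m1['N'] * m2['R']
    norm = 1.0 - k  # Dempster normalisation factor

    combined = {
        # Intersections yielding {R}: R&R, R&Theta, Theta&R.
        'R': (m1['R'] * m2['R'] + m1['R'] * m2['Theta']
              + m1['Theta'] * m2['R']) / norm,
        # Intersections yielding {N}: N&N, N&Theta, Theta&N.
        'N': (m1['N'] * m2['N'] + m1['N'] * m2['Theta']
              + m1['Theta'] * m2['N']) / norm,
    }
    combined['Theta'] = 1.0 - combined['R'] - combined['N']
    return combined
```

Combining three classifiers is just two successive applications of the rule. When two classifiers agree, the combined belief in the shared hypothesis exceeds either individual belief, which is what lets the hybrid model outperform its single-feature components.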