A Realtime, Open-Source Speech-Processing Platform for Research in Hearing Loss Compensation.

Authors
  • Garudadri, Harinath1
  • Boothroyd, Arthur2
  • Lee, Ching-Hua1
  • Gadiyaram, Swaroop1
  • Bell, Justyn1
  • Sengupta, Dhiman3
  • Hamilton, Sean3
  • Vastare, Krishna Chaithanya1
  • Gupta, Rajesh3
  • Rao, Bhaskar D1
  • 1 Department of Electrical and Computer Engineering, University of California, San Diego.
  • 2 School of Speech, Language, and Hearing Sciences, San Diego State University.
  • 3 Department of Computer Science and Engineering, University of California, San Diego.
Type
Published Article
Journal
Conference Record of the Asilomar Conference on Signals, Systems & Computers
Publication Date
Jan 01, 2017
Volume
2017
Pages
1900–1904
Identifiers
DOI: 10.1109/acssc.2017.8335694
PMID: 35261536
Source
Medline
Language
English
License
Unknown

Abstract

We are developing a real-time, wearable, open-source speech-processing platform (OSP) that can be configured at compile time and run time by audiologists and hearing aid (HA) researchers to investigate advanced HA algorithms in lab and field studies. The goals of this contribution are to present the current system and propose areas for enhancements and extensions. We identify (i) basic and (ii) advanced features in commercial HAs and describe current signal-processing libraries and reference designs for building a functional HA. We present the performance of this system and compare it with commercial HAs using "Specification of Hearing Aid Characteristics," the ANSI S3.22 standard. We then describe a wireless protocol stack for remote control of HA parameters and for uploading media and HA status for offline research. The proposed architecture enables advanced research on hearing loss compensation by offloading processing from the ear-level assemblies, thereby eliminating the bottlenecks of CPU capacity and of communication between the left and right HAs.
