Evaluating the neural basis for concurrent vowel identification in dry and reverberant conditions

Ananthakrishna Chintanpalli, Purdue University

Abstract

Reverberation is omnipresent in everyday listening situations (e.g., listening to a friend in a restaurant). Listeners with normal hearing (NH) have a phenomenal capacity to comprehend a single talker in the presence of multiple talkers in reverberant environments, especially when compared to listeners with sensorineural hearing loss. Even people with today's best hearing aids or cochlear implants often complain that they have major difficulties in the above-mentioned circumstances. Although a difference in voice pitch (fundamental frequency, F0) has been shown to be an important cue for talker segregation, the physiological basis for the difficulties faced by listeners with hearing impairment in complex environments remains unknown. Thus, the present research compared the effects of reverberation on perception and neural coding to better understand the cochlear and neural signal-processing mechanisms that contribute to a listener's ability to segregate multiple talkers in real-world listening environments. Concurrent vowel identification was used as an experimental task for understanding how differences in F0 are used to segregate multiple talkers. Perceptual data from NH listeners showed that reverberation reduced the overall identification of concurrent vowels compared to dry conditions (i.e., no reverberation), but that there was no further perceptual degradation as reverberation increased in severity. To investigate the neural correlates underlying these perceptual observations, the effects of reverberation on pitch coding of single harmonic tone complexes (HTCs) and concurrent HTCs were quantified using a physiologically realistic auditory-nerve (AN) model. Pooled auto-correlation functions were computed from the AN model, and a periodic sieve template analysis was then used to estimate neural pitch and its salience. These results were then compared with similar acoustic analyses.
For both single and concurrent HTCs, neural pitch salience was degraded in reverberation relative to dry conditions, but much less so in the neural representation than in the acoustic representation. These comparisons between acoustic and neural analyses suggest that the cochlea may be the first stage in the auditory system that partially compensates for the acoustic degradation caused by reverberation. It was hypothesized that cochlear non-linearity associated with outer-hair-cell (OHC) function is a primary contributor to this cochlear compensation. This hypothesis was tested by comparing predictions from NH and hearing-impaired versions of the AN model. Predictions with OHC dysfunction showed a larger degradation due to reverberation than predictions for NH, suggesting that OHC-based cochlear non-linearity does contribute to cochlear compensation. Overall, the findings from the present study provide valuable physiological and perceptual insights to improve signal-processing strategies for hearing aids, cochlear implants, and automatic speech recognition systems in real-world listening environments.
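The autocorrelation-plus-periodic-sieve analysis described above can be illustrated with a minimal, signal-level sketch. The sketch below is a simplification and an assumption on my part: the dissertation applies the sieve to pooled auto-correlation functions computed from auditory-nerve model outputs, whereas here the autocorrelation is taken directly on the acoustic waveform of a harmonic tone complex, and the salience measure is a simple mean of ACF values at integer multiples of each candidate period.

```python
import numpy as np

def pitch_salience_sieve(signal, fs, f0_min=80.0, f0_max=400.0, n_harmonics=8):
    """Estimate pitch and its salience via autocorrelation + periodic sieve.

    Simplified acoustic-domain sketch of the analysis described in the
    abstract; the actual study uses pooled ACFs from an AN model.
    """
    # Normalized autocorrelation function (ACF), lags 0 .. N-1
    acf = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    acf = acf / acf[0]

    best_f0, best_salience = None, -np.inf
    for f0 in np.arange(f0_min, f0_max, 1.0):
        period = fs / f0  # candidate F0 period, in samples
        # Periodic sieve: sample the ACF at integer multiples of the period
        lags = (np.arange(1, n_harmonics + 1) * period).round().astype(int)
        lags = lags[lags < len(acf)]
        salience = acf[lags].mean()  # high when ACF peaks align with sieve
        if salience > best_salience:
            best_f0, best_salience = f0, salience
    return best_f0, best_salience

# Example: a 100-Hz harmonic tone complex (first 10 harmonics)
fs, dur, f0 = 16000, 0.5, 100.0
t = np.arange(int(fs * dur)) / fs
htc = sum(np.sin(2 * np.pi * h * f0 * t) for h in range(1, 11))
est_f0, sal = pitch_salience_sieve(htc, fs)
```

For a clean (dry) complex, the sieve matching the true F0 aligns with the ACF peaks and yields a salience near 1; reverberation smears the temporal fine structure, lowering the ACF peaks and hence the estimated salience, which is the degradation the study quantifies.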

Degree

Ph.D.

Advisors

Heinz, Purdue University.

Subject Area

Audiology|Neurosciences|Biomedical engineering
