9th Speech in Noise Workshop, 5-6 January 2017, Oldenburg

The discrimination of voice cues in simulations of bimodal electro-acoustic cochlear-implant hearing

Deniz Başkent(a), Annika Luckmann, Jessy Ceha
University of Groningen, University Medical Center Groningen, Department of Otorhinolaryngology / Head and Neck Surgery

Etienne Gaudrain(b)
Lyon Neuroscience Research Center, CNRS, Lyon

Terrin Tamati
University of Groningen, University Medical Center Groningen, Department of Otorhinolaryngology / Head and Neck Surgery

(a) Presenting
(b) Attending

Normal-hearing (NH) listeners enhance speech perception by taking advantage of talkers' voice cues to separate and selectively attend to speech streams from multiple talkers (cocktail-party listening). Two voice cues, fundamental frequency (F0) and vocal-tract length (VTL), are particularly effective for voice discrimination. Due to the limitations of signal transmission through cochlear implants (CIs), most CI users make poor use of these cues. However, CI users with residual hearing in the non-implanted ear have been shown to benefit from additional speech information conveyed via acoustic hearing (usually with a hearing aid), even when the residual hearing is limited to low frequencies. Since some voice cues are also present in this frequency range, bimodal electric-acoustic hearing in CI users may also result in better discrimination of voice cues. We further expect bimodal hearing to be particularly beneficial for discrimination of the F0 voice cue, because low-frequency speech provides more salient cues for F0 than for VTL.

In the current study, we investigated the potential benefits of bimodal hearing for the perception of F0 and VTL voice cues using acoustic simulations of CIs. Just-noticeable differences (JNDs) for F0 and VTL were measured in an adaptive three-alternative forced-choice voice discrimination task using triplets of CV syllables. The task was to identify the odd triplet, which differed from the two standard triplets in F0 or VTL. Bimodal hearing was simulated by presenting low-pass filtered speech (LPF; cutoff frequencies 150 Hz and 300 Hz) in one ear to simulate residual hearing, and noise-band vocoded speech (Voc; 4, 8, and 16 spectral channels) in the other ear to simulate electric hearing. An unprocessed condition, LPF-only conditions, and Voc-only conditions were also included for comparison.
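For illustration, the Python sketch below (NumPy/SciPy) shows one way such a bimodal stimulus can be constructed: a low-pass filter stands in for residual acoustic hearing in one ear, and a noise-band vocoder stands in for electric hearing in the other. This is not the stimulus-generation code used in the study; the band edges, filter orders, and envelope cutoff are illustrative assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass(x, cutoff_hz, fs, order=4):
    """Low-pass filter simulating residual acoustic hearing (e.g. 150 or 300 Hz cutoff)."""
    sos = butter(order, cutoff_hz, btype='low', fs=fs, output='sos')
    return sosfiltfilt(sos, x)

def noise_vocoder(x, fs, n_channels=8, f_lo=80.0, f_hi=6000.0, env_cutoff=160.0):
    """Noise-band vocoder simulating electric hearing.

    The signal is split into n_channels log-spaced analysis bands; each band's
    temporal envelope (rectification + low-pass) modulates noise filtered into
    the same band, and the modulated bands are summed. Band edges and the
    envelope cutoff are illustrative choices, not the study's parameters.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype='band', fs=fs, output='sos')
        band = sosfiltfilt(band_sos, x)
        env = lowpass(np.abs(band), env_cutoff, fs)          # temporal envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
        out += env * carrier
    return out

def bimodal_stimulus(x, fs, lpf_cutoff=300.0, n_channels=8):
    """Return (acoustic-ear, CI-ear) signals for dichotic presentation."""
    return lowpass(x, lpf_cutoff, fs), noise_vocoder(x, fs, n_channels)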

Results showed that F0 JNDs in the bimodal conditions were significantly smaller than in the Voc-only conditions, for all numbers of channels. There were no significant differences between the 150 Hz and 300 Hz LPF conditions. For VTL, no bimodal benefit was found. Thus, the low-frequency information in the LPF speech improved F0 discrimination in CI-simulated speech. This suggests that low-frequency acoustic information provides a more salient F0 cue than the noise-band vocoded speech, and that listeners can exploit it to discriminate voices more accurately. These findings are consistent with previous studies showing a bimodal benefit in tasks with a strong pitch component, such as music perception or speech-in-noise perception.

