Investigating the relationship between SpiN recognition, AM depth detection and evoked potentials in CI users
Extensive research has explored human temporal auditory processing abilities and their links to speech-in-noise (SpiN) perception. The proposed study investigates the influence of temporal processing abilities on SpiN recognition scores. The temporal feature of interest is amplitude modulation (AM) depth detection, for which a potential objective measure based on electroencephalography (EEG) data will be explored. An additional loudness-balanced condition within the EEG paradigm will account for inherent loudness differences between modulated and unmodulated stimuli, ensuring these have no significant influence on the obtained EEG measures. Preliminary pilot data will be presented at the workshop.
Based on previous work in normal-hearing (NH) participants by our group, we hypothesize significant correlations between SpiN thresholds and AM depth detection thresholds, as well as between behavioural and neural AM depth detection thresholds, in both cohorts. We further hypothesize no significant effect of loudness balancing on the EEG measures.
Cochlear implant (CI) users and NH participants will be recruited for this study. The test battery consists of four paradigms: (1) an adaptive threshold procedure to determine the behavioural AM depth detection threshold, (2) an adaptive speech-reception threshold (SRT) test to determine the signal-to-noise ratio at which 50% of keywords are identified correctly, (3) a neurophysiological mismatch negativity paradigm to determine a neural threshold of AM depth detection, and (4) a loudness-balancing paradigm to determine, for each participant, the RMS level at which the modulated and unmodulated sounds are subjectively equally loud. The loudness-matched stimuli are presented as a fourth block of the EEG task for the 100% AM depth condition.
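The adaptive logic underlying paradigms (1) and (2) can be illustrated with a generic up-down staircase. The sketch below is a minimal 1-up/1-down track (which converges on the 50%-correct point); the start level, step size, and scoring rule are illustrative assumptions, not the study's exact settings.

```python
def adaptive_track(trial_outcomes, start_level_db=0.0, step_db=2.0):
    """Generic 1-up/1-down adaptive track.

    trial_outcomes: iterable of booleans, one per trial (True = correct).
    The level (e.g. SNR in dB) decreases after a correct response and
    increases after an incorrect one, so the track oscillates around the
    50%-correct point. Returns the list of levels presented; a threshold
    estimate is typically the mean level over the last few reversals.
    """
    level = start_level_db
    track = []
    for correct in trial_outcomes:
        track.append(level)
        level += -step_db if correct else step_db
    return track

# Example: two correct responses drive the SNR down, two errors bring it back up.
levels = adaptive_track([True, True, False, False, True])
```

In practice such tracks terminate after a fixed number of reversals or trials, and the keyword-based scoring of the SRT test replaces the single boolean per trial used here.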
The chosen AM rate is 8 Hz, imposed on a broadband speech-shaped noise carrier, based on the importance of slow envelope fluctuations for speech recognition. The AM depths for the mismatch paradigm are set to 50%, 75% and 100%. All stimuli will be presented at 60 dBA (calibrated for the unmodulated noise), via an Otocube™ (http://otocube.com) for CI users and monaurally via headphones for NH participants. The SRTs are determined with a standard adaptive procedure in a ten-talker babble background noise. The signal-to-noise ratio (SNR) is adjusted depending on the number of correctly identified keywords, converging on the SNR that yields 50% correct identification.
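The stimulus construction, and the inherent level difference between modulated and unmodulated noise that motivates the loudness-balancing paradigm, can be sketched as follows. The Gaussian noise here is a stand-in for the speech-shaped carrier (the spectral shaping filter is omitted), and the sampling rate and duration are arbitrary choices.

```python
import numpy as np

def am_noise(duration_s=1.0, fs=44100, am_rate_hz=8.0, depth=1.0, seed=0):
    """Broadband noise carrier with sinusoidal amplitude modulation.

    depth is the AM depth as a fraction (0.5, 0.75, 1.0 correspond to the
    50%, 75% and 100% conditions). A plain Gaussian noise carrier stands
    in for the speech-shaped noise used in the study.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * am_rate_hz * t)
    return envelope * carrier

# Modulation raises the long-term RMS relative to the unmodulated carrier
# (envelope power averages 1 + depth**2 / 2), which is why calibrating at
# 60 dBA for the unmodulated noise leaves the modulated stimuli louder
# unless their RMS is compensated, as in the loudness-balancing paradigm.
modulated = am_noise(depth=1.0)
unmodulated = am_noise(depth=0.0)
```

Loudness balancing then amounts to scaling the modulated stimulus to the participant's subjectively matched RMS level rather than to the physical RMS of the unmodulated noise.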