Decoding the attended speaker from EEG data using deep machine learning in continuous speech
Previous research has investigated whether signals obtained from EEG can be used to predict which speaker is attended in an acoustic scene. The long-term goal is to provide solutions for hearing aid users through EEG-based speaker selection or optimization. In this work, we analyze EEG data from listeners in a two-speaker scenario and test the application of algorithms borrowed from automatic speech recognition (ASR) to estimate which speaker was attended. Specifically, a deep neural network (DNN) is trained to predict the envelope of the attended speech signal. We compare our results to previous research [Mirkovic et al., 2015], in which a linear model was applied to obtain the estimate. The DNN-based approach requires shorter data segments for a decision, which is partly explained by its information transfer in the experiment, four times higher than that of the linear model.
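The decision stage common to both the linear and the DNN approach can be illustrated as follows: given an envelope reconstructed from EEG (here a stand-in for the model output), the attended speaker is the one whose speech envelope correlates best with it over a decision window. This is a minimal sketch with synthetic signals; the function name and the noisy-copy stand-in for the reconstructed envelope are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def decode_attended_speaker(est_env, env_a, env_b):
    """Pick the speaker whose speech envelope correlates best with the
    envelope reconstructed from EEG (Pearson correlation over the window)."""
    r_a = np.corrcoef(est_env, env_a)[0, 1]
    r_b = np.corrcoef(est_env, env_b)[0, 1]
    return ("A", r_a, r_b) if r_a >= r_b else ("B", r_a, r_b)

# Synthetic demo: the "reconstructed" envelope is a noisy copy of
# speaker A's envelope, mimicking attention to speaker A.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)                       # 10 s decision window, 100 Hz
env_a = np.abs(np.sin(2 * np.pi * 0.5 * t))      # stand-in speech envelopes
env_b = np.abs(np.sin(2 * np.pi * 0.7 * t + 1))
est_env = env_a + 0.5 * rng.standard_normal(t.size)

winner, r_a, r_b = decode_attended_speaker(est_env, env_a, env_b)
print(winner)
```

Shorter decision windows make the correlation estimates noisier, which is why the required segment length is the key performance measure compared between the two models.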
Mirkovic B, Debener S, Jaeger M, De Vos M. "Decoding the attended speech stream with multi-channel EEG: implications for online, daily-life applications." Journal of Neural Engineering 12.4 (2015): 046007.