UCL Discovery

Optimal time lags for linear cortical auditory attention detection: differences between speech and music listening

Simon, Adele; Østergaard, Jan; Bech, Søren; Loquet, Gerard; (2022) Optimal time lags for linear cortical auditory attention detection: differences between speech and music listening. In: Proceedings of the 19th International Symposium on Hearing: Psychoacoustics, Physiology of Hearing, and Auditory Modelling, from the Ear to the Brain (ISH2022). Zenodo. Green open access

Text: ISH2022_Simon_etal.pdf - Published Version (2MB)
Abstract

In recent decades, there has been considerable interest in detecting auditory attention from brain signals. Cortical recordings have been shown to be useful in determining which speaker a person is listening to within a mixture of sounds (the cocktail party effect). Linear regression, often called the stimulus reconstruction method, shows that the envelope of the sounds heard can be reconstructed from continuous electroencephalography (EEG) recordings. The target sound, to which the listener is paying attention, can be reconstructed to a greater extent than the other sounds present in the sound scene, which allows attention decoding. Reconstruction can be obtained with EEG signals that are delayed relative to the audio signal, to account for the time needed for neural processing. This can be used to identify the latencies at which reconstruction is optimal, reflecting cortical processes specific to the type of audio heard. However, most of these studies used only speech signals and did not investigate other types of auditory stimuli, such as music. In the present study, we applied the stimulus reconstruction method to decode auditory attention in a cocktail party scenario that included both speech and music. Participants were presented with a target sound (either speech or music) and a distractor sound (either speech or music) while their cortical responses were continuously recorded with a 64-channel EEG system. From these recordings, we reconstructed the envelopes of both the target and distractor stimuli using linear ridge regression decoding models at individual time lags. Results showed different time lags for maximal reconstruction accuracy between music and speech listening, suggesting separate underlying cortical processes. Results also suggest that attention can influence reconstruction accuracy at middle/late time lags.
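
The decoding approach described in the abstract can be illustrated with a minimal sketch: a backward (stimulus reconstruction) model fitted at one time lag at a time, with reconstruction accuracy taken as the correlation between the reconstructed and actual envelope. The sketch below uses synthetic data and generic tools (NumPy, scikit-learn's Ridge, SciPy's pearsonr); the sampling rate, lag range, train/test split, and ridge penalty are illustrative assumptions, not the settings or pipeline used in the study.

    # Minimal single-lag stimulus reconstruction sketch (synthetic data).
    # All parameter values below are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import Ridge
    from scipy.stats import pearsonr

    fs = 64                                   # assumed sampling rate (Hz)
    n_channels, n_samples = 64, 120 * fs      # 2 minutes of 64-channel "EEG"
    rng = np.random.default_rng(0)

    # Synthetic envelope (smoothed noise) and EEG that encodes it at ~150 ms lag.
    envelope = np.convolve(rng.standard_normal(n_samples), np.ones(8) / 8, "same")
    true_lag = int(0.150 * fs)
    mixing = rng.standard_normal(n_channels)
    eeg = np.outer(np.roll(envelope, true_lag), mixing) \
          + rng.standard_normal((n_samples, n_channels))

    def reconstruction_accuracy(eeg, envelope, lag, alpha=1.0):
        """Fit a ridge decoder mapping EEG at one time lag back to the envelope;
        return the Pearson correlation between reconstructed and true envelope."""
        if lag > 0:                           # EEG lags the audio by `lag` samples
            X, y = eeg[lag:], envelope[:-lag]
        else:
            X, y = eeg, envelope
        split = len(y) // 2                   # simple half/half train-test split
        model = Ridge(alpha=alpha).fit(X[:split], y[:split])
        r, _ = pearsonr(model.predict(X[split:]), y[split:])
        return r

    # Sweep individual lags (0-500 ms) to find where reconstruction peaks.
    lags_ms = np.arange(0, 501, 25)
    accs = [reconstruction_accuracy(eeg, envelope, int(ms / 1000 * fs)) for ms in lags_ms]
    print("best lag: %d ms (r = %.3f)" % (lags_ms[int(np.argmax(accs))], max(accs)))

In the study, as the abstract describes, the same kind of per-lag decoding would be applied separately to the target and distractor envelopes, so that the lag profiles of reconstruction accuracy can be compared between speech and music listening.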

Type: Working / discussion paper
Title: Optimal time lags for linear cortical auditory attention detection: differences between speech and music listening
Event: 19th International Symposium on Hearing (ISH2022)
Location: Lyon
Open access status: An open access version is available from UCL Discovery
DOI: 10.5281/zenodo.6576990
Publisher version: https://zenodo.org/records/6576990
Language: English
Additional information: © The Authors 2025. Original content in this preprint is licensed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0).
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > The Ear Institute
URI: https://discovery.ucl.ac.uk/id/eprint/10219102
