Wang, Han;
(2024)
Perceptual learning and neural processing of noise-vocoded speech under divided attention.
Doctoral thesis (Ph.D), UCL (University College London).
Han Wang - PhD Thesis - Final.pdf (Accepted Version). Access restricted to UCL open access staff until 1 August 2025.
Han Wang - Erratum for Approved Version.pdf (Supplemental Material). Access restricted to UCL open access staff until 1 August 2025.
Abstract
Human listeners can understand acoustically degraded speech, and their perception of such speech improves with practice or exposure. Both perception and perceptual learning of degraded speech are thought to rely on attention, and theoretical accounts such as predictive coding posit a key role for attention in supporting these processes. However, the nature of the attentional processes involved in perceptual learning was largely unknown: it was unclear whether undivided attention is necessary for learning and whether learning relies on language-specific or domain-general processes. Furthermore, the neural underpinnings of allocating attentional resources between degraded speech and concurrent sensory inputs remained unresolved. In three studies, I created an online listening task to demonstrate rapid perceptual learning of noise-vocoded speech (Study 1) and probed the nature of the attentional resources recruited for speech perceptual learning in dual tasks (Study 2). I also used fMRI and machine learning to explore how the brain allocates attentional resources to degraded speech processing while listeners are engaged in a concurrent task (Study 3). The results showed that the learnability of degraded speech was restricted neither by the difficulty nor by the processes (i.e., visual, phonological, or lexical) of a concurrent task. Neurobiologically, degraded speech processing under divided attention was associated with elevated responses in regions related to effortful listening and attentional control. Machine learning models further delineated a group of frontotemporal brain regions that govern robust speech processing and dynamically allocate attentional resources. Taken together, these results refine current theoretical accounts by demonstrating that undivided attention is not required for rapid perceptual learning of speech. The work also resolves the neural underpinnings of the interaction between acoustic degradation and divided attention, and develops a robust, explainable pipeline for applying machine learning to sophisticated analyses of task-based fMRI data with limited sample sizes.
Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Perceptual learning and neural processing of noise-vocoded speech under divided attention
Language: English
Additional information: Copyright © The Author 2022. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author's request.
UCL classification: UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
URI: https://discovery.ucl.ac.uk/id/eprint/10195172