UCL Discovery

Perceptual Learning of Noise-Vocoded Speech Under Divided Attention

Wang, H; Chen, R; Yan, Y; McGettigan, C; Rosen, S; Adank, P (2023) Perceptual Learning of Noise-Vocoded Speech Under Divided Attention. Trends in Hearing, 27, pp. 1-17. 10.1177/23312165231192297. Green open access

Wang_TrendsinHearing_2023.pdf - Published Version (Download, 1MB)

Abstract

Speech perception performance for degraded speech can improve with practice or exposure. Such perceptual learning is thought to rely on attention, and theoretical accounts such as the predictive coding framework suggest a key role for attention in supporting learning. However, it is unclear whether speech perceptual learning requires undivided attention. We evaluated the role of divided attention in speech perceptual learning in two online experiments (N = 336). Experiment 1 tested the reliance of perceptual learning on undivided attention. In a between-group design, participants completed a speech recognition task in which they repeated forty noise-vocoded sentences. They performed the speech task alone or concurrently with a domain-general visual task (dual task) at one of three difficulty levels. We observed perceptual learning under divided attention in all four groups, moderated by dual-task difficulty. Listeners in the easy and intermediate visual conditions improved as much as the single-task group. Those who completed the most challenging visual task showed faster learning and reached ending performance similar to that of the single-task group. Experiment 2 tested whether learning relies on domain-specific or domain-general processes. Participants either completed the speech task alone or performed it together with a secondary task designed to recruit domain-specific (lexical or phonological) or domain-general (visual) processes. All secondary-task conditions produced patterns and amounts of learning comparable to the single speech task. Our results demonstrate that the impact of divided attention on perceptual learning does not depend strictly on domain-general or domain-specific processes, and that speech perceptual learning persists under divided attention.
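
As background for readers unfamiliar with the stimulus type: noise vocoding divides a speech signal into frequency bands and replaces the fine structure in each band with noise modulated by that band's amplitude envelope, preserving the temporal envelope while degrading spectral detail. The Python sketch below is purely illustrative and is not the authors' stimulus pipeline; the band count, band edges, filter settings, and function name are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_bands=6, f_lo=100.0, f_hi=7000.0):
    """Illustrative noise vocoder: keep per-band amplitude envelopes and
    replace fine structure with band-limited noise (parameters assumed)."""
    # Logarithmically spaced band edges between f_lo and f_hi (fs must exceed 2 * f_hi).
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    out = np.zeros(len(speech))
    env_sos = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")  # envelope smoother (~30 Hz)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)        # band-limited speech
        env = sosfiltfilt(env_sos, np.abs(band))    # its amplitude envelope
        carrier = sosfiltfilt(band_sos, noise)      # band-limited noise carrier
        out += np.clip(env, 0.0, None) * carrier    # envelope-modulated noise
    # Match the overall level of the original (float-valued) waveform.
    out *= np.sqrt(np.mean(speech ** 2) / (np.mean(out ** 2) + 1e-12))
    return out

Calling noise_vocode(waveform, fs) on a float waveform sampled at fs Hz then yields a degraded version; fewer bands produce less intelligible speech.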

Type: Article
Title: Perceptual Learning of Noise-Vocoded Speech Under Divided Attention
Open access status: An open access version is available from UCL Discovery
DOI: 10.1177/23312165231192297
Publisher version: https://doi.org/10.1177/23312165231192297
Language: English
Additional information: © The Author(s) 2023. This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).
Keywords: perceptual learning, noise-vocoded speech, divided attention, task difficulty, phonological processing, lexical processing
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > Speech, Hearing and Phonetic Sciences
URI: https://discovery.ucl.ac.uk/id/eprint/10175160