UCL Discovery

Estimating underlying articulatory targets of Thai vowels by using deep learning based on generating synthetic samples from a 3D vocal tract model and data augmentation

Lapthawan, T; Prom-On, S; Birkholz, P; Xu, Y; (2022) Estimating underlying articulatory targets of Thai vowels by using deep learning based on generating synthetic samples from a 3D vocal tract model and data augmentation. IEEE Access, 10, pp. 41489-41502. 10.1109/ACCESS.2022.3166922. Green open access

Text: Estimating_Underlying_Articulatory_Targets_of_Thai_Vowels_by_Using_Deep_Learning_Based_on_Generating_Synthetic_Samples_From_a_3D_Vocal_Tract_Model_and_Data_Augmentation.pdf - Published Version (1MB)

Abstract

Representation learning is one of the fundamental issues in modeling articulatory-based speech synthesis using target-driven models. This paper proposes a computational strategy for learning underlying articulatory targets from a 3D articulatory speech synthesis model using a bi-directional long short-term memory recurrent neural network, starting from a small set of representative seed samples. From this seed set, a larger training set was generated that provided richer contextual variations for the model to learn. The deep learning model for acoustic-to-target mapping was then trained to model the inverse relation of the articulation process. This method allows the trained model to map given acoustic data onto articulatory target parameters, which can then be used to identify target distributions across linguistic contexts. The model was evaluated on both its effectiveness in mapping acoustics to articulation and the perceptual accuracy of speech reproduced from the estimated articulation. The results indicate that the model can accurately imitate speech with a high degree of phonemic precision.
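To make the acoustic-to-target mapping concrete, the following is a minimal sketch of a bidirectional LSTM regressor of the kind described above, written in PyTorch. The feature dimensions, layer sizes, and training step are illustrative assumptions only, not the configuration reported in the paper; the random tensors stand in for the synthetic samples generated from the 3D vocal tract model.

```python
# Minimal sketch: per-frame acoustic features -> articulatory target parameters.
# All sizes below are assumptions for illustration, not the paper's settings.
import torch
import torch.nn as nn

N_ACOUSTIC = 13   # assumed acoustic feature dimension per frame (e.g. MFCC-like)
N_TARGETS = 18    # assumed number of vocal-tract target parameters
HIDDEN = 128      # assumed LSTM hidden size

class AcousticToTarget(nn.Module):
    """Maps a sequence of acoustic frames to articulatory target estimates."""
    def __init__(self):
        super().__init__()
        self.blstm = nn.LSTM(N_ACOUSTIC, HIDDEN, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * HIDDEN, N_TARGETS)

    def forward(self, x):            # x: (batch, frames, N_ACOUSTIC)
        h, _ = self.blstm(x)         # h: (batch, frames, 2 * HIDDEN)
        return self.head(h)          # per-frame target parameter estimates

# Illustrative training step on stand-ins for the augmented synthetic data.
model = AcousticToTarget()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

acoustics = torch.randn(8, 100, N_ACOUSTIC)   # synthesized acoustic features
targets = torch.randn(8, 100, N_TARGETS)      # known targets used for synthesis

prediction = model(acoustics)
loss = loss_fn(prediction, targets)
loss.backward()
optimizer.step()
```

Because the training samples are synthesized from known target parameters, the ground-truth targets are available by construction, which is what makes this supervised inverse mapping feasible from a small seed set.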

Type: Article
Title: Estimating underlying articulatory targets of Thai vowels by using deep learning based on generating synthetic samples from a 3D vocal tract model and data augmentation
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/ACCESS.2022.3166922
Publisher version: https://doi.org/10.1109/ACCESS.2022.3166922
Language: English
Additional information: This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third-party material in this article are included in the Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
Keywords: Tongue, Solid modeling, Acoustics, Hidden Markov models, Interpolation, Speech recognition, Shape
UCL classification: UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > Speech, Hearing and Phonetic Sciences
URI: https://discovery.ucl.ac.uk/id/eprint/10148252
