UCL Discovery

StaResGRU-CNN with CMedLMs: A stacked residual GRU-CNN with pre-trained biomedical language models for predictive intelligence

Ni, Pin; Li, Gangmin; Hung, Patrick CK; Chang, Victor; (2021) StaResGRU-CNN with CMedLMs: A stacked residual GRU-CNN with pre-trained biomedical language models for predictive intelligence. Applied Soft Computing , 113 (B) , Article 107975. 10.1016/j.asoc.2021.107975. Green open access

Text: Ni_ASOC_1_extracted.pdf — Download (4MB)

Abstract

Predictive biomedical intelligence is a task that demands strong professional expertise and cannot be separated from a large body of external domain knowledge. Using transfer learning to acquire sufficient prior knowledge from massive biomedical text corpora is an efficient and convenient way to improve the performance of downstream predictive and decision-making models, but this approach has not been fully developed for Chinese Natural Language Processing (NLP) in the biomedical field. This study proposes a Stacked Residual Gated Recurrent Unit-Convolutional Neural Network (StaResGRU-CNN) combined with pre-trained language models (PLMs) for biomedical text-based predictive tasks. We explore transfer-learning paradigms in biomedical NLP that draw on external expert knowledge, compare several Chinese and English language models, and identify key issues that remain undeveloped or are difficult to apply in the field of Chinese biomedicine. We therefore also propose a series of Chinese bioMedical Language Models (CMedLMs) with detailed evaluations on downstream tasks. Through transfer learning, these language models are endowed with prior knowledge that improves the performance of downstream tasks and addresses predictive NLP tasks specific to the Chinese biomedical field, better serving predictive medical systems. In addition, we propose a free-form text Electronic Medical Record (EMR)-based Disease Diagnosis Prediction task, which is used alongside Clinical Named Entity Recognition and Biomedical Text Classification tasks to evaluate the analyzed language models. Our experiments show that introducing biomedical knowledge into the analyzed models significantly improves their performance on predictive biomedical NLP tasks of different granularity, and our proposed model also achieves competitive performance on these predictive intelligence tasks.
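The stacked residual GRU-CNN named in the abstract can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' published configuration: the hidden sizes, number of layers, residual wiring, and pooling choice are all assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn as nn

class StackedResidualGRUCNN(nn.Module):
    """Illustrative sketch: stacked bidirectional GRU layers with residual
    (skip) connections, followed by a 1-D convolution and global max pooling
    for sequence-level classification. All hyperparameters are assumed."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_gru_layers=2, num_classes=10):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Project embeddings to hidden_dim so each residual addition
        # is a simple element-wise sum of same-shaped tensors.
        self.input_proj = nn.Linear(embed_dim, hidden_dim)
        # Each bidirectional GRU outputs 2 * (hidden_dim // 2) = hidden_dim.
        self.grus = nn.ModuleList(
            nn.GRU(hidden_dim, hidden_dim // 2, batch_first=True,
                   bidirectional=True)
            for _ in range(num_gru_layers)
        )
        self.conv = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.input_proj(self.embedding(token_ids))  # (batch, seq, hidden)
        for gru in self.grus:
            out, _ = gru(x)
            x = x + out                                 # residual connection
        x = self.conv(x.transpose(1, 2))                # (batch, hidden, seq)
        x = torch.relu(x).max(dim=2).values             # global max pooling
        return self.classifier(x)                       # (batch, num_classes)

model = StackedResidualGRUCNN(vocab_size=5000)
logits = model(torch.randint(0, 5000, (4, 32)))
print(logits.shape)  # torch.Size([4, 10])
```

In a transfer-learning setup like the one the paper describes, the randomly initialized embedding layer would instead be fed contextual representations from a pre-trained biomedical language model.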

Type: Article
Title: StaResGRU-CNN with CMedLMs: A stacked residual GRU-CNN with pre-trained biomedical language models for predictive intelligence
Open access status: An open access version is available from UCL Discovery
DOI: 10.1016/j.asoc.2021.107975
Publisher version: https://doi.org/10.1016/j.asoc.2021.107975
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Natural language processing, Predictive intelligence, Biomedical text mining, Named Entity Recognition, Text classification, Transfer learning, Pre-trained language model
UCL classification: UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Civil, Environ and Geomatic Eng
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL
URI: https://discovery.ucl.ac.uk/id/eprint/10157987
