UCL Discovery

Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings

Turton, J; Vinson, D; Smith, RE; (2021) Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings. In: Proceedings of the 6th Workshop on Representation Learning for NLP. (pp. 248-262). Association for Computational Linguistics.


Abstract

Models based on the transformer architecture, such as BERT, have marked a crucial step forward in the field of Natural Language Processing. Importantly, they allow the creation of word embeddings that capture important semantic information about words in context. However, as single entities, these embeddings are difficult to interpret and the models used to create them have been described as opaque. Binder and colleagues proposed an intuitive embedding space where each dimension is based on one of 65 core semantic features. Unfortunately, the space only exists for a small dataset of 535 words, limiting its uses. Previous work (Utsumi, 2018, 2020; Turton et al., 2020) has shown that Binder features can be derived from static embeddings and successfully extrapolated to a large new vocabulary. Taking the next step, this paper demonstrates that Binder features can be derived from the BERT embedding space. This provides two things: (1) semantic feature values derived from contextualised word embeddings and (2) insights into how semantic features are represented across the different layers of the BERT model.
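The mapping the abstract describes — from contextualised embeddings to Binder's 65 semantic feature values — can be framed as a regression problem. The sketch below is a hypothetical illustration, not the authors' implementation: it fits a closed-form ridge regression from synthetic stand-in embeddings to synthetic stand-in feature norms (real use would substitute BERT embeddings, e.g. 768-dimensional vectors, for `X` and the Binder norms over the 535 rated words for `Y`).

```python
import numpy as np

# Hypothetical sketch: derive Binder-style semantic feature values from
# contextualised embeddings by fitting a linear (ridge) mapping.
# X stands in for BERT word embeddings; Y stands in for Binder norms.

rng = np.random.default_rng(0)

n_words, emb_dim, n_features = 535, 768, 65
X = rng.normal(size=(n_words, emb_dim))           # stand-in embeddings
W_true = rng.normal(size=(emb_dim, n_features))
Y = X @ W_true + 0.1 * rng.normal(size=(n_words, n_features))  # stand-in norms

# Closed-form ridge regression: W = (X^T X + lam*I)^(-1) X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(emb_dim), X.T @ Y)

# Predict the 65 feature values for a new, unseen embedding
x_new = rng.normal(size=(1, emb_dim))
features = x_new @ W  # one value per semantic feature, shape (1, 65)
```

Because the mapping is fitted once on the rated vocabulary, it can then score any embedding — including different contextualised embeddings of the same word — which is what makes the derived features context-sensitive.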

Type: Proceedings paper
Title: Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings
Event: Joint Conference of 59th Annual Meeting of the Association-for-Computational-Linguistics (ACL) / 11th International Joint Conference on Natural Language Processing (IJCNLP) / 6th Workshop on Representation Learning for NLP (RepL4NLP)
Location: Online (virtual event)
Dates: 01 August 2021 - 06 August 2021
Open access status: An open access version is available from UCL Discovery
Publisher version: https://aclanthology.org/2021.repl4nlp-1.26.pdf
Language: English
Additional information: This version is the version of record, available under the Creative Commons Attribution 4.0 International (CC BY 4.0) licence.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10135843