UCL Discovery

Constructing Semantic Models From Words, Images, and Emojis

Rotaru, AS; Vigliocco, G; (2020) Constructing Semantic Models From Words, Images, and Emojis. Cognitive Science, 44(4), Article e12830. DOI: 10.1111/cogs.12830.

Text: Vigliocco_Constructing semantic models from words, images and emojis - final version.pdf (Accepted Version, 914kB)
Abstract

A number of recent models of semantics combine linguistic information, derived from text corpora, and visual information, derived from image collections, demonstrating that the resulting multimodal models are better than either of their unimodal counterparts at accounting for behavioral data. Empirical work on semantic processing has shown that emotion also plays an important role, especially for abstract concepts; however, models integrating emotion along with linguistic and visual information are lacking. Here, we first improve on visual and affective representations, derived from state-of-the-art existing models, by choosing models that best fit available human semantic data and extending the number of concepts they cover. Crucially, we then assess whether adding affective representations (obtained from a neural network model designed to predict emojis from co-occurring text) improves the fit to semantic similarity/relatedness judgments over a purely linguistic model and a linguistic-visual model. We find that, given specific weights assigned to the models, adding both visual and affective representations improves performance, with visual representations providing an improvement especially for more concrete words, and affective representations especially improving the fit for more abstract words.
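The general approach described in the abstract, combining weighted per-modality representations and evaluating how well the combined model fits human similarity/relatedness judgments, can be illustrated with a minimal sketch. The sketch below is not the authors' pipeline: the embeddings, word pairs, ratings, and weights are toy placeholders, and cosine similarity with Spearman correlation is assumed only because it is a common evaluation setup for such benchmarks.

```python
# Illustrative sketch only: the paper's actual models, weights, and evaluation
# pipeline are not reproduced here. Embeddings, pairs, and ratings are toy data.
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-word embeddings for each modality (dimensions are arbitrary).
words = ["dog", "freedom", "apple", "justice"]
linguistic = {w: rng.normal(size=300) for w in words}   # e.g. text-corpus vectors
visual     = {w: rng.normal(size=128) for w in words}   # e.g. image-derived vectors
affective  = {w: rng.normal(size=64)  for w in words}   # e.g. emoji-based vectors

def modality_similarity(space, w1, w2):
    """Cosine similarity between two words within one modality."""
    return 1.0 - cosine(space[w1], space[w2])

def fused_similarity(w1, w2, weights=(0.6, 0.2, 0.2)):
    """Weighted combination of per-modality similarities (weights are assumed)."""
    sims = (modality_similarity(linguistic, w1, w2),
            modality_similarity(visual, w1, w2),
            modality_similarity(affective, w1, w2))
    return sum(wt * s for wt, s in zip(weights, sims))

# Evaluate the fit against (toy) human relatedness ratings with Spearman's rho.
pairs = [("dog", "apple"), ("freedom", "justice"), ("dog", "freedom")]
human = [0.40, 0.75, 0.10]                      # hypothetical ratings
model = [fused_similarity(a, b) for a, b in pairs]
rho, _ = spearmanr(human, model)
print(f"Spearman rho between model and human ratings: {rho:.2f}")
```

In a setup like this, the modality weights could be tuned separately for concrete and abstract words, which is one way the differential contributions of visual and affective information reported in the abstract could manifest.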

Type: Article
Title: Constructing Semantic Models From Words, Images, and Emojis
Location: United States
Open access status: An open access version is available from UCL Discovery
DOI: 10.1111/cogs.12830
Publisher version: https://doi.org/10.1111/cogs.12830
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Concreteness, Distributional models, Emotion, Language, Multimodal models, Similarity/relatedness, Vision
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > Experimental Psychology
URI: https://discovery.ucl.ac.uk/id/eprint/10095158
Downloads since deposit: 97
