
Autoregressive Entity Retrieval

De Cao, N; Izacard, G; Riedel, S; Petroni, F; (2020) Autoregressive Entity Retrieval. In: ICLR 2021 - 9th International Conference on Learning Representations. ICLR: Vienna, Austria. Green open access

Full text: Riedel_1115_autoregressive_entity_retrieva.pdf (791 kB)

Abstract

Entities are at the center of how we represent and aggregate knowledge. For instance, encyclopedias such as Wikipedia are structured by entities (e.g., one per Wikipedia article). The ability to retrieve such entities given a query is fundamental for knowledge-intensive tasks such as entity linking and open-domain question answering. One way to understand current approaches is as classifiers among atomic labels, one for each entity. Their weight vectors are dense entity representations produced by encoding entity meta-information such as their descriptions. This approach leads to several shortcomings: (i) context and entity affinity is mainly captured through a vector dot product, potentially missing fine-grained interactions between the two; (ii) a large memory footprint is needed to store dense representations when considering large entity sets; (iii) an appropriately hard set of negative data has to be subsampled at training time. In this work, we propose GENRE, the first system that retrieves entities by generating their unique names, left to right, token by token, in an autoregressive fashion and conditioned on the context. This enables us to mitigate the aforementioned technical issues since: (i) the autoregressive formulation allows us to directly capture relations between context and entity name, effectively cross-encoding both; (ii) the memory footprint is greatly reduced because the parameters of our encoder-decoder architecture scale with vocabulary size, not entity count; (iii) the exact softmax loss can be efficiently computed without the need to subsample negative data. We show the efficacy of the approach, experimenting with more than 20 datasets on entity disambiguation, end-to-end entity linking and document retrieval tasks, achieving new state-of-the-art or very competitive results while using a tiny fraction of the memory footprint of competing systems. Finally, we demonstrate that new entities can be added by simply specifying their unambiguous name. Code and pre-trained models are available at https://github.com/facebookresearch/GENRE.
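
The inference mechanism described in the abstract (and listed in the keywords below) is constrained beam search: the decoder may only emit token sequences that spell out a valid entity name, which is enforced with a prefix trie over the tokenized names. The following is a minimal sketch of that idea, not the authors' implementation. It assumes a Hugging Face seq2seq model; the plain facebook/bart-large checkpoint, the toy three-entity candidate set, and the example query are placeholders for illustration, while the pre-trained GENRE models themselves are distributed from the repository linked above. The [START_ENT]/[END_ENT] mention markers follow the paper's entity-disambiguation setup. Exact special-token handling (decoder start token, forced BOS) varies by model and may need adjusting.

# Sketch: constrained beam search over a prefix trie of entity names.
from transformers import BartForConditionalGeneration, BartTokenizer


def build_trie(sequences):
    # Nested-dict prefix trie over tokenized entity names.
    trie = {}
    for seq in sequences:
        node = trie
        for tok in seq:
            node = node.setdefault(tok, {})
    return trie


def allowed_tokens(trie, prefix):
    # Token ids that may follow `prefix` according to the trie.
    node = trie
    for tok in prefix:
        if tok not in node:
            return []
        node = node[tok]
    return list(node.keys())


tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Toy candidate set; in GENRE this would be the full set of Wikipedia titles.
entities = ["Barack Obama", "Barack Obama Sr.", "Michelle Obama"]
entity_token_ids = [
    [model.config.decoder_start_token_id]
    + tokenizer(name, add_special_tokens=True)["input_ids"]
    for name in entities
]
trie = build_trie(entity_token_ids)

query = "The 44th president of the United States was [START_ENT] Obama [END_ENT]."
inputs = tokenizer(query, return_tensors="pt")

# prefix_allowed_tokens_fn restricts every beam-search step to continuations
# that stay inside the trie, so the decoder can only output exact entity names.
outputs = model.generate(
    **inputs,
    num_beams=5,
    max_length=20,
    prefix_allowed_tokens_fn=lambda batch_id, sent: allowed_tokens(
        trie, sent.tolist()
    ),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The trie stores only tokenized name strings, not a dense vector per entity, which is why the memory footprint stays small as the candidate set grows; the model parameters scale with the subword vocabulary, as the abstract notes. New entities can be supported at inference time by inserting their names into the trie without retraining.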

Type: Proceedings paper
Title: Autoregressive Entity Retrieval
Event: ICLR 2021 - 9th International Conference on Learning Representations
Open access status: An open access version is available from UCL Discovery
Publisher version: https://openreview.net/forum?id=5k8F6UU39V
Language: English
Additional information: This version is the version of record. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: entity retrieval, document retrieval, autoregressive language model, entity linking, end-to-end entity linking, entity disambiguation, constrained beam search
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10167460
