UCL Discovery

Adversarially regularising neural NLI models to integrate logical background knowledge

Minervini, P; Riedel, S; (2018) Adversarially regularising neural NLI models to integrate logical background knowledge. In: Proceedings of the 22nd Conference on Computational Natural Language Learning. (pp. 65-74). Association for Computational Linguistics: Brussels, Belgium. Green open access

Text: K18-1007.pdf - Published Version (335kB)

Abstract

Adversarial examples are inputs to machine learning models designed to cause the model to make a mistake. They are useful for understanding the shortcomings of machine learning models, interpreting their results, and for regularisation. In NLP, however, most example generation strategies produce input text by using known, pre-specified semantic transformations, requiring significant manual effort and in-depth understanding of the problem and domain. In this paper, we investigate the problem of automatically generating adversarial examples that violate a set of given First-Order Logic constraints in Natural Language Inference (NLI). We reduce the problem of identifying such adversarial examples to a combinatorial optimisation problem, by maximising a quantity measuring the degree of violation of such constraints and by using a language model for generating linguistically-plausible examples. Furthermore, we propose a method for adversarially regularising neural NLI models for incorporating background knowledge. Our results show that, while the proposed method does not always improve results on the SNLI and MultiNLI datasets, it significantly and consistently increases the predictive accuracy on adversarially-crafted datasets – up to a 79.6% relative improvement – while drastically reducing the number of background knowledge violations. Furthermore, we show that adversarial examples transfer among model architectures, and that the proposed adversarial training procedure improves the robustness of NLI models to adversarial examples.
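The abstract describes turning a logical rule into a differentiable measure of violation and adding it to the training objective as a regulariser. The following is a minimal sketch of that idea, assuming PyTorch and using the symmetry-of-contradiction rule as an example; the function names, the hinge-style violation measure, and the weighting scheme are illustrative assumptions, not the authors' exact implementation (in the paper, the sentence pairs fed to such a term are themselves found by a combinatorial search guided by a language model so that they are linguistically plausible).

```python
# Sketch of an inconsistency-based regulariser for an NLI model (assumed PyTorch).
# Rule used as an example: contradicts(s1, s2) -> contradicts(s2, s1).
# Names below are hypothetical, for illustration only.

import torch

def contradiction_symmetry_violation(p_forward, p_backward):
    """Degree of violation of the rule contradicts(s1, s2) -> contradicts(s2, s1).

    p_forward  : model probability of 'contradiction' for the pair (s1, s2)
    p_backward : model probability of 'contradiction' for the pair (s2, s1)

    The rule is violated when the forward probability exceeds the backward
    one, so we penalise only the positive part of the gap.
    """
    return torch.relu(p_forward - p_backward)

def regularised_loss(nll_loss, p_forward, p_backward, lam=0.1):
    """Supervised NLI loss plus a weighted inconsistency term computed on
    (adversarially chosen) sentence pairs; lam trades off the two objectives."""
    return nll_loss + lam * contradiction_symmetry_violation(p_forward, p_backward).mean()
```

Under this kind of formulation, generating an adversarial example amounts to searching for a sentence pair that maximises the violation term while a language model keeps the candidates fluent, and regularisation amounts to minimising that same term during training.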

Type: Proceedings paper
Title: Adversarially regularising neural NLI models to integrate logical background knowledge
Event: 22nd Conference on Computational Natural Language Learning
ISBN-13: 9781948087728
Open access status: An open access version is available from UCL Discovery
DOI: 10.18653/v1/K18-1007
Publisher version: http://dx.doi.org/10.18653/v1/K18-1007
Language: English
Additional information: ACL materials are Copyright © 1963–2019 ACL; other materials are copyrighted by their respective copyright holders. Materials prior to 2016 are licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International License. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed under a Creative Commons Attribution 4.0 International License. http://creativecommons.org/licenses/by/4.0/
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10090230
