UCL Discovery

Do Invariances in Deep Neural Networks Align with Human Perception?

Nanda, V; Majumdar, A; Kolling, C; Dickerson, JP; Gummadi, KP; Love, BC; Weller, A; (2023) Do Invariances in Deep Neural Networks Align with Human Perception? In: Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023. (pp. 9277-9285). AAAI: Washington DC, USA. Green open access

Text: 2111.14726.pdf - Accepted Version (2MB)

Abstract

An evaluation criterion for safe and trustworthy deep learning is how well the invariances captured by representations of deep neural networks (DNNs) are shared with humans. We identify challenges in measuring these invariances. Prior works used gradient-based methods to generate identically represented inputs (IRIs), i.e., inputs which have identical representations (on a given layer) of a neural network, and thus capture invariances of a given network. One necessary criterion for a network's invariances to align with human perception is that its IRIs look “similar” to humans. Prior works, however, have mixed takeaways; some argue that later layers of DNNs do not learn human-like invariances, yet others seem to indicate otherwise. We argue that the loss function used to generate IRIs can heavily affect conclusions about the invariances of a network and is the primary reason for these conflicting findings. We propose an adversarial regularizer on the IRI-generation loss that finds IRIs which make any model appear to have very little shared invariance with humans. Based on this evidence, we argue that there is scope for improving models to have human-like invariances, and further, that meaningful comparisons between models should use IRIs generated with the regularizer-free loss. We then conduct an in-depth investigation of how different components (e.g., architectures, training losses, data augmentations) of the deep learning pipeline contribute to learning models that have good alignment with humans. We find that architectures with residual connections trained using a (self-supervised) contrastive loss with ℓp-ball adversarial data augmentation tend to learn invariances that are most aligned with humans. Code: github.com/nvedant07/Human-NN-Alignment. We strongly recommend reading the arXiv version of this paper: https://arxiv.org/abs/2111.14726.
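The core procedure described in the abstract, generating IRIs by gradient descent on a representation-matching loss, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' released code (see github.com/nvedant07/Human-NN-Alignment for that); the function and argument names are hypothetical, and it assumes `features` is the sub-network up to the layer of interest, with inputs scaled to [0, 1].

import torch

def generate_iri(features, x_ref, x_seed, steps=500, lr=0.01):
    """Optimise x_seed so that features(x_seed) matches features(x_ref)."""
    target = features(x_ref).detach()        # fixed target representation
    x = x_seed.clone().requires_grad_(True)  # input being optimised
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # representation-matching loss; the paper's adversarial regularizer
        # would add a further term here (see the arXiv version for its form)
        loss = torch.nn.functional.mse_loss(features(x), target)
        loss.backward()
        opt.step()
        x.data.clamp_(0, 1)                  # keep pixels in a valid range
    return x.detach()

The resulting image has (approximately) the same layer representation as the reference; whether it also looks similar to a human is the alignment criterion the paper investigates.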

Type: Proceedings paper
Title: Do Invariances in Deep Neural Networks Align with Human Perception?
Event: 37th AAAI Conference on Artificial Intelligence, AAAI 2023
Location: Washington, D.C., USA
Dates: 7 Feb 2023 - 14 Feb 2023
ISBN-13: 9781577358800
Open access status: An open access version is available from UCL Discovery
Publisher version: https://ojs.aaai.org/index.php/AAAI/issue/archive
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > Experimental Psychology
URI: https://discovery.ucl.ac.uk/id/eprint/10176094
