
Disparate Vulnerability to Membership Inference Attacks

Yaghini, M; Kulynych, B; Cherubin, G; Veale, M; Troncoso, C; (2022) Disparate Vulnerability to Membership Inference Attacks. Proceedings on Privacy Enhancing Technologies, 2022 (1), pp. 460-480. 10.2478/popets-2022-0023. Green open access

Text: Veale_10.2478_popets-2022-0023.pdf, Download (781kB)

Abstract

A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model's training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional generalization. Second, we derive connections of disparate vulnerability to algorithmic fairness and to differential privacy. We show that fairness can only prevent disparate vulnerability against limited classes of adversaries. Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model. We show that estimating disparate vulnerability to MIAs by naïvely applying existing attacks can lead to overestimation. We then establish which attacks are suitable for estimating disparate vulnerability, and provide a statistical framework for doing so reliably. We conduct experiments on synthetic and real-world data, finding statistically significant evidence of disparate vulnerability in realistic settings.
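The central quantity in the abstract, per-subgroup MIA success rate, can be illustrated with a minimal sketch. The snippet below is an illustration only, not the paper's attack or its estimation framework: it trains a model on hypothetical synthetic data with two subgroups, mounts a simple loss-threshold membership inference attack, and reports the attack's balanced accuracy separately per subgroup. All data, function names, and the threshold choice are assumptions made for the example.

```python
# Minimal illustration of disparate vulnerability to a membership inference
# attack (MIA). This is a hedged sketch, NOT the paper's method: it uses a
# simple loss-threshold attack on synthetic data with two subgroups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, group):
    # Hypothetical synthetic data: the feature distribution differs slightly by subgroup.
    shift = 0.0 if group == 0 else 0.5
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > shift).astype(int)
    return X, y, np.full(n, group)

# Draw training ("member") and held-out ("non-member") records for each subgroup;
# the minority subgroup is deliberately smaller.
parts = [make_data(500, 0), make_data(100, 1)]
X_tr = np.vstack([p[0] for p in parts]); y_tr = np.concatenate([p[1] for p in parts])
g_tr = np.concatenate([p[2] for p in parts])
parts = [make_data(500, 0), make_data(100, 1)]
X_te = np.vstack([p[0] for p in parts]); y_te = np.concatenate([p[1] for p in parts])
g_te = np.concatenate([p[2] for p in parts])

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def per_example_loss(X, y):
    # Cross-entropy loss of the target model on each individual record.
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_in, loss_out = per_example_loss(X_tr, y_tr), per_example_loss(X_te, y_te)
threshold = np.median(np.concatenate([loss_in, loss_out]))  # crude attack threshold

def attack_accuracy(mask_in, mask_out):
    # Predict "member" when loss is below the threshold; report balanced accuracy.
    tpr = np.mean(loss_in[mask_in] < threshold)
    tnr = np.mean(loss_out[mask_out] >= threshold)
    return 0.5 * (tpr + tnr)

for group in (0, 1):
    acc = attack_accuracy(g_tr == group, g_te == group)
    print(f"subgroup {group}: MIA balanced accuracy = {acc:.3f} "
          f"(membership advantage = {2 * acc - 1:+.3f})")
```

In this toy setup, the gap between the two printed per-subgroup success rates is the kind of disparity the paper studies; the abstract's point is that estimating such gaps reliably requires care in choosing the attack and a proper statistical framework, since naïve estimates can overstate the disparity.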

Type: Article
Title: Disparate Vulnerability to Membership Inference Attacks
Open access status: An open access version is available from UCL Discovery
DOI: 10.2478/popets-2022-0023
Publisher version: https://doi.org/10.2478/popets-2022-0023
Language: English
Additional information: © 2022 Bogdan Kulynych et al., published by Sciendo. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
Keywords: membership inference attacks, machine learning, fairness
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL SLASH
UCL > Provost and Vice Provost Offices > UCL SLASH > Faculty of Laws
URI: https://discovery.ucl.ac.uk/id/eprint/10134618
