UCL Discovery

ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

Liu, Yugeng; Wen, Rui; He, Xinlei; Salem, Ahmed; Zhang, Zhikun; Backes, Michael; De Cristofaro, Emiliano; ... Zhang, Yang (2022) ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. In: Proceedings of the 31st USENIX Security Symposium (pp. 4525-4542). USENIX.

Full text: De Cristofaro_Holistic Risk Assessment of Inference Attacks Against Machine Learning Models_VoR.pdf (2MB)

Abstract

Inference attacks against Machine Learning (ML) models allow adversaries to learn sensitive information about training data, model parameters, etc. While researchers have studied several kinds of attacks in depth, they have done so in isolation. As a result, we lack a comprehensive picture of the risks caused by the attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationships among them, or the effectiveness of possible defenses. In this paper, we fill this gap by presenting a first-of-its-kind holistic risk assessment of different inference attacks against machine learning models. We concentrate on four attacks -- namely, membership inference, model inversion, attribute inference, and model stealing -- and establish a threat model taxonomy. Our extensive experimental evaluation, run on five model architectures and four image datasets, shows that the complexity of the training dataset plays an important role in the attacks' performance, while the effectiveness of model stealing and membership inference attacks is negatively correlated. We also show that defenses like DP-SGD and Knowledge Distillation can only mitigate some of the inference attacks. Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models, and equally serves as a benchmark tool for researchers and practitioners.
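To make the attack surface concrete, below is a minimal PyTorch sketch of the simplest of the four attacks: a confidence-threshold membership inference test. This is a generic textbook baseline, not the paper's ML-Doctor implementation; the model, loaders, and threshold value are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def membership_scores(model, loader, device="cpu"):
        # Return the model's top softmax confidence for each sample.
        # Training points (members) tend to receive higher confidence
        # than unseen points, which is the signal this baseline exploits.
        model.eval()
        scores = []
        for x, _ in loader:
            probs = F.softmax(model(x.to(device)), dim=1)
            scores.append(probs.max(dim=1).values.cpu())
        return torch.cat(scores)

    # Hypothetical usage: flag a sample as a training-set member when its
    # confidence exceeds a threshold calibrated on shadow data.
    # preds = membership_scores(target_model, candidate_loader) > 0.9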
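On the defense side, here is a minimal sketch of DP-SGD training via the Opacus library (an assumption: the paper evaluates DP-SGD as a defense but does not prescribe Opacus, and the toy model, random stand-in data, and noise settings below are placeholders).

    import torch
    from torch import nn, optim
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    # Toy classifier and random stand-in data.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = optim.SGD(model.parameters(), lr=0.05)
    train_loader = DataLoader(
        TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,))),
        batch_size=32,
    )

    # Wrap model, optimizer, and loader so each step clips per-sample
    # gradients and adds Gaussian noise -- the two knobs that trade
    # model utility for privacy against inference attacks.
    privacy_engine = PrivacyEngine()
    model, optimizer, train_loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        noise_multiplier=1.1,  # placeholder noise scale
        max_grad_norm=1.0,     # per-sample gradient clipping bound
    )

    criterion = nn.CrossEntropyLoss()
    for x, y in train_loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()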

Type: Proceedings paper
Title: ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Event: 31st USENIX Security Symposium
Location: Boston, MA, USA
Dates: 10th-12th August 2022
ISBN-13: 978-1-939133-31-1
Open access status: An open access version is available from UCL Discovery
Publisher version: https://www.usenix.org/conference/usenixsecurity22...
Language: English
Additional information: This version is the version of record. For information on re-use, please refer to the publisher's terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10158923
