UCL Discovery

'Real Attackers Don't Compute Gradients': Bridging the Gap between Adversarial ML Research and Practice

Apruzzese, G; Anderson, HS; Dambra, S; Freeman, D; Pierazzi, F; Roundy, K; (2023) 'Real Attackers Don't Compute Gradients': Bridging the Gap between Adversarial ML Research and Practice. In: Proceedings - 2023 IEEE Conference on Secure and Trustworthy Machine Learning, SaTML 2023. (pp. 339-364). IEEE: Raleigh, NC, USA.

File: satml23_real-gradients.pdf - Accepted Version (1MB)

Abstract

Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simple tactics to subvert ML-driven systems, and as a result, security practitioners have not prioritized adversarial ML defenses. Motivated by the apparent gap between researchers and practitioners, this position paper aims to bridge the two domains. We first present three real-world case studies from which we can glean practical insights unknown or neglected in research. Next, we analyze all adversarial ML papers recently published in top security conferences, highlighting positive trends and blind spots. Finally, we state positions on precise and cost-driven threat modeling, collaboration between industry and academia, and reproducible research. We believe that our positions, if adopted, will increase the real-world impact of future endeavours in adversarial ML, bringing both researchers and practitioners closer to their shared goal of improving the security of ML systems.

Type: Proceedings paper
Title: 'Real Attackers Don't Compute Gradients': Bridging the Gap between Adversarial ML Research and Practice
Event: 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
Dates: 8 Feb 2023 - 10 Feb 2023
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/SaTML54575.2023.00031
Publisher version: https://doi.org/10.1109/satml54575.2023.00031
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Threat Model, Economics, Cybersecurity, Machine Learning, Research, Practice, Adversarial
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10201638
