UCL Discovery

When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS

Apruzzese, G; Fass, A; Pierazzi, F; (2024) When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS. In: AISec 2024 - Proceedings of the 2024 Workshop on Artificial Intelligence and Security, Co-Located with: CCS 2024. (pp. 149-160). ACM. Green open access.

aisec24-apruzzese.pdf - Accepted Version (Download, 1MB)

Abstract

We scrutinize the effects of “blind” adversarial perturbations against machine learning (ML)-based network intrusion detection systems (NIDS) affected by concept drift. There may be cases in which a real attacker – unable to access the ML-NIDS, and hence unaware that it is weakened by concept drift – attempts to evade the ML-NIDS with data perturbations. It is currently unknown whether the cumulative effect of such adversarial perturbations and concept drift leads to a greater or lower impact on the ML-NIDS. In this “open problem” paper, we seek to investigate this unusual, but realistic, setting; we are not interested in perfect-knowledge attackers. We begin by retrieving a publicly available dataset of documented network traces captured in a real, large (>300 hosts) organization. Overall, these traces include several years of raw traffic packets—both benign and malicious. Then, we adversarially manipulate malicious packets with problem-space perturbations, representing a physically realizable attack. Finally, we carry out the first exploratory analysis focused on comparing the effects of our “adversarial examples” with their respective unperturbed malicious variants in concept-drift scenarios. Through two case studies (a “short-term” one of 8 days; and a “long-term” one of 4 years) encompassing 48 detector variants, we find that, although our perturbations induce a lower detection rate in concept-drift scenarios, some perturbations yield adverse effects for the attacker in intriguing use cases. Overall, our study shows that the topics we covered remain an open problem that requires re-assessment by future research.
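The evaluation the abstract describes can be illustrated with a minimal sketch: train a detector on early traffic, then compare its detection rate on later (drifted) malicious samples with and without a blind perturbation. This is purely a toy illustration with synthetic data and hypothetical flow features, not the paper's actual pipeline or dataset; the feature set, the drift model, and the padding-style perturbation are all assumptions made for the example.

```python
# Toy sketch (synthetic data, hypothetical [duration, bytes, packets] features):
# comparing an ML-NIDS's detection rate on unperturbed vs. perturbed malicious
# flows under a temporal (concept-drift) train/test split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def flows(n, loc):
    """Generate n synthetic 3-feature flows centered at `loc`."""
    return rng.normal(loc=loc, scale=1.0, size=(n, 3))

# Training data from an earlier period: benign (label 0) and malicious (label 1).
X_train = np.vstack([flows(500, 0.0), flows(500, 2.0)])
y_train = np.array([0] * 500 + [1] * 500)

# Test-time malicious traffic has drifted (distribution shift).
X_test_mal = flows(300, 3.0)

# A "blind", problem-space-style perturbation: the attacker pads packets,
# inflating bytes/packets without any knowledge of the detector or the drift.
X_test_adv = X_test_mal + np.array([0.0, 1.0, 1.0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

dr_plain = clf.predict(X_test_mal).mean()  # detection rate, unperturbed
dr_adv = clf.predict(X_test_adv).mean()    # detection rate, perturbed
print(f"detection rate (unperturbed): {dr_plain:.2f}")
print(f"detection rate (perturbed):   {dr_adv:.2f}")
```

Note that in this toy setup the padding can move malicious flows *further* from the benign distribution, raising the detection rate — echoing the paper's observation that some perturbations backfire on the attacker.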

Type: Proceedings paper
Title: When Adversarial Perturbations meet Concept Drift: an Exploratory Analysis on ML-NIDS
Event: CCS '24: ACM SIGSAC Conference on Computer and Communications Security
ISBN-13: 9798400712289
Open access status: An open access version is available from UCL Discovery
DOI: 10.1145/3689932.3694757
Publisher version: https://doi.org/10.1145/3689932.3694757
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Network intrusion detection, adversarial example, machine learning, mcfp, ctu13, data drift, distribution shift, temporal evaluation
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10205190