eprintid: 10205333
rev_number: 9
eprint_status: archive
userid: 699
dir: disk0/10/20/53/33
datestamp: 2025-02-27 15:08:16
lastmod: 2025-02-27 15:08:16
status_changed: 2025-02-27 15:08:16
type: article
metadata_visibility: show
sword_depositor: 699
creators_name: Warner, Mark
creators_name: Strohmayer, Angelika
creators_name: Higgs, Matthew
creators_name: Coventry, Lynne
title: A critical reflection on the use of toxicity detection algorithms in proactive content moderation systems
ispublished: pub
divisions: UCL
divisions: B04
divisions: F48
keywords: Proactive moderation, Moderation, Hate speech, Context, Toxicity-detection, Abusability
note: This is an Open Access article published under a Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0/).
abstract: Toxicity detection algorithms, originally designed for reactive content moderation systems, are being deployed into proactive end-user interventions to moderate content. Yet, there has been little critique on the use of these algorithms within this moderation paradigm. We conducted design workshops with four stakeholder groups, asking participants to embed a toxicity detection algorithm into an imagined mobile phone keyboard. This allowed us to critically explore how such algorithms could be used to proactively reduce the sending of toxic content. We found contextual factors such as platform culture and affordances, and scales of abuse, impacting on perceptions of toxicity and effectiveness of the system. We identify different types of end-users across a continuum of intention to send toxic messages, from unaware users, to those that are determined and organised. Finally, we highlight the potential for certain end-user groups to misuse these systems to validate their attacks, to gamify hate, and to manipulate algorithmic models to exacerbate harm.
date: 2025-04
date_type: published
publisher: Elsevier BV
official_url: https://doi.org/10.1016/j.ijhcs.2025.103468
oa_status: green
full_text_type: pub
language: eng
primo: open
primo_central: open_green
verified: verified_manual
elements_id: 2363767
doi: 10.1016/j.ijhcs.2025.103468
lyricists_name: Warner, Mark
lyricists_id: MWARN90
actors_name: Flynn, Bernadette
actors_id: BFFLY94
actors_role: owner
full_text_status: public
publication: International Journal of Human-Computer Studies
volume: 198
article_number: 103468
issn: 1071-5819
citation: Warner, Mark; Strohmayer, Angelika; Higgs, Matthew; Coventry, Lynne (2025) A critical reflection on the use of toxicity detection algorithms in proactive content moderation systems. International Journal of Human-Computer Studies, 198, Article 103468. 10.1016/j.ijhcs.2025.103468 <https://doi.org/10.1016/j.ijhcs.2025.103468>. Green open access
document_url: https://discovery.ucl.ac.uk/id/eprint/10205333/1/Warner_1-s2.0-S1071581925000254-main.pdf