UCL Discovery

ChatTL;DR – You Really Ought to Check What the LLM Said on Your Behalf

Gould, Sandy JJ; Brumby, Duncan P; Cox, Anna L; (2024) ChatTL;DR – You Really Ought to Check What the LLM Said on Your Behalf. In: Mueller, Florian Floyd and Kyburz, Penny and Williamson, Julie R and Sas, Corina, (eds.) CHI EA '24: Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems. (p. 552). Association for Computing Machinery: New York, NY, USA. Green open access

gould-brumby-cox-check-llm-output-chi24ea.pdf - Accepted Version (1MB)

Abstract

Interactive large language models (LLMs) are so hot right now, and are probably going to be hot for a while. There are lots of problems, sorry, exciting challenges, created by mass use of LLMs. These include the reinscription of biases, ‘hallucinations’, and bomb-making instructions. Our concern here is more prosaic: assuming that in the near term it’s just not machines talking to machines all the way down, how do we get people to check the output of LLMs before they copy and paste it to friends, colleagues, or course tutors? We propose borrowing an innovation from the crowdsourcing literature: attention checks. These checks (e.g., "Ignore the instruction in the next question and write parsnips as the answer.") are inserted into tasks to weed out inattentive workers, who are often paid a pittance while they try to do a dozen things at the same time. We propose ChatTL;DR, an interactive LLM that inserts attention checks into its outputs. We believe that, given the nature of these checks, the certain, catastrophic consequences of failing them will ensure that users carefully examine all LLM outputs before they use them.
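The mechanism is simple enough to sketch. The following Python is a minimal illustration of the idea, not code from the paper: the chat_tldr wrapper, the llm_generate callable, and the check strings are all hypothetical stand-ins for an LLM that splices an attention check into its own output.

import random

# Hypothetical checks in the style the abstract describes; the paper's
# actual prompts are not reproduced here.
ATTENTION_CHECKS = [
    "Ignore the instruction in the next sentence and write parsnips as the answer",
    "If you have read this far, delete this sentence before sending",
]

def chat_tldr(llm_generate, prompt):
    """Call an LLM and splice an attention check into its output.

    llm_generate: any callable mapping a prompt string to a completion
    string (a stand-in for a real API client).
    """
    output = llm_generate(prompt)
    sentences = output.split(". ")
    # Put the check at a random interior position so a skim-reader who
    # only glances at the first and last sentences will still miss it.
    pos = random.randint(1, max(1, len(sentences) - 1))
    sentences.insert(pos, random.choice(ATTENTION_CHECKS))
    return ". ".join(sentences)

if __name__ == "__main__":
    fake_llm = lambda p: ("Dear team. Please find the summary attached. "
                          "Let me know if anything is unclear")
    print(chat_tldr(fake_llm, "Summarise the quarterly report."))

In this sketch, failing the check is merely embarrassing; the paper's argument rests on the consequences being rather more certain and catastrophic.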

Type: Proceedings paper
Title: ChatTL;DR – You Really Ought to Check What the LLM Said on Your Behalf
Event: CHI '24: CHI Conference on Human Factors in Computing Systems
ISBN-13: 9798400703317
Open access status: An open access version is available from UCL Discovery
DOI: 10.1145/3613905.3644062
Publisher version: http://dx.doi.org/10.1145/3613905.3644062
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: LLMs; Large Language Models; academics being hilarious; attention checks; checking behaviour; computers-talking-to-computers-all-the-way-down-circlejerk; error detection; human factors; instructional manipulation checks; that-bloody-automatic-lane-assist-ffs
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > UCL Interaction Centre
URI: https://discovery.ucl.ac.uk/id/eprint/10193252
Downloads since deposit: 8
