Ruffle, James K; Foulon, Chris; Nachev, Parashkev (2023) The human cost of ethical artificial intelligence. Brain Structure and Function, 228, pp. 1365-1369. doi: 10.1007/s00429-023-02662-7
Abstract
Foundational models such as ChatGPT critically depend on the vast data scales that the internet uniquely enables. This implies exposure to material varying widely in logical sense, factual fidelity, moral value, and even legal status. Whereas data scaling is a technical challenge, soluble with greater computational resource, complex semantic filtering cannot be performed reliably without human intervention: the self-supervision that makes foundational models possible at least in part presupposes the abilities they seek to acquire. This unavoidably introduces the need for large-scale human supervision, not just of training input but also of model output, and imbues any model with subjectivity reflecting the beliefs of its creator. The pressure to minimise the cost of the former is in direct conflict with the pressure to maximise the quality of the latter. Moreover, it is unclear how complex semantics, especially in the realm of the moral, could ever be reduced to an objective function any machine could plausibly maximise. We suggest the development of foundational models necessitates urgent innovation in quantitative ethics and outline possible avenues for its realisation.