Culture machines

Jones, R. ORCID: https://orcid.org/0000-0002-9426-727X (2024) Culture machines. Applied Linguistics Review. ISSN 1868-6311 (In Press)

Text - Accepted Version (817kB)
· Restricted to Repository staff only
· The copyright of this document has not been checked yet. This may affect its availability.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

To link to this item DOI: 10.1515/applirev-2024-0188

Abstract/Summary

This paper discusses the way the concept of culture is discursively constructed by large language models that are trained on massive collections of cultural artefacts and designed to produce probabilistic representations of culture based on this training data. It makes the argument that, no matter how ‘diverse’ their training data is, large language models will always be prone to stereotyping and over-simplification because of the mathematical models that underpin their operations. Efforts to build ‘guardrails’ into systems to reduce their tendency to stereotype can often result in the opposite problem, with issues around culture and ethnicity being ‘invisiblised’. To illustrate this, examples are provided of the stereotypical linguistic styles and cultural attitudes models produce when asked to portray different kinds of ‘persona’. The tendency of large language models to gravitate towards cultural and linguistic generalities is contrasted with trends in intercultural communication towards more fluid, socially situated understandings of interculturality, and implications for the future of cultural representation are discussed.

Item Type: Article
Refereed: Yes
Divisions: Arts, Humanities and Social Science > School of Literature and Languages > English Language and Applied Linguistics
ID Code: 117625
Publisher: De Gruyter

