Culture machines

Jones, R. ORCID: https://orcid.org/0000-0002-9426-727X (2024) Culture machines. Applied Linguistics Review. ISSN 1868-6311 (In Press)
DOI: 10.1515/applirev-2024-0188

Abstract

This paper discusses how the concept of culture is discursively constructed by large language models that are trained on massive collections of cultural artefacts and designed to produce probabilistic representations of culture based on this training data. It argues that, no matter how 'diverse' their training data, large language models will always be prone to stereotyping and oversimplification because of the mathematical models that underpin their operations. Efforts to build 'guardrails' into systems to reduce their tendency to stereotype can often result in the opposite problem, with issues around culture and ethnicity being 'invisibilised'. To illustrate this, examples are provided of the stereotypical linguistic styles and cultural attitudes models produce when asked to portray different kinds of 'persona'. The tendency of large language models to gravitate towards cultural and linguistic generalities is contrasted with trends in intercultural communication towards more fluid, socially situated understandings of interculturality, and implications for the future of cultural representation are discussed.