Personalized uncertainty quantification in artificial intelligence

Chakraborti, T., Banerji, C. R. S., Marandon, A., Hellon, V., Mitra, R., Lehmann, B., Bräuninger, L., McGough, S., Turkay, C., Frangi, A. F., Bianconi, G., Li, W. (ORCID: https://orcid.org/0000-0003-2878-3185), Rackham, O., Parashar, D., Harbron, C. and MacArthur, B. (2025) Personalized uncertainty quantification in artificial intelligence. Nature Machine Intelligence, 7, pp. 522–530. ISSN 2522-5839

Accepted Version (PDF, 182 kB)
Restricted to Repository staff only until 23 October 2025.

It is advisable to refer to the publisher's version if you intend to cite from this work.

To link to this item, use the DOI: 10.1038/s42256-025-01024-8

Abstract/Summary

Artificial intelligence (AI) tools are increasingly being used to help make consequential decisions about individuals. While AI models may be accurate on average, they can simultaneously be highly uncertain about outcomes associated with specific individuals or groups of individuals. For high-stakes applications (such as healthcare and medicine, defence and security, and banking and finance), AI decision-support systems must be able to make personalized assessments of uncertainty in a rigorous manner. However, the statistical frameworks needed to do so are currently incomplete. Here, we outline current approaches to personalized uncertainty quantification (PUQ) and define a set of grand challenges associated with the development and use of PUQ in a range of areas, including multimodal AI, explainable AI, generative AI and AI fairness.
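The abstract does not specify an implementation, but a minimal sketch can make the gap it describes concrete. Split conformal prediction is one standard uncertainty-quantification approach: it attaches a prediction interval to each individual prediction by calibrating on held-out data. The dataset, model, and alpha level below are hypothetical illustrations, not code from the paper.

```python
# Illustrative sketch (not from the paper): split conformal prediction.
# It attaches an interval to every individual prediction, but its
# coverage guarantee is only marginal (correct on average over the
# population), which is exactly the limitation that motivates PUQ.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression data, split into train / calibration / test.
X, y = make_regression(n_samples=2000, n_features=5, noise=10.0, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores on the held-out calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected quantile for target coverage 1 - alpha.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Per-individual prediction intervals: point prediction +/- q.
preds = model.predict(X_test)
lower, upper = preds - q, preds + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage: {coverage:.3f} (target {1 - alpha:.2f})")
```

Note that the interval half-width q here is identical for every individual, so the method can be well calibrated on average while remaining miscalibrated for specific individuals or subgroups, the failure mode the abstract highlights.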

Item Type: Article
Refereed: Yes
Divisions: Henley Business School > Digitalisation, Marketing and Entrepreneurship
ID Code: 122008
Publisher: Nature
