Chakraborti, T., Banerji, C. R. S., Marandon, A., Hellon, V., Mitra, R., Lehmann, B., Bräuninger, L., McGough, S., Turkay, C., Frangi, A. F., Bianconi, G., Li, W. (ORCID: https://orcid.org/0000-0003-2878-3185), Rackham, O., Parashar, D., Harbron, C. and MacArthur, B. (2025) Personalized uncertainty quantification in artificial intelligence. Nature Machine Intelligence, 7, pp. 522-530. ISSN 2522-5839. doi: 10.1038/s42256-025-01024-8
Abstract/Summary
Artificial intelligence (AI) tools are increasingly being used to help make consequential decisions about individuals. While AI models may be accurate on average, they can simultaneously be highly uncertain about outcomes associated with specific individuals or groups of individuals. For high-stakes applications (such as healthcare and medicine, defence and security, banking and finance), AI decision-support systems must be able to make personalized assessments of uncertainty in a rigorous manner. However, the statistical frameworks needed to do so are currently incomplete. Here, we outline current approaches to personalized uncertainty quantification (PUQ) and define a set of grand challenges associated with the development and use of PUQ in a range of areas, including multimodal AI, explainable AI, generative AI and AI fairness.
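The distinction the abstract draws, between a model that is accurate on average and one whose uncertainty differs sharply across individuals, can be made concrete with split conformal prediction, one standard approach to distribution-free per-instance uncertainty. The sketch below is illustrative only (synthetic data, a toy binned-mean regressor, and normalized conformity scores); it is not the method proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: the noise level grows with the feature x, so a model
# that is accurate on average is far more uncertain about some
# individuals than about others.
n = 4000
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.05 + 0.5 * x, n)

x_tr, y_tr = x[: n // 2], y[: n // 2]    # used to fit the model
x_cal, y_cal = x[n // 2:], y[n // 2:]    # held out for calibration

# Stand-in model: per-bin mean prediction plus a per-bin residual scale
# (any regressor paired with a local difficulty estimate would do).
n_bins = 20
edges = np.linspace(0.0, 1.0, n_bins + 1)

def bin_of(v):
    return np.clip(np.digitize(v, edges) - 1, 0, n_bins - 1)

b_tr = bin_of(x_tr)
mu = np.array([y_tr[b_tr == b].mean() for b in range(n_bins)])
sig = np.array([np.abs(y_tr[b_tr == b] - mu[b]).mean() for b in range(n_bins)])

def predict(v):
    b = bin_of(np.asarray(v))
    return mu[b], sig[b]

# Normalized conformity scores: |residual| / local difficulty estimate.
mu_cal, sig_cal = predict(x_cal)
scores = np.abs(y_cal - mu_cal) / sig_cal

# Finite-sample quantile for 90% marginal coverage (split conformal).
alpha = 0.1
m = len(scores)
q = np.quantile(scores, min(np.ceil((m + 1) * (1 - alpha)) / m, 1.0))

# Personalized interval for an individual x0: mu(x0) +/- q * sig(x0),
# wider for individuals the model finds harder to predict.
half_width = {x0: q * predict([x0])[1][0] for x0 in (0.1, 0.9)}
```

Because the scores are scaled by a local difficulty estimate, the resulting intervals keep the same overall coverage guarantee while being wider for individuals in the noisy region (here, large x) than in the quiet one, which is exactly the per-individual assessment the abstract calls for.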
| Item Type | Article |
| URI | https://centaur.reading.ac.uk/id/eprint/122008 |
| Identification Number/DOI | 10.1038/s42256-025-01024-8 |
| Refereed | Yes |
| Divisions | Henley Business School > Digitalisation, Marketing and Entrepreneurship |
| Publisher | Nature |