Personalized uncertainty quantification in artificial intelligence
Chakraborti, T., Banerji, C. R. S., Marandon, A., Hellon, V., Mitra, R., Lehmann, B., Bräuninger, L., McGough, S., Turkay, C., Frangi, A. F., Bianconi, G., Li, W.
DOI: 10.1038/s42256-025-01024-8

Abstract

Artificial intelligence (AI) tools are increasingly being used to help make consequential decisions about individuals. While AI models may be accurate on average, they can simultaneously be highly uncertain about outcomes associated with specific individuals or groups of individuals. For high-stakes applications (such as healthcare and medicine, defence and security, and banking and finance), AI decision-support systems must be able to make personalized assessments of uncertainty in a rigorous manner. However, the statistical frameworks needed to do so are currently incomplete. Here, we outline current approaches to personalized uncertainty quantification (PUQ) and define a set of grand challenges associated with the development and use of PUQ in a range of areas, including multimodal AI, explainable AI, generative AI and AI fairness.
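To make the gap between average-case accuracy and per-individual uncertainty concrete, the sketch below shows split conformal prediction, one widely used uncertainty quantification method. This is an illustrative example, not a method prescribed by the paper: the model choice (a random forest), the synthetic heteroscedastic data, and the miscoverage level alpha = 0.1 are all assumptions made here for demonstration.

```python
# Minimal sketch of split conformal prediction (illustrative assumptions
# throughout; the paper does not specify this method or these settings).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
# Heteroscedastic noise: some individuals are inherently harder to predict.
y = X[:, 0] + 0.5 * rng.normal(size=1000) * (1 + np.abs(X[:, 1]))

# Fit the model on one half of the data, calibrate intervals on the other.
X_fit, X_cal, y_fit, y_cal = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_fit, y_fit)

# Calibration scores: absolute residuals on held-out data.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected quantile gives (1 - alpha) marginal coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Attach a prediction interval to each new individual's prediction.
X_new = rng.normal(size=(3, 5))
for p in model.predict(X_new):
    print(f"90% prediction interval: [{p - q:.2f}, {p + q:.2f}]")
```

Note that the interval half-width q is identical for every individual: plain split conformal prediction guarantees only marginal, population-average coverage. This is precisely the shortfall the abstract points to, since rigorous per-individual (conditional) guarantees require further machinery that current statistical frameworks only partially provide.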