Explainability and trust in deep learning for cancer imaging: systematic barriers, clinical misalignment, and a translational roadmap

Text (Open Access) - Published Version
Available under License Creative Commons Attribution.


It is advisable to refer to the publisher's version if you intend to cite from this work.


Borra, S., Dey, N., Fong, S., Sherratt, R. S. ORCID: https://orcid.org/0000-0001-7899-4445 and Shi, F. (2026) Explainability and trust in deep learning for cancer imaging: systematic barriers, clinical misalignment, and a translational roadmap. Cancers, 18 (19). 1361. ISSN 2072-6694 doi: 10.3390/cancers18091361

Abstract/Summary

Deep learning (DL) has transformed cancer imaging by enabling automated tumour detection, classification, and risk prediction. Despite impressive diagnostic performance, limited explainability and poor uncertainty calibration continue to restrict clinical integration. This review is guided by five research questions that examine the challenges, impact, and translational implications of explainable artificial intelligence (XAI) in oncology imaging. We identify key barriers to trust, including dataset bias, shortcut learning, the opacity of convolutional neural networks, and workflow misalignment. Evidence suggests that explainable models can increase clinician confidence, reduce false positives, and improve collaborative decision-making when explanations are faithful, semantically meaningful, and uncertainty-aware. We evaluate architectural strategies that embed interpretability, such as concept-bottleneck models, prototype-based learning, and attention regularization, alongside post hoc techniques. Beyond performance metrics, we examine how interpretable AI aligns with clinical reasoning processes and analyse the regulatory, ethical, and medico-legal considerations influencing deployment. The findings indicate that explainability alone is insufficient: durable trust requires epistemic alignment, prospective validation, lifecycle governance, and equity-focused evaluation. By reframing explainability as a structural design principle rather than a supplementary feature, this review outlines a pathway toward accountable and clinically dependable AI systems in oncology.


Item Type: Article
URI: https://centaur.reading.ac.uk/id/eprint/129524
Identification Number/DOI: 10.3390/cancers18091361
Refereed: Yes
Divisions: Life Sciences > School of Biological Sciences > Biomedical Sciences; Life Sciences > School of Biological Sciences > Department of Bio-Engineering
Publisher: MDPI
