Challenges of deep learning in medical image analysis – improving explainability and trust

Dhar, T., Dey, N., Borra, S. and Sherratt, R. S. (ORCID: https://orcid.org/0000-0001-7899-4445) (2023) Challenges of deep learning in medical image analysis – improving explainability and trust. IEEE Transactions on Technology and Society, 4 (1). pp. 68-75. ISSN 2637-6415
It is advisable to refer to the publisher's version if you intend to cite from this work.

DOI: 10.1109/TTS.2023.3234203

Abstract/Summary

Deep learning has revolutionized disease detection and is helping the healthcare sector achieve more accurate and robust computer-aided diagnostic systems. Deep learning techniques enable automated, AI-based tools that require minimal human supervision for tasks such as diagnosing fractures, tumors, and internal hemorrhage, as well as preoperative planning and intra-operative guidance. However, deep learning also faces major challenges in the healthcare domain. This paper surveys the main challenges that deep learning researchers and engineers face in medical image diagnosis: the unavailability of balanced, annotated medical image data; adversarial attacks on deep neural networks arising from noisy medical image data; a lack of trust among users and patients; and ethical and privacy issues related to medical data. The study explores the possibilities of AI autonomy in healthcare by addressing society's concerns about trusting autonomous intelligent systems.
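As a concrete illustration of the adversarial-attack challenge named in the abstract (this sketch is not part of the original record), the following shows the fast gradient sign method (FGSM), a standard way to probe a classifier's fragility to small input perturbations; the model, loss, and epsilon value are assumptions chosen for illustration only.

```python
# Minimal FGSM sketch (illustrative only; the model and epsilon are assumed,
# not taken from the paper).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` using FGSM.

    A small, often imperceptible perturbation in the direction of the
    loss gradient can flip a classifier's prediction -- the kind of
    fragility the abstract cites for noisy medical images.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of its gradient, then clamp
    # back to the valid intensity range [0, 1].
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

Evaluating a diagnostic model on such perturbed inputs is one common way to assess its robustness before clinical deployment.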