Interpretable Deep Learning for Brain Tumor Diagnosis: Occlusion Sensitivity-Driven Explainability in MRI Classification
DOI: https://doi.org/10.21015/vtse.v13i2.2082

Abstract
Magnetic resonance imaging (MRI) serves as a crucial diagnostic tool, particularly for brain tumors, where early detection significantly improves patient prognosis. The growing use of deep learning in medical imaging has led to substantial progress, yet the opaque nature of these models creates barriers to clinical acceptance, especially for critical applications such as tumor diagnosis. Our research applies explainable AI (XAI) techniques to improve the transparency of CNN-based brain tumor detection using MRI data. Working with a dataset containing 7,022 images spanning four tumor categories, our model attains 80% accuracy while employing occlusion sensitivity analysis to produce visual interpretations. These heatmaps identify the regions most influential to each prediction, giving clinicians insight into the model's decision process. This XAI integration enhances both understanding and accountability in healthcare AI systems, facilitating more reliable diagnostic tools.

Precise early identification of brain tumors through MRI dramatically affects survival outcomes, yet human interpretation remains time-consuming and variable. While CNNs show impressive classification results, their unclear reasoning limits clinical implementation. Our study introduces an XAI approach that pairs an accurate CNN classifier (80% on 7,024 multi-class scans) with occlusion analysis to create intuitive visual explanations. By methodically occluding image regions and measuring the resulting prediction changes, we generate heatmaps that accurately pinpoint tumor-distinguishing features, consistent with radiological assessment. Comparative results show that occlusion analysis outperforms gradient-based methods such as Grad-CAM in spatial precision for tumor classification (meningioma, glioma, pituitary). This research advances clinically useful AI by connecting model effectiveness with interpretability in brain tumor imaging.
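The occlusion procedure the abstract describes — slide a masking patch over the scan, re-run the classifier, and record how much the confidence in the predicted class drops at each position — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_fn`, the patch size, stride, and fill value are all assumed parameters.

```python
import numpy as np

def occlusion_sensitivity(image, predict_fn, target_class,
                          patch_size=16, stride=8, fill_value=0.0):
    """Slide an occluding patch over `image` and record how much the
    model's confidence in `target_class` drops at each patch position.

    predict_fn: maps an image to a sequence of class probabilities
                (a hypothetical stand-in for the trained CNN).
    Returns a 2-D heatmap; larger values mark regions whose occlusion
    hurts the prediction most, i.e. the most influential regions.
    """
    h, w = image.shape[:2]
    baseline = predict_fn(image)[target_class]
    rows = (h - patch_size) // stride + 1
    cols = (w - patch_size) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch_size + 1, stride)):
        for j, x in enumerate(range(0, w - patch_size + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch_size, x:x + patch_size] = fill_value
            # Large confidence drop => region was important.
            heatmap[i, j] = baseline - predict_fn(occluded)[target_class]
    return heatmap
```

Upsampling the heatmap to the input resolution and overlaying it on the MRI slice yields the visual explanations described above.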