Lesion-Aware Graph-Augmented Deep Framework for Pancreatic Cancer Detection from CT
DOI: https://doi.org/10.21015/vtse.v14i1.2322

Abstract
Early detection of pancreatic cancer remains challenging because lesions are small, iso-attenuating, and partially obscured by surrounding arteries and ducts. This study proposes an end-to-end pipeline comprising physics-aware pre-processing, hybrid U-Net tumour masking, radiomic texture mining, a MobileViT backbone with a lightweight graph head, lesion-aware feature fusion, and post-hoc probability calibration. The pipeline is evaluated on the 1,418 high-resolution DICOM slices (typically 512×512) of the Pancreatic CT Images collection. On the held-out test set, slice-level performance reaches 0.942 accuracy, 0.936 macro-F1, 0.972 AUROC, and 0.969 AUPRC; aggregating predictions at the patient level raises these to 0.960, 0.957, 0.986, and 0.983, respectively. Temperature scaling reduces the Expected Calibration Error from 0.031 to 0.009, yielding more trustworthy risk estimates. The segmentation branch attains a Dice score of 0.82±0.09 (median 0.84; HD95 7.3 mm), though accuracy degrades for sub-centimetre tumours. Ablations show that lesion-aware attention contributes the largest gains in discrimination and calibration (removing it costs 1.4 AUROC points and adds 0.012 ECE), with smaller contributions from the graph head (−0.7 AUROC points) and radiomics (−0.3 points). Site shift is minimal (worst site: AUROC 0.958), and performance degrades gracefully under robustness stress (motion S3: AUROC 0.940; noise S3: 0.945; bias-field S3: 0.949). A Youden-optimal threshold achieves 0.927 sensitivity and 0.935 specificity, while a screening operating point achieves 0.965 sensitivity and 0.881 specificity. With a throughput of roughly 28 ms per slice (1.48 s per study) and 98.5% coverage after quality gates, the system is practical for real-world use.
These findings indicate that calibrated, lesion-aware, graph-augmented fusion enables accurate pancreatic cancer detection on CT with low computational overhead.
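The patient-level gains come from pooling slice-level probabilities into a single study score. The paper does not specify its aggregation operator; the sketch below assumes a hypothetical top-k mean over the most suspicious slices, a common choice for lesion detection.

```python
import numpy as np

def patient_probability(slice_probs, top_k=3):
    """Aggregate slice-level probabilities into one patient-level score.

    Hypothetical rule (not stated in the paper): average the top-k most
    suspicious slices, so a few strongly positive slices dominate while
    single-slice noise is damped.
    """
    s = np.sort(np.asarray(slice_probs, dtype=float))[::-1]  # descending
    return float(s[:top_k].mean())
```

With fewer than `top_k` slices the mean simply runs over all of them; `top_k` would be tuned on a validation split.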
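Temperature scaling, the post-hoc calibration used here, fits a single scalar T that divides the validation logits so that predicted probabilities match observed frequencies; ECE then measures the remaining bin-weighted gap between predicted probability and empirical accuracy. A minimal binary-case sketch (grid-search fit; the paper's exact optimiser and bin count are not stated):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin-weighted mean |empirical frequency - mean predicted prob|."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return ece

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature T minimising validation NLL (grid search)."""
    def nll(T):
        p = 1.0 / (1.0 + np.exp(-logits / T))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    return min(grid, key=nll)
```

Because T only rescales logits, the ranking of cases (and hence AUROC) is unchanged; only the probability values, and thus the ECE, improve.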
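The two reported operating points come from sweeping the ROC curve: the Youden-optimal threshold maximises J = sensitivity + specificity − 1, while the screening point deliberately trades specificity for higher sensitivity. An illustrative sketch of the Youden selection (not the authors' code):

```python
import numpy as np

def youden_threshold(probs, labels):
    """Return (threshold, sensitivity, specificity) maximising Youden's J.

    A case is called positive when its probability >= threshold.
    """
    order = np.argsort(probs)[::-1]          # most suspicious first
    p, y = probs[order], labels[order]
    tps = np.cumsum(y)                       # true positives at each cut
    fps = np.cumsum(1 - y)                   # false positives at each cut
    sens = tps / max(y.sum(), 1)
    spec = 1 - fps / max((1 - y).sum(), 1)
    k = int(np.argmax(sens + spec - 1))      # cut with the largest J
    return float(p[k]), float(sens[k]), float(spec[k])
```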
References
M. Ramaekers, C. G. Viviers, T. A. Hellström, L. J. Ewals, N. Tasios, I. Jacobs, and M. D. Luyer, “Improved pancreatic cancer detection and localization on CT scans: A computer-aided detection model utilizing secondary features,” Cancers, vol. 16, no. 13, Art. no. 2403, 2024, doi: 10.3390/cancers16132403.
B. Dane, J. Kim, K. Qian, and A. Megibow, “Pancreatic cyst prevalence and detection with photon counting CT compared with conventional energy integrating detector CT,” European Journal of Radiology, vol. 175, Art. no. 111437, 2024, doi: 10.1016/j.ejrad.2024.111437.
W. Liu, B. Zhang, T. Liu, J. Jiang, and Y. Liu, “Artificial intelligence in pancreatic image analysis: A review,” Sensors, vol. 24, no. 14, Art. no. 4749, 2024, doi: 10.3390/s24144749.
S. F. Șolea, M. C. Brisc, A. Orășeanu, F. C. Venter, C. M. Brisc, R. M. Șolea, and C. Brisc, “Revolutionizing the pancreatic tumour diagnosis: Emerging trends in imaging technologies: A systematic review,” Medicina, vol. 60, no. 5, Art. no. 695, 2024, doi: 10.3390/medicina60050695.
J. A. Decker, J. Becker, M. Härting, B. Jehs, F. Risch, L. Canalini, and S. Bette, “Optimal conspicuity of pancreatic ductal adenocarcinoma in virtual monochromatic imaging reconstructions on a photon-counting detector CT: Comparison to conventional MDCT,” Abdominal Radiology, vol. 49, no. 1, pp. 103–116, 2024, doi: 10.1007/s00261-023-04066-1.
N. Usanase, D. U. Ozsahin, L. R. David, B. Uzun, A. J. Hussain, and I. Ozsahin, “Deep learning-based CT-scan image classification for accurate detection of pancreatic cancer: A comparative study of different pre-trained models,” in Proc. 17th Int. Conf. Development in eSystem Engineering (DeSE), 2024, pp. 358–363, doi: 10.1109/DeSE.2024.1234567.
M. Á. Berbís, F. P. Godino, J. Rodríguez-Comas, E. Nava, R. García-Figueiras, S. Baleato-González, and A. Luna, “Radiomics in CT and MR imaging of the liver and pancreas: Tools with potential for clinical application,” Abdominal Radiology, vol. 49, no. 1, pp. 322–340, 2024, doi: 10.1007/s00261-023-04067-0.
A. Kashikar, S. Maurya, T. Likhar, K. Mirza, A. K. Yadav, and D. S. Asudani, “Pancreatic cancer diagnosis from CT scan images using machine learning methods,” in Proc. 7th Int. Conf. Contemporary Computing and Informatics (IC3I), 2024, pp. 1589–1595, doi: 10.1109/IC3I.2024.1234567.
J. Jabez, L. Kartheesan, R. Surendran, U. Savitha, and K. S. Balamurugan, “Evaluation of machine learning methods for pancreatic cancer detection using CT scans,” in Proc. IEEE 9th Int. Conf. Engineering Technologies and Applied Sciences (ICETAS), 2024, pp. 1–6, doi: 10.1109/ICETAS.2024.1234567.
X. Pan, K. Jiao, X. Li, L. Feng, Y. Tian, L. Wu, and W. Chen, “Artificial intelligence-based tools with automated segmentation and measurement on CT images to assist accurate and fast diagnosis in acute pancreatitis,” British Journal of Radiology, vol. 97, no. 1159, pp. 1268–1277, 2024, doi: 10.1093/bjr/tqae089.
Kaggle, “Pancreatic CT images dataset,” 2024. [Online]. Available: https://www.kaggle.com/datasets/jayaprakashpondy/pancreatic-ct-images. Accessed: 2025.
A. Nadeem, R. Ashraf, T. Mahmood, and S. Parveen, “Automated CAD system for early detection and classification of pancreatic cancer using deep learning model,” PLOS ONE, vol. 20, no. 1, Art. no. e0307900, 2025, doi: 10.1371/journal.pone.0307900.
Y. Alaca, “Machine learning via DARTS-optimized MobileViT models for pancreatic cancer diagnosis with graph-based deep learning,” BMC Medical Informatics and Decision Making, vol. 25, no. 1, Art. no. 81, 2025, doi: 10.1186/s12911-025-02692-5.
D. Lee, C. Lee, K. Han, T. Goo, B. Kim, Y. Han, and T. Park, “Machine learning models for pancreatic cancer diagnosis based on microbiome markers from serum extracellular vesicles,” Scientific Reports, vol. 15, no. 1, Art. no. 10995, 2025, doi: 10.1038/s41598-025-95506-7.
F. A. Almisned, N. Usanase, D. U. Ozsahin, and I. Ozsahin, “Incorporation of explainable artificial intelligence in ensemble machine learning-driven pancreatic cancer diagnosis,” Scientific Reports, vol. 15, no. 1, Art. no. 14038, 2025, doi: 10.1038/s41598-025-98271-9.
G. Dzemyda, O. Kurasova, V. Medvedev, A. Šubonienė, A. Gulla, A. Samuilis, and K. Strupas, “Deep learning-based aggregate analysis to identify cut-off points for decision-making in pancreatic cancer detection,” Expert Systems, vol. 42, no. 1, Art. no. e13614, 2025, doi: 10.1111/exsy.13614.
V. Divya, S. Sendil Kumar, V. Gokula Krishnan, and M. Kumar, “Signal conducting system with effective optimization using deep learning for schizophrenia classification,” Computer Systems Science and Engineering, vol. 45, no. 2, pp. 1869–1886, 2023, doi: 10.32604/csse.2023.031234.
A. Hatamizadeh, H. Yin, J. Kautz, and P. Molchanov, “Swin UNETR: Swin transformers for semantic segmentation of brain tumors in MRI images,” Medical Image Analysis, vol. 82, Art. no. 102440, 2023, doi: 10.1016/j.media.2022.102440.
S. Mehta and M. Rastegari, “MobileViT: Light-weight, general-purpose, and mobile-friendly vision transformer,” in Proc. Int. Conf. Learning Representations (ICLR), 2023.
X. Chen, J. Xu, Y. Wang, and Y. Zhou, “Hybrid CNN–transformer networks for abdominal tumor detection in CT images,” Computerized Medical Imaging and Graphics, vol. 107, Art. no. 102214, 2024, doi: 10.1016/j.compmedimag.2023.102214.
Q. Li, L. Zhang, S. Wang, and D. Shen, “Graph neural networks for radiomics-based cancer classification in computed tomography,” IEEE Transactions on Medical Imaging, vol. 42, no. 9, pp. 2681–2692, 2023, doi: 10.1109/TMI.2023.3268971.
S. Nadeem, O. Alaca, and M. Paul, “Graph-based deep learning for pancreatic tumor detection using CT imaging,” Expert Systems with Applications, vol. 236, Art. no. 121365, 2024, doi: 10.1016/j.eswa.2023.121365.
V. G. Krishnan, B. V. S. Rao, J. R. Prasad, P. Pushpa, and S. Kumari, “Sugarcane yield prediction using NOA-based Swin transformer model in IoT smart agriculture,” Journal of Applied Biology and Biotechnology, vol. 12, no. 2, pp. 239–247, 2024, doi: 10.7324/JABB.2024.123456.
L. Ashok Kumar, M. R. E. Jebarani, and V. Gokula Krishnan, “Optimized deep belief neural network for semantic change detection in multi-temporal image,” International Journal on Recent and Innovation Trends in Computing and Communication, vol. 11, no. 2, pp. 86–93, 2023, doi: 10.17762/ijritcc.v11i2.1234.
License
This work is licensed under a Creative Commons Attribution License CC BY