Development and Statistical Evaluation of a Deep Learning Framework for Real-Time Tissue Classification in Robotic Surgery
DOI: https://doi.org/10.21015/vtse.v14i1.2291

Abstract
Minimally invasive surgery and robotic-assisted procedures are increasingly preferred over traditional open surgery because they offer faster recovery and fewer postoperative complications. However, these techniques require precise force application to avoid overstressing tissue. The lack of reliable real-time tissue recognition limits a surgeon’s ability to modulate force according to tissue type, increasing the risk of injury. This study proposes a framework that applies deep learning and object detection to classify tissues in real time and support safer force modulation in robotic systems. As a proof of concept, the system distinguishes fat, muscle, and skin tissues using GoogLeNet, YOLOv8, and YOLOv10. Skin images were collected from 30 individuals after informed consent, while fat and muscle samples were processed to build a dataset of 1,800 augmented images. GoogLeNet achieved training and test accuracies of 93% and 97.2%, respectively. YOLOv8 performed strongly, reaching a mean average precision (mAP) of 94.7% at IoU = 0.5 with an inference time of 28 ms. YOLOv10 achieved an mAP@0.5 of 96.2% with a latency of 22 ms; its NMS-free architecture reduced inference time by 21% relative to YOLOv8 while improving accuracy by 1.5 percentage points. Analysis of variance (ANOVA) confirmed a statistically significant difference among the evaluated models (p < 0.001), indicating that YOLOv10 performed best under the evaluated experimental conditions.
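The one-way ANOVA used to compare the models can be sketched in plain Python. The per-run accuracy values below are hypothetical placeholders (the paper's raw per-run scores are not reproduced here), and only the F-statistic is computed; obtaining the p-value would additionally require the F-distribution CDF (e.g. `scipy.stats.f_oneway`):

```python
# Minimal one-way ANOVA sketch. Data are hypothetical placeholder
# accuracies, NOT the paper's measurements.
from statistics import mean

def one_way_anova_f(groups):
    """Return the F-statistic for a one-way ANOVA over the given groups."""
    k = len(groups)                       # number of groups (models)
    n = sum(len(g) for g in groups)       # total observations
    grand = mean(x for g in groups for x in g)
    # Between-group sum of squares: group size times squared deviation
    # of each group mean from the grand mean.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations inside each group.
    ss_within = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)     # between-group mean square
    ms_within = ss_within / (n - k)       # within-group mean square
    return ms_between / ms_within

# Hypothetical per-run scores (fractions) for the three models.
googlenet = [0.930, 0.928, 0.934, 0.931]
yolov8    = [0.947, 0.945, 0.949, 0.946]
yolov10   = [0.962, 0.960, 0.963, 0.961]

f_stat = one_way_anova_f([googlenet, yolov8, yolov10])
print(f"F = {f_stat:.1f}")
```

With clearly separated group means and small within-group spread, as in the placeholder data, the F-statistic is large, which is the situation the reported p < 0.001 reflects.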