[Case Discussion] Limited generalizability of a single deep neural network for surgical instrument segmentation in different surgical environments

Posted 2023-4-27 00:00:04

Clarifying the generalizability of deep-learning-based surgical instrument segmentation networks across different surgical environments is important for recognizing the challenge of overfitting in the development of surgical devices. This study comprehensively evaluated the generalizability of a deep neural network for surgical instrument segmentation using 5,238 images randomly extracted from 128 intraoperative videos. The video dataset comprised 112 laparoscopic colorectal resections, 5 laparoscopic distal gastrectomies, 5 laparoscopic cholecystectomies, and 6 laparoscopic partial hepatectomies.

Deep-learning-based surgical instrument segmentation was performed on test sets with:
(1) the same conditions as the training set;
(2) the same recognition-target surgical instruments and surgery type, but a different laparoscopic recording system;
(3) the same laparoscopic recording system and surgery type, but slightly different recognition-target laparoscopic surgical forceps;
(4) the same laparoscopic recording system and recognition-target surgical instruments, but a different surgery type.

The mean average precision and mean intersection over union for test sets 1, 2, 3, and 4 were 0.941 and 0.887, 0.866 and 0.671, 0.772 and 0.676, and 0.588 and 0.395, respectively. Recognition accuracy therefore dropped even under slightly different conditions. These results reveal the limited generalizability of deep neural networks in the field of surgical artificial intelligence and caution against the use of biased datasets and models in deep-learning-based surgical applications.
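For readers who want to run a similar comparison on their own videos, below is a minimal sketch in Python/NumPy of how per-image IoU and a per-condition mean IoU could be computed; the function names, the prediction callable, and the empty-mask convention are illustrative assumptions, not the authors' actual evaluation code.

import numpy as np

def mask_iou(pred, gt):
    """IoU of two binary masks with the same shape (H, W)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treated as a perfect match (a convention, not from the paper)
    return np.logical_and(pred, gt).sum() / union

def mean_iou(predict_mask, test_images, gt_masks):
    """Average per-image IoU of a prediction function over one test condition."""
    scores = [mask_iou(predict_mask(img), gt) for img, gt in zip(test_images, gt_masks)]
    return float(np.mean(scores))

# Hypothetical usage: score the same trained model on each of the four test
# conditions listed above and compare the resulting mean IoU values.
# for name, (imgs, gts) in test_conditions.items():
#     print(name, mean_iou(model.predict, imgs, gts))

Looping this scoring over the four test conditions would reproduce the style of comparison summarized in Figure 2 below.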

Figure 1
Representative images of the recognition-target surgical instruments in this study. (A) Surgical instruments included in the training set (T1: harmonic shears; T2: endoscopic surgical electrocautery; T3: Aesculap AdTec atraumatic universal forceps). (B) Laparoscopic surgical forceps not included in the training set (T4: Maryland; T5: Croce-Olmi; T6: needle holder).

Figure 2
Results of surgical instrument recognition accuracy (AP, average precision; IoU, intersection over union; mAP, mean average precision; mIoU, mean intersection over union). (A) AP and IoU under the same conditions as the training set (T1: harmonic shears; T2: endoscopic surgical electrocautery; T3: Aesculap AdTec atraumatic universal forceps). (B) mAP and mIoU for different types of laparoscopic recording systems. (C) AP and IoU for different types of laparoscopic surgical forceps (T3: Aesculap AdTec atraumatic universal forceps; T4: Maryland; T5: Croce-Olmi; T6: needle holder). (D) mAP and mIoU for different surgery types (LCRR, laparoscopic colorectal resection; LDG, laparoscopic distal gastrectomy; LC, laparoscopic cholecystectomy; LPH, laparoscopic partial hepatectomy).

Figure 3
Representative images recorded by each laparoscopic recording system. (A) Endoeye laparoscope (Olympus Corp., Tokyo, Japan) and Visera Elite II system (Olympus Corp., Tokyo, Japan). (B) 1488 HD 3-Chip camera system (Stryker Corp., Kalamazoo, MI, USA). (C) Image 1 S camera system (Karl Storz SE & Co. KG, Tuttlingen, Germany).

Figure 4
Representative images of each surgery type. (A) LCRR; (B) LDG; (C) LC; (D) LPH.

Source: published online 22 Jul 2022; doi: 10.1038/s41598-022-16923-8
Reply posted 2023-4-27 10:29:50 from mobile:
Just taking a look and learning from this.