[Case Discussion] Comparison of image registration methods combining laparoscopic video and spectral image data

Laparoscopic surgery can be assisted intraoperatively by modalities such as quantitative perfusion imaging based on fluorescence or hyperspectral data. If these modalities are not available at video frame rate, fast image registration is needed to enable visualization in augmented reality. Three feature-based algorithms and one pre-trained deep homography neural network (DH-NN) were tested for single- and multi-homography estimation. Fine-tuning was used to bridge the domain gap of the DH-NN for non-rigid registration of laparoscopic images. The methods were validated on two datasets: an open-source recording of 750 manually annotated laparoscopic images presented in this work, and in vivo data from a novel laparoscopic hyperspectral imaging system. All feature-based single-homography methods outperformed the fine-tuned DH-NN in terms of reprojection error, Structural Similarity Index Measure, and processing time. The feature detector and descriptor ORB1000 enables video-rate registration of laparoscopic images with sub-millimetre accuracy on standard hardware.
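To make the feature-based single-homography pipeline concrete, a minimal sketch with OpenCV is shown below: ORB capped at 1000 keypoints (in the spirit of the ORB1000 configuration), brute-force Hamming matching, and RANSAC homography fitting. The file names and the RANSAC threshold are illustrative placeholders, not values taken from the paper.

```python
# Minimal sketch of feature-based single-homography registration with OpenCV.
# File names and parameter values are illustrative, not from the paper.
import cv2
import numpy as np

# Start frame and current frame of a scene, loaded as grayscale images.
ref = cv2.imread("frame_0020.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("frame_0220.png", cv2.IMREAD_GRAYSCALE)

# ORB limited to 1000 keypoints, in the spirit of the ORB1000 configuration.
orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_cur, des_cur = orb.detectAndCompute(cur, None)

# Brute-force Hamming matching with cross-check, suitable for binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_ref, des_cur), key=lambda m: m.distance)

# Fit a single homography with RANSAC to reject outlier matches.
src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_cur[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Warp the start frame onto the current frame for a semi-transparent overlay.
warped = cv2.warpPerspective(ref, H, (cur.shape[1], cur.shape[0]))
overlay = cv2.addWeighted(cur, 0.5, warped, 0.5, 0)
cv2.imwrite("overlay_0220.png", overlay)
```

The same warp-and-blend step is what produces the semi-transparent overlays shown in the figures below.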

Left: graphical user interface of CVAT, showing a frame with landmarks occluded by a laparoscopic instrument. Right: motion paths of the 28 manually annotated landmarks used as ground truth in all 750 frames of the scene.

Images used as start frames of the four scenes: frames (a) 20, (b) 200, (c) 400 and (d) 600 of the video.

One-time registration of colour video and hyperspectral data. (a) Image from the colour sensor with three exemplary markers. (b) Pseudo-colour image reconstructed during HSI of the same object; missing information in the blue spectral range causes the colour deviation. Points corresponding to the markers in (a) are marked with circles. (c) Semi-transparent overlay of both images after perspective transformation based on 25 manually annotated points.
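The one-time alignment in panel (c) amounts to fitting a perspective transform to the manually annotated point pairs and blending the warped pseudo-colour image over the colour frame. A short sketch under these assumptions, with hypothetical file and annotation names:

```python
# Sketch of the one-time RGB/HSI alignment from manually annotated point pairs.
# File names and the annotation format are hypothetical.
import cv2
import numpy as np

rgb = cv2.imread("color_sensor_frame.png")
hsi_pseudo = cv2.imread("hsi_pseudocolor.png")

# 25 corresponding points annotated in both images (N x 2 arrays of pixel coordinates).
pts_hsi = np.loadtxt("annotations_hsi.txt", dtype=np.float32)
pts_rgb = np.loadtxt("annotations_rgb.txt", dtype=np.float32)

# With more than four correspondences the homography is fitted robustly.
H, _ = cv2.findHomography(pts_hsi, pts_rgb, cv2.RANSAC, 3.0)

# Warp the HSI pseudo-colour image into the colour-camera frame and blend.
warped = cv2.warpPerspective(hsi_pseudo, H, (rgb.shape[1], rgb.shape[0]))
overlay = cv2.addWeighted(rgb, 0.5, warped, 0.5, 0)
cv2.imwrite("rgb_hsi_overlay.png", overlay)
```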

Image registration results of the homography methods. (a) Frame 20, used as the start frame of scene 1. (b) Frame 220, with small perspective changes, instrument movement and organ deformation relative to frame 20. (c) Frame 430, showing occlusion by instruments and organ deformation caused by manipulation compared with frame 20. (d)–(i) Semi-transparent overlays of the transformed start frame 20 and the current frames 220 (left column, red) and 430 (right column, yellow) for ORB1000 (d, e), A-KAZE (f, g) and MG-DHNN (h, i). Green crosses mark the ground truth and blue crosses the estimated positions of the annotated points. White arrows highlight larger registration errors; non-overlapping regions are shown in greyscale. The mean normalized RE of the annotations in the shown frames is 0.1 for d, f and g; 0.26 for e; and 0.13 for h and i.
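The caption refers to the normalized reprojection error (RE) of the annotated landmarks; together with the SSIM mentioned in the abstract, it can be computed roughly as sketched below. Normalizing the RE by the image diagonal is an illustrative assumption, and skimage's structural_similarity stands in for whatever SSIM implementation the authors used.

```python
# Rough sketch of the evaluation metrics: landmark reprojection error and SSIM.
# The normalization of the RE by the image diagonal is an assumption.
import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def normalized_reprojection_error(H, pts_start, pts_current, image_shape):
    """Mean distance between H-projected start-frame landmarks and their
    annotated positions in the current frame, divided by the image diagonal."""
    pts = np.asarray(pts_start, dtype=np.float32).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    err = np.linalg.norm(projected - np.asarray(pts_current, dtype=np.float32), axis=1)
    return float(err.mean()) / float(np.hypot(image_shape[0], image_shape[1]))

def overlay_ssim(current_gray, warped_start_gray):
    """SSIM between the current frame and the warped start frame (grayscale)."""
    return ssim(current_gray, warped_start_gray)
```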