1. Yang C, Wang K, Wang Y, Dou Q, Yang X, Shen W. Efficient Deformable Tissue Reconstruction via Orthogonal Neural Plane. IEEE Transactions on Medical Imaging 2024; 43:3211-3223. [PMID: 38625765] [DOI: 10.1109/tmi.2024.3388559]
Abstract
Intraoperative imaging techniques for reconstructing deformable tissues in vivo are pivotal for advanced surgical systems. Existing methods either compromise on rendering quality or are excessively computationally intensive, often demanding dozens of hours to run, which significantly hinders their practical application. In this paper, we introduce Fast Orthogonal Plane (Forplane), a novel, efficient framework based on neural radiance fields (NeRF) for the reconstruction of deformable tissues. We conceptualize surgical procedures as 4D volumes and break them down into static and dynamic fields composed of orthogonal neural planes. This factorization discretizes the four-dimensional space, leading to lower memory usage and faster optimization. A spatiotemporal importance sampling scheme is introduced to improve performance in regions with tool occlusion and large motion, and to accelerate training. An efficient ray marching method is applied to skip sampling in empty regions, significantly improving inference speed. Forplane accommodates both binocular and monocular endoscopy videos, demonstrating its broad applicability and flexibility. Our experiments, carried out on two in vivo datasets, the EndoNeRF and Hamlyn datasets, demonstrate the effectiveness of our framework. In all cases, Forplane substantially accelerates both optimization (by over 100 times) and inference (by over 15 times) while maintaining or even improving quality across a variety of non-rigid deformations. This significant performance improvement promises to be a valuable asset for future intraoperative surgical applications. The code of our project is available at https://github.com/Loping151/ForPlane.
2. Göbel B, Reiterer A, Möller K. Image-Based 3D Reconstruction in Laparoscopy: A Review Focusing on the Quantitative Evaluation by Applying the Reconstruction Error. J Imaging 2024; 10:180. [PMID: 39194969] [DOI: 10.3390/jimaging10080180]
Abstract
Image-based 3D reconstruction enables laparoscopic applications such as image-guided navigation and (autonomous) robot-assisted interventions, which require high accuracy. The purpose of this review is to present the accuracy of different techniques and identify the most promising ones. A systematic literature search with PubMed and Google Scholar covering 2015 to 2023 was conducted following the framework of "Review articles: purpose, process, and structure". Articles were considered when they presented a quantitative evaluation (root mean squared error and mean absolute error) of the reconstruction error (Euclidean distance between the real and reconstructed surface). The search yielded 995 articles, which were reduced to 48 after applying exclusion criteria. From these, a reconstruction error data set could be generated for the techniques of stereo vision, Shape-from-Motion, Simultaneous Localization and Mapping, deep learning, and structured light. The reconstruction error varies from below one millimeter to more than ten millimeters, with deep learning and Simultaneous Localization and Mapping delivering the best results under intraoperative conditions. The high variance arises from differing experimental conditions. In conclusion, submillimeter accuracy remains challenging, but promising image-based 3D reconstruction techniques could be identified. For future research, we recommend computing the reconstruction error for comparison purposes and using ex vivo/in vivo organs as reference objects for realistic experiments.
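The two error summaries this review relies on are easy to state concretely: per-point Euclidean distances between the real and reconstructed surfaces, reduced to RMSE and MAE. A minimal sketch, assuming known point correspondences (the function name is mine, not from the review):

```python
import numpy as np

def reconstruction_errors(real_pts, recon_pts):
    """RMSE and MAE of per-point Euclidean distances (e.g. in mm) between
    corresponding points on the real and reconstructed surfaces."""
    d = np.linalg.norm(np.asarray(real_pts) - np.asarray(recon_pts), axis=1)
    return float(np.sqrt(np.mean(d ** 2))), float(np.mean(d))

# Toy check: every reconstructed point offset by 1 mm along z.
real = np.zeros((100, 3))
recon = real + np.array([0.0, 0.0, 1.0])
print(reconstruction_errors(real, recon))  # (1.0, 1.0)
```

In practice the correspondences would come from a nearest-neighbor search or registration step; the summary statistics themselves are this simple.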
Affiliation(s)
- Birthe Göbel
- Department of Sustainable Systems Engineering-INATECH, University of Freiburg, Emmy-Noether-Street 2, 79110 Freiburg im Breisgau, Germany
- KARL STORZ SE & Co. KG, Dr.-Karl-Storz-Street 34, 78532 Tuttlingen, Germany
- Alexander Reiterer
- Department of Sustainable Systems Engineering-INATECH, University of Freiburg, Emmy-Noether-Street 2, 79110 Freiburg im Breisgau, Germany
- Fraunhofer Institute for Physical Measurement Techniques IPM, 79110 Freiburg im Breisgau, Germany
- Knut Möller
- Institute of Technical Medicine-ITeM, Furtwangen University (HFU), 78054 Villingen-Schwenningen, Germany
- Mechanical Engineering, University of Canterbury, Christchurch 8140, New Zealand
3. Schmidt A, Mohareri O, DiMaio S, Yip MC, Salcudean SE. Tracking and mapping in medical computer vision: A review. Med Image Anal 2024; 94:103131. [PMID: 38442528] [DOI: 10.1016/j.media.2024.103131]
Abstract
As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require designing algorithms to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. Next, we review datasets provided in the field and the clinical needs that motivate their design. We then delve into the algorithmic side and summarize recent developments. This summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We maintain focus on algorithms for deformable environments while also reviewing the essential building blocks of rigid tracking and mapping, since there is a large amount of crossover in methods. With the field summarized, we discuss the current state of tracking and mapping methods along with needs for future algorithms, needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation.
Affiliation(s)
- Adam Schmidt
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
- Omid Mohareri
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Simon DiMaio
- Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Michael C Yip
- Department of Electrical and Computer Engineering, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
4. Heiselman JS, Collins JA, Ringel MJ, Peter Kingham T, Jarnagin WR, Miga MI. The Image-to-Physical Liver Registration Sparse Data Challenge: comparison of state-of-the-art using a common dataset. J Med Imaging (Bellingham) 2024; 11:015001. [PMID: 38196401] [PMCID: PMC10773576] [DOI: 10.1117/1.jmi.11.1.015001]
Abstract
Purpose Computational methods for image-to-physical registration during surgical guidance frequently rely on sparse point clouds obtained over a limited region of the organ surface. However, soft tissue deformations complicate the ability to accurately infer anatomical alignments from sparse descriptors of the organ surface. The Image-to-Physical Liver Registration Sparse Data Challenge introduced at SPIE Medical Imaging 2019 seeks to characterize the performance of sparse data registration methods on a common dataset to benchmark and identify effective tactics and limitations that will continue to inform the evolution of image-to-physical registration algorithms. Approach Three rigid and five deformable registration methods were contributed to the challenge. The deformable approaches consisted of two deep learning and three biomechanical boundary condition reconstruction methods. These algorithms were compared on a common dataset of 112 registration scenarios derived from a tissue-mimicking phantom with 159 subsurface validation targets. Target registration errors (TRE) were evaluated under varying conditions of data extent, target location, and measurement noise. Jacobian determinants and strain magnitudes were compared to assess displacement field consistency. Results Rigid registration algorithms produced significant differences in TRE ranging from 3.8 ± 2.4 mm to 7.7 ± 4.5 mm, depending on the choice of technique. Two biomechanical methods yielded TRE of 3.1 ± 1.8 mm and 3.3 ± 1.9 mm, which outperformed optimal rigid registration of targets. These methods demonstrated good performance under varying degrees of surface data coverage and across all anatomical segments of the liver. Deep learning methods exhibited TRE ranging from 4.3 ± 3.3 mm to 7.6 ± 5.3 mm but are likely to improve with continued development. TRE was weakly correlated among methods, with the greatest agreement and field consistency observed among the biomechanical approaches.
Conclusions The choice of registration algorithm significantly impacts registration accuracy and the variability of deformation fields. Among current sparse-data-driven image-to-physical registration algorithms, biomechanical simulations that incorporate task-specific insight into boundary conditions appear to offer the best performance.
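The challenge's headline metric, target registration error (TRE), is the Euclidean distance between each registered subsurface target and its ground-truth position, reported above as mean ± standard deviation. A minimal sketch under that reading (the function name is mine, not from the challenge):

```python
import numpy as np

def tre_stats(registered_targets, true_targets):
    """Mean and standard deviation of target registration error (TRE):
    per-target Euclidean distances between registered and ground-truth
    subsurface target positions, e.g. in mm."""
    tre = np.linalg.norm(
        np.asarray(registered_targets) - np.asarray(true_targets), axis=1)
    return float(tre.mean()), float(tre.std())

# Toy check: 159 targets, each misregistered by exactly 3 mm along x.
gt = np.random.default_rng(0).normal(size=(159, 3))
reg = gt + np.array([3.0, 0.0, 0.0])
print(tre_stats(reg, gt))  # (3.0, 0.0)
```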
Affiliation(s)
- Jon S. Heiselman
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Memorial Sloan Kettering Cancer Center, Department of Surgery, Hepatopancreatobiliary Unit, New York, New York, United States
- Jarrod A. Collins
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Morgan J. Ringel
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- T. Peter Kingham
- Memorial Sloan Kettering Cancer Center, Department of Surgery, Hepatopancreatobiliary Unit, New York, New York, United States
- William R. Jarnagin
- Memorial Sloan Kettering Cancer Center, Department of Surgery, Hepatopancreatobiliary Unit, New York, New York, United States
- Michael I. Miga
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
5. Bobrow TL, Golhar M, Vijayan R, Akshintala VS, Garcia JR, Durr NJ. Colonoscopy 3D video dataset with paired depth from 2D-3D registration. Med Image Anal 2023; 90:102956. [PMID: 37713764] [PMCID: PMC10591895] [DOI: 10.1016/j.media.2023.102956]
Abstract
Screening colonoscopy is an important clinical application for several 3D computer vision techniques, including depth estimation, surface reconstruction, and missing region detection. However, the development, evaluation, and comparison of these techniques in real colonoscopy videos remain largely qualitative due to the difficulty of acquiring ground truth data. In this work, we present a Colonoscopy 3D Video Dataset (C3VD) acquired with a high-definition clinical colonoscope and high-fidelity colon models for benchmarking computer vision methods in colonoscopy. We introduce a novel multimodal 2D-3D registration technique to register optical video sequences with ground truth rendered views of a known 3D model. The different modalities are registered by transforming optical images to depth maps with a Generative Adversarial Network and aligning edge features with an evolutionary optimizer. This registration method achieves an average translation error of 0.321 millimeters and an average rotation error of 0.159 degrees in simulation experiments where error-free ground truth is available. The method also leverages video information, improving registration accuracy by 55.6% for translation and 60.4% for rotation compared to single-frame registration. In total, 22 short video sequences were registered to generate 10,015 frames with paired ground truth depth, surface normals, optical flow, occlusion, six-degree-of-freedom pose, coverage maps, and 3D models. The dataset also includes screening videos acquired by a gastroenterologist with paired ground truth pose and 3D surface models. The dataset and registration source code are available at https://durr.jhu.edu/C3VD.
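The translation and rotation errors quoted above can be computed as the Euclidean distance between translation vectors and the geodesic angle between rotation matrices. This is the standard pose-error formulation, sketched here as an assumption rather than code from the paper:

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Translation error (Euclidean norm, same units as t) and rotation
    error (geodesic angle in degrees) between estimated and ground-truth
    camera poses."""
    t_err = float(np.linalg.norm(t_est - t_gt))
    R_rel = R_est.T @ R_gt                      # relative rotation
    cos = (np.trace(R_rel) - 1.0) / 2.0         # angle from the trace
    r_err = float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return t_err, r_err

# Toy check: 0.3 mm translation offset and a 10-degree rotation about z.
th = np.radians(10.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
print(pose_errors(R, np.array([0.3, 0.0, 0.0]), np.eye(3), np.zeros(3)))
# → (0.3, 10.0)
```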
Affiliation(s)
- Taylor L Bobrow
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Mayank Golhar
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Rohan Vijayan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Venkata S Akshintala
- Division of Gastroenterology and Hepatology, Johns Hopkins Medicine, Baltimore, MD 21287, USA
- Juan R Garcia
- Department of Art as Applied to Medicine, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Nicholas J Durr
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
6. Shi H, Wang Z, Zhou Y, Li D, Yang X, Li Q. Bidirectional Semi-Supervised Dual-Branch CNN for Robust 3D Reconstruction of Stereo Endoscopic Images via Adaptive Cross and Parallel Supervisions. IEEE Transactions on Medical Imaging 2023; 42:3269-3282. [PMID: 37227904] [DOI: 10.1109/tmi.2023.3279899]
Abstract
Semi-supervised learning via a teacher-student network can train a model effectively on a few labeled samples. It enables a student model to distill knowledge from the teacher's predictions on extra unlabeled data. However, such knowledge flow is typically unidirectional, leaving the accuracy vulnerable to the quality of the teacher model. In this paper, we pursue robust 3D reconstruction of stereo endoscopic images by proposing a novel bidirectional learning scheme between two learners, each of which can play the roles of teacher and student concurrently. Specifically, we introduce two self-supervisions, i.e., Adaptive Cross Supervision (ACS) and Adaptive Parallel Supervision (APS), to learn a dual-branch convolutional neural network. The two branches predict two different disparity probability distributions for the same position and output their expectations as disparity values. The learned knowledge flows across branches along two directions: a cross direction (disparity guides distribution in ACS) and a parallel direction (disparity guides disparity in APS). Moreover, each branch also learns confidences to dynamically refine the supervisions it provides. In ACS, the predicted disparity is softened into a unimodal distribution; the lower the confidence, the smoother the distribution. In APS, incorrect predictions are suppressed by lowering the weights of those with low confidence. With this adaptive bidirectional learning, the two branches enjoy well-tuned mutual supervisions and eventually converge on a consistent and more accurate disparity estimation. Experimental results on four public datasets demonstrate superior accuracy over other state-of-the-art methods, with a relative decrease in average disparity error of at least 9.76%.
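Outputting the expectation of a disparity probability distribution, as each branch does, is the soft-argmax operation common in deep stereo matching. A minimal single-pixel sketch of just that step (the dual-branch network, confidences, and the ACS/APS supervisions are not reproduced here):

```python
import numpy as np

def disparity_expectation(logits):
    """Soft-argmax: normalize per-pixel scores over disparity candidates
    0..D-1 into a probability distribution, then return its expectation
    as a sub-pixel disparity value."""
    logits = np.asarray(logits, dtype=float)
    p = np.exp(logits - logits.max())   # stable softmax
    p /= p.sum()
    return float(p @ np.arange(p.size))

# Sharply peaked scores give a disparity at the peak; flat scores give
# the midpoint of the candidate range.
print(disparity_expectation([0.0, 0.0, 20.0, 0.0, 0.0]))  # 2.0
print(disparity_expectation(np.zeros(5)))                 # 2.0
```

Unlike a hard argmax, this expectation is differentiable, which is what lets disparity supervise distribution (and vice versa) during training.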
7. Zhang X, Ji X, Wang J, Fan Y, Tao C. Renal surface reconstruction and segmentation for image-guided surgical navigation of laparoscopic partial nephrectomy. Biomed Eng Lett 2023; 13:165-174. [PMID: 37124114] [PMCID: PMC10130295] [DOI: 10.1007/s13534-023-00263-1]
Abstract
An unpredictable, dynamic surgical environment makes it necessary to measure morphological information of the target tissue in real time for laparoscopic image-guided navigation. Among intraoperative tissue 3D reconstruction approaches, stereo vision has the most potential for clinical development, benefiting from its high reconstruction accuracy and laparoscopy compatibility. However, existing stereo vision methods have difficulty achieving high reconstruction accuracy in real time. In addition, intraoperative tissue reconstruction results often contain complex background and instrument information that hinders their clinical use in image-guided systems. Taking laparoscopic partial nephrectomy (LPN) as the research object, this paper realizes real-time dense reconstruction and extraction of the kidney tissue surface. A center-symmetric Census-based semi-global block stereo matching algorithm is proposed to generate a dense disparity map. A GPU-based pixel-by-pixel connectivity segmentation mechanism is designed to segment the renal tissue area. Experiments on an in-vitro porcine heart, an in-vivo porcine kidney, and offline clinical LPN data were performed to evaluate the accuracy and effectiveness of our approach. The algorithm achieved a reconstruction accuracy of ±2 mm with a real-time update rate of 21 fps for an HD image size of 960 × 540, and 91.0% target tissue segmentation accuracy even with surgical instrument occlusions. Experimental results demonstrate that the proposed method can accurately reconstruct and extract the renal surface in real time in LPN, and the measurement results can be used directly by image-guided systems. Our method provides a new way to measure geometric information of target tissue intraoperatively in laparoscopic surgery. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-023-00263-1.
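The matching cost at the heart of such a pipeline pairs a center-symmetric Census descriptor with a Hamming-distance comparison. A minimal sketch of that descriptor for one patch, assuming the standard center-symmetric formulation (the semi-global aggregation and GPU segmentation steps are not shown, and the helper names are mine):

```python
import numpy as np

def cs_census(patch):
    """Center-symmetric Census descriptor of an odd-sized patch: one bit
    per point-symmetric pixel pair (each pixel compared with its mirror
    about the patch center; the center pixel itself is unused)."""
    v = np.asarray(patch, dtype=float).ravel()
    n = v.size // 2                      # number of symmetric pairs
    return (v[:n] > v[::-1][:n]).astype(np.uint8)

def hamming(a, b):
    """Matching cost between two binary descriptors."""
    return int(np.count_nonzero(a != b))

# A monotonically increasing 3x3 patch: every pixel is darker than its
# mirror, so all four bits are 0; its transpose differs in one bit.
patch = np.arange(9).reshape(3, 3)
print(cs_census(patch), hamming(cs_census(patch), cs_census(patch.T)))
# → [0 0 0 0] 1
```

Because only intensity orderings are encoded, the cost is robust to the radiometric differences between the two laparoscope channels.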
Affiliation(s)
- Xiaohui Zhang
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Xuquan Ji
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Yubo Fan
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- School of Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
- Chunjing Tao
- School of Engineering Medicine, Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, No. 37 Xueyuan Road, Haidian District, Beijing, 100083 China
8. Roß T, Bruno P, Reinke A, Wiesenfarth M, Koeppel L, Full PM, Pekdemir B, Godau P, Trofimova D, Isensee F, Adler TJ, Tran TN, Moccia S, Calimeri F, Müller-Stich BP, Kopp-Schneider A, Maier-Hein L. Beyond rankings: Learning (more) from algorithm validation. Med Image Anal 2023; 86:102765. [PMID: 36965252] [DOI: 10.1016/j.media.2023.102765]
Abstract
Challenges have become the state-of-the-art approach to benchmarking image analysis algorithms in a comparative manner. While validation on identical data sets was a great step forward, results analysis is often restricted to pure ranking tables, leaving relevant questions unanswered. Specifically, little effort has been put into systematically investigating what characterizes images on which state-of-the-art algorithms fail. To address this gap in the literature, we (1) present a statistical framework for learning from challenges and (2) instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Our framework relies on semantic meta data annotation of images, which serves as the foundation for a General Linear Mixed Models (GLMM) analysis. Based on 51,542 meta data annotations performed on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge 2019 and revealed underexposure, motion, and occlusion of instruments, as well as the presence of smoke or other objects in the background, as major sources of algorithm failure. Our subsequent method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and specific strengths in processing images on which previous methods tended to fail. Due to the objectivity and generic applicability of our approach, it could become a valuable tool for validation in the field of medical image analysis and beyond.
Affiliation(s)
- Tobias Roß
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pierangela Bruno
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Annika Reinke
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany
- Manuel Wiesenfarth
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Lisa Koeppel
- Section Clinical Tropical Medicine, Heidelberg University, Heidelberg, Germany
- Peter M Full
- Medical Faculty, Heidelberg University, Heidelberg, Germany; Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Bünyamin Pekdemir
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Patrick Godau
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany
- Darya Trofimova
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; HIP Applied Computer Vision Lab, MIC, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), Heidelberg, Germany; HIP Applied Computer Vision Lab, MIC, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tim J Adler
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thuy N Tran
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Italy
- Francesco Calimeri
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Beat P Müller-Stich
- Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Lena Maier-Hein
- Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Germany; Germany and National Center for Tumor Diseases (NCT), Heidelberg, Germany
9. Zhang S, Chen J, Liu Z, Wang X, Zhang C, Yang J. Key theories and technologies and implementation mechanism of parallel computing for ternary optical computer. PLoS One 2023; 18:e0284700. [PMID: 37155611] [PMCID: PMC10166507] [DOI: 10.1371/journal.pone.0284700]
Abstract
The Ternary Optical Computer (TOC) is more advanced than traditional computer systems in parallel computing, which is characterized by huge amounts of repeated computation. However, application of the TOC is still limited by a lack of key theories and technologies. In order to make the TOC applicable and advantageous, this paper systematically elaborates the key theories and technologies of parallel computing for the TOC through a programming platform, including the reconfigurability and groupable usability of optical processor bits, a parallel carry-free optical adder, the TOC's application characteristics, a communication file to express the user's needs, and the data organization method of the TOC. Finally, experiments are carried out to show the effectiveness of the presented theories and technologies for parallel computing, as well as the feasibility of the implementation method of the programming platform. For one particular instance, the clock cycles consumed on the TOC are only 0.26% of those on a traditional computer, and the computing resources spent on the TOC are 25% of those on a traditional computer. Building on this study, more complex parallel computing can be realized on the TOC in the future.
Affiliation(s)
- Sulan Zhang
- School of Information Science and Engineering, Jiaxing University, Jiaxing, Zhejiang, China
- Key Laboratory of Medical Electronic and Digital Health of Zhejiang Province, Jiaxing University, Jiaxing, Zhejiang, China
- Junwei Chen
- School of Information Science and Engineering, Jiaxing University, Jiaxing, Zhejiang, China
- Zihao Liu
- School of Information Science and Engineering, Jiaxing University, Jiaxing, Zhejiang, China
- Xiaolin Wang
- School of Information Science and Engineering, Jiaxing University, Jiaxing, Zhejiang, China
- Shanghai Business School, University of Shanghai for Science and Technology, Shanghai, China
- Jujiang Construction Group Co., Ltd., Jiaxing, Zhejiang, China
- Chunhua Zhang
- School of Information Science and Engineering, Jiaxing University, Jiaxing, Zhejiang, China
- Jun Yang
- School of Information Science and Engineering, Jiaxing University, Jiaxing, Zhejiang, China
10. Xia W, Chen ECS, Pautler S, Peters TM. A Robust Edge-Preserving Stereo Matching Method for Laparoscopic Images. IEEE Transactions on Medical Imaging 2022; 41:1651-1664. [PMID: 35085075] [DOI: 10.1109/tmi.2022.3147414]
Abstract
Stereo matching has become an active area of research in the field of computer vision. In minimally invasive surgery, stereo matching provides depth information to surgeons, with the potential to increase the safety of surgical procedures, particularly those performed laparoscopically. Many stereo matching methods have been reported to perform well on natural images, but for images acquired during a laparoscopic procedure, they are limited by image characteristics including illumination differences, weak texture content, specular highlights, and occlusions. To overcome these limitations, we propose a robust edge-preserving stereo matching method for laparoscopic images, comprising an efficient sparse-dense feature matching step, left and right image illumination equalization, and refined disparity optimization. We validated the proposed method using both benchmark biological phantoms and surgical stereoscopic data. Experimental results illustrate that, in the presence of heavy illumination differences between image pairs, textured and textureless surfaces, specular highlights, and occlusions, our proposed approach consistently obtains a more accurate estimate of the disparity map than state-of-the-art stereo matching methods in terms of robustness and boundary preservation.
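The depth information a disparity map yields follows from rectified stereo geometry: Z = f·B/d, with focal length f in pixels, baseline B, and disparity d in pixels. A sketch with illustrative numbers (not values from the paper):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Rectified stereo triangulation: depth Z = f * B / d. Zero disparity
    (a point at infinity) maps to inf rather than raising an error."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return focal_px * baseline_mm / d

# Illustrative numbers: f = 1000 px, 5 mm stereo baseline.
print(disparity_to_depth(np.array([100.0, 50.0, 25.0]), 1000.0, 5.0))
# → [ 50. 100. 200.] (mm): halving the disparity doubles the depth.
```

The inverse relationship is why disparity errors matter most for distant, low-disparity tissue: the same pixel error there produces a much larger depth error.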
11. Liu S, Fan J, Song D, Fu T, Lin Y, Xiao D, Song H, Wang Y, Yang J. Joint estimation of depth and motion from a monocular endoscopy image sequence using a multi-loss rebalancing network. Biomedical Optics Express 2022; 13:2707-2727. [PMID: 35774318] [PMCID: PMC9203100] [DOI: 10.1364/boe.457475]
Abstract
Building an in vivo three-dimensional (3D) surface model from a monocular endoscopy is an effective technology for improving the intuitiveness and precision of clinical laparoscopic surgery. This paper proposes a multi-loss rebalancing-based method for joint estimation of depth and motion from a monocular endoscopy image sequence. Feature descriptors are used to provide supervisory signals for the depth estimation network and the motion estimation network. The epipolar constraints between sequential frames are incorporated, together with neighborhood spatial information, by the depth estimation network to enhance the accuracy of depth estimation. The reprojection information from depth estimation is used by the motion estimation network to reconstruct the camera motion with a multi-view relative pose fusion mechanism. Relative response loss, feature consistency loss, and epipolar consistency loss functions are defined to improve the robustness and accuracy of the proposed unsupervised learning-based method. Evaluations are implemented on public datasets. The error of motion estimation in three scenes decreased by 42.1%, 53.6%, and 50.2%, respectively, and the average error of 3D reconstruction is 6.456 ± 1.798 mm. This demonstrates the method's capability to generate reliable depth estimation and trajectory reconstruction results for endoscopy images, with meaningful applications in clinical settings.
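The epipolar consistency loss mentioned above penalizes violations of the epipolar constraint x2ᵀFx1 = 0 between matched points in sequential frames. A minimal algebraic-residual sketch of that constraint (the paper's learned loss terms are not reproduced; the function name is mine):

```python
import numpy as np

def epipolar_residual(F, x1, x2):
    """Algebraic epipolar residual |x2^T F x1| for a pair of matched pixel
    coordinates x1, x2 (converted to homogeneous form); zero for a match
    that perfectly satisfies the fundamental matrix F."""
    x1h = np.append(np.asarray(x1, dtype=float), 1.0)
    x2h = np.append(np.asarray(x2, dtype=float), 1.0)
    return float(abs(x2h @ F @ x1h))

# For a pure sideways camera translation, F is the skew-symmetric matrix
# of (1, 0, 0): matched points must lie on the same image row.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
print(epipolar_residual(F, [3.0, 2.0], [5.0, 2.0]))  # 0.0 (same row)
print(epipolar_residual(F, [3.0, 2.0], [5.0, 3.0]))  # 1.0 (off the row)
```

Summing such residuals over feature matches gives a self-supervised signal that needs no ground-truth depth, which is what makes it usable in this unsupervised setting.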
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Dengpan Song
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yucong Lin
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Deqiang Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
12
Liu S, Fan J, Ai D, Song H, Fu T, Wang Y, Yang J. Feature matching for texture-less endoscopy images via superpixel vector field consistency. BIOMEDICAL OPTICS EXPRESS 2022; 13:2247-2265. [PMID: 35519251 PMCID: PMC9045917 DOI: 10.1364/boe.450259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 01/05/2022] [Accepted: 01/23/2022] [Indexed: 06/14/2023]
Abstract
Feature matching is an important technique for obtaining the surface morphology of soft tissues in intraoperative endoscopy images. Extracting features from clinical endoscopy images is a difficult problem, especially for texture-less images, and the scarcity of surface detail makes the problem more challenging. We propose an adaptive gradient-preserving method to improve the visual features of texture-less images. For feature matching, we first construct a spatial motion field using superpixel blocks and estimate its information entropy with a motion consistency algorithm to obtain an initial screening of outlier features. Second, we extend the superpixel spatial motion field to a vector field and constrain it with vector features to optimize the confidence of the initial matching set. Evaluations were performed on public and undisclosed datasets. Compared with the original images, our method increased the number of extracted feature points by an order of magnitude for all three feature point extraction methods tested. On the public dataset, accuracy and F1-score increased to 92.6% and 91.5%, and the matching score improved by 1.92%. On the undisclosed dataset, the surface integrity of the reconstruction produced by the proposed method improved from 30% to 85%. Furthermore, we present surface reconstruction results for differently sized images to validate the robustness of our method, which shows high-quality feature matching results. Overall, the experimental results prove the effectiveness of the proposed matching method and demonstrate its capability to extract sufficient visual feature points and generate reliable feature matches for 3D reconstruction, with meaningful applications in clinical practice.
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
13
Xia W, Chen E, Pautler S, Peters T. Laparoscopic image enhancement based on distributed retinex optimization with refined information fusion. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
14
Edwards PJE, Psychogyios D, Speidel S, Maier-Hein L, Stoyanov D. SERV-CT: A disparity dataset from cone-beam CT for validation of endoscopic 3D reconstruction. Med Image Anal 2022; 76:102302. [PMID: 34906918 PMCID: PMC8961000 DOI: 10.1016/j.media.2021.102302] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 11/01/2021] [Accepted: 11/04/2021] [Indexed: 11/27/2022]
Abstract
In computer vision, reference datasets from simulation and real outdoor scenes have been highly successful in promoting algorithmic development in stereo reconstruction. Endoscopic stereo reconstruction for surgical scenes gives rise to specific problems, including the lack of clear corner features, highly specular surface properties and the presence of blood and smoke. These issues present difficulties for both stereo reconstruction itself and also for standardised dataset production. Previous datasets have been produced using computed tomography (CT) or structured light reconstruction on phantom or ex vivo models. We present a stereo-endoscopic reconstruction validation dataset based on cone-beam CT (SERV-CT). Two ex vivo small porcine full torso cadavers were placed within the view of the endoscope with both the endoscope and target anatomy visible in the CT scan. Subsequent orientation of the endoscope was manually aligned to match the stereoscopic view and benchmark disparities, depths and occlusions are calculated. The requirement of a CT scan limited the number of stereo pairs to 8 from each ex vivo sample. For the second sample an RGB surface was acquired to aid alignment of smooth, featureless surfaces. Repeated manual alignments showed an RMS disparity accuracy of around 2 pixels and a depth accuracy of about 2 mm. A simplified reference dataset is provided consisting of endoscope image pairs with corresponding calibration, disparities, depths and occlusions covering the majority of the endoscopic image and a range of tissue types, including smooth specular surfaces, as well as significant variation of depth. We assessed the performance of various stereo algorithms from online available repositories. There is a significant variation between algorithms, highlighting some of the challenges of surgical endoscopic images. 
The SERV-CT dataset provides an easy-to-use stereoscopic validation dataset for surgical applications, with smooth reference disparities and depths covering the majority of the endoscopic image. This complements existing resources well, and we hope it will aid the development of surgical endoscopic anatomical reconstruction algorithms.
Affiliation(s)
- P J Eddie Edwards
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Dimitris Psychogyios
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
- Stefanie Speidel
- Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT) Dresden, Dresden, 01307, Germany
- Lena Maier-Hein
- Division of Medical and Biological Informatics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London (UCL), Charles Bell House, 43-45 Foley Street, London W1W 7TS, UK
15
Rabbani N, Calvet L, Espinel Y, Le Roy B, Ribeiro M, Buc E, Bartoli A. A methodology and clinical dataset with ground-truth to evaluate registration accuracy quantitatively in computer-assisted Laparoscopic Liver Resection. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2021. [DOI: 10.1080/21681163.2021.1997642] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- N. Rabbani
- EnCoV, Institut Pascal, Clermont-Ferrand, France
- L. Calvet
- EnCoV, Institut Pascal, Clermont-Ferrand, France
- CHU, Clermont-Ferrand, France
- IRIT, University of Toulouse
- Y. Espinel
- EnCoV, Institut Pascal, Clermont-Ferrand, France
- B. Le Roy
- EnCoV, Institut Pascal, Clermont-Ferrand, France
- CHU, Saint-Etienne, France
- M. Ribeiro
- EnCoV, Institut Pascal, Clermont-Ferrand, France
- CHU, Clermont-Ferrand, France
- E. Buc
- EnCoV, Institut Pascal, Clermont-Ferrand, France
- CHU, Clermont-Ferrand, France
- A. Bartoli
- EnCoV, Institut Pascal, Clermont-Ferrand, France
16
Rosenthal JC, Wisotzky EL, Matuschek C, Hobl M, Hilsmann A, Eisert P, Uecker FC. Endoscopic measurement of nasal septum perforations. HNO 2021; 70:1-7. [PMID: 34633475 PMCID: PMC8837565 DOI: 10.1007/s00106-021-01102-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/23/2021] [Indexed: 11/25/2022]
Abstract
Background: Nasal septum perforations (NSP) cause many uncomfortable symptoms for the patient and have a highly negative impact on quality of life. NSPs are closed using patient-specific implants or surgery. Implants are created either under anesthesia using silicone impressions or using 3D models from CT data. Disadvantages for patient safety are the increased risk of morbidity or radiation exposure. Materials and methods: In the context of otorhinolaryngologic surgery, we present a gentle approach to treating NSP with a new image-based, contactless, and radiation-free measurement method using a 3D endoscope. The method relies on image information only and makes use of real-time-capable computer vision algorithms to compute 3D information. This endoscopic method can be repeated as often as desired in the clinical course and has already proven its accuracy and robustness in robotic-assisted surgery (RAS) and surgical microscopy. We extend our method to nasal surgery, which poses additional spatial and stereoperspective challenges. Results: After measuring 3 relevant parameters (axial and coronal NSP extension, and NSP circumference) in 6 patients and comparing the results of 2 stereoendoscopes with CT data, it was shown that the image-based measurements can achieve accuracies comparable to CT data. One patient could be only partially evaluated because the NSP was larger than the endoscopic field of view. Conclusion: Based on these very good measurements, we outline a therapeutic procedure which should enable the production of patient-specific NSP implants based on endoscopic data only.
Affiliation(s)
- Jean-Claude Rosenthal
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Eric L Wisotzky
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Visual Computing, Humboldt Universität zu Berlin, Berlin, Germany
- Melanie Hobl
- HNO-Klinik, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Anna Hilsmann
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Peter Eisert
- Vision and Imaging Technologies, Fraunhofer Heinrich-Hertz-Institut HHI, Einsteinufer 37, 10587, Berlin, Germany
- Visual Computing, Humboldt Universität zu Berlin, Berlin, Germany
- Florian C Uecker
- HNO-Klinik, Charité - Universitätsmedizin Berlin, Berlin, Germany
17
[Endoscopic measurement of nasal septum perforations. German version]. HNO 2021; 70:206-213. [PMID: 34477908 PMCID: PMC8866253 DOI: 10.1007/s00106-021-01101-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/24/2021] [Indexed: 10/28/2022]
Abstract
BACKGROUND: Nasal septum perforations (NSP) cause many uncomfortable symptoms for the patient and have a highly negative impact on quality of life. NSPs are closed using patient-specific implants or surgery. Implants are created either under anesthesia using silicone impressions or using 3D models from CT data. Disadvantages for patient safety are the increased risk of morbidity or radiation exposure. MATERIALS AND METHODS: In the context of otorhinolaryngologic surgery, we present a gentle approach to treating NSP with a new image-based, contactless, and radiation-free measurement method using a 3D endoscope. The method relies on image information only and makes use of real-time-capable computer vision algorithms to compute 3D information. This endoscopic method can be repeated as often as desired in the clinical course and has already proven its accuracy and robustness in robotic-assisted surgery (RAS) and surgical microscopy. We extend our method to nasal surgery, which poses additional spatial and stereoperspective challenges. RESULTS: After measuring 3 relevant parameters (axial and coronal NSP extension, and NSP circumference) in 6 patients and comparing the results of 2 stereoendoscopes with CT data, it was shown that the image-based measurements can achieve accuracies comparable to CT data. One patient could be only partially evaluated because the NSP was larger than the endoscopic field of view. CONCLUSION: Based on these very good measurements, we outline a therapeutic procedure which should enable the production of patient-specific NSP implants based on endoscopic data only.
18
Liang J. Punching holes in light: recent progress in single-shot coded-aperture optical imaging. REPORTS ON PROGRESS IN PHYSICS. PHYSICAL SOCIETY (GREAT BRITAIN) 2020; 83:116101. [PMID: 33125347 DOI: 10.1088/1361-6633/abaf43] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Single-shot coded-aperture optical imaging physically captures a coded-aperture-modulated optical signal in one exposure and then recovers the scene via computational image reconstruction. Recent years have witnessed dazzling advances in various modalities of this hybrid imaging scheme, with concomitant technical improvements and widespread applications in the physical, chemical, and biological sciences. This review comprehensively surveys state-of-the-art single-shot coded-aperture optical imaging. Based on the detected photon tags, the field is divided into six categories: planar imaging, depth imaging, light-field imaging, temporal imaging, spectral imaging, and polarization imaging. In each category, we start with a general description of the available techniques and design principles, then provide two representative examples of active-encoding and passive-encoding approaches, with particular emphasis on their methodology and applications as well as their advantages and challenges. Finally, we envision prospects for further technical advancement in this field.
Affiliation(s)
- Jinyang Liang
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
19
Freedman D, Blau Y, Katzir L, Aides A, Shimshoni I, Veikherman D, Golany T, Gordon A, Corrado G, Matias Y, Rivlin E. Detecting Deficient Coverage in Colonoscopies. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3451-3462. [PMID: 32746092 DOI: 10.1109/tmi.2020.2994221] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Colonoscopy is the tool of choice for preventing colorectal cancer by detecting and removing polyps before they become cancerous. However, colonoscopy is hampered by the fact that endoscopists routinely miss 22-28% of polyps. While some of these missed polyps appear in the endoscopist's field of view, others are missed simply because of substandard coverage of the procedure, i.e., not all of the colon is seen. This paper attempts to rectify the problem of substandard coverage in colonoscopy through the introduction of the C2D2 (Colonoscopy Coverage Deficiency via Depth) algorithm, which detects deficient coverage and can thereby alert the endoscopist to revisit a given area. More specifically, C2D2 consists of two separate algorithms: the first performs depth estimation of the colon given an ordinary RGB video stream, while the second computes coverage given these depth estimates. Rather than compute coverage for the entire colon, our algorithm computes coverage locally, on a segment-by-segment basis; C2D2 can then indicate in real time whether a particular area of the colon has suffered from deficient coverage, and if so, the endoscopist can return to that area. Our coverage algorithm is the first such algorithm to be evaluated in a large-scale way, while our depth estimation technique is the first calibration-free unsupervised method applied to colonoscopies. The C2D2 algorithm achieves state-of-the-art results in the detection of deficient coverage: on synthetic sequences with ground truth, it is 2.4 times more accurate than human experts, while on real sequences it achieves 93.0% agreement with experts.
20
SuPer: A Surgical Perception Framework for Endoscopic Tissue Manipulation With Surgical Robotics. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2970659] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
21
Luo H, Hu Q, Jia F. Details preserved unsupervised depth estimation by fusing traditional stereo knowledge from laparoscopic images. Healthc Technol Lett 2019; 6:154-158. [PMID: 32038849 PMCID: PMC6945682 DOI: 10.1049/htl.2019.0063] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Accepted: 10/02/2019] [Indexed: 12/22/2022] Open
Abstract
Depth estimation plays an important role in vision-based laparoscopic surgical navigation systems. Most learning-based depth estimation methods require ground-truth depth or disparity images for training; however, such data are difficult to obtain in laparoscopy. The authors present an unsupervised learning depth estimation approach that fuses traditional stereo knowledge. A traditional stereo method is used to generate proxy disparity labels, from which unreliable depth measurements are removed via a confidence measure to improve stereo accuracy. Disparity images are generated by training a dual encoder-decoder convolutional neural network on rectified stereo images coupled with the proxy labels generated by the traditional stereo method. A principled mask is computed to exclude from the loss function those pixels that are not seen in one of the views due to parallax effects. Moreover, a neighbourhood smoothness term is employed to constrain neighbouring pixels with similar appearances to generate a smooth depth surface. This approach makes the depth of the projected point cloud closer to the real surgical site and preserves realistic details. The authors demonstrate the performance of the method by training and evaluating on a partial nephrectomy da Vinci surgery dataset and heart phantom data from the Hamlyn Centre.
Affiliation(s)
- Huoling Luo
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Qingmao Hu
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Fucang Jia
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, People's Republic of China
22
Widya AR, Monno Y, Okutomi M, Suzuki S, Gotoda T, Miki K. Whole Stomach 3D Reconstruction and Frame Localization From Monocular Endoscope Video. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2019; 7:3300310. [PMID: 32309059 PMCID: PMC6830857 DOI: 10.1109/jtehm.2019.2946802] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Revised: 09/03/2019] [Accepted: 09/25/2019] [Indexed: 12/22/2022]
Abstract
Gastric endoscopy is a common clinical practice that enables medical doctors to diagnose various lesions inside the stomach. In order to identify the location of a gastric lesion such as early cancer or a peptic ulcer within the stomach, this work reconstructs a color-textured 3D model of the whole stomach from a standard monocular endoscope video and localizes any selected video frame with respect to the 3D model. We examine how to enable structure-from-motion (SfM) to reconstruct the whole shape of the stomach from endoscope images, a challenging task due to the texture-less nature of the stomach surface. We specifically investigate the combined effect of chromo-endoscopy and color channel selection on SfM to increase the number of feature points. We also design a plane-fitting-based algorithm to remove 3D point outliers and improve the quality of the 3D model. We show that whole-stomach 3D reconstruction can be achieved (more than 90% of the frames can be reconstructed) by using red-channel images captured under chromo-endoscopy after spreading indigo carmine (IC) dye on the stomach surface. In experimental results, we demonstrate the reconstructed 3D models for seven subjects and an application to lesion localization and reconstruction. The methodology and results presented in this paper can offer a valuable reference to other researchers and could also be an excellent tool for gastric surgeons in various computer-aided diagnosis applications.
Affiliation(s)
- Aji Resindra Widya
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo 152-8550, Japan
- Yusuke Monno
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo 152-8550, Japan
- Masatoshi Okutomi
- Department of Systems and Control Engineering, School of Engineering, Tokyo Institute of Technology, Tokyo 152-8550, Japan
- Sho Suzuki
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 101-8309, Japan
- Takuji Gotoda
- Division of Gastroenterology and Hepatology, Department of Medicine, Nihon University School of Medicine, Tokyo 101-8309, Japan
- Kenji Miki
- Department of Internal Medicine, Tsujinaka Hospital Kashiwanoha, Kashiwa 277-0871, Japan
23
Marmol A, Banach A, Peynot T. Dense-ArthroSLAM: Dense Intra-Articular 3-D Reconstruction With Robust Localization Prior for Arthroscopy. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2892199] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
24
Sdiri B, Kaaniche M, Cheikh FA, Beghdadi A, Elle OJ. Efficient Enhancement of Stereo Endoscopic Images Based on Joint Wavelet Decomposition and Binocular Combination. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:33-45. [PMID: 29994612 DOI: 10.1109/tmi.2018.2853808] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The success of minimally invasive interventions and remarkable technological and medical progress have made endoscopic image enhancement a very active research field. Due to the intrinsic characteristics of the endoscopic domain and the surgical exercise, stereo endoscopic images may suffer from different degradations which affect their quality. Therefore, in order to provide surgeons with better visual feedback and improve the outcomes of possible subsequent processing steps, namely 3-D organ reconstruction/registration, it is of interest to improve stereo endoscopic image quality. To this end, we propose, in this paper, two joint enhancement methods which operate in the wavelet transform domain. More precisely, by resorting to a joint wavelet decomposition, the wavelet subbands of the right and left views are simultaneously processed to exploit binocular vision properties. While the first proposed technique combines only the approximation subbands of both views, the second method combines all the wavelet subbands, yielding an inter-view processing fully adapted to the local features of the stereo endoscopic images. Experimental results, carried out on various stereo endoscopic datasets, demonstrate the efficiency of the proposed enhancement methods in terms of perceived visual image quality.
25
Trucco E, McNeil A, McGrory S, Ballerini L, Mookiah MRK, Hogg S, Doney A, MacGillivray T. Validation. COMPUTATIONAL RETINAL IMAGE ANALYSIS 2019:157-170. [DOI: 10.1016/b978-0-08-102816-2.00009-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
26
Mahmoud N, Collins T, Hostettler A, Soler L, Doignon C, Montiel JMM. Live Tracking and Dense Reconstruction for Handheld Monocular Endoscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:79-89. [PMID: 30010552 DOI: 10.1109/tmi.2018.2856109] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Contemporary endoscopic simultaneous localization and mapping (SLAM) methods accurately compute endoscope poses; however, they only provide a sparse 3-D reconstruction that poorly describes the surgical scene. We propose a novel dense SLAM method whose qualities are: 1) monocular, requiring only RGB images of a handheld monocular endoscope; 2) fast, providing endoscope positional tracking and 3-D scene reconstruction in parallel threads; 3) dense, yielding an accurate dense reconstruction; 4) robust to the severe illumination changes, poor texture, and small deformations that are typical in endoscopy; and 5) self-contained, needing neither fiducials nor external tracking devices, and therefore smoothly integrable into the surgical workflow. It works as follows. The system segments clusters of video frames according to parallax criteria. First, accurate cluster frame poses are estimated using the sparse SLAM feature matches. Next, dense matches between cluster frames are computed in parallel by a variational approach that combines zero-mean normalized cross-correlation and a gradient Huber-norm regularizer. This combination copes with challenging lighting and textures at an affordable time budget on a modern GPU. It can outperform pure stereo reconstructions, because the frame clusters can provide larger parallax from the endoscope's motion. We provide an extensive experimental validation on real sequences of the porcine abdominal cavity, both in vivo and ex vivo. We also show a qualitative evaluation on human liver. In addition, we show a comparison with other dense SLAM methods demonstrating the performance gain in terms of accuracy, density, and computation time.
27
Wang C, Alaya Cheikh F, Kaaniche M, Beghdadi A, Elle OJ. Variational based smoke removal in laparoscopic images. Biomed Eng Online 2018; 17:139. [PMID: 30340594 PMCID: PMC6194583 DOI: 10.1186/s12938-018-0590-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2018] [Accepted: 10/11/2018] [Indexed: 11/13/2022] Open
Abstract
Background In laparoscopic surgery, image quality can be severely degraded by surgical smoke, which not only introduces errors into the image processing algorithms used in image-guided surgery, but also reduces the visibility of the observed organs and tissues. To overcome these drawbacks, this work aims to remove smoke from laparoscopic images using an image preprocessing method based on a variational approach. Methods In this paper, we present the physical smoke model, in which the degraded image is separated into two parts, direct attenuation and smoke veil, and propose an efficient variational desmoking method for laparoscopic images. To estimate the smoke veil, the proposed method relies on the observation that the smoke veil has low contrast and low inter-channel differences. A cost function is defined based on this prior knowledge and is solved using an augmented Lagrangian method. The obtained smoke veil is then subtracted from the original degraded image, yielding the direct attenuation part. Finally, the smoke-free image is computed by applying a linear intensity transformation to the direct attenuation part. Results The performance of the proposed method is evaluated quantitatively and qualitatively on three datasets: two public real smoked laparoscopic datasets and one generated synthetic dataset. No-reference and reduced-reference image quality assessment metrics are used with the two real datasets and show that the proposed method outperforms state-of-the-art methods. In addition, standard full-reference metrics are employed with the synthetic dataset and also indicate the good performance of the proposed method. Furthermore, qualitative visual inspection of the results shows that our method removes smoke effectively from laparoscopic images. Conclusion All the obtained results show that the proposed approach reduces smoke effectively while preserving the important perceptual information of the image. This provides surgeons with a better visualization of the operative field and improves image-guided laparoscopic surgery procedures.
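The additive decomposition described in this abstract (degraded image = direct attenuation + smoke veil, veil subtraction, then a linear intensity transformation) can be sketched as follows. This is only an illustrative stand-in: the paper estimates the veil by minimizing a variational cost with an augmented Lagrangian method, whereas the per-pixel channel-minimum veil below is a crude assumed proxy for that solver.

```python
import numpy as np

def desmoke(image):
    """Toy illustration of the smoke model: degraded = direct attenuation + veil.

    The veil is crudely approximated by the per-pixel channel minimum
    (exploiting its low inter-channel difference); the paper instead solves
    a variational cost function. Input: H x W x 3 float array.
    """
    image = np.asarray(image, dtype=np.float64)
    veil = image.min(axis=-1, keepdims=True)   # low inter-channel-difference proxy
    direct = image - veil                      # direct attenuation part
    lo, hi = direct.min(), direct.max()
    if hi > lo:
        direct = (direct - lo) / (hi - lo)     # linear intensity transformation
    return direct
```

The channel-minimum step is the same prior used by dark-channel dehazing; substituting the paper's variational veil estimate would only change the `veil` line.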
Affiliation(s)
- Congcong Wang
- Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Gjøvik, Norway.
- Faouzi Alaya Cheikh
- Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Gjøvik, Norway
- Mounir Kaaniche
- L2TI-Institut Galilée, Université Paris 13, Sorbonne Paris Cité, Villetaneuse, France
- Azeddine Beghdadi
- L2TI-Institut Galilée, Université Paris 13, Sorbonne Paris Cité, Villetaneuse, France
- Ole Jacob Elle
- The Intervention Centre, Oslo University Hospital, Oslo, Norway
- The Department of Informatics, University of Oslo, Oslo, Norway
28
Pellegrini E, Ballerini L, Hernandez MDCV, Chappell FM, González-Castro V, Anblagan D, Danso S, Muñoz-Maniega S, Job D, Pernet C, Mair G, MacGillivray TJ, Trucco E, Wardlaw JM. Machine learning of neuroimaging for assisted diagnosis of cognitive impairment and dementia: A systematic review. ALZHEIMER'S & DEMENTIA (AMSTERDAM, NETHERLANDS) 2018; 10:519-535. [PMID: 30364671 PMCID: PMC6197752 DOI: 10.1016/j.dadm.2018.07.004] [Citation(s) in RCA: 108] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
INTRODUCTION Advanced machine learning methods might help to identify dementia risk from neuroimaging, but their accuracy to date is unclear. METHODS We systematically reviewed the literature, 2006 to late 2016, for machine learning studies differentiating healthy aging from dementia of various types, assessing study quality, and comparing accuracy at different disease boundaries. RESULTS Of 111 relevant studies, most assessed Alzheimer's disease versus healthy controls, using AD Neuroimaging Initiative data, support vector machines, and only T1-weighted sequences. Accuracy was highest for differentiating Alzheimer's disease from healthy controls and poor for differentiating healthy controls versus mild cognitive impairment versus Alzheimer's disease or mild cognitive impairment converters versus nonconverters. Accuracy increased using combined data types, but not by data source, sample size, or machine learning method. DISCUSSION Machine learning does not differentiate clinically relevant disease categories yet. More diverse data sets, combinations of different types of data, and close clinical integration of machine learning would help to advance the field.
Affiliation(s)
- Enrico Pellegrini
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Lucia Ballerini
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Maria del C. Valdes Hernandez
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Francesca M. Chappell
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Victor González-Castro
- Department of Electrical, Systems and Automatics Engineering, Universidad de León, León, Spain
- Devasuda Anblagan
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Samuel Danso
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Susana Muñoz-Maniega
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Dominic Job
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Cyril Pernet
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Grant Mair
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- Tom J. MacGillivray
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- VAMPIRE project, University of Edinburgh, Scotland, UK
- Emanuele Trucco
- VAMPIRE project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Joanna M. Wardlaw
- Division of Neuroimaging, Centre for Clinical Brain Sciences and Edinburgh Imaging, University of Edinburgh, Scotland, UK
- UK Dementia Institute, University of Edinburgh, Scotland, UK
29
Penza V, Ciullo AS, Moccia S, Mattos LS, De Momi E. EndoAbS dataset: Endoscopic abdominal stereo image dataset for benchmarking 3D stereo reconstruction algorithms. Int J Med Robot 2018; 14:e1926. [DOI: 10.1002/rcs.1926] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2017] [Revised: 05/01/2018] [Accepted: 05/02/2018] [Indexed: 11/05/2022]
Affiliation(s)
- Veronica Penza
- Department of Advanced Robotics; Istituto Italiano di Tecnologia; 16163 Genova Italy
- Department of Electronics Information and Bioengineering; Politecnico di Milano; 20133 Milano Italy
- Andrea S. Ciullo
- Department of Electronics Information and Bioengineering; Politecnico di Milano; 20133 Milano Italy
- Sara Moccia
- Department of Advanced Robotics; Istituto Italiano di Tecnologia; 16163 Genova Italy
- Department of Electronics Information and Bioengineering; Politecnico di Milano; 20133 Milano Italy
- Leonardo S. Mattos
- Department of Advanced Robotics; Istituto Italiano di Tecnologia; 16163 Genova Italy
- Elena De Momi
- Department of Electronics Information and Bioengineering; Politecnico di Milano; 20133 Milano Italy
30
Instrument detection and pose estimation with rigid part mixtures model in video-assisted surgeries. Med Image Anal 2018; 46:244-265. [DOI: 10.1016/j.media.2018.03.012] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2017] [Revised: 03/19/2018] [Accepted: 03/26/2018] [Indexed: 11/24/2022]
31
Computer-assisted 3D bowel length measurement for quantitative laparoscopy. Surg Endosc 2018; 32:4052-4061. [PMID: 29508142 DOI: 10.1007/s00464-018-6135-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2017] [Accepted: 02/23/2018] [Indexed: 01/25/2023]
Abstract
BACKGROUND This study aimed to develop and evaluate a tool for computer-assisted 3D bowel length measurement (BMS) to improve objective measurement in minimally invasive surgery. Standardization and quality of surgery, as well as its documentation, are currently limited by the lack of objective intraoperative measurements. To solve this problem, we developed BMS as a clinical application of Quantitative Laparoscopy (QL). METHODS BMS processes images from a conventional 3D laparoscope. Computer vision algorithms are used to measure the distance between laparoscopic instruments along a 3D reconstruction of the bowel surface. Preclinical evaluation was performed in phantom, ex vivo porcine, and in vivo porcine models. A bowel length of 70 cm was measured with BMS and compared to a manually obtained ground truth. Afterwards, 70 cm of bowel (ground truth) was measured and compared to BMS. RESULTS Ground truth was 66.1 ± 2.7 cm (relative error + 5.8%) in the phantom, 65.8 ± 2.5 cm (relative error + 6.4%) in the ex vivo, and 67.5 ± 6.6 cm (relative error + 3.7%) in the in vivo porcine evaluation when 70 cm was measured with BMS. Using 70 cm of bowel, BMS measured 75.0 ± 2.9 cm (relative error + 7.2%) in the phantom and 74.4 ± 2.8 cm (relative error + 6.3%) in the ex vivo porcine evaluation. After thorough preclinical evaluation, BMS was successfully used in a patient undergoing laparoscopic Roux-en-Y gastric bypass for morbid obesity. CONCLUSIONS QL using BMS was shown to be feasible and was successfully translated from studies on phantom, ex vivo, and in vivo porcine bowel to a clinical feasibility study.
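The relative errors quoted in this abstract follow the usual convention of a signed deviation from ground truth in percent. A one-line helper (hypothetical, not from the paper) makes the convention explicit:

```python
def relative_error_pct(measured_cm: float, ground_truth_cm: float) -> float:
    """Signed relative measurement error in percent, relative to ground truth."""
    return 100.0 * (measured_cm - ground_truth_cm) / ground_truth_cm
```

For example, a mean BMS reading of 75.0 cm against a 70 cm ground truth gives about +7.1%; small differences from the reported +7.2% presumably come from averaging per-trial errors rather than error of the means.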
32
Zihni A, Gerull WD, Cavallo JA, Ge T, Ray S, Chiu J, Brunt LM, Awad MM. Comparison of precision and speed in laparoscopic and robot-assisted surgical task performance. J Surg Res 2018; 223:29-33. [DOI: 10.1016/j.jss.2017.07.037] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2017] [Revised: 07/18/2017] [Accepted: 07/25/2017] [Indexed: 11/27/2022]
33
Edgcumbe P, Singla R, Pratt P, Schneider C, Nguan C, Rohling R. Follow the light: projector-based augmented reality intracorporeal system for laparoscopic surgery. J Med Imaging (Bellingham) 2018; 5:021216. [PMID: 29487888 DOI: 10.1117/1.jmi.5.2.021216] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2017] [Accepted: 01/22/2018] [Indexed: 01/20/2023] Open
Abstract
A projector-based augmented reality intracorporeal system (PARIS) is presented that includes a miniature tracked projector, tracked marker, and laparoscopic ultrasound (LUS) transducer. PARIS was developed to improve the efficacy and safety of laparoscopic partial nephrectomy (LPN). In particular, it has been demonstrated to effectively assist in the identification of tumor boundaries during surgery and to improve the surgeon's understanding of the underlying anatomy. PARIS achieves this by displaying the orthographic projection of the cancerous tumor on the kidney's surface. The performance of PARIS was evaluated in a user study with two surgeons who performed 32 simulated robot-assisted partial nephrectomies. They performed 16 simulated partial nephrectomies with PARIS for guidance and 16 simulated partial nephrectomies with only an LUS transducer for guidance. With PARIS, there was a significant reduction [30% ([Formula: see text])] in the amount of healthy tissue excised and a trend toward a more accurate dissection around the tumor and more negative margins. The combined point tracking and reprojection root-mean-square error of PARIS was 0.8 mm. PARIS' proven ability to improve key metrics of LPN surgery and qualitative feedback from surgeons about PARIS supports the hypothesis that it is an effective surgical navigation tool.
Affiliation(s)
- Philip Edgcumbe
- University of British Columbia, MD/PhD Program, Vancouver, Canada
- Rohit Singla
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Philip Pratt
- Imperial College London, Department of Surgery and Cancer, London, United Kingdom
- Caitlin Schneider
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- Christopher Nguan
- University of British Columbia, Department of Urological Sciences, Vancouver, Canada
- Robert Rohling
- University of British Columbia, Department of Electrical and Computer Engineering, Vancouver, Canada
- University of British Columbia, Department of Mechanical Engineering, Vancouver, Canada
34
Heiselman JS, Clements LW, Collins JA, Weis JA, Simpson AL, Geevarghese SK, Kingham TP, Jarnagin WR, Miga MI. Characterization and correction of intraoperative soft tissue deformation in image-guided laparoscopic liver surgery. J Med Imaging (Bellingham) 2017; 5:021203. [PMID: 29285519 DOI: 10.1117/1.jmi.5.2.021203] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2017] [Accepted: 11/21/2017] [Indexed: 12/12/2022] Open
Abstract
Laparoscopic liver surgery is challenging to perform due to a compromised ability of the surgeon to localize subsurface anatomy in the constrained environment. While image guidance has the potential to address this barrier, intraoperative factors, such as insufflation and variable degrees of organ mobilization from supporting ligaments, may generate substantial deformation. The severity of laparoscopic deformation in humans has not been characterized, and current laparoscopic correction methods do not account for the mechanics of how intraoperative deformation is applied to the liver. We first measure the degree of laparoscopic deformation at two insufflation pressures over the course of laparoscopic-to-open conversion in 25 patients. With this clinical data alongside a mock laparoscopic phantom setup, we report a biomechanical correction approach that leverages anatomically load-bearing support surfaces from ligament attachments to iteratively reconstruct and account for intraoperative deformations. Laparoscopic deformations were significantly larger than deformations associated with open surgery, and our correction approach yielded subsurface target error of [Formula: see text] and surface error of [Formula: see text] using only sparse surface data with realistic surgical extent. Laparoscopic surface data extents were examined and found to impact registration accuracy. Finally, we demonstrate viability of the correction method with clinical data.
Affiliation(s)
- Jon S Heiselman
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Logan W Clements
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Jarrod A Collins
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
- Jared A Weis
- Wake Forest School of Medicine, Department of Biomedical Engineering, Winston-Salem, North Carolina, United States
- Amber L Simpson
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- Sunil K Geevarghese
- Vanderbilt University Medical Center, Division of Hepatobiliary Surgery and Liver Transplantation, Nashville, Tennessee, United States
- T Peter Kingham
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- William R Jarnagin
- Memorial Sloan-Kettering Cancer Center, Hepatopancreatobiliary Service, Department of Surgery, New York, New York, United States
- Michael I Miga
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, United States
35
Gao Y, Li J, Li J, Wang S. Modeling the convergence accommodation of stereo vision for binocular endoscopy. Int J Med Robot 2017; 14. [PMID: 29052314 DOI: 10.1002/rcs.1866] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2017] [Revised: 07/21/2017] [Accepted: 09/01/2017] [Indexed: 11/10/2022]
Abstract
BACKGROUND The stereo laparoscope is an important tool for achieving depth perception in robot-assisted minimally invasive surgery (MIS). METHODS A dynamic convergence accommodation algorithm is proposed to improve the viewing experience and achieve accurate depth perception. Based on the principle of the human vision system, a positional kinematic model of the binocular view system is established. The imaging plane pair is rectified to ensure that the two rectified virtual optical axes intersect at the fixation target to provide immersive depth perception. RESULTS Stereo disparity was simulated with the roll and pitch movements of the binocular system. The chessboard test and the endoscopic peg transfer task were performed, and the results demonstrated the improved disparity distribution and robustness of the proposed convergence accommodation method with respect to the position of the fixation target. CONCLUSIONS This method offers a new solution for effective depth perception with the stereo laparoscopes used in robot-assisted MIS.
Affiliation(s)
- Yuanqian Gao
- School of Mechanical Engineering, Tianjin University, China
- Jinhua Li
- School of Mechanical Engineering, Tianjin University, China
- Jianmin Li
- School of Mechanical Engineering, Tianjin University, China
- Shuxin Wang
- School of Mechanical Engineering, Tianjin University, China
36
Bronte S, Bergasa LM, Pizarro D, Barea R. Model-Based Real-Time Non-Rigid Tracking. SENSORS (BASEL, SWITZERLAND) 2017; 17:s17102342. [PMID: 29036886 PMCID: PMC5677346 DOI: 10.3390/s17102342] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/19/2017] [Revised: 09/29/2017] [Accepted: 10/10/2017] [Indexed: 06/07/2023]
Abstract
This paper presents a sequential non-rigid reconstruction method that recovers the 3D shape and the camera pose of a deforming object from a video sequence and a previous shape model of the object. We take PTAM (Parallel Tracking and Mapping), a state-of-the-art sequential real-time SfM (Structure-from-Motion) engine, and upgrade it to solve non-rigid reconstruction. Our method provides a good trade-off between processing time and reconstruction error without the need for specific processing hardware, such as GPUs. We improve the original PTAM matching by using descriptor-based features, as well as smoothness priors to better constrain the 3D error. This paper works with perspective projection and deals with outliers and missing data. We evaluate the tracking algorithm's performance through different tests over several datasets of non-rigid deforming objects. Our method achieves state-of-the-art accuracy and can be used as a real-time method suitable for being embedded in portable devices.
Affiliation(s)
- Sebastián Bronte
- Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain.
- Luis M Bergasa
- Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Daniel Pizarro
- Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
- Rafael Barea
- Electronics Department, University of Alcalá, Campus Universitario, 28805 Alcalá de Henares, Spain
37
Marmol A, Peynot T, Eriksson A, Jaiprakash A, Roberts J, Crawford R. Evaluation of Keypoint Detectors and Descriptors in Arthroscopic Images for Feature-Based Matching Applications. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2017.2714150] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
38
Detmer FJ, Hettig J, Schindele D, Schostak M, Hansen C. Virtual and Augmented Reality Systems for Renal Interventions: A Systematic Review. IEEE Rev Biomed Eng 2017; 10:78-94. [PMID: 28885161 DOI: 10.1109/rbme.2017.2749527] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
PURPOSE Many virtual and augmented reality systems have been proposed to support renal interventions. This paper reviews such systems employed in the treatment of renal cell carcinoma and renal stones. METHODS A systematic literature search was performed. Inclusion criteria were virtual and augmented reality systems for radical or partial nephrectomy and renal stone treatment, excluding systems solely developed or evaluated for training purposes. RESULTS In total, 52 research papers were identified and analyzed. Most of the identified literature (87%) deals with systems for renal cell carcinoma treatment. About 44% of the systems have already been employed in clinical practice, but only 20% in studies with ten or more patients. Main challenges remaining for future research include the consideration of organ movement and deformation, human factor issues, and the conduct of large clinical studies. CONCLUSION Augmented and virtual reality systems have the potential to improve safety and outcomes of renal interventions. In the last ten years, many technical advances have led to more sophisticated systems, which are already applied in clinical practice. Further research is required to cope with current limitations of virtual and augmented reality assistance in clinical environments.
39
Garbey M, Nguyen TB, Huang AY, Fikfak V, Dunkin BJ. A method for going from 2D laparoscope to 3D acquisition of surface landmarks by a novel computer vision approach. Int J Comput Assist Radiol Surg 2017; 13:267-280. [PMID: 28861700 DOI: 10.1007/s11548-017-1655-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Accepted: 07/28/2017] [Indexed: 01/08/2023]
Abstract
PURPOSE This paper presents a method to use the Smart Trocars (our new surgical instrument recognition system), or any accurate surgical instrument localization system, for acquiring intraoperative surface data. Complex laparoscopic surgeries need a proper guidance system, which requires registering the preoperative data from a CT or MRI scan to the intraoperative patient state. The Smart Trocar can be used to localize the instruments when they come into contact with the soft tissue surface. METHOD Two successive views through the laparoscope at different angles, combined with the 3D localization of a fixed tool at one single location using the Smart Trocars, can point out visible features during the surgery and acquire their location in 3D to provide a depth map in the region of interest. In other words, our method transforms a standard laparoscope system into a system with three-dimensional registration capability. RESULTS This method was initially tested in a simulation for uncertainty assessment and then on a rigid model for verification, with an accuracy within 2 mm distance. In addition, an in vivo experiment on a pig model was also conducted to investigate how the method might be used during a physiologic respiratory cycle. CONCLUSION This method can be applied in a large number of surgical applications as a guidance system on its own or in conjunction with other navigation techniques. Our work encourages further testing with realistic surgical applications in the near future.
Affiliation(s)
- Marc Garbey
- Center for Computational Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Methodist Institute for Technology, Innovation and Education, Houston Methodist Hospital, Houston, TX, USA
- LaSIE UMR 7356 CNRS, University of La Rochelle, La Rochelle, France
- Toan B Nguyen
- Center for Computational Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Department of Computer Science, University of Houston, Houston, TX, USA
- Albert Y Huang
- Methodist Institute for Technology, Innovation and Education, Houston Methodist Hospital, Houston, TX, USA
- Vid Fikfak
- Methodist Institute for Technology, Innovation and Education, Houston Methodist Hospital, Houston, TX, USA
- Brian J Dunkin
- Methodist Institute for Technology, Innovation and Education, Houston Methodist Hospital, Houston, TX, USA
40
Bernal J, Tajkbaksh N, Sanchez FJ, Matuszewski BJ, Angermann Q, Romain O, Rustad B, Balasingham I, Pogorelov K, Debard Q, Maier-Hein L, Speidel S, Stoyanov D, Brandao P, Cordova H, Sanchez-Montes C, Gurudu SR, Fernandez-Esparrach G, Dray X, Histace A. Comparative Validation of Polyp Detection Methods in Video Colonoscopy: Results From the MICCAI 2015 Endoscopic Vision Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1231-1249. [PMID: 28182555 DOI: 10.1109/tmi.2017.2664042] [Citation(s) in RCA: 156] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Colonoscopy is the gold standard for colon cancer screening though some polyps are still missed, thus preventing early disease detection and treatment. Several computational systems have been proposed to assist polyp detection during colonoscopy but so far without consistent evaluation. The lack of publicly available annotated databases has made it difficult to compare methods and to assess if they achieve performance levels acceptable for clinical use. The Automatic Polyp Detection sub-challenge, conducted as part of the Endoscopic Vision Challenge (http://endovis.grand-challenge.org) at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2015, was an effort to address this need. In this paper, we report the results of this comparative evaluation of polyp detection methods, as well as describe additional experiments to further explore differences between methods. We define performance metrics and provide evaluation databases that allow comparison of multiple methodologies. Results show that convolutional neural networks are the state of the art. Nevertheless, it is also demonstrated that combining different methodologies can lead to an improved overall performance.
41
Reichard D, Häntsch D, Bodenstedt S, Suwelack S, Wagner M, Kenngott H, Müller-Stich B, Maier-Hein L, Dillmann R, Speidel S. Projective biomechanical depth matching for soft tissue registration in laparoscopic surgery. Int J Comput Assist Radiol Surg 2017; 12:1101-1110. [PMID: 28550405 DOI: 10.1007/s11548-017-1613-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2017] [Accepted: 05/15/2017] [Indexed: 10/19/2022]
Abstract
PURPOSE A key component of computer-assisted surgery systems is the accurate and robust registration of preoperative planning data with intraoperative sensor data. In laparoscopic surgery, this image-based registration remains challenging due to soft tissue deformations. This paper presents a novel approach for biomechanical soft tissue registration of preoperative CT data with stereo endoscopic image data. METHODS The proposed method consists of two registration steps. First, we use a 3D surface mosaic from partial surfaces reconstructed from stereo endoscopic images to initially align the biomechanical model with the intraoperative position and shape of the organ. After this initialization, the biomechanical model is projected onto newly captured surfaces, resulting in displacement boundary conditions, which in turn are used to update the biomechanical model. RESULTS The method is evaluated in silico, using a human liver model, and in vivo, using porcine data. The quantitative in silico data show stable behaviour of the biomechanical model and a root-mean-square deviation of the volume vertices of under 3 mm with adjusted biomechanical parameters. CONCLUSION This work contributes a fully automatic, featureless, non-rigid registration approach. The results of the in silico and in vivo experiments suggest that our method is able to handle dynamic deformations during surgery. Additional experiments, especially regarding human tissue behaviour, are an important next step towards clinical applications.
Affiliation(s)
- Daniel Reichard
- Karlsruhe Institute of Technology, Adenauerring 2, Bldg. 50.20, Karlsruhe, Germany.
- Dominik Häntsch
- Karlsruhe Institute of Technology, Adenauerring 2, Bldg. 50.20, Karlsruhe, Germany
- Sebastian Bodenstedt
- Karlsruhe Institute of Technology, Adenauerring 2, Bldg. 50.20, Karlsruhe, Germany
- Stefan Suwelack
- Karlsruhe Institute of Technology, Adenauerring 2, Bldg. 50.20, Karlsruhe, Germany
- Martin Wagner
- Department of General, Abdominal and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
- Hannes Kenngott
- Department of General, Abdominal and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
- Beat Müller-Stich
- Department of General, Abdominal and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
- Lena Maier-Hein
- Junior Group Computer-Assisted Interventions, Division of Medical and Biological Informatics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Rüdiger Dillmann
- Karlsruhe Institute of Technology, Adenauerring 2, Bldg. 50.20, Karlsruhe, Germany
- Stefanie Speidel
- Karlsruhe Institute of Technology, Adenauerring 2, Bldg. 50.20, Karlsruhe, Germany
42
Penza V, De Momi E, Enayati N, Chupin T, Ortiz J, Mattos LS. EnViSoRS: Enhanced Vision System for Robotic Surgery. A User-Defined Safety Volume Tracking to Minimize the Risk of Intraoperative Bleeding. Front Robot AI 2017. [DOI: 10.3389/frobt.2017.00015] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
43
Chhatkuli A, Pizarro D, Bartoli A, Collins T. A Stable Analytical Framework for Isometric Shape-from-Template by Surface Integration. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2017; 39:833-850. [PMID: 27164575 DOI: 10.1109/tpami.2016.2562622] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Shape-from-Template (SfT) reconstructs the shape of a deforming surface from a single image, a 3D template and a deformation prior. For isometric deformations, this is a well-posed problem. However, previous methods which require no initialization break down when the perspective effects are small, which happens when the object is small or viewed from a large distance. That is, they do not handle all projection geometries. We propose stable SfT methods that accurately reconstruct the 3D shape for all projection geometries. We follow the existing approach of using first-order differential constraints and obtain local analytical solutions for depth and the first-order quantities: the depth-gradient or the surface normal. Previous methods use the depth solution directly to obtain the 3D shape. We prove that the depth solution is unstable when the projection geometry tends to affine, while the solution for the first-order quantities remains stable for all projection geometries. We therefore propose to solve SfT by first estimating the first-order quantities (either the depth-gradient or the surface normal) and integrating them to obtain shape. We validate our approach with extensive synthetic and real-world experiments and obtain significantly more accurate results than previous initialization-free methods. Our approach does not require any optimization, which makes it very fast.
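The central idea of this abstract (estimate stable first-order quantities, then integrate them to recover shape) can be illustrated with a toy integrator. This is only an assumed minimal sketch: it recovers a depth map from its finite-difference gradients by cumulative summation along a single path, whereas practical pipelines would use a least-squares (Poisson) integration that is robust to noisy gradients.

```python
import numpy as np

def integrate_depth_gradient(gx, gy):
    """Recover a depth map (up to an additive constant) from its gradients.

    gx[i, j] ~ depth[i, j] - depth[i, j-1]  (horizontal finite difference)
    gy[i, j] ~ depth[i, j] - depth[i-1, j]  (vertical finite difference)
    Integrates along the top row, then down each column.
    """
    h, w = gx.shape
    depth = np.zeros((h, w))
    # integrate along the top row (anchor depth[0, 0] = 0)
    depth[0, :] = np.cumsum(np.concatenate([[0.0], gx[0, 1:]]))
    # integrate each column downward from the top row
    if h > 1:
        depth[1:, :] = depth[0, :] + np.cumsum(gy[1:, :], axis=0)
    return depth
```

On noise-free gradients of a smooth surface this recovers the surface exactly up to the unknown constant, which is the integration ambiguity the paper's formulation also carries.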
44
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007]
45
Zhang Y, Wirkert SJ, Iszatt J, Kenngott H, Wagner M, Mayer B, Stock C, Clancy NT, Elson DS, Maier-Hein L. Tissue classification for laparoscopic image understanding based on multispectral texture analysis. J Med Imaging (Bellingham) 2017; 4:015001. [PMID: 28149926] [DOI: 10.1117/1.jmi.4.1.015001]
Abstract
Intraoperative tissue classification is one of the prerequisites for providing context-aware visualization in computer-assisted minimally invasive surgeries. As many anatomical structures are difficult to differentiate in conventional RGB medical images, we propose a classification method based on multispectral image patches. In a comprehensive ex vivo study, we show through statistical analysis that (1) multispectral imaging data are superior to RGB data for organ tissue classification when used in conjunction with widely applied feature descriptors and (2) combining the tissue texture with the reflectance spectrum improves the classification performance. The classifier reaches an accuracy of 98.4% on our dataset. Multispectral tissue analysis could thus evolve as a key enabling technique in computer-assisted laparoscopy.
Affiliation(s)
- Yan Zhang
- German Cancer Research Center (DKFZ), Department of Computer Assisted Medical Interventions, Im Neuenheimer Feld 581, Heidelberg 69120, Germany
- Sebastian J Wirkert
- German Cancer Research Center (DKFZ), Department of Computer Assisted Medical Interventions, Im Neuenheimer Feld 581, Heidelberg 69120, Germany
- Justin Iszatt
- German Cancer Research Center (DKFZ), Department of Computer Assisted Medical Interventions, Im Neuenheimer Feld 581, Heidelberg 69120, Germany
- Hannes Kenngott
- Heidelberg University Hospital, Department for General, Visceral and Transplantation Surgery, International Office, Im Neuenheimer Feld 400, Heidelberg 69120, Germany
- Martin Wagner
- Heidelberg University Hospital, Department for General, Visceral and Transplantation Surgery, International Office, Im Neuenheimer Feld 400, Heidelberg 69120, Germany
- Benjamin Mayer
- Heidelberg University Hospital, Department for General, Visceral and Transplantation Surgery, International Office, Im Neuenheimer Feld 400, Heidelberg 69120, Germany
- Christian Stock
- University of Heidelberg, Institute of Medical Biometry and Informatics, Im Neuenheimer Feld 130.3, Heidelberg 69120, Germany
- Neil T Clancy
- The Hamlyn Centre, Imperial College London, Bessemer Building, South Kensington Campus, London SW7 2AZ, United Kingdom; Imperial College London, Department of Surgery and Cancer, South Kensington Campus, London SW7 2AZ, United Kingdom
- Daniel S Elson
- The Hamlyn Centre, Imperial College London, Bessemer Building, South Kensington Campus, London SW7 2AZ, United Kingdom; Imperial College London, Department of Surgery and Cancer, South Kensington Campus, London SW7 2AZ, United Kingdom
- Lena Maier-Hein
- German Cancer Research Center (DKFZ), Department of Computer Assisted Medical Interventions, Im Neuenheimer Feld 581, Heidelberg 69120, Germany
46
Combining Local-Physical and Global-Statistical Models for Sequential Deformable Shape from Motion. Int J Comput Vis 2016. [DOI: 10.1007/s11263-016-0972-8]
47
Brudfors M, García-Vázquez V, Sesé-Lucio B, Marinetto E, Desco M, Pascau J. ConoSurf: Open-source 3D scanning system based on a conoscopic holography device for acquiring surgical surfaces. Int J Med Robot 2016; 13. [PMID: 27868345] [PMCID: PMC5638071] [DOI: 10.1002/rcs.1788]
Abstract
BACKGROUND A difficulty in computer-assisted interventions is acquiring the patient's anatomy intraoperatively. Standard modalities have several limitations: low image quality (ultrasound), radiation exposure (computed tomography) or high cost (magnetic resonance imaging). An alternative approach uses a tracked pointer; however, the pointer causes tissue deformation and requires sterilization. Recent proposals utilizing a tracked conoscopic holography device have shown promising results without the aforementioned drawbacks. METHODS We have developed an open-source software system that enables real-time surface scanning using a conoscopic holography device and a wide variety of tracking systems, integrated into pre-existing and well-supported software solutions. RESULTS The mean target registration error of point measurements was 1.46 mm. For a quick guidance scan, surface reconstruction improved the surface registration error compared with point-set registration. CONCLUSIONS We have presented a system enabling real-time surface scanning using a tracked conoscopic holography device. Results show that it can be useful for acquiring the patient's anatomy during surgery.
Affiliation(s)
- Mikael Brudfors
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain
- Begoña Sesé-Lucio
- Instituto de Investigación Sanitaria Gregorio Marañón (IiSGM), Madrid, Spain
- Eugenio Marinetto
- Instituto de Investigación Sanitaria Gregorio Marañón (IiSGM), Madrid, Spain
- Manuel Desco
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón (IiSGM), Madrid, Spain; Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain
- Javier Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón (IiSGM), Madrid, Spain; Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, Spain
48
Monge F, Shakir DI, Lejeune F, Morandi X, Navab N, Jannin P. Acquisition models in intraoperative positron surface imaging. Int J Comput Assist Radiol Surg 2016; 12:691-703. [PMID: 27714566] [DOI: 10.1007/s11548-016-1487-z]
Abstract
PURPOSE Intraoperative imaging aims at identifying residual tumor during surgery. Positron Surface Imaging (PSI) is one solution that helps surgeons better detect the resection margins of brain tumors, leading to improved patient outcomes. The system relies on a tracked freehand beta probe using an ¹⁸F-based radiotracer. Some acquisition models have been proposed in the literature to enhance image quality, but no comparative validation study has been performed for PSI. METHODS In this study, we investigated the performance of different acquisition models by considering validation criteria and normalized metrics. We proposed a reference-based validation framework to perform the comparative study between acquisition models and a basic method. We estimated the performance of several acquisition models in light of four validation criteria: efficiency, computational speed, spatial accuracy and tumor contrast. RESULTS Selected acquisition models outperformed the basic method, albeit with the real-time aspect compromised. One acquisition model yielded the best performance of all according to the validation criteria: efficiency (1-Spe: 0.1, Se: 0.94), spatial accuracy (max Dice: 0.77) and tumor contrast (max T/B: 5.2). We also found that above a minimum threshold value of the sampling rate, the reconstruction quality does not vary significantly. CONCLUSION Our method allowed the comparison of different acquisition models and highlighted one of them according to our validation criteria. This novel approach can be extended to 3D datasets for validation of future acquisition models dedicated to intraoperative guidance in brain surgery.
Affiliation(s)
- Frédéric Monge
- LTSI INSERM, UMR 1099, Campus de Villejean, Université de Rennes 1, 2, Avenue du Pr. Léon Bernard, 35043, Rennes Cedex, France
- Xavier Morandi
- LTSI INSERM, UMR 1099, Campus de Villejean, Université de Rennes 1, 2, Avenue du Pr. Léon Bernard, 35043, Rennes Cedex, France; CHU Rennes, Service de Neurochirurgie, Rennes, 35000, France
- Nassir Navab
- CAMP, Technische Universität München, Munich, Germany
- Pierre Jannin
- LTSI INSERM, UMR 1099, Campus de Villejean, Université de Rennes 1, 2, Avenue du Pr. Léon Bernard, 35043, Rennes Cedex, France
49
Zhang Z, Xin Y, Liu B, Li WXY, Lee KH, Ng CF, Stoyanov D, Cheung RCC, Kwok KW. FPGA-Based High-Performance Collision Detection: An Enabling Technique for Image-Guided Robotic Surgery. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00051]
50
Ong R, Glisson CL, Burgner-Kahrs J, Simpson A, Danilchenko A, Lathrop R, Herrell SD, Webster RJ, Miga M, Galloway RL. A novel method for texture-mapping conoscopic surfaces for minimally invasive image-guided kidney surgery. Int J Comput Assist Radiol Surg 2016; 11:1515-26. [PMID: 26758889] [PMCID: PMC4942405] [DOI: 10.1007/s11548-015-1339-2]
Abstract
PURPOSE Organ-level registration is critical to image-guided therapy in soft tissue. This is especially important in organs such as the kidney, which can move freely. We have developed a registration method that combines three-dimensional locations from a holographic conoscope with an endoscopically obtained textured surface. Combining these data sources allows clear decisions as to the tissue from which the points arise. METHODS By localizing the conoscope's laser dot in the endoscopic space, we register the textured surface to the cloud of conoscopic points. This allows the cloud of points to be filtered for only those arising from the kidney surface. Once a valid cloud is obtained, we can use standard surface registration techniques to perform the image-space to physical-space registration. Since our methods use two distinct data sources, we test for spatial accuracy and characterize temporal effects in phantoms and in ex vivo porcine and human kidneys. In addition, we use an industrial robot to provide controlled motion and positioning for characterizing temporal effects. RESULTS Our initial surface acquisitions are hand-held, so acquiring a surface takes approximately 55 s. At that rate we see no temporal effects due to acquisition synchronization or probe speed. Our surface registrations were able to find applied targets with submillimeter target registration errors. CONCLUSION The results showed that the textured surfaces could be reconstructed with submillimetric mean registration errors. While this paper focuses on kidney applications, the method could be applied to any anatomical structure where a line of sight can be created via open or minimally invasive surgical techniques.
Affiliation(s)
- Rowena Ong
- Medtronic Surgical Technologies, Louisville, CO, 80027, USA
- Courtenay L Glisson
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- Amber Simpson
- Memorial Sloan Kettering Cancer Center, New York City, NY, USA
- Ray Lathrop
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- S Duke Herrell
- Department of Urologic Surgery, Vanderbilt Medical Center, Nashville, TN, 37235, USA
- Robert J Webster
- Department of Mechanical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- Michael Miga
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- Robert L Galloway
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, 37235, USA