1. Bongratz F, Rickmann AM, Wachinger C. Abdominal organ segmentation via deep diffeomorphic mesh deformations. Sci Rep 2023; 13:18270. PMID: 37880251; PMCID: PMC10600339; DOI: 10.1038/s41598-023-45435-2.
Abstract
Abdominal organ segmentation from CT and MRI is an essential prerequisite for surgical planning and computer-aided navigation systems. It is challenging due to the high variability in the shape, size, and position of abdominal organs. Three-dimensional numeric representations of abdominal shapes with point-wise correspondence to a template are also important for quantitative and statistical analyses. Recently, template-based surface extraction methods have shown promising advances for direct mesh reconstruction from volumetric scans. However, the generalization of these deep learning-based approaches to different organs and datasets, a crucial property for deployment in clinical environments, has not yet been assessed. We close this gap and employ template-based mesh reconstruction methods for joint liver, kidney, pancreas, and spleen segmentation. Our experiments on manually annotated CT and MRI data reveal limited generalization capabilities of previous methods to organs of different geometry and weak performance on small datasets. We alleviate these issues with a novel deep diffeomorphic mesh-deformation architecture and an improved training scheme. The resulting method, UNetFlow, generalizes well to all four organs and can be easily fine-tuned on new data. Moreover, we propose a simple registration-based post-processing that aligns voxel and mesh outputs to boost segmentation accuracy.
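The deep diffeomorphic deformations named in the title are commonly built by integrating a stationary velocity field. The sketch below shows the standard scaling-and-squaring integration in NumPy as a generic illustration of that technique; the field shapes and step count are assumptions, and this is not the UNetFlow architecture itself.

    # Minimal sketch: integrating a stationary velocity field into an
    # (approximately) diffeomorphic displacement field via scaling and
    # squaring. Generic technique only, not the paper's network.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def integrate_velocity(v, n_steps=6):
        """v: velocity field of shape (2, H, W). Returns displacement (2, H, W)."""
        disp = v / (2 ** n_steps)          # initial small step
        H, W = v.shape[1:]
        grid = np.mgrid[0:H, 0:W].astype(float)
        for _ in range(n_steps):           # compose the field with itself
            coords = grid + disp           # where each pixel currently maps to
            warped = np.stack([
                map_coordinates(disp[c], coords, order=1, mode='nearest')
                for c in range(2)
            ])
            disp = disp + warped           # u'(x) = u(x) + u(x + u(x))
        return disp

    # Toy usage: a random velocity field on a 64x64 grid.
    rng = np.random.default_rng(0)
    v = rng.normal(scale=0.5, size=(2, 64, 64))
    phi = integrate_velocity(v)
    print(phi.shape)  # (2, 64, 64)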
Affiliation(s)
- Fabian Bongratz: Department of Radiology, Technical University of Munich, Munich, 81675, Germany; Munich Center for Machine Learning, Munich, Germany.
- Anne-Marie Rickmann: Department of Radiology, Technical University of Munich, Munich, 81675, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-University, Munich, 80336, Germany.
- Christian Wachinger: Department of Radiology, Technical University of Munich, Munich, 81675, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-University, Munich, 80336, Germany; Munich Center for Machine Learning, Munich, Germany.
2. Rezaei SR, Ahmadi A. A GAN-based method for 3D lung tumor reconstruction boosted by a knowledge transfer approach. Multimedia Tools and Applications 2023:1-27. PMID: 37362675; PMCID: PMC10106883; DOI: 10.1007/s11042-023-15232-0.
Abstract
Three-dimensional (3D) image reconstruction of tumors is one of the most effective techniques for accurately visualizing tumor structures at high resolution for treatment, and it requires a set of two-dimensional medical images such as CT slices. In this paper, we propose a novel method based on generative adversarial networks (GANs) for 3D lung tumor reconstruction from CT images. The proposed method consists of three stages: lung segmentation, tumor segmentation, and 3D lung tumor reconstruction. Lung and tumor segmentation are performed using snake optimization and Gustafson-Kessel (GK) clustering. In the 3D reconstruction stage, features are first extracted with a pre-trained VGG model from the tumors detected in the 2D CT slices. A sequence of extracted features is then fed into an LSTM, which outputs compressed features. Finally, the compressed features are used as input to a GAN whose generator is responsible for reconstructing the 3D image of the lung tumor. The main novelty of this paper is, to the best of our knowledge, the first use of a GAN to reconstruct a 3D lung tumor model. We also use knowledge transfer to extract features from 2D images, which speeds up the training process. On the LUNA dataset, the proposed model outperforms the state of the art: according to the HD and ED metrics, it achieves the lowest values, 3.02 and 1.06, respectively, compared with other methods. The experimental results show that the proposed method performs better than previous similar methods and can help practitioners in the treatment process.
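A minimal PyTorch sketch of the three-stage feature pipeline described above (pre-trained VGG features per slice, LSTM compression, a 3D generator). All layer sizes, the 12-slice sequence, and the 32x32x32 output grid are illustrative assumptions, not the paper's exact configuration.

    # Hedged sketch: VGG features -> LSTM -> GAN generator producing a
    # 3D volume. Sizes are illustrative, not the paper's design.
    import torch
    import torch.nn as nn
    from torchvision.models import vgg16

    class TumorGenerator(nn.Module):
        def __init__(self, z_dim=256):
            super().__init__()
            self.fc = nn.Linear(z_dim, 128 * 4 * 4 * 4)
            self.net = nn.Sequential(              # 4^3 -> 32^3 voxel grid
                nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, z):
            x = self.fc(z).view(-1, 128, 4, 4, 4)
            return self.net(x)

    # Frozen, pre-trained VGG16 as the transferred feature extractor.
    backbone = vgg16(weights='IMAGENET1K_V1').features.eval()
    pool = nn.AdaptiveAvgPool2d(1)
    lstm = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
    gen = TumorGenerator(z_dim=256)

    slices = torch.randn(1, 12, 3, 224, 224)       # 12 CT slices as RGB crops
    with torch.no_grad():
        feats = torch.stack([pool(backbone(slices[:, t])).flatten(1)
                             for t in range(slices.shape[1])], dim=1)
    _, (h, _) = lstm(feats)                         # compress the sequence
    volume = gen(h[-1])                             # (1, 1, 32, 32, 32)
    print(volume.shape)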
Affiliation(s)
- Seyed Reza Rezaei: Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, Tehran, Iran.
- Abbas Ahmadi: Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, Tehran, Iran.
3. Gholinejad M, Pelanis E, Aghayan D, Fretland ÅA, Edwin B, Terkivatan T, Elle OJ, Loeve AJ, Dankelman J. Generic surgical process model for minimally invasive liver treatment methods. Sci Rep 2022; 12:16684. PMID: 36202857; PMCID: PMC9537522; DOI: 10.1038/s41598-022-19891-1.
Abstract
Surgical process modelling is an innovative approach that aims to simplify the challenges involved in improving surgeries through quantitative analysis of a well-established model of surgical activities. In this paper, surgical process modelling strategies are applied to the analysis of different Minimally Invasive Liver Treatments (MILTs), including ablation and surgical resection of liver lesions, and a generic surgical process model covering these MILT variants is introduced. The generic surgical process model was established at three granularity levels. The model, encompassing thirteen phases, was verified against videos of MILT procedures and interviews with surgeons. It covers all surgical and interventional activities and the connections between them, and provides a foundation for extensive quantitative analysis and simulation of MILT procedures for improving computer-assisted surgery systems, surgeon training and evaluation, surgeon guidance and planning systems, and the evaluation of new technologies.
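As a sketch of how such a multi-granularity model (phase, step, activity) can be represented for quantitative analysis, the following hypothetical data structure illustrates the idea; the phase and step names are invented examples, not the paper's thirteen phases.

    # Illustrative only: a surgical process model at three granularity
    # levels (phase -> step -> activity) as a small tree structure.
    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        name: str
        duration_s: float = 0.0

    @dataclass
    class Step:
        name: str
        activities: list[Activity] = field(default_factory=list)

    @dataclass
    class Phase:
        name: str
        steps: list[Step] = field(default_factory=list)

    # Hypothetical phases, not the paper's model.
    model = [
        Phase("Patient preparation",
              [Step("Positioning", [Activity("Place wedge", 60.0)])]),
        Phase("Resection",
              [Step("Parenchyma transection",
                    [Activity("Ultrasonic dissection", 900.0)])]),
    ]
    total = sum(a.duration_s for p in model for s in p.steps for a in s.activities)
    print(f"{len(model)} phases, total annotated time {total:.1f} s")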
Affiliation(s)
- Maryam Gholinejad: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands.
- Egidius Pelanis: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Medical Faculty, University of Oslo, Oslo, Norway.
- Davit Aghayan: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University After M. Heratsi, Yerevan, Armenia.
- Åsmund Avdem Fretland: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital, Oslo, Norway.
- Bjørn Edwin: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Medical Faculty, University of Oslo, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital, Oslo, Norway.
- Turkan Terkivatan: Department of Surgery, Division of HPB and Transplant Surgery, Erasmus MC, University Medical Centre Rotterdam, Rotterdam, The Netherlands.
- Ole Jakob Elle: The Intervention Centre, Oslo University Hospital, Oslo, Norway.
- Arjo J Loeve: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands.
- Jenny Dankelman: Department of Biomechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands.
4. Chu Y, Yang X, Li H, Ai D, Ding Y, Fan J, Song H, Yang J. Multi-level feature aggregation network for instrument identification of endoscopic images. Phys Med Biol 2020; 65:165004. PMID: 32344381; DOI: 10.1088/1361-6560/ab8dda.
Abstract
Identification of surgical instruments is crucial for understanding surgical scenarios and providing assistance in endoscopic image-guided surgery. This study proposes a novel multilevel feature-aggregated deep convolutional neural network (MLFA-Net) for identifying surgical instruments in endoscopic images. First, a global feature augmentation layer is created on the top layer of the backbone to improve the localization ability of object identification by boosting high-level semantic information in the feature flow network. Second, a modified interaction path for cross-channel features is proposed to increase the nonlinear combination of features at the same level and improve the efficiency of information propagation. Third, a multiview feature fusion branch is built to aggregate the location-sensitive information of the same level in different views, increase the information diversity of the features, and enhance object localization. By utilizing this latent information, the proposed multilevel feature aggregation network accomplishes multitask instrument identification with a single network. Three tasks are handled: object detection, which classifies the type of instrument and locates its border; mask segmentation, which detects the instrument shape; and pose estimation, which detects the keypoints of instrument parts. The experiments are performed on laparoscopic images from the MICCAI 2017 Endoscopic Vision Challenge, and the mean average precision (AP) and average recall (AR) are used to quantify the results. For bounding-box regression, the AP and AR are 79.1% and 63.2%, respectively; for mask segmentation, 78.1% and 62.1%; and for pose estimation, 67.1% and 55.7%. The experiments demonstrate that our method efficiently improves instrument recognition accuracy in endoscopic images and outperforms other state-of-the-art methods.
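The multilevel aggregation idea resembles a feature-pyramid combination of backbone levels. The sketch below shows a generic top-down aggregation module with lateral 1x1 convolutions; it illustrates the general mechanism only, since MLFA-Net's cross-channel interaction path and multiview fusion branch are not reproduced, and the channel sizes are assumptions.

    # Generic FPN-style multi-level feature aggregation sketch.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureAggregator(nn.Module):
        def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
            super().__init__()
            self.lateral = nn.ModuleList(
                nn.Conv2d(c, out_channels, 1) for c in in_channels)
            self.smooth = nn.ModuleList(
                nn.Conv2d(out_channels, out_channels, 3, padding=1)
                for _ in in_channels)

        def forward(self, feats):            # feats: low -> high level
            laterals = [l(f) for l, f in zip(self.lateral, feats)]
            for i in range(len(laterals) - 2, -1, -1):   # top-down merge
                laterals[i] = laterals[i] + F.interpolate(
                    laterals[i + 1], size=laterals[i].shape[-2:], mode='nearest')
            return [s(x) for s, x in zip(self.smooth, laterals)]

    # Toy usage with three backbone levels at decreasing resolution.
    feats = [torch.randn(1, 256, 64, 64),
             torch.randn(1, 512, 32, 32),
             torch.randn(1, 1024, 16, 16)]
    outs = FeatureAggregator()(feats)
    print([o.shape for o in outs])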
Affiliation(s)
- Yakui Chu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China. (Authors contributed equally to this article.)
5. Joseph SS, Dennisan A. Three Dimensional Reconstruction Models for Medical Modalities: A Comprehensive Investigation and Analysis. Curr Med Imaging 2020; 16:653-668. PMID: 32723236; DOI: 10.2174/1573405615666190124165855.
Abstract
BACKGROUND: Image reconstruction is the mathematical process that converts the signals obtained from the scanning machine into an image. The reconstructed image plays a fundamental role in surgical planning and in medical research. DISCUSSION: This paper presents the first comprehensive survey of the literature on disease-related medical image reconstruction, offering a categorical study of the techniques and analyzing the advantages and disadvantages of each. Images obtained by various imaging modalities, including MRI, CT, CTA, stereoradiography, and light field microscopy, are covered. The techniques are also compared on the basis of reconstruction technique, imaging modality and visualization, disease, metrics for 3D reconstruction accuracy, dataset and execution time, and evaluation of the technique. CONCLUSION: The survey assesses the suitable reconstruction technique for each organ, draws general conclusions, and discusses future directions.
Affiliation(s)
- Sushitha Susan Joseph: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India.
- Aju Dennisan: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India.
6. Chu Y, Li H, Li X, Ding Y, Yang X, Ai D, Chen X, Wang Y, Yang J. Endoscopic image feature matching via motion consensus and global bilateral regression. Comput Methods Programs Biomed 2020; 190:105370. PMID: 32036206; DOI: 10.1016/j.cmpb.2020.105370.
Abstract
BACKGROUND AND OBJECTIVE: Feature matching of endoscopic images is of crucial importance in many clinical applications, such as object tracking and surface reconstruction. However, in the presence of low texture, specular reflections, and deformation, feature matching methods designed for natural scenes face great challenges in minimally invasive surgery (MIS) scenarios. We propose a novel motion consensus-based method for endoscopic image feature matching to address these problems. METHODS: Our method starts by correcting radial distortion with a spherical projection model and removing specular reflection regions with an adaptive detection method, which helps eliminate image distortion and reduce the number of outliers. We solve the matching problem with a two-stage strategy that progressively estimates a consensus of inliers, yielding a precisely smoothed motion field. First, we construct a spatial motion field from candidate feature matches and estimate its maximum posterior with the expectation-maximization algorithm, which is computationally efficient and quickly obtains a smoothed motion field. Second, we extend the smoothed motion field to the affine domain and refine it with bilateral regression to preserve locally subtle motions. True matches can be identified by checking the difference between each feature motion and the estimated field. RESULTS: Evaluations are performed on two simulated deformation datasets (218 images) and four different types of endoscopic datasets (1032 images). Our method is compared with three state-of-the-art methods and achieves the best performance on affine transformation and nonrigid deformation simulations, with inlier ratios of 86.7% and 94.3%, sensitivities of 90.0% and 96.2%, precisions of 88.2% and 93.9%, and F1-scores of 89.1% and 95.0%, respectively. On clinical datasets, the proposed method achieves an average reprojection error of 3.7 pixels and consistent performance in multi-image correspondence across sequential images. Furthermore, we present a surface reconstruction result from rhinoscopic images to validate the reliability of our method, which shows high-quality feature matching. CONCLUSIONS: The proposed motion consensus-based feature matching method proves effective and robust for endoscopic image correspondence, demonstrating its capability to generate reliable feature matches for surface reconstruction and other meaningful applications in MIS scenarios.
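The core consensus step can be pictured as an EM-style alternation between estimating a smooth motion field and re-weighting matches as inliers or outliers. The NumPy sketch below implements a heavily simplified version of that idea, with no affine extension or bilateral regression; all constants and the smoothing kernel are illustrative assumptions.

    # Simplified motion-consensus sketch: inliers follow a smooth field,
    # outliers follow a uniform density; alternate field and posteriors.
    import numpy as np

    def motion_consensus(pts, disp, iters=10, sigma_field=30.0, out_lik=1e-4):
        n = len(pts)
        w = np.full(n, 0.9)                       # initial inlier posteriors
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * sigma_field ** 2))  # spatial smoothing weights
        for _ in range(iters):
            # M-step: field = inlier-weighted local average of displacements.
            wk = K * w[None, :]
            field = (wk @ disp) / wk.sum(1, keepdims=True)
            r2 = ((disp - field) ** 2).sum(1)
            var = (w * r2).sum() / (2 * w.sum()) + 1e-6   # isotropic 2D variance
            # E-step: posterior of being an inlier vs. a uniform outlier.
            lik_in = np.exp(-r2 / (2 * var)) / (2 * np.pi * var)
            w = 0.9 * lik_in / (0.9 * lik_in + 0.1 * out_lik)
        return w > 0.5, field

    # Toy usage: a smooth rightward flow with a few gross outliers.
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 200, size=(100, 2))
    disp = np.tile([5.0, 0.0], (100, 1)) + rng.normal(0, 0.5, (100, 2))
    disp[:5] += rng.uniform(30, 60, (5, 2))       # inject outliers
    inliers, _ = motion_consensus(pts, disp)
    print(inliers.sum(), "of 100 matches kept")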
Affiliation(s)
- Yakui Chu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Heng Li: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Xu Li: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Yuan Ding: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Xilin Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Danni Ai: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Xiaohong Chen: Department of Otolaryngology, Head and Neck Surgery, Beijing Tongren Hospital, Beijing 100730, China.
- Yongtian Wang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Jian Yang: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
7. Teatini A, Pérez de Frutos J, Eigl B, Pelanis E, Aghayan DL, Lai M, Kumar RP, Palomar R, Edwin B, Elle OJ. Influence of sampling accuracy on augmented reality for laparoscopic image-guided surgery. Minim Invasiv Ther 2020; 30:229-238. DOI: 10.1080/13645706.2020.1727524.
Affiliation(s)
- Andrea Teatini: The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway.
- Javier Pérez de Frutos: SINTEF Digital, SINTEF A.S, Trondheim, Norway; Department of Computer Science, NTNU, Trondheim, Norway.
- Egidijus Pelanis: The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway.
- Davit L. Aghayan: The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University, Yerevan, Armenia.
- Marco Lai: Philips Research, High Tech, Eindhoven, The Netherlands.
- Rafael Palomar: The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway; Department of Computer Science, NTNU, Trondheim, Norway.
- Bjørn Edwin: The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway; Hepato-Pancreatic-Biliary Surgery, Oslo University Hospital, Oslo, Norway.
- Ole Jakob Elle: The Intervention Centre, Oslo University Hospital Rikshospitalet, Oslo, Norway; SINTEF Digital, SINTEF A.S, Trondheim, Norway.
8. Teatini A, Pelanis E, Aghayan DL, Kumar RP, Palomar R, Fretland ÅA, Edwin B, Elle OJ. The effect of intraoperative imaging on surgical navigation for laparoscopic liver resection surgery. Sci Rep 2019; 9:18687. PMID: 31822701; PMCID: PMC6904553; DOI: 10.1038/s41598-019-54915-3.
Abstract
Conventional surgical navigation systems rely on preoperative imaging to provide guidance. In laparoscopic liver surgery, insufflation of the abdomen (pneumoperitoneum) can deform the liver, introducing inaccuracies in the correspondence between the preoperative images and the intraoperative reality. This study evaluates the improvements provided by intraoperative imaging for laparoscopic liver surgical navigation, when displayed as augmented reality (AR). Significant differences were found in the accuracy of the AR, in favor of intraoperative imaging. In addition, the results showed an effect of user-induced error: image-to-patient registration based on annotations performed by clinicians was 33% less accurate than image-to-patient registration algorithms that do not depend on user annotations. Hence, to achieve accurate surgical navigation for laparoscopic liver surgery, intraoperative imaging is recommended to compensate for deformation. Moreover, user annotation errors may lead to inaccuracies in registration processes.
Affiliation(s)
- Andrea Teatini: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway.
- Egidijus Pelanis: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway.
- Davit L Aghayan: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University, Yerevan, Armenia.
- Rafael Palomar: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Computer Science, NTNU, Gjøvik, Norway.
- Åsmund Avdem Fretland: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway; Department of Hepato-Pancreatic-Biliary Surgery, Oslo University Hospital, Oslo, Norway.
- Bjørn Edwin: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway; Department of Hepato-Pancreatic-Biliary Surgery, Oslo University Hospital, Oslo, Norway.
- Ole Jakob Elle: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway.
9. Pelanis E, Kumar RP, Aghayan DL, Palomar R, Fretland ÅA, Brun H, Elle OJ, Edwin B. Use of mixed reality for improved spatial understanding of liver anatomy. Minim Invasiv Ther 2019; 29:154-160. PMID: 31116053; DOI: 10.1080/13645706.2019.1616558.
Abstract
Introduction: In liver surgery, medical images from preoperative computed tomography and magnetic resonance imaging are the basis for the decision-making process. These images are used in surgery planning and guidance, especially for parenchyma-sparing hepatectomies. Though medical images are commonly visualized in two dimensions (2D), surgeons need to mentally reconstruct this information in three dimensions (3D) for a spatial understanding of the anatomy. The aim of this work is to investigate whether the use of a 3D model visualized in mixed reality with Microsoft HoloLens increases spatial understanding of the liver, compared with the conventional use of 2D images. Material and methods: In this study, clinicians had to identify the liver segments associated with lesions. Results: Twenty-eight clinicians with varying medical experience were recruited for the study. From a total of 150 lesions, 89 were correctly assigned, with no significant difference between the modalities. The median time for correct identification was 23.5 [4-138] s using the magnetic resonance images and 6.0 [1-35] s using the HoloLens (p < 0.001). Conclusions: The use of 3D liver models in mixed reality significantly decreases the time for tasks requiring a spatial understanding of the organ. This may significantly decrease operating time and improve the use of resources.
Affiliation(s)
- Egidijus Pelanis: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway.
- Rahul P Kumar: The Intervention Centre, Oslo University Hospital, Oslo, Norway.
- Davit L Aghayan: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway; Department of Surgery N1, Yerevan State Medical University after M. Heratsi, Yerevan, Armenia.
- Rafael Palomar: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Computer Science, NTNU, Gjøvik, Norway.
- Åsmund A Fretland: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital - Rikshospitalet, Oslo, Norway.
- Henrik Brun: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Clinic for Pediatric Cardiology, Oslo University Hospital - Rikshospitalet, Oslo, Norway.
- Ole Jakob Elle: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Informatics, The Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway.
- Bjørn Edwin: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway; Department of HPB Surgery, Oslo University Hospital - Rikshospitalet, Oslo, Norway.
10. Sdiri B, Kaaniche M, Cheikh FA, Beghdadi A, Elle OJ. Efficient Enhancement of Stereo Endoscopic Images Based on Joint Wavelet Decomposition and Binocular Combination. IEEE Trans Med Imaging 2019; 38:33-45. PMID: 29994612; DOI: 10.1109/TMI.2018.2853808.
Abstract
The success of minimally invasive interventions and remarkable technological and medical progress have made endoscopic image enhancement a very active research field. Owing to the intrinsic characteristics of the endoscopic domain and the surgical procedure, stereo endoscopic images may suffer from different degradations that affect their quality. Therefore, to provide surgeons with better visual feedback and improve the outcomes of possible subsequent processing steps, such as 3-D organ reconstruction/registration, it is desirable to improve stereo endoscopic image quality. To this end, we propose two joint enhancement methods that operate in the wavelet transform domain. More precisely, by resorting to a joint wavelet decomposition, the wavelet subbands of the right and left views are processed simultaneously to exploit binocular vision properties. While the first proposed technique combines only the approximation subbands of both views, the second combines all the wavelet subbands, yielding inter-view processing fully adapted to the local features of the stereo endoscopic images. Experimental results, carried out on various stereo endoscopic datasets, demonstrate the efficiency of the proposed enhancement methods in terms of perceived visual image quality.
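A rough sketch of the first variant (combining only the approximation subbands of the two views) using PyWavelets; the simple averaging rule stands in for the paper's binocular combination model and is an assumption, as are the wavelet and decomposition level.

    # Joint wavelet decomposition sketch: share approximation content
    # between the two views, keep each view's detail subbands.
    import numpy as np
    import pywt

    def joint_enhance(left, right, wavelet='db4', level=2):
        cl = pywt.wavedec2(left, wavelet, level=level)
        cr = pywt.wavedec2(right, wavelet, level=level)
        approx = 0.5 * (cl[0] + cr[0])          # binocular combination stand-in
        out_l = pywt.waverec2([approx] + cl[1:], wavelet)
        out_r = pywt.waverec2([approx] + cr[1:], wavelet)
        return out_l, out_r

    # Toy usage on random same-size "stereo" frames.
    rng = np.random.default_rng(2)
    L = rng.uniform(0, 1, (128, 128))
    R = np.clip(L + rng.normal(0, 0.05, L.shape), 0, 1)
    eL, eR = joint_enhance(L, R)
    print(eL.shape, eR.shape)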
11.
Abstract
INTRODUCTION: Technical difficulty and unfamiliar surgical anatomy are the main challenges in transanal total mesorectal excision. Precise 3-dimensional real-time image guidance may improve the safety, accuracy, and efficiency of transanal total mesorectal excision. TECHNIQUE: A preoperative CT was obtained with 10 skin fiducials and further processed to emphasize the borders of the anatomical structures by 3-dimensional modeling and pelvic organ segmentation. A forced sacral tilt, induced by placing a 10-degree wedge under the patient's sacrum, minimized the pelvic organ movement caused by the lithotomy position. An optical navigation system with cranial software was used. The preoperative CT images were loaded into the navigation system, and a patient tracker was mounted onto the iliac bone. Once patient-to-image paired-point registration using the skin fiducials was completed, a laparoscopic instrument fitted with an instrument tracker was calibrated for instrument tracking. After the experimental setup and registration process were validated by navigating a laparoscopic anterior resection, stereotactic navigation for transanal total mesorectal excision was performed for a low rectal neuroendocrine tumor. RESULTS: The fiducial registration error was 1.7 mm. The accuracy of target positioning was sufficient, at less than 3 mm (1.8 ± 0.9 mm). Qualitative assessment using a Likert scale was well matched between the 2 observers. Of the 20 scores, 19 were judged as 4 (very good) or 5 (excellent). There was no statistical difference between the mean Likert scores for the abdominal and transanal landmarks (4.4 ± 0.5 vs 4.3 ± 1.0, p = 0.965). CONCLUSIONS: Application of an existing navigation system to transanal total mesorectal excision for a low rectal tumor is feasible. The acceptable accuracy of target positioning justifies its clinical use. Further research is needed to establish the clinical need for the procedure and its impact on clinical outcomes.
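Paired-point registration of the kind described here is usually solved in closed form with the SVD (Kabsch) method, and the fiducial registration error is the RMS residual over the fiducials. The sketch below shows that generic computation, not the navigation system's implementation; the noise level and fiducial layout are invented for the toy example.

    # Generic paired-point (fiducial) registration and FRE computation.
    import numpy as np

    def paired_point_register(src, dst):
        """Rigid (R, t) minimizing sum ||R @ src_i + t - dst_i||^2."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                  # guard against reflections
        t = cd - R @ cs
        return R, t

    rng = np.random.default_rng(3)
    fid_img = rng.uniform(-100, 100, (10, 3))   # 10 skin fiducials (image space)
    theta = np.deg2rad(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    fid_pat = fid_img @ R_true.T + np.array([5.0, -2.0, 10.0])
    fid_pat += rng.normal(0, 1.0, fid_pat.shape)   # localization noise
    R, t = paired_point_register(fid_img, fid_pat)
    fre = np.sqrt(((fid_img @ R.T + t - fid_pat) ** 2).sum(1).mean())
    print(f"FRE = {fre:.2f} mm")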
12. Wang C, Alaya Cheikh F, Kaaniche M, Beghdadi A, Elle OJ. Variational based smoke removal in laparoscopic images. Biomed Eng Online 2018; 17:139. PMID: 30340594; PMCID: PMC6194583; DOI: 10.1186/s12938-018-0590-5.
Abstract
Background: In laparoscopic surgery, image quality can be severely degraded by surgical smoke, which not only introduces errors in the image processing algorithms used in image-guided surgery but also reduces the visibility of the observed organs and tissues. To overcome these drawbacks, this work aims to remove smoke from laparoscopic images using an image preprocessing method based on a variational approach. Methods: We present a physical smoke model in which the degraded image is separated into two parts, direct attenuation and smoke veil, and propose an efficient variational desmoking method for laparoscopic images. To estimate the smoke veil, the proposed method relies on the observation that the smoke veil has low contrast and low inter-channel differences. A cost function is defined based on this prior knowledge and solved using an augmented Lagrangian method. The obtained smoke veil is then subtracted from the original degraded image, yielding the direct attenuation part. Finally, the smoke-free image is computed by a linear intensity transformation of the direct attenuation part. Results: The performance of the proposed method is evaluated quantitatively and qualitatively on three datasets: two public real smoked laparoscopic datasets and one generated synthetic dataset. No-reference and reduced-reference image quality assessment metrics are used on the two real datasets and show that the proposed method outperforms the state-of-the-art ones. Standard full-reference metrics are employed on the synthetic dataset and also indicate the good performance of the proposed method. Furthermore, qualitative visual inspection of the results shows that our method removes smoke effectively from the laparoscopic images. Conclusion: The results show that the proposed approach reduces smoke effectively while preserving the important perceptual information of the image. This provides surgeons with better visualization of the operating field and improves image-guided laparoscopic surgery procedures.
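A heavily simplified sketch of the pipeline described above: estimate a smooth smoke veil (here a blurred channel minimum standing in for the paper's augmented-Lagrangian minimization of the cost function), subtract it, and linearly rescale the direct attenuation part. The veil estimator, the strength parameter, and all constants are assumptions for illustration.

    # Simplified desmoking sketch: smooth veil estimate, subtraction,
    # then a linear intensity transformation of the direct attenuation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def desmoke(img, veil_sigma=15.0, strength=0.8):
        """img: float RGB in [0, 1], shape (H, W, 3)."""
        veil = gaussian_filter(img.min(axis=2), veil_sigma)   # smooth gray veil
        direct = np.clip(img - strength * veil[..., None], 0, 1)
        lo, hi = direct.min(), direct.max()                   # linear transform
        return (direct - lo) / (hi - lo + 1e-8)

    # Toy usage on a synthetic "smoky" frame.
    rng = np.random.default_rng(4)
    frame = np.clip(rng.uniform(0, 1, (120, 160, 3)) + 0.3, 0, 1)
    clean = desmoke(frame)
    print(clean.min(), clean.max())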
Affiliation(s)
- Congcong Wang: Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Gjøvik, Norway.
- Faouzi Alaya Cheikh: Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Gjøvik, Norway.
- Mounir Kaaniche: L2TI-Institut Galilée, Université Paris 13, Sorbonne Paris Cité, Villetaneuse, France.
- Azeddine Beghdadi: L2TI-Institut Galilée, Université Paris 13, Sorbonne Paris Cité, Villetaneuse, France.
- Ole Jacob Elle: The Intervention Centre, Oslo University Hospital, Oslo, Norway; The Department of Informatics, University of Oslo, Oslo, Norway.
13. Afshar P, Ahmadi A, Mohebi A, Fazel Zarandi M. A hierarchical stochastic modelling approach for reconstructing lung tumour geometry from 2D CT images. J Exp Theor Artif Intell 2018. DOI: 10.1080/0952813x.2018.1509894.
Affiliation(s)
- Parnian Afshar: Industrial Engineering, Amirkabir University of Technology, Tehran, Iran.
- Abbas Ahmadi: Industrial Engineering, Amirkabir University of Technology, Tehran, Iran.
- Azadeh Mohebi: Information Technology, Iranian Research Institute for Information Science and Technology (IranDoc), Tehran, Iran.
- M.H. Fazel Zarandi: Industrial Engineering, Amirkabir University of Technology, Tehran, Iran.