1
Bette S, Canalini L, Feitelson LM, Woźnicki P, Risch F, Huber A, Decker JA, Tehlan K, Becker J, Wollny C, Scheurig-Münkler C, Wendler T, Schwarz F, Kroencke T. Radiomics-Based Machine Learning Model for Diagnosis of Acute Pancreatitis Using Computed Tomography. Diagnostics (Basel) 2024; 14:718. [PMID: 38611632] [PMCID: PMC11011980] [DOI: 10.3390/diagnostics14070718] [Received: 02/02/2024] [Revised: 03/21/2024] [Accepted: 03/22/2024] [Indexed: 04/14/2024] Open Access
Abstract
In the early diagnostic workup of acute pancreatitis (AP), the role of contrast-enhanced CT is to establish the diagnosis in uncertain cases, assess severity, and detect potential complications such as necrosis, fluid collections, bleeding, or portal vein thrombosis. The value of texture analysis/radiomics of medical images has increased rapidly during the past decade, with the main focus on oncological imaging and tumor classification. Previous studies assessed the value of radiomics for differentiating between malignancies and inflammatory diseases of the pancreas as well as for predicting AP severity. The aim of our study was to evaluate an automatic machine learning model for AP detection using radiomics analysis. Patients with abdominal pain who underwent contrast-enhanced CT of the abdomen in an emergency setting were retrospectively included in this single-center study. The pancreas was automatically segmented using TotalSegmentator, and radiomics features were extracted using PyRadiomics. We performed unsupervised hierarchical clustering and applied the random-forest-based Boruta algorithm to select the most important radiomics features. Important features and lipase levels were included in a logistic regression model with AP as the dependent variable. The model was established in a training cohort using fivefold cross-validation and applied to the test cohort (80/20 split). From a total of 1012 patients, 137 patients with AP and 138 patients without AP were included in the final study cohort. Feature selection confirmed 28 important features (mainly shape and first-order features) for the differentiation between AP and controls. The logistic regression model showed excellent diagnostic accuracy of radiomics features for the detection of AP, with an area under the curve (AUC) of 0.932. Using lipase levels only, an AUC of 0.946 was observed; using both radiomics features and lipase levels, the AUC was 0.933.
Automated segmentation of the pancreas and subsequent radiomics analysis almost matched the high diagnostic accuracy of lipase levels, a well-established predictor of AP, and might be considered an additional diagnostic tool in unclear cases. This study provides scientific evidence that automated image analysis of the pancreas achieves diagnostic accuracy comparable to lipase levels and might therefore be used in the future in the rapidly growing field of AI-based image analysis.
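As an illustration of the headline metric (this is not the authors' code), the reported AUCs can be reproduced from classifier scores and binary labels in plain Python via the rank (Mann-Whitney) formulation of the area under the ROC curve:

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative case
    (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: lipase-like scores for 3 AP cases and 3 controls.
labels = [1, 1, 1, 0, 0, 0]
scores = [900.0, 450.0, 50.0, 40.0, 55.0, 30.0]
print(auc(labels, scores))  # 8/9 = 0.888...
```

An AUC of 0.5 corresponds to chance-level discrimination, 1.0 to perfect separation, which puts the paper's values of 0.932-0.946 in context.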
Affiliation(s)
- Stefanie Bette, Luca Canalini, Laura-Marie Feitelson, Franka Risch, Adrian Huber, Josua A. Decker, Kartikay Tehlan, Judith Becker, Claudia Wollny, Christian Scheurig-Münkler: Clinic for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, 86156 Augsburg, Germany
- Piotr Woźnicki: Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, University of Würzburg, 97080 Würzburg, Germany
- Thomas Wendler: Clinic for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, 86156 Augsburg, Germany; Institute of Digital Health, University Hospital Augsburg, Faculty of Medicine, University of Augsburg, 86356 Neusaess, Germany; Computer-Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, 85748 Garching bei Muenchen, Germany
- Florian Schwarz: Centre for Diagnostic Imaging and Interventional Therapy, Donau-Isar-Klinikum, 94469 Deggendorf, Germany
- Thomas Kroencke: Clinic for Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, 86156 Augsburg, Germany; Centre for Advanced Analytics and Predictive Sciences (CAAPS), University of Augsburg, 86159 Augsburg, Germany
2
Decker JA, Becker J, Härting M, Jehs B, Risch F, Canalini L, Wollny C, Scheurig-Muenkler C, Kroencke T, Schwarz F, Bette S. Optimal conspicuity of pancreatic ductal adenocarcinoma in virtual monochromatic imaging reconstructions on a photon-counting detector CT: comparison to conventional MDCT. Abdom Radiol (NY) 2024; 49:103-116. [PMID: 37796327] [PMCID: PMC10789688] [DOI: 10.1007/s00261-023-04042-5] [Received: 06/09/2023] [Revised: 08/30/2023] [Accepted: 08/30/2023] [Indexed: 10/06/2023]
Abstract
PURPOSE To analyze the conspicuity of pancreatic ductal adenocarcinoma (PDAC) in virtual monoenergetic images (VMI) on a novel photon-counting detector CT (PCD-CT) in comparison to energy-integrating detector CT (EID-CT). METHODS Inclusion criteria comprised initial diagnosis of PDAC (reference standard: histopathological analysis) and standardized contrast-enhanced CT imaging on either an EID-CT or a PCD-CT. Patients were excluded if histopathology yielded a different diagnosis or the tumor could not be delineated on CT. On the PCD-CT, VMI reconstructions from 40 to 190 keV were generated. Image noise, tumor-to-pancreas ratio (TPR), and contrast-to-noise ratio (CNR) were analyzed by ROI-based measurements in the arterial and portal venous contrast phases. Two board-certified radiologists evaluated image quality and tumor delineation on both EID-CT and PCD-CT (40 and 70 keV). RESULTS Thirty-eight patients (mean age 70.4 years ± 10.3 [range 45-91], 27 males; PCD-CT: n=19, EID-CT: n=19) were retrospectively included. On the PCD-CT, tumor conspicuity (reflected by low TPR and high CNR) was significantly improved in the low-energy VMI series (≤ 70 keV compared to > 70 keV) in both the arterial and portal venous contrast phases (P < 0.001), reaching the maximum at 40 keV. Comparison between PCD-CT and EID-CT showed significantly higher CNR on the PCD-CT in the portal venous contrast phase at < 70 keV (P < 0.016). On the PCD-CT, tumor conspicuity was better in the portal venous than in the arterial contrast phase, especially at the lower end of the VMI spectrum (≤ 70 keV). Qualitative analysis revealed that tumor delineation was improved in 40 keV compared to 70 keV reconstructions on the PCD-CT. CONCLUSION PCD-CT VMI reconstructions (≤ 70 keV) showed significantly improved conspicuity of PDAC in quantitative and qualitative analyses, in both the arterial and portal venous contrast phases, compared to EID-CT, which may be important for early detection of tumor tissue in clinical routine.
Tumor delineation was superior in the portal venous compared to the arterial contrast phase.
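The ROI-based metrics above can be sketched in a few lines. The paper does not spell out its exact formulas here, so the definitions below (TPR as the attenuation ratio between tumor and pancreas ROIs, CNR as the attenuation difference divided by image noise) are the commonly used ones, assumed for illustration:

```python
def tumor_to_pancreas_ratio(hu_tumor, hu_pancreas):
    """Ratio of mean ROI attenuation (HU) in tumor vs. healthy pancreas;
    values well below 1 indicate a conspicuously hypodense tumor."""
    return hu_tumor / hu_pancreas

def contrast_to_noise_ratio(hu_tumor, hu_pancreas, noise_sd):
    """Attenuation difference between the two ROIs over image noise.
    abs() keeps CNR positive since PDAC is typically hypoattenuating."""
    return abs(hu_pancreas - hu_tumor) / noise_sd

# Toy ROI values: at 40 keV iodine contrast grows faster than noise,
# so CNR improves compared to 70 keV.
print(contrast_to_noise_ratio(60.0, 180.0, 20.0))  # 40 keV -> 6.0
print(contrast_to_noise_ratio(55.0, 95.0, 10.0))   # 70 keV -> 4.0
```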
Affiliation(s)
- Josua A Decker, Judith Becker, Mark Härting, Bertram Jehs, Franka Risch, Luca Canalini, Claudia Wollny, Christian Scheurig-Muenkler, Stefanie Bette: Diagnostic and Interventional Radiology, Faculty of Medicine, University Hospital Augsburg, University of Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Thomas Kroencke: Diagnostic and Interventional Radiology, Faculty of Medicine, University Hospital Augsburg, University of Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany; Centre for Advanced Analytics and Predictive Sciences (CAAPS), University of Augsburg, Universitätsstr. 2, 86159 Augsburg, Germany
- Florian Schwarz: Diagnostic and Interventional Radiology, Faculty of Medicine, University Hospital Augsburg, University of Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany; Medical Faculty, Ludwig Maximilian University Munich, Bavariaring 19, 80336 Munich, Germany; Institute for Radiology, DONAUISAR Hospital Deggendorf-Dingolfing-Landau, Perlasberger Str. 41, 94469 Deggendorf, Germany
3
Walluscheck S, Canalini L, Strohm H, Diekmann S, Klein J, Heldmann S. MR-CT multi-atlas registration guided by fully automated brain structure segmentation with CNNs. Int J Comput Assist Radiol Surg 2023; 18:483-491. [PMID: 36334164] [PMCID: PMC9939492] [DOI: 10.1007/s11548-022-02786-x] [Received: 01/11/2022] [Accepted: 10/25/2022] [Indexed: 11/08/2022]
Abstract
PURPOSE Computed tomography (CT) is widely used to identify anomalies in brain tissues because their localization is important for diagnosis and therapy planning. Due to the insufficient soft-tissue contrast of CT, dividing the brain into anatomically meaningful regions is challenging and is commonly done with magnetic resonance imaging (MRI). METHODS We propose a multi-atlas registration approach to propagate anatomical information from a standard MRI brain atlas to CT scans. This translation will enable detailed automated reporting of brain CT exams. We utilize masks of the lateral ventricles and the brain volume of CT images as adjuvant input to guide the registration process. After first testing the registration with manual annotations, we verify that convolutional neural networks (CNNs) are a reliable solution for automatically segmenting these structures to enhance the registration process. RESULTS The registration method obtains mean Dice values of 0.92 and 0.99 for the brain ventricles and parenchyma on 22 healthy test cases when using manually segmented structures as guidance. When guiding with automatically segmented structures, the mean Dice values are 0.87 and 0.98, respectively. CONCLUSION Our registration approach is a fully automated solution to register MRI atlas images to CT scans and thus obtain detailed anatomical information. The proposed CNN segmentation method can be used to obtain the ventricle and brain-volume masks that guide the registration.
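The Dice values reported above measure voxel overlap between a predicted and a reference mask. A minimal sketch (not the authors' implementation), with masks given as flat 0/1 sequences:

```python
def dice(mask_a, mask_b):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|) of two binary masks,
    passed as equal-length flat sequences of 0/1 voxel labels."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of voxels")
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(1 for a in mask_a if a) + sum(1 for b in mask_b if b)
    # Convention: two empty masks overlap perfectly.
    return 2 * inter / total if total else 1.0

# Toy 4-voxel example: 2 voxels agree, sizes 3 and 2 -> Dice = 4/5.
print(dice([1, 1, 1, 0], [1, 1, 0, 0]))  # 0.8
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the 0.92/0.99 vs. 0.87/0.98 comparison quantifies how little is lost by switching from manual to CNN-based guidance masks.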
Affiliation(s)
- Sina Walluscheck, Luca Canalini, Hannah Strohm, Susanne Diekmann, Jan Klein, Stefan Heldmann: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
4
Ezhov I, Scibilia K, Franitza K, Steinbauer F, Shit S, Zimmer L, Lipkova J, Kofler F, Paetzold JC, Canalini L, Waldmannstetter D, Menten MJ, Metz M, Wiestler B, Menze B. Learn-Morph-Infer: A new way of solving the inverse problem for brain tumor modeling. Med Image Anal 2023; 83:102672. [PMID: 36395623] [DOI: 10.1016/j.media.2022.102672] [Received: 12/04/2021] [Revised: 07/18/2022] [Accepted: 10/20/2022] [Indexed: 11/06/2022]
Abstract
Current treatment planning for patients diagnosed with a brain tumor, such as glioma, could benefit significantly from access to the spatial distribution of tumor cell concentration. Existing diagnostic modalities, e.g. magnetic resonance imaging (MRI), delineate areas of high cell density sufficiently well. In gliomas, however, they do not portray areas of low cell concentration, which can often serve as a source for the secondary appearance of the tumor after treatment. To estimate tumor cell densities beyond the visible boundaries of the lesion, numerical simulations of tumor growth could complement imaging information by providing estimates of the full spatial distribution of tumor cells. Over recent years, a corpus of literature on medical-image-based tumor modeling has been published. It includes different mathematical formalisms describing the forward tumor growth model, alongside various parametric inference schemes developed to perform efficient tumor model personalization, i.e. to solve the inverse problem. The unifying drawback of all existing approaches, however, is the time complexity of the model personalization, which prohibits integration of the modeling into clinical settings. In this work, we introduce a deep-learning-based methodology for inferring the patient-specific spatial distribution of brain tumors from T1Gd and FLAIR MRI scans. Coined Learn-Morph-Infer, the method achieves real-time performance on the order of minutes on widely available hardware, and the compute time is stable across tumor models of different complexity, such as reaction-diffusion and reaction-advection-diffusion models. We believe the proposed inverse-solution approach not only paves the way for clinical translation of brain tumor personalization but can also be adopted in other scientific and engineering domains.
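The reaction-diffusion forward model named above can be illustrated with a one-dimensional Fisher-Kolmogorov toy (an illustration of the model family, not the authors' 3D solver): tumor cell density u spreads by diffusion (coefficient D) and grows logistically (rate rho), du/dt = D u_xx + rho u (1 - u).

```python
def fisher_kolmogorov_step(u, D, rho, dx, dt):
    """One explicit Euler step of the 1D Fisher-Kolmogorov equation
    du/dt = D u_xx + rho u (1 - u), with zero-flux (Neumann) boundaries.
    Stable for dt * D / dx**2 <= 0.5."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        laplacian = (left - 2 * u[i] + right) / dx**2
        new[i] = u[i] + dt * (D * laplacian + rho * u[i] * (1 - u[i]))
    return new

# A point-like tumor seed spreads and grows into a traveling front.
u = [0.0] * 50
u[25] = 1.0
for _ in range(200):
    u = fisher_kolmogorov_step(u, D=0.1, rho=0.5, dx=1.0, dt=0.1)
```

The inverse problem the paper addresses is recovering parameters such as D and rho (and the seed location) from two imaging thresholds of u; Learn-Morph-Infer replaces the many forward simulations of sampling-based inference with a learned mapping.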
Affiliation(s)
- Ivan Ezhov, Suprosanna Shit, Johannes C Paetzold, Martin J Menten: Department of Informatics, TUM, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, TUM, Munich, Germany
- Lucas Zimmer: TranslaTUM - Central Institute for Translational Cancer Research, TUM, Munich, Germany; Department of Quantitative Biomedicine, UZH, Zurich, Switzerland
- Jana Lipkova: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Broad Institute of Harvard and MIT, Cambridge, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, USA
- Florian Kofler: Department of Informatics, TUM, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, TUM, Munich, Germany; Neuroradiology Department of Klinikum Rechts der Isar, TUM, Munich, Germany
- Marie Metz, Benedikt Wiestler: TranslaTUM - Central Institute for Translational Cancer Research, TUM, Munich, Germany; Neuroradiology Department of Klinikum Rechts der Isar, TUM, Munich, Germany
- Bjoern Menze: Department of Quantitative Biomedicine, UZH, Zurich, Switzerland
5
Canalini L, Klein J, Waldmannstetter D, Kofler F, Cerri S, Hering A, Heldmann S, Schlaeger S, Menze BH, Wiestler B, Kirschke J, Hahn HK. Quantitative evaluation of the influence of multiple MRI sequences and of pathological tissues on the registration of longitudinal data acquired during brain tumor treatment. Front Neuroimaging 2022; 1:977491. [PMID: 37555157] [PMCID: PMC10406206] [DOI: 10.3389/fnimg.2022.977491] [Received: 06/24/2022] [Accepted: 08/15/2022] [Indexed: 08/10/2023]
Abstract
Registration methods facilitate the comparison of multiparametric magnetic resonance images acquired at different stages of brain tumor treatment. Image-based registration solutions are influenced by the sequences chosen to compute the distance measure and by the lack of image correspondences due to resection cavities and pathological tissues. Nonetheless, an evaluation of the impact of these input parameters on the registration of longitudinal data has been missing. This work evaluates the influence of multiple sequences, namely T1-weighted (T1), T2-weighted (T2), contrast-enhanced T1-weighted (T1-CE), and T2 Fluid Attenuated Inversion Recovery (FLAIR), and of the exclusion of pathological tissues on the non-rigid registration of pre- and post-operative images. We investigate two types of registration methods, an iterative approach and a convolutional neural network solution based on a 3D U-Net. We employ two test sets to compute the mean target registration error (mTRE) based on corresponding landmarks. In the first set, markers are positioned exclusively in the surroundings of the pathology. The methods employing T1-CE achieve the lowest mTREs, with an improvement of up to 0.8 mm for the iterative solution. The errors are higher than the baseline when the FLAIR sequence is used. When the pathology is excluded, lower mTREs are observed for most of the methods. In the second test set, corresponding landmarks are located throughout the brain volumes. Both solutions employing T1-CE obtain the lowest mTREs, with a decrease of up to 1.16 mm for the iterative method, whereas the results worsen with FLAIR. When the pathology is excluded, an improvement is observed for the CNN method using T1-CE. Overall, both approaches utilizing the T1-CE sequence obtain the best mTREs, whereas FLAIR is the least informative for guiding the registration process.
Moreover, excluding the pathology from the distance-measure computation improves the registration of the brain tissues surrounding the tumor. This work thus provides the first numerical evaluation of the influence of these parameters on the registration of longitudinal magnetic resonance images, which can be helpful for developing future algorithms.
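The mTRE used throughout the comparisons above is simply the mean Euclidean distance between corresponding landmarks after the moving image's landmarks have been mapped through the registration; a stdlib-Python sketch:

```python
import math

def mtre(landmarks_fixed, landmarks_warped):
    """Mean target registration error: average Euclidean distance between
    corresponding landmark coordinates (e.g. 3D points in mm) in the
    fixed image and in the registered (warped) image."""
    if not landmarks_fixed or len(landmarks_fixed) != len(landmarks_warped):
        raise ValueError("need equal, non-empty landmark lists")
    dists = [math.dist(p, q) for p, q in zip(landmarks_fixed, landmarks_warped)]
    return sum(dists) / len(dists)

# Toy example: residual errors of 1 mm and 2 mm average to 1.5 mm.
fixed = [(0.0, 0.0, 0.0), (10.0, 10.0, 10.0)]
warped = [(1.0, 0.0, 0.0), (10.0, 10.0, 12.0)]
print(mtre(fixed, warped))  # 1.5
```

Lower is better, so the reported decreases of 0.8 mm and 1.16 mm with T1-CE are direct reductions of this residual landmark misalignment.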
Affiliation(s)
- Luca Canalini, Jan Klein: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Diana Waldmannstetter: Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Florian Kofler: Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany; Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Zentrum Munich, Munich, Germany
- Stefano Cerri: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Alessa Hering: Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, Netherlands
- Stefan Heldmann: Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Sarah Schlaeger: Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Bjoern H. Menze: Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Benedikt Wiestler, Jan Kirschke: Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Horst K. Hahn: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
6
Canalini L, Klein J, Miller D, Kikinis R. Enhanced registration of ultrasound volumes by segmentation of resection cavity in neurosurgical procedures. Int J Comput Assist Radiol Surg 2020; 15:1963-1974. [PMID: 33029677] [PMCID: PMC7671994] [DOI: 10.1007/s11548-020-02273-1] [Received: 01/10/2020] [Accepted: 09/25/2020] [Indexed: 11/26/2022]
Abstract
PURPOSE Neurosurgeons can gain a better understanding of surgical procedures by comparing ultrasound images obtained at different phases of the tumor resection. However, establishing a direct mapping between subsequent acquisitions is challenging due to the anatomical changes happening during surgery. We propose here a method to improve the registration of ultrasound volumes by excluding the resection cavity from the registration process. METHODS The first step of our approach is the automatic segmentation of the resection cavities in ultrasound volumes acquired during and after resection, using a convolutional neural network inspired by the 3D U-Net. Subsequent ultrasound volumes are then registered while excluding the contribution of the resection cavity. RESULTS Regarding the segmentation of the resection cavity, the proposed method achieved a mean Dice index of 0.84 on 27 volumes. Concerning the registration of subsequent ultrasound acquisitions, we reduced the mTRE of the volumes acquired before and during resection from 3.49 to 1.22 mm. For the set of volumes acquired before and after removal, the mTRE improved from 3.55 to 1.21 mm. CONCLUSIONS We proposed an innovative registration algorithm to compensate for the brain shift affecting ultrasound volumes obtained at subsequent phases of neurosurgical procedures. To the best of our knowledge, our method is the first to exclude automatically segmented resection cavities from the registration of ultrasound volumes in neurosurgery.
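The core idea of excluding the cavity from the registration can be illustrated by masking the distance measure that drives the optimization. The toy sketch below uses a sum of squared differences (the paper's actual similarity measure may differ); voxels flagged by the cavity mask simply do not contribute:

```python
def masked_ssd(fixed, moving, cavity_mask):
    """Sum of squared intensity differences over all voxels NOT flagged
    as resection cavity, so that the cavity (which has no correspondence
    in the earlier volume) cannot drive the registration."""
    return sum((f - m) ** 2
               for f, m, c in zip(fixed, moving, cavity_mask) if not c)

# Toy 4-voxel volumes: voxel 2 lies in the cavity and differs wildly;
# masking it leaves only the small residual mismatch of the brain tissue.
fixed  = [100, 120,  0, 90]   # pre-resection intensities
moving = [101, 119, 80, 90]   # during-resection intensities
cavity = [0, 0, 1, 0]         # CNN-predicted cavity mask
print(masked_ssd(fixed, moving, cavity))       # 2
print(masked_ssd(fixed, moving, [0, 0, 0, 0])) # 6402 without masking
```

Without the mask, the optimizer would distort the surrounding tissue to "explain" the cavity; with it, only genuinely corresponding tissue contributes, which is what drives the mTRE reduction reported above.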
Affiliation(s)
- Luca Canalini: Fraunhofer MEVIS, Institute for Digital Medicine, Bremen, Germany; Medical Imaging Computing, University of Bremen, Bremen, Germany
- Jan Klein: Fraunhofer MEVIS, Institute for Digital Medicine, Bremen, Germany
- Dorothea Miller: Department of Neurosurgery, University Hospital Knappschaftskrankenhaus, Bochum, Germany
- Ron Kikinis: Surgical Planning Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, USA