1. Chen X, Zhou B, Guo X, Xie H, Liu Q, Duncan JS, Sinusas AJ, Liu C. DuDoCFNet: Dual-Domain Coarse-to-Fine Progressive Network for Simultaneous Denoising, Limited-View Reconstruction, and Attenuation Correction of Cardiac SPECT. IEEE Trans Med Imaging 2024;43:3110-3125. PMID: 38578853. DOI: 10.1109/tmi.2024.3385650.
Abstract
Single-Photon Emission Computed Tomography (SPECT) is widely applied for the diagnosis of coronary artery diseases. Low-dose (LD) SPECT aims to minimize radiation exposure but leads to increased image noise. Limited-view (LV) SPECT, such as the latest GE MyoSPECT ES system, enables accelerated scanning and reduces hardware expenses but degrades reconstruction accuracy. Additionally, Computed Tomography (CT) is commonly used to derive attenuation maps (μ-maps) for attenuation correction (AC) of cardiac SPECT, but it introduces additional radiation exposure and SPECT-CT misalignments. Although various methods have been developed to focus solely on LD denoising, LV reconstruction, or CT-free AC in SPECT, a solution that addresses these tasks simultaneously remains challenging and under-explored. Furthermore, it is essential to explore the potential of fusing cross-domain and cross-modality information across these interrelated tasks to further enhance the accuracy of each task. Thus, we propose a Dual-Domain Coarse-to-Fine Progressive Network (DuDoCFNet), a multi-task learning method for simultaneous LD denoising, LV reconstruction, and CT-free μ-map generation of cardiac SPECT. Paired dual-domain networks in DuDoCFNet are cascaded using a multi-layer fusion mechanism for cross-domain and cross-modality feature fusion. Two-stage progressive learning strategies are applied in both the projection and image domains to achieve coarse-to-fine estimations of SPECT projections and CT-derived μ-maps. Our experiments demonstrate DuDoCFNet's superior accuracy in estimating projections, generating μ-maps, and reconstructing AC images compared to existing single- or multi-task learning methods, under various iterations and LD levels. The source code of this work is available at https://github.com/XiongchaoChen/DuDoCFNet-MultiTask.
2. Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024;35:399-419. PMID: 38291768. DOI: 10.1515/revneuro-2023-0115.
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi, King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Roa'a Al-Emaryeen, King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Sara Al-Nahhas, King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Isra'a Almallahi, Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Ruba Braik, Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Waleed Mahafza, Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
3. Gao X, Zheng G. SMILE: Siamese Multi-scale Interactive-representation LEarning for Hierarchical Diffeomorphic Deformable image registration. Comput Med Imaging Graph 2024;111:102322. PMID: 38157671. DOI: 10.1016/j.compmedimag.2023.102322.
Abstract
Deformable medical image registration plays an important role in many clinical applications. It aims to find a dense deformation field that establishes point-wise correspondences between a pair of fixed and moving images. Recently, unsupervised deep learning-based registration methods have drawn increasing attention because of their fast inference at the testing stage. Despite remarkable progress, existing deep learning-based methods suffer from several limitations: (a) they often overlook the explicit modeling of feature correspondences due to limited receptive fields; (b) their performance on image pairs with large spatial displacements is still limited, since the dense deformation field is regressed from features learned by local convolutions; and (c) desirable properties, including topology preservation and invertibility of the transformation, are often ignored. To address the above limitations, we propose a novel Convolutional Neural Network (CNN) consisting of a Siamese Multi-scale Interactive-representation LEarning (SMILE) encoder and a Hierarchical Diffeomorphic Deformation (HDD) decoder. Specifically, the SMILE encoder aims at effective feature representation learning and the establishment of spatial correspondences, while the HDD decoder regresses the dense deformation field in a coarse-to-fine manner. We additionally propose a novel Local Invertible Loss (LIL) to encourage topology preservation and local invertibility of the regressed transformation while maintaining high registration accuracy. Extensive experiments conducted on two publicly available brain image datasets demonstrate the superiority of our method over state-of-the-art (SOTA) approaches. Specifically, on the Neurite-OASIS dataset, our method achieved an average DSC of 0.815 and an average ASSD of 0.633 mm.
Affiliation(s)
- Xiaoru Gao, Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, 800 DongChuan Road, Shanghai, 200240, China
- Guoyan Zheng, Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, 800 DongChuan Road, Shanghai, 200240, China
4. Abbasi S, Mehdizadeh A, Boveiri HR, Mosleh Shirazi MA, Javidan R, Khayami R, Tavakoli M. Unsupervised deep learning registration model for multimodal brain images. J Appl Clin Med Phys 2023;24:e14177. PMID: 37823748. PMCID: PMC10647957. DOI: 10.1002/acm2.14177.
Abstract
Multimodal image registration is key to many clinical image-guided interventions. However, it is a challenging task because of the complicated and unknown relationships between different modalities. Currently, deep supervised learning is the state-of-the-art approach, in which registration is conducted in an end-to-end manner in a single shot. Consequently, a large amount of ground-truth data is required to improve the results of deep neural networks for registration. Moreover, supervised methods may yield models that are biased towards annotated structures. An alternative approach that addresses these challenges is to use unsupervised learning models. In this study, we designed a novel deep unsupervised Convolutional Neural Network (CNN)-based model for affine co-registration of computed tomography/magnetic resonance (CT/MR) brain images. For this purpose, we created a dataset consisting of 1100 pairs of CT/MR slices from the brains of 110 neuropsychiatric patients with and without tumors. In the next step, 12 landmarks were selected by an experienced radiologist and annotated on each slice, enabling the computation of a series of evaluation metrics: target registration error (TRE), Dice similarity, Hausdorff distance, and the Jaccard coefficient. The proposed method registered the multimodal images with a TRE of 9.89, a Dice similarity of 0.79, a Hausdorff distance of 7.15, and a Jaccard coefficient of 0.75, values acceptable for clinical applications. Moreover, the approach registered the images in 203 ms, a short registration time that, together with the high accuracy, makes it suitable for clinical use. These results illustrate that our proposed method achieves performance competitive with related approaches in terms of both computation time and the evaluation metrics.
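The metrics reported in this abstract (TRE, Dice, Hausdorff, Jaccard) all follow standard definitions; a minimal NumPy sketch of how such metrics are typically computed (the helper names are illustrative, not taken from the paper):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    # Jaccard coefficient (intersection over union) between two binary masks
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(pts_a, pts_b):
    # Symmetric Hausdorff distance between two point sets of shape (N, dim)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def tre(fixed_landmarks, warped_landmarks):
    # Mean target registration error over paired landmarks, shape (N, dim)
    return np.linalg.norm(fixed_landmarks - warped_landmarks, axis=1).mean()
```

In practice the Hausdorff distance is often evaluated on segmentation surface points and the TRE on anatomical landmark pairs, as in this study.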
Affiliation(s)
- Samaneh Abbasi, Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Alireza Mehdizadeh, Research Center for Neuromodulation and Pain, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Reza Boveiri, Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Mohammad Amin Mosleh Shirazi, Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Reza Javidan, Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Raouf Khayami, Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Meysam Tavakoli, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
5. Bhandary S, Kuhn D, Babaiee Z, Fechter T, Benndorf M, Zamboglou C, Grosu AL, Grosu R. Investigation and benchmarking of U-Nets on prostate segmentation tasks. Comput Med Imaging Graph 2023;107:102241. PMID: 37201475. DOI: 10.1016/j.compmedimag.2023.102241.
Abstract
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer, because individual patient biology is unique and a one-size-fits-all approach is inefficient. A crucial step for customizing radiotherapy planning, and for gaining fundamental information about the disease, is the identification and delineation of the targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has increased significantly in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. These models would not only reduce workload but could also offer an unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable resource for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
Affiliation(s)
- Shrajan Bhandary, Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria
- Dejan Kuhn, Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Zahra Babaiee, Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria
- Tobias Fechter, Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Matthias Benndorf, Department of Diagnostic and Interventional Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany
- Constantinos Zamboglou, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; German Oncology Center, European University, Limassol, 4108, Cyprus
- Anca-Ligia Grosu, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany
- Radu Grosu, Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria; Department of Computer Science, State University of New York at Stony Brook, NY, 11794, USA
6. Hering A, Hansen L, Mok TCW, Chung ACS, Siebert H, Hager S, Lange A, Kuckertz S, Heldmann S, Shao W, Vesal S, Rusu M, Sonn G, Estienne T, Vakalopoulou M, Han L, Huang Y, Yap PT, Brudfors M, Balbastre Y, Joutard S, Modat M, Lifshitz G, Raviv D, Lv J, Li Q, Jaouen V, Visvikis D, Fourcade C, Rubeaux M, Pan W, Xu Z, Jian B, De Benetti F, Wodzinski M, Gunnarsson N, Sjolund J, Grzech D, Qiu H, Li Z, Thorley A, Duan J, Grosbrohmer C, Hoopes A, Reinertsen I, Xiao Y, Landman B, Huo Y, Murphy K, Lessmann N, van Ginneken B, Dalca AV, Heinrich MP. Learn2Reg: Comprehensive Multi-Task Medical Image Registration Challenge, Dataset and Evaluation in the Era of Deep Learning. IEEE Trans Med Imaging 2023;42:697-712. PMID: 36264729. DOI: 10.1109/tmi.2022.3213983.
Abstract
Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and a fair benchmark across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration data set for comprehensive characterisation of deformable registration algorithms. A continuous evaluation will be possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results of over 65 individual method submissions from more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state-of-the-art of medical image registration. This paper describes datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push the performance of medical image registration to new state-of-the-art performance. Furthermore, we demystified the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
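Among the metrics mentioned in this abstract, plausibility of a deformation is commonly operationalised via the Jacobian determinant of the estimated transform, where non-positive values indicate folding. A hedged NumPy sketch for a 2D displacement field, illustrating the general technique rather than the challenge's exact evaluation code:

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Jacobian determinant of phi(x) = x + u(x) for a displacement
    field u of shape (H, W, 2), using finite differences."""
    du0_d0 = np.gradient(disp[..., 0], axis=0)
    du0_d1 = np.gradient(disp[..., 0], axis=1)
    du1_d0 = np.gradient(disp[..., 1], axis=0)
    du1_d1 = np.gradient(disp[..., 1], axis=1)
    # det(I + grad u) expanded for the 2x2 case
    return (1.0 + du0_d0) * (1.0 + du1_d1) - du0_d1 * du1_d0

def folding_fraction(disp):
    # Fraction of grid points where the transform folds (det <= 0)
    return float((jacobian_determinant_2d(disp) <= 0).mean())
```

A zero displacement field yields a determinant of 1 everywhere, while a field that stretches one axis uniformly yields a constant determinant above 1; negative values flag locally non-invertible (implausible) deformations.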
7. Ruthven M, Miquel ME, King AP. A segmentation-informed deep learning framework to register dynamic two-dimensional magnetic resonance images of the vocal tract during speech. Biomed Signal Process Control 2023;80:104290. PMID: 36743699. PMCID: PMC9746295. DOI: 10.1016/j.bspc.2022.104290.
Abstract
Objective: Dynamic magnetic resonance (MR) imaging enables visualisation of articulators during speech. There is growing interest in quantifying articulator motion in two-dimensional MR images of the vocal tract, to better understand speech production and potentially inform patient management decisions. Image registration is an established way to achieve this quantification. Recently, segmentation-informed deformable registration frameworks have been developed and have achieved state-of-the-art accuracy. This work aims to adapt such a framework and optimise it for estimating displacement fields between dynamic two-dimensional MR images of the vocal tract during speech.
Methods: A deep-learning-based registration framework was developed and compared with current state-of-the-art registration methods and frameworks (two traditional methods and three deep-learning-based frameworks, two of which are segmentation informed). The accuracy of the methods and frameworks was evaluated using the Dice coefficient (DSC), average surface distance (ASD) and a metric based on velopharyngeal closure. The metric evaluated whether the displacement fields captured a clinically relevant and quantifiable aspect of articulator motion.
Results: The segmentation-informed frameworks achieved higher DSCs, lower ASDs, and captured more velopharyngeal closures than the traditional methods and the framework that was not segmentation informed. All segmentation-informed frameworks achieved similar DSCs and ASDs; however, the proposed framework captured the most velopharyngeal closures.
Conclusions: A framework was successfully developed and found to estimate articulator motion more accurately than five current state-of-the-art methods and frameworks.
Significance: This is the first deep-learning-based framework developed and evaluated specifically for registering dynamic two-dimensional MR images of the vocal tract during speech.
Affiliation(s)
- Matthieu Ruthven, Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom (corresponding author)
- Marc E. Miquel, Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; Digital Environment Research Institute (DERI), Empire House, 67-75 New Road, Queen Mary University of London, London E1 1HH, United Kingdom; Advanced Cardiovascular Imaging, Barts NIHR BRC, Queen Mary University of London, London EC1M 6BQ, United Kingdom
- Andrew P. King, School of Biomedical Engineering & Imaging Sciences, King’s College London, King’s Health Partners, St Thomas’ Hospital, London SE1 7EH, United Kingdom
8. SegPC-2021: A challenge & dataset on segmentation of Multiple Myeloma plasma cells from microscopic images. Med Image Anal 2023;83:102677. PMID: 36403309. DOI: 10.1016/j.media.2022.102677.
Abstract
Multiple Myeloma (MM) is an emerging ailment of global concern. Its diagnosis at the early stages is critical for recovery. Therefore, efforts are underway to produce digital pathology tools with human-level intelligence that are efficient, scalable, accessible, and cost-effective. Following the trend, a medical imaging challenge on "Segmentation of Multiple Myeloma Plasma Cells in Microscopic Images (SegPC-2021)" was organized at the IEEE International Symposium on Biomedical Imaging (ISBI), 2021, France. The challenge addressed the problem of cell segmentation in microscopic images captured from the slides prepared from the bone marrow aspirate of patients diagnosed with Multiple Myeloma. The challenge released a total of 775 images with 690 and 85 images of sizes 2040×1536 and 1920×2560 pixels, respectively, captured from two different (microscope and camera) setups. The participants had to segment the plasma cells with a separate label on each cell's nucleus and cytoplasm. This problem comprises many challenges, including a reduced color contrast between the cytoplasm and the background, and the clustering of cells with a feeble boundary separation of individual cells. To our knowledge, the SegPC-2021 challenge dataset is the largest publicly available annotated data on plasma cell segmentation in MM so far. The challenge targets a semi-automated tool to ensure the supervision of medical experts. It was conducted for a span of five months, from November 2020 to April 2021. Initially, the data was shared with 696 people from 52 teams, of which 41 teams submitted the results of their models on the evaluation portal in the validation phase. Similarly, 20 teams qualified for the last round, of which 16 teams submitted the results in the final test phase. All the top-5 teams employed DL-based approaches, and the best mIoU obtained on the final test set of 277 microscopic images was 0.9389. All these five models have been analyzed and discussed in detail. This challenge task is a step towards the target of creating an automated MM diagnostic tool.
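The mIoU figure quoted for this challenge is a mean intersection-over-union; a minimal sketch of the standard computation (the function name is illustrative, and the challenge's own scoring may differ in details such as instance matching):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    # Mean intersection-over-union over classes present in pred or gt
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent in both; skip rather than count as 0 or 1
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))
```

For cell segmentation, `pred` and `gt` would be label maps with one class each for background, nucleus, and cytoplasm.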
9. Canalini L, Klein J, Waldmannstetter D, Kofler F, Cerri S, Hering A, Heldmann S, Schlaeger S, Menze BH, Wiestler B, Kirschke J, Hahn HK. Quantitative evaluation of the influence of multiple MRI sequences and of pathological tissues on the registration of longitudinal data acquired during brain tumor treatment. Front Neuroimaging 2022;1:977491. PMID: 37555157. PMCID: PMC10406206. DOI: 10.3389/fnimg.2022.977491.
Abstract
Registration methods facilitate the comparison of multiparametric magnetic resonance images acquired at different stages of brain tumor treatment. Image-based registration solutions are influenced by the sequences chosen to compute the distance measure and by the lack of image correspondences due to resection cavities and pathological tissues. Nonetheless, an evaluation of the impact of these input parameters on the registration of longitudinal data is still missing. This work evaluates the influence of multiple sequences, namely T1-weighted (T1), T2-weighted (T2), contrast-enhanced T1-weighted (T1-CE), and T2 Fluid Attenuated Inversion Recovery (FLAIR), and of the exclusion of pathological tissues, on the non-rigid registration of pre- and post-operative images. We investigate two types of registration methods: an iterative approach and a convolutional neural network (CNN) solution based on a 3D U-Net. We employ two test sets to compute the mean target registration error (mTRE) based on corresponding landmarks. In the first set, markers are positioned exclusively in the surroundings of the pathology. The methods employing T1-CE achieve the lowest mTREs, with an improvement of up to 0.8 mm for the iterative solution. The mTREs are higher than the baseline when using the FLAIR sequence. When excluding the pathology, lower mTREs are observed for most of the methods. In the second test set, corresponding landmarks are located throughout the entire brain volumes. Both solutions employing T1-CE obtain the lowest mTREs, with a decrease of up to 1.16 mm for the iterative method, whereas the results worsen when using FLAIR. When excluding the pathology, an improvement is observed for the CNN method using T1-CE. Both approaches utilizing the T1-CE sequence obtain the best mTREs, whereas FLAIR is the least informative sequence for guiding the registration process. Moreover, the exclusion of pathology from the distance measure computation improves the registration of the brain tissues surrounding the tumor. Thus, this work provides the first numerical evaluation of the influence of these parameters on the registration of longitudinal magnetic resonance images, and it can be helpful for developing future algorithms.
Affiliation(s)
- Luca Canalini, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Jan Klein, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Diana Waldmannstetter, Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Florian Kofler, Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany; Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Zentrum Munich, Munich, Germany
- Stefano Cerri, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
- Alessa Hering, Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany; Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, Netherlands
- Stefan Heldmann, Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Sarah Schlaeger, Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Bjoern H. Menze, Image-Based Biomedical Modeling, Department of Informatics, Technical University of Munich, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Benedikt Wiestler, Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Jan Kirschke, Department of Neuroradiology, Technical University of Munich (TUM) School of Medicine, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Horst K. Hahn, Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
10. Schumacher M, Siebert H, Genz A, Bade R, Heinrich M. Learning-based three-dimensional registration with weak bounding box supervision. J Med Imaging (Bellingham) 2022;9:044001. PMID: 35847178. PMCID: PMC9279677. DOI: 10.1117/1.jmi.9.4.044001.
Abstract
Purpose: Image registration is the process of aligning images, and it is a fundamental task in medical image analysis. While many tasks in the field of image analysis, such as image segmentation, are handled almost entirely with deep learning and exceed the accuracy of conventional algorithms, currently available deformable image registration methods are often still conventional. Deep learning methods for medical image registration have recently reached the accuracy of conventional algorithms. However, they are often based on a weakly supervised learning scheme that uses multilabel image segmentations during training, and the creation of such detailed annotations is very time-consuming.
Approach: We propose a weakly supervised learning scheme for deformable image registration. By calculating the loss function based on only bounding box labels, we are able to train an image registration network for large-displacement deformations without using densely labeled images. We evaluate our model on inter-patient three-dimensional abdominal CT and MRI images.
Results: The results show an improvement of ~10% (for CT images) and ~20% (for MRI images) in comparison to the unsupervised method. When the reduced annotation effort is taken into account, the performance also exceeds that of weakly supervised training using detailed image segmentations.
Conclusion: We show that the performance of image registration methods can be enhanced with little annotation effort using our proposed method.
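Bounding-box supervision of this kind can be instantiated in several ways; one plausible sketch (an assumption for illustration, not the paper's exact loss) rasterises each box into a binary mask and penalises non-overlap between the warped and fixed box masks with a Dice-style loss:

```python
import numpy as np

def box_to_mask(box, shape):
    # box: (z0, y0, x0, z1, y1, x1) half-open bounds -> binary volume mask
    z0, y0, x0, z1, y1, x1 = box
    m = np.zeros(shape, dtype=float)
    m[z0:z1, y0:y1, x0:x1] = 1.0
    return m

def box_dice_loss(warped_mask, fixed_mask, eps=1e-6):
    # Dice-style overlap loss between two (possibly soft) box masks;
    # 0 for perfect overlap, 1 for disjoint boxes
    inter = (warped_mask * fixed_mask).sum()
    return 1.0 - 2.0 * inter / (warped_mask.sum() + fixed_mask.sum() + eps)
```

In a training loop the warped mask would come from applying the network's predicted deformation to the moving image's box mask, and the loss would be summed over all annotated structures.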
Affiliation(s)
- Mona Schumacher, University of Luebeck, Institute of Medical Informatics, Luebeck, Germany; MeVis Medical Solutions AG, Bremen, Germany
- Hanna Siebert, University of Luebeck, Institute of Medical Informatics, Luebeck, Germany
- Mattias Heinrich, University of Luebeck, Institute of Medical Informatics, Luebeck, Germany
11. Huang B, Ye Y, Xu Z, Cai Z, He Y, Zhong Z, Liu L, Chen X, Chen H, Huang B. 3D Lightweight Network for Simultaneous Registration and Segmentation of Organs-at-Risk in CT Images of Head and Neck Cancer. IEEE Trans Med Imaging 2022;41:951-964. PMID: 34784272. DOI: 10.1109/tmi.2021.3128408.
Abstract
Image-guided radiation therapy (IGRT) is the most effective treatment for head and neck cancer. The successful implementation of IGRT requires accurate delineation of organ-at-risk (OAR) in the computed tomography (CT) images. In routine clinical practice, OARs are manually segmented by oncologists, which is time-consuming, laborious, and subjective. To assist oncologists in OAR contouring, we proposed a three-dimensional (3D) lightweight framework for simultaneous OAR registration and segmentation. The registration network was designed to align a selected OAR template to a new image volume for OAR localization. A region of interest (ROI) selection layer then generated ROIs of OARs from the registration results, which were fed into a multiview segmentation network for accurate OAR segmentation. To improve the performance of registration and segmentation networks, a centre distance loss was designed for the registration network, an ROI classification branch was employed for the segmentation network, and further, context information was incorporated to iteratively promote both networks' performance. The segmentation results were further refined with shape information for final delineation. We evaluated registration and segmentation performances of the proposed framework using three datasets. On the internal dataset, the Dice similarity coefficient (DSC) of registration and segmentation was 69.7% and 79.6%, respectively. In addition, our framework was evaluated on two external datasets and gained satisfactory performance. These results showed that the 3D lightweight framework achieved fast, accurate and robust registration and segmentation of OARs in head and neck cancer. The proposed framework has the potential of assisting oncologists in OAR delineation.
12
Stumpo V, Kernbach JM, van Niftrik CHB, Sebök M, Fierstra J, Regli L, Serra C, Staartjes VE. Machine Learning Algorithms in Neuroimaging: An Overview. ACTA NEUROCHIRURGICA. SUPPLEMENT 2021; 134:125-138. [PMID: 34862537 DOI: 10.1007/978-3-030-85292-4_17] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Machine learning (ML) and artificial intelligence (AI) applications in the field of neuroimaging have been on the rise in recent years, and their clinical adoption is increasing worldwide. Deep learning (DL) is a field of ML that can be defined as a set of algorithms enabling a computer to be fed with raw data and progressively discover-through multiple layers of representation-more complex and abstract patterns in large data sets. The combination of ML and radiomics, namely the extraction of features from medical images, has proven valuable, too: Radiomic information can be used for enhanced image characterization and prognosis or outcome prediction. This chapter summarizes the basic concepts underlying ML application for neuroimaging and discusses technical aspects of the most promising algorithms, with a specific focus on Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), in order to provide the readership with the fundamental theoretical tools to better understand ML in neuroimaging. Applications are highlighted from a practical standpoint in the last section of the chapter, including: image reconstruction and restoration, image synthesis and super-resolution, registration, segmentation, classification, and outcome prediction.
Affiliation(s)
- Vittorio Stumpo
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Julius M Kernbach
- Neurosurgical Artificial Intelligence Lab Aachen (NAILA), Department of Neurosurgery, RWTH University Hospital, Aachen, Germany
- Department of Neurosurgery, Faculty of Medicine, RWTH Aachen University, Aachen, Germany
- Christiaan H B van Niftrik
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Martina Sebök
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Jorn Fierstra
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Luca Regli
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Carlo Serra
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Victor E Staartjes
- Machine Intelligence in Clinical Neuroscience (MICN) Lab, Department of Neurosurgery, Clinical Neuroscience Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
13
Clark D, Badea C. Advances in micro-CT imaging of small animals. Phys Med 2021; 88:175-192. [PMID: 34284331 PMCID: PMC8447222 DOI: 10.1016/j.ejmp.2021.07.005] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 06/23/2021] [Accepted: 07/05/2021] [Indexed: 12/22/2022] Open
Abstract
PURPOSE Micron-scale computed tomography (micro-CT) imaging is a ubiquitous, cost-effective, and non-invasive three-dimensional imaging modality. We review recent developments and applications of micro-CT for preclinical research. METHODS Based on a comprehensive review of recent micro-CT literature, we summarize features of state-of-the-art hardware and ongoing challenges and promising research directions in the field. RESULTS Representative features of commercially available micro-CT scanners and some new applications for both in vivo and ex vivo imaging are described. New advancements include spectral scanning using dual-energy micro-CT based on energy-integrating detectors or a new generation of photon-counting x-ray detectors (PCDs). Beyond two-material discrimination, PCDs enable quantitative differentiation of intrinsic tissues from one or more extrinsic contrast agents. When these extrinsic contrast agents are incorporated into a nanoparticle platform (e.g. liposomes), novel micro-CT imaging applications are possible such as combined therapy and diagnostic imaging in the field of cancer theranostics. Another major area of research in micro-CT is in x-ray phase contrast (XPC) imaging. XPC imaging opens CT to many new imaging applications because phase changes are more sensitive to density variations in soft tissues than standard absorption imaging. We further review the impact of deep learning on micro-CT. We feature several recent works which have successfully applied deep learning to micro-CT data, and we outline several challenges specific to micro-CT. CONCLUSIONS All of these advancements establish micro-CT imaging at the forefront of preclinical research, able to provide anatomical, functional, and even molecular information while serving as a testbench for translational research.
Affiliation(s)
- D.P. Clark
- Quantitative Imaging and Analysis Lab, Department of Radiology, Duke University Medical Center, Durham, NC 27710
- C.T. Badea
- Quantitative Imaging and Analysis Lab, Department of Radiology, Duke University Medical Center, Durham, NC 27710
14
Cannarile MA, Gomes B, Canamero M, Reis B, Byrd A, Charo J, Yadav M, Karanikas V. Biomarker Technologies to Support Early Clinical Immuno-oncology Development: Advances and Interpretation. Clin Cancer Res 2021; 27:4147-4159. [PMID: 33766813 DOI: 10.1158/1078-0432.ccr-20-2345] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 02/02/2021] [Accepted: 03/08/2021] [Indexed: 11/16/2022]
Abstract
Today, there is a huge effort to develop cancer immunotherapeutics capable of combating cancer cells as well as the biological environment in which they can grow, adapt, and survive. For such treatments to benefit more patients, there is a great need to dissect the complex interplays between tumor cells and the host's immune system. Monitoring mechanisms of resistance to immunotherapeutics can delineate the evolution of key players capable of driving an efficacious antitumor immune response. In doing so, simultaneous and systematic interrogation of multiple biomarkers beyond single biomarker approaches needs to be undertaken. Zooming into cell-to-cell interactions using technological advancements with unprecedented cellular resolution such as single-cell spatial transcriptomics, advanced tissue histology approaches, and new molecular immune profiling tools promises to provide a unique level of molecular granularity of the tumor environment and may support better decision-making during drug development. This review will focus on how such technological tools are applied in clinical settings, to inform the underlying tumor-immune biology of patients and offer a deeper understanding of cancer immune responsiveness to immuno-oncology treatments.
Affiliation(s)
- Michael A Cannarile
- F. Hoffmann-La Roche AG, Pharmaceutical Research and Early Development Oncology, Roche Innovation Center Munich, Munich, Germany
- Bruno Gomes
- F. Hoffmann-La Roche AG, Pharmaceutical Research and Early Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Marta Canamero
- F. Hoffmann-La Roche AG, Pharmaceutical Research and Early Development Oncology, Roche Innovation Center Munich, Munich, Germany
- Bernhard Reis
- F. Hoffmann-La Roche AG, Pharmaceutical Research and Early Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Jehad Charo
- F. Hoffmann-La Roche AG, Pharmaceutical Research and Early Development Oncology, Roche Innovation Center Zurich, Zurich, Switzerland
- Vaios Karanikas
- F. Hoffmann-La Roche AG, Pharmaceutical Research and Early Development Oncology, Roche Innovation Center Zurich, Zurich, Switzerland
15
Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is a branch of computer science that aims to mimic human-like intelligence in machines, using software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subfield of AI that uses data-driven algorithms which learn to imitate human behavior based on previous examples or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and imaging performed before each fraction. Each of the many algorithms used in radiation oncology has its own advantages and limitations, with different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order, identifying specific areas in which quality and efficiency can be improved by ML. The RT workflow is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment application, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms is provided for each stage.
Affiliation(s)
- Melek Yakar
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
- Durmus Etiz
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
16
Longitudinal diffusion MRI analysis using Segis-Net: A single-step deep-learning framework for simultaneous segmentation and registration. Neuroimage 2021; 235:118004. [PMID: 33794359 DOI: 10.1016/j.neuroimage.2021.118004] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Revised: 03/12/2021] [Accepted: 03/19/2021] [Indexed: 01/02/2023] Open
Abstract
This work presents a single-step deep-learning framework for longitudinal image analysis, coined Segis-Net. To optimally exploit the information available in longitudinal data, the method concurrently learns a multi-class segmentation and a nonlinear registration. Segmentation and registration are modeled using a convolutional neural network and optimized simultaneously for their mutual benefit. An objective function that optimizes spatial correspondence for the segmented structures across time-points is proposed. We applied Segis-Net to the analysis of white matter tracts from N=8045 longitudinal brain MRI datasets of 3249 elderly individuals. The Segis-Net approach showed a significant increase in registration accuracy, spatio-temporal segmentation consistency, and reproducibility compared with two multistage pipelines. This also led to a significant reduction in the sample size required to achieve the same statistical power in analyzing tract-specific measures. We therefore expect that Segis-Net can serve as a reliable new tool to support longitudinal imaging studies investigating macro- and microstructural brain changes over time.
17
Park JE, Ham S, Kim HS, Park SY, Yun J, Lee H, Choi SH, Kim N. Diffusion and perfusion MRI radiomics obtained from deep learning segmentation provides reproducible and comparable diagnostic model to human in post-treatment glioblastoma. Eur Radiol 2020; 31:3127-3137. [PMID: 33128598 DOI: 10.1007/s00330-020-07414-3] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 08/25/2020] [Accepted: 10/13/2020] [Indexed: 01/06/2023]
Abstract
OBJECTIVES Deep learning-based automatic segmentation (DLAS) helps the reproducibility of radiomics features, but its effect on radiomics modeling is unknown. We therefore evaluated whether DLAS can robustly extract anatomical and physiological MRI features, thereby assisting in the accurate assessment of treatment response in glioblastoma patients. METHODS A DLAS model was trained on 238 glioblastomas and validated on an independent set of 98 pre- and 86 post-treatment glioblastomas from two tertiary hospitals. A total of 1618 radiomics features from contrast-enhanced T1-weighted images (CE-T1w) and histogram features from apparent diffusion coefficient (ADC) and cerebral blood volume (CBV) mapping were extracted. The diagnostic performance of radiomics features and ADC and CBV parameters for identifying treatment response was tested using area under the curve (AUC) from receiver operating characteristics analysis. Feature reproducibility was tested using a 0.80 cutoff for concordance correlation coefficients. RESULTS Reproducibility was excellent for ADC and CBV features (ICC, 0.82-0.99) and first-order features (pre- and post-treatment, 100% and 94.1% remained), but lower for texture (79.0% and 69.1% remained) and wavelet-transformed (81.8% and 74.9% remained) features of CE-T1w. DLAS-based radiomics showed similar performance to human-performed segmentations in internal validation (AUC, 0.81 [95% CI, 0.64-0.99] vs. AUC, 0.81 [0.60-1.00], p = 0.80), but slightly lower performance in external validation (AUC, 0.78 [0.61-0.95] vs. AUC, 0.65 [0.46-0.84], p = 0.23). CONCLUSION DLAS-based feature extraction showed high reproducibility for first-order features from anatomical and physiological MRI, and comparable diagnostic performance to human manual segmentations in the identification of pseudoprogression, supporting the utility of DLAS in quantitative MRI analysis. 
KEY POINTS • Deep learning-based automatic segmentation (DLAS) enables fast and robust feature extraction from diffusion- and perfusion-weighted MRI. • DLAS showed high reproducibility in first-order feature extraction from anatomical, diffusion, and perfusion MRI across two centers. • DLAS-based radiomics features showed comparable diagnostic accuracy to manual segmentations in post-treatment glioblastoma.
Affiliation(s)
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, Seoul, 05505, South Korea
- Sungwon Ham
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, Seoul, 05505, South Korea
- Seo Young Park
- Department of Clinical Epidemiology and Biostatistics, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Jihye Yun
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Hyunna Lee
- Health Innovation Big Data Center, Asan Institute for Life Science, Asan Medical Center, Seoul, South Korea
- Seung Hong Choi
- Department of Radiology, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, Seoul, 05505, South Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
18
Chen X, Fan Z, Li KKW, Wu G, Yang Z, Gao X, Liu Y, Wu H, Chen H, Tang Q, Chen L, Wang Y, Mao Y, Ng HK, Shi Z, Yu J, Zhou L. Molecular subgrouping of medulloblastoma based on few-shot learning of multitasking using conventional MR images: a retrospective multicenter study. Neurooncol Adv 2020; 2:vdaa079. [PMID: 32760911 PMCID: PMC7393307 DOI: 10.1093/noajnl/vdaa079] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
Background The determination of the molecular subgroups—wingless (WNT), sonic hedgehog (SHH), Group 3, and Group 4—of medulloblastoma is very important for prognostication and risk-adapted treatment strategies. Because medulloblastoma is a rare disease, we designed a unique multitask framework for the few-shot scenario to achieve noninvasive molecular subgrouping with high accuracy. Methods We introduced a multitask technique based on the mask regional convolutional neural network (Mask-RCNN). By effectively utilizing comprehensive information including genotype, tumor mask, and prognosis, the multitask technique enabled multi-purpose modeling while simultaneously improving the accuracy of molecular subgrouping. One hundred and thirteen medulloblastoma cases were collected from 4 hospitals over an 8-year period in this retrospective study, divided into 3-fold cross-validation cohorts (N = 74) from 2 hospitals and an independent testing cohort (N = 39) from the other 2 hospitals. Comparative experiments with different auxiliary tasks were designed to illustrate the effect of multitasking on molecular subgrouping. Results Compared to the single-task framework, the multitask framework combining 3 tasks increased the average accuracy of molecular subgrouping from 0.84 to 0.93 in cross-validation and from 0.79 to 0.85 in independent testing. The average areas under the receiver operating characteristic curve (AUCs) for molecular subgrouping were 0.97 in cross-validation and 0.92 in independent testing. The average AUCs for prognostication reached 0.88 in cross-validation and 0.79 in independent testing. Tumor segmentation achieved a Dice coefficient of 0.90 in both cohorts. Conclusions The multitask Mask-RCNN is an effective method for the molecular subgrouping and prognostication of medulloblastoma, achieving high accuracy in few-shot learning.
Affiliation(s)
- Xi Chen
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Zhen Fan
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Kay Ka-Wai Li
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China SAR
- Guoqing Wu
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Zhong Yang
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China
- Xin Gao
- Department of Neurosurgery, Huadong Hospital, Fudan University, Shanghai, China
- Yingchao Liu
- Department of Neurosurgery, Shandong Provincial Hospital, Jinan, China
- Haibo Wu
- Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hong Chen
- Department of Pathology, Huashan Hospital, Fudan University, Shanghai, China
- Qisheng Tang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Liang Chen
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Ying Mao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Ho-Keung Ng
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Hong Kong, China SAR
- Zhifeng Shi
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Jinhua Yu
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Liangfu Zhou
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China