1
Ji Y, Silva RF, Adali T, Wen X, Zhu Q, Jiang R, Zhang D, Qi S, Calhoun VD. Joint multi-site domain adaptation and multi-modality feature selection for the diagnosis of psychiatric disorders. Neuroimage Clin 2024;43:103663. PMID: 39226701. DOI: 10.1016/j.nicl.2024.103663. Received 06/03/2024; revised 08/18/2024; accepted 08/25/2024.
Abstract
Identifying biomarkers for computer-aided diagnosis (CAD) is crucial for early intervention in psychiatric disorders. Multi-site data have been utilized to increase sample size and improve statistical power, while multi-modality classification offers significant advantages over traditional single-modality approaches for diagnosing psychiatric disorders. However, inter-site heterogeneity and intra-modality heterogeneity present challenges to multi-site, multi-modality classification. In this paper, brain functional and structural networks (BFNs/BSNs) from multiple sites were constructed to establish a joint multi-site, multi-modality framework for psychiatric diagnosis. To do this, we developed a hypergraph-based multi-source domain adaptation (HMSDA) method that transforms source-domain subjects into the target domain. A local ordinal structure-based multi-task feature selection (LOSMFS) approach was then developed by integrating the transformed functional and structural connections (FCs/SCs). The effectiveness of our method was validated on the diagnosis of both schizophrenia (SZ) and autism spectrum disorder (ASD), obtaining accuracies of 92.2% ± 2.22% and 84.8% ± 2.68%, respectively. We also compared with 6 domain adaptation (DA), 10 multi-modality feature selection, and 8 multi-site and multi-modality methods. Results showed that the proposed HMSDA+LOSMFS effectively integrates multi-site and multi-modality data to enhance psychiatric diagnosis and identify disorder-specific diagnostic brain connections.
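The abstract above centers on transforming source-site subjects into a target domain before feature selection. The hypergraph-based HMSDA itself is not reproduced here; as a minimal, generic illustration of aligning source-site features to a target site, a CORAL-style covariance alignment can be sketched in NumPy (all names and data are illustrative, and CORAL is a stand-in technique, not the paper's method):

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """CORAL-style alignment: whiten the source features, then re-color
    them with the target covariance so source subjects statistically
    resemble the target site."""
    def cov_root(x, inverse=False):
        c = np.cov(x, rowvar=False) + eps * np.eye(x.shape[1])
        vals, vecs = np.linalg.eigh(c)
        p = -0.5 if inverse else 0.5
        return vecs @ np.diag(vals ** p) @ vecs.T
    centered = source - source.mean(axis=0)
    return centered @ cov_root(source, inverse=True) @ cov_root(target) + target.mean(axis=0)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (200, 4))  # toy "source site" features
tgt = rng.normal(5.0, 3.0, (200, 4))  # toy "target site" features
aligned = coral_align(src, tgt)
```

After alignment, the source features share the target site's mean and (approximately) its covariance, which is the general effect a multi-source DA step aims for before joint feature selection.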
Affiliation(s)
- Yixin Ji
- Department of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, China
- Rogers F Silva
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, USA
- Tülay Adali
- Department of CSEE, University of Maryland, USA
- Xuyun Wen
- Department of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, China
- Qi Zhu
- Department of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, China
- Rongtao Jiang
- Department of Psychiatry and Neuroscience, Yale School of Medicine, New Haven, CT, USA
- Daoqiang Zhang
- Department of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, China
- Shile Qi
- Department of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, China
- Vince D Calhoun
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, USA
2
Hughes JW, Somani S, Elias P, Tooley J, Rogers AJ, Poterucha T, Haggerty CM, Salerno M, Ouyang D, Ashley E, Zou J, Perez MV. Simple models vs. deep learning in detecting low ejection fraction from the electrocardiogram. Eur Heart J Digit Health 2024;5:427-434. PMID: 39081946. PMCID: PMC11284011. DOI: 10.1093/ehjdh/ztae034. Received 02/14/2024; revised 03/28/2024; accepted 04/23/2024.
Abstract
Aims: Deep learning methods have recently gained success in detecting left ventricular systolic dysfunction (LVSD) from electrocardiogram (ECG) waveforms. Despite their high level of accuracy, they are difficult to interpret and to deploy broadly in the clinical setting. In this study, we set out to determine whether simpler models based on standard ECG measurements could detect LVSD with accuracy similar to that of deep learning models. Methods and results: Using an observational data set of 40 994 matched 12-lead ECGs and transthoracic echocardiograms, of which 9.72% had LVSD, we trained a range of models of increasing complexity to detect LVSD from ECG waveforms and derived measurements. The training data were acquired from the Stanford University Medical Center; external validation data were acquired from the Columbia Medical Center and the UK Biobank. A random forest model using 555 discrete, automated measurements achieved an area under the receiver operating characteristic curve (AUC) of 0.92 (0.91-0.93), similar to a deep learning waveform model with an AUC of 0.94 (0.93-0.94). A logistic regression model based on five measurements achieved high performance [AUC of 0.86 (0.85-0.87)], close to the deep learning model and better than N-terminal prohormone brain natriuretic peptide (NT-proBNP). Finally, in experiments at two independent external sites, we found that the simpler models were more portable across sites. Conclusion: Our study demonstrates the value of simple electrocardiographic models that perform nearly as well as deep learning models while being much easier to implement and interpret.
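All models in the study above are compared by the area under the receiver operating characteristic curve. As a minimal sketch (not the study's pipeline), the AUC can be computed from ranks via the Mann-Whitney statistic; labels and scores below are toy data:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum statistic
    (assumes no tied scores, which keeps the sketch short)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
print(auc(y, scores))  # → 0.75
```

This rank formulation is equivalent to integrating the ROC curve and is why a five-feature logistic model and a deep waveform model can be compared on the same 0-1 scale.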
Affiliation(s)
- John Weston Hughes
- Department of Computer Science, Stanford University, 353 Jane Stanford Way, Stanford, CA 94305, USA
- Sulaiman Somani
- Department of Medicine, Stanford University, 1265 Pasteur Dr, Stanford, CA 94305, USA
- Pierre Elias
- Department of Medicine, Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, USA
- James Tooley
- Department of Medicine, Stanford University, 1265 Pasteur Dr, Stanford, CA 94305, USA
- Albert J Rogers
- Department of Medicine, Stanford University, 1265 Pasteur Dr, Stanford, CA 94305, USA
- Timothy Poterucha
- Department of Medicine, Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, USA
- Christopher M Haggerty
- Department of Medicine, Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, USA
- Michael Salerno
- Department of Medicine, Stanford University, 1265 Pasteur Dr, Stanford, CA 94305, USA
- David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, 127 S San Vicente Blvd Pavilion, Suite A3600, Los Angeles, CA 90048, USA
- Euan Ashley
- Department of Medicine, Stanford University, 1265 Pasteur Dr, Stanford, CA 94305, USA
- James Zou
- Department of Biomedical Data Science, Stanford University, 1265 Welch Road, Stanford, CA 94305, USA
- Marco V Perez
- Department of Medicine, Stanford University, 1265 Pasteur Dr, Stanford, CA 94305, USA
3
Texier B, Hémon C, Queffélec A, Dowling J, Bessieres I, Greer P, Acosta O, Boue-Rafle A, de Crevoisier R, Lafond C, Castelli J, Barateau A, Nunes JC. 3D unsupervised deep learning method for magnetic resonance imaging-to-computed tomography synthesis in prostate radiotherapy. Phys Imaging Radiat Oncol 2024;31:100612. PMID: 39161728. PMCID: PMC11332181. DOI: 10.1016/j.phro.2024.100612. Received 04/30/2024; revised 07/10/2024; accepted 07/12/2024.
Abstract
Background and purpose: Magnetic resonance imaging (MRI)-to-computed tomography (CT) synthesis is essential in MRI-only radiotherapy workflows, particularly through deep learning techniques known for their accuracy. However, current supervised methods are limited to the centers whose data they were trained on and depend on registration precision. The aim of this study was to evaluate the accuracy of unsupervised and supervised approaches to prostate MRI-to-CT generation for radiotherapy dose calculation. Methods: CT/MRI image pairs from 99 prostate cancer patients across three different centers were used. Supervised and unsupervised conditional Generative Adversarial Networks (cGAN) were compared. Unsupervised training incorporates a style transfer method with a Content and Style Representation for Enhanced Perceptual synthesis (CREPs) loss. For dose evaluation, the photon prescription dose was 60 Gy delivered with volumetric modulated arc therapy (VMAT). The imaging endpoint for sCT evaluation was the Mean Absolute Error (MAE); dosimetric endpoints included absolute dose differences and gamma analysis between CT and sCT dose calculations. Results: The unsupervised paired network was the most accurate for the body, with an MAE of 33.6 HU; the highest MAE, 45.5 HU, was obtained with unsupervised unpaired learning. All architectures provided clinically acceptable results for dose calculation, with gamma pass rates above 94% (1%/1 mm, 10% dose threshold). Conclusions: This study shows that multicenter data can produce accurate sCTs via unsupervised learning, eliminating CT-MRI registration. The sCTs not only matched HU values but also enabled precise dose calculations, suggesting their potential for wider use in MRI-only radiotherapy workflows.
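The imaging endpoint above is the mean absolute error in Hounsfield units between the CT and the synthetic CT. A minimal sketch of that endpoint, assuming co-registered volumes and a precomputed body mask (all arrays below are toy data):

```python
import numpy as np

def mae_hu(ct, sct, body_mask):
    """Mean absolute error in Hounsfield units, restricted to the body
    contour (background air is excluded via the mask)."""
    return np.abs(ct[body_mask] - sct[body_mask]).mean()

ct = np.full((8, 8, 8), 40.0)        # toy CT volume (HU)
sct = ct + 30.0                      # toy synthetic CT, offset by 30 HU
mask = np.ones_like(ct, dtype=bool)  # toy body mask (everything "in body")
print(mae_hu(ct, sct, mask))  # → 30.0
```

Restricting the error to the body contour matters because the values cited (33.6 HU vs. 45.5 HU) are body-region MAEs, not whole-volume averages.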
Affiliation(s)
- Blanche Texier
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Cédric Hémon
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Adélie Queffélec
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jason Dowling
- CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- Peter Greer
- Univ. of Newcastle, School of Mathematical and Physical Sciences, Dept. of Radiation Oncology, Calvary Mater Hospital, Newcastle, Australia
- Oscar Acosta
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Adrien Boue-Rafle
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Renaud de Crevoisier
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Caroline Lafond
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Joël Castelli
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Anaïs Barateau
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jean-Claude Nunes
- Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
4
He Y, Kong J, Li J, Zheng C. Entropy and distance-guided super self-ensembling for optic disc and cup segmentation. Biomed Opt Express 2024;15:3975-3992. PMID: 38867792. PMCID: PMC11166439. DOI: 10.1364/boe.521778. Received 02/21/2024; revised 04/14/2024; accepted 05/06/2024.
Abstract
Segmenting the optic disc (OD) and optic cup (OC) is crucial to accurately detect changes in glaucoma progression in the elderly. Recently, various convolutional neural networks have emerged to deal with OD and OC segmentation. Due to the domain shift problem, achieving high-accuracy segmentation of OD and OC on datasets from different domains remains highly challenging, and unsupervised domain adaptation has attracted extensive attention as a way to address this problem. In this work, we propose a novel unsupervised domain adaptation method, called entropy and distance-guided super self-ensembling (EDSS), to enhance the segmentation performance for OD and OC. EDSS comprises two self-ensembling models, with Gaussian noise added to the weights of the whole network. Firstly, we design a super self-ensembling (SSE) framework, which combines two self-ensembling models to learn more discriminative information from images. Secondly, we propose a novel exponential moving average with Gaussian noise (G-EMA) to enhance the robustness of the self-ensembling framework. Thirdly, we propose an effective multi-information fusion strategy (MFS) to guide and improve the domain adaptation process. We evaluate EDSS on two public fundus image datasets, RIGA+ and REFUGE. Extensive experimental results demonstrate that EDSS outperforms state-of-the-art segmentation methods with unsupervised domain adaptation: the mean Dice scores on the three test sub-datasets of RIGA+ are 0.8442, 0.8772, and 0.9006, respectively, and the mean Dice score on the REFUGE dataset is 0.9154.
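The G-EMA component above updates teacher weights as an exponential moving average of the student's weights plus Gaussian noise. A minimal sketch of such an update on a dictionary of weight arrays; the decay and noise scale are assumed values, not taken from the paper:

```python
import numpy as np

def g_ema_update(teacher, student, alpha=0.99, sigma=1e-3, rng=None):
    """Gaussian-noise EMA: teacher <- alpha*teacher + (1-alpha)*student + noise.
    alpha and sigma are illustrative defaults, not the paper's settings."""
    rng = rng if rng is not None else np.random.default_rng()
    return {k: alpha * teacher[k] + (1 - alpha) * student[k]
               + rng.normal(0.0, sigma, size=np.shape(teacher[k]))
            for k in teacher}

teacher = {"w": np.ones(3)}
student = {"w": np.zeros(3)}
teacher = g_ema_update(teacher, student)
```

With sigma set to zero this reduces to the standard Mean-Teacher EMA; the added noise is what distinguishes G-EMA and is argued to improve robustness.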
Affiliation(s)
- Yanlin He
- College of Information Sciences and Technology, Northeast Normal University, Changchun 130117, China
- Jun Kong
- College of Information Sciences and Technology, Northeast Normal University, Changchun 130117, China
- Juan Li
- Jilin Engineering Normal University, Changchun 130052, China
- Business School, Northeast Normal University, Changchun 130117, China
- Caixia Zheng
- College of Information Sciences and Technology, Northeast Normal University, Changchun 130117, China
- Key Laboratory of Applied Statistics of MOE, Northeast Normal University, Changchun 130024, China
5
Fidon L, Aertsen M, Kofler F, Bink A, David AL, Deprest T, Emam D, Guffens F, Jakab A, Kasprian G, Kienast P, Melbourne A, Menze B, Mufti N, Pogledic I, Prayer D, Stuempflen M, Van Elslander E, Ourselin S, Deprest J, Vercauteren T. A Dempster-Shafer approach to trustworthy AI with application to fetal brain MRI segmentation. IEEE Trans Pattern Anal Mach Intell 2024;46:3784-3795. PMID: 38198270. DOI: 10.1109/tpami.2023.3346330.
Abstract
Deep learning models for medical image segmentation can fail unexpectedly and spectacularly for pathological cases and for images acquired at centers other than those that provided the training images, producing labeling errors that violate expert knowledge. Such errors undermine the trustworthiness of deep learning models for medical image segmentation. Mechanisms for detecting and correcting such failures are essential for safely translating this technology into clinics and are likely to be a requirement of future regulations on artificial intelligence (AI). In this work, we propose a trustworthy AI theoretical framework and a practical system that can augment any backbone AI system using a fallback method and a fail-safe mechanism based on Dempster-Shafer theory. Our approach relies on an actionable definition of trustworthy AI. Our method automatically discards the voxel-level labeling predicted by the backbone AI where it violates expert knowledge and relies on a fallback for those voxels. We demonstrate the effectiveness of the proposed trustworthy AI approach on the largest reported annotated dataset of fetal MRI, consisting of 540 manually annotated fetal brain 3D T2w MRIs from 13 centers. Our trustworthy AI method improves the robustness of four backbone AI models for fetal brain MRIs acquired across various centers and for fetuses with various brain abnormalities.
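The fail-safe mechanism described above discards backbone voxel labels that violate expert knowledge and substitutes a fallback prediction. Stripped of the Dempster-Shafer belief-function machinery, the voxel-level switch can be sketched as follows (a hard plausibility mask stands in for the paper's evidential fusion; all arrays are toy data):

```python
import numpy as np

def failsafe_labels(backbone, fallback, plausible):
    """Keep the backbone's voxel labels where they are anatomically
    plausible; use the fallback prediction elsewhere."""
    return np.where(plausible, backbone, fallback)

backbone = np.array([1, 1, 0])           # backbone AI labels per voxel
fallback = np.array([0, 0, 1])           # conservative fallback labels
plausible = np.array([True, False, True])  # expert-knowledge check
print(failsafe_labels(backbone, fallback, plausible))  # → [1 0 0]
```

The actual method fuses backbone and fallback evidence voxel-wise with Dempster's rule rather than a binary mask, but the safety property is the same: implausible backbone outputs never reach the final segmentation.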
6
Li Y, Fu Y, Gayo IJMB, Yang Q, Min Z, Saeed SU, Yan W, Wang Y, Noble JA, Emberton M, Clarkson MJ, Huisman H, Barratt DC, Prisacariu VA, Hu Y. Prototypical few-shot segmentation for cross-institution male pelvic structures with spatial registration. Med Image Anal 2023;90:102935. PMID: 37716198. DOI: 10.1016/j.media.2023.102935. Received 09/07/2022; revised 06/08/2023; accepted 08/16/2023.
Abstract
The prowess that makes few-shot learning desirable in medical image analysis is its efficient use of support image data, which are labelled to classify or segment new classes, a task that otherwise requires substantially more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that trained networks can be effectively adapted to clinically interesting structures that are absent in training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions in episodic adaptation of novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to assist training under the observed imperfect alignment, a support mask conditioning module is proposed to further utilise the annotation available from the support images. Extensive experiments are presented for an application segmenting eight anatomical structures important for interventional planning, using a data set of 589 pelvic T2-weighted MR images acquired at seven institutes. The results demonstrate the efficacy of the 3D formulation, the spatial registration, and the support mask conditioning, each of which made positive contributions independently or collectively. Compared with previously proposed 2D alternatives, the few-shot segmentation performance improved with statistical significance, regardless of whether the support data came from the same or a different institute.
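Prototypical few-shot segmentation, as used above, computes one prototype per class from labelled support features and assigns each query voxel to its most similar prototype. A minimal sketch of this core step, omitting the paper's registration and mask-conditioning modules (feature arrays below are toy data; shapes are flattened voxels × channels):

```python
import numpy as np

def prototype_segment(support_feats, support_mask, query_feats):
    """Masked average pooling yields one prototype per class; each query
    voxel then takes the label of its most cosine-similar prototype."""
    classes = np.unique(support_mask)
    protos = np.stack([support_feats[support_mask == c].mean(axis=0)
                       for c in classes])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    return classes[np.argmax(q @ protos.T, axis=1)]

support_feats = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
support_mask = np.array([0, 0, 1, 1])       # labels of support voxels
query_feats = np.array([[0.9, 0.1], [0.2, 0.8]])
print(prototype_segment(support_feats, support_mask, query_feats))  # → [0 1]
```

Because the prototypes are computed at inference time from the support set, a new structure can be segmented without retraining, which is the property the paper extends to full 3D with spatial registration.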
Affiliation(s)
- Yiwen Li
- Active Vision Laboratory, Department of Engineering Science, University of Oxford, Oxford, UK
- Yunguan Fu
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; InstaDeep Ltd., London, UK
- Iani J M B Gayo
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Qianye Yang
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Zhe Min
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Shaheer U Saeed
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Wen Yan
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China
- Yipei Wang
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Mark Emberton
- Division of Surgery & Interventional Science, University College London, London, UK
- Matthew J Clarkson
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Henkjan Huisman
- Department of Radiology, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
- Dean C Barratt
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Victor A Prisacariu
- Active Vision Laboratory, Department of Engineering Science, University of Oxford, Oxford, UK
- Yipeng Hu
- Department of Medical Physics and Biomedical Engineering, UCL Centre for Medical Image Computing, and Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK; Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
7
Sundaresan V, Lehman JF, Maffei C, Haber SN, Yendiki A. Self-supervised segmentation and characterization of fiber bundles in anatomic tracing data. bioRxiv 2023:2023.09.30.560310. PMID: 37873366. PMCID: PMC10592842. DOI: 10.1101/2023.09.30.560310.
Abstract
Anatomic tracing is the gold standard tool for delineating brain connections and for validating more recently developed imaging approaches such as diffusion MRI tractography. A key step in the analysis of data from tracer experiments is the careful, manual charting of fiber trajectories on histological sections. This is a very time-consuming process, which limits the amount of annotated tracer data available for validation studies. Thus, there is a need to accelerate this process by developing a method for computer-assisted segmentation. Such a method must be robust to the common artifacts in tracer data, including variations in the intensity of stained axons and background, as well as spatial distortions introduced by sectioning and mounting the tissue. The method should also achieve satisfactory performance using limited manually charted data for training. Here we propose the first deep-learning method, with a self-supervised loss function, for segmentation of fiber bundles on histological sections from macaque brains that have received tracer injections. We address the limited availability of manual labels with a semi-supervised training technique that takes advantage of unlabeled data to improve performance. We also introduce anatomic and across-section continuity constraints to improve accuracy. We show that our method can be trained on manually charted sections from a single case and segment unseen sections from different cases, with a true positive rate of ~0.80. We further demonstrate the utility of our method by quantifying the density of fiber bundles as they travel through different white-matter pathways. We show that fiber bundles originating in the same injection site have different levels of density when they travel through different pathways, a finding that can have implications for microstructure-informed tractography methods. The code for our method is available at https://github.com/v-sundaresan/fiberbundle_seg_tracing.
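The across-section continuity constraint mentioned above encourages consistent predictions on adjacent histological sections. One way such a penalty could look, sketched minimally (this is an assumed formulation for illustration, not the paper's exact loss):

```python
import numpy as np

def continuity_penalty(probs):
    """Across-section continuity: mean squared difference between the
    predicted probability maps of adjacent sections (axis 0 = section)."""
    return np.mean((probs[1:] - probs[:-1]) ** 2)

# Identical predictions on every section incur no penalty; an abrupt
# change between sections is penalized.
smooth = np.ones((3, 4, 4))
abrupt = np.zeros((2, 4, 4)); abrupt[1] = 1.0
print(continuity_penalty(smooth), continuity_penalty(abrupt))  # → 0.0 1.0
```

A term like this rewards fiber-bundle predictions that vary gradually along the sectioning axis, matching the anatomical expectation that bundles do not appear and vanish between neighboring sections.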
Affiliation(s)
- Vaanathi Sundaresan
- Department of Computational and Data Sciences, Indian Institute of Science, Bengaluru, Karnataka 560012, India
- Julia F. Lehman
- Department of Pharmacology and Physiology, University of Rochester School of Medicine, Rochester, NY, United States
- Chiara Maffei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
- Suzanne N. Haber
- Department of Pharmacology and Physiology, University of Rochester School of Medicine, Rochester, NY, United States
- McLean Hospital, Belmont, MA, United States
- Anastasia Yendiki
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
8
Texier B, Hémon C, Lekieffre P, Collot E, Tahri S, Chourak H, Dowling J, Greer P, Bessieres I, Acosta O, Boue-Rafle A, Guevelou JL, de Crevoisier R, Lafond C, Castelli J, Barateau A, Nunes JC. Computed tomography synthesis from magnetic resonance imaging using cycle Generative Adversarial Networks with multicenter learning. Phys Imaging Radiat Oncol 2023;28:100511. PMID: 38077271. PMCID: PMC10709085. DOI: 10.1016/j.phro.2023.100511. Received 06/01/2023; revised 11/03/2023; accepted 11/08/2023.
Abstract
Background and purpose: Addressing the need for accurate dose calculation in MRI-only radiotherapy, the generation of synthetic Computed Tomography (sCT) from MRI has emerged. Deep learning (DL) techniques have shown promising results in achieving high sCT accuracy. However, existing sCT synthesis methods are often center-specific, which limits their generalizability; recent studies have proposed approaches such as multicenter training to overcome this limitation. Material and methods: The purpose of this work was to propose multicenter sCT synthesis by DL, using a 2D cycle-GAN on 128 prostate cancer patients from four different centers. Four cases were compared: monocenter training and testing; monocenter training with testing on another center; multicenter training with testing on a center not included in the training; and multicenter training with the test center included in the training. Trainings were performed using 20 patients. sCT accuracy was evaluated using Mean Absolute Error, Mean Error, and Peak Signal-to-Noise Ratio; dose accuracy was assessed with the gamma index and Dose Volume Histogram comparison. Results: Qualitative, quantitative, and dose results show that sCT accuracy did not differ significantly between monocenter trainings and multicenter trainings whose test center was seen during training. However, when the test involved an unseen center, sCT quality was inferior. Conclusions: This work proposed generalizable multicenter training for MR-to-CT synthesis. It was shown that including only a few data from a given center in the training cohort yields sCT accuracy equivalent to a monocenter study.
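Cycle-GAN training, as used above, relies on a cycle-consistency loss: an MR image mapped to CT and back should recover the original, and symmetrically for CT. A minimal sketch of that loss with the two generators passed as plain functions (identity generators below are toy stand-ins for trained networks):

```python
import numpy as np

def cycle_loss(mr, ct, mr2ct, ct2mr):
    """L1 cycle-consistency: MR -> sCT -> MR and CT -> sMR -> CT should
    each reconstruct the original image."""
    return (np.abs(ct2mr(mr2ct(mr)) - mr).mean()
            + np.abs(mr2ct(ct2mr(ct)) - ct).mean())

identity = lambda x: x           # toy "generators"
mr = np.arange(4.0)
ct = np.arange(4.0) + 100.0
print(cycle_loss(mr, ct, identity, identity))  # → 0.0
```

This term is what lets the unpaired (cycle) setting train without registered CT/MR pairs, which is exactly the registration-free property the study exploits across centers.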
Affiliation(s)
- Blanche Texier
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Cédric Hémon
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Pauline Lekieffre
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Emma Collot
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Safaa Tahri
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Hilda Chourak
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- Jason Dowling
- CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- Peter Greer
- Univ. of Newcastle, School of Mathematical and Physical Sciences, Dept. of Radiation Oncology, Calvary Mater Hospital, Newcastle, Australia
- Oscar Acosta
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Adrien Boue-Rafle
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jennifer Le Guevelou
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Renaud de Crevoisier
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Caroline Lafond
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Joël Castelli
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Anaïs Barateau
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jean-Claude Nunes
- Univ Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
9
Bigalke A, Hansen L, Diesel J, Hennigs C, Rostalski P, Heinrich MP. Anatomy-guided domain adaptation for 3D in-bed human pose estimation. Med Image Anal 2023;89:102887. PMID: 37453235. DOI: 10.1016/j.media.2023.102887. Received 11/22/2022; revised 06/16/2023; accepted 06/28/2023.
Abstract
3D human pose estimation is a key component of clinical monitoring systems. The clinical applicability of deep pose estimation models, however, is limited by their poor generalization under domain shifts along with their need for sufficient labeled training data. As a remedy, we present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain. Our method comprises two complementary adaptation strategies based on prior knowledge about human anatomy. First, we guide the learning process in the target domain by constraining predictions to the space of anatomically plausible poses. To this end, we embed the prior knowledge into an anatomical loss function that penalizes asymmetric limb lengths, implausible bone lengths, and implausible joint angles. Second, we propose to filter pseudo labels for self-training according to their anatomical plausibility and incorporate the concept into the Mean Teacher paradigm. We unify both strategies in a point cloud-based framework applicable to unsupervised and source-free domain adaptation. Evaluation is performed for in-bed pose estimation under two adaptation scenarios, using the public SLP dataset and a newly created dataset. Our method consistently outperforms various state-of-the-art domain adaptation methods, surpasses the baseline model by 31%/66%, and reduces the domain gap by 65%/82%. Source code is available at https://github.com/multimodallearning/da-3dhpe-anatomy.
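The anatomical loss above penalizes, among other terms, asymmetric limb lengths. A minimal sketch of the symmetry term alone, with illustrative joint indices (the paper's full loss also covers implausible bone lengths and joint angles, which are omitted here):

```python
import numpy as np

def limb_symmetry_penalty(joints, bone_pairs):
    """Penalize asymmetric limb lengths: for each (left_bone, right_bone)
    pair of (joint_a, joint_b) index tuples, compare the bone lengths.
    Joint indices are illustrative, not a real skeleton convention."""
    penalty = 0.0
    for (a, b), (c, d) in bone_pairs:
        penalty += abs(np.linalg.norm(joints[a] - joints[b])
                       - np.linalg.norm(joints[c] - joints[d]))
    return penalty

# Toy 3D skeleton: joints 0-1 form the "left" bone, 2-3 the "right" bone.
joints = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
print(limb_symmetry_penalty(joints, [((0, 1), (2, 3))]))  # → 0.0
```

Because this penalty needs no labels on the target domain, it can guide self-training there, which is the role the anatomical prior plays in the adaptation method.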
Affiliation(s)
- Alexander Bigalke
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
- Lasse Hansen
- EchoScout GmbH, Maria-Goeppert-Str. 3, 23562 Lübeck, Germany
- Jasper Diesel
- Drägerwerk AG & Co. KGaA, Moislinger Allee 53-55, 23558 Lübeck, Germany
- Carlotta Hennigs
- Institute for Electrical Engineering in Medicine, University of Lübeck, Moislinger Allee 53-55, 23558 Lübeck, Germany
- Philipp Rostalski
- Institute for Electrical Engineering in Medicine, University of Lübeck, Moislinger Allee 53-55, 23558 Lübeck, Germany
- Mattias P Heinrich
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
10
Choi B, Olberg S, Park JC, Kim JS, Shrestha DK, Yaddanapudi S, Furutani KM, Beltran CJ. Technical note: Progressive deep learning: An accelerated training strategy for medical image segmentation. Med Phys 2023;50:5075-5087. PMID: 36763566. DOI: 10.1002/mp.16267. Received 07/02/2022; revised 12/30/2022; accepted 01/24/2023.
Abstract
BACKGROUND Recent advancements in Deep Learning (DL) methodologies have led to state-of-the-art performance in a wide range of applications, especially in object recognition, classification, and segmentation of medical images. However, training modern DL models requires heavy computation and long training times due to the complex nature of network structures and the large amount of training data involved. Moreover, selecting an optimal configuration of hyperparameters for a given DL network is an intensive, repetitive manual process. PURPOSE In this study, we present a novel approach to accelerate the training of DL models for medical image segmentation via progressive feeding of training data based on similarity measures. We term this approach Progressive Deep Learning (PDL). METHODS The two-stage PDL approach was tested on the auto-segmentation task for two imaging modalities: CT and MRI. The training samples were ranked according to pairwise similarity measures based on Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Universal Quality Image Index (UQI) values. At the start of training, a relatively coarse sampling of higher-ranked training data was used to optimize the hyperparameters of the DL network. The higher-ranked samples were then used in step 1 to accelerate loss minimization in early training epochs, and the total dataset was added in step 2 for the remainder of training. RESULTS Our results demonstrate that the PDL approach can reduce the training time by nearly half (∼49%) and can predict segmentations (CT U-Net/DenseNet Dice coefficient: 0.9506/0.9508, MR U-Net/DenseNet Dice coefficient: 0.9508/0.9510) without major statistical difference (Wilcoxon signed-rank test) compared to the conventional DL approach.
The total training times with a fixed cutoff at 0.95 DSC for the CT dataset using DenseNet and U-Net architectures, respectively, were 17 h, 20 min and 4 h, 45 min in the conventional case compared to 8 h, 45 min and 2 h, 20 min with PDL. For the MRI dataset, the total training times using the same architectures were 2 h, 54 min and 52 min in the conventional case and 1 h, 14 min and 25 min with PDL. CONCLUSION The proposed PDL training approach offers the ability to substantially reduce the training time for medical image segmentation while maintaining the performance achieved in the conventional case.
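The ranking step behind PDL can be sketched as follows, using only MSE as the similarity measure (the study also uses PSNR, SSIM, and UQI; the helper names and the 50% stage-1 fraction below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def rank_by_similarity(images):
    """Score each training sample by its average negative MSE to every
    other sample; return indices sorted most-representative first."""
    n = len(images)
    scores = np.empty(n)
    for i in range(n):
        mses = [np.mean((images[i] - images[j]) ** 2)
                for j in range(n) if j != i]
        scores[i] = -np.mean(mses)          # higher = closer to the rest
    return np.argsort(-scores)

def progressive_schedule(images, frac_stage1=0.5):
    """Two-stage PDL schedule: the top-ranked fraction trains first,
    then the full dataset joins for the remaining epochs."""
    order = rank_by_similarity(images)
    k = max(1, int(frac_stage1 * len(images)))
    return list(order[:k]), list(order)
```

Feeding the mutually similar samples first drives the loss down quickly in early epochs; outliers only enter in the second stage, which is what produces the reported training-time savings.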
Affiliation(s)
- Byongsu Choi: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Sven Olberg: Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Justin C Park: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea; Oncosoft Inc., Seoul, South Korea
- Deepak K Shrestha: Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Keith M Furutani: Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Chris J Beltran: Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
11
Kuang S, Woodruff HC, Granzier R, van Nijnatten TJA, Lobbes MBI, Smidt ML, Lambin P, Mehrkanoon S. MSCDA: Multi-level semantic-guided contrast improves unsupervised domain adaptation for breast MRI segmentation in small datasets. Neural Netw 2023; 165:119-134. [PMID: 37285729 DOI: 10.1016/j.neunet.2023.05.014] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Revised: 04/09/2023] [Accepted: 05/09/2023] [Indexed: 06/09/2023]
Abstract
Deep learning (DL) applied to breast tissue segmentation in magnetic resonance imaging (MRI) has received increased attention in the last decade; however, the domain shift that arises from different vendors, acquisition protocols, and biological heterogeneity remains an important but challenging obstacle on the path towards clinical implementation. In this paper, we propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation (MSCDA) framework to address this issue in an unsupervised manner. Our approach incorporates self-training with contrastive learning to align feature representations between domains. In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts to better exploit the underlying semantic information of the image at different levels. To resolve the data imbalance problem, we utilize a category-wise cross-domain sampling strategy to sample anchors from target images and build a hybrid memory bank to store samples from source images. We have validated MSCDA with a challenging task of cross-domain breast MRI segmentation between datasets of healthy volunteers and invasive breast cancer patients. Extensive experiments show that MSCDA effectively improves the model's feature alignment capabilities between domains, outperforming state-of-the-art methods. Furthermore, the framework is shown to be label-efficient, achieving good performance with a smaller source dataset. The code is publicly available at https://github.com/ShengKuangCN/MSCDA.
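The multi-level contrasts can be illustrated with the pixel-to-centroid case: class centroids are mean embeddings over labeled pixels, and each pixel embedding is scored against every centroid. The sketch below is a simplified numpy illustration with hypothetical helper names, not the released MSCDA implementation:

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Mean embedding per semantic class over a batch of pixel features
    (features: (num_pixels, dim), labels: (num_pixels,))."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pixel_to_centroid_logits(pixel_feat, centroids, tau=0.1):
    """Temperature-scaled cosine similarity of one pixel embedding to
    all centroids; a cross-entropy over these logits against the
    pixel's own class gives one pixel-to-centroid contrastive term."""
    def l2(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    return l2(centroids) @ l2(pixel_feat) / tau
```

Pixel-to-pixel and centroid-to-centroid contrasts follow the same pattern with different anchor/positive choices; in MSCDA the anchors come from target images and the memory bank stores source samples.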
Affiliation(s)
- Sheng Kuang: The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Henry C Woodruff: The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Renee Granzier: Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Thiemo J A van Nijnatten: Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Marc B I Lobbes: Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Medical Imaging, Zuyderland Medical Center, Sittard-Geleen, The Netherlands
- Marjolein L Smidt: Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Siamak Mehrkanoon: Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
12
Yan K, Guo X, Ji Z, Zhou X. Deep Transfer Learning for Cross-Species Plant Disease Diagnosis Adapting Mixed Subdomains. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:2555-2564. [PMID: 34914593 DOI: 10.1109/tcbb.2021.3135882] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
A deep transfer learning framework adapting mixed subdomains is proposed for cross-species plant disease diagnosis. Most existing deep transfer learning studies focus on knowledge transfer between highly correlated domains and may fail on domains that are poorly correlated. In this study, mixed-domain images were generated from source and target image groups to improve the correlation between the mixed domain (training dataset) and the target domain (testing dataset). A subdomain alignment mechanism is employed to transfer knowledge from the mixed domain to the target domain, allowing the framework to capture fine-grained information more effectively. Extensive experiments show that the proposed method produces more effective results than existing deep transfer learning technologies for poorly correlated subdomains.
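The mixed-domain construction can be sketched as a pixel-wise convex combination of a source and a target image, a common way to blend domains. This is only an assumed, minimal form of the mixing; the exact generation procedure in the paper may differ:

```python
import numpy as np

def mix_domains(source_img, target_img, lam=0.5):
    """Blend a source and a target image into one mixed-domain sample.
    lam=1 recovers the source image, lam=0 the target image."""
    assert source_img.shape == target_img.shape
    return lam * source_img + (1.0 - lam) * target_img
```

Training on such blends places the training distribution between the two domains, which is what raises its correlation with the target (testing) distribution.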
13
Wang C, Cui Z, Yang J, Han M, Carneiro G, Shen D. BowelNet: Joint Semantic-Geometric Ensemble Learning for Bowel Segmentation From Both Partially and Fully Labeled CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1225-1236. [PMID: 36449590 DOI: 10.1109/tmi.2022.3225667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Accurate bowel segmentation is essential for diagnosis and treatment of bowel cancers. Unfortunately, segmenting the entire bowel in CT images is quite challenging due to unclear boundary, large shape, size, and appearance variations, as well as diverse filling status within the bowel. In this paper, we present a novel two-stage framework, named BowelNet, to handle the challenging task of bowel segmentation in CT images, with two stages of 1) jointly localizing all types of the bowel, and 2) finely segmenting each type of the bowel. Specifically, in the first stage, we learn a unified localization network from both partially- and fully-labeled CT images to robustly detect all types of the bowel. To better capture unclear bowel boundary and learn complex bowel shapes, in the second stage, we propose to jointly learn semantic information (i.e., bowel segmentation mask) and geometric representations (i.e., bowel boundary and bowel skeleton) for fine bowel segmentation in a multi-task learning scheme. Moreover, we further propose to learn a meta segmentation network via pseudo labels to improve segmentation accuracy. By evaluating on a large abdominal CT dataset, our proposed BowelNet method can achieve Dice scores of 0.764, 0.848, 0.835, 0.774, and 0.824 in segmenting the duodenum, jejunum-ileum, colon, sigmoid, and rectum, respectively. These results demonstrate the effectiveness of our proposed BowelNet framework in segmenting the entire bowel from CT images.
14
Barbaroux H, Kunze KP, Neji R, Nazir MS, Pennell DJ, Nielles-Vallespin S, Scott AD, Young AA. Automated segmentation of long and short axis DENSE cardiovascular magnetic resonance for myocardial strain analysis using spatio-temporal convolutional neural networks. J Cardiovasc Magn Reson 2023; 25:16. [PMID: 36991474 PMCID: PMC10061808 DOI: 10.1186/s12968-023-00927-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 02/01/2023] [Indexed: 03/31/2023] Open
Abstract
BACKGROUND Cine Displacement Encoding with Stimulated Echoes (DENSE) facilitates the quantification of myocardial deformation by encoding tissue displacements in the cardiovascular magnetic resonance (CMR) image phase, from which myocardial strain can be estimated with high accuracy and reproducibility. Current methods for analyzing DENSE images still rely heavily on user input, making the process time-consuming and subject to inter-observer variability. The present study sought to develop a spatio-temporal deep learning model for segmentation of the left-ventricular (LV) myocardium, as purely spatial networks often fail due to contrast-related properties of DENSE images. METHODS 2D + time nnU-Net-based models were trained to segment the LV myocardium from DENSE magnitude data in short- and long-axis images. A dataset of 360 short-axis and 124 long-axis slices was used to train the networks, drawn from a combination of healthy subjects and patients with various conditions (hypertrophic and dilated cardiomyopathy, myocardial infarction, myocarditis). Segmentation performance was evaluated against ground-truth manual labels, and a strain analysis using conventional methods was performed to assess strain agreement with manual segmentation. Additional validation was performed using an externally acquired dataset to compare inter- and intra-scanner reproducibility with respect to conventional methods. RESULTS Spatio-temporal models gave consistent segmentation performance throughout the cine sequence, while 2D architectures often failed to segment end-diastolic frames due to the limited blood-to-myocardium contrast. Our models achieved a Dice score of 0.83 ± 0.05 and a Hausdorff distance of 4.0 ± 1.1 mm for short-axis segmentation, and 0.82 ± 0.03 and 7.9 ± 3.9 mm, respectively, for long-axis segmentation.
Strain measurements obtained from automatically estimated myocardial contours showed good to excellent agreement with manual pipelines, and remained within the limits of inter-user variability estimated in previous studies. CONCLUSION Spatio-temporal deep learning shows increased robustness for the segmentation of cine DENSE images. It provides excellent agreement with manual segmentation for strain extraction. Deep learning will facilitate the analysis of DENSE data, bringing it one step closer to clinical routine.
Affiliation(s)
- Hugo Barbaroux: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK; Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK
- Karl P Kunze: MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Radhouene Neji: MR Research Collaborations, Siemens Healthcare Limited, Camberley, UK
- Muhummad Sohaib Nazir: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Dudley J Pennell: Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Sonia Nielles-Vallespin: Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Andrew D Scott: Cardiovascular Magnetic Resonance Unit, The Royal Brompton Hospital (Guy's and St Thomas' NHS Foundation Trust), London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Alistair A Young: School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
15
Xie H, Fu C, Zheng X, Zheng Y, Sham CW, Wang X. Adversarial co-training for semantic segmentation over medical images. Comput Biol Med 2023; 157:106736. [PMID: 36958238 DOI: 10.1016/j.compbiomed.2023.106736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 02/21/2023] [Accepted: 02/28/2023] [Indexed: 03/07/2023]
Abstract
BACKGROUND AND OBJECTIVE Abundant labeled data drives model training for better performance, but collecting sufficient labels remains challenging. To relieve the pressure of label collection, semi-supervised learning merges unlabeled data into the training process. However, joining unlabeled data (e.g., data from different hospitals with different acquisition parameters) changes the original distribution. Such a distribution shift perturbs the training process and can lead to confirmation bias. In this paper, we investigate distribution shift and increase model robustness to it, with the goal of improving practical performance in semi-supervised semantic segmentation of medical images. METHODS To alleviate the issue of distribution shift, we introduce adversarial training into the co-training process. We simulate perturbations caused by the distribution shift via adversarial perturbations and use them to attack the supervised training, improving robustness against the distribution shift. Benefiting from label guidance, supervised training does not collapse under adversarial attacks. For co-training, two sub-models are trained from two views (two disjoint subsets of the dataset) to extract different kinds of knowledge independently. Co-training outperforms a single model by integrating both views of knowledge, avoiding confirmation bias. RESULTS We conduct extensive experiments on challenging medical datasets. Results show desirable improvements over state-of-the-art counterparts (Yu and Wang, 2019; Peng et al., 2020; Perone et al., 2019). We achieve a DSC score of 87.37% with only 20% of labels on the ACDC dataset, almost the same as using 100% of labels. On the SCGM dataset, which exhibits more distribution shift, we achieve a DSC score of 78.65% with 6.5% of labels, surpassing Peng et al. (2020) by 10.30%. These results show superior robustness against distribution shifts in medical scenarios. CONCLUSION Empirical results show the effectiveness of our work for handling distribution shift in medical scenarios.
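The adversarial attack on the supervised branch can be sketched FGSM-style with a toy linear model. Everything below is illustrative: the paper's networks, loss, and attack details differ, and the function names are hypothetical.

```python
import numpy as np

def supervised_loss_and_input_grad(w, x, y):
    """Squared error of a linear predictor w.x and its gradient w.r.t.
    the *input* x (a toy stand-in for the segmentation network)."""
    err = x @ w - y
    return err ** 2, 2.0 * err * w

def fgsm_perturb(x, grad_x, eps=0.05):
    """Fast-gradient-sign perturbation of the input, simulating the
    distribution shift the supervised branch is trained to resist."""
    return x + eps * np.sign(grad_x)
```

Training on perturbed inputs alongside clean ones hardens the supervised branch; the label guidance is what keeps this attack from collapsing training, unlike attacking the unsupervised consistency term.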
Affiliation(s)
- Haoyu Xie: School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China
- Chong Fu: School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, 110819, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, China
- Xu Zheng: School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China
- Yu Zheng: Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region
- Chiu-Wing Sham: School of Computer Science, The University of Auckland, New Zealand
- Xingwei Wang: School of Computer Science and Engineering, Northeastern University, Shenyang, 110819, China
16
Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766 DOI: 10.1088/1361-6560/acba74] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 02/08/2023] [Indexed: 02/10/2023]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, performance of the algorithms, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray and ultrasound images, fast MRI or low-dose CT imaging, CT- or MRI-only based radiotherapy planning, etc. Only 5 studies validated their models using an independent test set and none were externally validated by independent researchers. Finally, 12 articles published their source code and only one study published their pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists.
However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliation(s)
- Junhua Chen: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Shenlun Chen: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Leonard Wee: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Inigo Bermejo: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
17
Jiang Z, He Y, Ye S, Shao P, Zhu X, Xu Y, Chen Y, Coatrieux JL, Li S, Yang G. O2M-UDA: Unsupervised dynamic domain adaptation for one-to-multiple medical image segmentation. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
18
Qin C, Li W, Zheng B, Zeng J, Liang S, Zhang X, Zhang W. Dual adversarial models with cross-coordination consistency constraint for domain adaption in brain tumor segmentation. Front Neurosci 2023; 17:1043533. [PMID: 37123362 PMCID: PMC10133464 DOI: 10.3389/fnins.2023.1043533] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Accepted: 03/10/2023] [Indexed: 05/02/2023] Open
Abstract
The brain tumor segmentation task with different domains remains a major challenge because tumors of different grades and severities may show different distributions, limiting the ability of a single segmentation model to label such tumors. Semi-supervised models (e.g., mean teacher) are strong unsupervised domain-adaptation learners. However, one of the main drawbacks of using a mean teacher is that given a large number of iterations, the teacher model weights converge to those of the student model, and any biased and unstable predictions are carried over to the student. In this article, we proposed a novel unsupervised domain-adaptation framework for the brain tumor segmentation task, which uses dual student and adversarial training techniques to effectively tackle domain shift with MR images. In this study, the adversarial strategy and consistency constraint for each student can align the feature representation on the source and target domains. Furthermore, we introduced the cross-coordination constraint for the target domain data to constrain the models to produce more confident predictions. We validated our framework on the cross-subtype and cross-modality tasks in brain tumor segmentation and achieved better performance than the current unsupervised domain-adaptation and semi-supervised frameworks.
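The mean-teacher drawback mentioned above follows directly from the exponential-moving-average update of the teacher weights: with (near-)static student weights, the teacher converges to them geometrically. An illustrative sketch:

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """One mean-teacher step: teacher weights track the student's as an
    exponential moving average with decay alpha."""
    return alpha * teacher_w + (1.0 - alpha) * student_w
```

After n steps toward a fixed student the gap shrinks by alpha**n, so for large n the teacher mirrors the student and recycles its biased predictions; the dual-student design in this paper avoids relying on a single EMA teacher.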
Affiliation(s)
- Chuanbo Qin: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Wanying Li: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Bin Zheng: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Junying Zeng (corresponding author): Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Shufen Liang: Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, China
- Xiuping Zhang: Department of Neurosurgery, Jiangmen Central Hospital, Jiangmen, China
- Wenguang Zhang: Department of Neurosurgery, Jiangmen Central Hospital, Jiangmen, China
19
Ma W, Li X, Zou L, Fan C, Wu M. Symmetrical awareness network for cross-site ultrasound thyroid nodule segmentation. Front Public Health 2023; 11:1055815. [PMID: 36969643 PMCID: PMC10031019 DOI: 10.3389/fpubh.2023.1055815] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 02/17/2023] [Indexed: 03/29/2023] Open
Abstract
Recent years have seen remarkable progress in learning-based methods for ultrasound thyroid nodule segmentation. However, with very limited annotations, multi-site training data from different domains keep the task challenging. Due to domain shift, existing methods cannot be well generalized to out-of-set data, which limits the practical application of deep learning in the field of medical imaging. In this work, we propose an effective domain adaptation framework consisting of a bidirectional image translation module and two symmetrical image segmentation modules. The framework improves the generalization ability of deep neural networks in medical image segmentation. The image translation module performs mutual conversion between the source and target domains, while the symmetrical image segmentation modules perform segmentation tasks in both domains. Besides, we utilize an adversarial constraint to further bridge the domain gap in feature space, and a consistency loss to make training more stable and efficient. Experiments on a multi-site ultrasound thyroid nodule dataset achieve 96.22% for PA and 87.06% for DSC on average, demonstrating that our method performs competitively in cross-domain generalization with state-of-the-art segmentation methods.
Affiliation(s)
- Wenxuan Ma: Electronic Information School, Wuhan University, Wuhan, China
- Xiaopeng Li: Electronic Information School, Wuhan University, Wuhan, China
- Lian Zou: Electronic Information School, Wuhan University, Wuhan, China
- Cien Fan (corresponding author): Electronic Information School, Wuhan University, Wuhan, China
- Meng Wu: Department of Ultrasound, Zhongnan Hospital of Wuhan University, Wuhan, China
20
Dorent R, Kujawa A, Ivory M, Bakas S, Rieke N, Joutard S, Glocker B, Cardoso J, Modat M, Batmanghelich K, Belkov A, Calisto MB, Choi JW, Dawant BM, Dong H, Escalera S, Fan Y, Hansen L, Heinrich MP, Joshi S, Kashtanova V, Kim HG, Kondo S, Kruse CN, Lai-Yuen SK, Li H, Liu H, Ly B, Oguz I, Shin H, Shirokikh B, Su Z, Wang G, Wu J, Xu Y, Yao K, Zhang L, Ourselin S, Shapey J, Vercauteren T. CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation. Med Image Anal 2023; 83:102628. [PMID: 36283200 DOI: 10.1016/j.media.2022.102628] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 06/17/2022] [Accepted: 09/10/2022] [Indexed: 02/04/2023]
Abstract
Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). 
All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.
Collapse
Affiliation(s)
- Reuben Dorent
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Aaron Kujawa
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Marina Ivory
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Samuel Joutard
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Ben Glocker
- Department of Computing, Imperial College London, London, United Kingdom
- Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Arseniy Belkov
- Moscow Institute of Physics and Technology, Moscow, Russia
- Jae Won Choi
- Department of Radiology, Armed Forces Yangju Hospital, Yangju, Republic of Korea
- Hexin Dong
- Center for Data Science, Peking University, Beijing, China
- Sergio Escalera
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
- Yubo Fan
- Vanderbilt University, Nashville, USA
- Lasse Hansen
- Institute of Medical Informatics, Universität zu Lübeck, Germany
- Smriti Joshi
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
- Hyeon Gyu Kim
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Hao Li
- Vanderbilt University, Nashville, USA
- Han Liu
- Vanderbilt University, Nashville, USA
- Buntheng Ly
- Inria, Université Côte d'Azur, Sophia Antipolis, France
- Ipek Oguz
- Vanderbilt University, Nashville, USA
- Hyungseob Shin
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Boris Shirokikh
- Skolkovo Institute of Science and Technology, Moscow, Russia; Artificial Intelligence Research Institute (AIRI), Moscow, Russia
- Zixian Su
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Jianghao Wu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yanwu Xu
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
- Kai Yao
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
- Li Zhang
- Center for Data Science, Peking University, Beijing, China
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
- Jonathan Shapey
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom; Department of Neurosurgery, King's College Hospital, London, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
21
Keaton MR, Zaveri RJ, Doretto G. CellTranspose: Few-shot Domain Adaptation for Cellular Instance Segmentation. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2023; 2023:455-466. [PMID: 38170053 PMCID: PMC10760785 DOI: 10.1109/wacv56688.2023.00053] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2024]
Abstract
Automated cellular instance segmentation has been used to accelerate biological research for the past two decades, and recent advancements have produced higher-quality results with less effort from the biologist. Most current endeavors focus on removing the researcher from the loop entirely by building highly generalized models. However, these models invariably fail when faced with novel data distributed differently from the data used for training. Rather than approaching the problem with methods that presume the availability of large amounts of target data and computing power for retraining, in this work we address the even greater challenge of designing an approach that requires minimal amounts of new annotated data as well as training time. We do so by designing specialized contrastive losses that leverage the few annotated samples very efficiently. A large set of results shows that 3 to 5 annotations lead to models whose accuracy: 1) significantly mitigates the covariate shift effects; 2) matches or surpasses other adaptation methods; 3) even approaches that of methods fully retrained on the target distribution. The adaptation training takes only a few minutes, paving a path towards a balance between model performance, computing requirements and expert-level annotation needs.
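A generic supervised contrastive loss of the kind this approach builds on can be sketched as follows. The paper's specialized losses differ in detail; this minimal version only illustrates the mechanism of pulling same-class embeddings together while pushing other classes away.

```python
# Minimal supervised contrastive loss over labeled embeddings.
import numpy as np

def supervised_contrastive_loss(emb, labels, temperature=0.1):
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalise
    sim = emb @ emb.T / temperature                         # pairwise similarities
    n = len(labels)
    loss, terms = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        positives = [j for j in others if labels[j] == labels[i]]
        if not positives:
            continue
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        for j in positives:  # -log p(positive j | anchor i)
            loss += -(sim[i, j] - log_denom)
            terms += 1
    return loss / max(terms, 1)

rng = np.random.default_rng(1)
# Two tight clusters: correct labels give a low loss, shuffled labels a high one.
emb = np.vstack([rng.normal(0, 0.1, (4, 8)) + 1, rng.normal(0, 0.1, (4, 8)) - 1])
labels_good = np.array([0] * 4 + [1] * 4)
labels_bad = np.array([0, 1] * 4)
assert supervised_contrastive_loss(emb, labels_good) < supervised_contrastive_loss(emb, labels_bad)
```

With only a handful of annotated target samples, each one contributes many anchor-positive pairs, which is what makes such losses data-efficient.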
22
Deng X, Tian L, Zhang Y, Li A, Cai S, Zhou Y, Jie Y. Is histogram manipulation always beneficial when trying to improve model performance across devices? Experiments using a Meibomian gland segmentation model. Front Cell Dev Biol 2022; 10:1067914. [PMID: 36544900 PMCID: PMC9760981 DOI: 10.3389/fcell.2022.1067914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 11/14/2022] [Indexed: 12/12/2022] Open
Abstract
Meibomian gland dysfunction (MGD) is caused by abnormalities of the meibomian glands (MG) and is one of the causes of evaporative dry eye (DED). Precise MG segmentation is crucial for MGD-related DED diagnosis because the morphological parameters of MG are of importance. Deep learning has achieved state-of-the-art performance in medical image segmentation tasks, especially when training and test data come from the same distribution. In practice, however, MG images can be acquired from different devices or hospitals, and deep learning models trained on one distribution are prone to poor performance when tested on data from another. Histogram specification (HS) has been reported as an effective method for contrast enhancement and for improving model performance on images of different modalities. Additionally, contrast limited adaptive histogram equalization (CLAHE) was used as a preprocessing method to enhance the contrast of MG images. In this study, we developed and evaluated a CNN-based method for automatic segmentation of the eyelid and MG areas and automatically calculated the MG loss rate. The method was evaluated on internal and external testing sets from two meibography devices. In addition, to assess whether HS and CLAHE improve segmentation results, we trained the network model using images from one device (internal testing set) and tested it on images from another device (external testing set). A high DSC (0.84 for the MG region, 0.92 for the eyelid region) was obtained on the internal testing set, while a lower DSC (0.69-0.71 for the MG region, 0.89-0.91 for the eyelid region) was obtained on the external testing set. HS and CLAHE yielded no statistically significant improvement in MG segmentation results in this experiment.
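Both preprocessing steps compared in the study are available in scikit-image. The sketch below applies histogram specification (HS) and CLAHE to synthetic stand-in images; the study used meibography scans, and the clip limit here is illustrative.

```python
# HS maps an image acquired on one device onto the intensity distribution of
# a reference image from another device; CLAHE enhances local contrast.
import numpy as np
from skimage import exposure

rng = np.random.default_rng(0)
internal = rng.beta(2, 5, size=(64, 64))   # stand-in: training-device image
external = rng.beta(5, 2, size=(64, 64))   # stand-in: other-device image

hs_matched = exposure.match_histograms(external, internal)        # HS
clahe_img = exposure.equalize_adapthist(external, clip_limit=0.02)  # CLAHE

assert hs_matched.shape == external.shape
```

After matching, the external image's intensity statistics closely track the internal reference, which is the mechanism HS relies on to reduce the device gap.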
Affiliation(s)
- Xianyu Deng
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
- Lei Tian
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China; Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Yinghuai Zhang
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
- Ao Li
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China; Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
- Shangyu Cai
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
- Yongjin Zhou (corresponding author)
- Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen, China
- Ying Jie (corresponding author)
- Beijing Ophthalmology and Visual Sciences Key Laboratory, Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing Institute of Ophthalmology, Capital Medical University, Beijing, China; Ophthalmology and Visual Sciences Key Laboratory, Beijing, China
23
Uncertainty-aware deep co-training for semi-supervised medical image segmentation. Comput Biol Med 2022; 149:106051. [DOI: 10.1016/j.compbiomed.2022.106051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 07/27/2022] [Accepted: 08/20/2022] [Indexed: 11/18/2022]
24
Jafari M, Francis S, Garibaldi JM, Chen X. LMISA: A lightweight multi-modality image segmentation network via domain adaptation using gradient magnitude and shape constraint. Med Image Anal 2022; 81:102536. [PMID: 35870297 DOI: 10.1016/j.media.2022.102536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 04/26/2022] [Accepted: 07/11/2022] [Indexed: 11/20/2022]
Abstract
In medical image segmentation, supervised machine learning models trained using one image modality (e.g. computed tomography (CT)) are often prone to failure when applied to another image modality (e.g. magnetic resonance imaging (MRI)), even for the same organ, due to the significant intensity variations across image modalities. In this paper, we propose a novel end-to-end deep neural network to achieve multi-modality image segmentation, where image labels of only one modality (source domain) are available for model training and image labels for the other modality (target domain) are not. In our method, a multi-resolution locally normalized gradient magnitude approach is first applied to images of both domains to minimize the intensity discrepancy. Subsequently, a dual-task encoder-decoder network including image segmentation and reconstruction is utilized to effectively adapt a segmentation network to the unlabeled target domain. Additionally, a shape constraint is imposed by leveraging adversarial learning. Finally, images from the target domain are segmented, as the network learns a consistent latent feature representation with shape awareness from both domains. We implement both 2D and 3D versions of our method, in which we evaluate CT and MRI images for kidney and cardiac tissue segmentation. For the kidney, a public CT dataset (KiTS19, MICCAI 2019) and a local MRI dataset were utilized. The cardiac dataset was from the Multi-Modality Whole Heart Segmentation (MMWHS) challenge 2017. Experimental results reveal that our proposed method achieves significantly higher performance with much lower model complexity than other state-of-the-art methods. More importantly, our method is also capable of producing superior segmentation results compared with other methods for images of an unseen target domain, without model retraining.
The code is available at GitHub (https://github.com/MinaJf/LMISA) to encourage method comparison and further research.
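The intensity-invariant input described above can be illustrated with a single-scale sketch; the paper's multi-resolution formulation may differ, and the normalisation window sigma is an assumed value. The gradient magnitude is divided by its local average, so two modalities with very different intensity profiles yield similar edge maps.

```python
# Locally normalised gradient magnitude: a modality-robust edge representation.
import numpy as np
from scipy.ndimage import gaussian_filter

def local_norm_gradmag(img, sigma=4.0, eps=1e-8):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                        # edge strength
    return mag / (gaussian_filter(mag, sigma) + eps)  # divide by local scale

rng = np.random.default_rng(0)
base = np.zeros((64, 64))
base[16:48, 16:48] = 1.0                          # the same "anatomy"...
ct_like = 50 + 200 * base + rng.normal(0, 1, base.shape)   # ...in two synthetic
mr_like = 900 - 400 * base + rng.normal(0, 2, base.shape)  # "modalities"

a = local_norm_gradmag(ct_like)
b = local_norm_gradmag(mr_like)
edge_corr = np.corrcoef(a.ravel(), b.ravel())[0, 1]
assert edge_corr > 0.5   # normalised edge maps agree despite opposite contrast
```

The raw images are anti-correlated (inverted contrast), yet their normalised edge maps are strongly correlated, which is why such inputs reduce the modality gap before segmentation.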
Affiliation(s)
- Mina Jafari
- Intelligent Modeling and Analysis Group, School of Computer Science, University of Nottingham, UK
- Susan Francis
- The Sir Peter Mansfield Imaging Centre, University of Nottingham, UK
- Jonathan M Garibaldi
- Intelligent Modeling and Analysis Group, School of Computer Science, University of Nottingham, UK
- Xin Chen
- Intelligent Modeling and Analysis Group, School of Computer Science, University of Nottingham, UK
25
Rasheed K, Qayyum A, Ghaly M, Al-Fuqaha A, Razi A, Qadir J. Explainable, trustworthy, and ethical machine learning for healthcare: A survey. Comput Biol Med 2022; 149:106043. [PMID: 36115302 DOI: 10.1016/j.compbiomed.2022.106043] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 08/15/2022] [Accepted: 08/20/2022] [Indexed: 12/18/2022]
Abstract
With the advent of machine learning (ML) and deep learning (DL) empowered applications in critical domains like healthcare, questions about the liability, trust, and interpretability of their outputs are arising. The black-box nature of various DL models is a roadblock to clinical utilization. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of models. With the promise of enhancing the trust and transparency of black-box models, researchers are maturing the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL in healthcare, and describe how explainable and trustworthy ML can help resolve these ethical problems. Finally, we elaborate on the limitations of existing approaches and highlight various open research problems that require further development.
Affiliation(s)
- Khansa Rasheed
- IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan
- Adnan Qayyum
- IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan
- Mohammed Ghaly
- Research Center for Islamic Legislation and Ethics (CILE), College of Islamic Studies, Hamad Bin Khalifa University (HBKU), Doha, Qatar
- Ala Al-Fuqaha
- Information and Computing Technology Division, College of Science and Engineering, Hamad Bin Khalifa University (HBKU), Doha, Qatar
- Adeel Razi
- Turner Institute for Brain and Mental Health, Monash University, Clayton, Australia; Monash Biomedical Imaging, Monash University, Clayton, Australia; Wellcome Centre for Human Neuroimaging, UCL, London, United Kingdom; CIFAR Azrieli Global Scholars program, CIFAR, Toronto, Canada
- Junaid Qadir
- Department of Computer Science and Engineering, College of Engineering, Qatar University, Doha, Qatar
26
Saat P, Nogovitsyn N, Hassan MY, Ganaie MA, Souza R, Hemmati H. A domain adaptation benchmark for T1-weighted brain magnetic resonance image segmentation. Front Neuroinform 2022; 16:919779. [PMID: 36213544 PMCID: PMC9538795 DOI: 10.3389/fninf.2022.919779] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 08/29/2022] [Indexed: 01/18/2023] Open
Abstract
Accurate brain segmentation is critical for magnetic resonance imaging (MRI) analysis pipelines. Machine-learning-based brain MR image segmentation methods are among the state-of-the-art techniques for this task. Nevertheless, the segmentations produced by machine learning models often degrade in the presence of domain shifts between the training and test set data distributions. These domain shifts are expected due to several factors, such as scanner hardware and software differences, technology updates, and differences in MRI acquisition parameters. Domain adaptation (DA) methods can make machine learning models more resilient to these domain shifts. This paper proposes a benchmark for investigating DA techniques for brain MR image segmentation using data collected across sites with scanners from different vendors (Philips, Siemens, and General Electric). Our work provides labeled data, publicly available source code for a set of baseline and DA models, and a benchmark for assessing different brain MR image segmentation techniques. We applied the proposed benchmark to evaluate two segmentation tasks: skull-stripping, and white-matter, gray-matter, and cerebrospinal fluid segmentation; the benchmark can be extended to other brain structures. Our main findings during the development of this benchmark are that no single DA technique consistently outperforms the others, and that hyperparameter tuning and computational times for these methods still pose a challenge to their broader adoption in clinical practice.
Affiliation(s)
- Parisa Saat
- Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Nikita Nogovitsyn
- Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada
- Mood Disorders Program, Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
- Muhammad Yusuf Hassan
- Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Electrical Engineering, Indian Institute of Technology, Gandhinagar, Gujarat, India
- Muhammad Athar Ganaie
- Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Chemical Engineering, Indian Institute of Technology, Kharagpur, West Bengal, India
- Roberto Souza
- Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Hadi Hemmati
- Electrical and Software Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada
- Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, Toronto, ON, Canada
27
Hong J, Zhang YD, Chen W. Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109155] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
28
A multimodal domain adaptive segmentation framework for IDH genotype prediction. Int J Comput Assist Radiol Surg 2022; 17:1923-1931. [PMID: 35794409 DOI: 10.1007/s11548-022-02700-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 06/05/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE: The gene mutation status of isocitrate dehydrogenase (IDH) in gliomas leads to different prognoses. It is challenging to perform automated tumor segmentation and genotype prediction directly on label-deprived multimodal magnetic resonance (MR) images. We propose a novel framework that employs a domain adaptive mechanism to address this issue.
METHODS: A multimodal domain adaptive segmentation (MDAS) framework was proposed to address the domain gap in cross-dataset model transfer. Image translation was used to adaptively align the multimodal data from the two domains at the image level, and a segmentation consistency loss was proposed to retain more pathological information through semantic constraints. The data distribution between the labeled public dataset and the label-free target dataset was learned to achieve better unsupervised segmentation results on the target dataset. The segmented tumor foci were then used as a mask to extract radiomics and deep features, and the subsequent prediction of IDH gene mutation status was conducted by training a random forest classifier. The prediction model does not need any expert-segmented labels.
RESULTS: We implemented our method on the public BraTS 2019 dataset and 110 astrocytoma cases of grade II-IV brain tumors from our hospital. We obtained a Dice score of 77.41% for unsupervised tumor segmentation, a genotype prediction accuracy (ACC) of 0.7639 and an area under the curve (AUC) of 0.8600. Experimental results demonstrate that our domain adaptive approach outperforms methods utilizing direct transfer learning. The model using hybrid features gives better results than the model using radiomics or deep features alone.
CONCLUSIONS: Domain adaptation enables the segmentation network to achieve better performance, and the extraction of mixed features at multiple levels from the segmented region of interest ensures effective prediction of the IDH gene mutation status.
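The genotype-prediction stage follows a standard scikit-learn pattern: radiomics and deep features from the segmented region are concatenated into a hybrid vector and fed to a random forest. The features below are synthetic stand-ins; the actual feature extractors are outside this sketch.

```python
# Hybrid-feature random forest for a binary genotype label (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                 # stand-in IDH mutant vs. wild-type

radiomics = rng.normal(0, 1, (n, 10))     # stand-in radiomics features
radiomics[:, 0] += 1.5 * y                # one informative radiomics feature
deep = rng.normal(0, 1, (n, 32))          # stand-in deep features
deep[:, 0] += 1.5 * y                     # one informative deep feature

hybrid = np.hstack([radiomics, deep])     # "hybrid" feature vector

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, hybrid, y, cv=5).mean()
assert acc > 0.6                          # informative features beat chance
```

Because each feature family carries partial information, concatenation lets the forest exploit both, mirroring the paper's finding that hybrid features outperform either set alone.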
29
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap that occurred with new deep learning techniques such as convolutional neural networks for images, increased computational power, and wider availability of large datasets. Most fields of medicine follow that popular trend and, notably, radiation oncology is at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions, namely the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which diminishes as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows as well as vendors' perspectives for the clinical implementation of ML.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Adrien Bibal
- PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
- Margerie Huet Dastarac
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Camille Draguet
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
- Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
30
Wei W, Tao H, Chen W, Wu X. Automatic recognition of micronucleus by combining attention mechanism and AlexNet. BMC Med Inform Decis Mak 2022; 22:138. [PMID: 35585543 PMCID: PMC9116712 DOI: 10.1186/s12911-022-01875-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 05/05/2022] [Indexed: 11/19/2022] Open
Abstract
Background: Micronucleus (MN) is an abnormal fragment in a human cell caused by disorders in the mechanism regulating chromosome segregation. It can be used as a biomarker for genotoxicity, tumor risk, and tumor malignancy. The in vitro micronucleus assay is a commonly used method to detect micronuclei, but it is time-consuming and the visual scoring can be inconsistent.
Methods: To alleviate this issue, we proposed a computer-aided diagnosis method combining convolutional neural networks and visual attention for micronucleus recognition. The backbone of our model is AlexNet without any dense layers, pretrained on the ImageNet dataset. Two attention modules are applied to extract cell image features and generate attention maps highlighting the regions of interest, improving the interpretability of the network. To mitigate issues in the dataset such as class imbalance, we leverage data augmentation and focal loss.
Results: Experiments show that the proposed network yields better performance with fewer parameters. The AP, F1 and AUC values reach 0.932, 0.811 and 0.995, respectively.
Conclusion: The proposed network can effectively recognize micronuclei and can play an auxiliary role in clinical diagnosis by doctors.
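Focal loss, which the authors pair with data augmentation, down-weights well-classified examples so training focuses on hard cases. A minimal binary-classification sketch follows; the gamma and alpha values are illustrative, not necessarily the paper's.

```python
# Binary focal loss: (1 - p_t)^gamma shrinks the contribution of easy examples.
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    w = np.where(y == 1, alpha, 1 - alpha)   # class weighting
    return float(np.mean(-w * (1 - pt) ** gamma * np.log(pt)))

y = np.array([1, 1, 0, 0])
easy = np.array([0.95, 0.90, 0.10, 0.05])    # confident, mostly correct
hard = np.array([0.55, 0.60, 0.45, 0.40])    # uncertain predictions
assert focal_loss(easy, y) < focal_loss(hard, y)
```

With gamma set to 0 the focal term vanishes and the expression reduces to a class-weighted cross-entropy, which is the design knob the loss exposes.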
Affiliation(s)
- Weiyi Wei
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Hong Tao
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Wenxia Chen
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, China
- Xiaoqin Wu
- Radiology Department, Gansu Provincial Center For Disease Control And Prevention, Lanzhou, China
31
Du Y, Zhang R, Zhang X, Yao Y, Lu H, Wang C. Learning transferable and discriminative features for unsupervised domain adaptation. INTELL DATA ANAL 2022. [DOI: 10.3233/ida-215813] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Despite remarkable progress in machine learning, it is very difficult to induce a supervised classifier without any labeled data. Unsupervised domain adaptation can overcome this challenge by transferring knowledge from a labeled source domain to an unlabeled target domain. Transferability and discriminability are two key criteria characterizing the superiority of feature representations for successful domain adaptation. In this paper, a novel method called learning TransFerable and Discriminative Features for unsupervised domain adaptation (TFDF) is proposed to optimize these two objectives simultaneously. On the one hand, distribution alignment is performed to reduce domain discrepancy and learn more transferable representations. Instead of adopting Maximum Mean Discrepancy (MMD), which captures only first-order statistical information to measure distribution discrepancy, we adopt a recently proposed statistic called Maximum Mean and Covariance Discrepancy (MMCD), which captures both first-order and second-order statistical information in the reproducing kernel Hilbert space (RKHS). On the other hand, we propose to explore local discriminative information via manifold regularization and global discriminative information via minimizing the proposed class confusion objective, to learn more discriminative features. We integrate these two objectives into the Structural Risk Minimization (SRM) framework and learn a domain-invariant classifier. Comprehensive experiments conducted on five real-world datasets verify the effectiveness of the proposed method.
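The MMCD idea, comparing first-order (mean) and second-order (covariance) statistics between domains, can be illustrated with a linear-kernel sketch. The published statistic is defined in an RKHS; this input-space version is only a simplified analogue.

```python
# Linear-space analogue of MMCD: squared mean difference plus squared
# Frobenius distance between the two domains' covariance matrices.
import numpy as np

def mmcd_linear(Xs, Xt):
    mean_term = float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))
    cov_term = float(np.sum((np.cov(Xs, rowvar=False) - np.cov(Xt, rowvar=False)) ** 2))
    return mean_term + cov_term

rng = np.random.default_rng(0)
src = rng.normal(0, 1, (500, 4))
tgt_same = rng.normal(0, 1, (500, 4))       # same distribution as the source
tgt_shift = rng.normal(1.0, 2.0, (500, 4))  # mean AND scale both shifted
assert mmcd_linear(src, tgt_shift) > mmcd_linear(src, tgt_same)
```

A pure-MMD analogue (the mean term alone) would be blind to a scale-only shift, which is exactly the gap the covariance term closes.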
Affiliation(s)
- Yuntao Du
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China
- Ruiting Zhang
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China
- Xiaowen Zhang
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China
- Yirong Yao
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China
- Hengyang Lu
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China
- Chongjun Wang
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China
32
Hong J, Yu SCH, Chen W. Unsupervised domain adaptation for cross-modality liver segmentation via joint adversarial learning and self-learning. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108729] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
33
Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances of domain adaptation methods in medical image analysis. We first present the motivation of introducing domain adaptation techniques to tackle domain heterogeneity issues for medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this energetic research field.
|
34
|
Swain S, Bhushan B, Dhiman G, Viriyasitavat W. Appositeness of Optimized and Reliable Machine Learning for Healthcare: A Survey. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2022; 29:3981-4003. [PMID: 35342282 PMCID: PMC8939887 DOI: 10.1007/s11831-022-09733-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Accepted: 02/09/2022] [Indexed: 05/04/2023]
Abstract
Machine Learning (ML) is a branch of Artificial Intelligence (AI) within computer science in which programmable machines imitate human learning behavior with the help of statistical methods and data. The healthcare industry is one of the largest and busiest sectors in the world, functioning with an extensive amount of manual moderation at every stage. Most clinical documents concerning patient care are hand-written by experts, and only selective reports are machine-generated. This process elevates the chances of misdiagnosis, thereby imposing a risk to patients' lives. Recent technological adoption for automating manual operations has made extensive use of ML. This paper surveys the applicability of ML approaches in automating medical systems and discusses optimized statistical ML frameworks that encourage better service delivery in clinical settings. The universal adoption of various Deep Learning (DL) and ML techniques as the underlying systems for a variety of wellness applications is shaped by both technical challenges and security concerns. This work identifies a variety of vulnerabilities in medical procurement and addresses concerns over predictive performance from a privacy point of view, finally providing possible risk-limiting measures and directions for active challenges in the future.
Affiliation(s)
- Subhasmita Swain
- Department of Computer Science and Engineering, School of Engineering and Technology, Sharda University, Greater Noida, India
| | - Bharat Bhushan
- Department of Computer Science and Engineering, School of Engineering and Technology, Sharda University, Greater Noida, India
| | - Gaurav Dhiman
- Department of Computer Science, Government Bikram College of Commerce, Patiala, India
- University Centre for Research and Development, Department of Computer Science and Engineering, Chandigarh University, Gharuan, Mohali, India
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
| | - Wattana Viriyasitavat
- Department of Statistics, Faculty of Commerce and Accountancy, Chulalongkorn Business School, Bangkok, Thailand
|
35
|
Chen J, Yang G, Khan H, Zhang H, Zhang Y, Zhao S, Mohiaddin R, Wong T, Firmin D, Keegan J. JAS-GAN: Generative Adversarial Network Based Joint Atrium and Scar Segmentations on Unbalanced Atrial Targets. IEEE J Biomed Health Inform 2022; 26:103-114. [PMID: 33945491 DOI: 10.1109/jbhi.2021.3077469] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Automated and accurate segmentation of the left atrium (LA) and atrial scars from late gadolinium-enhanced cardiac magnetic resonance (LGE CMR) images is in high demand for quantifying atrial scars. Previous quantification of atrial scars relied on a two-phase segmentation of the LA and atrial scars because of their large volume difference (unbalanced atrial targets). In this paper, we propose an inter-cascade generative adversarial network, JAS-GAN, to segment the unbalanced atrial targets from LGE CMR images automatically, accurately, and in an end-to-end way. First, JAS-GAN uses an adaptive attention cascade to automatically correlate the segmentation tasks of the unbalanced atrial targets. The cascade models the inclusion relationship between the two targets, with the estimated LA acting as an attention map that adaptively focuses the network on the small atrial scars. Then, an adversarial regularization is applied to the two segmentation tasks to enforce consistent optimization: it forces the estimated joint distribution of the LA and atrial scars to match the real distribution. We evaluated the performance of JAS-GAN on a 3D LGE CMR dataset with 192 scans. Compared with state-of-the-art methods, our approach yielded better segmentation performance (average Dice Similarity Coefficient (DSC) values of 0.946 and 0.821 for the LA and atrial scars, respectively), indicating its effectiveness for segmenting unbalanced atrial targets.
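The Dice Similarity Coefficient (DSC) reported above has a standard definition, 2|P ∩ T| / (|P| + |T|). A minimal sketch for flat binary masks (the function name and the both-empty convention are ours, not the paper's):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks given as flat
    0/1 sequences of equal length. Returns 1.0 when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, truth))   # |P ∩ T|
    total = sum(pred) + sum(truth)                    # |P| + |T|
    return 1.0 if total == 0 else 2.0 * inter / total
```

For 3D volumes the same formula applies after flattening the voxel grid.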
|
36
|
Avetisian M, Burenko I, Egorov K, Kokh V, Nesterov A, Nikolaev A, Ponomarchuk A, Sokolova E, Tuzhilin A, Umerenkov D. CoRSAI: A System for Robust Interpretation of CT Scans of COVID-19 Patients Using Deep Learning. ACM TRANSACTIONS ON MANAGEMENT INFORMATION SYSTEMS 2021. [DOI: 10.1145/3467471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Analysis of chest CT scans can be used to detect the parts of the lungs affected by infectious diseases such as COVID-19. Determining the volume of lung tissue affected by lesions is essential for formulating treatment recommendations and prioritizing patients by disease severity. In this article we adopted an approach based on an ensemble of deep convolutional neural networks for segmenting slices of lung CT scans. Using our models, we are able to segment the lesions, evaluate patients' dynamics, estimate the relative volume of lungs affected by lesions, and evaluate the lung damage stage. Our models were trained on data from different medical centers. We compared predictions of our models with those of six experienced radiologists, and our segmentation model outperformed most of them. On the task of classifying disease severity, our model outperformed all the radiologists.
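The ensemble-plus-volume-estimation pipeline described above can be illustrated with a small sketch. The paper does not specify its fusion rule, so pixel-wise majority voting is used here as one common choice, and all names are illustrative:

```python
def ensemble_vote(masks):
    """Fuse binary masks from several models by strict pixel-wise majority vote.
    `masks` is a list of equal-length flat 0/1 sequences, one per model."""
    k = len(masks)
    return [1 if sum(col) * 2 > k else 0 for col in zip(*masks)]

def lesion_volume_fraction(lesion_mask, lung_mask):
    """Fraction of lung pixels covered by the fused lesion mask: a simple
    proxy for 'relative volume of lungs affected by lesions'."""
    lung = sum(lung_mask)
    hit = sum(l and g for l, g in zip(lesion_mask, lung_mask))
    return 0.0 if lung == 0 else hit / lung
```

Averaging per-pixel probabilities before thresholding is an equally valid fusion alternative when the models output soft scores.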
Affiliation(s)
| | - Aleksandr Nikolaev
- Research and Practical Clinical Center for Diagnostics and Telemedicine Technologies, Russia
| | - Alex Tuzhilin
- Sberbank AI Laboratory and New York University, New York, USA
|
37
|
Du X, Liu Y. Constraint-based Unsupervised Domain Adaptation network for Multi-Modality Cardiac Image Segmentation. IEEE J Biomed Health Inform 2021; 26:67-78. [PMID: 34757915 DOI: 10.1109/jbhi.2021.3126874] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Cardiac CT and MRI images depict the various structures of the heart and are very valuable for analyzing heart function. However, differences in cardiac shape across images and in imaging techniques make automatic segmentation challenging. To address this challenge, we propose a new constraint-based unsupervised domain adaptation network. The network first performs mutual translation of images between domains, which provides training data for the segmentation model and ensures domain invariance at the image level. Then, we feed the target domain into the source-domain segmentation model to obtain pseudo-labels and introduce cross-domain self-supervised learning between the two segmentation models, with a new loss function designed to ensure the accuracy of the pseudo-labels. A cross-domain consistency loss is also introduced. Finally, we construct a multi-level aggregation segmentation network to obtain more refined target-domain information. We validate our method on the public whole-heart image segmentation challenge dataset, obtaining a Dice score of 82.9% and an average symmetric surface distance (ASSD) of 5.5. These results show that our method can provide important assistance in the clinical evaluation of unannotated cardiac datasets.
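One common way to keep pseudo-labels accurate is confidence thresholding. The paper designs its own loss for this purpose, so the sketch below is only a generic illustration of the idea, with hypothetical names:

```python
def filter_pseudo_labels(prob_maps, threshold=0.9):
    """Assign each target-domain pixel the class predicted by the source
    model, but only where the maximum class probability reaches the
    threshold; low-confidence pixels get -1, to be ignored by the loss.
    `prob_maps` is a list of per-pixel class-probability lists."""
    labels = []
    for probs in prob_maps:
        conf = max(probs)
        labels.append(probs.index(conf) if conf >= threshold else -1)
    return labels
```

Training then computes the self-supervised loss only over pixels whose pseudo-label is not -1.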
|
38
|
Li K, Wang S, Yu L, Heng PA. Dual-Teacher++: Exploiting Intra-Domain and Inter-Domain Knowledge With Reliable Transfer for Cardiac Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2771-2782. [PMID: 33201808 DOI: 10.1109/tmi.2020.3038828] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Annotation scarcity is a long-standing problem in medical image analysis. To efficiently leverage limited annotations, abundant unlabeled data are additionally exploited in semi-supervised learning, while well-established cross-modality data are investigated in domain adaptation. In this paper, we explore the feasibility of concurrently leveraging both unlabeled data and cross-modality data for annotation-efficient cardiac segmentation. To this end, we propose a semi-supervised domain adaptation framework, Dual-Teacher++. Besides directly learning from limited labeled target-domain data (e.g., CT) via a student model, as in previous literature, we design novel dual teacher models: an inter-domain teacher model that explores cross-modality priors from the source domain (e.g., MR) and an intra-domain teacher model that investigates the knowledge beneath the unlabeled target domain. The dual teacher models transfer the acquired inter- and intra-domain knowledge to the student model for further integration and exploitation. Moreover, to encourage reliable dual-domain knowledge transfer, we enhance inter-domain knowledge transfer on samples with higher similarity to the target domain after appearance alignment, and strengthen intra-domain knowledge transfer on unlabeled target data with higher prediction confidence. In this way, the student model obtains reliable dual-domain knowledge and yields improved performance on target-domain data. We extensively evaluated our method on the MM-WHS 2017 challenge dataset. The experiments demonstrate the superiority of our framework over other semi-supervised learning and domain adaptation methods. Moreover, performance gains are obtained in both directions, i.e., adapting from MR to CT and from CT to MR. Our code will be available at https://github.com/kli-lalala/Dual-Teacher-.
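The confidence-weighted intra-domain transfer can be sketched as a teacher-student consistency loss in which each sample is weighted by the teacher's confidence. This is an illustrative simplification of the framework, not its implementation; the names are ours:

```python
def weighted_consistency_loss(student_probs, teacher_probs, weights):
    """Confidence-weighted mean squared error between student and teacher
    predictions. `weights` (0..1) encode teacher confidence per sample, so
    samples the teacher is unsure about contribute less to the transfer."""
    num = sum(w * (s - t) ** 2
              for s, t, w in zip(student_probs, teacher_probs, weights))
    den = sum(weights)
    return 0.0 if den == 0 else num / den
```

In the full framework an analogous weighted term handles the inter-domain teacher, with weights derived from similarity to the target domain after appearance alignment.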
|
39
|
Cohen-Adad J, Alonso-Ortiz E, Abramovic M, Arneitz C, Atcheson N, Barlow L, Barry RL, Barth M, Battiston M, Büchel C, Budde M, Callot V, Combes AJE, De Leener B, Descoteaux M, de Sousa PL, Dostál M, Doyon J, Dvorak A, Eippert F, Epperson KR, Epperson KS, Freund P, Finsterbusch J, Foias A, Fratini M, Fukunaga I, Wheeler-Kingshott CAMG, Germani G, Gilbert G, Giove F, Gros C, Grussu F, Hagiwara A, Henry PG, Horák T, Hori M, Joers J, Kamiya K, Karbasforoushan H, Keřkovský M, Khatibi A, Kim JW, Kinany N, Kitzler H, Kolind S, Kong Y, Kudlička P, Kuntke P, Kurniawan ND, Kusmia S, Labounek R, Laganà MM, Laule C, Law CS, Lenglet C, Leutritz T, Liu Y, Llufriu S, Mackey S, Martinez-Heras E, Mattera L, Nestrasil I, O'Grady KP, Papinutto N, Papp D, Pareto D, Parrish TB, Pichiecchio A, Prados F, Rovira À, Ruitenberg MJ, Samson RS, Savini G, Seif M, Seifert AC, Smith AK, Smith SA, Smith ZA, Solana E, Suzuki Y, Tackley G, Tinnermann A, Valošek J, Van De Ville D, Yiannakas MC, Weber KA, Weiskopf N, Wise RG, Wyss PO, Xu J. Generic acquisition protocol for quantitative MRI of the spinal cord. Nat Protoc 2021; 16:4611-4632. [PMID: 34400839 PMCID: PMC8811488 DOI: 10.1038/s41596-021-00588-0] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Accepted: 06/10/2021] [Indexed: 02/08/2023]
Abstract
Quantitative spinal cord (SC) magnetic resonance imaging (MRI) presents many challenges, including a lack of standardized imaging protocols. Here we present a prospectively harmonized quantitative MRI protocol, which we refer to as the spine generic protocol, for users of 3T MRI systems from the three main manufacturers: GE, Philips and Siemens. The protocol provides guidance for assessing SC macrostructural and microstructural integrity: T1-weighted and T2-weighted imaging for SC cross-sectional area computation, multi-echo gradient echo for gray matter cross-sectional area, and magnetization transfer and diffusion weighted imaging for assessing white matter microstructure. In a companion paper from the same authors, the spine generic protocol was used to acquire data across 42 centers in 260 healthy subjects. The key details of the spine generic protocol are also available in an open-access document that can be found at https://github.com/spine-generic/protocols . The protocol will serve as a starting point for researchers and clinicians implementing new SC imaging initiatives so that, in the future, inclusion of the SC in neuroimaging protocols will be more common. The protocol could be implemented by any trained MR technician or by a researcher/clinician familiar with MRI acquisition.
Affiliation(s)
- Julien Cohen-Adad
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, Quebec, Canada.
- Functional Neuroimaging Unit, CRIUGM, University of Montreal, Montreal, Quebec, Canada.
- Mila-Quebec AI Institute, Montreal, Quebec, Canada.
| | - Eva Alonso-Ortiz
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, Quebec, Canada
| | - Mihael Abramovic
- Department of Radiology, Swiss Paraplegic Centre, Nottwil, Switzerland
| | - Carina Arneitz
- Department of Radiology, Swiss Paraplegic Centre, Nottwil, Switzerland
| | - Nicole Atcheson
- Centre for Advanced Imaging, The University of Queensland, Brisbane, Queensland, Australia
| | - Laura Barlow
- Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
| | - Robert L Barry
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-Massachusetts Institute of Technology Health Sciences & Technology, Cambridge, MA, USA
| | - Markus Barth
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Queensland, Australia
| | - Marco Battiston
- NMR Research Unit, Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK
| | - Christian Büchel
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Matthew Budde
- Department of Neurosurgery, Medical College of Wisconsin, Milwaukee, WI, USA
| | - Virginie Callot
- Aix-Marseille Univ, CNRS, CRMBM, Marseille, France
- APHM, Hopital Universitaire Timone, CEMEREM, Marseille, France
| | - Anna J E Combes
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Benjamin De Leener
- Department of Computer and Software Engineering, Polytechnique Montreal, Montreal, Quebec, Canada
- CHU Sainte-Justine Research Centre, Montreal, Quebec, Canada
| | - Maxime Descoteaux
- Centre de Recherche CHUS, CIMS, Sherbrooke, Quebec, Canada
- Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
| | | | - Marek Dostál
- UHB - University Hospital Brno and Masaryk University, Department of Radiology and Nuclear Medicine, Brno, Czech Republic
| | - Julien Doyon
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
| | - Adam Dvorak
- Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia, Canada
| | - Falk Eippert
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Karla R Epperson
- Richard M. Lucas Center, Stanford University School of Medicine, Stanford, CA, USA
| | - Kevin S Epperson
- Richard M. Lucas Center, Stanford University School of Medicine, Stanford, CA, USA
| | - Patrick Freund
- Spinal Cord Injury Center Balgrist, University of Zurich, Zurich, Switzerland
| | - Jürgen Finsterbusch
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Alexandru Foias
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, Quebec, Canada
| | - Michela Fratini
- Institute of Nanotechnology, CNR, Rome, Italy
- IRCCS Santa Lucia Foundation, Rome, Italy
| | - Issei Fukunaga
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
| | - Claudia A M Gandini Wheeler-Kingshott
- NMR Research Unit, Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK
- Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
- Brain MRI 3T Research Centre, IRCCS Mondino Foundation, Pavia, Italy
| | - Giancarlo Germani
- Brain MRI 3T Research Centre, IRCCS Mondino Foundation, Pavia, Italy
| | | | - Federico Giove
- IRCCS Santa Lucia Foundation, Rome, Italy
- CREF - Museo storico della fisica e Centro studi e ricerche Enrico Fermi, Rome, Italy
| | - Charley Gros
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, Quebec, Canada
- Centre for Advanced Imaging, The University of Queensland, Brisbane, Queensland, Australia
| | - Francesco Grussu
- NMR Research Unit, Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK
- Radiomics Group, Vall d'Hebron Institute of Oncology, Vall d'Hebron Barcelona Hospital Campus, Barcelona, Spain
| | - Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
| | - Pierre-Gilles Henry
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA
| | - Tomáš Horák
- Multimodal and functional imaging laboratory, Central European Institute of Technology (CEITEC), Brno, Czech Republic
| | - Masaaki Hori
- Department of Radiology, Toho University Omori Medical Center, Tokyo, Japan
| | - James Joers
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA
| | - Kouhei Kamiya
- Department of Radiology, the University of Tokyo, Tokyo, Japan
| | - Haleh Karbasforoushan
- Interdepartmental Neuroscience Program, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Department of Psychiatry and Behavioral Sciences, School of Medicine, Stanford University, Stanford, CA, USA
| | - Miloš Keřkovský
- UHB - University Hospital Brno and Masaryk University, Department of Radiology and Nuclear Medicine, Brno, Czech Republic
| | - Ali Khatibi
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Centre of Precision Rehabilitation for Spinal Pain (CPR Spine), School of Sport, Exercise and Rehabilitation Sciences, College of Life and Environmental Sciences, University of Birmingham, Edgbaston, Birmingham, UK
| | - Joo-Won Kim
- BioMedical Engineering and Imaging Institute (BMEII), Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Nawal Kinany
- Institute of Bioengineering/Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland
| | - Hagen Kitzler
- Institute of Diagnostic and Interventional Neuroradiology, Carl Gustav Carus University Hospital, Technische Universität Dresden, Dresden, Germany
| | - Shannon Kolind
- Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Medicine (Neurology), University of British Columbia, Vancouver, British Columbia, Canada
| | - Yazhuo Kong
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wellcome Centre For Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
| | - Petr Kudlička
- Multimodal and functional imaging laboratory, Central European Institute of Technology (CEITEC), Brno, Czech Republic
| | - Paul Kuntke
- Institute of Diagnostic and Interventional Neuroradiology, Carl Gustav Carus University Hospital, Technische Universität Dresden, Dresden, Germany
| | - Nyoman D Kurniawan
- Centre for Advanced Imaging, The University of Queensland, Brisbane, Queensland, Australia
| | - Slawomir Kusmia
- CUBRIC, Cardiff University, Wales, UK
- Centre for Medical Image Computing (CMIC), Medical Physics and Biomedical Engineering Department, University College London, London, UK
- Epilepsy Society MRI Unit, Chalfont St Peter, UK
| | - René Labounek
- Division of Clinical Behavioral Neuroscience, Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
- Departments of Neurology and Biomedical Engineering, University Hospital Olomouc, Olomouc, Czech Republic
| | | | - Cornelia Laule
- Departments of Radiology, Pathology & Laboratory Medicine, Physics & Astronomy; International Collaboration on Repair Discoveries (ICORD), University of British Columbia, Vancouver, British Columbia, Canada
| | - Christine S Law
- Division of Pain Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA
| | - Christophe Lenglet
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA
| | - Tobias Leutritz
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Yaou Liu
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Tiantan Image Research Center, China National Clinical Research Center for Neurological Diseases, Beijing, China
| | - Sara Llufriu
- Center of Neuroimmunology, Laboratory of Advanced Imaging in Neuroimmunological Diseases, Hospital Clinic Barcelona, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
| | - Sean Mackey
- Division of Pain Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA
| | - Eloy Martinez-Heras
- Center of Neuroimmunology, Laboratory of Advanced Imaging in Neuroimmunological Diseases, Hospital Clinic Barcelona, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
| | - Loan Mattera
- Fondation Campus Biotech Genève, Geneva, Switzerland
| | - Igor Nestrasil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN, USA
- Division of Clinical Behavioral Neuroscience, Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
| | - Kristin P O'Grady
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Radiology, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Nico Papinutto
- UCSF Weill Institute for Neurosciences, Department of Neurology, University of California San Francisco, San Francisco, CA, USA
| | - Daniel Papp
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, Quebec, Canada
- Wellcome Centre For Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
| | - Deborah Pareto
- Neuroradiology Section, Vall d'Hebron University Hospital, Barcelona, Spain
| | - Todd B Parrish
- Interdepartmental Neuroscience Program, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
| | - Anna Pichiecchio
- Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
- Brain MRI 3T Research Centre, IRCCS Mondino Foundation, Pavia, Italy
| | - Ferran Prados
- NMR Research Unit, Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK
- Centre for Medical Image Computing (CMIC), Medical Physics and Biomedical Engineering Department, University College London, London, UK
- E-health Centre, Universitat Oberta de Catalunya, Barcelona, Spain
| | - Àlex Rovira
- Neuroradiology Section, Vall d'Hebron University Hospital, Barcelona, Spain
| | - Marc J Ruitenberg
- School of Biomedical Sciences, Faculty of Medicine, The University of Queensland, Brisbane, Queensland, Australia
| | - Rebecca S Samson
- NMR Research Unit, Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK
| | - Giovanni Savini
- Brain MRI 3T Research Centre, IRCCS Mondino Foundation, Pavia, Italy
| | - Maryam Seif
- Spinal Cord Injury Center Balgrist, University of Zurich, Zurich, Switzerland
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Alan C Seifert
- BioMedical Engineering and Imaging Institute (BMEII), Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Alex K Smith
- Wellcome Centre For Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
| | - Seth A Smith
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Radiology, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Zachary A Smith
- University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
| | - Elisabeth Solana
- Center of Neuroimmunology, Laboratory of Advanced Imaging in Neuroimmunological Diseases, Hospital Clinic Barcelona, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS) and Universitat de Barcelona, Barcelona, Spain
| | - Yuichi Suzuki
- Department of Radiology, the University of Tokyo, Tokyo, Japan
| | | | - Alexandra Tinnermann
- Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Jan Valošek
- Department of Neurology, Faculty of Medicine and Dentistry, Palacký University and University Hospital Olomouc, Olomouc, Czech Republic
| | - Dimitri Van De Ville
- Institute of Bioengineering/Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland
| | - Marios C Yiannakas
- NMR Research Unit, Queen Square MS Centre, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London, UK
| | - Kenneth A Weber
- Division of Pain Medicine, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, CA, USA
| | - Nikolaus Weiskopf
- Department of Neurophysics, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Felix Bloch Institute for Solid State Physics, Faculty of Physics and Earth Sciences, Leipzig University, Leipzig, Germany
| | - Richard G Wise
- CUBRIC, Cardiff University, Wales, UK
- Institute for Advanced Biomedical Technologies, Department of Neuroscience, Imaging and Clinical Sciences, "G. D'Annunzio University" of Chieti-Pescara, Chieti, Italy
| | - Patrik O Wyss
- Department of Radiology, Swiss Paraplegic Centre, Nottwil, Switzerland
| | - Junqian Xu
- BioMedical Engineering and Imaging Institute (BMEII), Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
|
40
|
Chen H, Jiang Y, Loew M, Ko H. Unsupervised domain adaptation based COVID-19 CT infection segmentation network. APPL INTELL 2021; 52:6340-6353. [PMID: 34764618 PMCID: PMC8421243 DOI: 10.1007/s10489-021-02691-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/15/2021] [Indexed: 10/31/2022]
Abstract
Automatic segmentation of infection areas in computed tomography (CT) images has proven to be an effective diagnostic approach for COVID-19. However, due to the limited number of medical images with pixel-level annotations, accurate segmentation remains a major challenge. In this paper, we propose an unsupervised domain adaptation based segmentation network to improve segmentation of the infection areas in COVID-19 CT images. In particular, we propose to jointly train the segmentation network on synthetic data and a limited number of unlabeled real COVID-19 CT images. Furthermore, we develop a novel domain adaptation module that aligns the two domains and effectively improves the segmentation network's generalization to the real domain. In addition, we propose an unsupervised adversarial training scheme that encourages the segmentation network to learn domain-invariant features, so that these robust features can be used for segmentation. Experimental results demonstrate that our method achieves state-of-the-art segmentation performance on COVID-19 CT images.
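The paper aligns the synthetic and real domains adversarially. A simpler, related domain-invariance penalty (not the paper's method, shown only to make the alignment objective concrete) is the linear-kernel maximum mean discrepancy between the mean feature vectors of the two domains:

```python
def mmd_linear(feats_a, feats_b):
    """Squared Euclidean distance between the mean feature vectors of two
    domains: the linear-kernel maximum mean discrepancy. Adding it to the
    training loss penalizes first-order distribution shift between domains."""
    d = len(feats_a[0])
    mu_a = [sum(f[j] for f in feats_a) / len(feats_a) for j in range(d)]
    mu_b = [sum(f[j] for f in feats_b) / len(feats_b) for j in range(d)]
    return sum((a - b) ** 2 for a, b in zip(mu_a, mu_b))
```

Adversarial schemes like the one proposed replace this fixed statistic with a learned domain discriminator, which can capture higher-order mismatch.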
Affiliation(s)
- Han Chen
- School of Electrical Engineering, Korea University, Seoul, 02841 South Korea
| | - Yifan Jiang
- School of Electrical Engineering, Korea University, Seoul, 02841 South Korea
| | - Murray Loew
- Department of Biomedical Engineering, George Washington University, Washington, DC USA
| | - Hanseok Ko
- School of Electrical Engineering, Korea University, Seoul, 02841 South Korea
|
41
|
Mali SA, Ibrahim A, Woodruff HC, Andrearczyk V, Müller H, Primakov S, Salahuddin Z, Chatterjee A, Lambin P. Making Radiomics More Reproducible across Scanner and Imaging Protocol Variations: A Review of Harmonization Methods. J Pers Med 2021; 11:842. [PMID: 34575619 PMCID: PMC8472571 DOI: 10.3390/jpm11090842] [Citation(s) in RCA: 68] [Impact Index Per Article: 22.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/21/2021] [Accepted: 08/24/2021] [Indexed: 12/13/2022] Open
Abstract
Radiomics converts medical images into mineable data via high-throughput extraction of quantitative features used for clinical decision support. However, these radiomic features are susceptible to variation across scanners, acquisition protocols, and reconstruction settings. Various investigations have assessed the reproducibility and validity of radiomic features across these variations. In this narrative review, we combine systematic keyword searches with prior domain knowledge to discuss harmonization solutions that make radiomic features more reproducible across scanners and protocol settings. The harmonization solutions are divided into two main categories: image domain and feature domain. The image-domain category comprises methods such as standardization of image acquisition, post-processing of raw sensor-level image data, data augmentation techniques, and style transfer. The feature-domain category consists of methods such as identification of reproducible features and normalization techniques, including statistical normalization, intensity harmonization, ComBat and its derivatives, and normalization using deep learning. We also reflect on the importance of deep learning solutions for addressing variability across multi-centric radiomic studies, especially generative adversarial networks (GANs), neural style transfer (NST) techniques, or a combination of both. We cover a broader range of methods than previous reviews, with GANs and NST methods discussed in particular detail.
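ComBat, mentioned above, removes per-site location and scale effects from each feature and adds empirical-Bayes shrinkage of the site parameters. The sketch below shows only the basic location-scale step for a single feature, omitting the shrinkage and covariate adjustment of real ComBat implementations; it is an assumption-laden simplification, not the reference algorithm:

```python
def harmonize_location_scale(features_by_site):
    """Simplified ComBat-style harmonization of one radiomic feature:
    z-score each site's values, then restore the pooled grand mean and
    standard deviation so all sites share the same location and scale.
    (Real ComBat additionally shrinks site parameters via empirical Bayes.)"""
    all_vals = [v for vals in features_by_site.values() for v in vals]
    n = len(all_vals)
    grand_mu = sum(all_vals) / n
    grand_sd = (sum((v - grand_mu) ** 2 for v in all_vals) / n) ** 0.5
    out = {}
    for site, vals in features_by_site.items():
        mu = sum(vals) / len(vals)
        sd = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5 or 1.0
        out[site] = [(v - mu) / sd * grand_sd + grand_mu for v in vals]
    return out
```

After this step every site has the same per-feature mean and spread, which is the core effect ComBat exploits; biological covariates one wants to preserve must be modeled explicitly, as the review discusses.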
Affiliation(s)
- Shruti Atul Mali
  - The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Abdalla Ibrahim
  - The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
  - Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
  - Department of Medical Physics, Division of Nuclear Medicine and Oncological Imaging, Centre Hospitalier Universitaire de Liège, 4000 Liège, Belgium
  - Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen, 52074 Aachen, Germany
- Henry C. Woodruff
  - The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
  - Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
- Vincent Andrearczyk
  - Institute of Information Systems, University of Applied Sciences and Arts Western Switzerland (HES-SO), rue du Technopole 3, 3960 Sierre, Switzerland
- Henning Müller
  - Institute of Information Systems, University of Applied Sciences and Arts Western Switzerland (HES-SO), rue du Technopole 3, 3960 Sierre, Switzerland
- Sergey Primakov
  - The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Zohaib Salahuddin
  - The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Avishek Chatterjee
  - The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
- Philippe Lambin
  - The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, Universiteitssingel 40, 6229 ER Maastricht, The Netherlands
  - Department of Radiology and Nuclear Medicine, GROW—School for Oncology, Maastricht University Medical Center+, P.O. Box 5800, 6202 AZ Maastricht, The Netherlands
42
Pennisi M, Kavasidis I, Spampinato C, Schinina V, Palazzo S, Salanitri FP, Bellitto G, Rundo F, Aldinucci M, Cristofaro M, Campioni P, Pianura E, Di Stefano F, Petrone A, Albarello F, Ippolito G, Cuzzocrea S, Conoci S. An explainable AI system for automated COVID-19 assessment and lesion categorization from CT-scans. Artif Intell Med 2021; 118:102114. [PMID: 34412837 PMCID: PMC8139171 DOI: 10.1016/j.artmed.2021.102114] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 05/06/2021] [Accepted: 05/12/2021] [Indexed: 01/20/2023]
Abstract
COVID-19 infection caused by the SARS-CoV-2 pathogen has been a catastrophic pandemic outbreak all over the world, with an exponential increase in confirmed cases and, unfortunately, deaths. In this work we propose an AI-powered pipeline, based on the deep-learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans. We first propose a new segmentation module aimed at automatically identifying lung parenchyma and lobes. Next, we combine the segmentation network with classification networks for COVID-19 identification and lesion categorization. We compare the model's classification results with those obtained by three expert radiologists on a dataset of 166 CT scans. Results showed a sensitivity of 90.3% and a specificity of 93.5% for COVID-19 detection, at least on par with those yielded by the expert radiologists, and an average lesion categorization accuracy of about 84%. Moreover, a significant role is played by prior lung and lobe segmentation, which allowed us to enhance classification performance by over 6 percentage points. The interpretation of the trained AI models reveals that the most significant areas for supporting the decision on COVID-19 identification are consistent with the lesions clinically associated with the virus, i.e., crazy paving, consolidation, and ground glass. This means that the artificial models are able to discriminate a positive patient from a negative one (both controls and patients with interstitial pneumonia who tested negative for COVID) by evaluating the presence of those lesions in CT scans. Finally, the AI models are integrated into a user-friendly GUI to support AI explainability for radiologists, which is publicly available at http://perceivelab.com/covid-ai.
The whole AI system is unique since, to the best of our knowledge, it is the first publicly available AI-based software that attempts to explain to radiologists what information the AI methods use for making decisions, and that proactively involves them in the decision loop to further improve COVID-19 understanding.
Affiliation(s)
- Vincenzo Schinina
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Marco Aldinucci
  - Department of Computer Science, University of Turin, Turin, Italy
- Massimo Cristofaro
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Paolo Campioni
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Elisa Pianura
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Federica Di Stefano
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Ada Petrone
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Fabrizio Albarello
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Giuseppe Ippolito
  - National Institute for Infectious Diseases "Lazzaro Spallanzani", Rome, Italy
- Sabrina Conoci
  - ChimBioFaram Department, University of Messina, Messina, Italy
43
Wang P, Peng J, Pedersoli M, Zhou Y, Zhang C, Desrosiers C. Self-paced and self-consistent co-training for semi-supervised image segmentation. Med Image Anal 2021; 73:102146. [PMID: 34274692 DOI: 10.1016/j.media.2021.102146] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 05/19/2021] [Accepted: 06/21/2021] [Indexed: 11/25/2022]
Abstract
Deep co-training has recently been proposed as an effective approach for image segmentation when annotated data is scarce. In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method. To help distill information from unlabeled images, we first design a self-paced learning strategy for co-training that lets jointly-trained neural networks focus on easier-to-segment regions first, and then gradually consider harder ones. This is achieved via an end-to-end differentiable loss in the form of a generalized Jensen-Shannon divergence (JSD). Moreover, to encourage predictions from different networks to be both consistent and confident, we enhance this generalized JSD loss with an uncertainty regularizer based on entropy. The robustness of individual models is further improved using a self-ensembling loss that enforces their predictions to be consistent across different training iterations. We demonstrate the potential of our method on three challenging image segmentation problems with different image modalities, using a small fraction of labeled data. Results show clear advantages in terms of performance compared to the standard co-training baselines and recently proposed state-of-the-art approaches for semi-supervised segmentation.
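The core of the co-training objective described above, the generalized Jensen-Shannon divergence between the networks' predictions, can be sketched as follows; the paper's self-paced weighting and entropy-based uncertainty regularizer are omitted, and the function name is illustrative:

```python
import numpy as np

def generalized_jsd(probs, eps=1e-12):
    """Generalized Jensen-Shannon divergence across n model predictions.
    probs: array of shape (n_models, n_pixels, n_classes), rows summing to 1.
    JSD = H(mixture of the distributions) - mean of the individual entropies;
    it is >= 0 and equals 0 iff all models predict identically."""
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    m = p.mean(axis=0)                                    # mixture distribution
    h_mix = -(m * np.log(m)).sum(axis=-1)                 # entropy of mixture
    h_each = -(p * np.log(p)).sum(axis=-1).mean(axis=0)   # mean model entropy
    return h_mix - h_each
```

Because the divergence is differentiable in the probabilities, it can serve directly as a per-pixel agreement loss between co-trained networks.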
Affiliation(s)
- Ping Wang
  - Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C1K3, Canada.
- Jizong Peng
  - Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C1K3, Canada.
- Marco Pedersoli
  - Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C1K3, Canada.
- Yuanfeng Zhou
  - School of Software, Shandong University, Jinan, 250101, China.
- Caiming Zhang
  - School of Software, Shandong University, Jinan, 250101, China.
- Christian Desrosiers
  - Department of Software and IT Engineering, Ecole de technologie supérieure, Montreal, H3C1K3, Canada.
44
Legarreta JH, Petit L, Rheault F, Theaud G, Lemaire C, Descoteaux M, Jodoin PM. Filtering in tractography using autoencoders (FINTA). Med Image Anal 2021; 72:102126. [PMID: 34161915 DOI: 10.1016/j.media.2021.102126] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Revised: 04/20/2021] [Accepted: 05/26/2021] [Indexed: 10/21/2022]
Abstract
Current brain white matter fiber tracking techniques show a number of problems, including: generating large proportions of streamlines that do not accurately describe the underlying anatomy; extracting streamlines that are not supported by the underlying diffusion signal; and under-representing some fiber populations. In this paper, we describe a novel autoencoder-based learning method to filter streamlines from diffusion MRI tractography, and hence obtain more reliable tractograms. Our method, dubbed FINTA (Filtering in Tractography using Autoencoders), uses raw, unlabeled tractograms to train the autoencoder and to learn a robust representation of brain streamlines. Such an embedding is then used to filter undesired streamline samples using a nearest-neighbor algorithm. Our experiments on both synthetic and in vivo human brain diffusion MRI tractography data achieve accuracy scores exceeding the 90% threshold on the test set. Results reveal that FINTA has superior filtering performance compared to conventional, anatomy-based methods and to the state-of-the-art RecoBundles method. Additionally, we demonstrate that FINTA can be applied to partial tractograms without requiring changes to the framework. We also show that the proposed method generalizes well across different tracking methods and datasets, and significantly shortens the computation time for large (>1 M streamlines) tractograms. Together, this work brings forward a new autoencoder-based deep learning framework in tractography, which offers a flexible and powerful method for white matter filtering and bundling that could enhance tractometry and connectivity analyses.
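The filtering step described above, labeling streamlines by their nearest neighbors in the learned latent space, might look like this in outline. In FINTA the embeddings come from a trained autoencoder's latent space; here any fixed-length embedding array stands in, and all names are illustrative:

```python
import numpy as np

def filter_by_neighbors(ref_emb, ref_labels, query_emb, k=3):
    """FINTA-style filtering sketch: decide whether each query streamline
    embedding is plausible by majority vote of its k nearest reference
    embeddings (label 1 = plausible, 0 = implausible)."""
    keep = []
    for q in query_emb:
        d = np.linalg.norm(ref_emb - q, axis=1)   # Euclidean distances
        nn = ref_labels[np.argsort(d)[:k]]        # labels of k nearest refs
        keep.append(np.bincount(nn).argmax() == 1)
    return np.array(keep)
```

A plausible/implausible reference set only needs to be labeled once; new tractograms are then filtered purely by distances in the embedding space.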
Affiliation(s)
- Jon Haitz Legarreta
  - Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, Université de Sherbrooke, 2500, boul. de l'Université, Sherbrooke, Québec J1K 2R1, Canada
  - Videos & Images Theory and Analytics Laboratory (VITAL), Department of Computer Science, Université de Sherbrooke, 2500, boul. de l'Université, Sherbrooke, Québec J1K 2R1, Canada
- Laurent Petit
  - Groupe d'Imagerie Neurofonctionnelle (GIN), Univ. Bordeaux, CNRS, CEA, IMN, UMR 5293, Bordeaux F-33000, France
- François Rheault
  - Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, Université de Sherbrooke, 2500, boul. de l'Université, Sherbrooke, Québec J1K 2R1, Canada
- Guillaume Theaud
  - Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, Université de Sherbrooke, 2500, boul. de l'Université, Sherbrooke, Québec J1K 2R1, Canada
- Carl Lemaire
  - Centre de Calcul Scientifique, Université de Sherbrooke, 2500, boul. de l'Université, Sherbrooke, Québec J1K 2R1, Canada
- Maxime Descoteaux
  - Sherbrooke Connectivity Imaging Laboratory (SCIL), Department of Computer Science, Université de Sherbrooke, 2500, boul. de l'Université, Sherbrooke, Québec J1K 2R1, Canada
- Pierre-Marc Jodoin
  - Videos & Images Theory and Analytics Laboratory (VITAL), Department of Computer Science, Université de Sherbrooke, 2500, boul. de l'Université, Sherbrooke, Québec J1K 2R1, Canada
45
Meyer A, Mehrtash A, Rak M, Bashkanov O, Langbein B, Ziaei A, Kibel AS, Tempany CM, Hansen C, Tokuda J. Domain adaptation for segmentation of critical structures for prostate cancer therapy. Sci Rep 2021; 11:11480. [PMID: 34075061 PMCID: PMC8169882 DOI: 10.1038/s41598-021-90294-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 05/04/2021] [Indexed: 11/23/2022] Open
Abstract
Preoperative assessment of the proximity of critical structures to the tumors is crucial in avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincter (EUS), can enable physicians to perform such assessments intuitively. As a crucial step toward generating a patient-specific anatomical model from preoperative MRI in a clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to the domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning (TL) and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gain with the combination of those techniques compared to pure TL and to the combination of TL with simple self-learning ([Formula: see text] for all structures using a Wilcoxon signed-rank test). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic application capabilities. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
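The uncertainty-guided self-learning ingredient can be sketched as follows, under the simplifying assumption that pseudo-labels are kept only where a deep ensemble's members agree; this is a sketch of the general technique, not the paper's exact procedure, and all names and thresholds are illustrative:

```python
import numpy as np

def ensemble_pseudo_labels(member_probs, var_thresh=0.01):
    """Deep-ensemble sketch of uncertainty-guided self-learning:
    average the members' softmax maps to obtain pseudo-labels, and mask
    out pixels where the inter-member variance of the winning class's
    probability (a simple predictive-uncertainty proxy) is high."""
    p = np.asarray(member_probs, dtype=float)  # (n_members, n_pixels, n_classes)
    mean_p = p.mean(axis=0)
    labels = mean_p.argmax(axis=-1)
    # each member's probability for the winning class, per pixel
    win = np.take_along_axis(p, labels[None, :, None], axis=-1)[..., 0]
    certain = win.var(axis=0) <= var_thresh
    return labels, certain
```

Only the `certain` pixels would contribute to the self-learning loss on the target domain, so disagreeing ensemble members silence unreliable pseudo-labels.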
Affiliation(s)
- Anneke Meyer
  - Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Alireza Mehrtash
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marko Rak
  - Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Oleksii Bashkanov
  - Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Bjoern Langbein
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alireza Ziaei
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Adam S Kibel
  - Division of Urology, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Clare M Tempany
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Christian Hansen
  - Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Junichi Tokuda
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
46
Soltanian-Zadeh S, Kurokawa K, Liu Z, Zhang F, Saeedi O, Hammer DX, Miller DT, Farsiu S. Weakly supervised individual ganglion cell segmentation from adaptive optics OCT images for glaucomatous damage assessment. OPTICA 2021; 8:642-651. [PMID: 35174258 PMCID: PMC8846574 DOI: 10.1364/optica.418274] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Cell-level quantitative features of retinal ganglion cells (GCs) are potentially important biomarkers for improved diagnosis and treatment monitoring of neurodegenerative diseases such as glaucoma, Parkinson's disease, and Alzheimer's disease. Yet, due to limited resolution, individual GCs cannot be visualized by commonly used ophthalmic imaging systems, including optical coherence tomography (OCT), and assessment is limited to gross layer thickness analysis. Adaptive optics OCT (AO-OCT) enables in vivo imaging of individual retinal GCs. We present an automated segmentation of GC layer (GCL) somas from AO-OCT volumes based on weakly supervised deep learning (named WeakGCSeg), which effectively utilizes weak annotations in the training process. Experimental results show that WeakGCSeg is on par with or superior to human experts and is superior to other state-of-the-art networks. The automated quantitative features of individual GCs show an increase in structure-function correlation in glaucoma subjects compared to using thickness measures from OCT images. Our results suggest that by automatic quantification of GC morphology, WeakGCSeg can potentially alleviate a major bottleneck in using AO-OCT for vision research.
Affiliation(s)
- Kazuhiro Kurokawa
  - School of Optometry, Indiana University, Bloomington, Indiana 47405, USA
- Zhuolin Liu
  - Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, Maryland 20993, USA
- Furu Zhang
  - Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, Maryland 20993, USA
- Osamah Saeedi
  - Department of Ophthalmology and Visual Sciences, University of Maryland Medical Center, Baltimore, Maryland 21201, USA
- Daniel X. Hammer
  - Center for Devices and Radiological Health (CDRH), U.S. Food and Drug Administration, Silver Spring, Maryland 20993, USA
- Donald T. Miller
  - School of Optometry, Indiana University, Bloomington, Indiana 47405, USA
- Sina Farsiu
  - Department of Biomedical Engineering, Duke University, Durham, North Carolina 27708, USA
  - Department of Ophthalmology, Duke University Medical Center, Durham, North Carolina 27710, USA
47
Joint image and feature adaptative attention-aware networks for cross-modality semantic segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06064-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
48
Chen L, Zhao H, Jiang H, Balu N, Geleri DB, Chu B, Watase H, Zhao X, Li R, Xu J, Hatsukami TS, Xu D, Hwang JN, Yuan C. Domain adaptive and fully automated carotid artery atherosclerotic lesion detection using an artificial intelligence approach (LATTE) on 3D MRI. Magn Reson Med 2021; 86:1662-1673. [PMID: 33885165 DOI: 10.1002/mrm.28794] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2021] [Revised: 03/07/2021] [Accepted: 03/18/2021] [Indexed: 01/17/2023]
Abstract
PURPOSE: To develop and evaluate a domain adaptive and fully automated review workflow (lesion assessment through tracklet evaluation, LATTE) for assessment of atherosclerotic disease in 3D carotid MR vessel wall imaging (MR VWI).
METHODS: VWI of 279 subjects with carotid atherosclerosis were used to develop LATTE, chiefly convolutional neural network (CNN)-based domain adaptive lesion classification after image quality assessment and artery-of-interest localization. Heterogeneity in test sets from various sites usually degrades CNN performance. With our novel unsupervised domain adaptation (DA), LATTE was designed to accurately classify arteries into normal arteries and early and advanced lesions without additional annotations on new datasets. VWI of 271 subjects from four datasets (eight sites) with slightly different imaging parameters/signal patterns were collected to assess the effectiveness of the DA in LATTE using the area under the receiver operating characteristic curve (AUC) on all lesions and advanced lesions before and after DA.
RESULTS: LATTE performed well on advanced/all lesion classification, with AUCs of >0.88/0.83, a significant improvement from >0.82/0.80 without DA.
CONCLUSIONS: LATTE can locate target arteries and distinguish carotid atherosclerotic lesions, with consistently improved performance from DA on new datasets. It may be useful for carotid atherosclerosis detection and assessment at various clinical sites.
Affiliation(s)
- Li Chen
  - Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
- Huilin Zhao
  - Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hongjian Jiang
  - Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
- Niranjan Balu
  - Department of Radiology, University of Washington, Seattle, Washington, USA
- Baocheng Chu
  - Department of Radiology, University of Washington, Seattle, Washington, USA
- Hiroko Watase
  - Department of Surgery, University of Washington, Seattle, Washington, USA
- Xihai Zhao
  - Department of Biomedical Engineering, Tsinghua University School of Medicine, Beijing, China
- Rui Li
  - Department of Biomedical Engineering, Tsinghua University School of Medicine, Beijing, China
- Jianrong Xu
  - Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Thomas S Hatsukami
  - Department of Surgery, University of Washington, Seattle, Washington, USA
- Dongxiang Xu
  - Department of Radiology, University of Washington, Seattle, Washington, USA
- Jenq-Neng Hwang
  - Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
- Chun Yuan
  - Department of Radiology, University of Washington, Seattle, Washington, USA
49
Rim B, Lee S, Lee A, Gil HW, Hong M. Semantic Cardiac Segmentation in Chest CT Images Using K-Means Clustering and the Mathematical Morphology Method. SENSORS 2021; 21:s21082675. [PMID: 33920219 PMCID: PMC8070040 DOI: 10.3390/s21082675] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Revised: 04/02/2021] [Accepted: 04/07/2021] [Indexed: 11/24/2022]
Abstract
Whole-heart segmentation in chest CT images is important for identifying functional abnormalities that occur in cardiovascular diseases, such as coronary artery disease (CAD). However, manual efforts are time-consuming and labor-intensive. Additionally, labeling the ground truth for cardiac segmentation requires extensive manual annotation of images by a radiologist. Due to the difficulty of obtaining annotated data and the expertise required of an annotator, an unsupervised approach is proposed. In this paper, we introduce a semantic whole-heart segmentation combining K-Means clustering as the threshold criterion of the mean-thresholding method and the mathematical morphology method as a threshold-shifting enhancer. The experiment was conducted on 500 subjects in two cases: (1) 56 slices per volume containing full heart scans, and (2) 30 slices per volume containing about half of the top of the heart scans before the liver appears. In both cases, the K-Means method achieved an average silhouette score of 0.4130. The experiment on 56 slices per volume achieved an overall accuracy (OA) and mean intersection over union (mIoU) of 34.90% and 41.26%, respectively, while the first 30 slices per volume achieved an OA and mIoU of 55.10% and 71.46%, respectively.
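Using K-Means cluster centers as a threshold criterion, as the abstract describes, can be sketched for the 1-D intensity case; this is a simplified, deterministic variant (the morphology-based threshold shifting is omitted, and the function name is illustrative):

```python
import numpy as np

def kmeans_threshold(intensities, n_iter=50):
    """Derive a binary segmentation threshold from 1-D K-Means:
    run two-cluster K-Means on the voxel intensities (deterministic
    min/max initialization) and cut at the midpoint of the two
    converged cluster centers."""
    x = np.asarray(intensities, dtype=float).ravel()
    centers = np.array([x.min(), x.max()])
    for _ in range(n_iter):
        # assign every intensity to its nearest center
        assign = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(2):
            if np.any(assign == c):
                centers[c] = x[assign == c].mean()
    return centers.mean()  # threshold between the two centers
```

Voxels above the returned threshold would form the foreground mask that the morphology step then refines.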
Affiliation(s)
- Beanbonyka Rim
  - Department of Software Convergence, Soonchunhyang University, Asan 31538, Korea
- Sungjin Lee
  - Department of Software Convergence, Soonchunhyang University, Asan 31538, Korea
- Ahyoung Lee
  - Department of Computer Science, Kennesaw State University, Marietta, GA 30144, USA
- Hyo-Wook Gil
  - Department of Internal Medicine, Soonchunhyang University Cheonan Hospital, Cheonan 31151, Korea
- Min Hong
  - Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Korea
50
Meyer A, Ghosh S, Schindele D, Schostak M, Stober S, Hansen C, Rak M. Uncertainty-aware temporal self-learning (UATS): Semi-supervised learning for segmentation of prostate zones and beyond. Artif Intell Med 2021; 116:102073. [PMID: 34020751 DOI: 10.1016/j.artmed.2021.102073] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Revised: 02/09/2021] [Accepted: 04/07/2021] [Indexed: 10/21/2022]
Abstract
Various convolutional neural network (CNN)-based concepts have been introduced for the automatic segmentation of the prostate and its coarse subdivision into transition zone (TZ) and peripheral zone (PZ). However, when targeting a fine-grained segmentation of TZ, PZ, distal prostatic urethra (DPU), and anterior fibromuscular stroma (AFS), the task becomes more challenging and has not yet been solved at the level of human performance. One reason might be the insufficient amount of labeled data for supervised training. Therefore, we propose applying a semi-supervised learning (SSL) technique named uncertainty-aware temporal self-learning (UATS) to overcome expensive and time-consuming manual ground-truth labeling. We combine the SSL techniques of temporal ensembling and uncertainty-guided self-learning to benefit from unlabeled images, which are often readily available. Our method significantly outperforms the supervised baseline and obtained Dice coefficients (DC) of up to 78.9%, 87.3%, 75.3%, and 50.6% for TZ, PZ, DPU, and AFS, respectively. The obtained results are in the range of human inter-rater performance for all structures. Moreover, we investigate the method's robustness against noise and demonstrate its generalization capability for varying ratios of labeled data and on other challenging tasks, namely hippocampus and skin lesion segmentation. UATS achieved superior segmentation quality compared to the supervised baseline, particularly for minimal amounts of labeled data.
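Temporal ensembling, one of the two SSL ingredients combined in UATS, can be sketched as an exponential moving average of per-epoch predictions, with a simple confidence cutoff standing in for the uncertainty guidance; this is a sketch of the general technique, and the names and the threshold value are illustrative:

```python
import numpy as np

def ema_pseudo_labels(pred_history, alpha=0.6, conf_thresh=0.8):
    """Temporal-ensembling sketch: keep an exponential moving average of
    per-epoch class-probability maps (with startup-bias correction, as in
    the original temporal-ensembling formulation), then retain pseudo-labels
    only where the ensembled confidence clears a threshold."""
    ema = np.zeros_like(pred_history[0])
    for t, p in enumerate(pred_history):
        ema = alpha * ema + (1 - alpha) * p
        ema_hat = ema / (1 - alpha ** (t + 1))  # startup-bias correction
    labels = ema_hat.argmax(axis=-1)
    confident = ema_hat.max(axis=-1) >= conf_thresh
    return labels, confident
```

Averaging over epochs smooths out the noise of any single forward pass, and the confidence mask keeps ambiguous pixels out of the self-learning loss.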
Affiliation(s)
- Anneke Meyer
  - Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Suhita Ghosh
  - Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Daniel Schindele
  - Clinic of Urology and Pediatric Urology, University Hospital Magdeburg, Germany
- Martin Schostak
  - Clinic of Urology and Pediatric Urology, University Hospital Magdeburg, Germany
- Sebastian Stober
  - Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Christian Hansen
  - Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Marko Rak
  - Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany