1
Liu Z, Kainth K, Zhou A, Deyer TW, Fayad ZA, Greenspan H, Mei X. A review of self-supervised, generative, and few-shot deep learning methods for data-limited magnetic resonance imaging segmentation. NMR Biomed 2024; 37:e5143. PMID: 38523402. DOI: 10.1002/nbm.5143.
Abstract
Magnetic resonance imaging (MRI) is a ubiquitous medical imaging technology with applications in disease diagnostics, intervention, and treatment planning. Accurate MRI segmentation is critical for diagnosing abnormalities, monitoring diseases, and deciding on a course of treatment. With the advent of advanced deep learning frameworks, fully automated and accurate MRI segmentation is advancing. Traditional supervised deep learning techniques have advanced tremendously, reaching clinical-level accuracy in the field of segmentation. However, these algorithms still require a large amount of annotated data, which is oftentimes unavailable or impractical. One way to circumvent this issue is to utilize algorithms that exploit a limited amount of labeled data. This paper aims to review such state-of-the-art algorithms that use a limited number of annotated samples. We explain the fundamental principles of self-supervised learning, generative models, few-shot learning, and semi-supervised learning and summarize their applications in cardiac, abdominal, and brain MRI segmentation. Throughout this review, we highlight algorithms that can be employed based on the quantity of annotated data available. We also present a comprehensive list of notable publicly available MRI segmentation datasets. To conclude, we discuss possible future directions of the field, including emerging algorithms such as contrastive language-image pretraining and potential combinations across the methods discussed, that can further increase the efficacy of image segmentation with limited labels.
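The self-supervised pretraining that several of the reviewed label-efficient methods build on can be illustrated with a minimal contrastive objective: embeddings of two augmented views of the same unlabeled scan are pulled together and pushed away from every other sample in the batch. The numpy sketch below of a SimCLR-style NT-Xent loss is purely illustrative; the function name, shapes, and temperature are assumptions, not details from the paper.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays holding embeddings of two augmented views of
    N unlabeled images. The loss is low when each embedding is closer
    to its paired view than to every other sample in the batch.
    """
    z = np.concatenate([z1, z2], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalise rows
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = z1.shape[0]
    # the positive partner of row i is row i+n, and vice versa
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

After pretraining an encoder with such a loss on unlabeled scans, only the small annotated set is needed to fine-tune a segmentation head.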
Affiliation(s)
- Zelong Liu
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Komal Kainth
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Alexander Zhou
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Timothy W Deyer
- East River Medical Imaging, New York, New York, USA
- Department of Radiology, Cornell Medicine, New York, New York, USA
- Zahi A Fayad
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Hayit Greenspan
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Xueyan Mei
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
2
Nerella S, Bandyopadhyay S, Zhang J, Contreras M, Siegel S, Bumin A, Silva B, Sena J, Shickel B, Bihorac A, Khezeli K, Rashidi P. Transformers and large language models in healthcare: A review. Artif Intell Med 2024; 154:102900. PMID: 38878555. DOI: 10.1016/j.artmed.2024.102900.
Abstract
With Artificial Intelligence (AI) increasingly permeating various aspects of society, including healthcare, the adoption of the Transformer neural network architecture is rapidly changing many applications. The Transformer is a type of deep learning architecture initially developed to solve general-purpose Natural Language Processing (NLP) tasks and has subsequently been adapted in many fields, including healthcare. In this survey paper, we provide an overview of how this architecture has been adopted to analyze various forms of healthcare data, including clinical NLP, medical imaging, structured Electronic Health Records (EHR), social media, bio-physiological signals, and biomolecular sequences. Furthermore, we also include articles that used the Transformer architecture for generating surgical instructions and predicting adverse outcomes after surgery under the umbrella of critical care. Under diverse settings, these models have been used for clinical diagnosis, report generation, data reconstruction, and drug/protein synthesis. Finally, we discuss the benefits and limitations of using Transformers in healthcare and examine issues such as computational cost, model interpretability, fairness, alignment with human values, ethical implications, and environmental impact.
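The computation shared by all the Transformer models surveyed here is scaled dot-product self-attention: each token's output is a similarity-weighted mixture of all tokens' value vectors. A minimal single-head numpy sketch (names and shapes are illustrative, not taken from any specific model in the review):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (T, d_model) token embeddings; Wq, Wk, Wv: (d_model, d_k) projections.
    Each output row mixes value vectors, weighted by how well that token's
    query matches every token's key.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) attention logits
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights
```

Stacking this operation with feed-forward layers and residual connections yields the architecture that the surveyed healthcare models adapt to text, images, EHR sequences, and signals.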
Affiliation(s)
- Subhash Nerella
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Jiaqing Zhang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, United States
- Miguel Contreras
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Scott Siegel
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Aysegul Bumin
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Brandon Silva
- Department of Computer and Information Science and Engineering, University of Florida, Gainesville, United States
- Jessica Sena
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Benjamin Shickel
- Department of Medicine, University of Florida, Gainesville, United States
- Azra Bihorac
- Department of Medicine, University of Florida, Gainesville, United States
- Kia Khezeli
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
- Parisa Rashidi
- Department of Biomedical Engineering, University of Florida, Gainesville, United States
3
Safari M, Eidex Z, Chang CW, Qiu RL, Yang X. Fast MRI Reconstruction Using Deep Learning-based Compressed Sensing: A Systematic Review. arXiv 2024; arXiv:2405.00241v1. PMID: 38745700. PMCID: PMC11092677.
Abstract
Magnetic resonance imaging (MRI) has revolutionized medical imaging, providing a non-invasive and highly detailed look into the human body. However, the long acquisition times of MRI present challenges, causing patient discomfort, motion artifacts, and limiting real-time applications. To address these challenges, researchers are exploring various techniques to reduce acquisition time and improve the overall efficiency of MRI. One such technique is compressed sensing (CS), which reduces data acquisition by leveraging image sparsity in transformed spaces. In recent years, deep learning (DL) has been integrated with CS-MRI, leading to a new framework that has seen remarkable growth. DL-based CS-MRI approaches are proving to be highly effective in accelerating MR imaging without compromising image quality. This review comprehensively examines DL-based CS-MRI techniques, focusing on their role in increasing MR imaging speed. We provide a detailed analysis of each category of DL-based CS-MRI, including end-to-end, unrolled optimization, self-supervised, and federated learning. Our systematic review highlights significant contributions and underscores the exciting potential of DL in CS-MRI. Additionally, it efficiently summarizes key results and trends in DL-based CS-MRI, including quantitative metrics, the datasets used, acceleration factors, and the progress of and research interest in DL techniques over time. Finally, we discuss potential future directions and the importance of DL-based CS-MRI in the advancement of medical imaging. To facilitate further research in this area, we provide a GitHub repository that includes up-to-date DL-based CS-MRI publications and publicly available datasets - https://github.com/mosaf/Awesome-DL-based-CS-MRI.
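The classical CS reconstruction that unrolled DL methods imitate alternates a data-consistency gradient step with a sparsity-promoting shrinkage step. A minimal sketch of the Iterative Soft-Thresholding Algorithm (ISTA) follows, with a generic random sensing matrix standing in for the undersampled Fourier operator of real CS-MRI; the parameter values are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink each entry towards zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.05, n_iter=300):
    """ISTA for the lasso problem min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    The gradient step enforces consistency with the undersampled
    measurements y; the shrinkage step enforces sparsity. These are the
    same two ingredients that unrolled CS-MRI networks learn end to end.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x
```

With far fewer measurements than unknowns, a sufficiently sparse signal can still be recovered, which is what allows CS-MRI to skip k-space samples and shorten scans.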
Affiliation(s)
- Mojtaba Safari
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
4
Svanera M, Savardi M, Signoroni A, Benini S, Muckli L. Fighting the scanner effect in brain MRI segmentation with a progressive level-of-detail network trained on multi-site data. Med Image Anal 2024; 93:103090. PMID: 38241763. DOI: 10.1016/j.media.2024.103090.
Abstract
Many clinical and research studies of the human brain require accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure high accuracy only when tested on data from the same sites exploited in training (i.e., internal data). Performance degradation experienced on external data (i.e., unseen volumes from unseen sites) is due to the inter-site variability in intensity distributions, and to unique artefacts caused by different MR scanner models and acquisition parameters. To mitigate this site-dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels-of-detail (LOD), able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior helpful in identifying brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, at 1.5-3 T, from a population spanning 8 to 90 years of age. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and robustness to challenging anatomical variations. Its portability paves the way for large-scale applications across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available on the project website.
Affiliation(s)
- Michele Svanera
- Center for Cognitive Neuroimaging at the School of Psychology & Neuroscience, University of Glasgow, UK
- Mattia Savardi
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Italy
- Alberto Signoroni
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Italy
- Sergio Benini
- Department of Information Engineering, University of Brescia, Italy
- Lars Muckli
- Center for Cognitive Neuroimaging at the School of Psychology & Neuroscience, University of Glasgow, UK
5
Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. PMID: 38052145. DOI: 10.1016/j.media.2023.103046.
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
6
Sudre CH, Van Wijnen K, Dubost F, Adams H, Atkinson D, Barkhof F, Birhanu MA, Bron EE, Camarasa R, Chaturvedi N, Chen Y, Chen Z, Chen S, Dou Q, Evans T, Ezhov I, Gao H, Girones Sanguesa M, Gispert JD, Gomez Anson B, Hughes AD, Ikram MA, Ingala S, Jaeger HR, Kofler F, Kuijf HJ, Kutnar D, Lee M, Li B, Lorenzini L, Menze B, Molinuevo JL, Pan Y, Puybareau E, Rehwald R, Su R, Shi P, Smith L, Tillin T, Tochon G, Urien H, van der Velden BHM, van der Velpen IF, Wiestler B, Wolters FJ, Yilmaz P, de Groot M, Vernooij MW, de Bruijne M. Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021. Med Image Anal 2024; 91:103029. PMID: 37988921. DOI: 10.1016/j.media.2023.103029.
Abstract
Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds, and 6 for Task 3-Lacunes). Multi-cohort data were used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds, and results that are not yet practically useful for Task 3-Lacunes. The challenge also highlighted performance inconsistency across cases that may deter use at an individual level, while still proving useful at a population level.
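Small, sparse markers such as microbleeds are typically evaluated lesion-wise rather than voxel-wise: predicted and ground-truth masks are split into connected components, and overlap between components decides true and false positives. The pure-Python 2D sketch below illustrates such a detection F1 score; the any-overlap matching criterion and function names are illustrative assumptions, not the challenge's official metric.

```python
from collections import deque

def label_components(mask):
    """4-connected component labelling of a 2D binary mask (list of lists).

    Returns a list of components, each a set of (row, col) pixels.
    """
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and (r, c) not in seen:
                comp, q = set(), deque([(r, c)])
                seen.add((r, c))
                while q:  # breadth-first flood fill of one lesion
                    cr, cc = q.popleft()
                    comp.add((cr, cc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            q.append((nr, nc))
                comps.append(comp)
    return comps

def lesion_f1(pred, truth):
    """Lesion-wise F1: a predicted component is a TP if it touches any
    ground-truth pixel; an untouched ground-truth component is a FN."""
    pred_c, truth_c = label_components(pred), label_components(truth)
    truth_pix = set().union(*truth_c) if truth_c else set()
    pred_pix = set().union(*pred_c) if pred_c else set()
    tp = sum(1 for c in pred_c if c & truth_pix)
    fp = len(pred_c) - tp
    fn = sum(1 for c in truth_c if not (c & pred_pix))
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```

Counting per lesion rather than per voxel is what exposes the case-level inconsistency the abstract mentions: a single missed lesion can dominate the score for a patient with few markers.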
Affiliation(s)
- Carole H Sudre
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom; Centre for Medical Image Computing, University College London, London, United Kingdom; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Kimberlin Van Wijnen
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Florian Dubost
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Hieab Adams
- Department of Clinical Genetics and Radiology, Erasmus MC, Rotterdam, The Netherlands
- David Atkinson
- Centre for Medical Imaging, University College London, London, United Kingdom
- Frederik Barkhof
- Centre for Medical Image Computing, University College London, London, United Kingdom; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- Mahlet A Birhanu
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Esther E Bron
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Robin Camarasa
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Nish Chaturvedi
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- Yuan Chen
- Department of Radiology, University of Massachusetts Medical School, Worcester, USA
- Zihao Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shuai Chen
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Tavia Evans
- Department of Clinical Genetics and Radiology, Erasmus MC, Rotterdam, The Netherlands
- Ivan Ezhov
- Department of Informatics, Technische Universität München, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Haojun Gao
- Department of Radiology, Zhejiang University, Hangzhou, China
- Juan Domingo Gispert
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; Hospital del Mar Medical Research Institute (IMIM), Barcelona, Spain; Centro de Investigación Biomédica en Red Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona, Spain
- Alun D Hughes
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- M Arfan Ikram
- Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Silvia Ingala
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- H Rolf Jaeger
- Institute of Neurology, University College London, London, United Kingdom
- Florian Kofler
- Department of Informatics, Technische Universität München, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Germany
- Hugo J Kuijf
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Denis Kutnar
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Bo Li
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Luigi Lorenzini
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, Amsterdam, The Netherlands
- Bjoern Menze
- Department of Informatics, Technische Universität München, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Jose Luis Molinuevo
- Barcelonaβeta Brain Research Center (BBRC), Pasqual Maragall Foundation, Barcelona, Spain; H. Lundbeck A/S, Copenhagen, Denmark
- Yiwei Pan
- Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Rafael Rehwald
- Institute of Neurology, University College London, London, United Kingdom
- Ruisheng Su
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Pengcheng Shi
- Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Therese Tillin
- MRC Unit for Lifelong Health and Ageing at UCL, Department of Population Science and Experimental Medicine, University College London, London, United Kingdom
- Hélène Urien
- ISEP-Institut Supérieur d'Électronique de Paris, Issy-les-Moulineaux, France
- Isabelle F van der Velpen
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Germany
- Frank J Wolters
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Pinar Yilmaz
- Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Marius de Groot
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; GlaxoSmithKline Research, Stevenage, United Kingdom
- Meike W Vernooij
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Epidemiology, Erasmus MC, Rotterdam, The Netherlands
- Marleen de Bruijne
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
7
Mhlanga ST, Viriri S. Deep learning techniques for isointense infant brain tissue segmentation: a systematic literature review. Front Med (Lausanne) 2023; 10:1240360. PMID: 38193036. PMCID: PMC10773803. DOI: 10.3389/fmed.2023.1240360.
Abstract
Introduction: To improve comprehension of early brain growth in health and disease, it is essential to precisely segment infant brain magnetic resonance imaging (MRI) into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). However, in the isointense phase (6-8 months of age), owing to ongoing myelination and development, WM and GM display similar intensity levels in both T1-weighted and T2-weighted MRI, making tissue segmentation extremely difficult. Methods: This publication presents a comprehensive review of studies on isointense brain MRI segmentation approaches. The main aim and contribution of this study is to aid researchers by providing a thorough review that makes their search for isointense brain MRI segmentation methods easier. The systematic literature review is performed from four points of reference: (1) review of studies concerning isointense brain MRI segmentation; (2) research contributions, future works, and limitations; (3) frequently applied evaluation metrics and datasets; (4) findings of the studies. Results and discussion: The systematic review covers studies published in the period from 2012 to 2022. A total of 19 primary studies of isointense brain MRI segmentation were selected to address the research question stated in this review.
Affiliation(s)
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
8
Russo C, Pirozzi MA, Mazio F, Cascone D, Cicala D, De Liso M, Nastro A, Covelli EM, Cinalli G, Quarantelli M. Fully automated measurement of intracranial CSF and brain parenchyma volumes in pediatric hydrocephalus by segmentation of clinical MRI studies. Med Phys 2023; 50:7921-7933. PMID: 37166045. DOI: 10.1002/mp.16445.
Abstract
BACKGROUND: Brain parenchyma (BP) and intracranial cerebrospinal fluid (iCSF) volumes measured by fully automated segmentation of clinical brain MRI studies may be useful for the diagnosis and follow-up of pediatric hydrocephalus. However, previously published segmentation techniques either rely on dedicated sequences, not routinely used in clinical practice, or on spatial normalization, which has limited accuracy when severe brain distortions, such as in hydrocephalic patients, are present. PURPOSE: We developed a fully automated method to measure BP and iCSF volumes from clinical brain MRI studies of pediatric hydrocephalus patients, exploiting the complementary information contained in T2- and T1-weighted images commonly used in clinical practice. METHODS: Following skull-stripping of the combined volumes, performed using a multiparametric method to obtain a reliable definition of the inner skull profile, the proposed procedure maximizes the CSF-to-parenchyma contrast by dividing the T2w volume by the T1w volume after full-scale dynamic rescaling, thus allowing separation of iCSF and BP through a simple thresholding routine. RESULTS: Validation against manual tracing on 23 studies (four controls and 19 hydrocephalic patients) showed excellent concordance (ICC > 0.98) and spatial overlap (Dice coefficients ranging from 77.2% for iCSF to 96.8% for intracranial volume). Accuracy was comparable to the intra-operator reproducibility of manual segmentation, as measured in 14 studies processed twice by the same experienced neuroradiologist. Results of the application of the algorithm to a dataset of 63 controls and 57 hydrocephalic patients (19 with parenchymal damage), measuring volume changes with normal development and in hydrocephalic patients, are also reported for demonstration purposes.
CONCLUSIONS: The proposed approach allows fully automated segmentation of BP and iCSF in clinical studies, even in severely distorted brains, enabling assessment of age- and disease-related changes in intracranial tissue volume with an accuracy comparable to expert manual segmentation.
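The contrast-maximisation step described above can be sketched on toy data: after full-scale rescaling, CSF is bright on T2w and dark on T1w, so the T2w/T1w ratio is high in CSF and a single threshold separates it from parenchyma. In the numpy sketch below, the threshold, epsilon, and function names are illustrative assumptions, not the paper's values, and the skull-stripping step is assumed to have already been applied.

```python
import numpy as np

def rescale(v):
    """Full-scale rescaling of an image volume to [0, 1]."""
    v = v.astype(float)
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

def segment_icsf(t1w, t2w, threshold=1.5, eps=1e-3):
    """Separate iCSF from brain parenchyma inside a skull-stripped volume.

    CSF is bright on T2w and dark on T1w, so the rescaled T2w/T1w ratio
    is high in CSF; a single threshold then splits the two classes.
    """
    ratio = rescale(t2w) / (rescale(t1w) + eps)  # eps avoids division by zero
    return ratio > threshold                     # boolean iCSF mask

def dice(a, b):
    """Dice overlap between two boolean masks, as used in the validation."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

On clinical data the ratio image is noisier than in this toy setting, which is why the paper's full pipeline also needs the multiparametric skull-stripping and dynamic rescaling it describes.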
Affiliation(s)
- Carmela Russo
- Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Maria Agnese Pirozzi
- Institute of Biostructures and Bioimaging, National Research Council, Naples, Italy
- Department of Advanced Medical and Surgical Sciences, University of Campania "Luigi Vanvitelli", Naples, Italy
- Federica Mazio
- Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Daniele Cascone
- Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Domenico Cicala
- Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Maria De Liso
- Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Anna Nastro
- Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Eugenio Maria Covelli
- Neuroradiology Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Giuseppe Cinalli
- Pediatric Neurosurgery Unit, Department of Neuroscience, Santobono-Pausilipon Children's Hospital, Naples, Italy
- Mario Quarantelli
- Institute of Biostructures and Bioimaging, National Research Council, Naples, Italy
9
Zhang M, Wu Y, Zhang H, Qin Y, Zheng H, Tang W, Arnold C, Pei C, Yu P, Nan Y, Yang G, Walsh S, Marshall DC, Komorowski M, Wang P, Guo D, Jin D, Wu Y, Zhao S, Chang R, Zhang B, Lu X, Qayyum A, Mazher M, Su Q, Wu Y, Liu Y, Zhu Y, Yang J, Pakzad A, Rangelov B, Estepar RSJ, Espinosa CC, Sun J, Yang GZ, Gu Y. Multi-site, Multi-domain Airway Tree Modeling. Med Image Anal 2023; 90:102957. PMID: 37716199. DOI: 10.1016/j.media.2023.102957.
Abstract
Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have extended the reach of pulmonary airway segmentation closer to the limit of image resolution. Since the EXACT'09 pulmonary airway segmentation challenge, limited effort has been directed to the quantitative comparison of the newly emerged algorithms driven by the maturity of deep learning based approaches and by extensive clinical efforts to resolve finer details of distal airways for early intervention in pulmonary diseases. Thus far, publicly annotated datasets are extremely limited, hindering the development of data-driven methods and detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), which was held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotation, including 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and further included a portion of noisy COVID-19 CTs with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Both quantitative and qualitative results revealed that deep learning models embedded with topological continuity enhancement achieved superior performance in general. The ATM'22 challenge follows an open-call design; the training data and the gold standard evaluation are available upon successful registration via its homepage (https://atm22.grand-challenge.org/).
Affiliation(s)
- Minghui Zhang: Institute of Medical Robotics; Institute of Image Processing and Pattern Recognition; Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yangqian Wu: Institute of Medical Robotics; Institute of Image Processing and Pattern Recognition; Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hanxiao Zhang: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yulei Qin: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hao Zheng: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wen Tang: InferVision Medical Technology Co., Ltd., Beijing, China
- Chenhao Pei: InferVision Medical Technology Co., Ltd., Beijing, China
- Pengxin Yu: InferVision Medical Technology Co., Ltd., Beijing, China
- Yang Nan: Imperial College London, London, UK
- Puyang Wang: Alibaba DAMO Academy, 969 West Wen Yi Road, Hangzhou, Zhejiang, China
- Dazhou Guo: Alibaba DAMO Academy USA, 860 Washington Street, 8F, NY, USA
- Dakai Jin: Alibaba DAMO Academy USA, 860 Washington Street, 8F, NY, USA
- Ya'nan Wu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Shuiqing Zhao: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Runsheng Chang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Boyu Zhang: A.I R&D Center, Sanmed Biotech Inc., No. 266 Tongchang Road, Xiangzhou District, Zhuhai, Guangdong, China
- Xing Lu: A.I R&D Center, Sanmed Biotech Inc., T220 Trade St., San Diego, CA, USA
- Abdul Qayyum: ENIB, UMR CNRS 6285 LabSTICC, Brest, 29238, France
- Moona Mazher: Department of Computer Engineering and Mathematics, University Rovira i Virgili, Tarragona, Spain
- Qi Su: Shanghai Jiao Tong University, Shanghai, China
- Yonghuang Wu: School of Information Science and Technology, Fudan University, Shanghai, China
- Ying'ao Liu: University of Science and Technology of China, Hefei, Anhui, China
- Jiancheng Yang: Dianei Technology, Shanghai, China; EPFL, Lausanne, Switzerland
- Ashkan Pakzad: Medical Physics and Biomedical Engineering Department, University College London, London, UK
- Bojidar Rangelov: Center for Medical Image Computing, University College London, London, UK
- Jiayuan Sun: Department of Respiratory and Critical Care Medicine, Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai, China
- Guang-Zhong Yang: Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yun Gu: Institute of Medical Robotics; Institute of Image Processing and Pattern Recognition; Department of Automation, Shanghai Jiao Tong University, Shanghai, 200240, China
10
Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821] [DOI: 10.1016/j.compbiomed.2023.107268]
Abstract
The transformer was developed primarily for natural language processing. Recently, it has been adopted in, and shows promise for, the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structure of the transformer. We then survey recent progress of the transformer in MIA, organizing the applications by task: classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. The large number of experiments studied in this review illustrates that transformer-based methods outperform existing methods on multiple evaluation metrics. Finally, we discuss open challenges and future opportunities in this field. This task-modality review, with its up-to-date content, detailed information, and comprehensive comparisons, may greatly benefit the broad MIA community.
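The attention mechanism that this review recaps can be sketched in a few lines of NumPy. This is a minimal single-head illustration only; real transformers add learned query/key/value projections, multiple heads, and positional encodings, and the toy shapes below are our own.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 tokens with dimension d_k = 4
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((3, 4)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
assert out.shape == (3, 4)
assert np.allclose(w.sum(axis=-1), 1.0)  # each token's weights form a distribution
```

Each output token is a convex combination of the value vectors, with weights given by softmax-normalized query-key similarity; this is the building block the reviewed architectures stack and specialize.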
Affiliation(s)
- Zhaoshan Liu: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Qiujie Lv: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Ziduo Yang: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Yifan Li: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Chau Hung Lee: Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore
- Lei Shen: Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
11
Wagner DT, Tilmans L, Peng K, Niedermeier M, Rohl M, Ryan S, Yadav D, Takacs N, Garcia-Fraley K, Koso M, Dikici E, Prevedello LM, Nguyen XV. Artificial Intelligence in Neuroradiology: A Review of Current Topics and Competition Challenges. Diagnostics (Basel) 2023; 13:2670. [PMID: 37627929] [PMCID: PMC10453240] [DOI: 10.3390/diagnostics13162670]
Abstract
There is an expanding body of literature that describes the application of deep learning and other machine learning and artificial intelligence methods with potential relevance to neuroradiology practice. In this article, we performed a literature review to identify recent developments on the topics of artificial intelligence in neuroradiology, with particular emphasis on large datasets and large-scale algorithm assessments, such as those used in imaging AI competition challenges. Numerous applications relevant to ischemic stroke, intracranial hemorrhage, brain tumors, demyelinating disease, and neurodegenerative/neurocognitive disorders were discussed. The potential applications of these methods to spinal fractures, scoliosis grading, head and neck oncology, and vascular imaging were also reviewed. The AI applications examined perform a variety of tasks, including localization, segmentation, longitudinal monitoring, diagnostic classification, and prognostication. While research on this topic is ongoing, several applications have been cleared for clinical use and have the potential to augment the accuracy or efficiency of neuroradiologists.
Affiliation(s)
- Daniel T. Wagner: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Luke Tilmans: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Kevin Peng: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Matt Rohl: College of Arts and Sciences, The Ohio State University, Columbus, OH 43210, USA
- Sean Ryan: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Divya Yadav: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Noah Takacs: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Krystle Garcia-Fraley: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Mensur Koso: College of Medicine, The Ohio State University, Columbus, OH 43210, USA
- Engin Dikici: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Luciano M. Prevedello: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Xuan V. Nguyen: Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
12
Xiao H, Li L, Liu Q, Zhu X, Zhang Q. Transformers in medical image segmentation: A review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104791]
13
Piao Z, Gu YH, Jin H, Yoo SJ. Intracerebral hemorrhage CT scan image segmentation with HarDNet based transformer. Sci Rep 2023; 13:7208. [PMID: 37137921] [PMCID: PMC10156735] [DOI: 10.1038/s41598-023-33775-y]
Abstract
Previous studies on the segmentation of hemorrhage images were based on the U-Net model, an encoder-decoder architecture, but these models exhibit low parameter-passing efficiency between the encoder and decoder, large model size, and slow speed. To overcome these drawbacks, this study proposes TransHarDNet, an image segmentation model for the diagnosis of intracerebral hemorrhage in CT scans of the brain. In this model, the HarDNet block is applied to the U-Net architecture, and the encoder and decoder are connected using a transformer block. As a result, network complexity is reduced and inference speed improved while maintaining high performance compared with conventional models. The superiority of the proposed model was verified using 82,636 CT scan images showing five different types of hemorrhage to train and test the model. Experimental results showed that the proposed model achieved a Dice coefficient of 0.712 and an IoU of 0.597 on a test set comprising 1200 hemorrhage images, outperforming typical segmentation models such as U-Net, U-Net++, SegNet, PSPNet, and HarDNet. Moreover, the inference speed was 30.78 frames per second (FPS), faster than all encoder-decoder-based models except HarDNet.
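The Dice coefficient and IoU reported above are standard overlap measures for binary segmentation masks. A minimal NumPy sketch (the toy masks are illustrative, not the paper's data):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice = 2|P∩G| / (|P| + |G|); IoU = |P∩G| / |P∪G| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum()), inter / union

# Toy 2x3 masks: 2 overlapping pixels, 3 predicted, 3 ground-truth, union of 4
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_iou(pred, gt)
assert abs(dice - 2 / 3) < 1e-9  # 2*2 / (3+3)
assert abs(iou - 0.5) < 1e-9     # 2 / 4
```

Note that Dice is always at least as large as IoU for the same masks, which is why the paper's Dice (0.712) exceeds its IoU (0.597).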
Affiliation(s)
- Zhegao Piao: Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Yeong Hyeon Gu: Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Hailin Jin: Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Seong Joon Yoo: Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
14
Murugesan B, Liu B, Galdran A, Ayed IB, Dolz J. Calibrating segmentation networks with margin-based label smoothing. Med Image Anal 2023; 87:102826. [PMID: 37146441] [DOI: 10.1016/j.media.2023.102826]
Abstract
Despite the undeniable progress in visual recognition tasks fueled by deep neural networks, there is recent evidence showing that these models are poorly calibrated, resulting in over-confident predictions. The standard practice of minimizing the cross-entropy loss during training pushes the predicted softmax probabilities to match the one-hot label assignments. Nevertheless, this yields a pre-softmax activation of the correct class that is significantly larger than the remaining activations, which exacerbates the miscalibration problem. Recent observations from the classification literature suggest that loss functions embedding implicit or explicit maximization of the entropy of predictions yield state-of-the-art calibration performance. Despite these findings, the impact of these losses on the relevant task of calibrating medical image segmentation networks remains unexplored. In this work, we provide a unifying constrained-optimization perspective on current state-of-the-art calibration losses. Specifically, these losses can be viewed as approximations of a linear penalty (or a Lagrangian term) imposing equality constraints on logit distances. This points to an important limitation of such underlying equality constraints, whose ensuing gradients constantly push towards a non-informative solution, which might prevent the model from reaching the best compromise between discriminative performance and calibration during gradient-based optimization. Following our observations, we propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances. Comprehensive experiments on a variety of public medical image segmentation benchmarks demonstrate that our method sets novel state-of-the-art results for network calibration on these tasks, while discriminative performance is also improved. The code is available at https://github.com/Bala93/MarginLoss.
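The inequality-constraint idea can be sketched as a hinge penalty on logit distances: distances within the margin are untouched, only distances beyond it are pushed down. This is a simplified NumPy reading of the idea; the margin value is illustrative, and the authors' released loss combines such a term with cross-entropy.

```python
import numpy as np

def margin_logit_penalty(logits, margin=10.0):
    """Hinge penalty on logit distances to the winning logit. Only distances
    exceeding `margin` are penalized (inequality constraint), instead of
    pushing every distance toward zero (an equality constraint)."""
    d = logits.max(axis=-1, keepdims=True) - logits   # distances to max logit
    return float(np.maximum(d - margin, 0.0).sum(axis=-1).mean())

# Distances of 0 and 3 are within the margin of 10: no penalty, no gradient.
assert margin_logit_penalty(np.array([[5.0, 4.0, 2.0]])) == 0.0
# A distance of 25 exceeds the margin by 15 and is penalized accordingly.
assert margin_logit_penalty(np.array([[5.0, 4.0, -20.0]])) == 15.0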
Affiliation(s)
- Balamurali Murugesan: LIVIA, ÉTS Montréal, Canada; International Laboratory on Learning Systems (ILLS), McGill - ETS - MILA - CNRS - Université Paris-Saclay - CentraleSupélec, Canada
- Bingyuan Liu: LIVIA, ÉTS Montréal, Canada; International Laboratory on Learning Systems (ILLS), McGill - ETS - MILA - CNRS - Université Paris-Saclay - CentraleSupélec, Canada
- Ismail Ben Ayed: LIVIA, ÉTS Montréal, Canada; International Laboratory on Learning Systems (ILLS), McGill - ETS - MILA - CNRS - Université Paris-Saclay - CentraleSupélec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Canada
- Jose Dolz: LIVIA, ÉTS Montréal, Canada; International Laboratory on Learning Systems (ILLS), McGill - ETS - MILA - CNRS - Université Paris-Saclay - CentraleSupélec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Canada
15
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650] [PMCID: PMC10010286] [DOI: 10.1016/j.media.2023.102762]
Abstract
Transformer, one of the latest technological advances of deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask: can Transformer models transform medical imaging? In this paper, we attempt to respond to this inquiry. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and highlighting the key defining properties that characterize Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and exhibit current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and related areas. In particular, what distinguishes our review is its organization based on the Transformer's key defining properties, mostly derived from comparing the Transformer and CNN, and on the type of architecture, which specifies the manner in which the Transformer and CNN are combined, all helping readers to best understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
Affiliation(s)
- Jun Li: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang: Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman: Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou: Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China
16
Cabeza-Ruiz R, Velázquez-Pérez L, Pérez-Rodríguez R, Reetz K. ConvNets for automatic detection of polyglutamine SCAs from brain MRIs: state of the art applications. Med Biol Eng Comput 2023; 61:1-24. [PMID: 36385616] [DOI: 10.1007/s11517-022-02714-w]
Abstract
Polyglutamine spinocerebellar ataxias (polyQ SCAs) are a group of clinically and genetically heterogeneous neurodegenerative diseases characterized by loss of balance and motor coordination due to dysfunction of the cerebellum and its connections. The diagnosis of each type of polyQ SCA, alongside genetic tests, includes medical image analysis, and its automation may help specialists distinguish between types. Convolutional neural networks (ConvNets or CNNs) have recently been used for medical image processing, with outstanding results. In this work, we present the main clinical and imaging features of polyglutamine SCAs and the basics of CNNs. Finally, we review studies that have used this approach to automatically process brain medical images and that may be applied to SCA detection. We conclude by discussing the possible limitations of, and opportunities for, using ConvNets for SCA diagnosis in the future.
Affiliation(s)
- Luis Velázquez-Pérez: Cuban Academy of Sciences, La Habana, Cuba; Center for the Research and Rehabilitation of Hereditary Ataxias, Holguín, Cuba
- Roberto Pérez-Rodríguez: CAD/CAM Study Center, University of Holguín, Holguín, Cuba; Cuban Academy of Sciences, La Habana, Cuba
- Kathrin Reetz: Department of Neurology, RWTH Aachen University, Aachen, Germany
17
Praveenkumar S, Kalaiselvi T, Somasundaram K. Methods of Brain Extraction from Magnetic Resonance Images of Human Head: A Review. Crit Rev Biomed Eng 2023; 51:1-40. [PMID: 37581349] [DOI: 10.1615/critrevbiomedeng.2023047606]
Abstract
Medical images provide vital information to aid physicians in diagnosing diseases afflicting the organs of the human body. Magnetic resonance imaging is an important modality for capturing the soft tissues of the brain. Segmenting and extracting the brain is essential for studying its structure and pathological condition. Several methods have been developed for this purpose. Researchers in brain extraction or segmentation need to know the current status of the work that has been done; such information is also important for improving existing methods to obtain more accurate results or to reduce algorithmic complexity. In this paper, we review classical methods and convolutional neural network-based deep learning methods for brain extraction.
Affiliation(s)
- T Kalaiselvi: Department of Computer Science and Applications, Gandhigram Rural Institute, Gandhigram 624302, Tamil Nadu, India
18
Krithika alias AnbuDevi M, Suganthi K. Review of Semantic Segmentation of Medical Images Using Modified Architectures of UNET. Diagnostics (Basel) 2022; 12:3064. [PMID: 36553071] [PMCID: PMC9777361] [DOI: 10.3390/diagnostics12123064]
Abstract
In biomedical image analysis, information about the location and appearance of tumors and lesions is indispensable to aid doctors in treating and identifying the severity of diseases, so it is essential to segment tumors and lesions. MRI, CT, PET, ultrasound, and X-ray are the imaging systems used to obtain this information. The well-known semantic segmentation technique is used in medical image analysis to identify and label regions of images; it aims to divide images into regions with comparable characteristics, including intensity, homogeneity, and texture. UNET is a deep learning network that segments the critical features. However, UNET's basic architecture cannot accurately segment complex MRI images. This review introduces modified and improved UNET models suitable for increasing segmentation accuracy.
19

20
A Fuzzy Consensus Clustering Algorithm for MRI Brain Tissue Segmentation. Appl Sci (Basel) 2022. [DOI: 10.3390/app12157385]
Abstract
Brain tissue segmentation is an important component of the clinical diagnosis of brain diseases using multi-modal magnetic resonance (MR) imaging. Many unsupervised methods for brain tissue segmentation have been developed in the literature; the most commonly used are K-Means, Expectation-Maximization, and Fuzzy Clustering. Fuzzy clustering methods offer considerable benefits over the aforementioned methods, as they can handle brain images that are complex, largely uncertain, and imprecise. However, this approach suffers from the intrinsic noise and intensity inhomogeneity (IIH) in the data resulting from the acquisition process. To resolve these issues, we propose a fuzzy consensus clustering algorithm that defines a membership function resulting from a voting schema to cluster the pixels. In particular, we first pre-process the MRI data and employ several segmentation techniques based on traditional fuzzy sets and intuitionistic sets. We then adopt a voting schema to fuse the results of the applied clustering methods. Finally, to evaluate the proposed method, we use well-known performance measures (boundary, overlap, and volume measures) on two publicly available datasets (OASIS and IBSR18). The experimental results show the superior performance of the proposed method in comparison with the recent state of the art. The performance of the proposed method is also demonstrated on a real-world Autism Spectrum Disorder detection problem, with better accuracy than other existing methods.
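The voting schema for fusing the applied clustering methods can be sketched as follows. This is a simplified illustration that assumes cluster labels have already been put into correspondence across methods; the paper's membership function and its intuitionistic-fuzzy components are richer than this.

```python
import numpy as np

def consensus_membership(label_maps, n_clusters):
    """Fuse hard cluster labels from several segmentation methods into a fuzzy
    membership: a pixel's membership in a cluster is the fraction of methods
    that voted for that cluster (labels assumed aligned across methods)."""
    votes = np.zeros((label_maps[0].size, n_clusters))
    for labels in label_maps:
        votes[np.arange(labels.size), labels.ravel()] += 1
    return votes / len(label_maps)

# Three methods label the same three pixels; fused memberships are vote fractions.
maps = [np.array([0, 1, 1]), np.array([0, 1, 2]), np.array([0, 2, 1])]
m = consensus_membership(maps, n_clusters=3)
assert np.allclose(m[0], [1, 0, 0])        # unanimous vote for cluster 0
assert np.allclose(m[1], [0, 2/3, 1/3])    # majority vote for cluster 1
assert np.allclose(m.sum(axis=1), 1.0)     # memberships sum to one per pixel
```

A final hard segmentation can then take the argmax of each pixel's membership, while the soft values preserve the uncertainty the voting exposes.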
21
Dobri S, Chen JJ, Ross B. Insights from auditory cortex for GABA+ magnetic resonance spectroscopy studies of aging. Eur J Neurosci 2022; 56:4425-4444. [PMID: 35781900] [DOI: 10.1111/ejn.15755]
Abstract
Changes in levels of the inhibitory neurotransmitter γ-aminobutyric acid (GABA) may underlie aging-related changes in brain function. GABA and co-edited macromolecules (GABA+) can be measured with MEGA-PRESS magnetic resonance spectroscopy (MRS). The current study investigated how changes in the aging brain impact the interpretation of GABA+ measures in bilateral auditory cortices of healthy young and older adults. Structural changes during aging appeared as a decreasing proportion of grey matter in the MRS volume of interest and a corresponding increase in cerebrospinal fluid. GABA+ referenced to H2O without tissue correction declined in aging. This decline persisted after correcting for tissue differences in MR-visible H2O and relaxation times but vanished after considering the different abundance of GABA+ in grey and white matter. However, GABA+ referenced to creatine and N-acetyl aspartate (NAA), which showed no dependence on tissue composition, decreased in aging. All GABA+ measures showed hemispheric asymmetry in young but not older adults. The study also considered aging-related effects on tissue segmentation and the impact of co-edited macromolecules. Tissue segmentation differed significantly between commonly used algorithms, but aging-related effects on tissue-corrected GABA+ were consistent across methods. Auditory cortex macromolecule concentration did not change with age, indicating that a decline in GABA caused the decrease in the compound GABA+ measure. Most likely, the macromolecule contribution to GABA+ leads to underestimating an aging-related decrease in GABA. Overall, considering multiple GABA+ measures using different reference signals strengthened the support for an aging-related decline in auditory cortex GABA levels.
Affiliation(s)
- Simon Dobri: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- J Jean Chen: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Bernhard Ross: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
22
Niyas S, Pawan S, Anand Kumar M, Rajan J. Medical image segmentation with 3D convolutional neural networks: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.065]
23
Abstract
In the field of medical imaging, the division of an image into meaningful structures using image segmentation is an essential pre-processing step for analysis. Many studies have been carried out to solve the general problem of evaluating image segmentation results. One of the main focuses in the computer vision field is artificial intelligence algorithms for segmentation and classification, including machine learning and deep learning approaches. The main drawback of supervised segmentation approaches is that a large dataset of ground truth validated by medical experts is required. Accordingly, many research groups have developed segmentation approaches tailored to their specific needs. However, a generalised application aimed at visualizing, assessing, and comparing the results of different methods, facilitating the generation of a ground-truth repository, is not found in the recent literature. In this paper, a new graphical user interface application (MedicalSeg) for the management of medical imaging based on pre-processing and segmentation is presented. The objective is twofold: first, to create a test platform for comparing segmentation approaches, and second, to generate segmented images for ground truths that can later serve artificial intelligence tools. An experimental demonstration and a performance analysis discussion are presented.
24
Cavedo E, Tran P, Thoprakarn U, Martini JB, Movschin A, Delmaire C, Gariel F, Heidelberg D, Pyatigorskaya N, Ströer S, Krolak-Salmon P, Cotton F, Dos Santos CL, Dormont D. Validation of an automatic tool for the rapid measurement of brain atrophy and white matter hyperintensity: QyScore®. Eur Radiol 2022; 32:2949-2961. [PMID: 34973104] [PMCID: PMC9038894] [DOI: 10.1007/s00330-021-08385-9]
Abstract
OBJECTIVES QyScore® is an imaging analysis tool certified in Europe (CE marked) and the US (FDA cleared) for the automatic volumetry of grey and white matter (GM and WM, respectively), the hippocampus (HP), the amygdala (AM), and white matter hyperintensities (WMH). Here we compare QyScore® performance with the consensus of expert neuroradiologists. METHODS The Dice similarity coefficient (DSC) and the relative volume difference (RVD) for GM and WM volumes were calculated on 50 3DT1 images. DSC and the F1 metric were calculated for WMH on 130 3DT1 and FLAIR images. For each index, we identified thresholds of reliability based on a review of the current literature. We hypothesized that DSC/F1 scores obtained using QyScore® markers would be higher than the threshold and, conversely, that RVD scores would be lower. Regression analysis and Bland-Altman plots were obtained to evaluate QyScore® performance against the consensus of three expert neuroradiologists. RESULTS The lower bounds of the DSC/F1 confidence intervals were higher than the threshold for the GM, WM, HP, AM, and WMH, and the upper bounds of the RVD confidence intervals were below the threshold for the WM, GM, HP, and AM. Compared with the consensus of three expert neuroradiologists, QyScore® provides reliable performance for the automatic segmentation of GM, WM, HP, AM, and WMH volumes. CONCLUSIONS QyScore® represents a reliable medical device in comparison with the consensus of expert neuroradiologists. Therefore, QyScore® could be implemented in clinical trials and clinical routine to support the diagnosis and longitudinal monitoring of neurological diseases. KEY POINTS • QyScore® provides reliable automatic segmentation of brain structures in comparison with the consensus of three expert neuroradiologists. • QyScore® automatic segmentation can be performed on MRI images acquired with different vendors and acquisition protocols. In addition, the fast segmentation process saves time over manual and semi-automatic methods. • QyScore® could be implemented in clinical trials and clinical routine to support the diagnosis and longitudinal monitoring of neurological diseases.
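The two agreement metrics used in this validation are simple to state. As a minimal illustration (not the QyScore® implementation; function and array names are ours), the Dice similarity coefficient and the relative volume difference of a candidate mask against a reference mask can be computed as:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def rvd(seg, ref):
    """Relative volume difference of the candidate volume vs. the reference."""
    return (seg.sum() - ref.sum()) / ref.sum()
```

With a perfect segmentation the masks coincide, giving DSC = 1 and RVD = 0; the study's hypothesis is that DSC stays above a literature-derived threshold while RVD stays below one.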
Affiliation(s)
- Enrica Cavedo
- Qynapse SAS, 130 rue de Lourmel, 75015, Paris, France.
- Philippe Tran
- Qynapse SAS, 130 rue de Lourmel, 75015, Paris, France
- Equipe-Projet ARAMIS, ICM, CNRS UMR 7225, Inserm U1117, Sorbonne Université UMR_S 1127, Centre Inria de Paris, Groupe Hospitalier Pitié-Salpêtrière Charles Foix, Faculté de Médecine Sorbonne Université, Paris, France
- Florent Gariel
- Department of Neuroradiology, University Hospital of Bordeaux, Bordeaux, France
- Damien Heidelberg
- Faculty of Medicine, Claude-Bernard Lyon 1 University, 69000, Lyon, France
- Service de Radiologie and Laboratoire d'anatomie de Rockefeller, centre hospitalier Lyon Sud, hospices civils de Lyon, 69000, Lyon, France
- Nadya Pyatigorskaya
- Department of Neuroradiology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Sorbonne Université UMR_S 1127, Paris, France
- Sébastian Ströer
- Department of Neuroradiology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Sorbonne Université UMR_S 1127, Paris, France
- Pierre Krolak-Salmon
- Clinical and Research Memory Centre of Lyon, Hospices Civils de Lyon, Lyon, France
- University of Lyon, Lyon, France
- INSERM, U1028; UMR CNRS 5292, Lyon Neuroscience Research Center, Lyon, France
- Francois Cotton
- Radiology Department, centre hospitalier Lyon-Sud, hospices civils de Lyon, 69310, Pierre-Bénite, France
- Inserm U1044, CNRS UMR 5220, CREATIS, Université Lyon-1, 69100, Villeurbanne, France
- Didier Dormont
- Equipe-Projet ARAMIS, ICM, CNRS UMR 7225, Inserm U1117, Sorbonne Université UMR_S 1127, Centre Inria de Paris, Groupe Hospitalier Pitié-Salpêtrière Charles Foix, Faculté de Médecine Sorbonne Université, Paris, France
- Department of Neuroradiology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Sorbonne Université UMR_S 1127, Paris, France
25
Balwant M. A Review on Convolutional Neural Networks for Brain Tumor Segmentation: Methods, Datasets, Libraries, and Future Directions. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
26
|
Bose S, Sur Chowdhury R, Das R, Maulik U. Dense Dilated Deep Multiscale Supervised U-Network for biomedical image segmentation. Comput Biol Med 2022; 143:105274. [PMID: 35123135 DOI: 10.1016/j.compbiomed.2022.105274] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 01/26/2022] [Accepted: 01/26/2022] [Indexed: 12/24/2022]
Abstract
Biomedical image segmentation is essential for computerized medical image analysis. Deep learning algorithms allow us to design state-of-the-art models for solving segmentation problems. The U-Net and its variants have provided positive results across various datasets. However, existing networks have the same receptive field at each level, and the models are supervised only at the shallow level. Addressing these two points, we propose D3MSU-Net, in which the field of view at each level varies with the depth of the resolution layer and the model is supervised at every resolution level. We evaluated our network on eight benchmark datasets: Electron Microscopy, Lung segmentation, Montgomery Chest X-ray, Covid-Radiopaedia, Wound, Medetec, Brain MRI, and the Covid-19 lung CT dataset. Additionally, we report the performance of various ablations. The experimental results show the superiority of the proposed network. The proposed D3MSU-Net and ablation models are available at www.github.com/shirshabose/D3MSUNET.
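The effect of varying the field of view with depth, as D3MSU-Net does with dilated convolutions, can be illustrated with a standard receptive-field calculation for a stack of stride-1 dilated convolutions (an illustrative sketch, not the authors' code):

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    layers: sequence of (kernel_size, dilation) pairs.
    Each stride-1 layer grows the field of view by (kernel_size - 1) * dilation.
    """
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf
```

Three 3x3 layers with dilations 1, 2, and 4 see a 15x15 window (1 + 2 + 4 + 8), whereas the same layers undilated see only 7x7: dilation widens the view without adding parameters.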
Affiliation(s)
- Shirsha Bose
- Department of Electronics and Telecommunication Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India.
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India.
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India.
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, 188, Raja S.C. Mallick Rd, Kolkata, 700032, West Bengal, India.
27
Bron EE, Klein S, Reinke A, Papma JM, Maier-Hein L, Alexander DC, Oxtoby NP. Ten years of image analysis and machine learning competitions in dementia. Neuroimage 2022; 253:119083. [PMID: 35278709 DOI: 10.1016/j.neuroimage.2022.119083] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 02/18/2022] [Accepted: 03/08/2022] [Indexed: 11/24/2022] Open
Abstract
Machine learning methods exploiting multi-parametric biomarkers, especially based on neuroimaging, have huge potential to improve early diagnosis of dementia and to predict which individuals are at risk of developing dementia. To benchmark algorithms in the field of machine learning and neuroimaging in dementia and to assess their potential for use in clinical practice and clinical trials, seven grand challenges have been organized in the last decade: MIRIAD (2012), Alzheimer's Disease Big Data DREAM (2014), CADDementia (2014), Machine Learning Challenge (2014), MCI Neuroimaging (2017), TADPOLE (2017), and the Predictive Analytics Competition (2019). Based on two challenge evaluation frameworks, we analyzed how these grand challenges complement each other regarding research questions, datasets, validation approaches, results, and impact. The seven grand challenges addressed questions related to screening, clinical status estimation, prediction, and monitoring in (pre-clinical) dementia. There was little overlap in clinical questions, tasks, and performance metrics. While this provides insight into a broad range of questions, it also limits the validation of results across challenges. The validation process itself was largely comparable between challenges, using similar methods to ensure objective comparison, uncertainty estimation, and statistical testing. In general, winning algorithms performed rigorous data pre-processing and combined a wide range of input features. Despite high state-of-the-art performance, most of the methods evaluated by the challenges are not used clinically. To increase impact, future challenges could pay more attention to statistical analysis of which factors (i.e., features, models) relate to higher performance, to clinical questions beyond Alzheimer's disease, and to using testing data beyond the Alzheimer's Disease Neuroimaging Initiative.
Grand challenges would be an ideal venue for assessing the generalizability of algorithm performance to unseen data of other cohorts. Key for increasing impact in this way are larger testing data sizes, which could be reached by sharing algorithms rather than data to exploit data that cannot be shared. Given the potential and lessons learned in the past ten years, we are excited by the prospects of grand challenges in machine learning and neuroimaging for the next ten years and beyond.
Affiliation(s)
- Esther E Bron
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands.
- Stefan Klein
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands.
- Annika Reinke
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg 69120, Germany.
- Janne M Papma
- Department of Neurology, Erasmus MC, Rotterdam, the Netherlands.
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg 69120, Germany.
- Daniel C Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, London WC1E 6BT, UK.
- Neil P Oxtoby
- Centre for Medical Image Computing, Department of Computer Science, University College London, London WC1E 6BT, UK.
28
Ismail TF, Strugnell W, Coletti C, Božić-Iven M, Weingärtner S, Hammernik K, Correia T, Küstner T. Cardiac MR: From Theory to Practice. Front Cardiovasc Med 2022; 9:826283. [PMID: 35310962 PMCID: PMC8927633 DOI: 10.3389/fcvm.2022.826283] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 01/17/2022] [Indexed: 01/10/2023] Open
Abstract
Cardiovascular disease (CVD) is the leading single cause of morbidity and mortality, causing over 17.9 million deaths worldwide per year with associated costs of over $800 billion. Improving the prevention, diagnosis, and treatment of CVD is therefore a global priority. Cardiovascular magnetic resonance (CMR) has emerged as a clinically important technique for the assessment of cardiovascular anatomy, function, perfusion, and viability. However, the diversity and complexity of imaging, reconstruction, and analysis methods pose some limitations to the widespread use of CMR. Especially in view of recent developments in the field of machine learning that provide novel solutions to existing problems, it is necessary to bridge the gap between the clinical and scientific communities. This review covers five essential aspects of CMR to provide a comprehensive overview, ranging from CVDs to CMR pulse sequence design, acquisition protocols, motion handling, image reconstruction, and quantitative analysis of the obtained data. (1) The basic MR physics of CMR is introduced, and the pulse sequence building blocks commonly used in CMR imaging are presented. Sequences containing these building blocks are formed for parametric mapping and functional imaging techniques. Commonly observed artifacts and potential countermeasures are discussed for these methods. (2) CMR methods for identifying CVDs are illustrated. Basic anatomy and functional processes are described to understand cardiac pathologies and how they can be captured by CMR imaging. (3) The planning and conduct of a complete CMR exam targeted at the respective pathology are shown. Building blocks are illustrated to create an efficient and patient-centered workflow, and further strategies to cope with challenging patients are discussed. (4) Imaging acceleration and reconstruction techniques are presented that enable acquisition of the spatial, temporal, and parametric dynamics of the cardiac cycle.
The handling of respiratory and cardiac motion strategies as well as their integration into the reconstruction processes is showcased. (5) Recent advances on deep learning-based reconstructions for this purpose are summarized. Furthermore, an overview of novel deep learning image segmentation and analysis methods is provided with a focus on automatic, fast and reliable extraction of biomarkers and parameters of clinical relevance.
Affiliation(s)
- Tevfik F. Ismail
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Cardiology Department, Guy's and St Thomas' Hospital, London, United Kingdom
- Wendy Strugnell
- Queensland X-Ray, Mater Hospital Brisbane, Brisbane, QLD, Australia
- Chiara Coletti
- Magnetic Resonance Systems Lab, Delft University of Technology, Delft, Netherlands
- Maša Božić-Iven
- Magnetic Resonance Systems Lab, Delft University of Technology, Delft, Netherlands
- Computer Assisted Clinical Medicine, Heidelberg University, Mannheim, Germany
- Kerstin Hammernik
- Lab for AI in Medicine, Technical University of Munich, Munich, Germany
- Department of Computing, Imperial College London, London, United Kingdom
- Teresa Correia
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Centre of Marine Sciences, Faro, Portugal
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospital of Tübingen, Tübingen, Germany
29
Huang J, Ding W, Lv J, Yang J, Dong H, Del Ser J, Xia J, Ren T, Wong ST, Yang G. Edge-enhanced dual discriminator generative adversarial network for fast MRI with parallel imaging using multi-view information. APPL INTELL 2022; 52:14693-14710. [PMID: 36199853 PMCID: PMC9526695 DOI: 10.1007/s10489-021-03092-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/09/2021] [Indexed: 12/24/2022]
Abstract
In clinical medicine, magnetic resonance imaging (MRI) is one of the most important tools for diagnosis, triage, prognosis, and treatment planning. However, MRI suffers from an inherently slow data acquisition process because data are collected sequentially in k-space. In recent years, most MRI reconstruction methods proposed in the literature have focused on holistic image reconstruction rather than on enhancing edge information. This work departs from that general trend by elaborating on the enhancement of edge information. Specifically, we introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction that incorporates multi-view information. The dual discriminator design aims to improve the edge information in MRI reconstruction: one discriminator is used for holistic image reconstruction, whereas the other is responsible for enhancing edge information. An improved U-Net with local and global residual learning is proposed for the generator, with frequency channel attention blocks (FCA blocks) embedded to incorporate attention mechanisms. A content loss is introduced to train the generator for better reconstruction quality. We performed comprehensive experiments on the Calgary-Campinas public brain MR dataset and compared our method with state-of-the-art MRI reconstruction methods. Ablation studies of residual learning were conducted on the MICCAI13 dataset to validate the proposed modules. Results show that our PIDD-GAN provides high-quality reconstructed MR images with well-preserved edge information. The single-image reconstruction time is below 5 ms, which meets the demand for fast processing.
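Feeding a discriminator edge information requires an explicit edge map. One common choice (used here purely as an illustration; the paper's exact edge operator is not reproduced) is the Sobel gradient magnitude:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude of a 2D image via 3x3 Sobel kernels."""
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")  # replicate borders so output matches input size
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 3, j:j + 3]
            gx[i, j] = (window * kx).sum()  # horizontal gradient
            gy[i, j] = (window * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)
```

In the dual-discriminator setting, one discriminator would see the reconstructed image and the other its edge map, so that blurred edges are penalised explicitly rather than averaged away by a pixel-wise loss.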
Affiliation(s)
- Jiahao Huang
- College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China
- National Heart and Lung Institute, Imperial College London, London, UK
- Weiping Ding
- School of Information Science and Technology, Nantong University, 226019 Nantong, China
- Jun Lv
- School of Computer and Control Engineering, Yantai University, 264005 Yantai, China
- Jingwen Yang
- Department of Prosthodontics, Peking University School and Hospital of Stomatology, Beijing, China
- Hao Dong
- Center on Frontiers of Computing Studies, Peking University, Beijing, China
- Javier Del Ser
- TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain
- Jun Xia
- Department of Radiology, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University Health Science Center, Shenzhen, China
- Tiaojuan Ren
- College of Information Science and Technology, Zhejiang Shuren University, 310015 Hangzhou, China
- Stephen T. Wong
- Systems Medicine and Bioengineering Department, Departments of Radiology and Pathology, Houston Methodist Cancer Center, Houston Methodist Hospital, Weill Cornell Medicine, 77030 Houston, TX, USA
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
30
Srikrishna M, Heckemann RA, Pereira JB, Volpe G, Zettergren A, Kern S, Westman E, Skoog I, Schöll M. Comparison of Two-Dimensional- and Three-Dimensional-Based U-Net Architectures for Brain Tissue Classification in One-Dimensional Brain CT. Front Comput Neurosci 2022; 15:785244. [PMID: 35082608 PMCID: PMC8784554 DOI: 10.3389/fncom.2021.785244] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 12/02/2021] [Indexed: 11/13/2022] Open
Abstract
Brain tissue segmentation plays a crucial role in feature extraction, volumetric quantification, and morphometric analysis of brain scans. For the assessment of brain structure and integrity, CT is a non-invasive, cheaper, faster, and more widely available modality than MRI. However, the clinical application of CT is mostly limited to the visual assessment of brain integrity and the exclusion of co-pathologies. We have previously developed two-dimensional (2D) deep learning-based segmentation networks that successfully classified brain tissue in head CT. Recently, deep learning-based MRI segmentation models have successfully used patch-based three-dimensional (3D) segmentation networks. In this study, we aimed to develop patch-based 3D segmentation networks for CT brain tissue classification and to compare the performance of 2D- and 3D-based segmentation networks in anisotropic CT scans. For this purpose, we developed 2D and 3D U-Net-based deep learning models that were trained and validated on MR-derived segmentations from scans of 744 participants of the Gothenburg H70 Cohort with both CT and T1-weighted MRI scans acquired close in time to each other. Segmentation performance of both 2D and 3D models was evaluated on 234 unseen datasets using measures of distance, spatial similarity, and tissue volume. Single-task slice-wise 2D U-Nets performed better than multitask patch-based 3D U-Nets in CT brain tissue classification. These findings support the use of 2D U-Nets to segment brain tissue in one-dimensional (1D) CT. This could increase the application of CT to detect brain abnormalities in clinical settings.
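The patch-based 3D route compared above relies on tiling the volume before inference. A minimal sketch of non-overlapping 3D patch extraction (illustrative only; names and patch size are our assumptions, not the authors' pipeline):

```python
import numpy as np

def extract_patches(volume, size=32):
    """Split a 3D volume into non-overlapping size**3 patches, zero-padding
    each axis up to the next multiple of `size` so every voxel is covered."""
    pads = [(0, (-dim) % size) for dim in volume.shape]
    padded = np.pad(volume, pads)
    patches = [
        padded[z:z + size, y:y + size, x:x + size]
        for z in range(0, padded.shape[0], size)
        for y in range(0, padded.shape[1], size)
        for x in range(0, padded.shape[2], size)
    ]
    return np.stack(patches)
```

A 2D slice-wise model instead consumes the volume one slice at a time, which sidesteps the anisotropic voxel spacing of clinical CT, the property to which the study attributes the 2D models' advantage.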
Affiliation(s)
- Meera Srikrishna
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden
- Rolf A. Heckemann
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, Gothenburg, Sweden
- Joana B. Pereira
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Memory Research Unit, Department of Clinical Sciences, Lund University, Malmö, Sweden
- Giovanni Volpe
- Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Anna Zettergren
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Silke Kern
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Region Västra Götaland, Sahlgrenska University Hospital, Psychiatry, Cognition and Old Age Psychiatry Clinic, Gothenburg, Sweden
- Eric Westman
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Ingmar Skoog
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Region Västra Götaland, Sahlgrenska University Hospital, Psychiatry, Cognition and Old Age Psychiatry Clinic, Gothenburg, Sweden
- Michael Schöll (correspondence)
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden
- Dementia Research Centre, Institute of Neurology, University College London, London, United Kingdom
- Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden
31
Çelik G, Talu MF. A new 3D MRI segmentation method based on Generative Adversarial Network and Atrous Convolution. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103155] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
32
Ferro DA, Kuijf HJ, Hilal S, van Veluw SJ, van Veldhuizen D, Venketasubramanian N, Tan BY, Biessels GJ, Chen C. Association Between Cerebral Cortical Microinfarcts and Perilesional Cortical Atrophy on 3T MRI. Neurology 2021; 98:e612-e622. [PMID: 34862322 DOI: 10.1212/wnl.0000000000013140] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 11/16/2021] [Indexed: 11/15/2022] Open
Abstract
BACKGROUND AND OBJECTIVES Cerebral cortical microinfarcts (CMIs) are a novel MRI marker of cerebrovascular disease (CeVD) that predicts accelerated cognitive decline. The presence of CMIs is known to be associated with global cortical atrophy, although the mechanism linking the two is unclear. Our primary objective was to examine the relation between CMIs and cortical atrophy and to establish possible perilesional atrophy surrounding CMIs. Our secondary objective was to examine the role of cortical atrophy in CMI-associated cognitive impairment. METHODS Patients were recruited from two Singapore memory clinics between December 2010 and September 2013 and included if they received a diagnosis of no objective cognitive impairment, cognitive impairment (with or without a history of stroke), or Alzheimer's or vascular dementia. Cortical thickness, chronic cortical microinfarcts, and MRI markers of CeVD were assessed on 3T MRI. Patients underwent cognitive testing. Cortical thickness was compared globally between patients with and without CMIs; regionally within individual patients with CMIs, comparing brain regions with CMIs to the corresponding contralateral regions without CMIs; and locally within individual patients in a 50 mm radius around CMIs. Global cortical thickness was analyzed as a mediator in the relation between CMIs and cognitive performance. RESULTS Of the 238 patients (mean age 72.5, SD 9.1 years) enrolled, 75 had ≥1 CMIs. Patients with CMIs had a 2.1% lower global cortical thickness (B=-.049 mm, 95% CI [-.091; -.007], p=.022) compared to patients without CMIs, after correction for age, sex, education, and intracranial volume. In patients with CMIs, cortical thickness in brain regions with CMIs was 2.2% lower than in contralateral regions without CMIs (B=-.048 mm [-.071; -.026], p<.001). In a 20 mm radius area surrounding the CMI core, cortical thickness was lower than in the area 20-50 mm from the CMI core (mean difference -.06 mm, 95% CI [-.10; -.02], p=.002).
Global cortical thickness was a significant mediator in the relationship between CMI presence and cognitive performance as measured with the Mini-Mental State Examination (B=-.12 [-.22; -.01], p=.025). DISCUSSION We found cortical atrophy surrounding CMIs, suggesting a perilesional effect in a cortical area many times larger than the CMI core. Our findings support the notion that CMIs affect brain structure beyond the actual lesion site.
Affiliation(s)
- Doeschka A Ferro
- Department of Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Hugo J Kuijf
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands
- Saima Hilal
- Memory Aging and Cognition Centre, Department of Pharmacology, National University of Singapore, Singapore
- Susanne J van Veluw
- Department of Neurology, J.P.K. Stroke Research Center, Massachusetts General Hospital, Boston, MA, USA
- Geert Jan Biessels
- Department of Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands
- Christopher Chen
- Memory Aging and Cognition Centre, Department of Pharmacology, National University of Singapore, Singapore
33
Srikrishna M, Pereira JB, Heckemann RA, Volpe G, van Westen D, Zettergren A, Kern S, Wahlund LO, Westman E, Skoog I, Schöll M. Deep learning from MRI-derived labels enables automatic brain tissue classification on human brain CT. Neuroimage 2021; 244:118606. [PMID: 34571160 DOI: 10.1016/j.neuroimage.2021.118606] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Revised: 09/15/2021] [Accepted: 09/20/2021] [Indexed: 11/25/2022] Open
Abstract
Automatic methods for feature extraction, volumetry, and morphometric analysis in clinical neuroscience typically operate on images obtained with magnetic resonance (MR) imaging equipment. Although CT scans are less expensive to acquire and more widely available than MR scans, their application is currently limited to the visual assessment of brain integrity and the exclusion of co-pathologies. CT has rarely been used for tissue classification because the contrast between grey matter and white matter was considered insufficient. In this study, we propose an automatic method for segmenting grey matter (GM), white matter (WM), cerebrospinal fluid (CSF), and intracranial volume (ICV) from head CT images. A U-Net deep learning model was trained and validated on CT images with MRI-derived segmentation labels. We used data from 744 participants of the Gothenburg H70 Birth Cohort Studies for whom CT and T1-weighted MR images had been acquired on the same day. Our proposed model predicted brain tissue classes accurately from unseen CT images (Dice coefficients of 0.79, 0.82, 0.75, 0.93 and 0.98 for GM, WM, CSF, brain volume and ICV, respectively). To contextualize these results, we generated benchmarks based on established MR-based methods and intentional image degradation. Our findings demonstrate that CT-derived segmentations can be used to delineate and quantify brain tissues, opening new possibilities for the use of CT in clinical practice and research.
Collapse
Affiliation(s)
- Meera Srikrishna
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden; Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden
- Joana B Pereira
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Clinical Memory Research Unit, Department of Clinical Sciences, Lund University, Malmo, Sweden
- Rolf A Heckemann
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, Gothenburg, Sweden
- Giovanni Volpe
- Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Danielle van Westen
- Department of Clinical Sciences, Diagnostic Radiology, Lund University, Lund, Sweden; Department of Imaging and Function, Skånes University Hospital, Lund, Sweden
- Anna Zettergren
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Silke Kern
- Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Mölndal, Sweden
- Lars-Olof Wahlund
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Eric Westman
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Ingmar Skoog
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Michael Schöll
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden; Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden; Dementia Research Centre, Institute of Neurology, University College London, London, UK; Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden.
34
Palraj K, Kalaivani V. Predicting the abnormality of brain and compute the cognitive power of human using deep learning techniques using functional magnetic resonance images. Soft comput 2021. [DOI: 10.1007/s00500-021-06292-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
35
Svanera M, Benini S, Bontempi D, Muckli L. CEREBRUM-7T: Fast and Fully Volumetric Brain Segmentation of 7 Tesla MR Volumes. Hum Brain Mapp 2021; 42:5563-5580. [PMID: 34598307 PMCID: PMC8559470 DOI: 10.1002/hbm.25636] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 06/14/2021] [Accepted: 08/09/2021] [Indexed: 01/16/2023] Open
Abstract
Ultra-high-field magnetic resonance imaging (MRI) enables sub-millimetre resolution imaging of the human brain, allowing the study of functional circuits of cortical layers at the meso-scale. An essential step in many functional and structural neuroimaging studies is segmentation, the operation of partitioning MR images into anatomical structures. Despite recent efforts in brain imaging analysis, the literature lacks accurate and fast methods for segmenting 7-tesla (7T) brain MRI. Here we present CEREBRUM-7T, an optimised end-to-end convolutional neural network that allows fully automatic segmentation of a whole 7T T1w MRI brain volume at once, without partitioning the volume, pre-processing, or aligning it to an atlas. The trained model produces accurate multi-structure segmentation masks for six classes plus background in only a few seconds. The experimental part, a combination of objective numerical evaluations and subjective analysis, confirms that the proposed solution outperforms the training labels it was trained on and is suitable for neuroimaging studies, such as layer functional MRI studies. Taking advantage of a fine-tuning operation on a reduced set of volumes, we also show how CEREBRUM-7T can be applied effectively to data from different sites. Furthermore, we release the code, 7T data, and other materials, including the training labels and the Turing test.
Affiliation(s)
- Michele Svanera
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| | - Sergio Benini
- Department of Information Engineering, University of Brescia, Brescia, Italy
| | - Dennis Bontempi
- Department of Information Engineering, University of Brescia, Brescia, Italy
| | - Lars Muckli
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| |
Collapse
|
36
|
Inglese F, Jaarsma-Coes MG, Steup-Beekman GM, Monahan R, Huizinga T, van Buchem MA, Ronen I, de Bresser J. Neuropsychiatric systemic lupus erythematosus is associated with a distinct type and shape of cerebral white matter hyperintensities. Rheumatology (Oxford) 2021; 61:2663-2671. [PMID: 34730801] [PMCID: PMC9157072] [DOI: 10.1093/rheumatology/keab823]
Abstract
Objectives Advanced white matter hyperintensity (WMH) markers on brain MRI may help reveal underlying mechanisms and aid in the diagnosis of different phenotypes of SLE patients experiencing neuropsychiatric (NP) manifestations. Methods In this prospective cohort study, we included a clinically well-defined cohort of 155 patients consisting of 38 patients with NPSLE (26 inflammatory and 12 ischaemic phenotype) and 117 non-NPSLE patients. Differences in 3 T MRI WMH markers (volume, type and shape) were compared between patients with NPSLE and non-NPSLE and between patients with inflammatory and ischaemic NPSLE by linear and logistic regression analyses corrected for age, sex and intracranial volume. Results Compared with non-NPSLE [92% female; mean age 42 (13) years], patients with NPSLE [87% female; mean age 40 (14) years] showed a higher total WMH volume [B (95% CI): 0.46 (0.07–0.86); P = 0.021], a higher periventricular/confluent WMH volume [0.46 (0.06–0.86); P = 0.024], a higher occurrence of the periventricular with deep WMH type [0.32 (0.13–0.77); P = 0.011], a higher number of deep WMH lesions [3.06 (1.21–4.90); P = 0.001] and a more complex WMH shape [convexity: −0.07 (−0.12 to −0.02); P = 0.011; concavity index: 0.05 (0.01–0.08); P = 0.007]. WMH shape was more complex in inflammatory NPSLE patients [89% female; mean age 39 (15) years] compared with patients with the ischaemic phenotype [83% female; mean age 41 (11) years] [concavity index: 0.08 (0.01–0.15); P = 0.034]. Conclusion We demonstrated that patients with NPSLE showed a higher periventricular/confluent WMH volume and more complex shape of WMH compared with non-NPSLE patients. This finding was particularly significant in inflammatory NPSLE patients, suggesting different or more severe underlying pathophysiological abnormalities.
Affiliation(s)
- Francesca Inglese, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Gerda M Steup-Beekman, Department of Rheumatology, Leiden University Medical Center, Leiden, the Netherlands; Department of Rheumatology, Haaglanden Medical Center, The Hague, the Netherlands
- Rory Monahan, Department of Rheumatology, Leiden University Medical Center, Leiden, the Netherlands
- Tom Huizinga, Department of Rheumatology, Leiden University Medical Center, Leiden, the Netherlands
- Mark A van Buchem, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Itamar Ronen, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Jeroen de Bresser, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands

37
HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:7467261. [PMID: 34630994] [PMCID: PMC8500745] [DOI: 10.1155/2021/7467261]
Abstract
Multimodal image segmentation is a critical problem in medical image analysis. Traditional deep learning methods rely entirely on CNNs for encoding given images, leading to a lack of long-range dependencies and poor generalization performance. Recently, a series of Transformer-based methods has emerged in the field of image processing, bringing strong generalization and performance in various tasks. On the other hand, traditional CNNs have their own advantages, such as rapid convergence and local representations. Therefore, we analyze a hybrid multimodal segmentation method based on Transformers and CNNs and propose a novel architecture, the HybridCTrm network. We conduct experiments using HybridCTrm on two benchmark datasets and compare it with HyperDenseNet, a network based entirely on CNNs. Results show that HybridCTrm outperforms HyperDenseNet on most of the evaluation metrics. Furthermore, we analyze the influence of the depth of the Transformer on performance. Finally, we visualize the results and carefully explore how our hybrid method improves segmentations.
38
Delisle PL, Anctil-Robitaille B, Desrosiers C, Lombaert H. Realistic image normalization for multi-domain segmentation. Med Image Anal 2021; 74:102191. [PMID: 34509168] [DOI: 10.1016/j.media.2021.102191]
Abstract
Image normalization is a building block in medical image analysis. Conventional approaches are customarily employed on a per-dataset basis. This strategy, however, prevents current normalization algorithms from fully exploiting the complex joint information available across multiple datasets, and ignoring such joint information has a direct impact on downstream segmentation algorithms. This paper proposes to revisit the conventional image normalization approach by instead learning a common normalizing function across multiple datasets. Jointly normalizing multiple datasets is shown to yield consistent normalized images as well as improved image segmentation when intensity shifts are large. To do so, a fully automated adversarial and task-driven normalization approach is employed, as it facilitates the training of realistic and interpretable images while keeping performance on par with the state of the art. The adversarial training of our network aims at finding the optimal transfer function to jointly improve both segmentation accuracy and the generation of realistic images. We evaluated the performance of our normalizer on both infant and adult brain images from the iSEG, MRBrainS and ABIDE datasets. The results indicate that our contribution provides improved realism in the normalized images, while retaining segmentation accuracy on par with state-of-the-art learnable normalization approaches.
Affiliation(s)
- Herve Lombaert, Department of Computer and Software Engineering, ETS Montreal, Canada

39
Palraj K, Kalaivani V. Deep learning methods for predicting brain abnormalities and compute human cognitive power using fMRI. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-202069]
Abstract
In modern times, digital medical images play a significant role in clinical diagnosis, enabling earlier treatment of patients. Magnetic Resonance Imaging (MRI) is one of the most advanced medical imaging modalities; it facilitates scanning various parts of the human body, such as the head, chest, abdomen, and pelvis, to identify diseases. Numerous studies in this discipline have proposed different algorithms, techniques, and methods for analyzing medical digital images, especially MRI. Most of them have mainly focused on identifying and classifying images as either normal or abnormal. Computing brainpower is essential to understand and handle various brain diseases efficiently in critical situations. This paper sets out to design and implement a computer-aided framework that enhances the identification of humans' cognitive power from their MRI images. The proposed framework converts 3D DICOM images into 2D medical images and performs preprocessing, enhancement, learning, and extraction of various image information to classify each image as normal or abnormal and estimate the brain's cognitive power. This study widens the efficient use of machine learning methods, applying a Voxel Residual Network (VRN) with a multimodality fusion architecture to learn and analyze the images for classification and prediction of cognitive power. The experimental results show that the proposed framework performs better than existing approaches.
Affiliation(s)
- K. Palraj, AP, CSE, Srividya College of Engineering & Technology, Virudhunagar, Tamilnadu, India
- V. Kalaivani, CSE, National Engineering College, Kovilpatti, Tamilnadu, India

40
Fick T, van Doormaal JAM, Tosic L, van Zoest RJ, Meulstee JW, Hoving EW, van Doormaal TPC. Fully automatic brain tumor segmentation for 3D evaluation in augmented reality. Neurosurg Focus 2021; 51:E14. [PMID: 34333477] [DOI: 10.3171/2021.5.focus21200]
Abstract
OBJECTIVE For currently available augmented reality workflows, 3D models need to be created with manual or semiautomatic segmentation, which is a time-consuming process. The authors created an automatic segmentation algorithm that generates 3D models of skin, brain, ventricles, and contrast-enhancing tumor from a single T1-weighted MR sequence and embedded this model into an automatic workflow for 3D evaluation of anatomical structures with augmented reality in a cloud environment. In this study, the authors validate the accuracy and efficiency of this automatic segmentation algorithm for brain tumors and compare it with a manually segmented ground truth set. METHODS Fifty contrast-enhanced T1-weighted sequences of patients with contrast-enhancing lesions measuring at least 5 cm3 were included. All slices of the ground truth set were manually segmented. The same scans were subsequently run in the cloud environment for automatic segmentation. Segmentation times were recorded. The accuracy of the algorithm was compared with that of manual segmentation and evaluated in terms of Sørensen-Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile of Hausdorff distance (HD95). RESULTS The mean ± SD computation time of the automatic segmentation algorithm was 753 ± 128 seconds. The mean ± SD DSC was 0.868 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm. Meningiomas (mean DSC 0.89, median 0.92) showed greater DSC than metastases (mean 0.84, median 0.85). Automatic segmentation was more accurate for supratentorial metastases (mean DSC 0.86, median 0.87; mean HD95 3.62 mm, median 3.11 mm) than for infratentorial metastases (mean DSC 0.82, median 0.81; mean HD95 5.26 mm, median 4.72 mm).
CONCLUSIONS The automatic cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in everyday clinical practice by providing 3D augmented reality visualization of contrast-enhancing intracranial lesions measuring at least 5 cm3. The next steps involve incorporation of other sequences and improving accuracy with 3D fine-tuning in order to expand the scope of augmented reality workflow.
Affiliation(s)
- Tim Fick, Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Jesse A M van Doormaal, Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Lazar Tosic, Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland
- Renate J van Zoest, Department of Neurology and Neurosurgery, Curaçao Medical Center, Willemstad, Curaçao
- Jene W Meulstee, Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Eelco W Hoving, Department of Neuro-oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P C van Doormaal, Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Neurosurgery, University Hospital of Zürich, Zürich, Switzerland

41
Gordon S, Kodner B, Goldfryd T, Sidorov M, Goldberger J, Raviv TR. An atlas of classifiers-a machine learning paradigm for brain MRI segmentation. Med Biol Eng Comput 2021; 59:1833-1849. [PMID: 34313921] [DOI: 10.1007/s11517-021-02414-x]
Abstract
We present the Atlas of Classifiers (AoC), a conceptually novel framework for brain MRI segmentation. The AoC is a spatial map of voxel-wise multinomial logistic regression (LR) functions learned from the labeled data. Upon convergence, the resulting fixed LR weights, a few for each voxel, represent the training dataset. It can therefore be considered a light-weight learning machine which, despite its low capacity, does not underfit the problem. The AoC construction is independent of the actual intensities of the test images, providing the flexibility to train it on the available labeled data and use it for the segmentation of images from different datasets and modalities. In this sense, it does not overfit the training data either. The proposed method has been applied to numerous publicly available datasets for the segmentation of brain MRI tissues and is shown to be robust to noise and to outperform commonly used methods. Promising results were also obtained for multi-modal, cross-modality MRI segmentation. Finally, we show how an AoC trained on brain MRIs of healthy subjects can be exploited for lesion segmentation in multiple sclerosis patients.
Affiliation(s)
- Shiri Gordon, The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Boris Kodner, The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Tal Goldfryd, The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Michael Sidorov, The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Jacob Goldberger, The Faculty of Electrical Engineering, Bar-Ilan University, Ramat-Gan, Israel
- Tammy Riklin Raviv, The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel

42
Weiss DA, Saluja R, Xie L, Gee JC, Sugrue LP, Pradhan A, Nick Bryan R, Rauschecker AM, Rudie JD. Automated multiclass tissue segmentation of clinical brain MRIs with lesions. NEUROIMAGE-CLINICAL 2021; 31:102769. [PMID: 34333270] [PMCID: PMC8346689] [DOI: 10.1016/j.nicl.2021.102769]
Abstract
- A U-Net incorporating spatial prior information can successfully segment 6 brain tissue types.
- The U-Net was able to segment gray and white matter in the presence of lesions.
- The U-Net surpassed the performance of its source algorithm in an external dataset.
- Segmentations were produced in a hundredth of the time of the predecessor algorithm.
Delineation and quantification of normal and abnormal brain tissues on Magnetic Resonance Images is fundamental to the diagnosis and longitudinal assessment of neurological diseases. Here we sought to develop a convolutional neural network for automated multiclass tissue segmentation of brain MRIs that was robust at typical clinical resolutions and in the presence of a variety of lesions. We trained a 3D U-Net for full brain multiclass tissue segmentation from a prior atlas-based segmentation method on an internal dataset that consisted of 558 clinical T1-weighted brain MRIs (453/52/53; training/validation/test) of patients with one of 50 different diagnostic entities (n = 362) or with a normal brain MRI (n = 196). We then used transfer learning to refine our model on an external dataset that consisted of 7 patients with hand-labeled tissue types. We evaluated the tissue-wise and intra-lesion performance with different loss functions and spatial prior information in the validation set and applied the best performing model to the internal and external test sets. The network achieved an average overall Dice score of 0.87 and volume similarity of 0.97 in the internal test set. Further, the network achieved a median intra-lesion tissue segmentation accuracy of 0.85 inside lesions within white matter and 0.61 inside lesions within gray matter. After transfer learning, the network achieved an average overall Dice score of 0.77 and volume similarity of 0.96 in the external dataset compared to human raters. The network had equivalent or better performance than the original atlas-based method on which it was trained across all metrics and produced segmentations in a hundredth of the time. We anticipate that this pipeline will be a useful tool for clinical decision support and quantitative analysis of clinical brain MRIs in the presence of lesions.
Affiliation(s)
- David A Weiss, University of Pennsylvania, United States; University of California, San Francisco, United States
- Long Xie, University of Pennsylvania, United States
- Leo P Sugrue, University of California, San Francisco, United States

43
Kim BN, Dolz J, Jodoin PM, Desrosiers C. Privacy-Net: An Adversarial Approach for Identity-Obfuscated Segmentation of Medical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1737-1749. [PMID: 33710953] [DOI: 10.1109/tmi.2021.3065727]
Abstract
This paper presents a client/server privacy-preserving network in the context of multicentric medical image analysis. Our approach is based on adversarial learning which encodes images to obfuscate the patient identity while preserving enough information for a target task. Our novel architecture is composed of three components: 1) an encoder network which removes identity-specific features from input medical images, 2) a discriminator network that attempts to identify the subject from the encoded images, 3) a medical image analysis network which analyzes the content of the encoded images (segmentation in our case). By simultaneously fooling the discriminator and optimizing the medical analysis network, the encoder learns to remove privacy-specific features while keeping those essentials for the target task. Our approach is illustrated on the problem of segmenting brain MRI from the large-scale Parkinson Progression Marker Initiative (PPMI) dataset. Using longitudinal data from PPMI, we show that the discriminator learns to heavily distort input images while allowing for highly accurate segmentation results. Our results also demonstrate that an encoder trained on the PPMI dataset can be used for segmenting other datasets, without the need for retraining. The code is made available at: https://github.com/bachkimn/Privacy-Net-An-Adversarial-Approach-forIdentity-Obfuscated-Segmentation-of-MedicalImages.
44
Zhuang Y, Liu H, Song E, Ma G, Xu X, Hung CC. APRNet: A 3D Anisotropic Pyramidal Reversible Network with Multi-modal Cross-Dimension Attention for Brain Tissue Segmentation in MR Images. IEEE J Biomed Health Inform 2021; 26:749-761. [PMID: 34197331] [DOI: 10.1109/jbhi.2021.3093932]
Abstract
Brain tissue segmentation in multi-modal magnetic resonance (MR) images is significant for the clinical diagnosis of brain diseases. Due to blurred boundaries, low contrast, and intricate anatomical relationships between brain tissue regions, automatic brain tissue segmentation without prior knowledge is still challenging. This paper presents a novel 3D fully convolutional network (FCN) for brain tissue segmentation, called APRNet. In this network, we first propose a 3D anisotropic pyramidal convolutional reversible residual sequence (3DAPC-RRS) module to integrate the intra-slice information with the inter-slice information without significant memory consumption; secondly, we design a multi-modal cross-dimension attention (MCDA) module to automatically capture the effective information in each dimension of multi-modal images; then, we apply 3DAPC-RRS modules and MCDA modules to a 3D FCN with multiple encoded streams and one decoded stream for constituting the overall architecture of APRNet. We evaluated APRNet on two benchmark challenges, namely MRBrainS13 and iSeg-2017. The experimental results show that APRNet yields state-of-the-art segmentation results on both benchmark challenge datasets and achieves the best segmentation performance on the cerebrospinal fluid region. Compared with other methods, our proposed approach exploits the complementary information of different modalities to segment brain tissue regions in both adult and infant MR images, and it achieves the average Dice coefficient of 87.22% and 93.03% on the MRBrainS13 and iSeg-2017 testing data, respectively. The proposed method is beneficial for quantitative brain analysis in the clinical study, and our code is made publicly available.
45
A deep cascade of ensemble of dual domain networks with gradient-based T1 assistance and perceptual refinement for fast MRI reconstruction. Comput Med Imaging Graph 2021; 91:101942. [PMID: 34087612] [DOI: 10.1016/j.compmedimag.2021.101942]
Abstract
Deep learning networks have shown promising results in fast magnetic resonance imaging (MRI) reconstruction. In our work, we develop deep networks to further improve the quantitative and the perceptual quality of reconstruction. To begin with, we propose ReconSynergyNet (RSN), a network that combines the complementary benefits of operating independently on both the image and the Fourier domain. For single-coil acquisition, we introduce deep cascade RSN (DC-RSN), a cascade of RSN blocks interleaved with data fidelity (DF) units. Secondly, we improve the structure recovery of DC-RSN for T2-weighted imaging (T2WI) through the assistance of T1-weighted imaging (T1WI), a sequence with short acquisition time. T1 assistance is provided to DC-RSN through a gradient of log feature (GOLF) fusion. Furthermore, we propose a perceptual refinement network (PRN) to refine the reconstructions for better visual information fidelity (VIF), a metric highly correlated with radiologists' opinion of image quality. Lastly, for multi-coil acquisition, we propose variable splitting RSN (VS-RSN), a deep cascade of blocks, each containing an RSN, a multi-coil DF unit, and a weighted average module. We extensively validate our models DC-RSN and VS-RSN for single-coil and multi-coil acquisitions and report state-of-the-art performance. We obtain an SSIM of 0.768, 0.923, and 0.878 for knee single-coil-4x, multi-coil-4x, and multi-coil-8x in fastMRI, respectively. We also conduct experiments to demonstrate the efficacy of GOLF-based T1 assistance and PRN.
46
Learning U-Net Based Multi-Scale Features in Encoding-Decoding for MR Image Brain Tissue Segmentation. SENSORS 2021; 21:s21093232. [PMID: 34067101] [PMCID: PMC8124734] [DOI: 10.3390/s21093232]
Abstract
Accurate brain tissue segmentation of MRI is vital for aiding diagnosis, treatment planning, and neurologic condition monitoring. As an excellent convolutional neural network (CNN), U-Net is widely used in MR image segmentation, as it usually generates high-precision features. However, the performance of U-Net is considerably restricted by the variable shapes of the segmented targets in MRI and the information loss of down-sampling and up-sampling operations. Therefore, we propose a novel network that introduces spatial- and channel-dimension-based multi-scale feature information extractors into the encoding-decoding framework, which helps extract rich multi-scale features while highlighting the details of higher-level features in the encoding part, and recover the corresponding localization to a higher-resolution layer in the decoding part. Concretely, we propose two information extractors: multi-branch pooling (MP) in the encoding part and multi-branch dense prediction (MDP) in the decoding part, to extract multi-scale features. Additionally, we designed a new multi-branch output structure with MDP in the decoding part to form more accurate edge-preserving prediction maps by integrating the dense adjacent prediction features at different scales. Finally, the proposed method is tested on the MRbrainS13, IBSR18, and iSeg2017 datasets. We find that the proposed network achieves higher accuracy in segmenting MRI brain tissues and outperforms the leading method of 2018 in the segmentation of GM and CSF. Therefore, it can be a useful tool for diagnostic applications such as brain MRI segmentation.
47
Schirmer MD, Venkataraman A, Rekik I, Kim M, Mostofsky SH, Nebel MB, Rosch K, Seymour K, Crocetti D, Irzan H, Hütel M, Ourselin S, Marlow N, Melbourne A, Levchenko E, Zhou S, Kunda M, Lu H, Dvornek NC, Zhuang J, Pinto G, Samal S, Zhang J, Bernal-Rusiel JL, Pienaar R, Chung AW. Neuropsychiatric disease classification using functional connectomics - results of the connectomics in neuroimaging transfer learning challenge. Med Image Anal 2021; 70:101972. [PMID: 33677261] [PMCID: PMC9115580] [DOI: 10.1016/j.media.2021.101972]
Abstract
Large, open-source datasets, such as the Human Connectome Project and the Autism Brain Imaging Data Exchange, have spurred the development of new and increasingly powerful machine learning approaches for brain connectomics. However, one key question remains: are we capturing biologically relevant and generalizable information about the brain, or are we simply overfitting to the data? To answer this, we organized a scientific challenge, the Connectomics in NeuroImaging Transfer Learning Challenge (CNI-TLC), held in conjunction with MICCAI 2019. CNI-TLC included two classification tasks: (1) diagnosis of Attention-Deficit/Hyperactivity Disorder (ADHD) within a pre-adolescent cohort; and (2) transference of the ADHD model to a related cohort of Autism Spectrum Disorder (ASD) patients with an ADHD comorbidity. In total, 240 resting-state fMRI (rsfMRI) time series averaged according to three standard parcellation atlases, along with clinical diagnosis, were released for training and validation (120 neurotypical controls and 120 ADHD). We also provided Challenge participants with demographic information of age, sex, IQ, and handedness. The second set of 100 subjects (50 neurotypical controls, 25 ADHD, and 25 ASD with ADHD comorbidity) was used for testing. Classification methodologies were submitted in a standardized format as containerized Docker images through ChRIS, an open-source image analysis platform. Utilizing an inclusive approach, we ranked the methods based on 16 metrics: accuracy, area under the curve, F1-score, false discovery rate, false negative rate, false omission rate, false positive rate, geometric mean, informedness, markedness, Matthews correlation coefficient, negative predictive value, optimized precision, precision, sensitivity, and specificity. The final rank was calculated using the rank product for each participant across all measures. Furthermore, we assessed the calibration curves of each methodology.
Five participants submitted their method for evaluation, with one outperforming all other methods in both ADHD and ASD classification. However, further improvements are still needed to reach the clinical translation of functional connectomics. We have kept the CNI-TLC open as a publicly available resource for developing and validating new classification methodologies in the field of connectomics.
Collapse
Affiliation(s)
- Markus D Schirmer
- Massachusetts General Hospital, Harvard Medical School, Boston, USA; German Center for Neurodegenerative Diseases, Bonn, Germany; Clinic for Neuroradiology, University Hospital Bonn, Germany; Department of Neuropsychology, Kennedy Krieger Institute, Baltimore, USA.
- Archana Venkataraman
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Islem Rekik
- BASIRA lab, Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey; School of Science and Engineering, Computing, University of Dundee, UK
- Minjeong Kim
- Department of Computer Science, University of North Carolina at Greensboro, USA
- Stewart H Mostofsky
- Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute, Baltimore, USA; Department of Neurology, Johns Hopkins School of Medicine, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, USA
- Mary Beth Nebel
- Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute, Baltimore, USA; Department of Neurology, Johns Hopkins School of Medicine, USA
- Keri Rosch
- Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute, Baltimore, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, USA
- Karen Seymour
- Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute, Baltimore, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, USA
- Deana Crocetti
- Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute, Baltimore, USA
- Hassna Irzan
- Department of Medical Physics and Biomedical Engineering, University College London, UK; School of Biomedical Engineering and Imaging Sciences, King's College London, UK
- Michael Hütel
- School of Biomedical Engineering and Imaging Sciences, King's College London, UK
- Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, UK
- Neil Marlow
- Institute for Women's Health, University College London, UK
- Andrew Melbourne
- School of Biomedical Engineering and Imaging Sciences, King's College London, UK; Department of Medical Physics and Biomedical Engineering, University College London, UK
- Egor Levchenko
- Institute for Cognitive Neuroscience, Higher School of Economics, Moscow, Russia
- Shuo Zhou
- Department of Computer Science, The University of Sheffield, Sheffield, UK
- Mwiza Kunda
- Department of Computer Science, The University of Sheffield, Sheffield, UK
- Haiping Lu
- Department of Computer Science, The University of Sheffield, Sheffield, UK
- Nicha C Dvornek
- Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Juntang Zhuang
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Gideon Pinto
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Sandip Samal
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Jennings Zhang
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Jorge L Bernal-Rusiel
- Teradyte LLC, Coral Gables, FL, USA
- Rudolph Pienaar
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA; Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Ai Wern Chung
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
|
48
|
Inglese F, Kant IMJ, Monahan RC, Steup-Beekman GM, Huizinga TWJ, van Buchem MA, Magro-Checa C, Ronen I, de Bresser J. Different phenotypes of neuropsychiatric systemic lupus erythematosus are related to a distinct pattern of structural changes on brain MRI. Eur Radiol 2021; 31:8208-8217. [PMID: 33929569 PMCID: PMC8523434 DOI: 10.1007/s00330-021-07970-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 03/16/2021] [Accepted: 03/31/2021] [Indexed: 12/14/2022]
Abstract
Objectives The underlying structural brain correlates of neuropsychiatric involvement in systemic lupus erythematosus (NPSLE) remain unclear, thus hindering correct diagnosis. We compared brain tissue volumes between a clinically well-defined cohort of patients with NPSLE and SLE patients with neuropsychiatric syndromes not attributed to SLE (non-NPSLE). Within the NPSLE patients, we also examined differences between patients with two distinct disease phenotypes: ischemic and inflammatory. Methods In this prospective (May 2007 to April 2015) cohort study, we included 38 NPSLE patients (26 inflammatory and 12 ischemic) and 117 non-NPSLE patients. All patients underwent a 3-T brain MRI scan that was used to automatically determine white matter, grey matter, white matter hyperintensities (WMH) and total brain volumes. Group differences in brain tissue volumes were studied with linear regression analyses corrected for age, gender, and total intracranial volume and expressed as B values and 95% confidence intervals. Results NPSLE patients showed higher WMH volume compared to non-NPSLE patients (p = 0.004). NPSLE inflammatory patients showed lower total brain (p = 0.014) and white matter volumes (p = 0.020), and higher WMH volume (p = 0.002) compared to non-NPSLE patients. Additionally, NPSLE inflammatory patients showed lower white matter (p = 0.020) and total brain volumes (p = 0.038) compared to NPSLE ischemic patients. Conclusion We showed that different phenotypes of NPSLE were related to distinct patterns of underlying structural brain MRI changes. Especially the inflammatory phenotype of NPSLE was associated with the most pronounced brain volume changes, which might facilitate the diagnostic process in SLE patients with neuropsychiatric symptoms. Key Points
• Neuropsychiatric systemic lupus erythematosus (NPSLE) patients showed a higher WMH volume compared to SLE patients with neuropsychiatric syndromes not attributed to SLE (non-NPSLE).
• NPSLE patients with inflammatory phenotype showed a lower total brain and white matter volume, and a higher volume of white matter hyperintensities, compared to non-NPSLE patients.
• NPSLE patients with inflammatory phenotype showed lower white matter and total brain volumes compared to NPSLE patients with ischemic phenotype.
Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-07970-2.
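As a rough illustration of the covariate-adjusted group comparison described in this abstract, the sketch below fits an ordinary least-squares model of a brain volume on a group indicator plus age, sex, and intracranial-volume covariates, so the group coefficient plays the role of the reported B value. All data here are synthetic toy values, not the study's cohort, and the simple NumPy fit stands in for whatever statistical software the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60

# Synthetic predictors: group indicator (e.g. NPSLE vs. non-NPSLE),
# age in years, sex, and total intracranial volume (ICV) in mL.
group = (np.arange(n) < 20).astype(float)
age = rng.normal(45.0, 12.0, n)
sex = rng.integers(0, 2, n).astype(float)
icv = rng.normal(1450.0, 100.0, n)

# Simulated outcome (e.g. WMH volume) with a true group effect of +3.0 mL.
wmh = 0.05 * age + 0.002 * icv + 3.0 * group + rng.normal(0.0, 1.0, n)

# Design matrix: intercept, group, then the covariates to "correct for".
X = np.column_stack([np.ones(n), group, age, sex, icv])
beta, *_ = np.linalg.lstsq(X, wmh, rcond=None)

# beta[1] is the adjusted group effect (the B value analogue).
print(f"adjusted group effect B = {beta[1]:.2f} mL")
```

With the fixed seed the recovered group coefficient lands close to the simulated +3.0 mL effect; in practice one would also report a confidence interval for it, as the study does.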
Affiliation(s)
- Francesca Inglese
- Department of Radiology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333, ZA, Leiden, The Netherlands.
- Ilse M J Kant
- Department of Radiology, University Medical Center Utrecht, Heidelberglaan 100, 3584, CX, Utrecht, The Netherlands
- Rory C Monahan
- Department of Rheumatology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333, ZA, Leiden, The Netherlands
- Gerda M Steup-Beekman
- Department of Rheumatology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333, ZA, Leiden, The Netherlands
- Tom W J Huizinga
- Department of Rheumatology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333, ZA, Leiden, The Netherlands
- Mark A van Buchem
- Department of Radiology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333, ZA, Leiden, The Netherlands
- Cesar Magro-Checa
- Department of Rheumatology, Zuyderland Medical Center, Henri Dunantstraat 5, 6419, PC, Heerlen, The Netherlands
- Itamar Ronen
- Department of Radiology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333, ZA, Leiden, The Netherlands
- Jeroen de Bresser
- Department of Radiology, Leiden University Medical Center (LUMC), Albinusdreef 2, 2333, ZA, Leiden, The Netherlands
|
49
|
Dou H, Karimi D, Rollins CK, Ortinau CM, Vasung L, Velasco-Annis C, Ouaalam A, Yang X, Ni D, Gholipour A. A Deep Attentive Convolutional Neural Network for Automatic Cortical Plate Segmentation in Fetal MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1123-1133. [PMID: 33351755 PMCID: PMC8016740 DOI: 10.1109/tmi.2020.3046579] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Fetal cortical plate segmentation is essential in quantitative analysis of fetal brain maturation and cortical folding. Manual segmentation of the cortical plate, or manual refinement of automatic segmentations, is tedious and time-consuming. Automatic segmentation of the cortical plate, on the other hand, is challenged by the relatively low resolution of the reconstructed fetal brain MRI scans compared to the thin structure of the cortical plate, by partial voluming, and by the wide range of variations in the morphology of the cortical plate as the brain matures during gestation. To reduce the burden of manual refinement of segmentations, we have developed a new and powerful deep learning segmentation method. Our method exploits new deep attentive modules with mixed kernel convolutions within a fully convolutional neural network architecture that utilizes deep supervision and residual connections. We evaluated our method quantitatively based on several performance measures and expert evaluations. Results show that our method outperforms several state-of-the-art deep models for segmentation, as well as a state-of-the-art multi-atlas segmentation technique. We achieved an average Dice similarity coefficient of 0.87, an average Hausdorff distance of 0.96 mm, and an average symmetric surface difference of 0.28 mm on reconstructed brain MRI scans of fetuses scanned in the gestational age range of 16 to 39 weeks (28.6 ± 5.3). With a computation time of less than 1 minute per fetal brain, our method can facilitate and accelerate large-scale studies on normal and altered fetal brain cortical maturation and folding.
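The Dice similarity coefficient reported in this abstract is the standard overlap metric for segmentation: DSC = 2|A ∩ B| / (|A| + |B|), equal to 1.0 for perfect overlap. A minimal NumPy sketch on toy binary masks (illustrative values only, not the paper's data or method):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for a thin structure like the cortical plate:
# ground truth is a 3-pixel strip; the prediction misses one pixel and
# adds one spurious pixel, so DSC = 2*2 / (3 + 3) ≈ 0.667.
gt = np.zeros((5, 5), dtype=bool)
gt[1:4, 2] = True
pred = gt.copy()
pred[3, 2] = False   # missed pixel (false negative)
pred[1, 3] = True    # spurious pixel (false positive)
print(round(dice_coefficient(pred, gt), 3))
```

Boundary metrics such as the Hausdorff distance and average symmetric surface difference, also reported above, complement Dice by penalizing outline errors that a pure overlap ratio can hide.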
|
50
|
Angulakshmi M, Deepa M. A Review on Deep Learning Architecture and Methods for MRI Brain Tumour Segmentation. Curr Med Imaging 2021; 17:695-706. [PMID: 33423651 DOI: 10.2174/1573405616666210108122048] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 10/03/2020] [Accepted: 10/15/2020] [Indexed: 11/22/2022]
Abstract
BACKGROUND This review mainly covers the automatic segmentation of brain tumours from MRI medical images. Recently, deep learning-based approaches have provided state-of-the-art performance in image classification, segmentation, object detection, and tracking tasks. INTRODUCTION The core feature of deep learning approaches is the hierarchical representation of features learned from images, thus avoiding domain-specific handcrafted features. METHODS In this review paper, we survey deep learning architectures and methods for MRI brain tumour segmentation. First, we discuss the basic architectures and approaches of deep learning methods. Secondly, we review the literature on MRI brain tumour segmentation using deep learning methods and its multimodality fusion. Then, the advantages and disadvantages of each method are analyzed, and finally, we conclude with a discussion of the merits and challenges of deep learning techniques. RESULTS The result is a structured review of brain tumour identification using deep learning. CONCLUSION This survey may help researchers focus their future work on the most promising techniques.
Affiliation(s)
- M Angulakshmi
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- M Deepa
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
|