1. Dollé G, Loron G, Alloux M, Kraus V, Delannoy Q, Beck J, Bednarek N, Rousseau F, Passat N. Multilabel SegSRGAN - A framework for parcellation and morphometry of preterm brain in MRI. PLoS One 2024; 19:e0312822. PMID: 39485735; PMCID: PMC11530046; DOI: 10.1371/journal.pone.0312822.
Abstract
Magnetic resonance imaging (MRI) is a powerful tool for observing and assessing the properties of brain tissue and structures. In particular, in the context of neonatal care, MR images can be used to analyze neurodevelopmental problems that may arise in premature newborns. However, the intrinsic properties of newborn MR images, combined with the high variability of MR acquisition in a clinical setting, result in complex and heterogeneous images. Segmentation methods dedicated to the processing of clinical data are essential for obtaining relevant biomarkers. In this context, the design of quality control protocols for the associated segmentation is a cornerstone for guaranteeing the accuracy and usefulness of these inferred biomarkers. In recent work, we proposed a new method, SegSRGAN, designed for super-resolution reconstruction and segmentation of specific brain structures. In this article, we first propose an extension of SegSRGAN from binary to multi-label segmentation, leading to a partitioning of an MR image into several labels, each corresponding to a specific brain tissue/area. Second, we propose a segmentation quality control protocol designed to assess the performance of the proposed method on this specific parcellation task in neonatal MR imaging. In particular, we combine scores derived from expert analysis, morphometric measurements, and topological properties of the structures studied. This segmentation quality control can enable clinicians to select reliable segmentations for clinical analysis, starting with correlations between perinatal risk factors, regional volumes, and specific dimensions of cognitive development. Based on this protocol, we investigate the strengths and weaknesses of SegSRGAN and its potential suitability for clinical research on the morphometric analysis of brain structure in preterm infants, and for the design of new biomarkers of neurodevelopment. The proposed study focuses on MR images from the EPIRMEX dataset, collected as part of a national cohort study. This work represents a first step towards the design of segmentation-based, 3-dimensional neonatal brain morphometry. The (free and open-source) code of multilabel SegSRGAN is publicly available at the following URL: https://doi.org/10.5281/zenodo.12659424.
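To make the evaluation concrete: a per-label Dice overlap is the usual starting point for scoring a multi-label parcellation against a reference. The following is a minimal NumPy sketch, not the authors' code; the label values and toy volumes are hypothetical.

```python
import numpy as np

def per_label_dice(pred, ref, labels):
    """Dice coefficient for each label of a multi-label parcellation.

    pred, ref : integer label volumes of identical shape.
    labels    : label values to score (background is simply not listed).
    """
    scores = {}
    for lab in labels:
        p = (pred == lab)
        r = (ref == lab)
        denom = p.sum() + r.sum()
        # Convention: a label empty in both volumes counts as perfect agreement.
        scores[lab] = 1.0 if denom == 0 else 2.0 * np.logical_and(p, r).sum() / denom
    return scores

# Toy usage with hypothetical labels (1 = cortex, 2 = white matter, 3 = ventricles).
pred = np.random.randint(0, 4, size=(32, 32, 32))
ref = np.random.randint(0, 4, size=(32, 32, 32))
print(per_label_dice(pred, ref, labels=[1, 2, 3]))
```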
2. Noorizadeh N, Kazemi K, Taji SM, Danyali H, Aarabi A. Subject-specific atlas for automatic brain tissue segmentation of neonatal magnetic resonance images. Sci Rep 2024; 14:19114. PMID: 39155321; PMCID: PMC11330982; DOI: 10.1038/s41598-024-69995-z.
Abstract
Developing advanced systems for 3D brain tissue segmentation from neonatal magnetic resonance (MR) images is vital for newborn structural analysis. However, automatic segmentation of neonatal brain tissues is challenging due to the smaller head size and inverted T1/T2 tissue contrast compared to adults. In this work, a subject-specific atlas-based technique is presented for segmentation of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from neonatal MR images. It involves atlas selection, subject-specific atlas creation using a random forest (RF) classifier, and brain tissue segmentation using the expectation maximization-Markov random field (EM-MRF) method. To increase the segmentation accuracy, different tissue intensity- and gradient-based features were used. Evaluation on 40 neonatal MR images (gestational age of 37-44 weeks) demonstrated an overall accuracy of 94.3% and average Dice similarity coefficients (DSC) of 0.945 (GM), 0.947 (WM), and 0.912 (CSF). Compared to multi-atlas segmentation methods such as SEGMA and EM-MRF with multiple atlases, our method improved accuracy by up to 4%, particularly in complex tissue regions. The proposed method allows accurate brain tissue segmentation, a crucial step in brain magnetic resonance imaging (MRI) applications including brain surface reconstruction and realistic head model creation in neonates.
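As an illustration of the subject-specific atlas step, the sketch below trains a voxel-wise random forest on simple intensity and gradient-magnitude features with scikit-learn. It is a simplified stand-in for the authors' pipeline (their feature set, atlas selection, and EM-MRF refinement are omitted), and all data here are synthetic placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from sklearn.ensemble import RandomForestClassifier

def voxel_features(volume):
    """Stack simple per-voxel features: intensity and gradient magnitude."""
    grad = gaussian_gradient_magnitude(volume, sigma=1.0)
    return np.stack([volume.ravel(), grad.ravel()], axis=1)

# Hypothetical training data: an intensity volume and a voxel-wise label map
# (0 = background, 1 = CSF, 2 = GM, 3 = WM) derived from the selected atlases.
train_img = np.random.rand(32, 32, 32)
train_lab = np.random.randint(0, 4, size=(32, 32, 32))

rf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
rf.fit(voxel_features(train_img), train_lab.ravel())

# The predicted label map can then serve as a subject-specific prior,
# e.g. converted to per-class probabilities before an EM-MRF refinement.
test_img = np.random.rand(32, 32, 32)
subject_atlas = rf.predict(voxel_features(test_img)).reshape(test_img.shape)
```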
3. Kuwabara M, Ikawa F, Nakazawa S, Koshino S, Ishii D, Kondo H, Hara T, Maeda Y, Sato R, Kaneko T, Maeyama S, Shimahara Y, Horie N. Artificial intelligence for volumetric measurement of cerebral white matter hyperintensities on thick-slice fluid-attenuated inversion recovery (FLAIR) magnetic resonance images from multiple centers. Sci Rep 2024; 14:10104. PMID: 38698152; PMCID: PMC11065995; DOI: 10.1038/s41598-024-60789-x.
Abstract
We aimed to develop a new artificial intelligence software that can automatically extract and measure the volume of white matter hyperintensities (WMHs) in head magnetic resonance imaging (MRI) using only thick-slice fluid-attenuated inversion recovery (FLAIR) sequences from multiple centers. We enrolled 1092 participants in Japan, comprising the thick-slice Private Dataset. Based on 207 randomly selected participants, neuroradiologists annotated WMHs using predefined guidelines. The annotated images of participants were divided into training (n = 138) and test (n = 69) datasets. The WMH segmentation model comprised a U-Net ensemble and was trained using the Private Dataset. Two other models were trained for validation using either both thin- and thick-slice MRI datasets or the thin-slice dataset alone. The voxel-wise Dice similarity coefficient (DSC) was used as the evaluation metric. The model trained using only thick-slice MRI showed a DSC of 0.820 for the test dataset, which is comparable to the accuracy of human readers. The model trained with the additional thin-slice dataset showed only a slightly improved DSC of 0.822. This automatic WMH segmentation model comprising a U-Net ensemble trained on a thick-slice FLAIR MRI dataset is a promising new method. Despite some limitations, this model may be applicable in clinical practice.
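Although the paper's U-Net ensemble itself is not reproduced here, the ensembling step it describes amounts to averaging voxel-wise probabilities from several trained networks before thresholding. A hedged sketch, with random arrays standing in for the per-model probability maps and an assumed (not reported here) voxel size:

```python
import numpy as np

def ensemble_wmh_mask(prob_maps, threshold=0.5):
    """Average voxel-wise WMH probabilities from several models, then threshold.

    prob_maps : list of arrays in [0, 1], one per trained network, same shape.
    """
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_prob >= threshold

# Hypothetical probabilities from three independently trained networks.
maps = [np.random.rand(16, 256, 256) for _ in range(3)]
wmh_mask = ensemble_wmh_mask(maps)

# Example voxel size (mm) for a thick-slice FLAIR acquisition; an assumption.
volume_ml = wmh_mask.sum() * (0.94 * 0.94 * 6.0) / 1000.0
print(f"Estimated WMH volume: {volume_ml:.1f} mL")
```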
4. Jafrasteh B, Lubián-Gutiérrez M, Lubián-López SP, Benavente-Fernández I. Enhanced Spatial Fuzzy C-Means Algorithm for Brain Tissue Segmentation in T1 Images. Neuroinformatics 2024. PMID: 38656595; DOI: 10.1007/s12021-024-09661-x.
Abstract
Magnetic Resonance Imaging (MRI) plays an important role in neurology, particularly in the precise segmentation of brain tissues. Accurate segmentation is crucial for diagnosing brain injuries and neurodegenerative conditions. We introduce an Enhanced Spatial Fuzzy C-means (esFCM) algorithm for segmenting 3D T1 MRI into three tissues, i.e., White Matter (WM), Gray Matter (GM), and Cerebrospinal Fluid (CSF). The esFCM employs a weighted least squares algorithm utilizing the Structural Similarity Index (SSIM) for polynomial bias field correction. It also takes advantage of the information from the membership function of the previous iteration to compute the neighborhood impact. This strategic refinement enhances the algorithm's adaptability to complex image structures, effectively addressing challenges such as intensity irregularities and contributing to heightened segmentation accuracy. We compare the segmentation accuracy of esFCM against four variants of FCM, a Gaussian Mixture Model (GMM), and the FSL and ANTs algorithms on four different datasets, employing three measurement criteria. Comparative assessments underscore esFCM's superior performance, particularly in scenarios involving added noise and bias fields. The obtained results emphasize the significant potential of the proposed method in the segmentation of MRI images.
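For readers unfamiliar with the baseline the method builds on, here is a plain fuzzy c-means sketch on voxel intensities. It implements only the standard membership and centre updates; the spatial neighborhood term and SSIM-weighted bias-field correction that distinguish esFCM are not implemented, and the data are synthetic.

```python
import numpy as np

def fuzzy_cmeans_1d(intensities, n_classes=3, m=2.0, n_iter=50, eps=1e-9):
    """Plain FCM on voxel intensities (the spatial and bias-field terms of esFCM are omitted)."""
    x = intensities.reshape(-1, 1).astype(float)
    centers = np.linspace(x.min(), x.max(), n_classes).reshape(1, -1)
    for _ in range(n_iter):
        d = np.abs(x - centers) + eps                    # distance of each voxel to each centre
        u = 1.0 / (d ** (2.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)                # membership update
        um = u ** m
        centers = ((um * x).sum(axis=0) / um.sum(axis=0)).reshape(1, -1)  # centre update
    return u, centers.ravel()

# Hypothetical brain voxels; hard labels follow the highest membership (e.g. CSF/GM/WM).
voxels = np.random.rand(10000)
memberships, centres = fuzzy_cmeans_1d(voxels)
labels = memberships.argmax(axis=1)
```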
5. Tarvonen M, Manninen M, Lamminaho P, Jehkonen P, Tuppurainen V, Andersson S. Computer Vision for Identification of Increased Fetal Heart Variability in Cardiotocogram. Neonatology 2024; 121:460-467. PMID: 38565092; DOI: 10.1159/000538134.
Abstract
INTRODUCTION Increased fetal heart rate variability (IFHRV), defined as fetal heart rate (FHR) baseline amplitude changes of >25 beats per minute with a duration of ≥1 min, is an early sign of intrapartum fetal hypoxia. This study evaluated the level of agreement of machine learning (ML) algorithms-based recognition of IFHRV patterns with expert analysis. METHODS Cardiotocographic recordings and cardiotocograms from 4,988 singleton term childbirths were evaluated independently by two expert obstetricians blinded to the outcomes. Continuous FHR monitoring with computer vision analysis was compared with visual analysis by the expert obstetricians. FHR signals were graphically processed and measured by the computer vision model labeled SALKA. RESULTS In visual analysis, IFHRV pattern occurred in 582 cardiotocograms (11.7%). Compared with visual analysis, SALKA recognized IFHRV patterns with an average Cohen's kappa coefficient of 0.981 (95% CI: 0.972-0.993). The sensitivity of SALKA was 0.981, the positive predictive rate was 0.822 (95% CI: 0.774-0.903), and the false-negative rate was 0.01 (95% CI: 0.00-0.02). The agreement between visual analysis and SALKA in identification of IFHRV was almost perfect (0.993) in cases (N = 146) with neonatal acidemia (i.e., umbilical artery pH <7.10). CONCLUSIONS Computer vision analysis by SALKA is a novel ML technique that, with high sensitivity and specificity, identifies IFHRV features in intrapartum cardiotocograms. SALKA recognizes potential early signs of fetal distress close to those of expert obstetricians, particularly in cases of neonatal acidemia.
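The agreement statistics quoted above (Cohen's kappa, sensitivity, positive predictive value, false-negative rate) can be computed from per-recording binary calls. A small sketch using scikit-learn, with random placeholder labels rather than the study data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical per-recording IFHRV calls: 1 = pattern present, 0 = absent.
expert = np.random.randint(0, 2, size=500)
model = np.random.randint(0, 2, size=500)

kappa = cohen_kappa_score(expert, model)
tn, fp, fn, tp = confusion_matrix(expert, model).ravel()
sensitivity = tp / (tp + fn)
ppv = tp / (tp + fp)                 # positive predictive value
fnr = fn / (fn + tp)                 # false-negative rate
print(f"kappa={kappa:.3f} sensitivity={sensitivity:.3f} PPV={ppv:.3f} FNR={fnr:.3f}")
```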
6. Keles E, Bagci U. The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review. NPJ Digit Med 2023; 6:220. PMID: 38012349; PMCID: PMC10682088; DOI: 10.1038/s41746-023-00941-5.
Abstract
Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from any sort of data. Most recent developments in artificial intelligence are coming from deep learning, which has proven revolutionary in almost all fields, from computer vision to health sciences. The effects of deep learning in medicine have changed the conventional ways of clinical application significantly. Although some sub-fields of medicine, such as pediatrics, have been relatively slow in receiving the critical benefits of deep learning, related research in pediatrics has now started to accumulate to a significant level. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology applications. We systematically evaluate the roles of both classical machine learning and deep learning in neonatology applications, define the methodologies, including algorithmic developments, and describe the remaining challenges in the assessment of neonatal diseases, following the PRISMA 2020 guidelines. To date, the primary areas of focus in neonatology regarding AI applications have included survival analysis, neuroimaging, analysis of vital parameters and biosignals, and retinopathy of prematurity diagnosis. We categorically summarize 106 research articles from 1996 to 2022 and discuss their pros and cons. Throughout this systematic review, we aimed for comprehensive coverage of the field. We also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units.
7. Zoetmulder R, Baak L, Khalili N, Marquering HA, Wagenaar N, Benders M, van der Aa NE, Išgum I. Brain segmentation in patients with perinatal arterial ischemic stroke. Neuroimage Clin 2023; 38:103381. PMID: 36965456; PMCID: PMC10074207; DOI: 10.1016/j.nicl.2023.103381.
Abstract
BACKGROUND Perinatal arterial ischemic stroke (PAIS) is associated with adverse neurological outcomes. Quantification of ischemic lesions and consequent brain development in newborn infants relies on labor-intensive manual assessment of brain tissues and ischemic lesions. Hence, we propose an automatic method utilizing convolutional neural networks (CNNs) to segment brain tissues and ischemic lesions in MRI scans of infants suffering from PAIS. MATERIALS AND METHODS This single-center retrospective study included 115 patients with PAIS that underwent MRI after the stroke onset (baseline) and after three months (follow-up). Nine baseline and 12 follow-up MRI scans were manually annotated to provide reference segmentations (white matter, gray matter, basal ganglia and thalami, brainstem, ventricles, extra-ventricular cerebrospinal fluid, and cerebellum, and additionally, on the baseline scans, the ischemic lesions). Two CNNs were trained to perform automatic segmentation on the baseline and follow-up MRIs, respectively. Automatic segmentations were quantitatively evaluated using the Dice coefficient (DC) and the mean surface distance (MSD). Volumetric agreement between manually and automatically obtained segmentations was computed. Moreover, the scan quality and the automatic segmentations were qualitatively evaluated by two experts in a larger set of MRIs without manual annotation, to establish the impact of scan quality on the automatic segmentation performance. RESULTS Automatic brain tissue segmentation led to a DC and MSD between 0.78-0.92 and 0.18-1.08 mm for baseline, and between 0.88-0.95 and 0.10-0.58 mm for follow-up scans, respectively. For the ischemic lesions at baseline, the DC and MSD were between 0.72-0.86 and 1.23-2.18 mm, respectively. Volumetric measurements indicated limited oversegmentation of the extra-ventricular cerebrospinal fluid in both the follow-up and baseline scans, oversegmentation of the ischemic lesions in the left hemisphere, and undersegmentation of the ischemic lesions in the right hemisphere. In scans without imaging artifacts, brain tissue segmentation was graded as excellent in more than 85% and 91% of cases for the baseline and follow-up scans, respectively. For the ischemic lesions at baseline, this was the case in 61% of scans. CONCLUSIONS Automatic segmentation of brain tissue and ischemic lesions in MRI scans of patients with PAIS is feasible. The method may allow evaluation of brain development and efficacy of treatment in large datasets.
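The volumetric agreement mentioned in the methods reduces to comparing per-class volumes of the automatic and manual label maps. A minimal sketch with synthetic arrays, hypothetical label codes, and an assumed voxel size:

```python
import numpy as np

def volume_agreement(auto_seg, manual_seg, voxel_volume_mm3, labels):
    """Absolute and relative volume differences per tissue class (volumes in mL)."""
    report = {}
    for lab in labels:
        v_auto = (auto_seg == lab).sum() * voxel_volume_mm3 / 1000.0
        v_man = (manual_seg == lab).sum() * voxel_volume_mm3 / 1000.0
        rel = (v_auto - v_man) / v_man if v_man > 0 else np.nan
        report[lab] = {"auto_mL": v_auto, "manual_mL": v_man, "relative_diff": rel}
    return report

# Toy call with hypothetical label codes (1 = WM, 2 = GM, ..., 7 = cerebellum).
auto = np.random.randint(0, 8, size=(64, 64, 64))
manual = np.random.randint(0, 8, size=(64, 64, 64))
print(volume_agreement(auto, manual, voxel_volume_mm3=0.5 ** 3, labels=range(1, 8)))
```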
8. Drai M, Testud B, Brun G, Hak JF, Scavarda D, Girard N, Stellmann JP. Borrowing strength from adults: Transferability of AI algorithms for paediatric brain and tumour segmentation. Eur J Radiol 2022; 151:110291. DOI: 10.1016/j.ejrad.2022.110291.
9. Edwards AD, Rueckert D, Smith SM, Abo Seada S, Alansary A, Almalbis J, Allsop J, Andersson J, Arichi T, Arulkumaran S, Bastiani M, Batalle D, Baxter L, Bozek J, Braithwaite E, Brandon J, Carney O, Chew A, Christiaens D, Chung R, Colford K, Cordero-Grande L, Counsell SJ, Cullen H, Cupitt J, Curtis C, Davidson A, Deprez M, Dillon L, Dimitrakopoulou K, Dimitrova R, Duff E, Falconer S, Farahibozorg SR, Fitzgibbon SP, Gao J, Gaspar A, Harper N, Harrison SJ, Hughes EJ, Hutter J, Jenkinson M, Jbabdi S, Jones E, Karolis V, Kyriakopoulou V, Lenz G, Makropoulos A, Malik S, Mason L, Mortari F, Nosarti C, Nunes RG, O’Keeffe C, O’Muircheartaigh J, Patel H, Passerat-Palmbach J, Pietsch M, Price AN, Robinson EC, Rutherford MA, Schuh A, Sotiropoulos S, Steinweg J, Teixeira RPAG, Tenev T, Tournier JD, Tusor N, Uus A, Vecchiato K, Williams LZJ, Wright R, Wurie J, Hajnal JV. The Developing Human Connectome Project Neonatal Data Release. Front Neurosci 2022; 16:886772. PMID: 35677357; PMCID: PMC9169090; DOI: 10.3389/fnins.2022.886772.
Abstract
The Developing Human Connectome Project has created a large open science resource which provides researchers with data for investigating typical and atypical brain development across the perinatal period. It has collected 1228 multimodal magnetic resonance imaging (MRI) brain datasets from 1173 fetal and/or neonatal participants, together with collateral demographic, clinical, family, neurocognitive and genomic data. All subjects were studied in utero and/or soon after birth on a single MRI scanner using specially developed scanning sequences which included novel motion-tolerant imaging methods. Imaging data are complemented by rich demographic, clinical, neurodevelopmental, and genomic information. The project is now releasing a large set of neonatal data; fetal data will be described and released separately. This release includes scans from 783 infants, of whom 583 were healthy infants born at term; the remainder were preterm infants and infants at high risk of atypical neurocognitive development. Many infants were imaged more than once to provide longitudinal data, and the total number of datasets being released is 887. We now describe the dHCP image acquisition and processing protocols, summarize the available imaging and collateral data, and provide information on how the data can be accessed.
10. Enhanced Pre-Processing for Deep Learning in MRI Whole Brain Segmentation using Orthogonal Moments. Brain Multiphysics 2022. DOI: 10.1016/j.brain.2022.100049.
11. Billardello R, Ntolkeras G, Chericoni A, Madsen JR, Papadelis C, Pearl PL, Grant PE, Taffoni F, Tamilia E. Novel User-Friendly Application for MRI Segmentation of Brain Resection following Epilepsy Surgery. Diagnostics (Basel) 2022; 12:1017. PMID: 35454065; PMCID: PMC9032020; DOI: 10.3390/diagnostics12041017.
Abstract
Delineation of resected brain cavities on magnetic resonance images (MRIs) of epilepsy surgery patients is essential for neuroimaging/neurophysiology studies investigating biomarkers of the epileptogenic zone. The gold standard to delineate the resection on MRI remains manual slice-by-slice tracing by experts. Here, we proposed and validated a semiautomated MRI segmentation pipeline, generating an accurate model of the resection and its anatomical labeling, and developed a graphical user interface (GUI) for user-friendly usage. We retrieved pre- and postoperative MRIs from 35 patients who had focal epilepsy surgery, implemented a region-growing algorithm to delineate the resection on postoperative MRIs, and tested its performance while varying different tuning parameters. Similarity between our output and hand-drawn gold standards was evaluated via the Dice similarity coefficient (DSC; range: 0-1). Additionally, the best segmentation pipeline was trained to provide an automated anatomical report of the resection (based on a presurgical brain atlas). We found that the best-performing set of parameters presented a DSC of 0.83 (0.72-0.85), high robustness to seed-selection variability, and 90% anatomical agreement with the clinical postoperative MRI report. We present a novel user-friendly open-source GUI that implements a semiautomated segmentation pipeline specifically optimized to generate resection models and their anatomical reports for epilepsy surgery patients, while minimizing user interaction.
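The core of such a pipeline is the region-growing step: starting from a user-selected seed inside the cavity, neighboring voxels are absorbed while their intensity stays close to the running region mean. A simplified sketch, not the published tool; the tolerance criterion and toy data are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tol):
    """Grow a region from `seed`, adding 6-connected voxels whose intensity
    stays within `tol` of the running region mean."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    total, count = float(volume[seed]), 1
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - total / count) <= tol):
                mask[nz, ny, nx] = True
                total += float(volume[nz, ny, nx])
                count += 1
                queue.append((nz, ny, nx))
    return mask

# Toy usage: a seed clicked inside the resection cavity on the postoperative scan.
vol = np.random.rand(64, 64, 64)
cavity = region_grow(vol, seed=(32, 32, 32), tol=0.2)
```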
12. Shiohama T, Tsujimura K. Quantitative Structural Brain Magnetic Resonance Imaging Analyses: Methodological Overview and Application to Rett Syndrome. Front Neurosci 2022; 16:835964. PMID: 35450016; PMCID: PMC9016334; DOI: 10.3389/fnins.2022.835964.
Abstract
Congenital genetic disorders often present with neurological manifestations such as neurodevelopmental disorders, motor developmental retardation, epilepsy, and involuntary movement. Through qualitative morphometric evaluation of neuroimaging studies, remarkable structural abnormalities, such as lissencephaly, polymicrogyria, white matter lesions, and cortical tubers, have been identified in these disorders, whereas in a large proportion of patients no structural abnormality is identified in routine clinical assessment. Recent advances in data analysis programs have led to significant progress in the quantitative analysis of anatomical structural magnetic resonance imaging (MRI) and diffusion-weighted MRI tractography, and these approaches have been used to investigate psychological and congenital genetic disorders. Evaluation of morphometric brain characteristics may contribute to the identification of neuroimaging biomarkers for early diagnosis and response evaluation in patients with congenital genetic diseases. This mini-review focuses on the methodologies and attempts employed to study Rett syndrome using quantitative structural brain MRI analyses, including voxel- and surface-based morphometry and diffusion-weighted MRI tractography. The mini-review aims to deepen our understanding of how neuroimaging studies are used to examine congenital genetic disorders.
13. Nenning KH, Langs G. Machine learning in neuroimaging: from research to clinical practice. Radiologie (Heidelberg, Germany) 2022; 62:1-10. PMID: 36044070; PMCID: PMC9732070; DOI: 10.1007/s00117-022-01051-1.
Abstract
Neuroimaging is critical in clinical care and research, enabling us to investigate the brain in health and disease. There is a complex link between the brain's morphological structure, physiological architecture, and the corresponding imaging characteristics. The shape, function, and relationships between various brain areas change during development and throughout life, disease, and recovery. Like few other areas, neuroimaging benefits from advanced analysis techniques to fully exploit imaging data for studying the brain and its function. Recently, machine learning has started to contribute (a) to anatomical measurements, detection, segmentation, and quantification of lesions and disease patterns, (b) to the rapid identification of acute conditions such as stroke, or (c) to the tracking of imaging changes over time. As our ability to image and analyze the brain advances, so does our understanding of its intricate relationships and their role in therapeutic decision-making. Here, we review the current state of the art in using machine learning techniques to exploit neuroimaging data for clinical care and research, providing an overview of clinical applications and their contribution to fundamental computational neuroscience.
14. Lei Y, Wang T, Dong X, Tian S, Liu Y, Mao H, Curran WJ, Shu HK, Liu T, Yang X. MRI classification using semantic random forest with auto-context model. Quant Imaging Med Surg 2021; 11:4753-4766. PMID: 34888187; PMCID: PMC8611460; DOI: 10.21037/qims-20-1114.
Abstract
BACKGROUND It is challenging to differentiate air and bone on MR images of conventional sequences due to their low contrast. We propose to integrate semantic feature extraction, in an auto-context manner, into a random forest to improve the reliability of MRI segmentation for MRI-based radiotherapy treatment planning or PET attenuation correction. METHODS We applied a semantic classification random forest (SCRF) method which consists of a training stage and a segmentation stage. In the training stage, patch-based MRI features were extracted from registered MRI-CT training images, and the most informative elements were selected via feature selection to train an initial random forest. The remaining random forests in the sequence were trained on a combination of MRI features and semantic features in an auto-context manner. During segmentation, the MRI patches were first fed into these random forests to derive patch-based segmentations. Patch fusion then produced the final end-to-end segmentation. RESULTS The Dice similarity coefficients (DSC) for the air, bone and soft tissue classes obtained via the proposed method were 0.976±0.007, 0.819±0.050 and 0.932±0.031, compared to 0.916±0.099, 0.673±0.151 and 0.830±0.083 with random forest (RF), and 0.942±0.086, 0.791±0.046 and 0.917±0.033 with U-Net. SCRF also outperformed the competing methods in sensitivity and specificity for all three structure types. CONCLUSIONS The proposed method accurately segmented bone, air and soft tissue. It is promising for facilitating advanced MR applications in diagnosis and therapy.
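The auto-context idea described in the methods is a cascade in which each random forest consumes the class probabilities produced by the previous one as extra (semantic) features. A schematic scikit-learn sketch with synthetic patch features; the feature selection and patch fusion steps of the paper are not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_auto_context(features, labels, n_stages=3):
    """Cascade of RFs: each stage appends the previous stage's class
    probabilities (context features) to the image features."""
    stages, context = [], None
    for _ in range(n_stages):
        x = features if context is None else np.hstack([features, context])
        rf = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(x, labels)
        context = rf.predict_proba(x)
        stages.append(rf)
    return stages

def predict_auto_context(stages, features):
    context = None
    for rf in stages:
        x = features if context is None else np.hstack([features, context])
        context = rf.predict_proba(x)
    return context.argmax(axis=1)

# Hypothetical patch features and voxel labels (0 = air, 1 = bone, 2 = soft tissue).
feats = np.random.rand(5000, 20)
labs = np.random.randint(0, 3, size=5000)
model = train_auto_context(feats, labs)
pred = predict_auto_context(model, feats)
```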
15. Wang J, Lv Y, Wang J, Ma F, Du Y, Fan X, Wang M, Ke J. Fully automated segmentation in temporal bone CT with neural network: a preliminary assessment study. BMC Med Imaging 2021; 21:166. PMID: 34753454; PMCID: PMC8576911; DOI: 10.1186/s12880-021-00698-x.
Abstract
BACKGROUND Segmentation of important structures in temporal bone CT is the basis of image-guided otologic surgery. Manual segmentation of temporal bone CT is time-consuming and laborious. We assessed the feasibility and generalization ability of a proposed deep learning model for automated segmentation of critical structures in temporal bone CT scans. METHODS Thirty-nine temporal bone CT volumes including 58 ears were divided into normal (n = 20) and abnormal (n = 38) groups. Ossicular chain disruption (n = 10), facial nerve covering the vestibular window (n = 10), and Mondini dysplasia (n = 18) were included in the abnormal group. All facial nerves, auditory ossicles, and labyrinths of the normal group were manually segmented. For the abnormal group, aberrant structures were manually segmented. Temporal bone CT data were fed into the network in unannotated form. The Dice coefficient (DC) and average symmetric surface distance (ASSD) were used to evaluate the accuracy of automatic segmentation. RESULTS In the normal group, the mean values of DC and ASSD were, respectively, 0.703 and 0.250 mm for the facial nerve; 0.910 and 0.081 mm for the labyrinth; and 0.855 and 0.107 mm for the ossicles. In the abnormal group, the mean values of DC and ASSD were, respectively, 0.506 and 1.049 mm for the malformed facial nerve; 0.775 and 0.298 mm for the deformed labyrinth; and 0.698 and 1.385 mm for the aberrant ossicles. CONCLUSIONS The proposed model has good generalization ability, which highlights the promise of this approach for otologist education, disease diagnosis, and preoperative planning for image-guided otologic surgery.
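ASSD, the surface metric reported above, measures the average distance between the boundaries of the automatic and reference masks. A sketch using SciPy distance transforms; toy spherical masks and an assumed voxel spacing stand in for real labyrinth segmentations.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def assd(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (mm) between two binary masks."""
    surf_a = mask_a ^ binary_erosion(mask_a)
    surf_b = mask_b ^ binary_erosion(mask_b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    d_ab, d_ba = dist_to_b[surf_a], dist_to_a[surf_b]
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)

# Toy usage: two overlapping spheres as stand-ins for two labyrinth segmentations.
zz, yy, xx = np.mgrid[:64, :64, :64]
a = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
b = (zz - 34) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
print(f"ASSD = {assd(a, b, spacing=(0.4, 0.4, 0.4)):.3f} mm")
```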
Collapse
Affiliation(s)
- Jiang Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Yi Lv
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
| | - Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
| | - Furong Ma
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Yali Du
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Xin Fan
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Menglin Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
| | - Jia Ke
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China.
| |
Collapse
|
16
|
Peterson MR, Cherukuri V, Paulson JN, Ssentongo P, Kulkarni AV, Warf BC, Monga V, Schiff SJ. Normal childhood brain growth and a universal sex and anthropomorphic relationship to cerebrospinal fluid. J Neurosurg Pediatr 2021; 28:458-468. [PMID: 34243147 PMCID: PMC8594737 DOI: 10.3171/2021.2.peds201006] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Accepted: 02/19/2021] [Indexed: 11/23/2022]
Abstract
OBJECTIVE The study of brain size and growth has a long and contentious history, yet normal brain volume development has yet to be fully described. In particular, the normal brain growth and cerebrospinal fluid (CSF) accumulation relationship is critical to characterize because it is impacted in numerous conditions of early childhood in which brain growth and fluid accumulation are affected, such as infection, hemorrhage, hydrocephalus, and a broad range of congenital disorders. The authors of this study aim to describe normal brain volume growth, particularly in the setting of CSF accumulation. METHODS The authors analyzed 1067 magnetic resonance imaging scans from 505 healthy pediatric subjects from birth to age 18 years to quantify component and regional brain volumes. The volume trajectories were compared between the sexes and hemispheres using smoothing spline ANOVA. Population growth curves were developed using generalized additive models for location, scale, and shape. RESULTS Brain volume peaked at 10-12 years of age. Males exhibited larger age-adjusted total brain volumes than females, and body size normalization procedures did not eliminate this difference. The ratio of brain to CSF volume, however, revealed a universal age-dependent relationship independent of sex or body size. CONCLUSIONS These findings enable the application of normative growth curves in managing a broad range of childhood diseases in which cognitive development, brain growth, and fluid accumulation are interrelated.
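As an illustration only, the sketch below fits a smooth age-dependent curve to a synthetic brain-to-CSF volume ratio with a smoothing spline; it is a stand-in for, not a reproduction of, the smoothing spline ANOVA and GAMLSS percentile modelling used by the authors, and all data are simulated.

```python
# Illustrative smoothing-spline fit of a brain-to-CSF ratio versus age (synthetic data).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
age = np.sort(rng.uniform(0, 18, 300))                            # age in years
ratio = 25 * np.exp(-age / 2.0) + 8 + rng.normal(0, 1, age.size)  # synthetic brain/CSF ratio

spline = UnivariateSpline(age, ratio, s=len(age))   # smoothing spline fit
grid = np.linspace(0, 18, 100)
fitted = spline(grid)                               # smooth normative-style curve
```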
Collapse
Affiliation(s)
- Mallory R. Peterson
- Center for Neural Engineering, The Pennsylvania State University, University Park
- Department of Engineering Science and Mechanics, The Pennsylvania State University, University Park
- The Pennsylvania State University College of Medicine, Hershey, Pennsylvania
| | - Venkateswararao Cherukuri
- Center for Neural Engineering, The Pennsylvania State University, University Park
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park
| | - Joseph N. Paulson
- Department of Biostatistics, Product Development, Genentech Inc., South San Francisco, California
| | - Paddy Ssentongo
- Department of Engineering Science and Mechanics, The Pennsylvania State University, University Park
| | - Abhaya V. Kulkarni
- Department of Neurosurgery, University of Toronto
- Department of Neurosurgery, The Hospital for Sick Children, Toronto, Ontario, Canada
| | - Benjamin C. Warf
- Department of Neurosurgery, Harvard Medical School
- Department of Neurosurgery, Boston Children’s Hospital, Boston, Massachusetts
| | - Vishal Monga
- School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park
| | - Steven J. Schiff
- Center for Neural Engineering, The Pennsylvania State University, University Park
- Department of Engineering Science and Mechanics, The Pennsylvania State University, University Park
- Department of Neurosurgery, The Pennsylvania State University, University Park
- Department of Physics, The Pennsylvania State University, University Park
| |
Collapse
|
17
|
Sui Y, Afacan O, Gholipour A, Warfield SK. Fast and High-Resolution Neonatal Brain MRI Through Super-Resolution Reconstruction From Acquisitions With Variable Slice Selection Direction. Front Neurosci 2021; 15:636268. [PMID: 34220414 PMCID: PMC8242183 DOI: 10.3389/fnins.2021.636268] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Accepted: 05/19/2021] [Indexed: 12/18/2022] Open
Abstract
The neonatal brain is small in comparison to the adult brain. Imaging at typical resolutions such as one cubic mm incurs more partial volume artifacts in a neonate than in an adult. The interpretation and analysis of MRI of the neonatal brain benefit from a reduction in partial volume averaging that can be achieved with high spatial resolution. Unfortunately, direct acquisition of high spatial resolution MRI is slow, which increases the potential for motion artifact, and suffers from reduced signal-to-noise ratio. The purpose of this study was thus to use super-resolution reconstruction in conjunction with fast imaging protocols to construct neonatal brain MR images with a suitable signal-to-noise ratio and a higher spatial resolution than can practically be obtained by direct Fourier encoding. We achieved high-quality brain MRI at an isotropic spatial resolution of 0.4 mm with 6 min of imaging time, using super-resolution reconstruction from three short-duration scans with variable directions of slice selection. Motion compensation was achieved by aligning the three short-duration scans together. We applied this technique to 20 newborns and assessed the quality of the reconstructed images. Experiments show that our approach to super-resolution reconstruction achieved considerable improvement in spatial resolution and signal-to-noise ratio while substantially reducing scan times compared with direct high-resolution acquisitions. These results demonstrate that our approach allows for fast, high-quality neonatal brain MRI for both scientific research and clinical studies.
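The toy sketch below (not the authors' reconstruction algorithm) conveys the intuition of combining three anisotropic acquisitions with different slice-selection directions into one isotropic volume by interpolating each scan to the target grid and averaging; a real super-resolution reconstruction additionally models the slice profile, solves an inverse problem, and compensates for motion, all of which are omitted here.

```python
# Crude illustration of fusing three scans with different slice-selection axes.
import numpy as np
from scipy.ndimage import zoom

iso = np.random.rand(80, 80, 80)                 # stand-in "true" isotropic volume

def acquire(vol, axis, factor=4):
    """Simulate thick slices along one axis by keeping every 4th slice."""
    slicer = [slice(None)] * 3
    slicer[axis] = slice(None, None, factor)
    return vol[tuple(slicer)]

scans = [acquire(iso, ax) for ax in range(3)]    # three anisotropic acquisitions

# Interpolate each scan back to the isotropic grid and average.
recon = np.zeros_like(iso)
for ax, scan in enumerate(scans):
    factors = [1.0, 1.0, 1.0]
    factors[ax] = iso.shape[ax] / scan.shape[ax]
    recon += zoom(scan, factors, order=1)
recon /= len(scans)
```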
Collapse
Affiliation(s)
- Yao Sui
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
| | - Onur Afacan
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
| | - Ali Gholipour
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
| | - Simon K. Warfield
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
| |
Collapse
|
18
|
Shaari H, Kevrić J, Jukić S, Bešić L, Jokić D, Ahmed N, Rajs V. Deep Learning-Based Studies on Pediatric Brain Tumors Imaging: Narrative Review of Techniques and Challenges. Brain Sci 2021; 11:brainsci11060716. [PMID: 34071202 PMCID: PMC8230188 DOI: 10.3390/brainsci11060716] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 05/10/2021] [Accepted: 05/17/2021] [Indexed: 11/16/2022] Open
Abstract
The diagnosis of brain tumors in children is a scientific concern due to the rapid anatomical, metabolic, and functional changes arising in the brain and to non-specific or conflicting imaging results. In clinical practice, the diagnosis of pediatric brain tumors typically relies on diagnostic clues such as child age, tumor location and incidence, clinical history, and imaging (magnetic resonance imaging [MRI]/computed tomography [CT]) findings. The implementation of deep learning has rapidly propagated in almost every field in recent years, particularly in the evaluation of medical images. In view of the vast spectrum of other applications of deep learning, this review addresses only critical deep learning issues specific to pediatric brain tumor imaging research. The purpose of this review is to provide a detailed summary, beginning with a succinct guide to the types of pediatric brain tumors and pediatric brain tumor imaging techniques. We then summarize the scientific contributions to the field of pediatric brain tumor image processing and analysis. Finally, to establish open research issues and guidance for future work in this emerging area, we discuss the medical and technical limitations of deep learning-based approaches.
Collapse
Affiliation(s)
- Hala Shaari
- Department of Information Technologies, Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina;
| | - Jasmin Kevrić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina; (J.K.); (S.J.); (L.B.); (D.J.)
| | - Samed Jukić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina; (J.K.); (S.J.); (L.B.); (D.J.)
| | - Larisa Bešić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina; (J.K.); (S.J.); (L.B.); (D.J.)
| | - Dejan Jokić
- Faculty of Engineering and Natural Sciences, International BURCH University, 71000 Sarajevo, Bosnia and Herzegovina; (J.K.); (S.J.); (L.B.); (D.J.)
| | - Nuredin Ahmed
- Control Department, Technical Computer College Tripoli, Tripoli 00218, Libya;
| | - Vladimir Rajs
- Department of Power, Electronics and Telecommunication Engineering, Faculty of Technical Science, University of Novi Sad, 21000 Novi Sad, Serbia
| |
Collapse
|
19
|
Comparison of Multispectral Image-Processing Methods for Brain Tissue Classification in BrainWeb Synthetic Data and Real MR Images. BIOMED RESEARCH INTERNATIONAL 2021; 2021:9820145. [PMID: 33748284 PMCID: PMC7959972 DOI: 10.1155/2021/9820145] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 01/28/2021] [Accepted: 02/08/2021] [Indexed: 11/30/2022]
Abstract
Accurate quantification of brain tissue is a fundamental and challenging task in neuroimaging. Over the past two decades, statistical parametric mapping (SPM) and FMRIB's Automated Segmentation Tool (FAST) have been widely used to estimate gray matter (GM) and white matter (WM) volumes. However, they cannot reliably estimate cerebrospinal fluid (CSF) volumes. To address this problem, we developed the TRIO algorithm (TRIOA), a new magnetic resonance (MR) multispectral classification method. SPM8, SPM12, FAST, and the TRIOA were evaluated using the BrainWeb database and real magnetic resonance imaging (MRI) data. In this paper, the MR brain images of 140 healthy volunteers (51.5 ± 15.8 y/o) were obtained using a whole-body 1.5 T MRI system (Aera, Siemens, Erlangen, Germany). Before classification, several preprocessing steps were performed, including skull stripping and motion and inhomogeneity correction. After extensive experimentation, the TRIOA was shown to be more effective than SPM and FAST. For the real data, all test methods revealed that participants aged 20–83 years exhibited an age-associated decline in GM and WM volume fractions. However, for CSF volume estimation, SPM8-s and SPM12-m produced results that differed from each other and from those obtained with FAST and the TRIOA. Furthermore, the TRIOA performed consistently better than both SPM and FAST for GM, WM, and CSF volume estimation. Compared with SPM and FAST, the proposed TRIOA provides more accurate MR brain tissue classification and volume measurements, particularly for CSF volume estimation.
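The quantity compared across SPM, FAST, and the TRIOA above is, at its core, a set of tissue volumes and volume fractions derived from a labelled segmentation. A minimal sketch follows; the label values and voxel size are illustrative assumptions, not details from the paper.

```python
# Tissue volume and volume-fraction computation from a labelled segmentation.
import numpy as np

def tissue_fractions(label_map, labels=(1, 2, 3), voxel_volume_mm3=1.0):
    """Return {label: (volume_mm3, fraction_of_brain)} for the given tissue labels."""
    counts = {lab: int((label_map == lab).sum()) for lab in labels}
    total = sum(counts.values())
    return {lab: (c * voxel_volume_mm3, c / total) for lab, c in counts.items()}

# Toy usage on a random segmentation (1 = GM, 2 = WM, 3 = CSF, 0 = background).
seg = np.random.default_rng(2).integers(0, 4, size=(64, 64, 64))
print(tissue_fractions(seg, voxel_volume_mm3=1.0))
```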
Collapse
|
20
|
Raurale SA, Boylan GB, Mathieson S, Marnane WP, Lightbody G, O'Toole JM. Grading hypoxic-ischemic encephalopathy in neonatal EEG with convolutional neural networks and quadratic time-frequency distributions. J Neural Eng 2021; 18. [PMID: 33618337 PMCID: PMC8208632 DOI: 10.1088/1741-2552/abe8ae] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Accepted: 02/22/2021] [Indexed: 12/12/2022]
Abstract
OBJECTIVE To develop an automated system to classify the severity of hypoxic-ischaemic encephalopathy (HIE) injury in neonates from the background electroencephalogram (EEG). METHOD By combining a quadratic time-frequency distribution (TFD) with a convolutional neural network, we develop a system that classifies 4 EEG grades of HIE. The network learns directly from the two-dimensional TFD through 3 independent layers with convolution in the time, frequency, and time-frequency directions. Computationally efficient algorithms make it feasible to transform each 5-minute epoch to the time-frequency domain by controlling for oversampling, reducing both computation and memory requirements. The system is developed on EEG recordings from 54 neonates. The system is then validated on a large unseen dataset of 338 hours of EEG recordings from 91 neonates obtained across multiple international centres. RESULTS The proposed EEG HIE-grading system achieves a leave-one-subject-out testing accuracy of 88.9% and kappa of 0.84 on the development dataset. Accuracy for the large unseen test dataset is 69.5% (95% confidence interval, CI: 65.3 to 73.6%) with kappa of 0.54, which is a significant (P < 0.001) improvement over a state-of-the-art feature-based method with an accuracy of 56.8% (95% CI: 51.4 to 61.7%) and kappa of 0.39. Performance of the proposed system was unaffected when the number of channels in testing was reduced from 8 to 2: accuracy for the large validation dataset remained at 69.5% (95% CI: 65.5 to 74.0%). SIGNIFICANCE The proposed system outperforms state-of-the-art machine learning algorithms for EEG grade classification on a large multi-centre unseen dataset, indicating the potential to assist clinical decision making for neonates with HIE.
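As a hedged sketch of the pipeline's inputs and scoring (not the authors' implementation), the code below computes a time-frequency image for a simulated 5-minute EEG epoch, using an ordinary spectrogram as a stand-in for the quadratic TFD, and scores a hypothetical 4-grade output with accuracy and Cohen's kappa, the two metrics reported in the abstract. The sampling rate and grade labels are assumptions.

```python
# Time-frequency input plus accuracy/kappa scoring for a 4-grade EEG classifier (sketch).
import numpy as np
from scipy.signal import spectrogram
from sklearn.metrics import accuracy_score, cohen_kappa_score

fs = 64                                     # Hz after downsampling (illustrative)
epoch = np.random.randn(5 * 60 * fs)        # one 5-minute single-channel EEG epoch

# Time-frequency image the network would consume (shape: frequencies x times).
f, t, tf_image = spectrogram(epoch, fs=fs, nperseg=2 * fs, noverlap=fs)

# Scoring hypothetical 4-grade predictions against reference grades.
y_true = np.array([0, 1, 2, 3, 1, 2, 0, 3])
y_pred = np.array([0, 1, 2, 2, 1, 2, 0, 3])
print(accuracy_score(y_true, y_pred), cohen_kappa_score(y_true, y_pred))
```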
Collapse
Affiliation(s)
- Sumit Arun Raurale
- Pediatrics and child health, INFANT Centre, University College Cork, Cork, Cork, T12 DC4A, IRELAND
| | - Geraldine B Boylan
- Department of Paediatrics and Child Health, University College Cork, Cork, IRELAND
| | - Sean Mathieson
- Paediatric and Child Health, INFANT Centre, University College Cork, Wilton, Cork, T12 DC4A, IRELAND
| | - W P Marnane
- Department of Electrical Engineering and Microelectronics, University College Cork, College Road, Cork, T12 DC4A, IRELAND
| | - Gordon Lightbody
- Department of Electrical Engineering and Microelectronics, University College Cork, College Road, Cork, T12 DC4A, IRELAND
| | - John M O'Toole
- Irish Centre for Fetal and Neonatal Translational Research, Dept. of Paediatrics and Child Health, University College Cork, National University of Ireland, Western Gateway Building, Western Road, Cork, IRELAND
| |
Collapse
|
21
|
Toğaçar M, Ergen B, Cömert Z. Tumor type detection in brain MR images of the deep model developed using hypercolumn technique, attention modules, and residual blocks. Med Biol Eng Comput 2021; 59:57-70. [PMID: 33222016 DOI: 10.1007/s11517-020-02290-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 11/11/2020] [Indexed: 05/19/2023]
Abstract
Brain cancer is a disease caused by the growth of abnormal, aggressive cells in the brain outside of normal cells. The diagnosis of brain cancer is producing more accurate results day by day in parallel with technological developments. In this study, a deep learning model called BrainMRNet, developed for mass detection in open-source brain magnetic resonance images, was used. The BrainMRNet model includes three processing steps: attention modules, the hypercolumn technique, and residual blocks. To demonstrate the accuracy of the proposed model, three types of tumor leading to brain cancer were examined: glioma, meningioma, and pituitary. In addition, a segmentation method was proposed that determines in which lobe of the brain the two classes of tumors that cause brain cancer are more concentrated. The classification accuracy rates achieved in the study were 98.18% for glioma, 96.73% for meningioma, and 98.18% for pituitary tumors. At the end of the experiment, using the subset of glioma and meningioma tumor images, the brain lobe in which the tumor region was located was determined, and 100% success was achieved in this analysis. In summary, a hybrid deep learning model is presented for brain tumor detection. In addition, open-source software was proposed that statistically determines in which lobe of the human brain the tumor occurred. The methods applied and tested in the experiments have shown promising results with a high level of accuracy, precision, and specificity. These results demonstrate the suitability of the proposed approach for clinical settings to support medical decisions regarding brain tumor detection.
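To make the named building blocks concrete, the sketch below implements a generic residual block with a simple channel-attention module in PyTorch; it is illustrative only, is not the BrainMRNet architecture, and all layer sizes are arbitrary assumptions.

```python
# Generic residual block with squeeze-and-excitation style channel attention (sketch).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature-map channels using a small two-layer gate."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = x.mean(dim=(2, 3))                    # global average pool -> (B, C)
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                              # channel-wise reweighting

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels))
        self.attn = ChannelAttention(channels)

    def forward(self, x):
        return torch.relu(x + self.attn(self.body(x)))

# Toy usage on one 224x224 feature map with 32 channels.
block = ResidualBlock(32)
out = block(torch.randn(1, 32, 224, 224))
```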
Collapse
Affiliation(s)
- Mesut Toğaçar
- Department of Computer Technology, Technical Sciences Vocational School, Fırat University, Elazig, Turkey.
| | - Burhan Ergen
- Department of Computer Technology, Technical Sciences Vocational School, Fırat University, Elazig, Turkey
| | - Zafer Cömert
- Department of Software Engineering, Faculty of Engineering, Samsun University, Samsun, Turkey
| |
Collapse
|