1
Daskareh M, Vakilpour A, Barzegar-Golmoghani E, Esmaeilian S, Gilanchi S, Ezzati F, Alikhani M, Rahmanipour E, Amini N, Ghorbani M, Pezeshk P. Predicting Rheumatoid Arthritis Development Using Hand Ultrasound and Machine Learning-A Two-Year Follow-Up Cohort Study. Diagnostics (Basel) 2024; 14:1181. [PMID: 38893708] [PMCID: PMC11171890] [DOI: 10.3390/diagnostics14111181]
Abstract
BACKGROUND The early diagnosis and treatment of rheumatoid arthritis (RA) are essential to prevent joint damage and enhance patient outcomes. Diagnosing RA in its early stages is challenging due to the nonspecific and variable clinical signs and symptoms. Our study aimed to identify the most predictive features of hand ultrasound (US) for RA development and assess the performance of machine learning models in diagnosing preclinical RA. METHODS We conducted a prospective cohort study with 326 adults who had experienced hand joint pain for less than 12 months and no clinical arthritis. We assessed the participants clinically and via hand US at baseline and followed them for 24 months. Clinical progression to RA was defined according to the ACR/EULAR criteria. Regression modeling and machine learning approaches were used to analyze the predictive US features. RESULTS Of the 326 participants (45.10 ± 11.37 years/83% female), 123 (37.7%) developed clinical RA during follow-up. At baseline, 84.6% of the progressors had US synovitis, whereas 16.3% of the non-progressors did (p < 0.0001). Only 5.7% of the progressors had positive PD. Multivariate analysis revealed that the radiocarpal synovial thickness (OR = 39.8), PIP/MCP synovitis (OR = 68 and 39), and wrist effusion (OR = 12.56) on US significantly increased the odds of developing RA. ML confirmed these US features, along with the RF and anti-CCP levels, as the most important predictors of RA. CONCLUSIONS Hand US can identify preclinical synovitis and determine the RA risk. The radiocarpal synovial thickness, PIP/MCP synovitis, wrist effusion, and RF and anti-CCP levels are associated with RA development.
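The univariate association between baseline US synovitis and progression can be illustrated directly from the percentages reported above. A minimal sketch: the counts are reconstructed from the rounded percentages, so the result is approximate, and the ORs quoted in the abstract come from a multivariate model, not this 2x2 calculation:

```python
# Unadjusted odds ratio for baseline US synovitis -> RA progression,
# with counts reconstructed from the abstract's rounded percentages.
progressors = 123            # developed clinical RA during follow-up
non_progressors = 326 - 123  # did not

syn_prog = round(0.846 * progressors)      # ~104 progressors with synovitis
syn_non = round(0.163 * non_progressors)   # ~33 non-progressors with synovitis

# 2x2 table cells: exposure (synovitis) x outcome (progression)
a, b = syn_prog, progressors - syn_prog        # progressors with / without synovitis
c, d = syn_non, non_progressors - syn_non      # non-progressors with / without

odds_ratio = (a * d) / (b * c)
print(f"unadjusted OR ~ {odds_ratio:.1f}")
```

The unadjusted estimate is of the same order as the adjusted ORs reported for individual US features.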
Affiliation(s)
- Mahyar Daskareh
- Department of Radiology, University of California San Diego, San Diego, CA 92093, USA
- Azin Vakilpour
- Division of Cardiovascular Diseases, Department of Medicine, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, USA
- Saeid Esmaeilian
- Department of Radiology, Shiraz University of Medical Sciences, Shiraz 71348, Iran
- Samira Gilanchi
- Proteomics Research Center, Shahid Beheshti University of Medical Sciences, Tehran 19839-63113, Iran
- Fatemeh Ezzati
- Division of Rheumatic Disease, Department of Internal Medicine, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Majid Alikhani
- Department of Internal Medicine, Rheumatology Research Center, Shariati Hospital, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Elham Rahmanipour
- Immunology Research Center, Mashhad University of Medical Sciences, Mashhad 91779-48564, Iran
- Niloofar Amini
- Department of Internal Medicine, Rheumatology Research Center, Shariati Hospital, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Mohammad Ghorbani
- Orthopedic Research Center, Mashhad University of Medical Sciences, Mashhad 91779-48564, Iran
- Parham Pezeshk
- Division of Musculoskeletal Imaging, Department of Radiology, UT Southwestern Medical Center, Dallas, TX 75390, USA
2
Sabeti M, Alikhani S, Shakoor M, Boostani R, Moradi E. Automatic determination of ventricular indices in hydrocephalic pediatric brain CT scan. Interdisciplinary Neurosurgery 2023. [DOI: 10.1016/j.inat.2022.101675]
3
Gu D, Shi F, Hua R, Wei Y, Li Y, Zhu J, Zhang W, Zhang H, Yang Q, Huang P, Jiang Y, Bo B, Li Y, Zhang Y, Zhang M, Wu J, Shi H, Liu S, He Q, Zhang Q, Zhang X, Wei H, Liu G, Xue Z, Shen D. An artificial-intelligence-based age-specific template construction framework for brain structural analysis using magnetic resonance images. Hum Brain Mapp 2022; 44:861-875. [PMID: 36269199] [PMCID: PMC9875934] [DOI: 10.1002/hbm.26126]
Abstract
It is an essential task to construct brain templates and analyze their anatomical structures in neurological and cognitive science. Generally, templates constructed from magnetic resonance imaging (MRI) of a group of subjects can provide a standard reference space for analyzing the structural and functional characteristics of the group. With recent development of artificial intelligence (AI) techniques, it is desirable to explore AI registration methods for quantifying age-specific brain variations and tendencies across different ages. In this article, we present an AI-based age-specific template construction (called ASTC) framework for longitudinal structural brain analysis using T1-weighted MRIs of 646 subjects from 18 to 82 years old collected from four medical centers. Altogether, 13 longitudinal templates were constructed at a 5-year age interval using ASTC, and tissue segmentation and substructure parcellation were performed for analysis across different age groups. The results indicated consistent changes in brain structures along with aging and demonstrated the capability of ASTC for longitudinal neuroimaging study.
Affiliation(s)
- Dongdong Gu
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Rui Hua
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Wei
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yufei Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- School of Mathematics and Computer Science, Chifeng University, Chifeng, China
- Jiayu Zhu
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Weijun Zhang
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Han Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Institute of Brain-Intelligence Technology, Zhangjiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai Center of Brain-Intelligence Engineering, Shanghai, China
- Qing Yang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Institute of Brain-Intelligence Technology, Zhangjiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai Center of Brain-Intelligence Engineering, Shanghai, China
- Peiyu Huang
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yi Jiang
- Institute of Brain-Intelligence Technology, Zhangjiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai Center of Brain-Intelligence Engineering, Shanghai, China
- Bin Bo
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yao Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yaoyu Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Minming Zhang
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jinsong Wu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital, Shanghai, China
- Medical College, Fudan University, Shanghai, China
- Hongcheng Shi
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Siwei Liu
- Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Qiang He
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Qiang Zhang
- Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Xu Zhang
- Institute of Brain-Intelligence Technology, Zhangjiang Lab, Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai Center of Brain-Intelligence Engineering, Shanghai, China
- Hongjiang Wei
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhong Xue
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
4
Casamitjana A, Iglesias JE. High-resolution atlasing and segmentation of the subcortex: Review and perspective on challenges and opportunities created by machine learning. Neuroimage 2022; 263:119616. [PMID: 36084858] [DOI: 10.1016/j.neuroimage.2022.119616]
Abstract
This paper reviews almost three decades of work on atlasing and segmentation methods for subcortical structures in human brain MRI. In writing this survey, we have three distinct aims. First, to document the evolution of digital subcortical atlases of the human brain, from the early MRI templates published in the nineties, to the complex multi-modal atlases at the subregion level that are available today. Second, to provide a detailed record of related efforts in the automated segmentation front, from earlier atlas-based methods to modern machine learning approaches. And third, to present a perspective on the future of high-resolution atlasing and segmentation of subcortical structures in in vivo human brain MRI, including open challenges and opportunities created by recent developments in machine learning.
Affiliation(s)
- Adrià Casamitjana
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
5
A Fuzzy Consensus Clustering Algorithm for MRI Brain Tissue Segmentation. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12157385]
Abstract
Brain tissue segmentation is an important component of the clinical diagnosis of brain diseases using multi-modal magnetic resonance imaging (MRI). Brain tissue segmentation has been developed by many unsupervised methods in the literature. The most commonly used unsupervised methods are K-Means, Expectation-Maximization, and Fuzzy Clustering. Fuzzy clustering methods offer considerable benefits compared with the aforementioned methods, as they are capable of handling brain images that are complex, largely uncertain, and imprecise. However, this approach suffers from the intrinsic noise and intensity inhomogeneity (IIH) in the data resulting from the acquisition process. To resolve these issues, we propose a fuzzy consensus clustering algorithm that defines a membership function resulting from a voting schema to cluster the pixels. In particular, we first pre-process the MRI data and employ several segmentation techniques based on traditional fuzzy sets and intuitionistic sets. Then, we adopt a voting schema to fuse the results of the applied clustering methods. Finally, to evaluate the proposed method, we use well-known performance measures (boundary, overlap, and volume measures) on two publicly available datasets (OASIS and IBSR18). The experimental results show the superior performance of the proposed method in comparison with the recent state of the art. The performance of the proposed method is also demonstrated on a real-world Autism Spectrum Disorder detection problem, with better accuracy than other existing methods.
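The voting-based fusion step can be sketched as below. This is an illustrative reimplementation, not the authors' code, and it assumes the cluster labels produced by the individual methods have already been matched to a common labeling:

```python
import numpy as np

def consensus_membership(label_maps, n_classes):
    """Fuse several hard clusterings of the same image by voting.

    label_maps: list of integer label arrays (same shape), labels in 0..n_classes-1.
    Returns (membership, consensus): membership[..., k] is the fraction of
    methods that voted for class k (the voting-derived membership function),
    and consensus is the per-pixel majority label.
    """
    stack = np.stack(label_maps)  # (n_methods, *image_shape)
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_classes)], axis=-1)
    membership = votes / stack.shape[0]
    return membership, membership.argmax(axis=-1)

# Toy 2x2 "image" clustered by three methods into CSF(0)/GM(1)/WM(2)
maps = [np.array([[0, 1], [2, 1]]),
        np.array([[0, 1], [2, 2]]),
        np.array([[0, 2], [2, 1]])]
membership, consensus = consensus_membership(maps, 3)
print(consensus)  # majority label per pixel
```

A real pipeline would first align the arbitrary cluster labels of each method (e.g. by overlap with a reference) before voting.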
6
De Asis-Cruz J, Krishnamurthy D, Jose C, Cook KM, Limperopoulos C. FetalGAN: Automated Segmentation of Fetal Functional Brain MRI Using Deep Generative Adversarial Learning and Multi-Scale 3D U-Net. Front Neurosci 2022; 16:887634. [PMID: 35747213] [PMCID: PMC9209698] [DOI: 10.3389/fnins.2022.887634]
Abstract
An important step in the preprocessing of resting state functional magnetic resonance images (rs-fMRI) is the separation of brain from non-brain voxels. Widely used imaging tools such as FSL's BET2 and AFNI's 3dSkullStrip accomplish this task effectively in children and adults. In fetal functional brain imaging, however, the presence of maternal tissue around the brain coupled with the non-standard position of the fetal head limit the usefulness of these tools. Accurate brain masks are thus generated manually, a time-consuming and tedious process that slows down preprocessing of fetal rs-fMRI. Recently, deep learning-based segmentation models such as convolutional neural networks (CNNs) have been increasingly used for automated segmentation of medical images, including the fetal brain. Here, we propose a computationally efficient end-to-end generative adversarial neural network (GAN) for segmenting the fetal brain. This method, which we call FetalGAN, yielded whole brain masks that closely approximated the manually labeled ground truth. FetalGAN performed better than the 3D U-Net model and BET2: FetalGAN, Dice score = 0.973 ± 0.013, precision = 0.977 ± 0.015; 3D U-Net, Dice score = 0.954 ± 0.054, precision = 0.967 ± 0.037; BET2, Dice score = 0.856 ± 0.084, precision = 0.758 ± 0.113. FetalGAN was also faster than 3D U-Net and the manual method (7.35 s vs. 10.25 s vs. ∼5 min/volume). To the best of our knowledge, this is the first successful implementation of a 3D CNN with a GAN on fetal fMRI brain images, and it represents a significant advance in fully automating the processing of fetal rs-fMRI images.
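For reference, the Dice score and precision used to compare the masks can be computed from binary arrays as follows (a generic sketch of the standard definitions, not the authors' evaluation code):

```python
import numpy as np

def dice_and_precision(pred, truth):
    """Dice = 2|P∩T| / (|P|+|T|); precision = |P∩T| / |P| for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    precision = inter / pred.sum()
    return dice, precision

# Toy 2x3 masks: prediction misses one truth voxel, has no false positives
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 1, 1]])
dice, precision = dice_and_precision(pred, truth)
print(f"Dice={dice:.3f}, precision={precision:.3f}")
```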
Affiliation(s)
- Josepheen De Asis-Cruz
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Dhineshvikram Krishnamurthy
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Chris Jose
- Department of Computer Science, University of Maryland, College Park, MD, United States
- Kevin M. Cook
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
- Catherine Limperopoulos
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
7
SVF-Net: spatial and visual feature enhancement network for brain structure segmentation. Appl Intell 2022. [DOI: 10.1007/s10489-022-03706-x]
8
Konar D, Bhattacharyya S, Dey S, Panigrahi BK. Optimized activation for quantum-inspired self-supervised neural network based fully automated brain lesion segmentation. Appl Intell 2022. [DOI: 10.1007/s10489-021-03108-5]
9
Arabahmadi M, Farahbakhsh R, Rezazadeh J. Deep Learning for Smart Healthcare-A Survey on Brain Tumor Detection from Medical Imaging. Sensors (Basel) 2022; 22:1960. [PMID: 35271115] [PMCID: PMC8915095] [DOI: 10.3390/s22051960]
Abstract
Advances in technology affect all aspects of human life, and their use in medicine has made significant contributions to society. In this article, we focus on technology assistance for one of the most common and deadly diseases: brain tumors. Many people die of brain tumors every year; according to an estimate from the "braintumor" website, about 700,000 people in the U.S. have primary brain tumors, with about 85,000 new cases added each year. To address this problem, artificial intelligence has come to the aid of medicine. Magnetic resonance imaging (MRI) is the most common method for diagnosing brain tumors, and it is widely used in medical imaging and image processing to detect abnormalities in different parts of the body. In this study, we conducted a comprehensive review of existing efforts to apply different types of deep learning methods to MRI data, identified the existing challenges in the domain, and outlined potential future directions. Convolutional neural networks (CNNs) are a branch of deep learning that has been very successful in processing medical images; accordingly, this survey reviews various CNN architectures with a focus on medical image processing, especially brain MRI.
Affiliation(s)
- Reza Farahbakhsh
- Institut Polytechnique de Paris, Telecom SudParis, 91000 Evry, France
- Javad Rezazadeh
- North Tehran Branch, Azad University, Tehran 1667914161, Iran
- Kent Institute Australia, Sydney, NSW 2000, Australia
10
Gordon S, Kodner B, Goldfryd T, Sidorov M, Goldberger J, Raviv TR. An atlas of classifiers-a machine learning paradigm for brain MRI segmentation. Med Biol Eng Comput 2021; 59:1833-1849. [PMID: 34313921] [DOI: 10.1007/s11517-021-02414-x]
Abstract
We present the Atlas of Classifiers (AoC)-a conceptually novel framework for brain MRI segmentation. The AoC is a spatial map of voxel-wise multinomial logistic regression (LR) functions learned from the labeled data. Upon convergence, the resulting fixed LR weights, a few for each voxel, represent the training dataset. It can, therefore, be considered a light-weight learning machine which, despite its low capacity, does not underfit the problem. The AoC construction is independent of the actual intensities of the test images, providing the flexibility to train it on the available labeled data and use it for the segmentation of images from different datasets and modalities. In this sense, it also does not overfit the training data. The proposed method has been applied to numerous publicly available datasets for the segmentation of brain MRI tissues and is shown to be robust to noise and to outperform commonly used methods. Promising results were also obtained for multi-modal, cross-modality MRI segmentation. Finally, we show how an AoC trained on brain MRIs of healthy subjects can be exploited for lesion segmentation of multiple sclerosis patients.
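The core idea, a fixed multinomial LR classifier per voxel applied to test-image features, can be sketched as a per-voxel softmax over learned weights. This illustrates the concept only; the shapes and the absence of a bias term are simplifying assumptions, not the published implementation:

```python
import numpy as np

def aoc_predict(features, weights):
    """Per-voxel multinomial logistic regression.

    features: (n_voxels, n_features) image-derived features per voxel.
    weights:  (n_voxels, n_classes, n_features) fixed, learned LR weights
              (the 'atlas of classifiers').
    Returns per-voxel tissue-class posteriors via softmax.
    """
    logits = np.einsum('vkf,vf->vk', weights, features)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
features = rng.normal(size=(5, 3))    # 5 voxels, 3 features each
weights = rng.normal(size=(5, 4, 3))  # 4 tissue classes
posteriors = aoc_predict(features, weights)
labels = posteriors.argmax(axis=1)    # hard segmentation
```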
Affiliation(s)
- Shiri Gordon
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Boris Kodner
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Tal Goldfryd
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Michael Sidorov
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Jacob Goldberger
- The Faculty of Electrical Engineering, Bar-Ilan University, Ramat-Gan, Israel
- Tammy Riklin Raviv
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
11
Quon JL, Han M, Kim LH, Koran ME, Cheng LC, Lee EH, Wright J, Ramaswamy V, Lober RM, Taylor MD, Grant GA, Cheshier SH, Kestle JRW, Edwards MS, Yeom KW. Artificial intelligence for automatic cerebral ventricle segmentation and volume calculation: a clinical tool for the evaluation of pediatric hydrocephalus. J Neurosurg Pediatr 2021; 27:131-138. [PMID: 33260138] [PMCID: PMC9707365] [DOI: 10.3171/2020.6.peds20251]
Abstract
OBJECTIVE Imaging evaluation of the cerebral ventricles is important for clinical decision-making in pediatric hydrocephalus. Although quantitative measurements of ventricular size, over time, can facilitate objective comparison, automated tools for calculating ventricular volume are not structured for clinical use. The authors aimed to develop a fully automated deep learning (DL) model for pediatric cerebral ventricle segmentation and volume calculation for widespread clinical implementation across multiple hospitals. METHODS The study cohort consisted of 200 children with obstructive hydrocephalus from four pediatric hospitals, along with 199 controls. Manual ventricle segmentation and volume calculation values served as "ground truth" data. An encoder-decoder convolutional neural network architecture, in which T2-weighted MR images were used as input, automatically delineated the ventricles and output volumetric measurements. On a held-out test set, segmentation accuracy was assessed using the Dice similarity coefficient (0 to 1) and volume calculation was assessed using linear regression. Model generalizability was evaluated on an external MRI data set from a fifth hospital. The DL model performance was compared against FreeSurfer research segmentation software. RESULTS Model segmentation performed with an overall Dice score of 0.901 (0.946 in hydrocephalus, 0.856 in controls). The model generalized to external MR images from a fifth pediatric hospital with a Dice score of 0.926. The model was more accurate than FreeSurfer, with faster operating times (1.48 seconds per scan). CONCLUSIONS The authors present a DL model for automatic ventricle segmentation and volume calculation that is more accurate and rapid than currently available methods. With near-immediate volumetric output and reliable performance across institutional scanner types, this model can be adapted to the real-time clinical evaluation of hydrocephalus and improve clinician workflow.
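Turning a segmentation like this into a volumetric measurement only requires the voxel spacing; a minimal generic sketch (not the authors' pipeline):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation in millilitres.

    mask: 3D binary array (1 = ventricle voxel).
    spacing_mm: (dz, dy, dx) voxel spacing in mm; 1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# A 6x6x6 block of ventricle voxels at 1 mm isotropic spacing
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:8, 2:8, 2:8] = 1  # 216 voxels
print(mask_volume_ml(mask, (1.0, 1.0, 1.0)))  # 216 mm^3 = 0.216 mL
```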
Affiliation(s)
- Jennifer L. Quon
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, California
- Michelle Han
- Stanford University School of Medicine, Stanford, California
- Lily H. Kim
- Stanford University School of Medicine, Stanford, California
- Mary Ellen Koran
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Leo C. Cheng
- Department of Urology, Stanford University School of Medicine, Stanford, California
- Edward H. Lee
- Department of Electrical Engineering, Stanford University, Stanford, California
- Jason Wright
- Department of Radiology, Seattle Children’s Hospital, University of Washington School of Medicine, Seattle, Washington
- Vijay Ramaswamy
- Department of Neurosurgery, The Hospital for Sick Children, University of Toronto, Ontario, Canada
- Robert M. Lober
- Department of Neurosurgery, Dayton Children’s Hospital, Wright State University Boonshoft School of Medicine, Dayton, Ohio
- Michael D. Taylor
- Department of Neurosurgery, University of Utah School of Medicine, Salt Lake City, Utah
- Gerald A. Grant
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, California
- Samuel H. Cheshier
- Department of Neurosurgery, University of Utah School of Medicine, Salt Lake City, Utah
- John R. W. Kestle
- Department of Neurosurgery, University of Utah School of Medicine, Salt Lake City, Utah
- Michael S.B. Edwards
- Department of Neurosurgery, Stanford University School of Medicine, Stanford, California
- Kristen W. Yeom
- Division of Pediatric Neurosurgery, Lucile Packard Children’s Hospital, Stanford, California
12
Maragkos GA, Filippidis AS, Chilamkurthy S, Salem MM, Tanamala S, Gomez-Paz S, Rao P, Moore JM, Papavassiliou E, Hackney D, Thomas AJ. Automated Lateral Ventricular and Cranial Vault Volume Measurements in 13,851 Patients Using Deep Learning Algorithms. World Neurosurg 2021; 148:e363-e373. [PMID: 33421645] [DOI: 10.1016/j.wneu.2020.12.148]
Abstract
BACKGROUND No large dataset-derived standard has been established for normal or pathologic human cerebral ventricular and cranial vault volumes. Automated volumetric measurements could be used to assist in diagnosis and follow-up of hydrocephalus or craniofacial syndromes. In this work, we use deep learning algorithms to measure ventricular and cranial vault volumes in a large dataset of head computed tomography (CT) scans. METHODS A cross-sectional dataset comprising 13,851 CT scans was used to deploy U-Net deep learning networks to segment and quantify lateral cerebral ventricular and cranial vault volumes in relation to age and sex. The models were validated against manual segmentations. Corresponding radiologic reports were annotated using a rule-based natural language processing framework to identify normal scans, cerebral atrophy, or hydrocephalus. RESULTS U-Net models had high fidelity to manual segmentations for lateral ventricular and cranial vault volume measurements (Dice index, 0.878 and 0.983, respectively). The natural language processing identified 6239 (44.7%) normal radiologic reports, 1827 (13.1%) with cerebral atrophy, and 1185 (8.5%) with hydrocephalus. Age-based and sex-based reference tables with medians, 25th and 75th percentiles for scans classified as normal, atrophy, and hydrocephalus were constructed. The median lateral ventricular volume in normal scans was significantly smaller compared with hydrocephalus (15.7 vs. 82.0 mL; P < 0.001). CONCLUSIONS This is the first study to measure lateral ventricular and cranial vault volumes in a large dataset, made possible with artificial intelligence. We provide a robust method to establish normal values for these volumes and a tool to report these on CT scans when evaluating for hydrocephalus.
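The reference tables described above amount to group-wise percentiles over the measured volumes. A minimal sketch of one table row; the volume lists are made-up illustrative values, chosen only so the medians echo the 15.7 vs. 82.0 mL gap reported in the abstract:

```python
import numpy as np

def reference_row(volumes_ml):
    """Median and interquartile bounds for one age/sex/diagnosis stratum."""
    p25, p50, p75 = np.percentile(volumes_ml, [25, 50, 75])
    return {"p25": p25, "median": p50, "p75": p75}

# Hypothetical lateral ventricular volumes (mL) for two strata
normal = [12.1, 14.9, 15.7, 16.4, 19.8]
hydro = [60.2, 75.5, 82.0, 90.1, 120.3]
row_n, row_h = reference_row(normal), reference_row(hydro)
print(row_n, row_h)
```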
Affiliation(s)
- Georgios A Maragkos
- Neurosurgery Service, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Aristotelis S Filippidis
- Neurosurgery Service, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Mohamed M Salem
- Neurosurgery Service, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Santiago Gomez-Paz
- Neurosurgery Service, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Justin M Moore
- Neurosurgery Service, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Efstathios Papavassiliou
- Neurosurgery Service, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- David Hackney
- Radiology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
- Ajith J Thomas
- Neurosurgery Service, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
13
Segmentation of MRI brain scans using spatial constraints and 3D features. Med Biol Eng Comput 2020; 58:3101-3112. [PMID: 33155095] [DOI: 10.1007/s11517-020-02270-1]
Abstract
This paper presents a novel unsupervised algorithm for brain tissue segmentation in magnetic resonance imaging (MRI). The proposed algorithm, named Gardens2, adopts a clustering approach to segment the voxels of a given MRI into three classes: cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM). Using an overlapping criterion, 3D feature descriptors, and prior atlas information, Gardens2 generates a segmentation mask per class in order to parcellate the brain tissues. We assessed our method using three neuroimaging datasets: BrainWeb, IBSR18, and IBSR20, the last two provided by the Internet Brain Segmentation Repository. Its performance was compared with eleven well-established and newly proposed unsupervised segmentation methods. Overall, Gardens2 obtained better segmentation performance than the rest of the methods in two of the three databases and competitive results when its performance was measured by class. Graphical Abstract: Brain tissue segmentation using 3D features and an adjusted atlas template.
14
Murugesan M, Ragavan D. An Intensity Variation Pattern Analysis Based Machine Learning Classifier for MRI Brain Tumor Detection. Curr Med Imaging 2020; 15:555-564. [PMID: 32008563] [DOI: 10.2174/1573405614666180718122353]
Abstract
BACKGROUND Accurate detection of tumors from Magnetic Resonance Images (MRIs) is a critical and demanding task in medical image processing, due to the varying shape and structure of the brain. Different segmentation approaches (manual, semi-automatic, and fully automatic) have been developed in the traditional works. Among them, fully automatic segmentation techniques are increasingly used by medical experts for efficient disease diagnosis, but they suffer from over-segmentation, increased complexity, and time consumption. OBJECTIVE To solve these problems, this paper aims to develop an efficient segmentation and classification system by incorporating novel image processing techniques. METHODS Here, the Distribution based Adaptive Median Filtering (DMAF) technique is employed for preprocessing the image. Then, skull removal is performed to extract the tumor portion from the filtered image. Further, the Neighborhood Differential Edge Detection (NDED) technique is implemented to cluster the tumor-affected pixels, which are then segmented using the Intensity Variation Pattern Analysis (IVPA) technique. Finally, normal and abnormal images are classified using the Weighted Machine Learning (WML) technique. RESULTS In experiments, the existing and proposed segmentation and classification techniques were evaluated on different performance measures, and the proposed technique was compared with the existing ones to demonstrate its superiority. CONCLUSION The analysis shows that the proposed IVPA-WML techniques provide better results than the existing techniques.
Affiliation(s)
- Muthalakshmi Murugesan
- Department of Electronics and Communication Engineering, PSN Engineering College, Tirunelveli-627152, Tamilnadu, India
- Dhanasekaran Ragavan
- Department of Electrical and Electronics Engineering, Syed Ammal Engineering College, Ramanathapuram, India
|
15
|
Lee MH, Kim KH, Cho KR, Choi JW, Kong DS, Seol HJ, Nam DH, Lee JI. Volumetric changes of intracranial metastases during the course of fractionated stereotactic radiosurgery and significance of adaptive planning. J Neurosurg 2020; 133:129-134. [PMID: 31151111 DOI: 10.3171/2019.3.jns183130] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2018] [Accepted: 03/05/2019] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Fractionated Gamma Knife surgery (FGKS) has recently been used to treat large brain metastases. However, little is known about specific volume changes of lesions during the course of treatment. The authors investigated short-term volume changes of metastatic lesions during FGKS. METHODS The authors analyzed 33 patients with 40 lesions who underwent FGKS for intracranial metastases of non-small-cell lung cancer (NSCLC; 25 patients with 32 lesions) and breast cancer (8 patients with 8 lesions). FGKS was performed in 3-5 fractions. Baseline MRI was performed before the first fraction. MRI was repeated after 1 or 2 fractions. Adaptive planning was executed based on new images. The median prescription dose was 8 Gy (range 6-10 Gy) with a 50% isodose line. RESULTS On follow-up MRI, 18 of 40 lesions (45.0%) showed decreased tumor volumes (TVs). A significant difference was observed between baseline (median 15.8 cm3) and follow-up (median 14.2 cm3) volumes (p < 0.001). A conformity index was significantly decreased when it was assumed that adaptive planning was not implemented, from baseline (mean 0.96) to follow-up (mean 0.90, p < 0.001). The average reduction rate was 1.5% per day. The median follow-up duration was 29.5 weeks (range 9-94 weeks). During the follow-up period, local recurrence occurred in 5 lesions. CONCLUSIONS The TV showed changes with a high dose of radiation during the course of FGKS. Volumetric change caused a significant difference in the clinical parameters. It is expected that adaptive planning would be helpful in the case of radiosensitive tumors such as NSCLCs or breast cancer to ensure an adequate dose to the target area and reduce unnecessary exposure of normal tissue to radiation.
Affiliation(s)
- Min Ho Lee
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Department of Neurosurgery, Uijeongbu St. Mary's Hospital, The Catholic University of Korea, Uijeongbu, Korea
- Kyung Hwan Kim
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Department of Neurosurgery, Chungnam National University Hospital, Chungnam National University School of Medicine, Daejeon, Korea
- Kyung Rae Cho
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Jung Won Choi
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Doo-Sik Kong
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Ho Jun Seol
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Do-Hyun Nam
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Jung-Il Lee
- Department of Neurosurgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
|
16
|
Karimi D, Salcudean SE. Reducing the Hausdorff Distance in Medical Image Segmentation With Convolutional Neural Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:499-513. [PMID: 31329113 DOI: 10.1109/tmi.2019.2930068] [Citation(s) in RCA: 130] [Impact Index Per Article: 32.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The Hausdorff Distance (HD) is widely used in evaluating medical image segmentation methods. However, the existing segmentation methods do not attempt to reduce HD directly. In this paper, we present novel loss functions for training convolutional neural network (CNN)-based segmentation methods with the goal of reducing HD directly. We propose three methods to estimate HD from the segmentation probability map produced by a CNN. One method makes use of the distance transform of the segmentation boundary. Another method is based on applying morphological erosion on the difference between the true and estimated segmentation maps. The third method works by applying circular/spherical convolution kernels of different radii on the segmentation probability maps. Based on these three methods for estimating HD, we suggest three loss functions that can be used for training to reduce HD. We use these loss functions to train CNNs for segmentation of the prostate, liver, and pancreas in ultrasound, magnetic resonance, and computed tomography images and compare the results with commonly-used loss functions. Our results show that the proposed loss functions can lead to approximately 18-45% reduction in HD without degrading other segmentation performance criteria such as the Dice similarity coefficient. The proposed loss functions can be used for training medical image segmentation methods in order to reduce the large segmentation errors.
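Of the three HD-estimation routes the abstract describes, the distance-transform route is the easiest to illustrate. The sketch below is a minimal reimplementation from the abstract's description only, not the authors' code; the function name and the squared-distance weighting are assumptions. Mismatched voxels are penalized by their distance to the other mask, so the large boundary errors that drive HD dominate the penalty.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dt_boundary_penalty(pred, gt):
    """Distance-transform surrogate for a Hausdorff-style boundary penalty.

    Each mismatched voxel is weighted by its squared Euclidean distance to
    the other mask's foreground, so isolated far-away errors (which set the
    Hausdorff Distance) contribute heavily.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    mismatch = pred ^ gt
    if not mismatch.any():
        return 0.0
    # distance_transform_edt gives, per voxel, the distance to the nearest
    # zero entry; ~mask makes that the distance to the mask's foreground.
    dist_to_gt = distance_transform_edt(~gt)
    dist_to_pred = distance_transform_edt(~pred)
    return float(np.mean(dist_to_gt[mismatch] ** 2 + dist_to_pred[mismatch] ** 2))

# Identical masks incur zero penalty; a shifted copy incurs a positive one.
a = np.zeros((16, 16), dtype=bool)
a[4:8, 4:8] = True
b = np.roll(a, 3, axis=0)
```

In the paper this idea is applied to the CNN's probability map inside a loss function; the hard-mask version above only shows the geometric principle.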
|
17
|
Yu B, Fan Z. A comprehensive review of conditional random fields: variants, hybrids and applications. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09793-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
18
|
Yanase J, Triantaphyllou E. The seven key challenges for the future of computer-aided diagnosis in medicine. Int J Med Inform 2019; 129:413-422. [DOI: 10.1016/j.ijmedinf.2019.06.017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2019] [Revised: 06/15/2019] [Accepted: 06/19/2019] [Indexed: 12/23/2022]
|
19
|
Bui TD, Shin J, Moon T. Skip-connected 3D DenseNet for volumetric infant brain MRI segmentation. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101613] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
20
|
Lin X, Li X. Image Based Brain Segmentation: From Multi-Atlas Fusion to Deep Learning. Curr Med Imaging 2019; 15:443-452. [DOI: 10.2174/1573405614666180817125454] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2017] [Revised: 07/28/2018] [Accepted: 08/07/2018] [Indexed: 01/10/2023]
Abstract
Background: This review aims to trace the development of algorithms for brain tissue and structure segmentation in MRI images.
Discussion: Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at the Medical Image Computing and Computer-Assisted Intervention (MICCAI) conference, this review analyzes the development of the algorithms and discusses the shift from multi-atlas label fusion to deep learning. The intrinsic characteristics of the winning algorithms of the Grand Challenges from 2012 to 2018 are analyzed and their results are compared carefully.
Conclusion: Although deep learning has achieved higher rankings in the challenges, it has not yet met expectations in terms of accuracy. More effective and specialized work is needed in the future.
Affiliation(s)
- Xiangbo Lin
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
- Xiaoxi Li
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
|
21
|
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan requires accurate segmentations as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and subsequent analyses (ie, radiomics, dosimetric), can be subject to the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable as it would address these challenges. Previously, auto-segmentation techniques have been clustered into 3 generations of algorithms, with multiatlas based and hybrid techniques (third generation) being considered the state-of-the-art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (nondeep learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced focusing on convolutional neural networks and fully-convolutional networks which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
|
22
|
Abstract
In brain magnetic resonance (MR) images, image quality is often degraded by noise and outliers, which makes it difficult for doctors to segment and extract brain tissue accurately. In this paper, a modified robust fuzzy c-means (MRFCM) algorithm for brain MR image segmentation is proposed. Based on the gray-level information of the pixels in the local neighborhood, the deviation of each adjacent pixel from the neighborhood median is calculated in kernel space, and a normalized adaptive weight for each pixel is obtained. Both impulse noise and Gaussian noise in the image are effectively suppressed, while the detail and edge information of the brain MR image is better preserved. At the same time, the gray-level histogram is used in place of single pixels during the clustering process. The segmentation results of MRFCM are compared with state-of-the-art algorithms based on fuzzy clustering, and the proposed algorithm shows stronger anti-noise properties, better robustness to various noises, and higher segmentation accuracy.
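For orientation, the baseline that MRFCM modifies can be sketched in a few lines. This is plain fuzzy c-means on 1-D intensities only; the paper's kernel-space median weighting and histogram-based clustering are not reproduced, and the function name and parameters below are assumptions.

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means on a 1-D intensity array.

    Returns (centers, memberships); memberships have shape (c, len(x))
    and sum to 1 over the clusters for every pixel.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # normalize initial memberships
    for _ in range(iters):
        um = u ** m                          # fuzzified memberships
        centers = um @ x / um.sum(axis=1)    # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u

# Two well-separated intensity populations: centers should land near 10 and 100.
rng = np.random.default_rng(1)
x = np.concatenate([np.full(200, 10.0), np.full(200, 100.0)])
x += rng.normal(0.0, 1.0, x.size)
centers, u = fcm(x)
```

MRFCM's contribution, per the abstract, is to make this update robust: neighborhood-median deviations reweight noisy pixels, and clustering runs over the histogram rather than individual pixels.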
|
23
|
Fang L, Zhang L, Nie D, Cao X, Rekik I, Lee SW, He H, Shen D. Automatic brain labeling via multi-atlas guided fully convolutional networks. Med Image Anal 2019; 51:157-168. [DOI: 10.1016/j.media.2018.10.012] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 10/27/2018] [Accepted: 10/30/2018] [Indexed: 12/26/2022]
|
24
|
Arce-Santana ER, Mejia-Rodriguez AR, Martinez-Peña E, Alba A, Mendez M, Scalco E, Mastropietro A, Rizzo G. A new Probabilistic Active Contour region-based method for multiclass medical image segmentation. Med Biol Eng Comput 2018; 57:565-576. [DOI: 10.1007/s11517-018-1896-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2018] [Accepted: 09/05/2018] [Indexed: 11/27/2022]
|
25
|
Chen H, Dou Q, Yu L, Qin J, Heng PA. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images. Neuroimage 2018; 170:446-455. [PMID: 28445774 DOI: 10.1016/j.neuroimage.2017.04.041] [Citation(s) in RCA: 302] [Impact Index Per Article: 50.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2016] [Revised: 03/24/2017] [Accepted: 04/18/2017] [Indexed: 01/04/2023] Open
Affiliation(s)
- Hao Chen
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
- Qi Dou
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China.
- Lequan Yu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
|
26
|
McGillivray MF, Cheng W, Peters NS, Christensen K. Machine learning methods for locating re-entrant drivers from electrograms in a model of atrial fibrillation. ROYAL SOCIETY OPEN SCIENCE 2018; 5:172434. [PMID: 29765687 PMCID: PMC5936952 DOI: 10.1098/rsos.172434] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/03/2018] [Accepted: 03/13/2018] [Indexed: 05/14/2023]
Abstract
Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation (AF). Using a simple cellular automata model of AF, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The method proposed employs simple, out-of-the-box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 95.1% (92.6%) for the first (second) driver in tissues containing two drivers of AF. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. In its current form, the techniques presented are not refined enough for a clinical setting. However, the methods proposed offer a promising path for future investigations aimed at improving targeted ablation for AF.
Affiliation(s)
- Max Falkenberg McGillivray
- The Blackett Laboratory, Imperial College London, London SW7 2AZ, UK
- Centre for Complexity Science, Imperial College London, London SW7 2AZ, UK
- William Cheng
- The Blackett Laboratory, Imperial College London, London SW7 2AZ, UK
- Centre for Complexity Science, Imperial College London, London SW7 2AZ, UK
- Nicholas S Peters
- ElectroCardioMaths Programme, Imperial Centre for Cardiac Engineering, Imperial College London, London W12 0NN, UK
- Kim Christensen
- The Blackett Laboratory, Imperial College London, London SW7 2AZ, UK
- Centre for Complexity Science, Imperial College London, London SW7 2AZ, UK
- ElectroCardioMaths Programme, Imperial Centre for Cardiac Engineering, Imperial College London, London W12 0NN, UK
|
27
|
Yepes-Calderon F, Nelson MD, McComb JG. Automatically measuring brain ventricular volume within PACS using artificial intelligence. PLoS One 2018; 13:e0193152. [PMID: 29543817 PMCID: PMC5854260 DOI: 10.1371/journal.pone.0193152] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2017] [Accepted: 02/04/2018] [Indexed: 01/28/2023] Open
Abstract
The picture archiving and communication system (PACS) is currently the standard platform for managing medical images but lacks analytical capabilities. Staying within PACS, the authors have developed an automatic method to retrieve medical data and access it, decrypted and uncompressed, at the voxel level, enabling analytical capabilities without perturbing the system's daily operation. Additionally, the strategy is secure and vendor independent. Cerebral ventricular volume is important for the diagnosis and treatment of many neurological disorders. A significant change in ventricular volume is readily recognized, but subtle changes, especially over longer periods of time, may be difficult to discern. Clinical imaging protocols and parameters are often varied, making it difficult to use a general solution with standard segmentation techniques. Presented is a segmentation strategy based on an algorithm that uses four features extracted from the medical images to create a statistical estimator capable of determining ventricular volume. When compared with manual segmentations, the correlation was 94%, and the approach holds promise for even better accuracy as more of the available data are incorporated. The volume of any segmentable structure can be accurately determined with the machine learning strategy presented, which runs fully automatically within PACS.
Affiliation(s)
- Fernando Yepes-Calderon
- Children’s Hospital Los Angeles, Division of Neurosurgery, Los Angeles, CA, United States of America
- University of Southern California, Keck School of Medicine, Los Angeles, CA, United States of America
- Marvin D. Nelson
- Children’s Hospital Los Angeles, Department of Radiology, Los Angeles, CA, United States of America
- University of Southern California, Keck School of Medicine, Los Angeles, CA, United States of America
- J. Gordon McComb
- Children’s Hospital Los Angeles, Division of Neurosurgery, Los Angeles, CA, United States of America
- University of Southern California, Keck School of Medicine, Los Angeles, CA, United States of America
|
28
|
Pereira S, Meier R, McKinley R, Wiest R, Alves V, Silva CA, Reyes M. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation. Med Image Anal 2017; 44:228-244. [PMID: 29289703 DOI: 10.1016/j.media.2017.12.009] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2017] [Revised: 10/15/2017] [Accepted: 12/12/2017] [Indexed: 12/19/2022]
Abstract
Machine learning systems are achieving better performance at the cost of becoming increasingly complex. As a result, they become less interpretable, which may cause distrust in the end-users of such systems. This is especially important as these systems are pervasively introduced into critical domains such as the medical field. Representation learning techniques are general methods for automatic feature computation; nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology on brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.
Affiliation(s)
- Sérgio Pereira
- CMEMS-UMinho Research Unit, University of Minho, Guimarães, Portugal; Centro Algoritmi, University of Minho, Braga, Portugal.
- Raphael Meier
- Institute for Surgical Technology and Biomechanics, University of Bern, Switzerland.
- Richard McKinley
- Support Center for Advanced Neuroimaging - Institute for Diagnostic and Interventional Neuroradiology, University Hospital and University of Bern, Switzerland.
- Roland Wiest
- Support Center for Advanced Neuroimaging - Institute for Diagnostic and Interventional Neuroradiology, University Hospital and University of Bern, Switzerland.
- Victor Alves
- Centro Algoritmi, University of Minho, Braga, Portugal.
- Carlos A Silva
- CMEMS-UMinho Research Unit, University of Minho, Guimarães, Portugal.
- Mauricio Reyes
- Institute for Surgical Technology and Biomechanics, University of Bern, Switzerland.
|
29
|
Serag A, Wilkinson AG, Telford EJ, Pataky R, Sparrow SA, Anblagan D, Macnaught G, Semple SI, Boardman JP. SEGMA: An Automatic SEGMentation Approach for Human Brain MRI Using Sliding Window and Random Forests. Front Neuroinform 2017; 11:2. [PMID: 28163680 PMCID: PMC5247463 DOI: 10.3389/fninf.2017.00002] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2016] [Accepted: 01/05/2017] [Indexed: 11/29/2022] Open
Abstract
Quantitative volumes from brain magnetic resonance imaging (MRI) acquired across the life course may be useful for investigating long term effects of risk and resilience factors for brain development and healthy aging, and for understanding early life determinants of adult brain structure. Therefore, there is an increasing need for automated segmentation tools that can be applied to images acquired at different life stages. We developed an automatic segmentation method for human brain MRI, where a sliding window approach and a multi-class random forest classifier were applied to high-dimensional feature vectors for accurate segmentation. The method performed well on brain MRI data acquired from 179 individuals, analyzed in three age groups: newborns (38–42 weeks gestational age), children and adolescents (4–17 years) and adults (35–71 years). As the method can learn from partially labeled datasets, it can be used to segment large-scale datasets efficiently. It could also be applied to different populations and imaging modalities across the life course.
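The sliding-window, random-forest idea in this abstract can be illustrated with a toy 2-D sketch. Everything below is an assumed minimal setup (raw neighborhood intensities as the feature vector, a synthetic "bright square" image, hypothetical helper names); the paper itself uses high-dimensional features, partially labeled data, and multi-class segmentation across real MRI.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(img, r=1):
    """Flatten the (2r+1)x(2r+1) neighborhood of every pixel into a row.

    Edge pixels are handled by replicate-padding, so every pixel gets a
    full-size feature vector (the sliding-window step of the method).
    """
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    shifts = [p[dy:dy + h, dx:dx + w]
              for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
    return np.stack(shifts, axis=-1).reshape(-1, (2 * r + 1) ** 2)

# Toy "scan": a bright square on a noisy dark background, with pixel labels.
rng = np.random.default_rng(0)
img = rng.normal(0.1, 0.05, (32, 32))
img[8:24, 8:24] += 0.8
labels = np.zeros((32, 32), dtype=int)
labels[8:24, 8:24] = 1

# Multi-class random forest classifying each pixel from its window features.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(patch_features(img), labels.ravel())
pred = clf.predict(patch_features(img)).reshape(32, 32)
```

In practice the classifier would be trained on labeled scans and applied to unseen ones; predicting on the training image here only demonstrates the per-pixel feature pipeline end to end.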
Affiliation(s)
- Ahmed Serag
- MRC Centre for Reproductive Health, University of Edinburgh, Edinburgh, UK
- Emma J Telford
- MRC Centre for Reproductive Health, University of Edinburgh, Edinburgh, UK
- Rozalia Pataky
- MRC Centre for Reproductive Health, University of Edinburgh, Edinburgh, UK
- Sarah A Sparrow
- MRC Centre for Reproductive Health, University of Edinburgh, Edinburgh, UK
- Devasuda Anblagan
- MRC Centre for Reproductive Health, University of Edinburgh, Edinburgh, UK; Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
- Gillian Macnaught
- Clinical Research Imaging Centre, University of Edinburgh, Edinburgh, UK
- Scott I Semple
- Clinical Research Imaging Centre, University of Edinburgh, Edinburgh, UK; Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- James P Boardman
- MRC Centre for Reproductive Health, University of Edinburgh, Edinburgh, UK; Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, UK
|