1. Debiasi G, Mazzonetto I, Bertoldo A. The effect of processing pipelines, input images and age on automatic cortical morphology estimates. Comput Methods Programs Biomed 2023;242:107825. [PMID: 37806120] [DOI: 10.1016/j.cmpb.2023.107825]
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging of the brain enriches the study of the relationships between cortical morphology, healthy ageing, disease and cognition. Since manual segmentation of the cerebral cortex is time-consuming and subjective, many software packages have been developed. FreeSurfer (FS) and Advanced Normalization Tools (ANTs) are the most widely used; both accept as input a T1-weighted (T1w) image alone or in combination with a T2-weighted (T2w) image. In this study we evaluated the impact of different software and input images on cortical estimates. Additionally, we investigated whether the variation of the results with software and inputs is also influenced by age. METHODS For 240 healthy subjects, cortical thickness was computed with ANTs and FreeSurfer. Estimates were derived using the T1w image alone and with the addition of the T2w image. Significant effects of software, input images and age range were investigated with ANOVA. Moreover, the accuracy of the cortical thickness estimates was assessed through their age-prediction precision. RESULTS Using FreeSurfer and ANTs with T1w or T1w-T2w images resulted in significant differences in the cortical thickness estimates, and these differences change with the age range of the subjects. Regardless of the input images, the more recent FS version tested exhibited the best performance in terms of age prediction. CONCLUSIONS Our study points out the importance of i) consistently processing data with the same tool; and ii) considering the software, input images and age range of the subjects when comparing multiple studies.
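The core comparison in this study, per-subject thickness estimates produced by two pipelines on the same scans, can be illustrated with a paired test. The sketch below uses a paired t-statistic on hypothetical per-subject mean thickness values (the actual study used ANOVA over software, input images and age range; the numbers and function are illustrative only):

```python
import math

def paired_t(a, b):
    """Paired t-statistic for per-subject measurements from two pipelines."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical mean cortical thickness (mm) per subject from two pipelines
fs   = [2.50, 2.61, 2.48, 2.55, 2.43, 2.58]
ants = [2.41, 2.50, 2.40, 2.47, 2.36, 2.49]
t = paired_t(fs, ants)  # large |t| => systematic offset between pipelines
```

A consistent offset like this is exactly why the authors caution against mixing tools within one study.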
Affiliation(s)
- Giulia Debiasi, Department of Information Engineering, University of Padova, via Gradenigo 6/b, Padova 35131, Italy
- Ilaria Mazzonetto, Department of Information Engineering, University of Padova, via Gradenigo 6/b, Padova 35131, Italy
- Alessandra Bertoldo, Department of Information Engineering, University of Padova, via Gradenigo 6/b, Padova 35131, Italy; Padova Neuroscience Center (PNC), University of Padova, via Orus 2/b, Padova 35131, Italy
2. Tustison NJ, Altes TA, Qing K, He M, Miller GW, Avants BB, Shim YM, Gee JC, Mugler JP, Mata JF. Image- versus histogram-based considerations in semantic segmentation of pulmonary hyperpolarized gas images. Magn Reson Med 2021;86:2822-2836. [PMID: 34227163] [DOI: 10.1002/mrm.28908]
Abstract
PURPOSE To characterize the differences between histogram-based and image-based algorithms for segmentation of hyperpolarized gas lung images. METHODS Four previously published histogram-based segmentation algorithms (i.e., linear binning, hierarchical k-means, fuzzy spatial c-means, and a Gaussian mixture model with a Markov random field prior) and an image-based convolutional neural network were used to segment 2 simulated data sets derived from a public (n = 29 subjects) and a retrospective collection (n = 51 subjects) of hyperpolarized 129Xe gas lung images transformed by common MRI artifacts (noise and nonlinear intensity distortion). The resulting ventilation-based segmentations were used to assess algorithmic performance and characterize optimization domain differences in terms of measurement bias and precision. RESULTS Although facilitating computational processing and providing discriminating, clinically relevant measures of interest, histogram-based segmentation methods discard important contextual spatial information and are consequently less robust in terms of measurement precision in the presence of common MRI artifacts relative to the image-based convolutional neural network. CONCLUSIONS Direct optimization within the image domain using convolutional neural networks leverages spatial information, which mitigates problematic issues associated with histogram-based approaches and suggests a preferred future research direction. Further, the entire processing and evaluation framework, including the newly reported deep learning functionality, is available as open source through the well-known Advanced Normalization Tools ecosystem.
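The histogram-based family the paper critiques can be sketched with the simplest member, linear binning: normalize voxel intensities to a robust maximum and assign ventilation classes by fixed thresholds. The thresholds and data below are illustrative, not the published values; note that each voxel is classified independently of its neighbors, which is exactly the loss of spatial context the study highlights:

```python
def linear_binning(intensities, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Classify normalized voxel intensities into ventilation bins.

    Bin 0 ~ ventilation defect; higher bins ~ increasing ventilation.
    Each voxel is binned independently: no spatial context is used.
    """
    top = sorted(intensities)[int(0.99 * (len(intensities) - 1))]  # robust max (99th pct)
    labels = []
    for v in intensities:
        x = min(v / top, 1.0) if top > 0 else 0.0
        labels.append(sum(x >= t for t in thresholds))  # count thresholds passed
    return labels

vox = [0.05, 0.15, 0.35, 0.55, 0.75, 0.95, 1.0]
labels = linear_binning(vox)  # -> [0, 0, 1, 2, 3, 4, 4]
```

A CNN operating on the image domain instead sees each voxel together with its spatial neighborhood, which is what makes it more robust to noise and intensity distortion.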
Affiliation(s)
- Nicholas J Tustison, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- Talissa A Altes, Department of Radiology, University of Missouri, Columbia, Missouri, USA
- Kun Qing, Department of Radiation Oncology, City of Hope, Los Angeles, California, USA
- Mu He, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- G Wilson Miller, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- Brian B Avants, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- Yun M Shim, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- James C Gee, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- John P Mugler, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- Jaime F Mata, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
3. Sankareswaran SP, Krishnan M. Unsupervised end-to-end Brain Tumor Magnetic Resonance Image Registration using RBCNN: Rigid Transformation, B-Spline Transformation and Convolutional Neural Network. Curr Med Imaging 2021;18:387-397. [PMID: 34365954] [DOI: 10.2174/1573405617666210806125526]
Abstract
BACKGROUND Image registration is the process of aligning two or more images in a single coordinate system. Nowadays, medical image registration plays a significant role in computer-assisted disease diagnosis, treatment, and surgery. The different modalities available in medical imaging make registration an essential step in Computer-Assisted Diagnosis (CAD), Computer-Aided Therapy (CAT) and Computer-Assisted Surgery (CAS). PROBLEM DEFINITION Recently, many learning-based methods have been employed for disease detection and classification, but those methods were not suitable for real time due to delayed response and the need for pre-alignment and labeling. METHOD The proposed research constructed a deep learning model with a rigid transform and a B-spline transform for medical image registration for automatic brain tumour detection. The proposed method consists of two steps: the first uses a rigid-transformation-based convolutional neural network, and the second uses a B-spline-transform-based convolutional neural network. The model was trained and tested with 3624 MR (Magnetic Resonance) images to assess its performance. The researchers believe that MR images help in the successful treatment of brain tumour patients. RESULT The result of the proposed method is compared with a rigid Convolutional Neural Network (CNN), rigid CNN + Thin-Plate Spline (TPS), affine CNN, VoxelMorph, ADMIR (Affine and Deformable Medical Image Registration) and ANTs (Advanced Normalization Tools) using the DICE score, average symmetric surface distance (ASD), and Hausdorff distance. CONCLUSION The RBCNN model will help the physician to automatically detect and classify brain tumors quickly (18 s) and efficiently without any pre-alignment or labeling.
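The two-stage design above (rigid first, nonrigid refinement second) rests on the rigid stage supplying a coarse global alignment. A minimal sketch of what a 2D rigid transform does to coordinates, assuming illustrative points and parameters (the paper's CNNs predict such parameters from image pairs; this is not their implementation):

```python
import math

def rigid_2d(points, theta_deg, tx, ty):
    """Apply a 2D rigid transform (rotation + translation) to point coordinates.

    A rigid stage like this provides coarse global alignment; a nonrigid
    stage (e.g., B-spline) then refines local deformations.
    """
    th = math.radians(theta_deg)
    c, s = math.cos(th), math.sin(th)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

pts = [(1.0, 0.0), (0.0, 1.0)]
moved = rigid_2d(pts, 90.0, 0.0, 0.0)  # rotate 90 degrees about the origin
```

Rigid transforms preserve distances and angles, which is why they can only correct global pose, not local anatomical deformation; that residual misalignment is what the B-spline stage absorbs.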
Affiliation(s)
- Senthil Pandi Sankareswaran, Department of Computer Science and Engineering, Mohamed Sathak A. J. College of Engineering, Tamil Nadu, India
- Mahadevan Krishnan, Department of Electrical and Electronics Engineering, PSNA College of Engineering and Technology, Tamil Nadu, India
4. Tustison NJ, Avants BB, Lin Z, Feng X, Cullen N, Mata JF, Flors L, Gee JC, Altes TA, Mugler JP III, Qing K. Convolutional Neural Networks with Template-Based Data Augmentation for Functional Lung Image Quantification. Acad Radiol 2019;26:412-423. [PMID: 30195415] [DOI: 10.1016/j.acra.2018.08.003]
Abstract
RATIONALE AND OBJECTIVES We propose an automated segmentation pipeline based on deep learning for proton lung MRI segmentation and ventilation-based quantification which improves on our previously reported methodologies in terms of computational efficiency while demonstrating accuracy and robustness. The large data requirement for the proposed framework is made possible by a novel template-based data augmentation strategy. Supporting this work is the open-source ANTsRNet, a growing repository of well-known deep learning architectures first introduced here. MATERIALS AND METHODS Deep convolutional neural network (CNN) models were constructed and trained using a custom multilabel Dice metric loss function and a novel template-based data augmentation strategy. Training (including template generation and data augmentation) employed 205 proton MR images and 73 functional lung MR images. Evaluation was performed using data sets of 63 and 40 images, respectively. RESULTS Accuracy for CNN-based proton lung MRI segmentation (in terms of Dice overlap) was left lung: 0.93 ± 0.03, right lung: 0.94 ± 0.02, and whole lung: 0.94 ± 0.02. Although slightly less accurate than our previously reported joint label fusion approach (left lung: 0.95 ± 0.02, right lung: 0.96 ± 0.01, and whole lung: 0.96 ± 0.01), processing time is <1 second per subject for the proposed approach versus ∼30 minutes per subject using joint label fusion. Accuracy for quantifying ventilation defects was determined based on a consensus labeling where average accuracy (Dice multilabel overlap of ventilation defect regions plus normal region) was 0.94 for the CNN method; 0.92 for our previously reported method; and 0.90, 0.92, and 0.94 for expert readers. CONCLUSION The proposed framework yields accurate automated quantification in near real time. CNNs drastically reduce processing time after offline model construction and demonstrate significant future potential for facilitating quantitative analysis of functional lung MRI.
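The multilabel Dice metric reported throughout this abstract averages per-label overlap between two segmentations. A minimal evaluation-side sketch (the training loss is a differentiable "soft" variant of this; labels and data here are illustrative):

```python
def multilabel_dice(seg_a, seg_b, labels):
    """Average Dice overlap across labels between two flat label maps."""
    scores = []
    for lab in labels:
        a = [v == lab for v in seg_a]
        b = [v == lab for v in seg_b]
        inter = sum(x and y for x, y in zip(a, b))  # voxels labeled `lab` in both
        denom = sum(a) + sum(b)
        scores.append(2.0 * inter / denom if denom else 1.0)
    return sum(scores) / len(scores)

# Toy flat label maps: 0 = background, 1 = defect, 2 = normal lung
pred  = [0, 1, 1, 2, 2, 0]
truth = [0, 1, 2, 2, 2, 0]
score = multilabel_dice(pred, truth, labels=(0, 1, 2))
```

Averaging across labels keeps small structures (like ventilation defect regions) from being swamped by the large background class, which is also why a multilabel Dice works well as a loss for imbalanced segmentation.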
5. Tustison NJ, Cook PA, Klein A, Song G, Das SR, Duda JT, Kandel BM, van Strien N, Stone JR, Gee JC, Avants BB. Large-scale evaluation of ANTs and FreeSurfer cortical thickness measurements. Neuroimage 2014;99:166-179. [PMID: 24879923] [DOI: 10.1016/j.neuroimage.2014.05.044]
Abstract
Many studies of the human brain have explored the relationship between cortical thickness and cognition, phenotype, or disease. Due to the subjectivity and time requirements in manual measurement of cortical thickness, scientists have relied on robust software tools for automation which facilitate the testing and refinement of neuroscientific hypotheses. The most widely used tool for cortical thickness studies is the publicly available, surface-based FreeSurfer package. Critical to the adoption of such tools is a demonstration of their reproducibility, validity, and the documentation of specific implementations that are robust across large, diverse imaging datasets. To this end, we have developed the automated, volume-based Advanced Normalization Tools (ANTs) cortical thickness pipeline comprising well-vetted components such as SyGN (multivariate template construction), SyN (image registration), N4 (bias correction), Atropos (n-tissue segmentation), and DiReCT (cortical thickness estimation). In this work, we have conducted the largest evaluation of automated cortical thickness measures in publicly available data, comparing FreeSurfer and ANTs measures computed on 1205 images from four open data sets (IXI, MMRR, NKI, and OASIS), with parcellation based on the recently proposed Desikan-Killiany-Tourville (DKT) cortical labeling protocol. We found good scan-rescan repeatability with both FreeSurfer and ANTs measures. Given that such assessments of precision do not necessarily reflect accuracy or an ability to make statistical inferences, we further tested the neurobiological validity of these approaches by evaluating thickness-based prediction of age and gender. ANTs is shown to have a higher predictive performance than FreeSurfer for both of these measures. In promotion of open science, we make all of our scripts, data, and results publicly available, which complements the use of open image data sets and the open source availability of the proposed ANTs cortical thickness pipeline.
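The age-prediction validity test used above can be illustrated in miniature: fit a regression from thickness measures to chronological age and judge a pipeline by its predictive performance. A sketch with ordinary least squares on hypothetical data (the study used regional DKT measures and richer models; a single-predictor fit is shown only to make the idea concrete):

```python
def fit_line(xs, ys):
    """Ordinary least squares: slope and intercept for predicting ys from xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical mean cortical thickness (mm) vs. age (years):
# thickness declines with age, so the fitted slope is negative.
thick = [2.70, 2.62, 2.55, 2.48, 2.40]
age   = [25.0, 35.0, 45.0, 55.0, 65.0]
slope, intercept = fit_line(thick, age)
pred_age = slope * 2.58 + intercept  # predicted age for a new subject
```

The tighter a pipeline's thickness estimates track true age on held-out subjects, the stronger the evidence that the measurements capture real biology rather than pipeline artifact, which is the sense in which the paper uses age prediction as a validity criterion.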
Affiliation(s)
- Nicholas J Tustison, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, USA
- Philip A Cook, Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA
- Gang Song, Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA
- Sandhitsu R Das, Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA
- Jeffrey T Duda, Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA
- Benjamin M Kandel, Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA
- Niels van Strien, Sage Bionetworks, Seattle, WA, USA; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- James R Stone, Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, USA
- James C Gee, Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA
- Brian B Avants, Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, PA, USA