51
Ramanarayanan S, Murugesan B, Kalyanasundaram A, Prabhakaran S, Ram K, Patil S, Sivaprakasam M. MRI Super-Resolution using Laplacian Pyramid Convolutional Neural Networks with Isotropic Undecimated Wavelet Loss. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1584-1587. [PMID: 33018296] [DOI: 10.1109/embc44109.2020.9176100]
Abstract
High spatial resolution in Magnetic Resonance Imaging (MRI) provides rich structural detail that facilitates accurate diagnosis and quantitative image analysis. However, the long acquisition time of MRI leads to patient discomfort and possible motion artifacts in the reconstructed image. Single Image Super-Resolution (SISR) using Convolutional Neural Networks (CNNs) is an emerging trend in biomedical imaging, especially for post-processing in Magnetic Resonance (MR) image analysis. An efficient choice of SISR architecture is required to achieve better-quality reconstruction. In addition, a robust choice of loss function, together with the domain in which the loss operates, plays an important role in enhancing fine structural details and removing blurring effects to form a high-resolution image. In this work, we propose a novel combined loss function consisting of an L1 Charbonnier loss in the image domain and a wavelet-domain loss, the Isotropic Undecimated Wavelet loss (IUW loss), to train the existing Laplacian Pyramid Super-Resolution CNN. The proposed loss function was evaluated on three MRI datasets (a privately collected knee MRI dataset and the publicly available Kirby21 brain and iSeg infant brain datasets) and on benchmark SISR datasets for natural images. Experimental analysis shows promising results, with better recovery of structure and improvements in the evaluation metrics.
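The image-domain half of the combined objective is the well-known L1 Charbonnier loss; a minimal numpy sketch of it (the `eps` value is an assumption, and the paper pairs this with the wavelet-domain IUW term not shown here):

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """L1 Charbonnier loss: a smooth, differentiable variant of the L1 norm,
    sqrt((pred - target)^2 + eps^2), averaged over all pixels."""
    diff = pred - target
    return float(np.mean(np.sqrt(diff * diff + eps * eps)))

# For a perfect reconstruction the loss bottoms out near eps rather than 0,
# which keeps the gradient well-behaved around zero error.
hr = np.ones((4, 4))
print(charbonnier_loss(hr, hr))  # ~ eps
```

The small constant is what distinguishes it from a plain L1 loss: the square root never has a non-differentiable kink at zero.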
52
53
Sun Y, Gao K, Niu S, Lin W, Li G, Wang L. Semi-supervised Transfer Learning for Infant Cerebellum Tissue Segmentation. Mach Learn Med Imaging 2020; 12436:663-673. [PMID: 33598664] [PMCID: PMC7885085] [DOI: 10.1007/978-3-030-59861-7_67]
Abstract
To characterize early cerebellum development, accurate segmentation of the cerebellum into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) tissues is one of the most pivotal steps. However, due to the weak tissue contrast, extremely folded tiny structures, and severe partial volume effect, infant cerebellum tissue segmentation is especially challenging, and the manual labels are hard to obtain and correct for learning-based methods. To the best of our knowledge, there is no work on the cerebellum segmentation for infant subjects less than 24 months of age. In this work, we develop a semi-supervised transfer learning framework guided by a confidence map for tissue segmentation of cerebellum MR images from 24-month-old to 6-month-old infants. Note that only 24-month-old subjects have reliable manual labels for training, due to their high tissue contrast. Through the proposed semi-supervised transfer learning, the labels from 24-month-old subjects are gradually propagated to the 18-, 12-, and 6-month-old subjects, which have a low tissue contrast. Comparison with the state-of-the-art methods demonstrates the superior performance of the proposed method, especially for 6-month-old subjects.
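The abstract's confidence-map guidance can be pictured as keeping only high-confidence pseudo-labels when propagating from older to younger subjects. The sketch below is a generic version of that idea; the function name, threshold, and `-1` ignore-label are assumptions, not the paper's code:

```python
import numpy as np

def confident_pseudo_labels(prob_maps, threshold=0.9):
    """From per-class probability maps of shape (C, ...), keep the argmax
    label only where the winning probability exceeds `threshold`; mark the
    rest -1 so they are ignored when labels are propagated to the next,
    lower-contrast age group."""
    confidence = prob_maps.max(axis=0)   # per-voxel confidence map
    labels = prob_maps.argmax(axis=0)
    labels[confidence < threshold] = -1  # uncertain voxels contribute no label
    return labels, confidence

# Two classes, two voxels: the first is confident, the second is not.
probs = np.array([[0.95, 0.40],
                  [0.05, 0.60]])
labels, conf = confident_pseudo_labels(probs)
print(labels)  # [ 0 -1]
```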
Affiliation(s)
- Yue Sun
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Kun Gao
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Sijie Niu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA
54
Pei Y, Wang L, Zhao F, Zhong T, Liao L, Shen D, Li G. Anatomy-Guided Convolutional Neural Network for Motion Correction in Fetal Brain MRI. Mach Learn Med Imaging 2020; 12436:384-393. [PMID: 33644782] [PMCID: PMC7912521] [DOI: 10.1007/978-3-030-59861-7_39]
Abstract
Fetal Magnetic Resonance Imaging (MRI) is challenged by fetal movement and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion commonly occurs between slice acquisitions. Motion correction for each slice is thus very important for reconstruction of 3D fetal brain MRI, but it is highly operator-dependent and time-consuming. Approaches based on convolutional neural networks (CNNs) have achieved encouraging performance in predicting the 3D motion parameters of arbitrarily oriented 2D slices, but they do not capitalize on important brain structural information. To address this problem, we propose a new multi-task learning framework that jointly learns the transformation parameters and tissue segmentation map of each slice, providing brain anatomical information to guide the mapping from 2D slices to 3D volumetric space in a coarse-to-fine manner. In the coarse stage, the first network learns features shared by the regression and segmentation tasks. In the refinement stage, to fully utilize the anatomical information, distance maps constructed from the coarse segmentation are introduced to the second network. Incorporating these signed distance maps to guide regression and segmentation together improves performance on both tasks. Experimental results indicate that, compared with state-of-the-art methods, the proposed method achieves superior performance in reducing motion prediction error while simultaneously obtaining satisfactory tissue segmentation results.
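The signed distance maps fed to the refinement network can be sketched as follows. This is a brute-force toy version with a sign convention I am assuming (positive inside the structure, negative outside); a real pipeline would use a fast distance transform such as `scipy.ndimage.distance_transform_edt` rather than this loop:

```python
import numpy as np

def signed_distance_map(mask):
    """Brute-force signed Euclidean distance map of a binary segmentation:
    positive inside the structure, negative outside. Only practical for
    tiny arrays; shown to make the construction explicit."""
    mask = mask.astype(bool)
    fg = np.argwhere(mask).astype(float)    # foreground voxel coordinates
    bg = np.argwhere(~mask).astype(float)   # background voxel coordinates
    sdm = np.zeros(mask.shape)
    for idx in np.ndindex(mask.shape):
        p = np.asarray(idx, dtype=float)
        if mask[idx]:
            sdm[idx] = np.linalg.norm(bg - p, axis=1).min()
        else:
            sdm[idx] = -np.linalg.norm(fg - p, axis=1).min()
    return sdm

seg = np.zeros((3, 3), dtype=int)
seg[1, 1] = 1
sdm = signed_distance_map(seg)  # center: +1.0, corners: -sqrt(2)
```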
Affiliation(s)
- Yuchen Pei
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Fenqiang Zhao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Tao Zhong
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Lufan Liao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
55
Zhao Y, Li P, Gao C, Liu Y, Chen Q, Yang F, Meng D. TSASNet: Tooth segmentation on dental panoramic X-ray images by Two-Stage Attention Segmentation Network. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106338]
56
Zöllei L, Iglesias JE, Ou Y, Grant PE, Fischl B. Infant FreeSurfer: An automated segmentation and surface extraction pipeline for T1-weighted neuroimaging data of infants 0-2 years. Neuroimage 2020; 218:116946. [PMID: 32442637] [PMCID: PMC7415702] [DOI: 10.1016/j.neuroimage.2020.116946]
Abstract
The development of automated tools for brain morphometric analysis in infants has lagged significantly behind analogous tools for adults. This gap reflects the greater challenges in this domain due to: 1) a smaller-scaled region of interest, 2) increased motion corruption, 3) regional changes in geometry due to heterochronous growth, and 4) regional variations in contrast properties corresponding to ongoing myelination and other maturation processes. Nevertheless, there is a great need for automated image-processing tools to quantify differences between infant groups and other individuals, because aberrant cortical morphologic measurements (including volume, thickness, surface area, and curvature) have been associated with neuropsychiatric, neurologic, and developmental disorders in children. In this paper, we present an automated segmentation and surface extraction pipeline designed to accommodate clinical MRI studies of infant brains in a population of 0-2-year-olds. The algorithm relies on a single channel of T1-weighted MR images to achieve automated segmentation of cortical and subcortical brain areas, producing volumes of subcortical structures and surface models of the cerebral cortex. We evaluated the algorithm both qualitatively and quantitatively using manually labeled datasets, relevant comparator software solutions cited in the literature, and expert evaluations. The computational tools and atlases described in this paper will be distributed to the research community as part of the FreeSurfer image analysis package.
Affiliation(s)
- Lilla Zöllei
- Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
- Juan Eugenio Iglesias
- Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Center for Medical Image Computing, University College London, United Kingdom; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
- Yangming Ou
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, USA
- P Ellen Grant
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, USA
- Bruce Fischl
- Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
57
Hu X, Guo R, Chen J, Li H, Waldmannstetter D, Zhao Y, Li B, Shi K, Menze B. Coarse-to-Fine Adversarial Networks and Zone-Based Uncertainty Analysis for NK/T-Cell Lymphoma Segmentation in CT/PET Images. IEEE J Biomed Health Inform 2020; 24:2599-2608. [DOI: 10.1109/jbhi.2020.2972694]
58
Fu H, Li F, Xu Y, Liao J, Xiong J, Shen J, Liu J, Zhang X. A Retrospective Comparison of Deep Learning to Manual Annotations for Optic Disc and Optic Cup Segmentation in Fundus Photographs. Transl Vis Sci Technol 2020; 9:33. [PMID: 32832206] [PMCID: PMC7414704] [DOI: 10.1167/tvst.9.2.33]
Abstract
Purpose: Optic disc (OD) and optic cup (OC) segmentation are fundamental for fundus image analysis. Manual annotation is time-consuming, expensive, and highly subjective, whereas an automated system is invaluable to the medical community. The aim of this study was to develop a deep learning system to segment OD and OC in fundus photographs and to evaluate how the algorithm compares against manual annotations. Methods: A total of 1200 fundus photographs, including 120 glaucoma cases, were collected. The OD and OC annotations were labeled by seven licensed ophthalmologists, and glaucoma diagnoses were based on comprehensive evaluations of the subjects' medical records. A deep learning system for OD and OC segmentation was developed, and its segmentation and glaucoma-discrimination performance based on the cup-to-disc ratio (CDR) was compared against the manual annotations. Results: The algorithm achieved an OD Dice of 0.938 (95% confidence interval [CI] = 0.934-0.941), an OC Dice of 0.801 (95% CI = 0.793-0.809), and a CDR mean absolute error (MAE) of 0.077 (95% CI = 0.073-0.082). For glaucoma discrimination based on CDR calculations, the algorithm obtained an area under the receiver operating characteristic curve (AUC) of 0.948 (95% CI = 0.920-0.973), with a sensitivity of 0.850 (95% CI = 0.794-0.923) and a specificity of 0.853 (95% CI = 0.798-0.918). Conclusions: We demonstrated the potential of the deep learning system to assist ophthalmologists in OD and OC segmentation and in discriminating glaucoma from nonglaucoma subjects based on CDR calculations. Translational Relevance: We investigated OD and OC segmentation by a deep learning system compared against manual annotations.
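One simple way a CDR could be derived from binary OD/OC masks is as the ratio of their vertical extents. The paper does not publish its exact formula, so the sketch below (function names, toy masks, and the vertical-extent definition) is only an illustrative assumption:

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary masks: the ratio of the
    vertical (row-wise) extents of the cup and disc regions."""
    def v_extent(mask):
        rows = np.where(mask.any(axis=1))[0]  # rows containing the region
        return rows.max() - rows.min() + 1
    return v_extent(cup_mask) / v_extent(disc_mask)

disc = np.zeros((10, 10), dtype=bool); disc[0:10, 2:8] = True  # height 10
cup = np.zeros((10, 10), dtype=bool);  cup[3:7, 3:7] = True    # height 4
print(vertical_cdr(disc, cup))  # -> 0.4
```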
Affiliation(s)
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yanwu Xu
- Intelligent Healthcare Unit, Baidu, Beijing, China
- Jingan Liao
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- Jian Xiong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jianbing Shen
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Guangzhou, Guangdong, China; Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
59
Bui TD, Wang L, Lin W, Li G, Shen D. 6-month infant brain MRI segmentation guided by 24-month data using cycle-consistent adversarial networks. Proc IEEE Int Symp Biomed Imaging 2020; 2020. [PMID: 34422223] [DOI: 10.1109/isbi45749.2020.9098515]
Abstract
Due to the extremely low intensity contrast between white matter (WM) and gray matter (GM) at around 6 months of age (the isointense phase), manual annotation is difficult, so the number of training labels is highly limited. Consequently, it remains challenging to automatically segment isointense infant brain MRI. Meanwhile, image contrast at later time points, such as 24 months of age, is relatively good, and such images can be easily segmented by well-developed tools, e.g., FreeSurfer. The question, therefore, is how to employ these high-contrast images (such as 24-month-old images) to guide the segmentation of 6-month-old images. Motivated by this, we propose a method that exploits 24-month-old images for reliable tissue segmentation of 6-month-old images. Specifically, we design a 3D-cycleGAN-Seg architecture to generate synthetic images of the isointense phase by transferring appearances between the two time points. To guarantee tissue segmentation consistency between 6-month-old and 24-month-old images, we employ features from generated segmentations to guide the training of the generator network. To further improve the quality of the synthetic images, we propose a feature matching loss that computes the cosine distance between unpaired segmentation features of the real and fake images. The transferred 24-month-old images are then used to jointly train the segmentation model on the 6-month-old images. Experimental results demonstrate the superior performance of the proposed method compared with existing deep learning-based methods.
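The cosine-distance feature matching loss the abstract describes can be sketched in a few lines. The flattening and mean-free form below are assumptions; only the core quantity (1 minus the cosine similarity of real and fake segmentation features) comes from the abstract:

```python
import numpy as np

def feature_matching_loss(f_real, f_fake):
    """Cosine-distance feature matching: 1 - cos(f_real, f_fake) over
    flattened feature tensors. Zero when the two feature vectors point
    the same way, 1 when they are orthogonal."""
    a, b = f_real.ravel(), f_fake.ravel()
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos

f = np.array([1.0, 2.0, 3.0])
print(feature_matching_loss(f, f))  # ~ 0 for identical features
print(feature_matching_loss(np.array([1.0, 0.0]),
                            np.array([0.0, 1.0])))  # 1.0 for orthogonal ones
```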
Affiliation(s)
- Toan Duc Bui
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
60
Ding Y, Acosta R, Enguix V, Suffren S, Ortmann J, Luck D, Dolz J, Lodygensky GA. Using Deep Convolutional Neural Networks for Neonatal Brain Image Segmentation. Front Neurosci 2020; 14:207. [PMID: 32273836] [PMCID: PMC7114297] [DOI: 10.3389/fnins.2020.00207]
Abstract
INTRODUCTION Deep learning neural networks are especially potent at dealing with structured data, such as images and volumes. Both the modified LiviaNET and HyperDense-Net performed well at a prior competition segmenting 6-month-old infant magnetic resonance images, but neonatal cerebral tissue type identification is challenging given its uniquely inverted tissue contrasts. The current study evaluates the two architectures for segmenting neonatal brain tissue types at term-equivalent age. METHODS Both networks were retrained on 24 pairs of neonatal T1 and T2 data from the Developing Human Connectome Project public data set and validated on another eight pairs against ground truth. We then reported the best-performing model from training and its performance by computing the Dice similarity coefficient (DSC) for each tissue type against eight test subjects. RESULTS During the testing phase, among the segmentation approaches tested, the dual-modality HyperDense-Net achieved the best test mean DSC values, a statistically significant result, obtaining 0.94/0.95/0.92 for the three tissue types; it took 80 h to train and 10 min to segment each brain, including preprocessing. The single-modality LiviaNET was better at processing T2-weighted images than T1-weighted images across all tissue types, achieving mean DSC values of 0.90/0.90/0.88 for gray matter, white matter, and cerebrospinal fluid, respectively, while requiring 30 h to train and 8 min to segment each brain, including preprocessing. DISCUSSION Our evaluation demonstrates that both neural networks can segment neonatal brains, achieving previously reported performance. Both networks will be continuously retrained over an increasingly larger repertoire of neonatal brain data and made available through the Canadian Neonatal Brain Platform to better serve the neonatal brain imaging research community.
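The Dice similarity coefficient used throughout these evaluations is simple to state: twice the overlap of two binary masks divided by their total size. A minimal numpy version (the tie-breaking value for two empty masks is a convention I am assuming):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 1, 0])
print(dice(a, b))  # -> 0.5  (overlap 1, sizes 2 + 2)
```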
Affiliation(s)
- Yang Ding
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Rolando Acosta
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Vicente Enguix
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Sabrina Suffren
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Janosch Ortmann
- Department of Management and Technology, Université du Québec à Montréal, Montreal, QC, Canada
- David Luck
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
- Gregory A. Lodygensky
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
61
Parcellation of the neonatal cortex using Surface-based Melbourne Children's Regional Infant Brain atlases (M-CRIB-S). Sci Rep 2020; 10:4359. [PMID: 32152381] [PMCID: PMC7062836] [DOI: 10.1038/s41598-020-61326-2]
Abstract
Longitudinal studies measuring changes in cortical morphology over time are best facilitated by parcellation schemes compatible across all life stages. The Melbourne Children’s Regional Infant Brain (M-CRIB) and M-CRIB 2.0 atlases provide voxel-based parcellations of the cerebral cortex compatible with the Desikan-Killiany (DK) and the Desikan-Killiany-Tourville (DKT) cortical labelling schemes. This study introduces surface-based versions of the M-CRIB and M-CRIB 2.0 atlases, termed M-CRIB-S(DK) and M-CRIB-S(DKT), with a pipeline for automated parcellation utilizing FreeSurfer and developing Human Connectome Project (dHCP) tools. Using T2-weighted magnetic resonance images of healthy neonates (n = 58), we created average spherical templates of cortical curvature and sulcal depth. Manually labelled regions in a subset (n = 10) were encoded into the spherical template space to construct the M-CRIB-S(DK) and M-CRIB-S(DKT) atlases. Labelling accuracy was assessed using Dice overlap and boundary discrepancy measures with leave-one-out cross-validation. Cross-validated labelling accuracy was high for both atlases (average regional Dice = 0.79–0.83). Worst-case boundary discrepancy instances ranged from 9.96 to 10.22 mm, which appeared to be driven by variability in anatomy for some cases. The M-CRIB-S atlas data and automatic pipeline allow extraction of neonatal cortical surfaces labelled according to the DK or DKT parcellation schemes.
62
Karimi D, Salcudean SE. Reducing the Hausdorff Distance in Medical Image Segmentation With Convolutional Neural Networks. IEEE Trans Med Imaging 2020; 39:499-513. [PMID: 31329113] [DOI: 10.1109/tmi.2019.2930068]
Abstract
The Hausdorff Distance (HD) is widely used in evaluating medical image segmentation methods. However, the existing segmentation methods do not attempt to reduce HD directly. In this paper, we present novel loss functions for training convolutional neural network (CNN)-based segmentation methods with the goal of reducing HD directly. We propose three methods to estimate HD from the segmentation probability map produced by a CNN. One method makes use of the distance transform of the segmentation boundary. Another method is based on applying morphological erosion on the difference between the true and estimated segmentation maps. The third method works by applying circular/spherical convolution kernels of different radii on the segmentation probability maps. Based on these three methods for estimating HD, we suggest three loss functions that can be used for training to reduce HD. We use these loss functions to train CNNs for segmentation of the prostate, liver, and pancreas in ultrasound, magnetic resonance, and computed tomography images and compare the results with commonly-used loss functions. Our results show that the proposed loss functions can lead to approximately 18-45% reduction in HD without degrading other segmentation performance criteria such as the Dice similarity coefficient. The proposed loss functions can be used for training medical image segmentation methods in order to reduce the large segmentation errors.
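For reference, the exact quantity those three differentiable loss functions approximate is the symmetric Hausdorff distance between point sets. A brute-force numpy version (the point-set representation of segmentation boundaries is assumed; the paper works on probability maps instead, precisely because this exact form is not differentiable):

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Exact symmetric Hausdorff distance between two point sets:
    the larger of the two directed distances, where a directed distance is
    the worst-case nearest-neighbor distance from one set to the other."""
    # Pairwise distance matrix, shape (len(a), len(b)).
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(),   # directed a -> b
               d.min(axis=0).max())   # directed b -> a

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 4.0]])
print(hausdorff(a, b))  # -> 5.0 (driven by (4,4) vs its nearest point (1,0))
```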
63
Deep neural network for automatic characterization of lesions on 68Ga-PSMA-11 PET/CT. Eur J Nucl Med Mol Imaging 2019; 47:603-613. [PMID: 31813050] [DOI: 10.1007/s00259-019-04606-y]
Abstract
PURPOSE This study proposes an automated prostate cancer (PC) lesion characterization method based on a deep neural network to determine tumor burden on 68Ga-PSMA-11 PET/CT, to potentially facilitate the optimization of PSMA-directed radionuclide therapy. METHODS We collected 68Ga-PSMA-11 PET/CT images from 193 patients with metastatic PC at three medical centers. For proof of concept, we focused on the detection of pelvic bone and lymph node lesions. A deep neural network (triple-combining 2.5D U-Net) was developed for the automated characterization of these lesions. The proposed method simultaneously extracts features from the axial, coronal, and sagittal planes, which mimics the workflow of physicians and reduces computational and memory requirements. RESULTS Among all the labeled lesions, the network achieved 99% precision, 99% recall, and an F1 score of 99% on bone lesion detection, and 94% precision, 89% recall, and an F1 score of 92% on lymph node lesion detection. Segmentation accuracy was lower than detection accuracy. The performance of the network was correlated with the amount of training data. CONCLUSION We developed a deep neural network to automatically characterize PC lesions on 68Ga-PSMA-11 PET/CT. The preliminary test within the pelvic area confirms the potential of deep learning methods. Increasing the amount of training data should further enhance performance and may ultimately allow whole-body assessments.
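The "2.5D" input the abstract describes replaces a full 3D patch with the three orthogonal slices through a voxel. A generic sketch of that slicing (the function name and (z, y, x) axis convention are assumptions; the paper's triple-combining U-Net has further details not shown):

```python
import numpy as np

def extract_planes(volume, z, y, x):
    """For a 3D volume indexed (z, y, x), pull the axial, coronal, and
    sagittal slices through a given voxel: the three orthogonal views a
    2.5D network consumes instead of a full 3D patch."""
    axial = volume[z, :, :]
    coronal = volume[:, y, :]
    sagittal = volume[:, :, x]
    return axial, coronal, sagittal

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)
ax, co, sa = extract_planes(vol, 1, 2, 3)
print(ax.shape, co.shape, sa.shape)  # (3, 4) (2, 4) (2, 3)
```

Processing three 2D views keeps the memory footprint of 2D convolutions while still exposing context from all three anatomical planes.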
64
Deep CNN ensembles and suggestive annotations for infant brain MRI segmentation. Comput Med Imaging Graph 2019; 79:101660. [PMID: 31785402] [DOI: 10.1016/j.compmedimag.2019.101660]
Abstract
Precise 3D segmentation of infant brain tissues is an essential step towards comprehensive volumetric studies and quantitative analysis of early brain development. However, computing such segmentations is very challenging, especially for the 6-month infant brain, due to poor image quality and other difficulties inherent to infant brain MRI, e.g., the isointense contrast between white and gray matter and the severe partial volume effect due to small brain sizes. This study investigates the problem with an ensemble of semi-dense fully convolutional neural networks (CNNs), which employs T1-weighted and T2-weighted MR images as input. We demonstrate that the ensemble agreement is highly correlated with the segmentation errors. Therefore, our method provides measures that can guide local user corrections. To the best of our knowledge, this work is the first ensemble of 3D CNNs for suggesting annotations within images. Our quasi-dense architecture allows the efficient propagation of gradients during training while limiting the number of parameters, requiring an order of magnitude fewer parameters than popular medical image segmentation networks such as 3D U-Net (Çiçek, et al.). We also investigated the impact that early or late fusion of multiple image modalities might have on the performance of deep architectures. We report evaluations of our method on the public data of the MICCAI iSEG-2017 Challenge on 6-month infant brain MRI segmentation, and show very competitive results among 21 teams, ranking first or second in most metrics.
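The ensemble-agreement signal the abstract correlates with segmentation error can be pictured as a per-voxel disagreement map over the members' binary predictions. The exact measure the paper uses is not specified, so the normalization below is one plausible choice, not theirs:

```python
import numpy as np

def disagreement_map(predictions):
    """Per-voxel ensemble disagreement for binary predictions stacked on
    axis 0: 0 where all members agree, 1 at a 50/50 split. High values
    flag voxels worth suggesting for manual correction."""
    p = predictions.mean(axis=0)        # fraction of members voting foreground
    return 1.0 - np.abs(2.0 * p - 1.0)  # in [0, 1]

preds = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [1, 0, 0],
                  [1, 1, 0]])           # 4 ensemble members, 3 voxels
print(disagreement_map(preds))  # [0. 1. 1.]
```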
65
Bui TD, Wang L, Chen J, Lin W, Li G, Shen D. Multi-task Learning for Neonatal Brain Segmentation Using 3D Dense-Unet with Dense Attention Guided by Geodesic Distance. Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data: First MICCAI Workshop, DART 2019, and First International Workshop, MIL3ID 2019, Shenzhen, held in conjunction with MICCAI 20... 2019; 11795:243-251. [PMID: 32090208] [PMCID: PMC7034948] [DOI: 10.1007/978-3-030-33391-1_28]
Abstract
Deep convolutional neural networks have achieved outstanding performance on neonatal brain MRI tissue segmentation. However, they may fail to produce reasonable results on unseen datasets whose imaging appearance distributions differ from the training data, because deep learning models tend to fit the training dataset well without generalizing to unseen datasets. To address this problem, we propose a multi-task learning method that simultaneously learns tissue segmentation and geodesic distance regression to regularize a shared encoder network. Furthermore, a dense attention gate is explored to force the network to learn rich contextual information. Using three neonatal brain datasets acquired with different imaging protocols on different scanners, our experimental results demonstrate the superior performance of the proposed method over existing deep learning-based methods on unseen datasets.
Affiliation(s)
- Toan Duc Bui
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jian Chen
- School of Information Science and Engineering, Fujian University of Technology, Fuzhou 350118, China
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
Collapse
66
Bui TD, Shin J, Moon T. Skip-connected 3D DenseNet for volumetric infant brain MRI segmentation. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101613] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Indexed: 10/26/2022]
67
Zhou T, Ruan S, Canu S. A review: Deep learning for medical image segmentation using multi-modality fusion. Array 2019. [DOI: 10.1016/j.array.2019.100004] [Citation(s) in RCA: 198] [Impact Index Per Article: 39.6] [Indexed: 11/24/2022]
68
Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review. Med Image Anal 2019; 58:101552. [PMID: 31521965] [DOI: 10.1016/j.media.2019.101552] [Citation(s) in RCA: 542] [Impact Index Per Article: 108.4] [Received: 10/18/2018] [Revised: 08/23/2019] [Accepted: 08/30/2019] [Indexed: 01/30/2023]
Abstract
Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.
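The adversarial loss this review centers on can be illustrated with the standard minimax GAN objective. This is a generic sketch, not code from the review; it assumes the discriminator already outputs probabilities in (0, 1), e.g. via a sigmoid.

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss for the standard GAN objective:
    maximize log D(x) + log(1 - D(G(z))), written as a loss to minimize.
    d_real, d_fake: discriminator probabilities on real / generated samples."""
    return float(-(np.log(d_real) + np.log(1.0 - d_fake)).mean())

def g_loss_nonsaturating(d_fake):
    """Non-saturating generator loss: maximize log D(G(z)) instead of
    minimizing log(1 - D(G(z))), which gives stronger early gradients."""
    return float(-np.log(d_fake).mean())
```

At the equilibrium point where the discriminator outputs 0.5 everywhere, `d_loss` equals 2 log 2 — the value the original GAN analysis associates with the generator matching the data distribution.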
Affiliation(s)
- Xin Yi, Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
- Ekta Walia, Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada; Philips Canada, 281 Hillmount Road, Markham, ON L6C 2S3, Canada
- Paul Babyn, Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
69
Huang C, Tian J, Yuan C, Zeng P, He X, Chen H, Huang Y, Huang B. Fully Automated Segmentation of Lower Extremity Deep Vein Thrombosis Using Convolutional Neural Network. Biomed Res Int 2019; 2019:3401683. [PMID: 31281832] [PMCID: PMC6590596] [DOI: 10.1155/2019/3401683] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Received: 02/15/2019] [Revised: 05/07/2019] [Accepted: 05/26/2019] [Indexed: 11/17/2022]
Abstract
OBJECTIVE Deep vein thrombosis (DVT) is a disease caused by abnormal blood clots in deep veins. Accurate segmentation of DVT is important to facilitate diagnosis and treatment. In the current study, we proposed a fully automatic method of DVT delineation based on deep learning (DL) and contrast-enhanced magnetic resonance imaging (CE-MRI) images. METHODS 58 patients (25 males; 28-96 years old) with newly diagnosed lower extremity DVT were recruited. CE-MRI was acquired on a 1.5 T system. The ground truth (GT) of DVT lesions was manually contoured. A DL network with an encoder-decoder architecture was designed for DVT segmentation. An 8-fold cross-validation strategy was applied for training and testing. The Dice similarity coefficient (DSC) was adopted to evaluate the network's performance. RESULTS Our CNN model took about 1.5 s to segment a single MRI slice. The mean DSC over the 58 patients was 0.74 ± 0.17 and the median DSC was 0.79. Compared with other DL models, our CNN model achieved better performance in DVT segmentation (0.74 ± 0.17 versus 0.66 ± 0.15, 0.55 ± 0.20, and 0.57 ± 0.22). CONCLUSION Our proposed DL method was effective and fast for fully automatic segmentation of lower extremity DVT.
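The Dice similarity coefficient (DSC) used as the evaluation metric above has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|). The following is a minimal illustrative implementation for binary masks, not the study's code; the convention of returning 1.0 when both masks are empty is an assumption.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred AND gt| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # both masks empty: treat as a perfect match by convention
    return 2.0 * inter / denom if denom else 1.0
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the reported mean of 0.74 indicates that, on average, roughly three quarters of the combined lesion area is shared between prediction and ground truth.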
Affiliation(s)
- Chen Huang, Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Junru Tian, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China
- Chenglang Yuan, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China
- Ping Zeng, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China
- Xueping He, Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Hanwei Chen, Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Yi Huang, Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China; Medical Imaging Institute of Panyu, Guangzhou, China
- Bingsheng Huang, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China