1. Qasem A, Qin G, Zhou Z. AMS-U-Net: automatic mass segmentation in digital breast tomosynthesis via U-Net. J Med Imaging (Bellingham) 2024; 11:024005. [PMID: 38525294] [PMCID: PMC10960181] [DOI: 10.1117/1.jmi.11.2.024005]
Abstract
Purpose The objective of this study was to develop a fully automatic mass segmentation method called AMS-U-Net for digital breast tomosynthesis (DBT), a popular breast cancer screening imaging modality. The aim was to address the challenges posed by the increasing number of slices in DBT, which leads to a higher mass contouring workload and decreased treatment efficiency. Approach The study used 50 slices from different DBT volumes for evaluation. The AMS-U-Net approach consisted of four stages: image pre-processing, AMS-U-Net training, image segmentation, and post-processing. Model performance was evaluated using the true positive ratio (TPR), false positive ratio (FPR), F-score, intersection over union (IoU), and 95% Hausdorff distance (pixels), as these metrics are appropriate for datasets with class imbalance. Results The model achieved a TPR of 0.911, FPR of 0.003, F-score of 0.911, IoU of 0.900, and 95% Hausdorff distance of 5.82 pixels. Conclusions The AMS-U-Net model demonstrated impressive visual and quantitative results, achieving high segmentation accuracy without the need for human interaction. This capability has the potential to significantly improve clinical efficiency and workflow in DBT-based breast cancer screening.
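The five metrics quoted above can be made concrete. The sketch below is a generic NumPy illustration, not the study's code: the function names are mine, and the 95% Hausdorff distance is computed by brute force over all foreground pixels, which is practical only for small masks.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """TPR, FPR, F-score, and IoU from a pair of binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    tpr = tp / (tp + fn)                   # sensitivity / recall
    fpr = fp / (fp + tn)
    f_score = 2 * tp / (2 * tp + fp + fn)  # identical to the Dice coefficient
    iou = tp / (tp + fp + fn)              # Jaccard index
    return tpr, fpr, f_score, iou

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance in pixels, computed
    brute-force over all foreground pixels (a common simplification)."""
    pa = np.argwhere(pred)
    ga = np.argwhere(gt)
    d = np.sqrt(((pa[:, None, :] - ga[None, :, :]) ** 2).sum(-1))
    return max(np.percentile(d.min(1), 95), np.percentile(d.min(0), 95))
```

For class-imbalanced masks like a small mass on a large slice, these overlap-based scores are far more informative than plain pixel accuracy, which is why the study reports them.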
Affiliation(s)
- Ahmad Qasem
- University of Kansas Medical Center, The Reliable Intelligence and Medical Innovation Laboratory (RIMI Lab), Department of Biostatistics & Data Science, Kansas City, Kansas, United States
- Genggeng Qin
- Nanfang Hospital, Southern Medical University, Department of Radiology, Guangzhou, China
- Zhiguo Zhou
- University of Kansas Medical Center, The Reliable Intelligence and Medical Innovation Laboratory (RIMI Lab), Department of Biostatistics & Data Science, Kansas City, Kansas, United States
- University of Kansas Cancer Center, Kansas City, Kansas, United States
2. Cheng YK, Lin CL, Huang YC, Lin GS, Lian ZY, Chuang CH. Accurate Intervertebral Disc Segmentation Approach Based on Deep Learning. Diagnostics (Basel) 2024; 14:191. [PMID: 38248069] [PMCID: PMC10814817] [DOI: 10.3390/diagnostics14020191]
Abstract
Automatically segmenting specific tissues or structures from medical images is a straightforward task for deep learning models. However, identifying a few specific objects from a group of similar targets can be challenging. This study focuses on the segmentation of certain specific intervertebral discs from lateral spine images acquired from an MRI scanner. An approach is proposed that utilizes MultiResUNet models and employs saliency maps for target intervertebral disc segmentation. First, a sub-image cropping method is used to isolate the target discs: MultiResUNet predicts the saliency maps of the target discs, and sub-images are cropped around them for easier segmentation. MultiResUNet is then used to segment the target discs in these sub-images. The distance maps of the segmented discs are then calculated and combined with the original image as data augmentation to predict the remaining target discs. The training and test sets contain 2674 and 308 MRI images, respectively. Experimental results demonstrate that the proposed method significantly enhances segmentation accuracy, to about 98%, highlighting its effectiveness in segmenting specific intervertebral discs from closely similar neighbors.
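The first stage above (saliency map → cropped sub-image) can be sketched roughly as follows. This is an illustrative NumPy sketch, not the paper's implementation; `thresh` and `margin` are assumed parameters of my own choosing.

```python
import numpy as np

def crop_from_saliency(image, saliency, thresh=0.5, margin=8):
    """Threshold the predicted saliency map, take the bounding box of the
    surviving pixels, pad it with a safety margin, and crop the sub-image.
    Clipping keeps the box inside the image bounds."""
    ys, xs = np.where(saliency >= thresh)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1], (y0, y1, x0, x1)
```

Segmenting inside such a crop removes most of the visually similar neighboring discs from the network's input, which is the point of the two-stage design.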
Affiliation(s)
- Yu-Kai Cheng
- Department of Neurosurgery, China Medical University Hospital, Taichung 404, Taiwan;
- Chih-Lung Lin
- Department of Neurosurgery, Asia University Hospital, Taichung 413, Taiwan;
- Department of Occupational Therapy, Asia University, Taichung 413, Taiwan
- Yi-Chi Huang
- Department of Radiology, Asia University Hospital, Taichung 413, Taiwan;
- Guo-Shiang Lin
- Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung 411, Taiwan;
- Zhen-You Lian
- Department of Artificial Intelligence and Computer Engineering, National Chin-Yi University of Technology, Taichung 411, Taiwan;
- Cheng-Hung Chuang
- Department of Artificial Intelligence and Computer Engineering, National Chin-Yi University of Technology, Taichung 411, Taiwan;
3. Chen L, Li J, Zou Y, Wang T. ET U-Net: edge enhancement-guided U-Net with transformer for skin lesion segmentation. Phys Med Biol 2023; 69:015001. [PMID: 38131313] [DOI: 10.1088/1361-6560/ad13d2]
Abstract
Objective. Convolutional neural network (CNN)-based deep learning algorithms have been widely used in recent years for automatic skin lesion segmentation. However, the limited receptive fields of convolutional architectures hinder their ability to effectively model dependencies between different image ranges. The transformer is often employed in conjunction with a CNN to extract both global and local information from images, as it excels at capturing long-range dependencies. However, this method cannot accurately segment skin lesions with blurred boundaries. To overcome this difficulty, we proposed ETU-Net. Approach. ETU-Net, a novel multi-scale architecture, combines edge enhancement, CNN, and transformer. We introduce the concept of edge detection operators into difference convolution, resulting in the design of the edge enhanced convolution block (EC block) and the local transformer block (LT block), which emphasize edge features. To capture the semantic information contained in local features, we propose the multi-scale local attention block (MLA block), which utilizes convolutions with different kernel sizes. Furthermore, to address the boundary uncertainty caused by patch division in the transformer, we introduce a novel global transformer block (GT block), which allows each patch to gather full-size feature information. Main results. Extensive experimental results on three publicly available skin datasets (PH2, ISIC-2017, and ISIC-2018) demonstrate that ETU-Net outperforms state-of-the-art hybrid methods based on CNN and transformer in terms of segmentation performance. Moreover, ETU-Net exhibits excellent generalization ability in practical segmentation applications on dermatoscopy images contributed by the Wuxi No.2 People's Hospital. Significance. We propose ETU-Net, a novel multi-scale U-Net model guided by edge enhancement, which can address the challenges posed by complex lesion shapes and ambiguous boundaries in skin lesion segmentation tasks.
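The "difference convolution" idea behind the EC block can be illustrated with a minimal single-channel central-difference convolution: the output blends a vanilla convolution with a term that subtracts the center pixel, so flat regions are suppressed and intensity changes (edges) are emphasized. This sketch is in the spirit of that idea only; it is not the paper's code, the value of `theta` is illustrative, and the real blocks operate on multi-channel feature maps inside the network.

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """y = conv(x, w) - theta * x * sum(w).
    With theta = 0 this is a plain convolution; with theta = 1 a constant
    image maps to zero in the interior, so only edges produce response."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))          # zero padding, 'same' output
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += w[i, j] * xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out - theta * x * w.sum()
```

On a constant image with `theta=1`, every interior output pixel is exactly zero, which is the edge-selective behavior the EC block exploits.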
Affiliation(s)
- Lifang Chen
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
- Jiawei Li
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
- Yunmin Zou
- Department of Dermatology, Wuxi No.2 People's Hospital, Wuxi, People's Republic of China
- Tao Wang
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, People's Republic of China
4. Herzberg W, Hauptmann A, Hamilton SJ. Domain independent post-processing with graph U-nets: applications to electrical impedance tomographic imaging. Physiol Meas 2023; 44. [PMID: 37944184] [PMCID: PMC10777538] [DOI: 10.1088/1361-6579/ad0b3d]
Abstract
Objective. To extend the highly successful U-Net convolutional neural network architecture, which is limited to rectangular pixel/voxel domains, to a graph-based equivalent that works flexibly on irregular meshes, and to demonstrate its effectiveness on electrical impedance tomography (EIT). Approach. By interpreting the irregular mesh as a graph, we develop a graph U-Net with new cluster pooling and unpooling layers that mimic the classic neighborhood-based max-pooling important for imaging applications. Main results. The proposed graph U-Net is shown to be flexible and effective for improving early-iterate total variation (TV) reconstructions from EIT measurements, using as little as the first iteration. The performance is evaluated on simulated data and on experimental data from three measurement devices with different measurement geometries and instrumentation. We successfully show that such networks can be trained with a simple two-dimensional simulated training set and generalize to very different domains, including measurements from a three-dimensional device and subsequent 3D reconstructions. Significance. As many inverse problems are solved on irregular (e.g. finite element) meshes, the proposed graph U-Net and pooling layers provide the added flexibility to process directly on the computational mesh. Post-processing an early-iterate reconstruction greatly reduces the computational cost, which can become prohibitive in higher dimensions with dense meshes. As the graph structure is independent of 'dimension', the flexibility to extend networks trained on 2D domains to 3D domains offers a possibility to further reduce the computational cost of training.
Affiliation(s)
- William Herzberg
- Department of Mathematical and Statistical Sciences; Marquette University, Milwaukee, WI 53233, United States of America
- Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Finland and also with the Department of Computer Science, University College London, United Kingdom
- Sarah J Hamilton
- Department of Mathematical and Statistical Sciences; Marquette University, Milwaukee, WI 53233, United States of America
5. Ye Z, Saraf A, Ravipati Y, Hoebers F, Zha Y, Zapaishchykova A, Likitlersuang J, Tishler RB, Schoenfeld JD, Margalit DN, Haddad RI, Mak RH, Naser M, Wahid KA, Sahlsten J, Jaskari J, Kaski K, Mäkitie AA, Fuller CD, Aerts HJ, Kann BH. Fully-automated sarcopenia assessment in head and neck cancer: development and external validation of a deep learning pipeline. medRxiv 2023:2023.03.01.23286638. [PMID: 36945519] [PMCID: PMC10029039] [DOI: 10.1101/2023.03.01.23286638]
Abstract
Purpose Sarcopenia is an established prognostic factor in patients diagnosed with head and neck squamous cell carcinoma (HNSCC). Imaging-based quantification of sarcopenia is typically achieved through the skeletal muscle index (SMI), which can be derived from cervical neck skeletal muscle (SM) segmentation and cross-sectional area. However, manual SM segmentation is labor-intensive, prone to inter-observer variability, and impractical for large-scale clinical use. To overcome this challenge, we developed and externally validated a fully-automated image-based deep learning (DL) platform for cervical vertebral SM segmentation and SMI calculation, and evaluated its association with survival and toxicity outcomes. Materials and Methods 899 patients diagnosed with HNSCC and with CT scans from multiple institutions were included: 335 cases were used for training, 96 for validation, 48 for internal testing, and 393 for external testing. Ground-truth single-slice segmentations of SM at the C3 vertebral level were manually generated by experienced radiation oncologists. To segment the SM efficiently, a multi-stage DL pipeline was implemented, consisting of a 2D convolutional neural network (CNN) to select the middle slice of the C3 section and a 2D U-Net to segment SM areas. Model performance was evaluated using the Dice similarity coefficient (DSC) as the primary metric for the internal test set; for the external test set, the quality of automated segmentation was assessed manually by two experienced radiation oncologists. The L3 skeletal muscle area (SMA) and SMI were then calculated from the C3 cross-sectional area (CSA) of the auto-segmented SM. Finally, established SMI cut-offs were used in further analyses to assess correlation with survival and toxicity endpoints in the external institution, with univariable and multivariable Cox regression.
Results DSCs for the validation set (n = 96) and internal test set (n = 48) were 0.90 (95% CI: 0.90-0.91) and 0.90 (95% CI: 0.89-0.91), respectively. The predicted CSA was highly correlated with the ground-truth CSA in both the validation (r = 0.99, p < 0.0001) and test sets (r = 0.96, p < 0.0001). In the external test set (n = 377), 96.2% of the SM segmentations were deemed acceptable by consensus expert review. Predicted SMA and SMI values were highly correlated with the ground-truth values, with Pearson r > 0.99 (p < 0.0001) for both female and male patients in all datasets. Sarcopenia was associated with worse OS (HR 2.05 [95% CI 1.04-4.04], p = 0.04) and longer PEG tube duration (median 162 days vs. 134 days; HR 1.51 [95% CI 1.12-2.08], p = 0.006) in multivariable analysis. Conclusion We developed and externally validated a fully-automated platform for imaging-assessed sarcopenia in patients with head and neck cancer that correlates with survival and toxicity outcomes. This study constitutes a significant stride towards the integration of sarcopenia assessment into decision-making for individuals diagnosed with HNSCC. Summary Statement In this study, we developed and externally validated a deep learning model to investigate the impact of sarcopenia, defined as the loss of skeletal muscle mass, on patients with head and neck squamous cell carcinoma (HNSCC) undergoing radiotherapy. We demonstrated an efficient, fully-automated deep learning pipeline that can accurately segment the C3 skeletal muscle area, calculate cross-sectional area, and derive a skeletal muscle index to diagnose sarcopenia from a standard-of-care CT scan. In multi-institutional data, we found that pre-treatment sarcopenia was associated with significantly reduced overall survival and an increased risk of adverse events.
Given the increased vulnerability of patients with HNSCC, the assessment of sarcopenia prior to radiotherapy may aid in informed treatment decision-making and serve as a predictive marker for the necessity of early supportive measures.
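The downstream arithmetic of the pipeline (C3 cross-sectional area → L3 skeletal muscle area → SMI) can be sketched as below. Note the caveats: the regression coefficients in `c3_to_l3_sma` are placeholders for illustration only, since the study uses a published C3-to-L3 prediction model whose exact coefficients are not given in this abstract; only the SMI normalization by height squared is standard.

```python
def c3_to_l3_sma(c3_csa_cm2, age_yr, weight_kg, sex_male):
    """Hypothetical linear model SMA = a + b*CSA + c*age + d*weight + e*sex.
    The coefficients below are NOT from the paper; they only show the
    shape of such a conversion."""
    a, b, c, d, e = 27.3, 1.36, -0.67, 0.64, 26.4  # placeholder values
    return a + b * c3_csa_cm2 + c * age_yr + d * weight_kg + e * sex_male

def smi(sma_cm2, height_m):
    """Skeletal muscle index in cm^2 / m^2: area normalized by height^2."""
    return sma_cm2 / (height_m ** 2)
```

Established sex-specific SMI cut-offs are then applied to the result to classify a patient as sarcopenic, which is the binary variable entering the Cox regressions above.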
Affiliation(s)
- Zezhong Ye
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Anurag Saraf
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Yashwanth Ravipati
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Frank Hoebers
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, the Netherlands
- Yining Zha
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Anna Zapaishchykova
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Jirapat Likitlersuang
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Roy B. Tishler
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Jonathan D. Schoenfeld
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Danielle N. Margalit
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Robert I. Haddad
- Department of Medical Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Raymond H. Mak
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Mohamed Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kareem A. Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Jaakko Sahlsten
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Joel Jaskari
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Kimmo Kaski
- Department of Computer Science, Aalto University School of Science, Espoo, Finland
- Antti A. Mäkitie
- Department Otorhinolaryngology – Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Hugo J.W.L. Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Department Otorhinolaryngology – Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Research Program in Systems Oncology, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Radiology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, United States
- Benjamin H. Kann
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States
- Department of Radiation Oncology, Dana-Farber Cancer Institute and Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
6. Chandrashekar A, Handa A, Shivakumar N, Lapolla P, Uberoi R, Grau V, Lee R. A Deep Learning Pipeline to Automate High-Resolution Arterial Segmentation With or Without Intravenous Contrast. Ann Surg 2022; 276:e1017-e1027. [PMID: 33234786] [DOI: 10.1097/SLA.0000000000004595]
Abstract
BACKGROUND Existing methods to reconstruct vascular structures from a computerized tomography (CT) angiogram rely on contrast injection to enhance the radio-density within the vessel lumen. However, pathological changes in the vasculature may be present that prevent accurate reconstruction. In aortic aneurysmal disease, a thrombus adherent to the aortic wall within the expanding aneurysmal sac is present in >90% of cases. These deformations prevent the automatic extraction of vital clinical information by existing image reconstruction methods. AIM In this study, a deep learning architecture consisting of a modified U-Net with attention-gating was implemented to establish a high-throughput and automated segmentation pipeline of pathological blood vessels in CT images acquired with or without the use of a contrast agent. METHODS AND RESULTS Seventy-five patients with paired noncontrast and contrast-enhanced CT images were randomly selected from an ongoing study (Ethics Ref 13/SC/0250), manually annotated, and used for model training and evaluation. Data augmentation was implemented to diversify the training data set in a ratio of 10:1. The performance of our attention-based U-Net in extracting both the inner (blood flow) lumen and the wall structure of the aortic aneurysm from CT angiograms was compared against a generic 3-D U-Net and displayed superior results. Implementation of this network within the aortic segmentation pipeline for both contrast and noncontrast CT images has allowed for accurate and efficient extraction of the morphological and pathological features of the entire aortic volume. CONCLUSIONS This extraction method can be used to standardize aneurysmal disease management and sets the foundation for complex geometric and morphological analysis. Furthermore, this pipeline can be extended to other vascular pathologies.
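The attention-gating mentioned above reweights skip-connection features using a gating signal from the coarser decoder level. A heavily simplified sketch of an additive attention gate follows; the scalar weights `wx`, `wg`, `psi` stand in for the 1×1 convolutions of a real gate, so this illustrates the mechanism only, not the paper's architecture.

```python
import numpy as np

def attention_gate(x, g, wx=1.0, wg=1.0, psi=1.0):
    """Additive attention gate: combine skip features x with the gating
    signal g, pass through ReLU then a sigmoid, and use the resulting
    coefficients in (0, 1) to reweight x before concatenation."""
    q = np.maximum(wx * x + wg * g, 0.0)       # ReLU on the combined signal
    alpha = 1.0 / (1.0 + np.exp(-psi * q))     # sigmoid attention coefficients
    return alpha * x, alpha
```

Regions where skip features and gating signal agree get coefficients near 1 and pass through; irrelevant background is attenuated, which is what lets the network suppress non-aortic structures.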
7.
Abstract
Imaging applications tailored towards ultrasound-based treatment, such as high-intensity focused ultrasound (FUS), where higher-power ultrasound generates a radiation force for ultrasound elasticity imaging or therapeutics/theranostics, are affected by interference from FUS. The artifact becomes more pronounced with increasing intensity and power. To overcome this limitation, we propose FUS-net, a method that incorporates a CNN-based U-Net autoencoder trained end-to-end on 'clean' and 'corrupted' RF data in TensorFlow 2.3 for FUS artifact removal. The network learns the representation of RF data and FUS artifacts in latent space, so that the output for corrupted RF input is clean RF data. We find that FUS-net performs 15% better than stacked autoencoders (SAE) on the evaluated test datasets. B-mode images beamformed from FUS-net RF show superior speckle quality and a better contrast-to-noise ratio (CNR) than both notch-filtered and adaptive least-mean-squares filtered RF data. Furthermore, FUS-net filtered images had lower errors and higher similarity to clean images collected from unseen scans at all pressure levels. Lastly, FUS-net RF can be used with existing cross-correlation speckle-tracking algorithms to generate displacement maps. FUS-net currently outperforms conventional filtering and SAEs for removing high-pressure FUS interference from RF data, and hence may be applicable to all FUS-based imaging and therapeutic methods.
8. Tsai KJ, Chang CC, Lo LC, Chiang JY, Chang CS, Huang YJ. Automatic segmentation of paravertebral muscles in abdominal CT scan by U-Net: The application of data augmentation technique to increase the Jaccard ratio of deep learning. Medicine (Baltimore) 2021; 100:e27649. [PMID: 34871238] [PMCID: PMC8568419] [DOI: 10.1097/md.0000000000027649]
Abstract
Sarcopenia, characterized by a decline of skeletal muscle mass, has emerged as an important prognostic factor for cancer patients. Trunk computed tomography (CT) is a commonly used modality for assessing cancer disease extent and treatment outcome. CT images can also be used to analyze skeletal muscle mass filtered by the appropriate range of the Hounsfield scale. However, manually depicting skeletal muscle in CT scan images to assess skeletal muscle mass is labor-intensive and unrealistic in clinical practice. In this paper, we propose a novel U-Net based segmentation system for CT scans of paravertebral muscles at the third and fourth lumbar spine levels. Since the number of training samples is limited (1024 CT images only), the performance of a deep learning approach is restricted by overfitting. A data augmentation strategy is therefore employed to enlarge the diversity of the training set and further boost performance. We also discuss how the number of feature maps in our U-Net affects the performance of the semantic segmentation. The efficacies of the proposed methodology with and without data augmentation and with different feature maps are compared in the experiments. We show that the Jaccard score is approximately 95.0% with the proposed data augmentation method and only 16 feature maps used in the U-Net. The stability and efficiency of the proposed U-Net are verified in cross-validation experiments.
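The abstract does not spell out which augmentation operations were used, so the sketch below shows only the simplest label-preserving geometric variants (the four right-angle rotations, each with and without a horizontal flip), applied identically to image and mask so the pixel labels stay aligned. It is an illustration of the general strategy, not the paper's augmentation scheme.

```python
import numpy as np

def augment(image, mask):
    """Yield 8 geometric variants of an (image, mask) pair; the mask
    always receives the identical transform as the image."""
    for k in range(4):                         # 0/90/180/270-degree rotations
        ri, rm = np.rot90(image, k), np.rot90(mask, k)
        yield ri, rm
        yield np.fliplr(ri), np.fliplr(rm)     # plus a horizontal flip of each
```

Enlarging a 1024-image training set eightfold this way is a cheap hedge against the overfitting the authors describe.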
Affiliation(s)
- Kuen-Jang Tsai
- Department of Surgery, E-Da Cancer Hospital, Taiwan
- College of Medicine, I-Shou University, Kaohsiung, Taiwan
- Chih-Chun Chang
- Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Lun-Chien Lo
- School of Chinese Medicine, China Medical University, Taichung, Taiwan
- Department of Chinese Medicine, China Medical University Hospital, Taichung, Taiwan
- John Y. Chiang
- Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung, Taiwan
- Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Chao-Sung Chang
- Department of Hematology/Oncology, E-Da Cancer Hospital, School of Medicine for International Students, I-Shou University, Kaohsiung, Taiwan
- Yu-Jung Huang
- Department of Electronic Engineering, I-Shou University, Kaohsiung, Taiwan
9. Shilpashree PS, Suresh KV, Sudhir RR, Srinivas SP. Automated Image Segmentation of the Corneal Endothelium in Patients With Fuchs Dystrophy. Transl Vis Sci Technol 2021; 10:27. [PMID: 34807254] [PMCID: PMC8626858] [DOI: 10.1167/tvst.10.13.27]
Abstract
Purpose To perform segmentation of specular microscopy (SM) images of the corneal endothelium in order to compare average perimeter length (APL) between Fuchs endothelial corneal dystrophy (FECD) patients and healthy subjects. Methods A retrospective review of clinical records of FECD patients and subjects with healthy endothelium was carried out to collect images of the endothelium. The images were segmented by a modified U-Net, a deep learning architecture, followed by the Watershed algorithm to resolve merged cell borders (<5%). The segmented images were analyzed for endothelial cell density (ECDUW) and APL. Results The combination of the U-Net and Watershed algorithm, referred to as the UW approach, enabled a complete segmentation of the endothelium. In healthy subjects, ECDUW was close to estimates by SM and manual segmentation (31 subjects; P > 0.1). In FECD, however, ECDUW was closer to estimates by manual segmentation but not by SM (27 patients; P < 0.001). ECDUW in FECD (2547 ± 499 cells/mm2; 60 patients) was smaller than in healthy subjects (2713 ± 401 cells/mm2; 70 subjects) (P < 0.001). APL in healthy subjects was 66.87 ± 7.68 µm/cell (70 subjects), but it increased with %Guttae in FECD (56.60-195.30 µm/cell; 60 patients) (P < 0.0001). Conclusions The UW approach is precise for the segmentation of SM images from healthy subjects and FECD patients. Our analysis revealed that APL increases with %Guttae. Translational Relevance The average perimeter length of the corneal endothelium, which represents the length of the paracellular pathway for fluid flux into the stroma, is increased in Fuchs dystrophy.
Affiliation(s)
- Palanahalli S. Shilpashree
- Department of Electronics and Communication Engineering, Siddaganga Institute of Technology (Affiliated to Visvesvaraya Technological University, Belagavi), Tumkur, India
- Kaggere V. Suresh
- Department of Electronics and Communication Engineering, Siddaganga Institute of Technology (Affiliated to Visvesvaraya Technological University, Belagavi), Tumkur, India
10. Zhang X, Liu X, Zhang B, Dong J, Zhang B, Zhao S, Li S. Accurate segmentation for different types of lung nodules on CT images using improved U-Net convolutional network. Medicine (Baltimore) 2021; 100:e27491. [PMID: 34622882] [PMCID: PMC8500581] [DOI: 10.1097/md.0000000000027491]
Abstract
Since lung nodules on computed tomography (CT) images can have different shapes, contours, textures, or locations, and may be attached to neighboring blood vessels or pleural surfaces, accurate segmentation is still challenging. In this study, we propose an accurate segmentation method based on an improved U-Net convolutional network for different types of lung nodules on CT images. The first phase is to segment the lung parenchyma and correct the lung contour by applying the α-hull algorithm. The second phase is to extract image pairs of patches containing lung nodules in the center, together with the corresponding ground truth, and build an improved U-Net network with batch normalization. A large number of experiments show that the Dice loss yields better segmentation performance than mean squared error and binary cross-entropy losses. The α-hull algorithm and batch normalization improve segmentation performance effectively. Our best Dice similarity coefficient (0.8623) is also competitive with other state-of-the-art segmentation algorithms. To segment different types of lung nodules accurately, we propose an improved U-Net network, which improves segmentation accuracy effectively. Moreover, this work has practical value in helping radiologists segment lung nodules and diagnose lung cancer.
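The loss comparison above has a simple explanation: a soft Dice loss is computed over the overlap only, so the large background of a nodule patch contributes nothing, whereas MSE and binary cross-entropy average over every pixel and can be dominated by background. A minimal NumPy sketch of the soft Dice loss (not the paper's implementation):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩G| / (|P| + |G|), with eps for stability.
    pred holds probabilities in [0, 1]; target holds the binary mask."""
    p, g = pred.ravel(), target.ravel()
    return 1.0 - (2.0 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)
```

A perfect prediction gives a loss near 0 and a completely disjoint one a loss near 1, regardless of how small the nodule is relative to the patch.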
11
Fayad LM, Parekh VS, Luna RDC, Ko CC, Tank D, Fritz J, Ahlawat S, Jacobs MA. A Deep Learning System for Synthetic Knee Magnetic Resonance Imaging: Is Artificial Intelligence-Based Fat-Suppressed Imaging Feasible? Invest Radiol 2021; 56:357-368. [PMID: 33350717 PMCID: PMC8087629 DOI: 10.1097/rli.0000000000000751]
Abstract
MATERIALS AND METHODS This single-center study was approved by the institutional review board. Artificial intelligence-based FS MRI scans were created from non-FS images using a deep learning system with a modified convolutional neural network-based U-Net that used a training set of 25,920 images and a validation set of 16,416 images. Three musculoskeletal radiologists reviewed 88 knee MR studies in 2 sessions, the original (proton density [PD] + FSPD) and the synthetic (PD + AFSMRI). Readers recorded AFSMRI quality (diagnostic/nondiagnostic) and the presence or absence of meniscal, ligament, and tendon tears; cartilage defects; and bone marrow abnormalities. Contrast-to-noise ratio measurements were made among subcutaneous fat, fluid, bone marrow, cartilage, and muscle. The original MRI sequences were used as the reference standard to determine the diagnostic performance of AFSMRI (combined with the original PD sequence). This was a fully balanced study design, in which all readers read all images the same number of times, allowing determination of the interchangeability of the original and synthetic protocols. Descriptive statistics, intermethod agreement, interobserver concordance, and interchangeability tests were applied. A P value less than 0.01 was considered statistically significant for the likelihood ratio testing, and a P value less than 0.05 for all other statistical analyses. RESULTS Artificial intelligence-based FS MRI quality was rated as diagnostic (98.9% [87/88] to 100% [88/88], all readers). Diagnostic performance (sensitivity/specificity) of the synthetic protocol was high for tears of the menisci (91% [71/78], 86% [84/98]), cruciate ligaments (92% [12/13], 98% [160/163]), collateral ligaments (80% [16/20], 100% [156/156]), and tendons (90% [9/10], 100% [166/166]).
For cartilage defects and bone marrow abnormalities, the synthetic protocol offered an overall sensitivity/specificity of 77% (170/221)/93% (287/307) and 76% (95/125)/90% (443/491), respectively. Intermethod agreement ranged from moderate to substantial for almost all evaluated structures (menisci, cruciate ligaments, collateral ligaments, and bone marrow abnormalities). No significant difference was observed between methods for all structural abnormalities by all readers (P > 0.05), except for cartilage assessment. Interobserver agreement ranged from moderate to substantial for almost all evaluated structures. Original and synthetic protocols were interchangeable for the diagnosis of all evaluated structures. There was no significant difference in the common exact match proportions for all combinations (P > 0.01). The conspicuity of all tissues assessed through contrast-to-noise ratio was higher on AFSMRI than on original FSPD images (P < 0.05). CONCLUSIONS Artificial intelligence-based FS MRI (3D AFSMRI) is feasible and offers a method for fast imaging, with detection rates for structural abnormalities of the knee similar to those of original 3D MR sequences.
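The contrast-to-noise comparisons reported above are conventionally computed as the absolute difference of mean ROI intensities divided by a noise estimate; the following is a hedged sketch (the paper's exact ROI protocol and noise estimator are not reproduced here):

```python
from statistics import mean, pstdev

def contrast_to_noise_ratio(roi_a, roi_b, background):
    # CNR between two tissue ROIs (e.g., fluid vs. muscle), with noise
    # estimated as the standard deviation of a background region's intensities.
    return abs(mean(roi_a) - mean(roi_b)) / pstdev(background)
```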
Affiliation(s)
- Laura M. Fayad
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Vishwa S. Parekh
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, USA
- Rodrigo de Castro Luna
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Charles C. Ko
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Dharmesh Tank
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Jan Fritz
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Department of Radiology, New York University Grossman School of Medicine, NYU Langone Health, New York, NY, USA
- Shivani Ahlawat
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Michael A. Jacobs
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, MD, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
12
Kassim YM, Palaniappan K, Yang F, Poostchi M, Palaniappan N, Maude RJ, Antani S, Jaeger S. Clustering-Based Dual Deep Learning Architecture for Detecting Red Blood Cells in Malaria Diagnostic Smears. IEEE J Biomed Health Inform 2021; 25:1735-1746. [PMID: 33119516 PMCID: PMC8127616 DOI: 10.1109/jbhi.2020.3034863]
Abstract
Computer-assisted algorithms have become a mainstay of biomedical applications to improve accuracy and reproducibility of repetitive tasks like manual segmentation and annotation. We propose a novel pipeline for red blood cell detection and counting in thin blood smear microscopy images, named RBCNet, using a dual deep learning architecture. RBCNet consists of a U-Net first stage for cell-cluster or superpixel segmentation, followed by a second refinement stage, Faster R-CNN, for detecting small cell objects within the connected-component clusters. RBCNet uses cell clustering instead of region proposals, which is robust to cell fragmentation, is highly scalable for detecting small objects or fine-scale morphological structures in very large images, can be trained using non-overlapping tiles, and during inference is adaptive to the scale of cell clusters with a low memory footprint. We tested our method on an archived collection of human malaria smears with nearly 200,000 labeled cells across 965 images from 193 patients, acquired in Bangladesh, with each patient contributing five images. Cell detection accuracy using RBCNet was higher than 97%. The novel dual-cascade RBCNet architecture provides more accurate cell detections because the foreground cell-cluster masks from U-Net adaptively guide the detection stage, resulting in a notably higher true positive rate and lower false alarm rate compared to traditional and other deep learning methods. The RBCNet pipeline implements a crucial step towards automated malaria diagnosis.
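The cell-clustering idea in the first RBCNet stage can be illustrated by extracting connected-component bounding boxes from a binary foreground mask; each box would then be cropped and handed to the second-stage detector. A stand-alone sketch (not the authors' implementation):

```python
from collections import deque

def cluster_boxes(mask):
    # Connected-component bounding boxes from a binary mask (a stand-in for
    # the U-Net foreground output). Each (y0, x0, y1, x1) box is a candidate
    # cell-cluster region for the refinement detector.
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill over 4-connected foreground pixels.
                queue = deque([(y, x)])
                seen[y][x] = True
                y0 = y1 = y
                x0 = x1 = x
                while queue:
                    cy, cx = queue.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((y0, x0, y1, x1))
    return boxes
```

Because clusters, not dense region proposals, drive the second stage, the memory footprint scales with the number of foreground components rather than the image size.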
Affiliation(s)
- Yasmin M. Kassim
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Feng Yang
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Mahdieh Poostchi
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Nila Palaniappan
- School of Medicine, University of Missouri-Kansas City, Kansas City, MO 64110, USA
- Richard J Maude
- Mahidol-Oxford Tropical Medicine Research Unit, Mahidol University, Bangkok 10400, Thailand
- Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford OX3 7LG, UK
- Harvard TH Chan School of Public Health, Harvard University, Boston, MA 02115, USA
- Sameer Antani
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Stefan Jaeger
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
13
Do NT, Jung ST, Yang HJ, Kim SH. Multi-Level Seg-Unet Model with Global and Patch-Based X-ray Images for Knee Bone Tumor Detection. Diagnostics (Basel) 2021; 11:691. [PMID: 33924426 DOI: 10.3390/diagnostics11040691]
Abstract
Tumor classification and segmentation problems have attracted interest in recent years. In contrast to the abundance of studies examining brain, lung, and liver cancers, there has been a lack of studies using deep learning to classify and segment knee bone tumors. In this study, our objective is to assist physicians in radiographic interpretation by detecting and classifying knee bone regions as normal, benign-tumor, or malignant-tumor regions. We proposed the Seg-Unet model with global and patch-based approaches to deal with challenges involving the small size, appearance variety, and uncommon nature of bone lesions. Our model contains classification, tumor segmentation, and high-risk region segmentation branches to learn mutual benefits between the global context of the whole image and the local texture at every pixel. The patch-based model improves our performance in malignant-tumor detection. We built the knee bone tumor dataset with the support of the physicians of Chonnam National University Hospital (CNUH). Experiments on the dataset demonstrate that our method achieves better performance than other methods, with an accuracy of 99.05% for classification and a mean IoU of 84.84% for segmentation. Our results can significantly help physicians in knee bone tumor detection.
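Multi-branch models like the one described are commonly trained with a weighted sum of the per-branch losses; the following is a hypothetical sketch of that pattern (the paper's actual branch weights are not stated in the abstract):

```python
def multi_branch_loss(cls_loss, tumor_seg_loss, risk_seg_loss, weights=(1.0, 1.0, 1.0)):
    # Joint objective over the classification, tumor-segmentation, and
    # high-risk-region-segmentation branches; the weights trade off how much
    # each branch drives the shared encoder (values here are illustrative).
    w_cls, w_tumor, w_risk = weights
    return w_cls * cls_loss + w_tumor * tumor_seg_loss + w_risk * risk_seg_loss
```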
14
Zhao F, Wu Z, Wang L, Lin W, Gilmore JH, Xia S, Shen D, Li G. Spherical Deformable U-Net: Application to Cortical Surface Parcellation and Development Prediction. IEEE Trans Med Imaging 2021; 40:1217-1228. [PMID: 33417540 PMCID: PMC8016713 DOI: 10.1109/tmi.2021.3050072]
Abstract
Convolutional Neural Networks (CNNs) have achieved overwhelming success in learning-related problems for 2D/3D images in the Euclidean space. However, unlike in the Euclidean space, the shapes of many structures in medical imaging have an inherent spherical topology in a manifold space, e.g., the convoluted brain cortical surfaces represented by triangular meshes. There is no consistent neighborhood definition and thus no straightforward convolution/pooling operations for such cortical surface data. In this paper, leveraging the regular and hierarchical geometric structure of the resampled spherical cortical surfaces, we create the 1-ring filter on spherical cortical triangular meshes and accordingly develop convolution/pooling operations for constructing Spherical U-Net for cortical surface data. However, the regular nature of the 1-ring filter makes it inherently limited to model fixed geometric transformations. To further enhance the transformation modeling capability of Spherical U-Net, we introduce the deformable convolution and deformable pooling to cortical surface data and accordingly propose the Spherical Deformable U-Net (SDU-Net). Specifically, spherical offsets are learned to freely deform the 1-ring filter on the sphere to adaptively localize cortical structures with different sizes and shapes. We then apply the SDU-Net to two challenging and scientifically important tasks in neuroimaging: cortical surface parcellation and cortical attribute map prediction. Both applications validate the competitive performance of our approach in accuracy and computational efficiency in comparison with state-of-the-art methods.
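The 1-ring filter can be pictured as one shared weight vector that mixes each vertex with its ring neighbors. Below is a toy sketch on a tiny mesh (the actual Spherical U-Net operates on resampled icosahedral meshes where regular vertices have six ring neighbors; names and the 2-neighbor mesh here are illustrative only):

```python
def one_ring_conv(values, neighbors, weights):
    # 1-ring filter on a mesh: each output vertex is a weighted sum of its
    # own value (center tap, weights[0]) and its ring neighbors' values,
    # using the same weight vector at every vertex.
    out = []
    for v, ring in enumerate(neighbors):
        taps = [values[v]] + [values[u] for u in ring]
        out.append(sum(w * t for w, t in zip(weights, taps)))
    return out
```

In the deformable variant, learned spherical offsets would move the ring taps to non-fixed positions before this weighted sum is taken.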
15
Jin Q, Meng Z, Sun C, Cui H, Su R. RA-UNet: A Hybrid Deep Attention-Aware Network to Extract Liver and Tumor in CT Scans. Front Bioeng Biotechnol 2020; 8:605132. [PMID: 33425871 PMCID: PMC7785874 DOI: 10.3389/fbioe.2020.605132]
Abstract
Automatic extraction of the liver and tumors from CT volumes is a challenging task due to their heterogeneous and diffusive shapes. Recently, 2D deep convolutional neural networks have become popular in medical image segmentation tasks because of the utilization of large labeled datasets to learn hierarchical features. However, few studies investigate 3D networks for liver tumor segmentation. In this paper, we propose a 3D hybrid residual attention-aware segmentation method, RA-UNet, to precisely extract the liver region and segment tumors from the liver. The proposed network has a basic U-Net architecture, which extracts contextual information by combining low-level feature maps with high-level ones. Attention residual modules are integrated so that the attention-aware features change adaptively. This is the first work in which an attention residual mechanism is used to segment tumors from 3D medical volumetric images. We evaluated our framework on the public MICCAI 2017 Liver Tumor Segmentation dataset and tested its generalization on the 3DIRCADb dataset. The experiments show that our architecture obtains competitive results.
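Attention residual modules are commonly formulated as output = (1 + M(x)) * T(x), where M is a soft mask branch and T the trunk branch; a minimal sketch of that formulation on flat feature vectors (illustrative, not the authors' code):

```python
import math

def attention_residual(trunk, mask_logits):
    # Residual attention: output = (1 + M(x)) * T(x). Because the soft mask
    # M(x) lies in (0, 1), the module can only amplify trunk features, never
    # suppress them to zero, so gradients keep flowing through the trunk.
    out = []
    for t, m in zip(trunk, mask_logits):
        soft_mask = 1.0 / (1.0 + math.exp(-m))  # sigmoid
        out.append((1.0 + soft_mask) * t)
    return out
```

In the 3D network the same elementwise modulation would be applied per voxel and channel.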
Affiliation(s)
- Qiangguo Jin
- School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China
- CSIRO Data61, Sydney, NSW, Australia
- Zhaopeng Meng
- School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China
- Tianjin University of Traditional Chinese Medicine, Tianjin, China
- Hui Cui
- Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC, Australia
- Ran Su
- School of Computer Software, College of Intelligence and Computing, Tianjin University, Tianjin, China
16
Seo H, Huang C, Bassenne M, Xiao R, Xing L. Modified U-Net (mU-Net) With Incorporation of Object-Dependent High Level Features for Improved Liver and Liver-Tumor Segmentation in CT Images. IEEE Trans Med Imaging 2020; 39:1316-1325. [PMID: 31634827 PMCID: PMC8095064 DOI: 10.1109/tmi.2019.2948320]
Abstract
Segmentation of livers and liver tumors is one of the most important steps in radiation therapy of hepatocellular carcinoma. The segmentation task is often done manually, making it tedious, labor intensive, and subject to intra-/inter-operator variations. While various algorithms for delineating organs-at-risk (OARs) and tumor targets have been proposed, automatic segmentation of livers and liver tumors remains intractable due to their low tissue contrast with respect to the surrounding organs and their deformable shape in CT images. The U-Net has gained increasing popularity recently for image analysis tasks and has shown promising results. Conventional U-Net architectures, however, suffer from three major drawbacks. First, skip connections allow for the duplicated transfer of low resolution information in feature maps to improve efficiency in learning, but this often leads to blurring of extracted image features. Second, high level features extracted by the network often do not contain enough high resolution edge information of the input, leading to greater uncertainty in tasks where high resolution edges dominantly affect the network's decisions, such as liver and liver-tumor segmentation. Third, it is generally difficult to optimize the number of pooling operations in order to extract high level global features, since the number of pooling operations used depends on the object size. To cope with these problems, we added a residual path with deconvolution and activation operations to the skip connection of the U-Net to avoid duplication of low resolution information of features. In the case of small object inputs, features in the skip connection are not incorporated with features in the residual path. Furthermore, the proposed architecture has additional convolution layers in the skip connection in order to extract high level global features of small object inputs as well as high level features of high resolution edge information of large object inputs.
Efficacy of the modified U-Net (mU-Net) was demonstrated using the public dataset of the Liver Tumor Segmentation (LiTS) challenge 2017. For liver-tumor segmentation, a Dice similarity coefficient (DSC) of 89.72%, volume of error (VOE) of 21.93%, and relative volume difference (RVD) of -0.49% were obtained. For liver segmentation, a DSC of 98.51%, VOE of 3.07%, and RVD of 0.26% were calculated. For the public 3D Image Reconstruction for Comparison of Algorithm Database (3Dircadb), DSCs were 96.01% for liver and 68.14% for liver-tumor segmentation, respectively. The proposed mU-Net outperformed existing state-of-the-art networks.
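For reference, the VOE and RVD figures quoted above are commonly defined from the segmented volumes as follows; a sketch under the usual definitions (sign conventions for RVD vary between papers, so treat the sign as illustrative):

```python
def volume_overlap_error(pred_vol, ref_vol, inter_vol):
    # VOE = 1 - intersection/union of the predicted and reference volumes,
    # i.e., one minus the Jaccard index; 0 means perfect overlap.
    union = pred_vol + ref_vol - inter_vol
    return 1.0 - inter_vol / union

def relative_volume_difference(pred_vol, ref_vol):
    # Signed volume mismatch relative to the reference; 0 means the
    # predicted volume matches the reference volume exactly.
    return (pred_vol - ref_vol) / ref_vol
```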
17
Yang L, Coleman MC, Hines MR, Kluz PN, Brouillette MJ, Goetz JE. Deep Learning for Chondrocyte Identification in Automated Histological Analysis of Articular Cartilage. Iowa Orthop J 2019; 39:1-8. [PMID: 32577101 PMCID: PMC7047299]
Abstract
BACKGROUND Histology-based methods are commonly used in osteoarthritis (OA) research because they provide detailed information about cartilage health at the cellular and tissue level. Computer-based cartilage scoring systems have previously been developed using standard image analysis techniques to give more objective and reliable evaluations of OA severity. The goal of this work was to develop a deep learning-based method to segment chondrocytes from histological images of cartilage and to validate the resulting method via comparison with human segmentation. METHODS The U-Net approach was adapted for the task of chondrocyte segmentation. A training dataset consisting of 235 images and a validation set consisting of 25 images, in which individual chondrocytes had been manually segmented, were used for training the U-Net. Chondrocyte count, detection accuracy, and boundary segmentation of the trained U-Net were evaluated by comparing its results with those of human observers. RESULTS The U-Net chondrocyte counts were not significantly different (p = 0.361 in a paired t-test) from the algorithm trainer's counts (Pearson correlation coefficient = 0.92). The five expert observers had good agreement on chondrocyte counts (intraclass correlation coefficient = 0.868); however, the U-Net counted significantly fewer chondrocytes than the average of those expert observers (p < 0.001 in a paired t-test). Chondrocytes were accurately detected by the U-Net (F1 scores = 0.86 and 0.90 with respect to the selected expert observer and the algorithm trainer, respectively). Segmentation accuracy was also high (IoU = 0.828) relative to the algorithm trainer. CONCLUSIONS This work developed a method for chondrocyte segmentation from histological images of arthritic cartilage using a deep learning approach. The resulting method detected chondrocytes and delineated them with high accuracy.
The method will continue to be improved through expansion to detect more complex cellular features representative of OA such as cell cloning. CLINICAL RELEVANCE The imaging tool developed in this work can be integrated into an automated cartilage health scoring system and helps provide a robust, objective and reliable assessment of OA severity in cartilage.
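The F1 and IoU figures quoted above follow the standard definitions; a minimal sketch (function names are illustrative):

```python
def detection_f1(tp, fp, fn):
    # F1 from detection counts: tp = matched cells, fp = spurious
    # detections, fn = missed cells.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mask_iou(pred, target):
    # Pixelwise intersection-over-union between two flat binary masks.
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union
```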
Affiliation(s)
- Linjun Yang
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, IA, USA
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
- Mitchell C. Coleman
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, IA, USA
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
- Madeline R. Hines
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, IA, USA
- Department of Radiation Oncology, University of Iowa, Iowa City, IA, USA
- Paige N. Kluz
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, IA, USA
- Marc J. Brouillette
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, IA, USA
- Jessica E. Goetz
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, IA, USA
- Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
18
Mohseni Salehi SS, Erdogmus D, Gholipour A. Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging. IEEE Trans Med Imaging 2017; 36:2319-2330. [PMID: 28678704 PMCID: PMC5715475 DOI: 10.1109/tmi.2017.2721362]
Abstract
Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. 
Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.
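The auto-context iteration described above can be sketched as repeatedly feeding the previous posterior map back as an extra input alongside the image; a toy illustration (the `model` below is a stand-in for the paper's CNN, and all names are illustrative):

```python
def auto_context_segment(image, model, n_iterations=3):
    # Auto-context loop: the posterior probability map from the previous
    # pass is fed back, together with the raw image, as context for the
    # next pass, letting the model refine its own predictions.
    posterior = [0.5] * len(image)  # start from an uninformative prior
    for _ in range(n_iterations):
        posterior = model(image, posterior)
    return posterior
```

With a well-trained model, each pass sharpens the posterior toward the brain/non-brain decision, which is the behavior the toy test below mimics.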