1
Wolf D, Payer T, Lisson CS, Lisson CG, Beer M, Götz M, Ropinski T. Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance. Comput Biol Med 2024; 183:109242. [PMID: 39388839 DOI: 10.1016/j.compbiomed.2024.109242]
Abstract
BACKGROUND Self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis. Current findings indicate a strong potential for contrastive pre-training on medical images. However, further research is necessary to incorporate the particular characteristics of these images. METHOD We hypothesize that the similarity of medical images hinders the success of contrastive learning in the medical imaging domain. To this end, we investigate different strategies based on deep embedding, information theory, and hashing in order to identify and reduce redundancy in medical pre-training datasets. The effect of these different reduction strategies on contrastive learning is evaluated on two pre-training datasets and several downstream classification tasks. RESULTS In all of our experiments, dataset reduction leads to a considerable performance gain in downstream tasks, e.g., an AUC score improvement from 0.78 to 0.83 for the COVID CT Classification Grand Challenge, 0.97 to 0.98 for the OrganSMNIST Classification Challenge and 0.73 to 0.83 for a brain hemorrhage classification task. Furthermore, pre-training is up to nine times faster due to the dataset reduction. CONCLUSIONS In conclusion, the proposed approach highlights the importance of dataset quality and provides a transferable approach to improve contrastive pre-training for classification downstream tasks on medical images.
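For illustration, here is a minimal sketch of the hashing-based redundancy-reduction idea this abstract describes, applied to a stack of CT slices. The average-hash choice, hash size, and Hamming threshold are illustrative assumptions, not the published settings; the paper also explores deep-embedding and information-theoretic criteria.

```python
import numpy as np

def average_hash(slice_2d: np.ndarray, size: int = 8) -> np.ndarray:
    """Block-average a slice down to size x size, threshold at the mean."""
    h, w = slice_2d.shape
    small = slice_2d[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def reduce_redundancy(slices: list, max_hamming: int = 4) -> list:
    """Keep a slice only if its hash differs enough from all kept hashes."""
    kept, hashes = [], []
    for i, s in enumerate(slices):
        h = average_hash(s)
        if all(np.count_nonzero(h != k) > max_hamming for k in hashes):
            kept.append(i)
            hashes.append(h)
    return kept

ct_volume = [np.random.rand(512, 512) for _ in range(50)]  # stand-in for real CT slices
print(f"kept {len(reduce_redundancy(ct_volume))} of {len(ct_volume)} slices")
```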
Affiliation(s)
- Daniel Wolf
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany; Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany.
- Tristan Payer
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany
- Catharina Silvia Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Christoph Gerhard Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Meinrad Beer
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Michael Götz
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Timo Ropinski
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany
2
Tan SP, Li J, Zhang XJ, Yan XY, Zhang T, Wu XL, Liu ZQ, Li LL, Feng J, Han HB, Tang GY, Han JZ, Deng YF. [A design of interactive review for computer aided diagnosis of pulmonary nodules based on active learning]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2024; 41:503-510. [PMID: 38932536 PMCID: PMC11208657 DOI: 10.7507/1001-5515.202310044]
Abstract
Automatic detection of pulmonary nodules in computed tomography (CT) images can significantly improve the diagnosis and treatment of lung cancer. However, there is a lack of effective interactive tools that record radiologists' annotations in real time and feed them back to the algorithm for iterative optimization. This paper designed and developed an online interactive review system supporting the assisted diagnosis of lung nodules in CT images. Lung nodules were detected by a preset model and presented to radiologists, who marked or corrected the detected nodules using their professional knowledge; the AI model was then iteratively optimized with an active learning strategy according to these annotations, continuously improving its accuracy. Subsets 5-9 of the Lung Nodule Analysis 2016 (LUNA16) dataset were used for the iteration experiments. The precision, F1-score and MIoU indexes improved steadily as the number of iterations increased, with precision rising from 0.2139 to 0.5656. The results show that the system not only uses a deep segmentation model to assist radiologists, but also exploits radiologists' feedback to the maximum extent, iteratively improving the accuracy of the model and better assisting radiologists.
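The review loop described here can be sketched as follows. The model interface (`predict`, `fit`) and the batch schedule are placeholders assumed for illustration, not the authors' implementation.

```python
# Sketch of the interactive active-learning loop: the preset detector
# proposes nodule masks, a radiologist corrects them, and the corrected
# labels are fed back for another training round.
def active_learning_loop(model, unlabeled_scans, radiologist_review, n_rounds=5):
    labeled = []
    for round_idx in range(n_rounds):
        batch = unlabeled_scans[round_idx::n_rounds]        # simple batch schedule
        predictions = [model.predict(scan) for scan in batch]
        # The radiologist accepts, edits, or redraws each proposed mask.
        corrected = [radiologist_review(scan, pred)
                     for scan, pred in zip(batch, predictions)]
        labeled.extend(zip(batch, corrected))
        model.fit(labeled)                                  # iterative re-training
        print(f"round {round_idx + 1}: {len(labeled)} reviewed scans")
    return model
```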
Affiliation(s)
- 谭双平 (Tan SP), 李俊 (Li J), 张晓娟 (Zhang XJ), 严馨月 (Yan XY), 张彤 (Zhang T), 吴下里 (Wu XL), 刘自强 (Liu ZQ), 李莉莉 (Li LL), 冯娟 (Feng J), 韩海斌 (Han HB), 唐国英 (Tang GY), 韩俊洲 (Han JZ), 邓友锋 (Deng YF)
- All authors: Wuhan Puai Hospital (Wuhan Fourth Hospital), Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430033, P. R. China
3
Ikuta M, Zhang J. A Deep Convolutional Gated Recurrent Unit for CT Image Reconstruction. IEEE Trans Neural Netw Learn Syst 2023; 34:10612-10625. [PMID: 35522637 DOI: 10.1109/tnnls.2022.3169569]
Abstract
Computed tomography (CT) is one of the most important medical imaging technologies in use today. Most commercial CT products use a technique known as filtered backprojection (FBP), which is fast and can produce decent image quality when the X-ray dose is high. However, FBP is inadequate for low-dose CT imaging because the reconstruction problem becomes more stochastic. A more effective technique, proposed recently and implemented in a limited number of commercial CT products, is iterative reconstruction (IR). IR is based on a Bayesian formulation of the CT image reconstruction problem, with an explicit model of the CT scanning process, including its stochastic nature, and a prior model that incorporates knowledge about what a good CT image should look like. However, constructing such prior knowledge is more complicated than it seems. In this article, we propose a novel neural network for CT image reconstruction. The network is based on the IR formulation and built from a recurrent neural network (RNN). Specifically, we transform the gated recurrent unit (GRU) into a neural network performing CT image reconstruction, which we call "GRU reconstruction." This network conducts concurrent dual-domain learning: many deep learning (DL)-based methods in medical imaging learn from a single domain, but dual-domain learning performs better because it learns from both the sinogram and the image domain. In addition, we propose backpropagation through stage (BPTS) as a new RNN backpropagation algorithm; it is similar to backpropagation through time (BPTT) but tailored for iterative optimization. Results from extensive experiments indicate that our proposed method outperforms conventional model-based methods, single-domain DL methods, and state-of-the-art DL techniques in terms of root mean squared error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and visual appearance.
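The recurrence at the core of the "GRU reconstruction" idea is the textbook GRU cell, sketched below in NumPy; the unrolled steps play the role of IR iterations. The toy dimensionality and random weights are assumptions for illustration, and the paper's dual-domain layers and BPTS training are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate state
    return (1 - z) * h_prev + z * h_tilde          # new hidden state

d = 4  # toy state dimensionality
rng = np.random.default_rng(0)
W = [rng.standard_normal((d, d)) * 0.1 for _ in range(6)]
h = np.zeros(d)
for t in range(10):  # unrolled iterations stand in for IR iterations
    h = gru_step(rng.standard_normal(d), h, *W)
print(h)
```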
4
Wolf D, Payer T, Lisson CS, Lisson CG, Beer M, Götz M, Ropinski T. Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging. Sci Rep 2023; 13:20260. [PMID: 37985685 PMCID: PMC10662445 DOI: 10.1038/s41598-023-46433-0]
Abstract
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis. Training such deep learning models requires large and accurate datasets, with annotations for all training samples. However, in the medical imaging domain, annotated datasets for specific tasks are often small due to the high complexity of annotations, limited access, or the rarity of diseases. To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning. After pre-training, small annotated datasets are sufficient to fine-tune the models for a specific task. The most popular self-supervised pre-training approaches in medical imaging are based on contrastive learning. However, recent studies in natural image processing indicate a strong potential for masked autoencoder approaches. Our work compares state-of-the-art contrastive learning methods with the recently introduced masked autoencoder approach "SparK" for convolutional neural networks (CNNs) on medical images. Therefore, we pre-train on a large unannotated CT image dataset and fine-tune on several CT classification tasks. Due to the challenge of obtaining sufficient annotated training data in medical imaging, it is of particular interest to evaluate how the self-supervised pre-training methods perform when fine-tuning on small datasets. By experimenting with gradually reducing the training dataset size for fine-tuning, we find that the reduction has different effects depending on the type of pre-training chosen. The SparK pre-training method is more robust to the training dataset size than the contrastive methods. Based on our results, we propose the SparK pre-training for medical imaging tasks with only small annotated datasets.
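A minimal sketch of masked-image pre-training in the spirit of SparK follows: mask random patches, reconstruct, and score the loss only on masked regions. The toy encoder/decoder and masking parameters are assumptions for illustration; the actual SparK method uses sparse convolutions and a hierarchical decoder.

```python
import torch
import torch.nn as nn

def random_patch_mask(n, h, w, patch=16, keep=0.25):
    gh, gw = h // patch, w // patch
    keep_mask = (torch.rand(n, 1, gh, gw) < keep).float()
    return keep_mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)

encoder_decoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(encoder_decoder.parameters(), lr=1e-3)

images = torch.rand(8, 1, 64, 64)            # stand-in for CT slices
mask = random_patch_mask(8, 64, 64)          # 1 = visible, 0 = masked
recon = encoder_decoder(images * mask)       # reconstruct from visible patches
loss = (((recon - images) ** 2) * (1 - mask)).mean()  # loss on masked pixels only
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```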
Affiliation(s)
- Daniel Wolf
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany.
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany.
- Tristan Payer
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
- Catharina Silvia Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Christoph Gerhard Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Meinrad Beer
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Michael Götz
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Timo Ropinski
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
5
Abdulkadir Y, Luximon D, Morris E, Chow P, Kishan AU, Mikaeilian A, Lamb JM. Human factors in the clinical implementation of deep learning-based automated contouring of pelvic organs at risk for MRI-guided radiotherapy. Med Phys 2023; 50:5969-5977. [PMID: 37646527 DOI: 10.1002/mp.16676]
Abstract
PURPOSE Deep neural nets have revolutionized the science of auto-segmentation and hold great promise for treatment planning automation. However, little data exists regarding clinical implementation and human factors. We evaluated the performance and clinical implementation of a novel deep learning-based auto-contouring workflow for 0.35T magnetic resonance imaging (MRI)-guided pelvic radiotherapy, focusing on automation bias and objective measures of workflow savings. METHODS An auto-contouring model was developed using a U-Net-derived architecture for the femoral heads, bladder, and rectum in 0.35T MR images. Training data were taken from 75 patients treated with MRI-guided radiotherapy at our institution. The model was tested against 20 retrospective cases outside the training set and subsequently implemented clinically. Usability was evaluated on the first 30 clinical cases by computing the Dice coefficient (DSC), Hausdorff distance (HD), and the fraction of slices that were used unmodified by planners. Final contours were retrospectively reviewed by an experienced planner, and the clinical significance of deviations was graded as negligible, low, moderate, or high probability of leading to actionable dosimetric variations. To assess whether the use of auto-contouring led to final contours more or less in agreement with an objective standard, 10 pre-implementation and 10 post-implementation blinded cases were re-contoured from scratch by three expert planners to obtain expert consensus (EC) contours. EC contours were compared to clinically used (CU) contours using DSC. Student's t-test and Levene's statistic were used to test the statistical significance of differences in mean and standard deviation, respectively. Finally, the dosimetric significance of the contour differences was assessed by comparing the difference in bladder and rectum maximum point doses between EC and CU before and after the introduction of automation. RESULTS Median (interquartile range) DSC for the retrospective test data were 0.92(0.02), 0.92(0.06), 0.93(0.06), and 0.87(0.04) for the post-processed contours of the right and left femoral heads, bladder, and rectum, respectively. Post-implementation median DSC were 1.0(0.0), 1.0(0.0), 0.98(0.04), and 0.98(0.06), respectively. For each organ, 96.2%, 95.4%, 59.5%, and 68.21% of slices were used unmodified by the planner. DSC between EC and pre-implementation CU contours were 0.91(0.05*), 0.91*(0.05*), 0.95(0.04), and 0.88(0.04) for the right and left femoral heads, bladder, and rectum, respectively. The corresponding DSC for post-implementation CU contours were 0.93(0.02*), 0.93*(0.01*), 0.96(0.01), and 0.85(0.02) (asterisks indicate statistically significant differences). In a retrospective review of contours used for planning, a total of four deviating slices in two patients were graded as of low potential clinical significance; no deviations were graded moderate or high. Mean differences between EC and CU rectum max-doses were 0.1 ± 2.6 Gy and -0.9 ± 2.5 Gy for pre- and post-implementation, respectively. Mean differences between EC and CU bladder/bladder wall max-doses were -0.9 ± 4.1 Gy and 0.0 ± 0.6 Gy for pre- and post-implementation, respectively. These differences were not statistically significant according to Student's t-test. CONCLUSION We have presented an analysis of the clinical implementation of a novel auto-contouring workflow. Substantial workflow savings were obtained. The introduction of auto-contouring into the clinical workflow changed the contouring behavior of planners; automation bias was observed, but it had little deleterious effect on treatment planning.
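The statistical comparison described above (per-case DSC between EC and CU contours, pre- versus post-implementation, with Student's t-test for means and Levene's test for variances) can be sketched as follows. The DSC arrays below are placeholders, not study data.

```python
import numpy as np
from scipy import stats

pre_dsc = np.array([0.91, 0.90, 0.92, 0.89, 0.93])   # EC vs CU, pre-implementation
post_dsc = np.array([0.93, 0.94, 0.92, 0.93, 0.94])  # EC vs CU, post-implementation

t_stat, p_mean = stats.ttest_ind(pre_dsc, post_dsc)  # difference in means
w_stat, p_var = stats.levene(pre_dsc, post_dsc)      # difference in spread
print(f"means p={p_mean:.3f}, variances p={p_var:.3f}")
```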
Affiliation(s)
- Yasin Abdulkadir, Dishane Luximon, Eric Morris, Phillip Chow, Amar U Kishan, Argin Mikaeilian, James M Lamb
- All authors: Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
6
Bhandary S, Kuhn D, Babaiee Z, Fechter T, Benndorf M, Zamboglou C, Grosu AL, Grosu R. Investigation and benchmarking of U-Nets on prostate segmentation tasks. Comput Med Imaging Graph 2023; 107:102241. [PMID: 37201475 DOI: 10.1016/j.compmedimag.2023.102241]
Abstract
In healthcare, a growing number of physicians and support staff are striving to facilitate personalized radiotherapy regimens for patients with prostate cancer, because individual patient biology is unique and a one-size-fits-all approach is inefficient. A crucial step for customizing radiotherapy planning and gaining fundamental information about the disease is the identification and delineation of targeted structures. However, accurate biomedical image segmentation is time-consuming, requires considerable experience, and is prone to observer variability. In the past decade, the use of deep learning models has increased significantly in the field of medical image segmentation. At present, a vast number of anatomical structures can be demarcated at a clinician's level with deep learning models. Such models not only reduce workload but can also offer unbiased characterization of the disease. The main architectures used in segmentation are the U-Net and its variants, which exhibit outstanding performance. However, reproducing results or directly comparing methods is often limited by closed data sources and the large heterogeneity among medical images. With this in mind, our intention is to provide a reliable source for assessing deep learning models. As an example, we chose the challenging task of delineating the prostate gland in multi-modal images. First, this paper provides a comprehensive review of current state-of-the-art convolutional neural networks for 3D prostate segmentation. Second, utilizing public and in-house CT and MR datasets of varying properties, we created a framework for an objective comparison of automatic prostate segmentation algorithms. The framework was used for rigorous evaluations of the models, highlighting their strengths and weaknesses.
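A skeleton of such an objective comparison framework is sketched below: every model is trained and scored on every dataset with identical protocol and metric code, so results are directly comparable. All names and the model interface are placeholders, not the authors' framework.

```python
from typing import Callable, Dict

def benchmark(models: Dict[str, Callable], datasets: Dict[str, tuple],
              metric: Callable) -> Dict[str, Dict[str, float]]:
    """Train every model on every dataset and return mean test scores."""
    results = {}
    for m_name, build in models.items():
        results[m_name] = {}
        for d_name, (train, test) in datasets.items():
            model = build()                  # fresh model per dataset
            model.fit(*train)                # identical protocol per model
            scores = [metric(y, model.predict(x)) for x, y in zip(*test)]
            results[m_name][d_name] = sum(scores) / len(scores)
    return results
```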
Affiliation(s)
- Shrajan Bhandary
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria.
- Dejan Kuhn
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Zahra Babaiee
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany
- Matthias Benndorf
- Department of Diagnostic and Interventional Radiology, Medical Center University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany
- Constantinos Zamboglou
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany; German Oncology Center, European University, Limassol, 4108, Cyprus
- Anca-Ligia Grosu
- Faculty of Medicine, University of Freiburg, Freiburg, 79106, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, 79106, Germany; Department of Radiation Oncology, Medical Center University of Freiburg, Freiburg, 79106, Germany
- Radu Grosu
- Cyber-Physical Systems Division, Institute of Computer Engineering, Faculty of Informatics, Technische Universität Wien, Vienna, 1040, Austria; Department of Computer Science, State University of New York at Stony Brook, NY, 11794, USA
7
Zhao T, Sun Z, Guo Y, Sun Y, Zhang Y, Wang X. Automatic renal mass segmentation and classification on CT images based on 3D U-Net and ResNet algorithms. Front Oncol 2023; 13:1169922. [PMID: 37274226 PMCID: PMC10233136 DOI: 10.3389/fonc.2023.1169922]
Abstract
Purpose To automatically evaluate renal masses in CT images by using a cascade 3D U-Net- and ResNet-based method to accurately segment and classify focal renal lesions. Material and Methods We used an institutional dataset comprising 610 CT image series from 490 patients, collected from August 2009 to August 2021, to train and evaluate the proposed method. We first determined the boundaries of the kidneys on the CT images using a 3D U-Net-based method, and used them as a region of interest in which to search for renal masses. An ensemble learning model based on 3D U-Net was then used to detect and segment the masses, followed by a ResNet algorithm for classification. Our algorithm was evaluated with an external validation dataset and the kidney tumor segmentation (KiTS21) challenge dataset. Results The algorithm achieved a Dice similarity coefficient (DSC) of 0.99 for bilateral kidney boundary segmentation in the test set. The average DSC for renal mass delineation using the 3D U-Net was 0.75 and 0.83. Our method detected renal masses with recalls of 84.54% and 75.90%. The classification accuracy in the test set was 86.05% for masses (<5 mm) and 91.97% for masses (≥5 mm). Conclusion We developed a deep learning-based method for fully automated segmentation and classification of renal masses in CT images. Testing of this algorithm showed that it can accurately localize and classify renal masses.
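A high-level sketch of the cascade described here follows: kidney segmentation narrows the search region, an ensemble segments candidate masses, and a classifier labels each mass. All three model objects are assumed to be trained elsewhere; this only wires the stages together.

```python
import numpy as np

def evaluate_renal_masses(ct_volume, kidney_unet, mass_ensemble, resnet_classifier):
    kidney_mask = kidney_unet.predict(ct_volume)            # stage 1: kidney ROI
    roi = ct_volume * kidney_mask                           # restrict the search
    mass_probs = np.mean([m.predict(roi) for m in mass_ensemble], axis=0)
    mass_mask = mass_probs > 0.5                            # stage 2: detection
    label = resnet_classifier.predict(roi * mass_mask)      # stage 3: classification
    return mass_mask, label
```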
Affiliation(s)
- Tongtong Zhao
- Department of Radiology, Peking University First Hospital, Beijing, China
- Zhaonan Sun
- Department of Radiology, Peking University First Hospital, Beijing, China
- Ying Guo
- Department of Radiology, Peking University First Hospital, Beijing, China
- Yumeng Sun
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Yaofeng Zhang
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
8
Systematic Review of Tumor Segmentation Strategies for Bone Metastases. Cancers (Basel) 2023; 15:1750. [PMID: 36980636 PMCID: PMC10046265 DOI: 10.3390/cancers15061750]
Abstract
Purpose: To investigate segmentation approaches for bone metastases, both for differentiating benign from malignant bone lesions and for characterizing malignant bone lesions. Method: The literature search was conducted in the Scopus, PubMed, IEEE, MEDLINE, and Web of Science electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 77 original articles, 24 review articles, and 1 comparison paper published between January 2010 and March 2022 were included in the review. Results: Most of the 77 original studies used neural network-based approaches (58.44%) and CT-based imaging (50.65%). However, the review highlights the lack of a gold standard for tumor boundaries and the need for manual correction of the segmentation output, which largely explains the absence of clinical translation studies. Moreover, only 19 studies (24.67%) specifically mentioned the feasibility of their proposed methods for use in clinical practice. Conclusion: The development of tumor segmentation techniques that combine anatomical information and metabolic activity is encouraging, even though no segmentation method yet is optimal for all applications or can compensate for all the difficulties inherent in data limitations.
9
Khalal DM, Azizi H, Maalej N. Automatic segmentation of kidneys in computed tomography images using U-Net. Cancer Radiother 2023; 27:109-114. [PMID: 36739197 DOI: 10.1016/j.canrad.2022.08.004]
Abstract
PURPOSE Accurate segmentation of target volumes and organs at risk from computed tomography (CT) images is essential for treatment planning in radiation therapy. The segmentation task is often done manually, making it time-consuming; moreover, it is biased by clinician experience and subject to inter-observer variability. Therefore, and owing to the development of artificial intelligence tools, particularly deep learning (DL) algorithms, automatic segmentation has been proposed as an alternative. The purpose of this work is to use a DL-based method to segment the kidneys on CT images for radiotherapy treatment planning. MATERIALS AND METHODS In this contribution, we used the CT scans of 20 patients. Segmentation of the kidneys was performed using the U-Net model. The Dice similarity coefficient (DSC), the Matthews correlation coefficient (MCC), the Hausdorff distance (HD), the sensitivity, and the specificity were used to quantitatively evaluate the delineation. RESULTS The model was able to segment the kidneys with good accuracy, and the values of the metrics are presented. Our results are also compared with those obtained recently by other authors. CONCLUSION Fully automated DL-based segmentation of CT images has the potential to improve both the speed and the accuracy of radiotherapy organ contouring.
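For reference, a sketch of three of the evaluation metrics named here, computed on toy binary kidney masks; in practice `pred` and `gt` would come from the U-Net and the clinician, respectively.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.metrics import matthews_corrcoef

gt = np.zeros((64, 64), dtype=bool); gt[20:40, 20:40] = True      # toy ground truth
pred = np.zeros((64, 64), dtype=bool); pred[22:42, 21:41] = True  # toy prediction

dsc = 2 * (gt & pred).sum() / (gt.sum() + pred.sum())             # Dice
mcc = matthews_corrcoef(gt.ravel(), pred.ravel())                 # Matthews corr.
hd = max(directed_hausdorff(np.argwhere(gt), np.argwhere(pred))[0],
         directed_hausdorff(np.argwhere(pred), np.argwhere(gt))[0])  # symmetric HD
print(f"DSC={dsc:.3f} MCC={mcc:.3f} HD={hd:.2f} px")
```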
Affiliation(s)
- D M Khalal
- Laboratory of dosing, analysis and characterization in high resolution, Department of Physics, Faculty of Sciences, Ferhat Abbas Sétif 1 University, El Baz campus 19137, Sétif, Algeria.
- H Azizi
- Laboratory of dosing, analysis and characterization in high resolution, Department of Physics, Faculty of Sciences, Ferhat Abbas Sétif 1 University, El Baz campus 19137, Sétif, Algeria
- N Maalej
- Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
10
Avesta A, Hossain S, Lin M, Aboian M, Krumholz HM, Aneja S. Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation. Bioengineering (Basel) 2023; 10:181. [PMID: 36829675 PMCID: PMC9952534 DOI: 10.3390/bioengineering10020181]
Abstract
Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches respectively gave the highest to lowest Dice scores across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory compared to 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, 3D models require more computational memory compared to 2.5D or 2D models.
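The difference between the three input styles compared here can be made concrete in a few lines: a 2.5D input simply puts the five consecutive slices around the target slice into the channel dimension. The shapes are illustrative; network definitions are omitted.

```python
import numpy as np

volume = np.random.rand(160, 256, 256)   # (slices, H, W) stand-in for a brain MRI

def make_2d(i):  return volume[i][None]      # (1, H, W): one slice
def make_25d(i): return volume[i - 2:i + 3]  # (5, H, W): 5 slices as channels
def make_3d():   return volume[None]         # (1, slices, H, W): whole volume

print(make_2d(80).shape, make_25d(80).shape, make_3d().shape)
```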
Affiliation(s)
- Arman Avesta
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Sajid Hossain
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Visage Imaging, Inc., San Diego, CA 92130, USA
- Mariam Aboian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Harlan M. Krumholz
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Division of Cardiovascular Medicine, Yale School of Medicine, New Haven, CT 06510, USA
- Sanjay Aneja
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT 06510, USA
11
Kikuchi T, Hanaoka S, Nakao T, Nomura Y, Yoshikawa T, Alam A, Mori H, Hayashi N. Significance of FDG-PET standardized uptake values in predicting thyroid disease. Eur Thyroid J 2023; 12:ETJ-22-0165. [PMID: 36562641 PMCID: PMC9986380 DOI: 10.1530/etj-22-0165]
Abstract
OBJECTIVE This study aimed to determine a standardized cut-off value for abnormal 18F-fluorodeoxyglucose (FDG) accumulation in the thyroid gland. METHODS Herein, 7013 FDG-PET/CT scans were included. An automatic thyroid segmentation method using two U-Nets (a 2D and a 3D U-Net) was constructed, and the mean FDG standardized uptake value (SUV), CT value, and volume of the thyroid gland were obtained for each participant. The values were categorized by thyroid function into three groups based on serum thyroid-stimulating hormone levels. The relationship between thyroid function and mean SUV was analyzed in increments of 1, and the risk of thyroid dysfunction was calculated. Thyroid dysfunction detection was examined using a machine learning method (LightGBM, Microsoft) with age, sex, height, weight, CT value, volume, and mean SUV as explanatory variables. RESULTS Mean SUV was significantly higher in females with hypothyroidism. 98.9% of participants in the normal group had a mean SUV < 2, and 93.8% of participants with a mean SUV < 2 had normal thyroid function. The hypothyroidism group had more cases with mean SUV ≥ 2. The relative risk of abnormal thyroid function was 4.6 with mean SUV ≥ 2. The sensitivity and specificity for detecting thyroid dysfunction using LightGBM were 14.5% and 99%, respectively. CONCLUSIONS Mean SUV ≥ 2 was strongly associated with abnormal thyroid function in this large cohort, indicating that mean SUV from FDG-PET/CT can be used as a criterion for thyroid evaluation. Preliminarily, this study shows the potential utility of detecting thyroid dysfunction based on imaging findings.
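A sketch of the LightGBM screening model described here, using the same seven explanatory variables; the synthetic data and toy label rule are assumptions for illustration, whereas the real study used 7013 scans with clinically confirmed thyroid status.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(20, 90, n),   # age
    rng.integers(0, 2, n),     # sex
    rng.normal(165, 10, n),    # height
    rng.normal(60, 12, n),     # weight
    rng.normal(80, 15, n),     # mean CT value of the thyroid
    rng.normal(15, 5, n),      # thyroid volume
    rng.gamma(2.0, 0.7, n),    # mean SUV
])
# toy dysfunction label loosely tied to the SUV >= 2 association
y = (X[:, 6] >= 2).astype(int) & rng.integers(0, 2, n)

clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X[:800], y[:800])
print("test accuracy:", clf.score(X[800:], y[800:]))
```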
Affiliation(s)
- Tomohiro Kikuchi
- Department of Computational Diagnostic Radiology and Preventive Medicine, the University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Department of Radiology, Jichi Medical University, School of Medicine, Shimotsuke, Tochigi, Japan
- Correspondence should be addressed to Tomohiro Kikuchi:
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, the University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, the University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Center for Frontier Medical Engineering, Chiba University, Yayoicho, Inage-ku, Chiba, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, the University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Ashraful Alam
- Department of Computational Diagnostic Radiology and Preventive Medicine, the University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
- Harushi Mori
- Department of Radiology, Jichi Medical University, School of Medicine, Shimotsuke, Tochigi, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, the University of Tokyo Hospital, Hongo, Bunkyo-ku, Tokyo, Japan
12
Li X, Bagher-Ebadian H, Gardner S, Kim J, Elshaikh M, Movsas B, Zhu D, Chetty IJ. An uncertainty-aware deep learning architecture with outlier mitigation for prostate gland segmentation in radiotherapy treatment planning. Med Phys 2023; 50:311-322. [PMID: 36112996 DOI: 10.1002/mp.15982]
Abstract
PURPOSE Task automation is essential for efficient and consistent image segmentation in radiation oncology. We report on a deep learning architecture, comprising a U-Net and a variational autoencoder (VAE), for automatic contouring of the prostate gland that incorporates interobserver variation for radiotherapy treatment planning. The U-Net/VAE generates an ensemble set of segmentations for each CT image slice. A novel outlier mitigation (OM) technique was implemented to enhance the model segmentation accuracy. METHODS The primary source dataset (source_prim) consisted of 19,200 CT slices (from 300 patient planning CT image datasets) with manually contoured prostate glands. A smaller secondary source dataset (source_sec) comprised 640 CT slices (from 10 patient CT datasets), where prostate glands were segmented by 5 independent physicians on each dataset to account for interobserver variability. Data augmentation via random rotation (<5 degrees), cropping, and horizontal flipping was applied to each dataset to increase the sample size by a factor of 100. A probabilistic hierarchical U-Net with VAE was implemented and pretrained using the augmented source_prim dataset for 30 epochs. Model parameters of the U-Net/VAE were fine-tuned using the augmented source_sec dataset for 100 epochs. After the first round of training, outlier contours in the training dataset were automatically detected and replaced by the most accurate contours (based on the Dice similarity coefficient, DSC) generated by the model, and the U-Net/OM-VAE was retrained using the revised training dataset. Metrics for comparison included DSC, Hausdorff distance (HD, mm), normalized cross-correlation (NCC) coefficient, and center-of-mass (COM) distance (mm). RESULTS Results for U-Net/OM-VAE with outliers replaced in the training dataset versus U-Net/VAE without OM were as follows: DSC = 0.82 ± 0.01 versus 0.80 ± 0.02 (p = 0.019), HD = 9.18 ± 1.22 versus 10.18 ± 1.35 mm (p = 0.043), NCC = 0.59 ± 0.07 versus 0.62 ± 0.06, and COM = 3.36 ± 0.81 versus 4.77 ± 0.96 mm over the average of 15 contours. For the average of the 15 highest-accuracy contours, values were as follows: DSC = 0.90 ± 0.02 versus 0.85 ± 0.02, HD = 5.47 ± 0.02 versus 7.54 ± 1.36 mm, and COM = 1.03 ± 0.58 versus 1.46 ± 0.68 mm (p < 0.03 for all metrics). Results for the U-Net/OM-VAE with outliers removed were as follows: DSC = 0.78 ± 0.01, HD = 10.65 ± 1.95 mm, NCC = 0.46 ± 0.10, and COM = 4.17 ± 0.79 mm for the average of 15 contours, and DSC = 0.88 ± 0.02, HD = 7.00 ± 1.17 mm, and COM = 1.58 ± 0.63 mm for the average of the 15 highest-accuracy contours. All metrics for the U-Net/VAE trained on the source_prim and source_sec datasets via pretraining followed by fine-tuning showed statistically significant improvement over training on the source_sec dataset only. Finally, all metrics for the U-Net/VAE with or without OM showed statistically significant improvement over those for the standard U-Net. CONCLUSIONS A VAE combined with a hierarchical U-Net and an OM strategy (U-Net/OM-VAE) demonstrates promise toward capturing interobserver variability and produces accurate prostate auto-contours for radiotherapy planning. The availability of multiple contours for each CT slice enables clinicians to determine trade-offs in selecting the "best fitting" contour on each CT slice. Mitigation of outlier contours in the training dataset improves prediction accuracy, but one must be wary of reduced variability in the training dataset.
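The outlier-mitigation step can be sketched as follows: score each ensemble contour against the reference, flag low-Dice outliers, and substitute the best ensemble member before retraining. The z-score threshold is an illustrative assumption; the paper's exact outlier rule may differ.

```python
import numpy as np

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mitigate_outliers(ensemble_masks, reference_mask, z_thresh=2.0):
    scores = np.array([dice(m, reference_mask) for m in ensemble_masks])
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    best = ensemble_masks[int(np.argmax(scores))]
    # replace any contour whose Dice is a low outlier with the best contour
    cleaned = [best if z_i < -z_thresh else m
               for m, z_i in zip(ensemble_masks, z)]
    return cleaned, scores
```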
Affiliation(s)
- Xin Li
- Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Hassan Bagher-Ebadian
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Stephen Gardner
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Joshua Kim
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Mohamed Elshaikh
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Benjamin Movsas
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Dongxiao Zhu
- Department of Computer Science, Wayne State University, Detroit, Michigan, USA
- Indrin J Chetty
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
13
Kanno J, Shoji T, Ishii H, Ibuki H, Yoshikawa Y, Sasaki T, Shinoda K. Deep Learning with a Dataset Created Using Kanno Saitama Macro, a Self-Made Automatic Foveal Avascular Zone Extraction Program. J Clin Med 2022; 12:183. [PMID: 36614984 PMCID: PMC9821090 DOI: 10.3390/jcm12010183]
Abstract
The extraction of the foveal avascular zone (FAZ) from optical coherence tomography angiography (OCTA) images has been used in many studies in recent years due to its association with various ophthalmic diseases. In this study, we investigated the utility of a deep learning dataset created using Kanno Saitama Macro (KSM), a program that automatically extracts the FAZ from swept-source OCTA images. The test data included 40 eyes of 20 healthy volunteers; for training and validation, we used 257 eyes from 257 patients. The FAZ of the retinal surface image was extracted using KSM, and a dataset for FAZ extraction was created. Based on that dataset, we trained and tested a typical U-Net. Two examiners manually extracted the FAZ of the test data, and the results were used as gold standards to compare the Jaccard coefficients between examiners, and between each examiner and the U-Net. The Jaccard coefficient was 0.931 between examiner 1 and examiner 2, 0.951 between examiner 1 and the U-Net, and 0.933 between examiner 2 and the U-Net. The Jaccard coefficients were significantly better between examiner 1 and the U-Net than between examiner 1 and examiner 2 (p < 0.001). These data indicate that the dataset generated by KSM was as good as, if not better than, the agreement achieved between examiners using the manual method. KSM may contribute to reducing the annotation burden in deep learning.
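The agreement metric used here, the Jaccard coefficient (intersection over union) between two binary FAZ masks, is sketched below on toy arrays; in practice the masks would come from an examiner and the U-Net.

```python
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

examiner = np.zeros((128, 128), bool); examiner[40:80, 40:80] = True  # toy mask
unet = np.zeros((128, 128), bool); unet[42:82, 41:81] = True          # toy mask
print(f"Jaccard = {jaccard(examiner, unet):.3f}")
```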
Affiliation(s)
- Junji Kanno
- Department of Ophthalmology, Saitama Medical University School of Medicine, Iruma 350-0495, Japan
- Takuhei Shoji
- Department of Ophthalmology, Saitama Medical University School of Medicine, Iruma 350-0495, Japan
- Koedo Eye Institute, Kawagoe 350-1123, Japan
- Correspondence: ; Tel.: +81-49-276-1250
- Hirokazu Ishii
- Department of Ophthalmology, Saitama Medical University School of Medicine, Iruma 350-0495, Japan
- Hisashi Ibuki
- Department of Ophthalmology, Saitama Medical University School of Medicine, Iruma 350-0495, Japan
- Yuji Yoshikawa
- Department of Ophthalmology, Saitama Medical University School of Medicine, Iruma 350-0495, Japan
- Takanori Sasaki
- Department of Ophthalmology, Saitama Medical University School of Medicine, Iruma 350-0495, Japan
- Kei Shinoda
- Department of Ophthalmology, Saitama Medical University School of Medicine, Iruma 350-0495, Japan
14
Costea M, Zlate A, Durand M, Baudier T, Grégoire V, Sarrut D, Biston MC. Comparison of atlas-based and deep learning methods for organs at risk delineation on head-and-neck CT images using an automated treatment planning system. Radiother Oncol 2022; 177:61-70. [PMID: 36328093 DOI: 10.1016/j.radonc.2022.10.029]
Abstract
BACKGROUND AND PURPOSE To investigate the performance of head-and-neck (HN) organ-at-risk (OAR) automatic segmentation (AS) using four atlas-based (ABAS) and two deep learning (DL) solutions. MATERIAL AND METHODS All patients underwent iodine contrast-enhanced planning CT, and fourteen OARs were manually delineated. The DL.1 and DL.2 solutions were trained with 63 mono-centric patients and >1000 multi-centric patients, respectively. Ten patients with varied anatomies were selected for the atlas library and 15 for testing. The evaluation was based on geometric indices (Dice coefficient and 95th-percentile Hausdorff distance (HD95%)), the time needed for manual corrections, and clinical dosimetric endpoints obtained using automated treatment planning. RESULTS Both the Dice and HD95% results indicated that the DL algorithms generally performed better than the ABAS algorithms for automatic segmentation of HN OARs, although the hybrid-ABAS (ABAS.3) algorithm sometimes showed the highest agreement with the reference contours. Compared with DL.2 and ABAS.3, DL.1 contours were the fastest to correct. For the three solutions, the differences in dose distributions obtained using AS contours and AS plus manually corrected contours were not statistically significant. Large dose differences could be observed when OAR contours were close to the targets, but this was not systematic. CONCLUSION DL methods generally showed higher delineation accuracy than ABAS methods for AS of HN OARs. Most ABAS contours had high conformity to the reference but were more time-consuming than DL algorithms, especially when considering both the computing time and the time spent on manual corrections.
Affiliation(s)
- Madalina Costea
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Morgane Durand
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France
- Thomas Baudier
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- David Sarrut
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Marie-Claude Biston
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France.
15
Shao J, Zhou K, Cai YH, Geng DY. Application of an Improved U2-Net Model in Ultrasound Median Neural Image Segmentation. Ultrasound Med Biol 2022; 48:2512-2520. [PMID: 36167742 DOI: 10.1016/j.ultrasmedbio.2022.08.003]
Abstract
To investigate whether an improved U2-Net model could be used to segment the median nerve and improve segmentation performance, we performed a retrospective study with 402 nerve images from patients who visited Huashan Hospital from October 2018 to July 2020; 249 images were from patients with carpal tunnel syndrome, and 153 were from healthy volunteers. Of these, 320 cases were selected as the training set and 82 as the test set. The improved U2-Net model was used to segment each image, and the Dice coefficient (Dice), pixel accuracy (PA), mean intersection over union (MIoU) and average Hausdorff distance (AVD) were used to evaluate segmentation performance. The Dice, MIoU, PA and AVD values of our improved U2-Net were 72.85%, 79.66%, 95.92% and 51.37 mm, respectively, relative to the ground truth provided by clinician labeling. By comparison, the corresponding values were 43.19%, 65.57%, 86.22% and 74.82 mm for U-Net, and 58.65%, 72.53%, 88.98% and 57.30 mm for Res-U-Net. Overall, our data suggest that the improved U2-Net model can be used for segmentation of ultrasound median nerve images.
Affiliation(s)
- Jie Shao
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
- Kun Zhou
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Ye-Hua Cai
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
- Dao-Ying Geng
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China; Greater Bay Area Institute of Precision Medicine (Guangzhou), Fudan University, Guangzhou, China.
16
Sample C, Jung N, Rahmim A, Uribe C, Clark H. Development of a CT-Based Auto-Segmentation Model for Prostate-Specific Membrane Antigen (PSMA) Positron Emission Tomography-Delineated Tubarial Glands. Cureus 2022; 14:e31060. [DOI: 10.7759/cureus.31060]
17
Bhattacharyya D, Thirupathi Rao N, Joshua ESN, Hu YC. A bi-directional deep learning architecture for lung nodule semantic segmentation. Vis Comput 2022; 39:1-17. [PMID: 36097497 PMCID: PMC9453728 DOI: 10.1007/s00371-022-02657-1]
Abstract
Lung nodules are abnormal growths that can occur in either lung. Most lung nodules are harmless (not cancerous/malignant), and only rarely does a pulmonary nodule turn out to be lung cancer. X-rays and CT scans identify lung nodules, and doctors may describe such a growth as a lung spot, coin lesion, or shadow. Properly acquired computed tomography (CT) scans of the lungs are necessary to obtain an accurate diagnosis and a good estimate of the severity of lung cancer. This study aims to design and evaluate a deep learning (DL) algorithm for identifying pulmonary nodules (PNs) using the LUNA-16 dataset and to examine the prevalence of PNs using DB-NET, a new resource-efficient deep learning architecture proposed for this purpose. Accurate and efficient lung nodule segmentation is needed to detect lung cancer at an early stage, but it is a difficult task because of the nodules' characteristics on the CT image, including their concealed shape, visual quality, and context. The DB-NET architecture is presented as a resource-efficient deep learning solution to this challenge; it incorporates the Mish nonlinearity function and mask class weights to improve segmentation effectiveness. The model was extensively trained and assessed on the LUNA-16 dataset, which contained 1200 lung nodules. DB-NET surpasses the existing U-Net model with a Dice coefficient of 88.89%, achieving a level of accuracy similar to that of human experts.
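Two of the DB-NET ingredients called out here, the Mish nonlinearity and class-weighted masking loss, are sketched below. The weight value is an illustrative assumption for the strong foreground/background imbalance of nodule masks, not the paper's tuned setting.

```python
import torch
import torch.nn.functional as F

def mish(x: torch.Tensor) -> torch.Tensor:
    return x * torch.tanh(F.softplus(x))    # Mish(x) = x * tanh(ln(1 + e^x))

def weighted_mask_loss(logits, target, pos_weight=50.0):
    # up-weight rare nodule pixels relative to the abundant background
    w = torch.where(target > 0.5,
                    torch.full_like(target, pos_weight),
                    torch.ones_like(target))
    return F.binary_cross_entropy_with_logits(logits, target, weight=w)

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.98).float()  # sparse toy nodule mask
print(mish(torch.tensor([-1.0, 0.0, 2.0])), weighted_mask_loss(logits, target))
```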
Affiliation(s)
- Debnath Bhattacharyya
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Guntur, 522 502 India
- N. Thirupathi Rao
- Department of Computer Science and Engineering, Vignan’s Institute of Information Technology (A), Visakhapatnam, 530049 AP India
- Eali Stephen Neal Joshua
- Department of Computer Science and Engineering, Vignan’s Institute of Information Technology (A), Visakhapatnam, 530049 AP India
- Yu-Chen Hu
- Department of Computer Science and Information Management, Providence University, 200, Sec. 7, Taiwan Boulevard, Shalu Dist., Taichung City, 43301 Taiwan R.O.C
|
18
|
Ding Q. Evaluation of the Efficacy of Artificial Neural Network-Based Music Therapy for Depression. Comput Intell Neurosci 2022; 2022:9208607. [PMID: 36045957 PMCID: PMC9420578 DOI: 10.1155/2022/9208607]
Abstract
To evaluate the therapeutic effect of music therapy on patients with depression, this paper combines the Hilbert-Huang transform (HHT) and FastICA for EEG noise removal with a CNN-based noise detection method, and uses a deep belief network (DBN) for feature extraction and classification. Because training a DBN requires a large number of samples and suffers from slow convergence and a tendency to fall into local minima, costing considerable effort and time with relatively low learning efficiency, a DBN optimization algorithm based on an artificial neural network is proposed to evaluate the efficacy of music therapy. First, by comparing a music therapy experimental group with a control group, we verify that music therapy is effective for treating patients with depression. Second, we select features based on the frequency-band energy ratio and the sliding-average sample entropy, and then classify the EEG of depressed patients under different music perceptions by training the DBN model with continuous parameter adjustment, combined with a softmax classifier, achieving high classification accuracy. In particular, the approach can detect the differing effects of different music styles, which is of great significance for selecting appropriate music for the treatment of depressed patients.
Affiliation(s)
- Qian Ding
- College of International Exchange, Shandong Management University, Jinan 250357, China
19
Khalal DM, Behouch A, Azizi H, Maalej N. Automatic segmentation of thoracic CT images using three deep learning models. Cancer Radiother 2022; 26:1008-1015. [PMID: 35803861 DOI: 10.1016/j.canrad.2022.02.001]
Abstract
PURPOSE Deep learning (DL) techniques are widely used in medical imaging, in particular for segmentation, since manual segmentation of organs at risk (OARs) is time-consuming and suffers from inter- and intra-observer variability, while image segmentation using DL has given very promising results. In this work, we present and compare the segmentation results for OARs and a clinical target volume (CTV) in thoracic CT images using three DL models. MATERIALS AND METHODS We used CT images of 52 patients with breast cancer from a public dataset. Automatic segmentation of the lungs, the heart, and a CTV was performed using three models based on the U-Net architecture. Three metrics were used to quantify and compare the segmentation results obtained with these models: the Dice similarity coefficient (DSC), the Jaccard coefficient (J), and the Hausdorff distance (HD). RESULTS The values of DSC, J, and HD are presented for each segmented organ and each of the three models. Examples of automatic segmentations are presented and compared with the corresponding ground-truth delineations, and our values are also compared with recent results obtained by other authors. CONCLUSION The performance of three DL models was evaluated for the delineation of the lungs, the heart, and a CTV. This study clearly shows that these 2D U-Net-based models can delineate organs in CT images with good performance compared with other models; overall, the three models perform similarly. With a dataset containing more CT images, the three models should give better results.
Collapse
Affiliation(s)
- D M Khalal
- Department of Physics, Faculty of Sciences, Laboratory of dosing, analysis and characterization in high resolution, Ferhat Abbas Sétif 1 University, El Baz campus, 19137 Sétif, Algeria.
| | - A Behouch
- Department of Physics, Faculty of Sciences, Laboratory of dosing, analysis and characterization in high resolution, Ferhat Abbas Sétif 1 University, El Baz campus, 19137 Sétif, Algeria
| | - H Azizi
- Department of Physics, Faculty of Sciences, Laboratory of dosing, analysis and characterization in high resolution, Ferhat Abbas Sétif 1 University, El Baz campus, 19137 Sétif, Algeria
| | - N Maalej
- Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
| |
Collapse
|
20
|
Li J, Anne R. Comparison of Eclipse Smart Segmentation and MIM Atlas Segment for liver delineation for yttrium-90 selective internal radiation therapy. J Appl Clin Med Phys 2022; 23:e13668. [PMID: 35702944 PMCID: PMC9359022 DOI: 10.1002/acm2.13668] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 04/12/2022] [Accepted: 05/19/2022] [Indexed: 11/09/2022] Open
Abstract
Purpose The aim was to compare Smart Segmentation of the Eclipse treatment planning system and Atlas Segment of MIM software for liver delineation for resin yttrium‐90 (Y‐90) procedures. Materials and methods CT images of 20 patients treated with resin Y‐90 selective internal radiation therapy (SIRT) were tested. Liver contours generated with Smart Segmentation and Atlas Segment were compared with contours delineated manually by physicians. The Dice similarity coefficient (DSC), mean distance to agreement (MDA), and ratio of volume (RV) were calculated. The contours were also evaluated with activity calculations, and the ratio of activity (RA) was calculated. Results Mean DSCs were 0.77 and 0.83, mean MDAs were 0.88 and 0.71 cm, mean RVs were 0.95 and 1.02, and mean RAs were 1.00 and 1.00 for the Eclipse and MIM results, respectively. Conclusion MIM outperformed Eclipse in both DSC and MDA, whereas the differences in liver volumes and calculated activities were statistically insignificant between the Eclipse and MIM results. Both auto‐segmentation tools can be used to generate initial liver contours for resin Y‐90 SIRT, which need to be reviewed and edited by physicians.
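Mean distance to agreement is an average symmetric surface distance; below is a minimal sketch assuming known voxel spacing and non-empty masks (the `surface` helper is illustrative and not taken from either vendor tool).

```python
import numpy as np
from scipy import ndimage

def mean_distance_to_agreement(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance (physical units) between two
    non-empty binary masks; surfaces taken as voxels removed by erosion."""
    def surface(mask):
        return mask & ~ndimage.binary_erosion(mask)
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # Distance from each surface voxel of one mask to the other's surface.
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return (da.sum() + db.sum()) / (da.size + db.size)
```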
Collapse
Affiliation(s)
- Jun Li
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
| | - Rani Anne
- Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
| |
Collapse
|
21
|
De Asis-Cruz J, Krishnamurthy D, Jose C, Cook KM, Limperopoulos C. FetalGAN: Automated Segmentation of Fetal Functional Brain MRI Using Deep Generative Adversarial Learning and Multi-Scale 3D U-Net. Front Neurosci 2022; 16:887634. [PMID: 35747213 PMCID: PMC9209698 DOI: 10.3389/fnins.2022.887634] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Accepted: 05/16/2022] [Indexed: 01/02/2023] Open
Abstract
An important step in the preprocessing of resting state functional magnetic resonance images (rs-fMRI) is the separation of brain from non-brain voxels. Widely used imaging tools such as FSL's BET2 and AFNI's 3dSkullStrip accomplish this task effectively in children and adults. In fetal functional brain imaging, however, the presence of maternal tissue around the brain coupled with the non-standard position of the fetal head limit the usefulness of these tools. Accurate brain masks are thus generated manually, a time-consuming and tedious process that slows down preprocessing of fetal rs-fMRI. Recently, deep learning-based segmentation models such as convolutional neural networks (CNNs) have been increasingly used for automated segmentation of medical images, including the fetal brain. Here, we propose a computationally efficient end-to-end generative adversarial neural network (GAN) for segmenting the fetal brain. This method, which we call FetalGAN, yielded whole brain masks that closely approximated the manually labeled ground truth. FetalGAN performed better than the 3D U-Net model and BET2: FetalGAN, Dice score = 0.973 ± 0.013, precision = 0.977 ± 0.015; 3D U-Net, Dice score = 0.954 ± 0.054, precision = 0.967 ± 0.037; BET2, Dice score = 0.856 ± 0.084, precision = 0.758 ± 0.113. FetalGAN was also faster than 3D U-Net and the manual method (7.35 s vs. 10.25 s vs. ∼5 min/volume). To the best of our knowledge, this is the first successful implementation of a 3D CNN with a GAN on fetal fMRI brain images and represents a significant advance in fully automating the processing of rs-fMRI images.
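The adversarial training behind such a segmentation GAN alternates a critic update with a segmenter update; the toy PyTorch sketch below shows this pattern under stated assumptions (placeholder `segmenter` and `critic` networks and a simplified soft-Dice term; this is not the FetalGAN code).

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_step(segmenter, critic, opt_s, opt_c, image, mask):
    # 1) Critic update: real masks vs. detached predictions.
    fake = segmenter(image).detach().sigmoid()
    loss_c = bce(critic(mask), torch.ones_like(critic(mask))) \
           + bce(critic(fake), torch.zeros_like(critic(fake)))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # 2) Segmenter update: soft-Dice term plus "fool the critic" term.
    prob = segmenter(image).sigmoid()
    dice = 1 - (2 * (prob * mask).sum() + 1) / (prob.sum() + mask.sum() + 1)
    adv = bce(critic(prob), torch.ones_like(critic(prob)))
    loss_s = dice + adv
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_s.item(), loss_c.item()
```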
Collapse
Affiliation(s)
- Josepheen De Asis-Cruz
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
| | - Dhineshvikram Krishnamurthy
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
| | - Chris Jose
- Department of Computer Science, University of Maryland, College Park, MD, United States
| | - Kevin M. Cook
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
| | - Catherine Limperopoulos
- Developing Brain Institute, Department of Diagnostic Radiology, Children’s National Hospital, Washington, DC, United States
| |
Collapse
|
22
|
Zhou J, Xin H. Emerging artificial intelligence methods for fighting lung cancer: a survey. CLINICAL EHEALTH 2022. [DOI: 10.1016/j.ceh.2022.04.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
|
23
|
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify AI applications in RT over the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow according to the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable by individuals or groups. AI allows the iterative application of complex tasks to large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.
Collapse
|
24
|
Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022; 12:480. [PMID: 35330479 PMCID: PMC8950137 DOI: 10.3390/jpm12030480] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 02/28/2022] [Accepted: 03/10/2022] [Indexed: 12/15/2022] Open
Abstract
Advancements in the development of computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Considering the high incidence and mortality associated with lung cancer, there is a need for the most accurate clinical procedures; thus, the possibility of using artificial intelligence (AI) tools for decision support is becoming a closer reality. At every stage of the lung cancer clinical pathway, specific obstacles are identified that "motivate" the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although we focus on lung cancer in this review, we also provide a clearer definition of the path used to integrate AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Collapse
Affiliation(s)
- Francisco Silva
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
| | - Tania Pereira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
| | - Inês Neves
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
| | - Joana Morgado
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
| | - Cláudia Freitas
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - Mafalda Malafaia
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
| | - Joana Sousa
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
| | - João Fonseca
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
| | - Eduardo Negrão
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
| | - Beatriz Flor de Lima
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
| | - Miguel Correia da Silva
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
| | - António J. Madureira
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - Isabel Ramos
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - José Luis Costa
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal
- IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
| | - Venceslau Hespanhol
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - António Cunha
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
| | - Hélder P. Oliveira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
| |
Collapse
|
25
|
Tian Y, Wang J, Yang W, Wang J, Qian D. Deep multi-instance transfer learning for pneumothorax classification in chest X-ray images. Med Phys 2021; 49:231-243. [PMID: 34802144 DOI: 10.1002/mp.15328] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 10/17/2021] [Accepted: 10/18/2021] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Pneumothorax is a life-threatening emergency that requires immediate treatment. Frontal-view chest X-ray images are typically used for pneumothorax detection in clinical practice. However, manual review of radiographs is time-consuming, labor-intensive, and highly dependent on the experience of radiologists, which may lead to misdiagnosis. Here, we aim to develop a reliable automatic classification method to assist radiologists in rapidly and accurately diagnosing pneumothorax in frontal chest radiographs. METHODS A novel residual neural network (ResNet)-based two-stage deep-learning strategy is proposed for pneumothorax identification: local feature learning (LFL) followed by global multi-instance learning (GMIL). Most of the non-lesion regions in the images are removed so that discriminative features can be learned. Two datasets are used for large-scale validation: a private dataset (27 955 frontal-view chest X-ray images) and a public dataset (the National Institutes of Health [NIH] ChestX-ray14; 112 120 frontal-view X-ray images). Identification performance is evaluated using accuracy, precision, recall, specificity, F1-score, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC). Fivefold cross-validation is conducted on the datasets, and the mean and standard deviation of the above metrics are calculated to assess the overall performance of the model. RESULTS The experimental results demonstrate that the proposed learning strategy achieves state-of-the-art performance on the NIH dataset, with an accuracy, AUC, precision, recall, specificity, and F1-score of 94.4% ± 0.7%, 97.3% ± 0.5%, 94.2% ± 0.3%, 94.6% ± 1.5%, 94.2% ± 0.4%, and 94.4% ± 0.7%, respectively. CONCLUSIONS The experimental results demonstrate that the proposed CAD system is an efficient assistive tool for the identification of pneumothorax.
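Fivefold cross-validation with mean-and-standard-deviation reporting, as used here, can be sketched with scikit-learn; `make_model` is a placeholder for any classifier with `fit`/`predict_proba`, not the authors' ResNet pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, f1_score

def cross_validate(make_model, X, y, folds=5, seed=0):
    """Return (mean, std) of AUC and F1 across stratified folds."""
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=seed)
    aucs, f1s = [], []
    for train_idx, test_idx in skf.split(X, y):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        prob = model.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))
        f1s.append(f1_score(y[test_idx], prob > 0.5))
    return (np.mean(aucs), np.std(aucs)), (np.mean(f1s), np.std(f1s))
```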
Collapse
Affiliation(s)
- Yuchi Tian
- Academy of Engineering and Technology, Fudan University, Shanghai, China
| | - Jiawei Wang
- Department of Radiology, The Second Affiliated Hospital Zhejiang University School of Medicine, Hangzhou, China
| | - Wenjie Yang
- Department of Radiology, Ruijin Hospital Affiliated to School of Medicine, Shanghai Jiao Tong University, China
| | - Jun Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
26
|
Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. [PMID: 34774001 PMCID: PMC8590773 DOI: 10.1186/s12880-021-00703-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 11/08/2021] [Indexed: 12/16/2022] Open
Abstract
Background The 3D U-Net model has been proven to perform well in automatic organ segmentation. The aim of this study is to evaluate the feasibility of the 3D U-Net algorithm for the automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images. Methods A total of 393 DWI images of patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI images from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy with which suspicious metastatic LNs (short diameter > 0.8 cm) were detected was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging on a hold-out test dataset between the model and a radiologist was assessed using Cohen’s kappa coefficient. Results In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 mm, 2.01 mm, and 1.54 mm, respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98 and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892–0.993). High consistency of LN staging (kappa = 0.922) was achieved between the model and an expert radiologist. Conclusion The 3D U-Net algorithm can accurately detect and segment pelvic LNs based on DWI images.
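Both the patient-level AUC and the model-radiologist agreement can be computed with scikit-learn; the arrays below are invented toy data for illustration only.

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Hypothetical per-patient staging calls and probabilities, mirroring the
# paper's evaluation (not the study data).
model_stage       = [1, 0, 1, 1, 0, 0, 1, 0]
radiologist_stage = [1, 0, 1, 0, 0, 0, 1, 0]
kappa = cohen_kappa_score(model_stage, radiologist_stage)

has_suspicious_ln = [1, 0, 1, 1, 0, 0, 1, 0]        # ground truth per patient
model_probability = [0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.95, 0.3]
auc = roc_auc_score(has_suspicious_ln, model_probability)
print(f"kappa={kappa:.3f}, AUC={auc:.3f}")
```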
Collapse
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Jiahao Huang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
| | - Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
| | - Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China.
| |
Collapse
|
27
|
Gan W, Wang H, Gu H, Duan Y, Shao Y, Chen H, Feng A, Huang Y, Fu X, Ying Y, Quan H, Xu Z. Automatic segmentation of lung tumors on CT images based on a 2D & 3D hybrid convolutional neural network. Br J Radiol 2021; 94:20210038. [PMID: 34347535 PMCID: PMC9328064 DOI: 10.1259/bjr.20210038] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 06/22/2021] [Accepted: 07/25/2021] [Indexed: 11/05/2022] Open
Abstract
OBJECTIVE A stable and accurate automatic tumor delineation method has been developed to facilitate the intelligent design of the lung cancer radiotherapy process. The purpose of this paper is to introduce an automatic tumor segmentation network for lung cancer on CT images based on deep learning. METHODS A hybrid convolutional neural network (CNN) combining a 2D CNN and a 3D CNN was implemented for automatic lung tumor delineation using CT images. The 3D CNN used a V-Net model to extract tumor context information from CT sequence images. The 2D CNN used an encoder-decoder structure based on a dense connection scheme, which expands information flow and promotes feature propagation. The 2D and 3D features were then fused through a hybrid module. The hybrid CNN was compared with the individual 3D CNN and 2D CNN, and three evaluation metrics, Dice, Jaccard and Hausdorff distance (HD), were used for quantitative evaluation. The relationship between the segmentation performance of the hybrid network and the GTV volume was also explored. RESULTS The newly introduced hybrid CNN was trained and tested on a dataset of 260 cases and achieved a median Dice of 0.73, with mean and standard deviation of 0.72 ± 0.10; the Jaccard and HD metrics were 0.58 ± 0.13 and 21.73 ± 13.30 mm, respectively. The hybrid network significantly outperformed the individual 3D CNN and 2D CNN on the three evaluation metrics (p < 0.001). Larger GTVs yielded higher Dice values, but their delineation at the tumor boundary was unstable. CONCLUSIONS The implemented hybrid CNN achieved good lung tumor segmentation performance on CT images. ADVANCES IN KNOWLEDGE The hybrid CNN has valuable prospects given its ability to segment lung tumors.
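One plausible reading of the hybrid module is broadcasting per-slice 2D features along the slice axis and concatenating them with the 3D features before a 1x1x1 fusion convolution; the PyTorch sketch below encodes that assumption with illustrative channel sizes (it is not the authors' exact module).

```python
import torch
import torch.nn as nn

class HybridFusion(nn.Module):
    """Fuse a 2D feature map with a 3D feature volume by broadcasting."""
    def __init__(self, c2d=32, c3d=32, out=32):
        super().__init__()
        self.fuse = nn.Conv3d(c2d + c3d, out, kernel_size=1)

    def forward(self, feat2d, feat3d):
        # feat2d: (B, C2, H, W) from a slice; feat3d: (B, C3, D, H, W)
        depth = feat3d.shape[2]
        feat2d = feat2d.unsqueeze(2).expand(-1, -1, depth, -1, -1)
        return self.fuse(torch.cat([feat2d, feat3d], dim=1))

fused = HybridFusion()(torch.randn(1, 32, 24, 24), torch.randn(1, 32, 8, 24, 24))
print(fused.shape)  # torch.Size([1, 32, 8, 24, 24])
```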
Collapse
Affiliation(s)
| | - Hao Wang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Hengle Gu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Yanhua Duan
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Yan Shao
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Hua Chen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Aihui Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Ying Huang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Yanchen Ying
- Department of Radiation Physics, Zhejiang Cancer Hospital, University of Chinese Academy of Sciences, Zhejiang, China
| | - Hong Quan
- School of Physics and Technology, University of Wuhan, Wuhan, China
| | - Zhiyong Xu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
28
|
Using Convolutional Encoder Networks to Determine the Optimal Magnetic Resonance Image for the Automatic Segmentation of Multiple Sclerosis. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11188335] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Multiple Sclerosis (MS) is a neuroinflammatory demyelinating disease that affects over 2,000,000 individuals worldwide. It is characterized by white matter lesions that are identified through the segmentation of magnetic resonance images (MRIs). Manual segmentation is very time-intensive because radiologists spend a great amount of time labeling T1-weighted, T2-weighted, and FLAIR MRIs. In response, deep learning models have been created to reduce segmentation time by automatically detecting lesions. These models often use individual MRI sequences as well as combinations, such as FLAIR2, which is the multiplication of the FLAIR and T2 sequences. Unlike many other studies, this one seeks to determine an optimal MRI sequence, thus saving further time by not having to obtain other MRI sequences. With this in mind, four Convolutional Encoder Networks (CENs) with different network architectures (U-Net, U-Net++, Linknet, and Feature Pyramid Network) were used to ensure that the optimal MRI sequence applies to a wide array of deep learning models. Each model used a pretrained ResNeXt-50 encoder to conserve memory and train faster. Training and testing were performed using two public datasets with 30 and 15 patients. Fisher's exact test was used to evaluate statistical significance, and the automatic segmentation times were compiled for the top two models. This work determined that FLAIR is the optimal sequence based on the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). Using FLAIR, the U-Net++ with the ResNeXt-50 encoder achieved a high DSC of 0.7159.
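FLAIR2 as described is simply the voxelwise product of co-registered FLAIR and T2 volumes; a minimal NumPy sketch follows, where the min-max normalization is an added assumption to keep intensities comparable across sequences.

```python
import numpy as np

def flair2(flair, t2, eps=1e-8):
    """Voxelwise product of co-registered FLAIR and T2 volumes."""
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + eps)
    return norm(flair) * norm(t2)

combined = flair2(np.random.rand(16, 64, 64), np.random.rand(16, 64, 64))
```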
Collapse
|
29
|
Gu Y, Chi J, Liu J, Yang L, Zhang B, Yu D, Zhao Y, Lu X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput Biol Med 2021; 137:104806. [PMID: 34461501 DOI: 10.1016/j.compbiomed.2021.104806] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 08/23/2021] [Accepted: 08/23/2021] [Indexed: 12/17/2022]
Abstract
Lung cancer has one of the highest mortality rates of all cancers. According to the National Lung Screening Trial, patients who underwent low-dose computed tomography (CT) scanning once a year for 3 years showed a 20% decline in lung cancer mortality. To further improve the survival rate of lung cancer patients, computer-aided diagnosis (CAD) technology shows great potential. In this paper, we summarize existing CAD approaches that apply deep learning to CT scan data for pre-processing, lung segmentation, false positive reduction, lung nodule detection, segmentation, classification and retrieval. Selected papers are drawn from academic journals and conferences up to November 2020. We discuss the development of deep learning, describe several important aspects of lung nodule CAD systems and assess the performance of the selected studies on various datasets, including LIDC-IDRI, LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP. Overall, in the detection studies reviewed, the sensitivity of these techniques ranges from 61.61% to 98.10%, and the number of false positives (FPs) per scan is between 0.125 and 32. In the selected classification studies, the accuracy ranges from 75.01% to 97.58%. The precision of the selected retrieval studies is between 71.43% and 87.29%. Based on this performance, deep learning-based CAD technologies for the detection and classification of pulmonary nodules achieve satisfactory results. However, many challenges and limitations remain, including over-fitting, lack of interpretability and insufficient annotated data. This review helps researchers and radiologists to better understand CAD technology for pulmonary nodule detection, segmentation, classification and retrieval. We summarize the performance of current techniques, consider the challenges, and propose directions for future high-impact research.
Collapse
Affiliation(s)
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China.
| | - Jingqian Chi
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China.
| | - Jiaqi Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Xiaoqi Lu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China
| |
Collapse
|
30
|
Nemoto T, Futakami N, Kunieda E, Yagi M, Takeda A, Akiba T, Mutu E, Shigematsu N. Effects of sample size and data augmentation on U-Net-based automatic segmentation of various organs. Radiol Phys Technol 2021; 14:318-327. [PMID: 34254251 DOI: 10.1007/s12194-021-00630-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 07/01/2021] [Accepted: 07/05/2021] [Indexed: 10/20/2022]
Abstract
Deep learning has demonstrated high efficacy for automatic segmentation in contour delineation, which is crucial in radiation therapy planning. However, the collection, labeling, and management of medical imaging data can be challenging. This study aims to elucidate the effects of sample size and data augmentation on the automatic segmentation of computed tomography images using U-Net, a deep learning method. For the chest and pelvic regions, 232 and 556 cases are evaluated, respectively. We investigate multiple conditions by varying the combined size of the training and validation datasets across a broad range: 10-200 and 10-500 cases for the chest and pelvic regions, respectively. A U-Net is constructed, and horizontal-flip data augmentation, which produces left-right mirrored images and thus doubles the number of images, is compared with no augmentation for each training session. For all lung cases and for more than 100 prostate, bladder, and rectum cases, adding horizontal-flip data augmentation is almost as effective as doubling the number of cases. The slope of the Dice similarity coefficient (DSC) in all organs decreases rapidly until approximately 100 cases, stabilizes after 200 cases, and shows minimal changes as the number of cases increases further. With data augmentation, the DSCs stabilize at a smaller sample size in all organs except the heart. This finding is applicable to the automation of radiation therapy for rare cancers, where large datasets may be difficult to obtain.
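Horizontal-flip augmentation as used here amounts to mirroring each image-label pair left-right, doubling the effective number of cases; a minimal NumPy sketch with illustrative shapes follows.

```python
import numpy as np

def augment_horizontal_flip(images, labels):
    """images, labels: arrays of shape (N, H, W); returns 2N samples each."""
    flipped_images = images[:, :, ::-1]   # mirror along the width axis
    flipped_labels = labels[:, :, ::-1]   # labels must be flipped identically
    return (np.concatenate([images, flipped_images], axis=0),
            np.concatenate([labels, flipped_labels], axis=0))

X, Y = np.zeros((10, 128, 128)), np.zeros((10, 128, 128))
X_aug, Y_aug = augment_horizontal_flip(X, Y)  # now 20 cases each
```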
Collapse
Affiliation(s)
- Takafumi Nemoto
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan.
| | - Natsumi Futakami
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
| | - Etsuo Kunieda
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan.,Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
| | - Masamichi Yagi
- Platform Technical Engineer Division, HPC and AI Business Department, System Platform Solution Unit, Fujitsu Limited, World Trade Center Building, 4-1, Hamamatsucho 2-chome, Minato-ku, Tokyo, 105-6125, Japan
| | - Atsuya Takeda
- Radiation Oncology Center, Ofuna Chuo Hospital, Kamakura-shi, Kanagawa, 247-0056, Japan
| | - Takeshi Akiba
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
| | - Eride Mutu
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
| | - Naoyuki Shigematsu
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
| |
Collapse
|
31
|
Alis D, Yergin M, Alis C, Topel C, Asmakutlu O, Bagcilar O, Senli YD, Ustundag A, Salt V, Dogan SN, Velioglu M, Selcuk HH, Kara B, Oksuz I, Kizilkilic O, Karaarslan E. Inter-vendor performance of deep learning in segmenting acute ischemic lesions on diffusion-weighted imaging: a multicenter study. Sci Rep 2021; 11:12434. [PMID: 34127692 PMCID: PMC8203621 DOI: 10.1038/s41598-021-91467-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 05/10/2021] [Indexed: 11/09/2022] Open
Abstract
There is little evidence on the applicability of deep learning (DL) to the segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) between magnetic resonance imaging (MRI) scanners of different manufacturers. We retrospectively included DWI data of patients with acute ischemic lesions from six centers. Datasets A (n = 2986) and B (n = 3951) included data from Siemens and GE MRI scanners, respectively. The datasets were split into training (80%), validation (10%), and internal test (10%) sets, and six neuroradiologists created ground-truth masks. Models A and B were the proposed neural networks trained on datasets A and B. The models were subsequently fine-tuned across the datasets using their validation data. Another radiologist performed the segmentation on the test sets for comparison. The median Dice scores of models A and B were 0.858 and 0.857 on the internal tests, which were non-inferior to the radiologist's performance, but both models performed worse than the radiologist on the external tests. The fine-tuned models A and B achieved median Dice scores of 0.832 and 0.846, which were non-inferior to the radiologist's performance on the external tests. The present work shows that the inter-vendor operability of deep learning for the segmentation of ischemic lesions on DWI might be enhanced via transfer learning, thereby improving clinical applicability and generalizability.
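The fine-tuning step can be sketched as loading vendor-A weights, freezing early layers, and briefly training on vendor-B data; the tiny network, file name, and freezing policy below are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Stand-in network: two "encoder" layers followed by a head.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),   # early layers to freeze
    nn.Conv2d(8, 1, 1),                         # task head to fine-tune
)
state = torch.load("weights_vendor_a.pt")       # pretrained on dataset A
model.load_state_dict(state)

for i, layer in enumerate(model):
    if i < 2:                                   # freeze the early layers
        for p in layer.parameters():
            p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5)
# ...then train for a few epochs on vendor B's validation set.
```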
Collapse
Affiliation(s)
- Deniz Alis
- Department of Radiology, Acibadem Mehmet Ali Aydinlar University School of Medicine, Istanbul, Turkey.
| | - Mert Yergin
- Department of Software Engineering and Applied Sciences, Bahcesehir University, Istanbul, Turkey
| | - Ceren Alis
- Cerrahpaşa Medical Faculty, Neurology Department, Istanbul University-Cerrahpasa, Istanbul, Turkey
| | - Cagdas Topel
- Department of Radiology, Istanbul Mehmet Akif Ersoy Thoracic and Cardiovascular Surgery Training and Research Hospital, Halkali/Istanbul, Turkey
| | - Ozan Asmakutlu
- Department of Radiology, Istanbul Mehmet Akif Ersoy Thoracic and Cardiovascular Surgery Training and Research Hospital, Halkali/Istanbul, Turkey
| | - Omer Bagcilar
- Radiology Department, Istanbul Silivri State Hospital, Istanbul, Turkey
| | - Yeseren Deniz Senli
- Cerrahpaşa Medical Faculty, Radiology Department, Istanbul University-Cerrahpasa, Istanbul, Turkey
| | - Ahmet Ustundag
- Cerrahpaşa Medical Faculty, Radiology Department, Istanbul University-Cerrahpasa, Istanbul, Turkey
| | - Vefa Salt
- Cerrahpaşa Medical Faculty, Radiology Department, Istanbul University-Cerrahpasa, Istanbul, Turkey
| | - Sebahat Nacar Dogan
- Radiology Department, Istanbul Gaziosmanpasa Training and Research Hospital, Istanbul, Turkey
| | - Murat Velioglu
- Radiology Department, Istanbul Fatih Sultan Mehmet Training and Research Hospital, Istanbul, Turkey
| | - Hakan Hatem Selcuk
- Radiology Department, Istanbul Bakırköy Sadi Konuk Training and Research Hospital, Istanbul, Turkey
| | - Batuhan Kara
- Radiology Department, Istanbul Bakırköy Sadi Konuk Training and Research Hospital, Istanbul, Turkey
| | - Ilkay Oksuz
- Department of Software Engineering and Applied Sciences, Istanbul Technical University, Istanbul, Turkey
| | - Osman Kizilkilic
- Cerrahpaşa Medical Faculty, Radiology Department, Istanbul University-Cerrahpasa, Istanbul, Turkey
| | - Ercan Karaarslan
- Department of Radiology, Acibadem Mehmet Ali Aydinlar University School of Medicine, Istanbul, Turkey
| |
Collapse
|
32
|
肖 汉, 冉 智, 黄 金, 任 慧, 刘 畅, 张 邦, 张 勃, 党 军. [Research progress in lung parenchyma segmentation based on computed tomography]. SHENG WU YI XUE GONG CHENG XUE ZA ZHI = JOURNAL OF BIOMEDICAL ENGINEERING = SHENGWU YIXUE GONGCHENGXUE ZAZHI 2021; 38:379-386. [PMID: 33913299 PMCID: PMC9927687 DOI: 10.7507/1001-5515.202008032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 01/31/2021] [Indexed: 11/03/2022]
Abstract
Lung diseases such as lung cancer and COVID-19 seriously endanger human health and life safety, so early screening and diagnosis are particularly important. Computed tomography (CT) is one of the important ways to screen for lung diseases, and lung parenchyma segmentation based on CT images is the key step in this screening; high-quality lung parenchyma segmentation can effectively improve the early diagnosis and treatment of lung diseases. Automatic, fast and accurate segmentation of lung parenchyma based on CT images can effectively compensate for the low efficiency and strong subjectivity of manual segmentation, and has become one of the research hotspots in this field. In this paper, the research progress in lung parenchyma segmentation is reviewed based on the related literature published in China and abroad in recent years. Traditional machine learning methods and deep learning methods are compared and analyzed, and progress in improving the network structures of deep learning models is emphatically introduced. Some unsolved problems in lung parenchyma segmentation are discussed, and development prospects are outlined, providing a reference for researchers in related fields.
Collapse
Affiliation(s)
- 汉光 肖
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
| | - 智强 冉
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
| | - 金锋 黄
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
| | - 慧娇 任
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
| | - 畅 刘
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
| | - 邦林 张
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
| | - 勃龙 张
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
| | - 军 党
- Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
Collapse
|
33
|
Pu J, Leader JK, Bandos A, Ke S, Wang J, Shi J, Du P, Guo Y, Wenzel SE, Fuhrman CR, Wilson DO, Sciurba FC, Jin C. Automated quantification of COVID-19 severity and progression using chest CT images. Eur Radiol 2021; 31:436-446. [PMID: 32789756 PMCID: PMC7755837 DOI: 10.1007/s00330-020-07156-2] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 07/23/2020] [Accepted: 08/05/2020] [Indexed: 12/20/2022]
Abstract
OBJECTIVE To develop and test computer software to detect, quantify, and monitor progression of pneumonia associated with COVID-19 using chest CT scans. METHODS One hundred twenty chest CT scans from subjects with lung infiltrates were used for training deep learning algorithms to segment lung regions and vessels. Seventy-two serial scans from 24 COVID-19 subjects were used to develop and test algorithms to detect and quantify the presence and progression of infiltrates associated with COVID-19. The algorithm included (1) automated lung boundary and vessel segmentation, (2) registration of the lung boundary between serial scans, (3) computerized identification of the pneumonitis regions, and (4) assessment of disease progression. Agreement between radiologist manually delineated regions and computer-detected regions was assessed using the Dice coefficient. Serial scans were registered and used to generate a heatmap visualizing the change between scans. Two radiologists, using a five-point Likert scale, subjectively rated heatmap accuracy in representing progression. RESULTS There was strong agreement between computer detection and the manual delineation of pneumonic regions with a Dice coefficient of 81% (CI 76-86%). In detecting large pneumonia regions (> 200 mm3), the algorithm had a sensitivity of 95% (CI 94-97%) and specificity of 84% (CI 81-86%). Radiologists rated 95% (CI 72 to 99) of heatmaps at least "acceptable" for representing disease progression. CONCLUSION The preliminary results suggested the feasibility of using computer software to detect and quantify pneumonic regions associated with COVID-19 and to generate heatmaps that can be used to visualize and assess progression. KEY POINTS • Both computer vision and deep learning technology were used to develop computer software to quantify the presence and progression of pneumonia associated with COVID-19 depicted on CT images. • The computer software was tested using both quantitative experiments and subjective assessment. • The computer software has the potential to assist in the detection of the pneumonic regions, monitor disease progression, and assess treatment efficacy related to COVID-19.
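Once serial scans are registered, the progression heatmap reduces to rendering the signed density change inside the lung mask; a minimal Matplotlib sketch follows, with the HU color range chosen arbitrarily and registration assumed already done.

```python
import numpy as np
import matplotlib.pyplot as plt

def change_heatmap(baseline, followup, lung_mask, slice_idx):
    """Overlay the HU change between registered scans on a baseline slice."""
    delta = (followup - baseline) * lung_mask            # HU change per voxel
    plt.imshow(baseline[slice_idx], cmap="gray")
    plt.imshow(np.ma.masked_where(lung_mask[slice_idx] == 0, delta[slice_idx]),
               cmap="coolwarm", alpha=0.5, vmin=-200, vmax=200)
    plt.colorbar(label="HU change"); plt.axis("off"); plt.show()
```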
Collapse
Affiliation(s)
- Jiantao Pu
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
| | - Joseph K Leader
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Andriy Bandos
- Department of Biostatistics, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Shi Ke
- Department of Radiology, Xi'an Jiaotong University The First Affiliated Hospital, Xi'an, Shaanxi, China
| | - Jing Wang
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Junli Shi
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Pang Du
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Youmin Guo
- Department of Radiology, Xi'an Jiaotong University The First Affiliated Hospital, Xi'an, Shaanxi, China
| | - Sally E Wenzel
- Division of Pulmonary, Allergy and Critical Care Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Carl R Fuhrman
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - David O Wilson
- Division of Pulmonary, Allergy and Critical Care Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Frank C Sciurba
- Division of Pulmonary, Allergy and Critical Care Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, 15213, USA
| | - Chenwang Jin
- Department of Radiology, Xi'an Jiaotong University The First Affiliated Hospital, Xi'an, Shaanxi, China.
| |
Collapse
|
34
|
Esaki T, Furukawa R. [Volume Measurements of Post-transplanted Liver of Pediatric Recipients Using Workstations and Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:1133-1142. [PMID: 33229843 DOI: 10.6009/jjrt.2020_jsrt_76.11.1133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE The purpose of this study was to propose a method for segmentation and volume measurement of the graft liver and spleen of pediatric transplant recipients on digital imaging and communications in medicine (DICOM)-format images using U-Net and three-dimensional (3-D) workstations (3DWS). METHOD For segmentation accuracy assessment, Dice coefficients were calculated for the graft liver and spleen. After verifying that the created DICOM-format images could be imported using the existing 3DWS, accuracy rates between the ground truth and segmentation images were calculated via mask processing. RESULT The Dice coefficients for the test data were as follows: graft liver, 0.758; spleen, 0.577. All created DICOM-format images were importable using the 3DWS, with accuracy rates of 87.10±4.70% and 80.27±11.29% for the graft liver and spleen, respectively. CONCLUSION The U-Net could be used for graft liver and spleen segmentation, and volume measurement using the 3DWS was simplified by this method.
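The volume measurement itself is voxel counting scaled by voxel size; a minimal sketch, with illustrative voxel spacing values.

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm=(1.0, 0.7, 0.7)):
    """Volume of a binary mask in millilitres, given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # mm^3 -> mL

liver_mask = np.zeros((200, 512, 512), dtype=np.uint8)   # placeholder mask
print(f"graft liver volume: {mask_volume_ml(liver_mask):.1f} mL")
```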
Collapse
Affiliation(s)
- Toru Esaki
- Department of Radiologic Technology, Jichi Medical University Hospital
| | - Rieko Furukawa
- Department of Pediatric Medical Imaging, Jichi Children's Medical Center Tochigi
| |
Collapse
|
35
|
Nemoto T, Futakami N, Yagi M, Kunieda E, Akiba T, Takeda A, Shigematsu N. Simple low-cost approaches to semantic segmentation in radiation therapy planning for prostate cancer using deep learning with non-contrast planning CT images. Phys Med 2020; 78:93-100. [DOI: 10.1016/j.ejmp.2020.09.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Revised: 07/24/2020] [Accepted: 09/01/2020] [Indexed: 10/23/2022] Open
|