1. Yasaka K, Abe O. Impact of rapid iodine contrast agent infusion on tracheal diameter and lung volume in CT pulmonary angiography measured with deep learning-based algorithm. Jpn J Radiol 2024; 42:1003-1011. PMID: 38733470; PMCID: PMC11364558; DOI: 10.1007/s11604-024-01591-7.
Abstract
PURPOSE To compare computed tomography (CT) pulmonary angiography and unenhanced CT to determine the effect of rapid iodine contrast agent infusion on tracheal diameter and lung volume. MATERIALS AND METHODS This retrospective study included 101 patients who underwent both CT pulmonary angiography and unenhanced CT within 365 days of each other. CT pulmonary angiography was scanned at the end-inspiratory level 20 s after starting the contrast agent injection. Commercial software based on a deep learning technique was used to segment the lungs, and lung volume was evaluated automatically. The tracheal diameter at the thoracic inlet level was also measured. The ratios of CT pulmonary angiography to unenhanced CT were then calculated for the tracheal diameter (TDPAU) and both lung volumes (BLVPAU). RESULTS Tracheal diameter and both lung volumes were significantly smaller in CT pulmonary angiography (17.2 ± 2.6 mm and 3668 ± 1068 ml, respectively) than in unenhanced CT (17.7 ± 2.5 mm and 3887 ± 1086 ml, respectively) (p < 0.001 for both). A statistically significant correlation was found between TDPAU and BLVPAU, with a correlation coefficient of 0.451 (95% confidence interval, 0.280-0.594) (p < 0.001). No factor showed a significant association with TDPAU. The type of contrast agent had a significant association with BLVPAU (p = 0.042). CONCLUSIONS Rapid infusion of iodine contrast agent reduced the tracheal diameter and both lung volumes in CT pulmonary angiography, scanned at the end-inspiratory level, compared with unenhanced CT.
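The paired design above (each patient scanned with both protocols) reduces to a paired t-test on the per-patient difference plus the per-patient CTPA-to-unenhanced ratio. A minimal sketch, using synthetic numbers rather than the study's data:

```python
# Illustrative sketch (synthetic data, not the study's): paired t-test on a
# per-patient measurement acquired with both CTPA and unenhanced CT, plus the
# per-patient CTPA/unenhanced ratio analogous to TDPAU.
import math
import random

random.seed(0)
n = 101
unenhanced = [random.gauss(17.7, 2.5) for _ in range(n)]       # diameter, mm
ctpa = [u - abs(random.gauss(0.5, 0.3)) for u in unenhanced]   # smaller on CTPA

diffs = [c - u for c, u in zip(ctpa, unenhanced)]
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))                        # paired t statistic
tdpau = [c / u for c, u in zip(ctpa, unenhanced)]              # ratio per patient

print(f"t = {t_stat:.1f}, mean TDPAU = {sum(tdpau) / n:.3f}")
```

A large negative t with n = 101 corresponds to the study's p < 0.001 reduction; a mean ratio below 1 mirrors the reported TDPAU.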
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
2. Cai J, Zhu H, Liu S, Qi Y, Chen R. Lung image segmentation via generative adversarial networks. Front Physiol 2024; 15:1408832. PMID: 39219839; PMCID: PMC11365075; DOI: 10.3389/fphys.2024.1408832.
Abstract
Introduction Lung image segmentation plays an important role in computer-aided pulmonary disease diagnosis and treatment. Methods This paper explores lung CT image segmentation with generative adversarial networks. We employ a variety of generative adversarial networks and use their image-to-image translation capability to perform segmentation: the network translates the original lung image into the segmented image. Results The generative adversarial network-based segmentation method is tested on a real lung image dataset. Experimental results show that the proposed method outperforms the state-of-the-art method. Discussion The generative adversarial network-based method is effective for lung image segmentation.
Affiliation(s)
- Jiaxin Cai
- School of Mathematics and Statistics, Xiamen University of Technology, Xiamen, China
- Hongfeng Zhu
- School of Mathematics and Statistics, Xiamen University of Technology, Xiamen, China
- Siyu Liu
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Yang Qi
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Rongshang Chen
- School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
3. Yasaka K, Saigusa H, Abe O. Effects of Intravenous Infusion of Iodine Contrast Media on the Tracheal Diameter and Lung Volume Measured with Deep Learning-Based Algorithm. J Imaging Inform Med 2024; 37:1609-1617. PMID: 38448759; PMCID: PMC11300755; DOI: 10.1007/s10278-024-01071-4.
Abstract
This study aimed to investigate the effects of intravenous injection of iodine contrast agent on the tracheal diameter and lung volume. In this retrospective study, a total of 221 patients (71.1 ± 12.4 years, 174 males) who underwent vascular dynamic CT examinations including the chest were included. Unenhanced, arterial-phase, and delayed-phase images were scanned. The tracheal luminal diameter at the level of the thoracic inlet and both lung volumes were evaluated by a radiologist using commercial software that allows automatic airway and lung segmentation. The tracheal diameter and both lung volumes were compared between the unenhanced and the arterial and delayed phases using a paired t-test, with Bonferroni correction for multiple comparisons. The tracheal diameter in the arterial phase (18.6 ± 2.4 mm) was statistically significantly smaller than that in unenhanced CT (19.1 ± 2.5 mm) (p < 0.001). No statistically significant difference was found in the tracheal diameter between the delayed phase (19.0 ± 2.4 mm) and unenhanced CT (p = 0.077). Both lung volumes in the arterial phase totalled 4131 ± 1051 mL, significantly smaller than in unenhanced CT (4332 ± 1076 mL) (p < 0.001). No statistically significant difference was found in lung volumes between the delayed phase (4284 ± 1054 mL) and unenhanced CT (p = 0.068). In conclusion, intravenous infusion of iodine contrast agent transiently decreased the tracheal diameter and both lung volumes.
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hiroyuki Saigusa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
4. Choe J, Hwang HJ, Lee SM, Yoon J, Kim N, Seo JB. CT Quantification of Interstitial Lung Abnormality and Interstitial Lung Disease: From Technical Challenges to Future Directions. Invest Radiol 2024 (online ahead of print). PMID: 39008898; DOI: 10.1097/rli.0000000000001103.
Abstract
Interstitial lung disease (ILD) encompasses a variety of lung disorders with varying degrees of inflammation or fibrosis, requiring a combination of clinical, imaging, and pathologic data for evaluation. Imaging is essential for the noninvasive diagnosis of the disease, as well as for assessing disease severity, monitoring its progression, and evaluating treatment response. However, traditional visual assessments of ILD with computed tomography (CT) suffer from reader variability. Automated quantitative CT offers a more objective approach by using computer-based analysis to consistently evaluate and measure ILD. Advancements in technology have significantly improved the accuracy and reliability of these measurements. Recently, interstitial lung abnormalities (ILAs), which represent potential preclinical ILD incidentally found on CT scans and are characterized by abnormalities in over 5% of any lung zone, have gained attention and clinical importance. The challenge lies in the accurate and consistent identification of ILA, given that its definition relies on a subjective threshold, making quantitative tools crucial for precise ILA evaluation. This review highlights the state of CT quantification of ILD and ILA, addressing clinical and research disparities; emphasizes how machine learning or deep learning in quantitative imaging can improve diagnosis and management by providing more accurate assessments; and suggests future directions for quantitative CT in this area.
Affiliation(s)
- Jooae Choe
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea (J.C., H.J.H., S.M.L., J.Y., N.K., J.B.S.); and Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea (J.Y. and N.K.)
5. Chang JY, Makary MS. Evolving and Novel Applications of Artificial Intelligence in Thoracic Imaging. Diagnostics (Basel) 2024; 14:1456. PMID: 39001346; PMCID: PMC11240935; DOI: 10.3390/diagnostics14131456.
Abstract
The advent of artificial intelligence (AI) is revolutionizing medicine, particularly radiology. With the development of newer models, AI applications are demonstrating improved performance and versatile utility in the clinical setting. Thoracic imaging is an area of profound interest, given the prevalence of chest imaging and the significant health implications of thoracic diseases. This review aims to highlight the promising applications of AI within thoracic imaging. It examines the role of AI, including its contributions to improving diagnostic evaluation and interpretation, enhancing workflow, and aiding in invasive procedures. It then discusses the current challenges and limitations faced by AI, such as the necessity of 'big data', ethical and legal considerations, and bias in representation. Lastly, it explores potential directions for the application of AI in thoracic radiology.
Affiliation(s)
- Jin Y Chang
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Mina S Makary
- Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Division of Vascular and Interventional Radiology, Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
6. Shafi SM, Chinnappan SK. Segmenting and classifying lung diseases with M-Segnet and Hybrid Squeezenet-CNN architecture on CT images. PLoS One 2024; 19:e0302507. PMID: 38753712; PMCID: PMC11098347; DOI: 10.1371/journal.pone.0302507.
Abstract
Diagnosing lung diseases accurately and promptly is essential for effectively managing this significant public health challenge on a global scale. This paper introduces a new framework called Modified Segnet-based Lung Disease Segmentation and Severity Classification (MSLDSSC). The MSLDSSC model comprises four phases: preprocessing, segmentation, feature extraction, and classification. Initially, the input image undergoes preprocessing using an improved Wiener filter technique. This technique estimates the power spectral density of the noisy and original images and computes the SNR, assisted by PSNR, to evaluate image quality. Next, the preprocessed image undergoes segmentation to identify and separate the region of interest (RoI) from the background objects in the lung image. We employ a Modified Segnet mechanism that utilizes a proposed hard tanh-Softplus activation function for effective segmentation. Following segmentation, features such as MLDN, entropy with MRELBP, shape features, and deep features are extracted. The retrieved feature set is then input into a hybrid severity classification model comprising two classifiers, SDPA-Squeezenet and DCNN, which train on the retrieved feature set and classify the severity level of lung diseases.
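The abstract names a "hard tanh-Softplus" activation but does not define it. As a purely hypothetical reading (an assumption, not the authors' formulation), one way to combine the two standard functions is to clamp a softplus response with hard tanh:

```python
# Hypothetical composition of two standard activations; the paper's actual
# "hard tanh-Softplus" definition is not given in the abstract.
import math

def softplus(x: float) -> float:
    # Numerically stable log(1 + e^x)
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def hardtanh(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    # Linear in [lo, hi], clamped outside
    return min(max(x, lo), hi)

def hard_tanh_softplus(x: float) -> float:
    return hardtanh(softplus(x))

print([round(hard_tanh_softplus(v), 3) for v in (-5.0, 0.0, 5.0)])  # → [0.007, 0.693, 1.0]
```

The composition keeps softplus's smooth, non-negative response for small inputs while bounding large activations, which is the usual motivation for hard-clipped variants.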
Affiliation(s)
- Syed Mohammed Shafi
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
7. Luu N, Van N, Shojazadeh A, Zhao Y, Molloi S. Reproducibility of a semiautomatic lobar lung tissue assignment technique on noncontrast CT scans: a study on swine animal model. Eur Radiol Exp 2024; 8:55. PMID: 38705940; PMCID: PMC11070405; DOI: 10.1186/s41747-024-00453-1.
Abstract
BACKGROUND To evaluate the reproducibility of a vessel-specific minimum cost path (MCP) technique used for lobar segmentation on noncontrast computed tomography (CT). METHODS Sixteen Yorkshire swine (49.9 ± 4.7 kg, mean ± standard deviation) underwent a total of 46 noncontrast helical CT scans from November 2020 to May 2022 using a 320-slice scanner. A semiautomatic algorithm was employed by three readers to segment the lung tissue and pulmonary arterial tree. The centerline of the arterial tree was extracted and partitioned into six subtrees for lobar assignment. The MCP technique assigned lobar territories by assigning each lung tissue voxel to the nearest arterial tree segment. MCP-derived lobar mass and volume were then compared between two acquisitions using linear regression, root mean square error (RMSE), and paired sample t-tests. An interobserver and intraobserver analysis of the lobar measurements was also performed. RESULTS The average whole-lung mass and volume were 663.7 ± 103.7 g and 1,444.22 ± 309.1 mL, respectively. The lobar mass measurements from the initial (MLobe1) and subsequent (MLobe2) acquisitions were correlated by MLobe1 = 0.99 MLobe2 + 1.76 (r = 0.99, p = 0.120, RMSE = 7.99 g). The lobar volume measurements from the initial (VLobe1) and subsequent (VLobe2) acquisitions were correlated by VLobe1 = 0.98 VLobe2 + 2.66 (r = 0.99, p = 0.160, RMSE = 15.26 mL). CONCLUSIONS The lobar mass and volume measurements showed excellent reproducibility with a vessel-specific assignment technique. This technique may serve for automated lung lobar segmentation, facilitating clinical regional pulmonary analysis. RELEVANCE STATEMENT Assessment of lobar mass or volume using noncontrast CT may allow for efficient region-specific treatment strategies for diseases such as pulmonary embolism and chronic thromboembolic pulmonary hypertension.
KEY POINTS • Lobar segmentation is essential for precise disease assessment and treatment planning. • Current methods for segmentation using fissure lines are problematic. • The proposed minimum cost path technique showed excellent reproducibility for lobar mass measurements in a swine model. • Interobserver agreement was excellent, with intraclass correlation coefficients greater than 0.90.
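The core assignment step above (each lung voxel joins the territory of its arterial subtree) can be sketched in a much-simplified form. The study routes this through a minimum cost path on the arterial tree; plain Euclidean nearest-neighbour matching and the coordinates below are simplifying assumptions for illustration only:

```python
# Simplified sketch of vessel-specific lobar assignment: each lung voxel takes
# the lobar label of the closest arterial centerline point. The actual study
# uses a minimum cost path through the arterial tree, not raw Euclidean distance.
import math

# Hypothetical arterial centerline points, each tagged with a lobar subtree.
centerline = [((10, 10, 5), "RUL"), ((10, 30, 5), "RLL"), ((40, 20, 5), "LUL")]

def assign_lobe(voxel):
    # Nearest centerline point wins (stand-in for the minimum cost path).
    return min(centerline, key=lambda cp: math.dist(voxel, cp[0]))[1]

labels = [assign_lobe(v) for v in [(12, 12, 5), (11, 28, 4), (38, 22, 6)]]
print(labels)  # → ['RUL', 'RLL', 'LUL']
```

Lobar mass and volume then follow by summing voxel masses and volumes within each label, which is what the two acquisitions compare.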
Affiliation(s)
- Nile Luu
- Department of Radiological Sciences, Medical Sciences I, B-140, University of California, Irvine, Irvine, CA, 92697, USA
- Nathan Van
- Department of Radiological Sciences, Medical Sciences I, B-140, University of California, Irvine, Irvine, CA, 92697, USA
- Alireza Shojazadeh
- Department of Radiological Sciences, Medical Sciences I, B-140, University of California, Irvine, Irvine, CA, 92697, USA
- Yixiao Zhao
- Department of Radiological Sciences, Medical Sciences I, B-140, University of California, Irvine, Irvine, CA, 92697, USA
- Sabee Molloi
- Department of Radiological Sciences, Medical Sciences I, B-140, University of California, Irvine, Irvine, CA, 92697, USA
8. Dwivedi K, Sharkey M, Alabed S, Langlotz CP, Swift AJ, Bluethgen C. External validation, radiological evaluation, and development of deep learning automatic lung segmentation in contrast-enhanced chest CT. Eur Radiol 2024; 34:2727-2737. PMID: 37775589; PMCID: PMC10957646; DOI: 10.1007/s00330-023-10235-9.
Abstract
OBJECTIVES There is a need for CT pulmonary angiography (CTPA) lung segmentation models. Clinical translation requires radiological evaluation of model outputs, understanding of limitations, and identification of failure points. This multicentre study aims to develop an accurate CTPA lung segmentation model, with evaluation of outputs in two diverse patient cohorts with pulmonary hypertension (PH) and interstitial lung disease (ILD). METHODS This retrospective study develops an nnU-Net-based segmentation model using data from two specialist centres (UK and USA). The model was trained (n = 37), tested (n = 12), and clinically evaluated (n = 176) on a diverse 'real-world' cohort of 225 PH patients with volumetric CTPAs. Dice score coefficient (DSC) and normalised surface distance (NSD) were used for testing. Clinical evaluation of outputs was performed by two radiologists who assessed the clinical significance of errors. External validation was performed on heterogeneous contrast and non-contrast scans from 28 ILD patients. RESULTS A total of 225 PH and 28 ILD patients with diverse demographic and clinical characteristics were evaluated. Mean accuracy, DSC, and NSD scores were 0.998 (95% CI 0.9976-0.9989), 0.990 (0.9840-0.9962), and 0.983 (0.9686-0.9972), respectively. There were no segmentation failures. On radiological review, 82% of internal and 71% of external cases had no errors; 18% and 25%, respectively, had clinically insignificant errors. Peripheral atelectasis and consolidation were common causes of suboptimal segmentation. One external case (0.5%) with a patulous oesophagus had a clinically significant error. CONCLUSION This state-of-the-art CTPA lung segmentation model provides accurate outputs with minimal clinical errors on evaluation across two diverse cohorts with PH and ILD. CLINICAL RELEVANCE STATEMENT Clinical translation of artificial intelligence models requires radiological review and understanding of model limitations. This study develops an externally validated state-of-the-art model with robust radiological review. Intended clinical use is in techniques such as lung volume or parenchymal disease quantification. KEY POINTS • Accurate, externally validated CT pulmonary angiography (CTPA) lung segmentation model tested in two large heterogeneous clinical cohorts (pulmonary hypertension and interstitial lung disease). • No segmentation failures; robust review of model outputs by radiologists found 1 (0.5%) clinically significant segmentation error. • Intended clinical use of this model is a necessary step in techniques such as lung volume, parenchymal disease quantification, or pulmonary vessel analysis.
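The Dice score coefficient used for testing above measures overlap between a predicted and a reference segmentation mask, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch on toy voxel sets (not the study's data):

```python
# Dice score coefficient on binary masks represented as sets of voxel indices.
def dice(pred: set, ref: set) -> float:
    if not pred and not ref:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(pred & ref) / (len(pred) + len(ref))

pred = {(0, 0), (0, 1), (1, 0)}   # predicted lung voxels (toy example)
ref = {(0, 0), (0, 1), (1, 1)}    # reference annotation

print(dice(pred, ref))  # → 0.6666666666666666
```

A DSC of 0.990, as reported for this model, means predicted and reference lung masks are nearly voxel-identical; NSD complements it by scoring boundary agreement rather than volume overlap.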
Affiliation(s)
- Krit Dwivedi
- Department of Infection, Immunity & Cardiovascular Disease, Medical School, University of Sheffield, Sheffield, UK
- Academic Department of Radiology, Royal Hallamshire Hospital, Glossop Road, Sheffield, S10 2JF, UK
- Michael Sharkey
- 3DLab, Sheffield Teaching Hospitals NHS Trust, Sheffield, UK
- Samer Alabed
- Department of Infection, Immunity & Cardiovascular Disease, Medical School, University of Sheffield, Sheffield, UK
- Curtis P Langlotz
- Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford University, Stanford, CA, USA
- Andy J Swift
- Department of Infection, Immunity & Cardiovascular Disease, Medical School, University of Sheffield, Sheffield, UK
- Christian Bluethgen
- Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford University, Stanford, CA, USA
9. Quanyang W, Yao H, Sicong W, Linlin Q, Zewei Z, Donghui H, Hongjia L, Shijun Z. Artificial intelligence in lung cancer screening: Detection, classification, prediction, and prognosis. Cancer Med 2024; 13:e7140. PMID: 38581113; PMCID: PMC10997848; DOI: 10.1002/cam4.7140.
Abstract
BACKGROUND The exceptional capabilities of artificial intelligence (AI) in extracting image information and processing complex models have led to its recognition across various medical fields. With the continuous evolution of AI technologies based on deep learning, particularly the advent of convolutional neural networks (CNNs), AI presents an expanded horizon of applications in lung cancer screening, including lung segmentation, nodule detection, false-positive reduction, nodule classification, and prognosis. METHODOLOGY This review initially analyzes the current status of AI technologies. It then explores the applications of AI in lung cancer screening, including lung segmentation, nodule detection, and classification, and assesses the potential of AI in enhancing the sensitivity of nodule detection and reducing false-positive rates. Finally, it addresses the challenges and future directions of AI in lung cancer screening. RESULTS AI holds substantial prospects in lung cancer screening. It demonstrates significant potential in improving nodule detection sensitivity, reducing false-positive rates, and classifying nodules, while also showing value in predicting nodule growth and pathological/genetic typing. CONCLUSIONS AI offers a promising supportive approach to lung cancer screening, presenting considerable potential in enhancing nodule detection sensitivity, reducing false-positive rates, and classifying nodules. However, the universality and interpretability of AI results need further enhancement. Future research should focus on the large-scale validation of new deep learning-based algorithms and multi-center studies to improve the efficacy of AI in lung cancer screening.
Affiliation(s)
- Wu Quanyang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huang Yao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wang Sicong
- Magnetic Resonance Imaging Research, General Electric Healthcare (China), Beijing, China
- Qi Linlin
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhang Zewei
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hou Donghui
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Li Hongjia
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhao Shijun
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
10. Rai S, Bhatt JS, Patra SK. An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT. J Imaging Inform Med 2024 (online ahead of print). PMID: 38491236; DOI: 10.1007/s10278-024-01062-5.
Abstract
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow to achieve diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT) at 100 mSv. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first network learns a restoration function in an unsupervised manner from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion from the SR-ULDCT, followed by lobe-wise colorization. Finally, we extract the five lobes to account for the presence of ground glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the input degraded LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion, comparing our results with the state of the art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
Affiliation(s)
- Swati Rai
- Indian Institute of Information Technology Vadodara, Vadodara, India
- Jignesh S Bhatt
- Indian Institute of Information Technology Vadodara, Vadodara, India
11. Prabhu NK, Wong MK, Klapper JA, Haney JC, Mazurowski MA, Mammarappallil JG, Hartwig MG. Computed Tomography Volumetrics for Size Matching in Lung Transplantation for Restrictive Disease. Ann Thorac Surg 2024; 117:413-421. PMID: 37031770; DOI: 10.1016/j.athoracsur.2023.03.033.
Abstract
BACKGROUND There is no consensus on the optimal allograft sizing strategy for lung transplantation in restrictive lung disease. Current methods based on predicted total lung capacity (pTLC) ratios do not account for the diminutive recipient chest size. The study investigators hypothesized that a new sizing ratio incorporating preoperative recipient computed tomographic lung volumes (CTVol) would be associated with postoperative outcomes. METHODS A retrospective single-institution study was conducted of adults undergoing primary bilateral lung transplantation between January 2016 and July 2020 for restrictive lung disease. CTVol was computed for recipients by using advanced segmentation software. Two sizing ratios were calculated: the pTLC ratio (pTLCdonor/pTLCrecipient) and a new volumetric ratio (pTLCdonor/CTVolrecipient). Patients were divided into reference, oversized, and undersized groups on the basis of ratio quintiles, and multivariable models were used to assess the effect of the ratios on primary graft dysfunction and survival. RESULTS CTVol was successfully acquired in 218 of 220 (99.1%) patients. In adjusted analysis, undersizing on the basis of the volumetric ratio was independently associated with decreased primary graft dysfunction grade 2 or 3 within 72 hours (odds ratio, 0.42; 95% CI, 0.20-0.87; P = .02). The pTLC ratio was not significantly associated with primary graft dysfunction. Oversizing on the basis of the volumetric ratio was independently associated with an increased risk of death (hazard ratio, 2.27; 95% CI, 1.04-4.99; P = .04), whereas the pTLC ratio did not have a significant survival association. CONCLUSIONS Using computed tomography-acquired lung volumes for donor-recipient size matching in lung transplantation is feasible with advanced segmentation software. This method may be more predictive of outcome compared with current sizing methods, which use gender and height only.
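The two sizing ratios compared in the study reduce to simple quotients: the conventional pTLC ratio (donor pTLC / recipient pTLC) and the proposed volumetric ratio (donor pTLC / recipient CTVol). The sketch below uses made-up volumes, not study data, to show how a shrunken restrictive chest can make the volumetric ratio flag oversizing that the pTLC ratio misses:

```python
# Hypothetical numbers (litres), not from the study: in restrictive disease the
# CT-measured lung volume (CTVol) falls well below the predicted TLC, so the
# same donor lung looks far larger under the volumetric ratio.
def ptlc_ratio(donor_ptlc: float, recipient_ptlc: float) -> float:
    return donor_ptlc / recipient_ptlc

def volumetric_ratio(donor_ptlc: float, recipient_ctvol: float) -> float:
    return donor_ptlc / recipient_ctvol

donor_ptlc, recip_ptlc, recip_ctvol = 6.2, 5.8, 3.1
print(round(ptlc_ratio(donor_ptlc, recip_ptlc), 2))         # → 1.07
print(round(volumetric_ratio(donor_ptlc, recip_ctvol), 2))  # → 2.0
```

In the study, patients were then binned into reference, oversized, and undersized groups by quintiles of each ratio before modelling primary graft dysfunction and survival.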
Affiliation(s)
- Neel K Prabhu
- Duke University School of Medicine, Durham, North Carolina
- Megan K Wong
- Duke University School of Medicine, Durham, North Carolina
- Jacob A Klapper
- Duke University School of Medicine, Durham, North Carolina; Division of Thoracic and Cardiovascular Surgery, Department of Surgery, Duke University Medical Center, Durham, North Carolina
- John C Haney
- Duke University School of Medicine, Durham, North Carolina; Division of Thoracic and Cardiovascular Surgery, Department of Surgery, Duke University Medical Center, Durham, North Carolina
- Maciej A Mazurowski
- Duke University School of Medicine, Durham, North Carolina; Department of Computer Science, Duke University, Durham, North Carolina; Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina; Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina; Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Joseph G Mammarappallil
- Duke University School of Medicine, Durham, North Carolina; Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Matthew G Hartwig
- Duke University School of Medicine, Durham, North Carolina; Division of Thoracic and Cardiovascular Surgery, Department of Surgery, Duke University Medical Center, Durham, North Carolina
12. Dadras AA, Jaziri A, Frodl E, Vogl TJ, Dietz J, Bucher AM. Lightweight Techniques to Improve Generalization and Robustness of U-Net Based Networks for Pulmonary Lobe Segmentation. Bioengineering (Basel) 2023; 11:21. PMID: 38247898; PMCID: PMC10813310; DOI: 10.3390/bioengineering11010021.
Abstract
Lung lobe segmentation in chest CT is relevant to a wide range of clinical applications. However, existing segmentation pipelines often exhibit vulnerabilities and performance degradations when applied to external datasets. This is usually attributed to the size of the available dataset or model. We show that it is possible to enhance generalizability without huge resources by carefully curating the dataset and combining machine learning with medical expertise. Multiple machine learning techniques (self-supervision (SSL), attention (A), and data augmentation (DA)) are used to train a fast and fully automated lung lobe segmentation model based on a 2D U-Net. Our study evaluated these techniques on a diverse dataset collected under the RACOON project, encompassing 100 chest CT scans from patients with bacterial, viral, or SARS-CoV-2 infections. We compare our model to a baseline U-Net trained on the same dataset. Our approach significantly improved segmentation accuracy (Dice score of 92.8% vs. 82.3%, p < 0.001). Moreover, our model achieved state-of-the-art performance (Dice score of 92.8% vs. 90.8% for the literature's state of the art, p = 0.102) with fewer training examples (69 vs. 231 CT scans). Among the techniques, data augmentation with expert knowledge had the most significant impact, enhancing the Dice score by +0.056. Notably, these enhancements are not limited to lobe segmentation but can be seamlessly integrated into various medical imaging segmentation tasks, demonstrating their versatility and potential for broader applications.
Affiliation(s)
- Armin A. Dadras
- Division of Phoniatrics-Logopedics, Department of Otorhinolaryngology, Medical University of Vienna, Währinger Gürtel 18-20, 1090 Vienna, Austria
- Achref Jaziri
- Center for Cognition and Computation, Goethe University Frankfurt, Robert Meyer Str. 10-12, 60323 Frankfurt am Main, Germany
- Eric Frodl
- Institute for Diagnostic and Interventional Radiology, University Hospital, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt, Germany
- Thomas J. Vogl
- Institute for Diagnostic and Interventional Radiology, University Hospital, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt, Germany
- Julia Dietz
- Institute for Diagnostic and Interventional Radiology, University Hospital, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt, Germany
- Department of Medicine, Medical Clinic 1, University Hospital, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt, Germany
- Andreas M. Bucher
- Institute for Diagnostic and Interventional Radiology, University Hospital, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt, Germany
13
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785] [PMCID: PMC10796150] [DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as in industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies and early diagnosis and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
14
Metz C, Weng AM, Heidenreich JF, Slawig A, Benkert T, Köstler H, Veldhoen S. Reproducibility of non-contrast enhanced multi breath-hold ultrashort echo time functional lung MRI. Magn Reson Imaging 2023; 98:149-154. [PMID: 36681313] [DOI: 10.1016/j.mri.2023.01.020]
Abstract
PURPOSE To evaluate the intraindividual reproducibility of functional lung imaging using non-contrast enhanced multi breath-hold 3D-UTE MRI. METHODS Ten healthy volunteers underwent non-contrast enhanced 3D-UTE MRI at three time points for same-day and different-day measurements, employing a stack-of-spirals trajectory at 3 T. At each time point, inspiratory and expiratory breathing states were acquired for tidal and deep breathing, each within a single breath-hold. For functional image analysis, fractional ventilation (FV) was calculated pixelwise from the MR signal change after image registration. To decouple FV from breathing depth, the individual lung volume was used for volume adjustment (rFV). Reproducibility was evaluated in eight lung segments. Statistical analyses included two-way mixed intraclass correlation (ICC), sign test, Friedman test, and modified Bland-Altman analyses. RESULTS FV from tidal breathing showed an ICC of 0.81, a bias of 1.3%, and a confidence interval (CI) ranging from -67.1 to 69.6%. FV from deep breathing was more reproducible, with an ICC of 0.92 (bias, -0.2%; CI, -34.2 to 33.7%). Following volume adjustment, reproducibility of rFV for tidal breathing improved (ICC, 0.86; bias, 2.0%; CI, -34.3 to 38.3%), whereas volume adjustment did not bring significant benefits for deep breathing (ICC, 0.89; bias, 2.8%; CI, -24.9 to 30.5%). Reproducibility was independent of the examination day. CONCLUSION Non-contrast-enhanced multi breath-hold 3D-UTE MRI allows for highly reproducible ventilation imaging.
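Fractional ventilation is derived pixelwise from the registered expiratory and inspiratory signal maps; a sketch assuming the common definition FV = (S_exp − S_insp) / S_exp, which exploits the drop in proton-density signal as the lung inflates (the paper's exact formulation may differ):

```python
import numpy as np

def fractional_ventilation(s_exp, s_insp, eps=1e-8):
    """Pixelwise fractional ventilation from registered expiratory and
    inspiratory MR signal maps; eps guards against division by zero."""
    s_exp = np.asarray(s_exp, dtype=float)
    s_insp = np.asarray(s_insp, dtype=float)
    return (s_exp - s_insp) / (s_exp + eps)

# A voxel whose signal halves from expiration to inspiration has FV ~ 0.5
fv = fractional_ventilation([[2.0, 4.0]], [[1.0, 3.0]])
print(np.round(fv, 2))
```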
Affiliation(s)
- C Metz
- Department of Diagnostic and Interventional Radiology, University Hospital of Würzburg, Würzburg, Germany
- A M Weng
- Department of Diagnostic and Interventional Radiology, University Hospital of Würzburg, Würzburg, Germany
- J F Heidenreich
- Department of Diagnostic and Interventional Radiology, University Hospital of Würzburg, Würzburg, Germany
- A Slawig
- Department of Diagnostic and Interventional Radiology, University Hospital of Würzburg, Würzburg, Germany
- T Benkert
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- H Köstler
- Department of Diagnostic and Interventional Radiology, University Hospital of Würzburg, Würzburg, Germany
- S Veldhoen
- Department of Diagnostic and Interventional Radiology, University Hospital of Würzburg, Würzburg, Germany
15
Scapicchio C, Chincarini A, Ballante E, Berta L, Bicci E, Bortolotto C, Brero F, Cabini RF, Cristofalo G, Fanni SC, Fantacci ME, Figini S, Galia M, Gemma P, Grassedonio E, Lascialfari A, Lenardi C, Lionetti A, Lizzi F, Marrale M, Midiri M, Nardi C, Oliva P, Perillo N, Postuma I, Preda L, Rastrelli V, Rizzetto F, Spina N, Talamonti C, Torresin A, Vanzulli A, Volpi F, Neri E, Retico A. A multicenter evaluation of a deep learning software (LungQuant) for lung parenchyma characterization in COVID-19 pneumonia. Eur Radiol Exp 2023; 7:18. [PMID: 37032383] [PMCID: PMC10083148] [DOI: 10.1186/s41747-023-00334-z]
Abstract
BACKGROUND The role of computed tomography (CT) in the diagnosis and characterization of coronavirus disease 2019 (COVID-19) pneumonia has been widely recognized. We evaluated the performance of a software package for quantitative analysis of chest CT, the LungQuant system, by comparing its results with independent visual evaluations by a group of 14 clinical experts. The aim of this work is to evaluate the ability of the automated tool to extract quantitative information from lung CT that is relevant for the design of a diagnosis support model. METHODS LungQuant segments both the lungs and the lesions associated with COVID-19 pneumonia (ground-glass opacities and consolidations) and computes derived quantities corresponding to the qualitative characteristics used to clinically assess COVID-19 lesions. The comparison was carried out on 120 publicly available CT scans of patients affected by COVID-19 pneumonia. Scans were scored for four qualitative metrics: percentage of lung involvement, type of lesion, and two disease distribution scores. We evaluated the agreement between the LungQuant output and the visual assessments through receiver operating characteristic area under the curve (AUC) analysis and by fitting a nonlinear regression model. RESULTS Despite the rather large heterogeneity in the qualitative labels assigned by the clinical experts for each metric, we found good agreement between the expert evaluations and the LungQuant output. The AUC values obtained for the four qualitative metrics were 0.98, 0.85, 0.90, and 0.81. CONCLUSIONS Visual clinical evaluation could be complemented and supported by computer-aided quantification, whose values match the average evaluation of several independent clinical experts. KEY POINTS We conducted a multicenter evaluation of the deep learning-based LungQuant automated software. We translated qualitative assessments into quantifiable metrics to characterize coronavirus disease 2019 (COVID-19) pneumonia lesions. Comparing the software output to the clinical evaluations, results were satisfactory despite the heterogeneity of the clinical evaluations. An automatic quantification tool may contribute to improving the clinical workflow of COVID-19 pneumonia.
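Agreement between a continuous software output and binarized expert labels can be summarized by the ROC AUC; a self-contained sketch using the Mann-Whitney formulation (illustrative only, not the study's pipeline):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as P(score_pos > score_neg), ties counted as 0.5."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    # pairwise comparison; fine for the modest sample sizes involved here
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # 1.0 (perfect separation)
```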
Affiliation(s)
- Camilla Scapicchio
- Physics Department, University of Pisa, Pisa, Italy
- Pisa Division, National Institute for Nuclear Physics, Pisa, Italy
- Andrea Chincarini
- Genova Division, National Institute for Nuclear Physics, Genova, Italy
- Elena Ballante
- Department of Political and Social Sciences, University of Pavia, Pavia, Italy
- Pavia Division, National Institute for Nuclear Physics, Pavia, Italy
- Luca Berta
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Milano Division, National Institute for Nuclear Physics, Milan, Italy
- Eleonora Bicci
- Department of Experimental and Clinical Biomedical Sciences, Radiodiagnostic Unit n. 2, University of Florence-Azienda Ospedaliero-Universitaria Careggi, Florence, Italy
- Chandra Bortolotto
- Unit of Imaging and Radiotherapy, Department of Clinical-Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Institute of Radiology, Department of Diagnostic and Imaging Services, Fondazione IRCCS Policlinico San Matteo, Pavia, Italy
- Francesca Brero
- Pavia Division, National Institute for Nuclear Physics, Pavia, Italy
- Raffaella Fiamma Cabini
- Pavia Division, National Institute for Nuclear Physics, Pavia, Italy
- Department of Mathematics, University of Pavia, Pavia, Italy
- Giuseppe Cristofalo
- Department of Biomedicine, Neuroscience and Advanced Diagnostic (BiND), University of Palermo, Palermo, Italy
- Maria Evelina Fantacci
- Physics Department, University of Pisa, Pisa, Italy
- Pisa Division, National Institute for Nuclear Physics, Pisa, Italy
- Silvia Figini
- Department of Political and Social Sciences, University of Pavia, Pavia, Italy
- Pavia Division, National Institute for Nuclear Physics, Pavia, Italy
- Massimo Galia
- Department of Biomedicine, Neuroscience and Advanced Diagnostic (BiND), University of Palermo, Palermo, Italy
- Pietro Gemma
- Post-graduate School in Radiodiagnostics, University of Milan, Milan, Italy
- Emanuele Grassedonio
- Department of Biomedicine, Neuroscience and Advanced Diagnostic (BiND), University of Palermo, Palermo, Italy
- Cristina Lenardi
- Milano Division, National Institute for Nuclear Physics, Milan, Italy
- Department of Physics "Aldo Pontremoli", University of Milan, Milan, Italy
- Alice Lionetti
- Unit of Imaging and Radiotherapy, Department of Clinical-Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Francesca Lizzi
- Physics Department, University of Pisa, Pisa, Italy
- Pisa Division, National Institute for Nuclear Physics, Pisa, Italy
- Maurizio Marrale
- Department of Physics and Chemistry "Emilio Segrè", University of Palermo, Palermo, Italy
- Catania Division, National Institute for Nuclear Physics, Catania, Italy
- Massimo Midiri
- Department of Biomedicine, Neuroscience and Advanced Diagnostic (BiND), University of Palermo, Palermo, Italy
- Cosimo Nardi
- Department of Experimental and Clinical Biomedical Sciences, Radiodiagnostic Unit n. 2, University of Florence-Azienda Ospedaliero-Universitaria Careggi, Florence, Italy
- Piernicola Oliva
- Cagliari Division, National Institute for Nuclear Physics, Monserrato, Cagliari, Italy
- Department of Chemical, Physical, Mathematical and Natural Sciences, University of Sassari, Sassari, Italy
- Noemi Perillo
- Post-graduate School in Radiodiagnostics, University of Milan, Milan, Italy
- Ian Postuma
- Pavia Division, National Institute for Nuclear Physics, Pavia, Italy
- Lorenzo Preda
- Unit of Imaging and Radiotherapy, Department of Clinical-Surgical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Institute of Radiology, Department of Diagnostic and Imaging Services, Fondazione IRCCS Policlinico San Matteo, Pavia, Italy
- Vieri Rastrelli
- Department of Experimental and Clinical Biomedical Sciences, Radiodiagnostic Unit n. 2, University of Florence-Azienda Ospedaliero-Universitaria Careggi, Florence, Italy
- Francesco Rizzetto
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Postgraduate School of Diagnostic and Interventional Radiology, University of Milan, Milan, Italy
- Nicola Spina
- Department of Translational Research, Academic Radiology, University of Pisa, Pisa, Italy
- Cinzia Talamonti
- Department Biomedical Experimental and Clinical Science "Mario Serio", University of Florence, Florence, Italy
- Florence Division, National Institute for Nuclear Physics, Sesto Fiorentino, Firenze, Italy
- Alberto Torresin
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Milano Division, National Institute for Nuclear Physics, Milan, Italy
- Department of Physics "Aldo Pontremoli", University of Milan, Milan, Italy
- Angelo Vanzulli
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Federica Volpi
- Department of Translational Research, Academic Radiology, University of Pisa, Pisa, Italy
- Emanuele Neri
- Department of Translational Research, Academic Radiology, University of Pisa, Pisa, Italy
- Italian Society of Medical and Interventional Radiology, SIRM Foundation, Milan, Italy
16
Ke J, Lv Y, Ma F, Du Y, Xiong S, Wang J, Wang J. Deep learning-based approach for the automatic segmentation of adult and pediatric temporal bone computed tomography images. Quant Imaging Med Surg 2023; 13:1577-1591. [PMID: 36915310] [PMCID: PMC10006112] [DOI: 10.21037/qims-22-658]
Abstract
Background Automatic segmentation of temporal bone computed tomography (CT) images is fundamental to image-guided otologic surgery and the intelligent analysis of CT images in the field of otology. This study was conducted to test a convolutional neural network (CNN) model that can automatically segment almost all temporal bone anatomy structures in adult and pediatric CT images. Methods A dataset comprising 80 annotated CT volumes was collected, of which 40 samples were obtained from adults and 40 from children. Of these, 60 annotated CT volumes (30 from adults and 30 from children) were used to train the model, and the remaining 20 were employed to determine the model's generalizability for automatic segmentation. Finally, the Dice coefficient (DC) and average symmetric surface distance (ASSD) were utilized as metrics to evaluate the performance of the CNN model. Two independent-sample t-tests were used to compare the test set results of adults and children. Results In the adult test set, the mean DC values of all the structures ranged from 0.714 to 0.912, and the ASSD values were less than 0.24 mm for 11 structures. In the pediatric test set, the mean DC values of all the structures ranged from 0.658 to 0.915, and the ASSD values were less than 0.18 mm for 11 structures. There was no statistically significant difference between the adult and pediatric test sets for most temporal bone structures. Conclusions Our CNN model shows excellent automatic segmentation performance and good generalizability for both adult and pediatric temporal bone CT images, which can help to advance otologist education, intelligent imaging diagnosis, surgery simulation, application of augmented reality, and preoperative planning for image-guided otology surgery.
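The average symmetric surface distance (ASSD) reported above averages, in both directions, each surface point's distance to the closest point of the other surface; a brute-force sketch on point sets (real pipelines typically use distance transforms or KD-trees):

```python
import numpy as np

def assd(points_a, points_b):
    """Average symmetric surface distance between two surface point sets."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    # full pairwise distance matrix (|A| x |B|)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # mean of A->B nearest distances and B->A nearest distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = [[0, 0], [1, 0]]
b = [[0, 1], [1, 1]]
print(assd(a, b))  # 1.0: every point is exactly 1 unit from the other surface
```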
Affiliation(s)
- Jia Ke
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Yi Lv
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- North China Research Institute of Electro-optics, Beijing, China
- Furong Ma
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Yali Du
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Shan Xiong
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Jiang Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, Beijing, China
- Department of Otorhinolaryngology, First Affiliated Hospital, Nanjing Medical University, Nanjing, China
17
Choe J, Lee SM, Hwang HJ, Lee SM, Yun J, Kim N, Seo JB. Artificial Intelligence in Lung Imaging. Semin Respir Crit Care Med 2022; 43:946-960. [PMID: 36174647] [DOI: 10.1055/s-0042-1755571]
Abstract
Recently, interest and advances in artificial intelligence (AI) including deep learning for medical images have surged. As imaging plays a major role in the assessment of pulmonary diseases, various AI algorithms have been developed for chest imaging. Some of these have been approved by governments and are now commercially available in the marketplace. In the field of chest radiology, there are various tasks and purposes that are suitable for AI: initial evaluation/triage of certain diseases, detection and diagnosis, quantitative assessment of disease severity and monitoring, and prediction for decision support. While AI is a powerful technology that can be applied to medical imaging and is expected to improve our current clinical practice, some obstacles must be addressed for the successful implementation of AI in workflows. Understanding and becoming familiar with the current status and potential clinical applications of AI in chest imaging, as well as remaining challenges, would be essential for radiologists and clinicians in the era of AI. This review introduces the potential clinical applications of AI in chest imaging and also discusses the challenges for the implementation of AI in daily clinical practice and future directions in chest imaging.
Affiliation(s)
- Jooae Choe
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Hye Jeon Hwang
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Jihye Yun
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
18
Xing H, Zhang X, Nie Y, Wang S, Wang T, Jing H, Li F. A deep learning-based post-processing method for automated pulmonary lobe and airway trees segmentation using chest CT images in PET/CT. Quant Imaging Med Surg 2022; 12:4747-4757. [PMID: 36185049] [PMCID: PMC9511416] [DOI: 10.21037/qims-21-1116]
Abstract
Background To develop and validate an automated deep learning model combined with a post-processing algorithm to segment six pulmonary anatomical regions (the five pulmonary lobes and the airway tree) in chest computed tomography (CT) images acquired during positron emission tomography/computed tomography (PET/CT) scans; the proposed algorithm could support accurate localization of lung disease. Methods Patients who underwent PET/CT imaging with an extra chest CT scan were retrospectively enrolled. Segmentation of the six regions in CT was performed via a convolutional neural network (CNN) with DenseVNet architecture combined with post-processing algorithms. Three evaluation metrics were used to assess the performance of this combined method, and the agreement between the combined model and ground-truth segmentations in the test set was analyzed. Results A total of 640 cases were enrolled. The combined model, involving deep learning and post-processing methods, outperformed the deep learning model alone. In the test set, the all-lobes overall Dice coefficient, Hausdorff distance, and Jaccard coefficient were 0.972, 12.025 mm, and 0.948, respectively. The airway-tree Dice coefficient, Hausdorff distance, and Jaccard coefficient were 0.849, 32.076 mm, and 0.815, respectively. Good agreement was observed between the model and ground-truth segmentations in every plot. Conclusions The proposed combined model can automatically segment the five pulmonary lobes and airway tree on chest CT imaging in PET/CT, with higher performance than the deep learning model alone in each region of the test set.
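The Jaccard coefficient and Hausdorff distance reported above can be sketched as follows (illustrative implementations on binary masks and surface point sets, not the study's code):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard coefficient (intersection over union) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty
    return np.logical_and(pred, truth).sum() / union

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance: worst-case surface disagreement."""
    a = np.asarray(points_a, float)
    b = np.asarray(points_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

print(jaccard(np.array([1, 1, 0, 1]), np.array([1, 0, 1, 1])))  # 2/4 = 0.5
```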
Affiliation(s)
- Haiqun Xing
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
- Tong Wang
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
- Hongli Jing
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
- Fang Li
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Beijing, China
19
Bhattacharyya D, Thirupathi Rao N, Joshua ESN, Hu YC. A bi-directional deep learning architecture for lung nodule semantic segmentation. Vis Comput 2022; 39:1-17. [PMID: 36097497] [PMCID: PMC9453728] [DOI: 10.1007/s00371-022-02657-1]
Abstract
Lung nodules are abnormal growths or lesions that may occur in either lung. Most lung nodules are harmless (not cancerous/malignant), and only rarely do pulmonary nodules turn out to be lung cancer. X-rays and CT scans identify lung nodules; doctors may term the growth a lung spot, coin lesion, or shadow. Properly acquired computed tomography (CT) scans of the lungs are necessary to obtain an accurate diagnosis and a good estimate of the severity of lung cancer. This study aims to design and evaluate a deep learning (DL) algorithm for identifying pulmonary nodules (PNs) using the LUNA-16 dataset and to examine the prevalence of PNs using DB-NET, a new resource-efficient deep learning architecture proposed in the paper. When a physician orders a CT scan, an accurate and efficient lung nodule segmentation method is needed to detect lung cancer at an early stage. However, segmentation of lung nodules is a difficult task because of the nodules' characteristics on the CT image as well as their concealed shape, visual quality, and context. The DB-NET model architecture is presented as a resource-efficient deep learning solution to this challenge; it incorporates the Mish nonlinearity function and mask class weights to improve segmentation effectiveness. The LUNA-16 dataset, which contained 1200 lung nodules, was used extensively to train and assess the proposed model. The DB-NET architecture surpasses the existing U-NET model with a Dice coefficient index of 88.89%, achieving a level of accuracy similar to that of human experts.
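The Mish nonlinearity incorporated in DB-NET is defined as mish(x) = x · tanh(softplus(x)); a numerically stable sketch:

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)); smooth and non-monotonic."""
    x = np.asarray(x, dtype=float)
    # stable softplus: log(1 + exp(x)) without overflow for large |x|
    softplus = np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)
    return x * np.tanh(softplus)

print(round(float(mish(0.0)), 4))  # 0.0
```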
Affiliation(s)
- Debnath Bhattacharyya
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Guntur, 522 502 India
- N. Thirupathi Rao
- Department of Computer Science and Engineering, Vignan’s Institute of Information Technology (A), Visakhapatnam, 530049 AP India
- Eali Stephen Neal Joshua
- Department of Computer Science and Engineering, Vignan’s Institute of Information Technology (A), Visakhapatnam, 530049 AP India
- Yu-Chen Hu
- Department of Computer Science and Information Management, Providence University, 200, Sec. 7, Taiwan Boulevard, Shalu Dist., Taichung City, 43301 Taiwan R.O.C
20
Yousefzadeh M, Hasanpour M, Zolghadri M, Salimi F, Yektaeian Vaziri A, Mahmoudi Aqeel Abadi A, Jafari R, Esfahanian P, Nazem-Zadeh MR. Deep learning framework for prediction of infection severity of COVID-19. Front Med (Lausanne) 2022; 9:940960. [PMID: 36059818] [PMCID: PMC9428758] [DOI: 10.3389/fmed.2022.940960]
Abstract
With the onset of the COVID-19 pandemic, quantifying the condition of positively diagnosed patients is of paramount importance. Chest CT scans can be used to measure the severity of a lung infection and to isolate the involvement sites in order to increase awareness of a patient's disease progression. In this work, we developed a deep learning framework for lung infection severity prediction. To this end, we collected a dataset of 232 chest CT scans, involved two public datasets with an additional 59 scans for our model's training, and used two external test sets with 21 scans for evaluation. On an input chest computed tomography (CT) scan, our framework performs, in parallel, lung lobe segmentation utilizing a pre-trained model and infection segmentation using three distinct trained SE-ResNet18-based U-Net models, one for each of the axial, coronal, and sagittal views. From the lobe and infection segmentation masks, we calculate the infection severity percentage in each lobe and classify that percentage into 6 categories of infection severity score using a k-nearest neighbors (k-NN) model. The lobe segmentation model achieved a Dice Similarity Score (DSC) in the range of [0.918, 0.981] for different lung lobes, and our infection segmentation models gained DSC scores of 0.7254 and 0.7105 on our two test sets, respectively. Two resident radiologists were assigned the same infection segmentation tasks, for which they obtained DSC scores of 0.7281 and 0.6693 on the two test sets. Finally, performance on the infection severity score over the entire test datasets was calculated, for which the framework resulted in a Mean Absolute Error (MAE) of 0.505 ± 0.029, while the resident radiologists' was 0.571 ± 0.039.
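The per-lobe severity percentage and its k-NN mapping to a discrete score can be sketched as follows; the function names, the k value, and the training points are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lobe_severity(infection_mask, lobe_mask):
    """Percentage of a lobe's voxels covered by the infection mask."""
    lobe_vox = lobe_mask.astype(bool)
    if lobe_vox.sum() == 0:
        return 0.0
    infected = np.logical_and(infection_mask.astype(bool), lobe_vox)
    return 100.0 * infected.sum() / lobe_vox.sum()

def knn_severity_score(pct, train_pcts, train_scores, k=3):
    """Map a severity percentage to a discrete score by majority vote of
    the k nearest training percentages (1-D k-NN, illustrative)."""
    train_pcts = np.asarray(train_pcts, float)
    idx = np.argsort(np.abs(train_pcts - pct))[:k]
    votes = np.asarray(train_scores)[idx]
    return int(np.bincount(votes).argmax())
```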
Affiliation(s)
- Mehdi Yousefzadeh
- Department of Physics, Shahid Beheshti University, Tehran, Iran
- School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Masoud Hasanpour
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Mozhdeh Zolghadri
- Department of Medical Physics, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Fatemeh Salimi
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Ava Yektaeian Vaziri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Abolfazl Mahmoudi Aqeel Abadi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Ramezan Jafari
- Department of Radiology, Health Research Center, Baqiyatallah University of Medical Sciences, Tehran, Iran
- Parsa Esfahanian
- School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Mohammad-Reza Nazem-Zadeh
- Research Center for Molecular and Cellular Imaging, Tehran University of Medical Sciences, Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
21
Boubnovski MM, Chen M, Linton-Reid K, Posma JM, Copley SJ, Aboagye EO. Development of a multi-task learning V-Net for pulmonary lobar segmentation on CT and application to diseased lungs. Clin Radiol 2022; 77:e620-e627. [PMID: 35636974] [DOI: 10.1016/j.crad.2022.04.012]
Abstract
AIM To develop a multi-task learning (MTL) V-Net for pulmonary lobar segmentation on computed tomography (CT) and to apply it to diseased lungs. MATERIALS AND METHODS The described methodology utilises tracheobronchial tree information to enhance segmentation accuracy: the algorithm's spatial familiarity with the airways helps it define lobar extent more accurately. The method segments lobes and auxiliary tissues in parallel by employing MTL in conjunction with V-Net-attention, a popular convolutional neural network in the imaging realm. Its performance was validated on an external dataset of patients with four distinct lung conditions: severe lung cancer, COVID-19 pneumonitis, collapsed lungs, and chronic obstructive pulmonary disease (COPD), even though the training data included none of these cases. RESULTS The following Dice scores were achieved on a per-segment basis: normal lungs 0.97, COPD 0.94, lung cancer 0.94, COVID-19 pneumonitis 0.94, and collapsed lung 0.92, all at p<0.05. CONCLUSION Despite severe abnormalities, the model segmented the lobes well, demonstrating the benefit of auxiliary-tissue learning. The proposed model is poised for adoption in the clinical setting as a robust tool for radiologists and researchers to define the lobar distribution of lung diseases and to aid in disease treatment planning.
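A common way to realize the parallel lobe/auxiliary-tissue training described above is a weighted sum of per-task soft Dice losses; the weights below are hypothetical, not the authors' values:

```python
import numpy as np

def dice_loss(pred, truth, eps=1e-6):
    """Soft Dice loss on probability maps (0 = perfect overlap)."""
    pred = np.asarray(pred, float)
    truth = np.asarray(truth, float)
    inter = (pred * truth).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def multitask_loss(lobe_pred, lobe_truth, airway_pred, airway_truth,
                   w_lobe=1.0, w_airway=0.5):
    """Weighted joint loss over the main (lobe) and auxiliary (airway) tasks;
    weights are illustrative hyperparameters."""
    return (w_lobe * dice_loss(lobe_pred, lobe_truth)
            + w_airway * dice_loss(airway_pred, airway_truth))
```

Training both heads against one shared encoder is what lets the airway signal regularize the lobar boundaries.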
Affiliation(s)
- M M Boubnovski
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK
- M Chen
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK; Department of Radiology, Hammersmith Hospital, Imperial College Healthcare NHS Trust, London W12 0HS, UK
- K Linton-Reid
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK
- J M Posma
- Department of Metabolism, Digestion and Reproduction, South Kensington, London SW7 2AZ, UK
- S J Copley
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK; Department of Radiology, Hammersmith Hospital, Imperial College Healthcare NHS Trust, London W12 0HS, UK
- E O Aboagye
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK.
22
Orhan K, Shamshiev M, Ezhov M, Plaksin A, Kurbanova A, Ünsal G, Gusarev M, Golitsyna M, Aksoy S, Mısırlı M, Rasmussen F, Shumilov E, Sanders A. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci Rep 2022; 12:11863. [PMID: 35831451 PMCID: PMC9279304 DOI: 10.1038/s41598-022-15920-1]
Abstract
This study aims to generate and validate an automatic detection algorithm for the pharyngeal airway on CBCT data using AI software (Diagnocat), which also provides a measurement method. The second aim is to validate the newly developed artificial intelligence system against commercially available software for 3D CBCT evaluation. A convolutional neural network-based machine learning algorithm was used for the segmentation of the pharyngeal airways in OSA and non-OSA patients. Radiologists used semi-automatic software to manually determine the airway, and their measurements were compared with the AI. OSA patients were classified into minimal, mild, moderate, and severe groups, and the mean airway volumes of the groups were compared. The narrowest points of the airway (mm), the field of the airway (mm2), and the volume of the airway (cc) of both OSA and non-OSA patients were also compared. There was no statistically significant difference between the manual technique and Diagnocat measurements in any group (p > 0.05). Inter-class correlation coefficients were 0.954 for manual and automatic segmentation, 0.956 for Diagnocat and automatic segmentation, and 0.972 for Diagnocat and manual segmentation. Although there was no statistically significant difference in total airway volume between the manual, automatic, and Diagnocat (DC) measurements in non-OSA and OSA patients, we evaluated the output images to understand why the mean value for the total airway was higher in the DC measurement. The DC algorithm also measures the epiglottis volume and the posterior nasal aperture volume due to the low soft-tissue contrast in CBCT images, which leads to higher values in airway volume measurement.
Affiliation(s)
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Medical Design Application, and Research Center (MEDITAM), Ankara University, Ankara, Turkey; Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, Lublin, Poland.
- Aida Kurbanova
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus; Research Center of Experimental Health Science (DESAM), Near East University, Nicosia, Cyprus
- Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Melis Mısırlı
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, Nicosia, Cyprus
- Finn Rasmussen
- Internal Medicine Department Lunge Section, SVS Esbjerg, Esbjerg, Denmark; Life Lung Health Center, Nicosia, Cyprus
23
Young LH, Kim J, Yakin M, Lin H, Dao DT, Kodati S, Sharma S, Lee AY, Lee CS, Sen HN. Automated Detection of Vascular Leakage in Fluorescein Angiography - A Proof of Concept. Transl Vis Sci Technol 2022; 11:19. [PMID: 35877095 PMCID: PMC9339697 DOI: 10.1167/tvst.11.7.19]
Abstract
Purpose The purpose of this paper was to develop a deep learning algorithm to detect retinal vascular leakage (leakage) in fluorescein angiography (FA) of patients with uveitis and use the trained algorithm to determine clinically notable leakage changes. Methods An algorithm was trained and tested to detect leakage on a set of 200 FA images (61 patients) and evaluated on a separate 50-image test set (21 patients). The ground truth was leakage segmentation by two clinicians. The Dice Similarity Coefficient (DSC) was used to measure concordance. Results During training, the algorithm achieved a best average DSC of 0.572 (95% confidence interval [CI] = 0.548–0.596). The trained algorithm achieved a DSC of 0.563 (95% CI = 0.543–0.582) when tested on an additional set of 50 images. The trained algorithm was then used to detect leakage on pairs of FA images from longitudinal patient visits. Longitudinal leakage follow-up showed that a >2.21% change in the visible retina area covered by leakage (as detected by the algorithm) had a sensitivity and specificity of 90% (area under the curve [AUC] = 0.95) for detecting a clinically notable change compared to the gold standard, an expert clinician's assessment. Conclusions This deep learning algorithm showed modest concordance in identifying vascular leakage compared to ground truth but was able to aid in identifying vascular FA leakage changes over time. Translational Relevance This is a proof-of-concept study that vascular leakage can be detected in a more standardized way and that tools can be developed to help clinicians more objectively compare vascular leakage between FAs.
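The sensitivity/specificity calculation behind the >2.21% change threshold can be sketched as follows; the threshold comes from the abstract, while the data, labels, and function name are hypothetical:

```python
import numpy as np

def sens_spec(change_pct, clinician_labels, threshold=2.21):
    """Flag a visit pair as a notable change when the leakage-area change (%)
    exceeds the threshold, then score against the clinician's judgment."""
    pred = np.asarray(change_pct) > threshold
    truth = np.asarray(clinician_labels, dtype=bool)
    sensitivity = np.sum(pred & truth) / truth.sum()       # true-positive rate
    specificity = np.sum(~pred & ~truth) / (~truth).sum()  # true-negative rate
    return sensitivity, specificity

# hypothetical % changes in leakage area and clinician judgments (1 = notable)
changes = [0.5, 3.0, 1.2, 5.8, 2.5, 0.1]
labels = [0, 1, 0, 1, 1, 0]
sens, spec = sens_spec(changes, labels)
```

Sweeping the threshold and plotting sensitivity against 1 - specificity would trace the ROC curve summarized by the reported AUC of 0.95.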
Affiliation(s)
- LeAnne H Young
- National Eye Institute, Bethesda, MD, USA; Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
- Jongwoo Kim
- National Library of Medicine, Bethesda, MD, USA
- Henry Lin
- National Eye Institute, Bethesda, MD, USA
- Sumit Sharma
- Cole Eye Institute, Cleveland Clinic, Cleveland, OH, USA
- H Nida Sen
- National Eye Institute, Bethesda, MD, USA
24
Pang H, Wu Y, Qi S, Li C, Shen J, Yue Y, Qian W, Wu J. A fully automatic segmentation pipeline of pulmonary lobes before and after lobectomy from computed tomography images. Comput Biol Med 2022; 147:105792. [PMID: 35780601 DOI: 10.1016/j.compbiomed.2022.105792]
Abstract
BACKGROUND AND OBJECTIVE Lobectomy is a curative treatment for localized lung cancer. The study aims to construct an automatic pipeline for segmenting pulmonary lobes before and after lobectomy from CT images. MATERIALS AND METHODS Six datasets (D1 to D6) of 865 CT scans were collected from two hospitals and public resources. Four nnU-Net-based segmentation models were trained. A lobectomy classification was proposed to automatically recognize the category of the input CT images: before lobectomy or one of five types after lobectomy. Finally, the lobe segmentation before and after lobectomy was realized by integrating the four models and lobectomy classification. The dice similarity coefficient (DSC), 95% Hausdorff distance (HD95) and average symmetric surface distance (ASSD) were used to evaluate the segmentations. RESULTS The pre-operative model achieved an average DSC of 0.964, 0.929, 0.934, and 0.891 in the four datasets. In D1 and D2, the average HD95 was 4.18 and 7.74 mm and the average ASSD was 0.86 and 1.32 mm, respectively. The lobectomy classification achieved an accuracy of 100%. After lobectomy, an average DSC of 0.973 and 0.936, an average HD95 of 2.70 and 6.92 mm, an average ASSD of 0.57 and 1.78 mm were obtained in D1 and D2, respectively. The postoperative segmentation pipeline outperformed other counterparts and training strategies. CONCLUSIONS The proposed pipeline can automatically segment pulmonary lobes before and after lobectomy from CT images and be applied to manage patients with lung cancer after lobectomy.
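The HD95 and ASSD surface metrics used in this entry can be illustrated with a small NumPy sketch. This is a simplified version (assuming isotropic spacing and masks that do not touch the image border), not the paper's evaluation code:

```python
import numpy as np

def boundary_coords(mask):
    """Coordinates of mask voxels with at least one background face-neighbour.
    Uses np.roll, so masks must not touch the array border (wrap-around)."""
    mask = np.asarray(mask, dtype=bool)
    interior = mask.copy()
    for axis in range(mask.ndim):
        interior &= np.roll(mask, 1, axis) & np.roll(mask, -1, axis)
    return np.argwhere(mask & ~interior)

def hd95_and_assd(a, b, spacing=1.0):
    """95% Hausdorff distance and average symmetric surface distance."""
    pa, pb = boundary_coords(a), boundary_coords(b)
    # pairwise Euclidean distances between the two boundary point sets
    d = np.linalg.norm((pa[:, None, :] - pb[None, :, :]) * spacing, axis=-1)
    both = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return np.percentile(both, 95), both.mean()

# toy example: two 2x2 squares shifted by one voxel
a = np.zeros((6, 6), dtype=bool); a[2:4, 2:4] = True
b = np.zeros((6, 6), dtype=bool); b[2:4, 3:5] = True
hd95, assd = hd95_and_assd(a, b)
```

HD95 reports a near-worst-case boundary error (robust to single outlier voxels), while ASSD averages the boundary error, which is why both are quoted alongside the overlap-based DSC.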
Affiliation(s)
- Haowen Pang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Yanan Wu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Chen Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Jing Shen
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, China.
- Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China.
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Jianlin Wu
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, China.
25
Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. [PMID: 33877878 PMCID: PMC9153705 DOI: 10.1259/bjr.20201107]
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
26
Punn NS, Agarwal S. Modality specific U-Net variants for biomedical image segmentation: a survey. Artif Intell Rev 2022; 55:5845-5889. [PMID: 35250146 PMCID: PMC8886195 DOI: 10.1007/s10462-022-10152-1]
Abstract
With the advent of advancements in deep learning approaches, such as deep convolutional neural networks, residual neural networks, and adversarial networks, U-Net architectures are the most widely utilized in biomedical image segmentation to automate the identification and detection of target regions or sub-regions. In recent studies, U-Net based approaches have illustrated state-of-the-art performance in different applications for the development of computer-aided diagnosis systems for early diagnosis and treatment of diseases such as brain tumor, lung cancer, Alzheimer's disease, breast cancer, etc., using various modalities. This article presents the success of these approaches by describing the U-Net framework, followed by a comprehensive analysis of the U-Net variants through (1) inter-modality and (2) intra-modality categorization to establish better insights into the associated challenges and solutions. Besides, this article also highlights the contribution of U-Net based frameworks in the ongoing pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19. Finally, the strengths and similarities of these U-Net variants are analysed along with the challenges involved in biomedical image segmentation to uncover promising future research directions in this area.
27
Kumar I, Bhatt C, Vimal V, Qamar S. Automated white corpuscles nucleus segmentation using deep neural network from microscopic blood smear. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-189773]
Abstract
White corpuscle (leukocyte) nucleus segmentation from microscopic blood images is a major step in diagnosing blood-related diseases. An accurate and fast segmentation system assists hematologists in identifying diseases and making appropriate decisions for better treatment. Therefore, a fully automated white corpuscle nucleus segmentation model using a deep convolutional neural network is proposed in the present study. The proposed model uses the 'binary_cross_entropy' loss with the 'adam' optimizer, which adapts the learning rate for each network weight. To validate the potential and capability of the proposed solution, the ALL-IDB2 dataset is used. The complete set of images is partitioned into training and testing sets, and extensive experimentation has been performed. The best-performing model was selected, and its training and testing accuracy are reported as 98.69% and 99.02%, respectively. The staging analysis of the proposed model is evaluated using sensitivity, specificity, Jaccard index, dice coefficient, accuracy, and structural similarity index. The capability of the proposed model is compared with the performance of region-based contour and fuzzy-based level-set methods on the same set of images, concluding that the proposed model is more accurate and effective for clinical purposes.
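As a quick illustration of the loss named above (not the study's code; the toy pixel values are invented), pixel-wise binary cross-entropy can be computed as:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean pixel-wise binary cross-entropy between ground truth and predictions."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)  # avoid log(0)
    y_true = np.asarray(y_true, dtype=float)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# four toy pixels: ground-truth labels and predicted foreground probabilities
truth = [1.0, 0.0, 1.0, 0.0]
pred = [0.9, 0.1, 0.8, 0.2]
loss = binary_cross_entropy(truth, pred)  # low loss: predictions match labels well
```

During training this loss is minimized by the Adam optimizer, which, as the abstract notes, maintains an adaptive learning rate for each network weight.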
Affiliation(s)
- Indrajeet Kumar
- Graphic Era Hill University, CSE Department, Dehradun, India
- Vrince Vimal
- Graphic Era Hill University, CSE Department, Dehradun, India
- Shamimul Qamar
- College of Science and Arts, Dhahran Al Janub, King Khalid University, Abha, Saudi Arabia
28
Chen S, Zhong X, Dorn S, Ravikumar N, Tao Q, Huang X, Lell M, Kachelriess M, Maier A. Improving Generalization Capability of Multiorgan Segmentation Models Using Dual-Energy CT. IEEE Transactions on Radiation and Plasma Medical Sciences 2022. [DOI: 10.1109/trpms.2021.3055199]
29
Liu X, Han C, Lin Z, Sun Z, Zhang Y, Wang X, Zhang X, Wang X. Semi-automatic quantitative analysis of the pelvic bony structures on apparent diffusion coefficient maps based on deep learning: establishment of reference ranges. Quant Imaging Med Surg 2022; 12:576-591. [PMID: 34993103 DOI: 10.21037/qims-21-123]
Abstract
BACKGROUND Apparent diffusion coefficient (ADC) maps provide quantitative information on both normal and abnormal tissues. However, it is difficult to distinguish between these tissues unless consistent and precise ADC values can be obtained from normal tissues. For this study we developed a deep learning-based convolutional neural network (CNN) for pelvic bony structure segmentation and established the reference ranges of ADC parameters for normal pelvic bony structures. METHODS We retrospectively enrolled 767 prostate cancer (PCa) patients for quantitative ADC analyses of normal pelvic bony structures. A subset of 288 patients who did not receive treatment for PCa (S1) were used to develop a CNN model for the segmentation of 8 pelvic bony structures (lumbar vertebra, sacrococcyx, ilium, acetabulum, femoral head, femoral neck, ischium, and pubis). The proposed CNN was used for the automated segmentation of these pelvic bony structures from a subset of 405 patients who did not receive treatment (S2) and 74 patients who received treatment [radiotherapy (S3) or endocrine therapy (S4)]. The 95% confidence interval (CI) was used to establish reference ranges for the ADC values from the normal pelvic bony structures of S1 and S2. RESULTS The Dice scores (Sørensen-Dice coefficient) for the CNN segmentation of the 8 pelvic bones on the ADC maps ranged from 0.90±0.02 (ilium) to 0.95±0.03 (femoral head) in the S1 testing set. In the S2 data set, the Dice scores showed no significant difference among the different scanners (P>0.05), and no significant differences were found among the S2, S3, and S4 data sets. The correlation analysis revealed that the b value and field strength were significantly correlated with ADC values (all P<0.001), while age and treatment were not significant variables (all P>0.05). 
The ADC reference ranges (95% CI) were as follows: lumbar vertebra, 1.11 (0.90-1.54); sacrococcyx, 0.82 (0.61-1.15); ilium, 0.57 (0.45-0.62); acetabulum, 0.59 (0.40-0.69); femoral head, 0.46 (0.25-0.58); femoral neck, 0.43 (0.25-0.48); ischium, 0.45 (0.26-0.55); and pubis, 0.57 (0.45-0.65). CONCLUSIONS This study preliminarily established reference ranges for the ADC values of normal pelvic bony structures. The image acquisition parameters had an influence on the ADC values.
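A nonparametric 95% reference interval like the ones tabulated above can be derived from sample percentiles. This sketch uses simulated ADC values; the distribution parameters and sample size are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated ADC values (x10^-3 mm^2/s) for one bony structure, n = 405 patients
adc = rng.normal(loc=1.11, scale=0.15, size=405)

# central 95% of the observed distribution as the reference range
lo, hi = np.percentile(adc, [2.5, 97.5])
```

Values outside [lo, hi] would then be flagged as atypical for that structure, which is how a reference range supports distinguishing normal from abnormal marrow signal.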
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, Beijing, China
- Chao Han
- Department of Radiology, Peking University First Hospital, Beijing, China
- Ziying Lin
- Department of Radiology, Peking University First Hospital, Beijing, China
- Zhaonan Sun
- Department of Radiology, Peking University First Hospital, Beijing, China
- Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
30
Lim HK, Jung SK, Kim SH, Cho Y, Song IS. Deep semi-supervised learning for automatic segmentation of inferior alveolar nerve using a convolutional neural network. BMC Oral Health 2021; 21:630. [PMID: 34876105 PMCID: PMC8650351 DOI: 10.1186/s12903-021-01983-5]
Abstract
Background The inferior alveolar nerve (IAN) innervates and regulates the sensation of the mandibular teeth and lower lip. The position of the IAN should be monitored prior to surgery. Therefore, a study using artificial intelligence (AI) was planned to image and track the position of the IAN automatically for a quicker and safer surgery. Methods A total of 138 cone-beam computed tomography datasets (internal: 98, external: 40) collected from multiple centers (three hospitals) were used in the study. A customized 3D nnU-Net was used for image segmentation. Active learning, which consists of three steps, was carried out in iterations for 83 datasets, with cumulative additions after each step. Subsequently, the accuracy of the model for IAN segmentation was evaluated using a separate set of 50 datasets. The accuracy, derived from the dice similarity coefficient (DSC) value, and the segmentation time for each learning step were compared. In addition, visual scoring was used to comparatively evaluate manual and automatic segmentation. Results After learning, the DSC gradually increased from 0.48 ± 0.11 to 0.50 ± 0.11, and then to 0.58 ± 0.08. The DSC for the external dataset was 0.49 ± 0.12. The times required for segmentation were 124.8, 143.4, and 86.4 s, showing a large decrease at the final stage. In visual scoring, the accuracy of manual segmentation was found to be higher than that of automatic segmentation. Conclusions The deep active learning framework can serve as a fast, accurate, and robust clinical tool for demarcating IAN location.
Affiliation(s)
- Ho-Kyung Lim
- Department of Oral and Maxillofacial Surgery, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
- Seok-Ki Jung
- Department of Orthodontics, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
- Seung-Hyun Kim
- Department of Medical Humanities, Korea University College of Medicine, 46, Gaeunsa 2-gil, Seongbuk-gu, Seoul, 02842, Republic of Korea
- Yongwon Cho
- Department of Radiology and AI Center, Korea University College of Medicine, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea.
- In-Seok Song
- Department of Oral and Maxillofacial Surgery, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea.
31
Kim Y, Park JY, Hwang EJ, Lee SM, Park CM. Applications of artificial intelligence in the thorax: a narrative review focusing on thoracic radiology. J Thorac Dis 2021; 13:6943-6962. [PMID: 35070379 PMCID: PMC8743417 DOI: 10.21037/jtd-21-1342]
Abstract
BACKGROUND Artificial intelligence (AI) has shown promising performance for thoracic diseases, particularly in the field of thoracic radiology. However, it has not yet been established how AI-based image analysis systems can help physicians in clinical practice. OBJECTIVE This review focuses on how AI, and specifically deep learning, can be applied to complement aspects of the current healthcare system. We describe how AI-based tools can augment existing clinical workflows by discussing the applications of AI to worklist prioritization and patient triage, the performance-boosting effects of AI as a second reader, and the use of AI to facilitate complex quantifications. We also introduce prominent examples of recent AI applications, such as tuberculosis screening in resource-constrained environments, the detection of lung cancer with screening CT, and the diagnosis of COVID-19, as well as examples of prognostic predictions and new discoveries beyond existing clinical practices. METHODS This review included peer-reviewed research articles on AI in the thorax published in English between 2015 and 2021. CONCLUSIONS With advances in technology and appropriate preparation of physicians, AI could address various clinical problems that have not been solved due to a lack of clinical resources or technological limitations. KEYWORDS Artificial intelligence (AI); deep learning (DL); computer aided diagnosis (CAD); thoracic radiology; pulmonary medicine.
Affiliation(s)
- Yisak Kim
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Korea
- Ji Yoon Park
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Eui Jin Hwang
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Sang Min Lee
- Departments of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Chang Min Park
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul, Korea
- Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, Korea
32
Wang J, Lv Y, Wang J, Ma F, Du Y, Fan X, Wang M, Ke J. Fully automated segmentation in temporal bone CT with neural network: a preliminary assessment study. BMC Med Imaging 2021; 21:166. [PMID: 34753454 PMCID: PMC8576911 DOI: 10.1186/s12880-021-00698-x]
Abstract
BACKGROUND Segmentation of important structures in temporal bone CT is the basis of image-guided otologic surgery. Manual segmentation of temporal bone CT is time-consuming and laborious. We assessed the feasibility and generalization ability of a proposed deep learning model for automated segmentation of critical structures in temporal bone CT scans. METHODS Thirty-nine temporal bone CT volumes including 58 ears were divided into normal (n = 20) and abnormal groups (n = 38). Ossicular chain disruption (n = 10), facial nerve covering vestibular window (n = 10), and Mondini dysplasia (n = 18) were included in abnormal group. All facial nerves, auditory ossicles, and labyrinths of the normal group were manually segmented. For the abnormal group, aberrant structures were manually segmented. Temporal bone CT data were imported into the network in unmarked form. The Dice coefficient (DC) and average symmetric surface distance (ASSD) were used to evaluate the accuracy of automatic segmentation. RESULTS In the normal group, the mean values of DC and ASSD were respectively 0.703, and 0.250 mm for the facial nerve; 0.910, and 0.081 mm for the labyrinth; and 0.855, and 0.107 mm for the ossicles. In the abnormal group, the mean values of DC and ASSD were respectively 0.506, and 1.049 mm for the malformed facial nerve; 0.775, and 0.298 mm for the deformed labyrinth; and 0.698, and 1.385 mm for the aberrant ossicles. CONCLUSIONS The proposed model has good generalization ability, which highlights the promise of this approach for otologist education, disease diagnosis, and preoperative planning for image-guided otology surgery.
Affiliation(s)
- Jiang Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Yi Lv
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Furong Ma
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Yali Du
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Xin Fan
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Menglin Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Jia Ke
- Department of Otorhinolaryngology-Head and Neck Surgery, Peking University Third Hospital, Peking University, NO. 49 North Garden Road, Haidian District, Beijing, 100191, China.
33
Kho DH, Cho BK, Choi SM. Midterm Outcomes of Unstable Ankle Fractures in Young Patients Treated by Closed Reduction and Fixation With an Intramedullary Fibular Nail vs Open Reduction Internal Fixation Using a Lateral Locking Plate. Foot Ankle Int 2021; 42:1469-1481. [PMID: 34184908 DOI: 10.1177/10711007211017470]
Abstract
BACKGROUND We aimed to compare midterm radiological and clinical outcomes between closed reduction and internal fixation (CRIF) using the fibular intramedullary nail (IMN) and open reduction and internal fixation (ORIF) using the locking plate for the treatment of unstable ankle fractures in active young patients. METHODS In this retrospective cohort study, 204 patients treated with CRIF using the fibular IMN (94 patients) or ORIF using the locking plate (110 patients) were included after at least 3 years of follow-up. The mean patient age was 41.4 years. Radiographic evaluation included the quality of reduction assessed by plain radiography and 3-dimensional (3D)-reconstructed computed tomography as well as the development of posttraumatic osteoarthritis (PTOA) of the ankle assessed by weightbearing plain radiography. Clinical evaluation included the American Orthopaedic Foot & Ankle Society hindfoot score, Olerud and Molander Score, the Foot and Ankle Outcome Score, and visual analog scale pain score as well as complications. RESULTS At median follow-up greater than 4 years, we found no significant differences in measured clinical outcomes between the 2 groups. There were significantly fewer postoperative complications in the IMN group than in the ORIF group (9.5% vs 39%, P < .001). However, we did find a greater proportion of radiographically fair or poor reductions in the IMN group than in the ORIF group (P < .001). The poor reductions in the IMN group were primarily related to Weber type C, pronation-type injury, and comminuted fibular and trimalleolar fractures (P < .001). PTOA was also more frequently observed in the IMN group than in the ORIF group (21.3% vs 9.1%, P = .024). CONCLUSION Given the current prevailing technologies for fracture fixation, this study suggests that surgeons should consider ORIF for unstable ankle fractures in active young patients with Weber type C, pronation-type injury, and comminuted fibular and trimalleolar fractures. 
LEVEL OF EVIDENCE Level III, retrospective comparative study.
Affiliation(s)
- Duk-Hwan Kho: Department of Orthopaedic Surgery, Konkuk University Chungju Hospital, Konkuk University School of Medicine, Chungju, Korea
- Byung-Ki Cho: Department of Orthopaedic Surgery, School of Medicine, Chungbuk National University Hospital, Cheongju, Korea
- Seung-Myung Choi: Department of Orthopedic Surgery, Eulji University School of Medicine, Gyeonggi-do, Korea
34
Lee S, Summers RM. Clinical Artificial Intelligence Applications in Radiology: Chest and Abdomen. Radiol Clin North Am 2021; 59:987-1002. [PMID: 34689882 DOI: 10.1016/j.rcl.2021.07.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Organ segmentation, chest radiograph classification, and lung and liver nodule detection are some of the popular artificial intelligence (AI) tasks in chest and abdominal radiology due to the wide availability of public datasets. AI algorithms have achieved performance comparable to humans, in less time, for several organ segmentation tasks and some lesion detection and classification tasks. This article reviews currently published work on AI applied to chest and abdominal radiology, including organ segmentation, lesion detection, classification, and prognosis prediction.
Affiliation(s)
- Sungwon Lee: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
35
Herrmann P, Busana M, Cressoni M, Lotz J, Moerer O, Saager L, Meissner K, Quintel M, Gattinoni L. Using Artificial Intelligence for Automatic Segmentation of CT Lung Images in Acute Respiratory Distress Syndrome. Front Physiol 2021; 12:676118. [PMID: 34594233 PMCID: PMC8476971 DOI: 10.3389/fphys.2021.676118] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 08/17/2021] [Indexed: 01/17/2023] Open
Abstract
Knowledge of gas volume, tissue mass and recruitability measured by quantitative CT scan analysis (CT-qa) is important when setting mechanical ventilation in acute respiratory distress syndrome (ARDS). Yet, manual segmentation of the lung requires a considerable workload. Our goal was to provide an automatic, clinically applicable and reliable lung segmentation procedure. Therefore, a convolutional neural network (CNN) was used to train an artificial intelligence (AI) algorithm on 15 healthy subjects (1,302 slices), 100 ARDS patients (12,279 slices), and 20 COVID-19 patients (1,817 slices). Eighty percent of this population was used for training, 20% for testing. The AI and manual segmentations at slice level were compared by intersection over union (IoU). The CT-qa variables were compared by regression and Bland-Altman analysis. The AI segmentation of a single patient required 5–10 s, versus 1–2 h for manual segmentation. At slice level, the algorithm showed on the test set an IoU across all CT slices of 91.3 ± 10.0, 85.2 ± 13.9, and 84.7 ± 14.0%, and across all lung volumes of 96.3 ± 0.6, 88.9 ± 3.1, and 86.3 ± 6.5% for normal lungs, ARDS and COVID-19, respectively, with a U-shape in the performance: better in the middle lung region, worse at the apex and base. At patient level, on the test set, the total lung volume measured by AI and manual segmentation had an R2 of 0.99 and a bias of −9.8 ml [CI: +56.0/−75.7 ml]. The recruitability measured with manual and AI segmentation had a bias of +0.3% [CI: +6.2/−5.5%] expressed as change in non-aerated tissue fraction, and of −0.5% [CI: +2.3/−3.3%] expressed as change in well-aerated tissue fraction. The AI-powered lung segmentation provided fast and clinically reliable results. It is able to segment the lungs of seriously ill ARDS patients fully automatically.
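The slice-level agreement metric used here, intersection over union, is simple to state; a minimal NumPy sketch (the function name and the empty-mask convention are ours, not the authors'):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return float(np.logical_and(pred, truth).sum() / union)
```

Computed per slice and averaged over a test set, this yields the percentages reported above.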
Affiliation(s)
- Peter Herrmann: Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Mattia Busana: Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Joachim Lotz: Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Onnen Moerer: Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Leif Saager: Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Konrad Meissner: Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Michael Quintel: Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany; Department of Anesthesiology, DONAUISAR Klinikum Deggendorf, Deggendorf, Germany
- Luciano Gattinoni: Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
36
Gu H, Gan W, Zhang C, Feng A, Wang H, Huang Y, Chen H, Shao Y, Duan Y, Xu Z. A 2D-3D hybrid convolutional neural network for lung lobe auto-segmentation on standard slice thickness computed tomography of patients receiving radiotherapy. Biomed Eng Online 2021; 20:94. [PMID: 34556141 PMCID: PMC8461922 DOI: 10.1186/s12938-021-00932-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 09/13/2021] [Indexed: 11/26/2022] Open
Abstract
Background Accurate segmentation of the lung lobes on routine computed tomography (CT) images of locally advanced stage lung cancer patients undergoing radiotherapy can help radiation oncologists implement lobar-level treatment planning, dose assessment and efficacy prediction. We aim to establish a novel 2D–3D hybrid convolutional neural network (CNN) to provide reliable lung lobe auto-segmentation results in the clinical setting. Methods We retrospectively collected and evaluated thorax CT scans of 105 locally advanced non-small-cell lung cancer (NSCLC) patients treated at our institution from June 2019 to August 2020. The CT images were acquired with 5 mm slice thickness. Two CNNs were used for lung lobe segmentation: a 3D CNN for extracting 3D contextual information and a 2D CNN for extracting texture information. Contouring quality was evaluated using six quantitative metrics, and visual evaluation was performed to assess clinical acceptability. Results For the 35 cases in the test group, the Dice similarity coefficient (DSC) of all lung lobe contours exceeded 0.75, which met the pass criterion for the segmentation results. Our model achieved high performance, with DSCs of 0.9579, 0.9479, 0.9507, 0.9484, and 0.9003 for the left upper lobe (LUL), left lower lobe (LLL), right upper lobe (RUL), right lower lobe (RLL), and right middle lobe (RML), respectively. The proposed model achieved accuracy, sensitivity, and specificity (%) of 99.57, 98.23, and 99.65 for the LUL; 99.6, 96.14, and 99.76 for the LLL; 99.67, 96.13, and 99.81 for the RUL; 99.72, 92.38, and 99.83 for the RML; and 99.58, 96.03, and 99.78 for the RLL. Clinicians' visual assessment showed that 164/175 lobe contours met the requirements for clinical use; only 11 contours needed manual correction. Conclusions Our 2D–3D hybrid CNN model achieved accurate automatic segmentation of the lung lobes on conventional slice-thickness CT of locally advanced lung cancer patients and has good clinical practicability.
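The pass criterion above is stated in terms of the Dice similarity coefficient; a minimal NumPy sketch of the metric for binary masks (names and the empty-mask convention are ours):

```python
import numpy as np

def dice(pred, truth):
    """Sørensen-Dice similarity coefficient of two binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return float(2.0 * np.logical_and(pred, truth).sum() / total)
```

A lobe contour would pass the criterion when `dice(auto_mask, manual_mask) > 0.75`.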
Affiliation(s)
- Hengle Gu, Wutian Gan, Chenchen Zhang, Aihui Feng, Hao Wang, Ying Huang, Hua Chen, Yan Shao, Yanhua Duan, Zhiyong Xu: Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
37
Kano Y, Ikushima H, Sasaki M, Haga A. Automatic contour segmentation of cervical cancer using artificial intelligence. JOURNAL OF RADIATION RESEARCH 2021; 62:934-944. [PMID: 34401914 PMCID: PMC8438257 DOI: 10.1093/jrr/rrab070] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Revised: 05/11/2021] [Accepted: 07/17/2021] [Indexed: 05/10/2023]
Abstract
In cervical cancer treatment, radiation therapy is selected based on the degree of tumor progression, and radiation oncologists are required to delineate tumor contours. To reduce the burden on radiation oncologists, automatic segmentation of tumor contours would prove useful. To the best of our knowledge, automatic tumor contour segmentation has rarely been applied to cervical cancer treatment. In this study, diffusion-weighted images (DWI) of 98 patients with cervical cancer were acquired. We trained an automatic tumor contour segmentation model using 2D U-Net and 3D U-Net to investigate the possibility of applying such a model in clinical practice. All 98 cases were employed for training, and each case was then predicted by swapping the training and test images. To predict tumor contours, six prediction images were obtained after six training sessions for one case. The six images were then summed and binarized to output a final image through automatic contour segmentation. For the evaluation, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were applied to analyze the difference between the tumor contours delineated by radiation oncologists and the output image. The DSC ranged from 0.13 to 0.93 (median 0.83, mean 0.77). The cases with DSC < 0.65 included tumors with a maximum diameter < 40 mm and heterogeneous intracavitary intensity due to necrosis. The HD ranged from 2.7 to 9.6 mm (median 4.7 mm). Thus, the study confirmed that the tumor contours of cervical cancer can be automatically segmented with high accuracy.
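The fusion step described above (summing the six prediction images and binarizing) amounts to a voxel-wise vote; a minimal sketch, where the vote threshold is our assumption since the abstract does not state the binarization cutoff:

```python
import numpy as np

def fuse_predictions(masks, min_votes):
    """Sum per-run binary masks and binarize: a voxel is tumor when at
    least `min_votes` of the runs predict it."""
    stacked = np.stack([np.asarray(m, dtype=np.uint8) for m in masks])
    return (stacked.sum(axis=0) >= min_votes).astype(np.uint8)
```

With six prediction images, `min_votes` would be chosen between 1 (any run) and 6 (all runs agree).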
Affiliation(s)
- Yosuke Kano: Department of Radiological Technology, Tokushima Prefecture Naruto Hospital, 32 Kotani, Muyacho, Kurosaki, Naruto-shi, Tokushima 772-8503, Japan
- Hitoshi Ikushima: Department of Therapeutic Radiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto-Cho, Tokushima, Tokushima 770-8503, Japan
- Motoharu Sasaki (corresponding author): Department of Therapeutic Radiology, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto-Cho, Tokushima, Tokushima 770-8503, Japan. Tel: +81-88-633-9053; Fax: +81-88-633-9051
- Akihiro Haga: Department of Medical Image Informatics, Institute of Biomedical Sciences, Tokushima University Graduate School, 3-18-15 Kuramoto-Cho, Tokushima, Tokushima 770-8503, Japan
38
Nemoto T, Futakami N, Kunieda E, Yagi M, Takeda A, Akiba T, Mutu E, Shigematsu N. Effects of sample size and data augmentation on U-Net-based automatic segmentation of various organs. Radiol Phys Technol 2021; 14:318-327. [PMID: 34254251 DOI: 10.1007/s12194-021-00630-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 07/01/2021] [Accepted: 07/05/2021] [Indexed: 10/20/2022]
Abstract
Deep learning has demonstrated high efficacy for automatic segmentation in contour delineation, which is crucial in radiation therapy planning. However, the collection, labeling, and management of medical imaging data can be challenging. This study aims to elucidate the effects of sample size and data augmentation on the automatic segmentation of computed tomography images using U-Net, a deep learning method. For the chest and pelvic regions, 232 and 556 cases are evaluated, respectively. We investigate multiple conditions by varying the combined size of the training and validation datasets across a broad range: 10-200 and 10-500 cases for the chest and pelvic regions, respectively. A U-Net is constructed, and horizontal-flip data augmentation, which produces left-right mirrored images and thus doubles the number of images, is compared with no augmentation for each training session. All lung cases and more than 100 prostate, bladder, and rectum cases indicate that adding horizontal-flip data augmentation is almost as effective as doubling the number of cases. The slope of the Dice similarity coefficient (DSC) in all organs decreases rapidly until approximately 100 cases, stabilizes after 200 cases, and shows minimal change as the number of cases increases further. The DSCs stabilize at a smaller sample size with the incorporation of data augmentation in all organs except the heart. This finding is applicable to the automation of radiation therapy for rare cancers, where large datasets may be difficult to obtain.
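Horizontal-flip augmentation as described, doubling the dataset with left-right mirrored image/label pairs, might look like this minimal sketch (the function name is ours):

```python
import numpy as np

def augment_horizontal_flip(images, labels):
    """Return the originals plus left-right mirrored copies, doubling the
    dataset; labels are flipped identically so contours stay aligned."""
    flipped_images = [np.fliplr(im) for im in images]
    flipped_labels = [np.fliplr(lb) for lb in labels]
    return images + flipped_images, labels + flipped_labels
```

Flipping the label together with the image is what keeps the augmented pair anatomically consistent.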
Affiliation(s)
- Takafumi Nemoto: Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
- Natsumi Futakami: Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Etsuo Kunieda: Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan; Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Masamichi Yagi: Platform Technical Engineer Division, HPC and AI Business Department, System Platform Solution Unit, Fujitsu Limited, World Trade Center Building, 4-1, Hamamatsucho 2-chome, Minato-ku, Tokyo, 105-6125, Japan
- Atsuya Takeda: Radiation Oncology Center, Ofuna Chuo Hospital, Kamakura-shi, Kanagawa, 247-0056, Japan
- Takeshi Akiba: Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Eride Mutu: Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Naoyuki Shigematsu: Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
39
Montgomery MK, David J, Zhang H, Ram S, Deng S, Premkumar V, Manzuk L, Jiang ZK, Giddabasappa A. Mouse lung automated segmentation tool for quantifying lung tumors after micro-computed tomography. PLoS One 2021; 16:e0252950. [PMID: 34138905 PMCID: PMC8211241 DOI: 10.1371/journal.pone.0252950] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 05/25/2021] [Indexed: 12/14/2022] Open
Abstract
Unlike the majority of cancers, survival for lung cancer has not shown much improvement since the early 1970s, and survival rates remain low. Genetically engineered mouse tumor models are of high translational relevance, as we can generate the tissue-specific mutations that are observed in lung cancer patients. Since these tumors cannot be detected and quantified by traditional methods, we use micro-computed tomography (microCT) imaging for longitudinal evaluation and to measure response to therapy. Conventionally, we analyze microCT images of lung cancer via manual segmentation. Manual segmentation is time-consuming and sensitive to intra- and inter-analyst variation. To overcome the limitations of manual segmentation, we set out to develop a fully automated alternative, the Mouse Lung Automated Segmentation Tool (MLAST). MLAST locates the thoracic region of interest, then thresholds and categorizes the lung field into three tissue categories: soft tissue, intermediate, and lung. An increase in tumor burden is measured as a decrease in lung volume with a simultaneous increase in soft and intermediate tissue quantities. MLAST segmentation was validated against three methods: manual scoring, manual segmentation, and histology. MLAST was applied in an efficacy trial using a Kras/Lkb1 non-small cell lung cancer model and demonstrated adequate precision and sensitivity in quantifying tumor growth inhibition after drug treatment. Implementation of MLAST has considerably accelerated microCT data analysis, allowing for larger study sizes and mid-study readouts. This study illustrates how automated image analysis tools for large datasets can be used in preclinical imaging to deliver high-throughput, quantitative results.
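The thresholding step assigns each voxel to one of three tissue categories; a hypothetical sketch, with made-up Hounsfield-unit cutoffs, since the abstract does not give MLAST's actual thresholds:

```python
import numpy as np

# Hypothetical Hounsfield-unit cutoffs; MLAST's actual thresholds are not
# stated in the abstract.
LUNG_MAX = -400   # at or below this value: aerated lung
SOFT_MIN = -100   # at or above this value: soft tissue

def categorize_lung_field(hu):
    """Label each voxel 0 = lung, 1 = intermediate, 2 = soft tissue."""
    labels = np.ones_like(hu, dtype=np.uint8)  # default: intermediate
    labels[hu <= LUNG_MAX] = 0
    labels[hu >= SOFT_MIN] = 2
    return labels
```

Tumor burden then tracks the shift of voxels out of the lung class into the intermediate and soft-tissue classes.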
Affiliation(s)
- John David: Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
- Haikuo Zhang: Oncology Research Unit, Pfizer Inc., La Jolla, CA, United States of America
- Sripad Ram: Drug Safety Research Unit, Pfizer Inc., La Jolla, CA, United States of America
- Shibing Deng: Early Clinical Development, Pfizer Inc., La Jolla, CA, United States of America
- Vidya Premkumar: Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
- Lisa Manzuk: Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
- Ziyue Karen Jiang: Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
- Anand Giddabasappa (corresponding author): Comparative Medicine, Pfizer Inc., La Jolla, CA, United States of America
40
Weng AM, Heidenreich JF, Metz C, Veldhoen S, Bley TA, Wech T. Deep learning-based segmentation of the lung in MR-images acquired by a stack-of-spirals trajectory at ultra-short echo-times. BMC Med Imaging 2021; 21:79. [PMID: 33964892 PMCID: PMC8106126 DOI: 10.1186/s12880-021-00608-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 04/26/2021] [Indexed: 12/27/2022] Open
Abstract
BACKGROUND Functional lung MRI techniques are usually associated with time-consuming post-processing, where manual lung segmentation represents the most cumbersome part. The aim of this study was to investigate whether deep learning-based segmentation of lung images acquired with a fast UTE sequence exploiting the stack-of-spirals trajectory can provide sufficiently good accuracy for the calculation of functional parameters. METHODS In this study, lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers, by a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation using 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images and results were compared to manual segmentation using the Sørensen-Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland-Altman analysis. To investigate generalizability to patients outside the CF collective, in particular those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one with lung cancer. RESULTS The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from manual and deep learning-based segmentations, as well as values for fractional ventilation, exhibited a high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, HD = 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L.
CONCLUSIONS Deep learning-based image segmentation in stack-of-spirals based lung MRI allows for accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
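The Bland-Altman comparison used above reduces to a bias (mean difference) and 95% limits of agreement; a generic sketch, not the authors' code:

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = float(diff.mean())
    spread = 1.96 * float(diff.std(ddof=1))  # sample SD of the differences
    return bias, bias - spread, bias + spread
```

Applied to paired manual and network-derived lung volumes, this yields the bias and interval quoted in the results.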
Affiliation(s)
- Andreas M Weng, Julius F Heidenreich, Corona Metz, Simon Veldhoen, Thorsten A Bley, Tobias Wech: Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Oberdürrbacher Str. 6, 97080, Würzburg, Germany
41
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 73] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is therefore necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu, Yang Lei, Tonghe Wang, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
42
肖 汉, 冉 智, 黄 金, 任 慧, 刘 畅, 张 邦, 张 勃, 党 军. [Research progress in lung parenchyma segmentation based on computed tomography]. SHENG WU YI XUE GONG CHENG XUE ZA ZHI = JOURNAL OF BIOMEDICAL ENGINEERING = SHENGWU YIXUE GONGCHENGXUE ZAZHI 2021; 38:379-386. [PMID: 33913299 PMCID: PMC9927687 DOI: 10.7507/1001-5515.202008032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 01/31/2021] [Indexed: 11/03/2022]
Abstract
Lung diseases such as lung cancer and COVID-19 seriously endanger human health and life safety, so early screening and diagnosis are particularly important. Computed tomography (CT) is one of the important means of screening for lung diseases, and lung parenchyma segmentation based on CT images is the key step in that screening; high-quality lung parenchyma segmentation can effectively improve the early diagnosis and treatment of lung diseases. Automatic, fast and accurate segmentation of the lung parenchyma from CT images can effectively compensate for the low efficiency and strong subjectivity of manual segmentation, and has become one of the research hotspots in this field. In this paper, the research progress in lung parenchyma segmentation is reviewed based on the literature published in China and abroad in recent years. Traditional machine learning methods and deep learning methods are compared and analyzed, and progress in improving the network structure of deep learning models is emphasized. Unsolved problems in lung parenchyma segmentation are discussed and future directions outlined, providing a reference for researchers in related fields.
Affiliation(s)
- 汉光 肖, 智强 冉, 金锋 黄, 慧娇 任, 畅 刘, 邦林 张, 勃龙 张, 军 党: Department of Intelligent Science, School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, P.R.China
43
RPLS-Net: pulmonary lobe segmentation based on 3D fully convolutional networks and multi-task learning. Int J Comput Assist Radiol Surg 2021; 16:895-904. [PMID: 33846890 DOI: 10.1007/s11548-021-02360-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2020] [Accepted: 03/25/2021] [Indexed: 02/05/2023]
Abstract
PURPOSE Robust and automatic segmentation of the pulmonary lobes is vital to surgical planning and regional image analysis of pulmonary diseases in real-time computer-aided diagnosis systems. While a number of studies have examined this issue, segmentation of the unclear borders between the five lobes of the lung remains challenging because of incomplete fissures, the diversity of anatomical pulmonary information, and obstructive lesions caused by pulmonary diseases. This study proposes a model called Regularized Pulmonary Lobe Segmentation Network to accurately predict the lobes as well as the borders. METHODS First, a 3D fully convolutional network is constructed to extract contextual features from computed tomography images. Second, multi-task learning is employed to learn the segmentations of the lobes and the borders between them, training the neural network to better predict borders via shared representation. Third, a 3D depth-wise separable de-convolution block is proposed for deep supervision to efficiently train the network. We also propose a hybrid loss function that combines cross-entropy loss with focal loss using adaptive parameters to focus on the tissues and the borders of the lobes. RESULTS Experiments are conducted on a dataset annotated by experienced clinical radiologists. A 4-fold cross-validation result demonstrates that the proposed approach can achieve a mean Dice coefficient of 0.9421 and an average symmetric surface distance of 1.3546 mm, which is comparable to state-of-the-art methods. The proposed approach can accurately segment voxels that are near the lung wall and fissures. CONCLUSION In this paper, a 3D fully convolutional network framework is proposed to accurately segment pulmonary lobes in chest CT images. Experimental results show the effectiveness of the proposed approach in segmenting the tissues as well as the borders of the lobes.
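A hybrid loss of this shape, cross-entropy blended with focal loss, can be sketched per voxel for the binary case as below; the fixed alpha and gamma are illustrative stand-ins for the paper's adaptive parameters:

```python
import numpy as np

def hybrid_loss(p, y, alpha=0.5, gamma=2.0, eps=1e-7):
    """Mean per-voxel hybrid loss: alpha * cross-entropy + (1 - alpha) * focal.
    Fixed alpha/gamma here; the paper adapts its weighting during training."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    pt = np.where(np.asarray(y) == 1, p, 1 - p)  # probability of the true class
    ce = -np.log(pt)                    # cross-entropy term
    focal = (1 - pt) ** gamma * ce      # focal term down-weights easy voxels
    return float(np.mean(alpha * ce + (1 - alpha) * focal))
```

The focal factor `(1 - pt) ** gamma` shrinks the contribution of confidently correct voxels, shifting training effort toward hard regions such as lobe borders.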
44
Hasenstab KA, Yuan N, Retson T, Conrad DJ, Kligerman S, Lynch DA, Hsiao A. Automated CT Staging of Chronic Obstructive Pulmonary Disease Severity for Predicting Disease Progression and Mortality with a Deep Learning Convolutional Neural Network. Radiol Cardiothorac Imaging 2021; 3:e200477. [PMID: 33969307 DOI: 10.1148/ryct.2021200477] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 01/29/2021] [Accepted: 02/05/2021] [Indexed: 11/11/2022]
Abstract
Purpose To develop a deep learning-based algorithm to stage the severity of chronic obstructive pulmonary disease (COPD) through quantification of emphysema and air trapping on CT images, and to assess the ability of the proposed stages to prognosticate 5-year progression and mortality. Materials and Methods In this retrospective study, an algorithm using co-registration and lung segmentation was developed in-house to automate quantification of emphysema and air trapping from inspiratory and expiratory CT images. The algorithm was then tested in a separate group of 8951 patients from the COPD Genetic Epidemiology study (date range, 2007-2017). With measurements of emphysema and air trapping, bivariable thresholds were determined to define CT stages of severity (mild, moderate, severe, and very severe), which were evaluated for their ability to prognosticate disease progression and mortality using logistic regression and Cox regression. Results On the basis of CT stages, the odds of disease progression were greatest among patients with very severe disease (odds ratio [OR], 2.67; 95% CI: 2.02, 3.53; P < .001) and were elevated in patients with moderate disease (OR, 1.50; 95% CI: 1.22, 1.84; P = .001). The hazard ratio for mortality with very severe disease at CT was 2.23 (95% CI: 1.93, 2.58; P < .001). When combined with Global Initiative for Chronic Obstructive Lung Disease (GOLD) staging, patients with GOLD stage 2 disease had the greatest odds of disease progression when the CT stage was severe (OR, 4.48; 95% CI: 3.18, 6.31; P < .001) or very severe (OR, 4.72; 95% CI: 3.13, 7.13; P < .001). Conclusion Automated CT algorithms can facilitate staging of COPD severity, have diagnostic performance comparable with that of spirometric GOLD staging, and provide further prognostic value when used in conjunction with GOLD staging. Supplemental material is available for this article. © RSNA, 2021. See also the commentary by Kalra and Ebrahimian in this issue.
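The odds ratios above come from logistic regression, where an OR is simply the exponentiated coefficient; a generic conversion sketch, not the study's code:

```python
import math

def odds_ratio(beta, ci_halfwidth=None):
    """Exponentiate a logistic-regression coefficient to get an odds ratio;
    optionally also return 95% CI bounds from a coefficient-scale half-width."""
    orr = math.exp(beta)
    if ci_halfwidth is None:
        return orr
    return orr, math.exp(beta - ci_halfwidth), math.exp(beta + ci_halfwidth)
```

Because exponentiation is monotonic, a coefficient of 0 maps to OR = 1 (no effect), and a CI that excludes 0 on the coefficient scale excludes 1 on the OR scale.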
Affiliation(s)
- Kyle A Hasenstab
- Department of Radiology (K.A.H., N.Y., T.R., S.K., A.H.) and Department of Medicine (D.J.C.), University of California San Diego, 9452 Medical Center Dr, La Jolla, CA 92037; Department of Mathematics and Statistics, San Diego State University, San Diego, Calif (K.A.H.); and Department of Radiology, National Jewish Health, Denver, Colo (D.A.L.)
- Nancy Yuan
- Tara Retson
- Douglas J Conrad
- Seth Kligerman
- David A Lynch
- Albert Hsiao
45
Ohno Y, Seo JB, Parraga G, Lee KS, Gefter WB, Fain SB, Schiebler ML, Hatabu H. Pulmonary Functional Imaging: Part 1-State-of-the-Art Technical and Physiologic Underpinnings. Radiology 2021; 299:508-523. [PMID: 33825513 DOI: 10.1148/radiol.2021203711] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Over the past few decades, pulmonary imaging technologies have advanced from chest radiography and nuclear medicine methods to high-spatial-resolution or low-dose chest CT and MRI. It is currently possible to identify and measure pulmonary pathologic changes before these are obvious even to patients or depicted on conventional morphologic images. Here, key technological advances are described, including multiparametric CT image processing methods, inhaled hyperpolarized and fluorinated gas MRI, and four-dimensional free-breathing CT and MRI methods to measure regional ventilation, perfusion, gas exchange, and biomechanics. The basic anatomic and physiologic underpinnings of these pulmonary functional imaging techniques are explained. In addition, advances in image analysis and computational and artificial intelligence (machine learning) methods pertinent to functional lung imaging are discussed. The clinical applications of pulmonary functional imaging, including both the opportunities and challenges for clinical translation and deployment, will be discussed in part 2 of this review. Given the technical advances in these sophisticated imaging methods and the wealth of information they can provide, it is anticipated that pulmonary functional imaging will be increasingly used in the care of patients with lung disease. © RSNA, 2021. Online supplemental material is available for this article.
Affiliation(s)
- Yoshiharu Ohno
- From the Department of Radiology, Fujita Health University School of Medicine, Toyoake, Aichi, Japan (Y.O.); Joint Research Laboratory of Advanced Medical Imaging, Fujita Health University School of Medicine, Toyoake, Aichi, Japan (Y.O.); Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan (Y.O.); Department of Radiology, Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea (J.B.S.); Department of Medicine, Robarts Research Institute, and Department of Medical Biophysics, Western University, London, Canada (G.P.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, Korea (K.S.L.); Department of Radiology, Penn Medicine, University of Pennsylvania, Philadelphia, Pa (W.B.G.); Departments of Medical Physics and Radiology (S.B.F., M.L.S.), UW-Madison School of Medicine and Public Health, Madison, Wis; and Center for Pulmonary Functional Imaging, Brigham and Women's Hospital and Harvard Medical School, 75 Francis St, Boston, MA 02215 (H.H.)
- Joon Beom Seo
- Grace Parraga
- Kyung Soo Lee
- Warren B Gefter
- Sean B Fain
- Mark L Schiebler
- Hiroto Hatabu
46
Park JK, Choi SM, Kang SW, Kim KJ, Min KT. Three-dimensional measurement of the course of the radial nerve at the posterior humeral shaft: An in vivo anatomical study. J Orthop Surg (Hong Kong) 2021; 28:2309499020930828. [PMID: 32627674 DOI: 10.1177/2309499020930828] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
PURPOSE Iatrogenic radial nerve injury caused by surgical exposure of the humerus is a serious complication. We aimed to describe the course of the radial nerve at the posterior humeral shaft using a three-dimensional (3D) reconstruction technique by utilizing computed tomography (CT) images of living subjects. We hypothesized that the course of the radial nerve in the posterior aspect of the humeral shaft would be reliably established using this technique and the measurements would have satisfactory intraobserver/interobserver reliabilities. METHODS This in vivo anatomical study utilized 652 upper extremity CT angiography images from 326 patients. A 3D modeling of the humerus and radial nerve was performed. We evaluated the segment of the radial nerve that lies directly on the posterior aspect of the humeral shaft and measured its proximal, mid, and distal points. The shortest distances from the olecranon fossa to these points were defined as R1, R2, and R3, respectively. The relationships between these parameters and humeral length (HL) and transcondylar length (TL) were evaluated, and the intraobserver/interobserver reliabilities of these parameters were measured. RESULTS The HL was 293.6 mm, and TL was 58.64 mm on average. On average, R1 measured 159.2 mm (range, 127.1-198.2 mm), R2 136.6 mm (105.7-182.5 mm), and R3 112.8 mm (76.8-150.0 mm) (p < .001). The intraobserver/interobserver reliabilities ranged from 0.90 to 0.98. CONCLUSION The course of the radial nerve at the posterior aspect of the humeral shaft can be reliably established using the 3D reconstruction technique, and all measurements had excellent intraobserver/interobserver reliability.
Affiliation(s)
- Ji-Kang Park
- Department of Orthopaedic Surgery, School of Medicine, Chungbuk National University Hospital, Cheongju, Korea
- Seung-Myung Choi
- Department of Orthopaedic Surgery, School of Medicine, Chungbuk National University Hospital, Cheongju, Korea; Department of Orthopedic Surgery, Konkuk University Chungju Hospital, Konkuk University School of Medicine, Chungju, Korea; Department of Orthopedic Surgery, Graduate School of Medicine, Chungbuk National University, Cheongju, Korea
- Sang-Woo Kang
- Kook-Jong Kim
- Kyoung-Tae Min
47
Kellogg RT, Vargas J, Barros G, Sen R, Bass D, Mason JR, Levitt M. Segmentation of Chronic Subdural Hematomas Using 3D Convolutional Neural Networks. World Neurosurg 2020; 148:e58-e65. [PMID: 33359736 DOI: 10.1016/j.wneu.2020.12.014] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 11/30/2020] [Accepted: 12/01/2020] [Indexed: 11/17/2022]
Abstract
OBJECTIVE Chronic subdural hematomas (cSDHs) are an increasingly prevalent neurologic disease that often requires surgical intervention to alleviate compression of the brain. Management of cSDHs relies heavily on computed tomography (CT) imaging, and serial imaging is frequently obtained to help direct management. The volume of hematoma provides critical information in guiding therapy and evaluating new methods of management. We set out to develop an automated program to compute the volume of hematoma on CT scans for both pre- and postoperative images. METHODS A total of 21,710 images (128 CT scans) were manually segmented and used to train a convolutional neural network to automatically segment cSDHs. We included both pre- and postoperative coronal head CTs from patients undergoing surgical management of cSDHs. RESULTS Our best model achieved a Dice score of 0.8351 on the testing dataset and an average Dice score of 0.806 ± 0.06 on the validation set. This model was trained on the full dataset with reduced volumes, a network depth of 4, and postactivation residual blocks within the context modules of the encoder pathway. Patch-trained models did not perform as well, and decreasing the network depth from 5 to 4 did not appear to significantly improve performance. CONCLUSIONS We successfully trained a convolutional neural network on a dataset of pre- and postoperative head CTs containing cSDH. This tool could assist with automated, accurate measurements for evaluating treatment efficacy.
Affiliation(s)
- Ryan T Kellogg
- Department of Neurological Surgery, University of Washington, Seattle, Washington, USA
- Jan Vargas
- Division of Neurosurgery, Prisma Health, Greenville, South Carolina, USA
- Guilherme Barros
- Department of Neurological Surgery, University of Washington, Seattle, Washington, USA
- Rajeev Sen
- Department of Neurological Surgery, University of Washington, Seattle, Washington, USA
- David Bass
- Department of Neurological Surgery, University of Washington, Seattle, Washington, USA
- J Ryan Mason
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Michael Levitt
- Department of Neurological Surgery, University of Washington, Seattle, Washington, USA
48
Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020; 14:431-449. [PMID: 32728877 DOI: 10.1007/s11684-020-0761-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2019] [Accepted: 02/14/2020] [Indexed: 12/19/2022]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred in the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those with increased complexity because of technological advances. The improvement in efficiency and consistency is important for managing the increasing cancer patient burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT. The functionalities include superior images for real-time intervention and adaptive and personalized RT. AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that benefit from AI. This review primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA
49
Hwang EJ, Park CM. Clinical Implementation of Deep Learning in Thoracic Radiology: Potential Applications and Challenges. Korean J Radiol 2020; 21:511-525. [PMID: 32323497 PMCID: PMC7183830 DOI: 10.3348/kjr.2019.0821] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Accepted: 01/31/2020] [Indexed: 12/25/2022] Open
Abstract
Chest X-ray radiography and computed tomography, the two mainstay modalities in thoracic radiology, are under active investigation with deep learning technology, which has shown promising performance in various tasks, including detection, classification, segmentation, and image synthesis, outperforming conventional methods and suggesting its potential for clinical implementation. However, the implementation of deep learning in daily clinical practice is in its infancy and faces several challenges, such as its limited ability to explain the output results, uncertain benefits regarding patient outcomes, and incomplete integration in daily workflow. In this review article, we will introduce the potential clinical applications of deep learning technology in thoracic radiology and discuss several challenges for its implementation in daily clinical practice.
Affiliation(s)
- Eui Jin Hwang
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Chang Min Park
50
Nemoto T, Futakami N, Yagi M, Kumabe A, Takeda A, Kunieda E, Shigematsu N. Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi. JOURNAL OF RADIATION RESEARCH 2020; 61:257-264. [PMID: 32043528 PMCID: PMC7246058 DOI: 10.1093/jrr/rrz086] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Revised: 09/23/2019] [Accepted: 12/28/2019] [Indexed: 05/29/2023]
Abstract
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool with regards to delineating normal lung excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) formats to arrays of 32 × 128 × 128 voxels and input into both 2D and 3D U-Net, which are deep learning networks for semantic segmentation. The training, validation and test sets contained 160, 40 and 32 cases, respectively. Dice similarity coefficients (DSCs) of the test set were evaluated using Smart Segmentation® Knowledge Based Contouring (an atlas-based segmentation tool) as well as the 2D and 3D U-Net. The mean DSCs of the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart Segmentation, 2D and 3D U-Net, respectively. Compared with Smart Segmentation, both U-Nets presented significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly-devised 2D and 3D U-Net approaches were found to be more effective than a commercial auto-segmentation tool. Even the relatively shallow 2D U-Net, which does not require high-performance computational resources, was effective enough for lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.
Affiliation(s)
- Takafumi Nemoto
- Division of Radiation Oncology, Saiseikai Yokohamashi Tobu-Hospital, Shimosueyoshi 3-6-1, Tsurumi-ku, Yokohama-shi, Kanagawa, 230-8765, Japan
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo, 160-8582, Japan
- Natsumi Futakami
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Masamichi Yagi
- HPC&AI Business Dept., Platform Technical Engineer Div., System Platform Solution Unit, Fujitsu Limited, World Trade Center Building, 4-1, Hamamatsucho 2-chome, Minato-ku, Tokyo, 105-6125, Japan
- Atsuhiro Kumabe
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo, 160-8582, Japan
- Atsuya Takeda
- Radiation Oncology Center, Ofuna Chuo Hospital, Kamakura, 247-0056, Japan
- Etsuo Kunieda
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Naoyuki Shigematsu
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo, 160-8582, Japan