1. Zhang J, Fu T, Xiao D, Fan J, Song H, Ai D, Yang J. Bi-Fusion of Structure and Deformation at Multi-Scale for Joint Segmentation and Registration. IEEE Trans Image Process 2024; 33:3676-3691. [PMID: 38837936] [DOI: 10.1109/tip.2024.3407657] [Indexed: 06/07/2024]
Abstract
Medical image segmentation and registration are two fundamental and closely related tasks. However, current works couple the two only at the loss-function level, ignoring the feature information generated by the encoder-decoder networks during task-specific feature mapping and the potential inter-task feature relationships. This paper proposes a unified multi-task joint learning framework based on bi-fusion of structure and deformation at multi-scale, called BFM-Net, which produces the segmentation results and the deformation field simultaneously in a single-step estimation. BFM-Net consists of a segmentation subnetwork (SegNet), a registration subnetwork (RegNet), and a multi-task connection module (MTC). The MTC module transfers latent feature representations between segmentation and registration at multiple scales and links the tasks at the network architecture level; it comprises a spatial attention fusion module (SAF), a multi-scale spatial attention fusion module (MSAF), and a velocity field fusion module (VFF). Extensive experiments on MR, CT, and ultrasound images demonstrate the effectiveness of the approach: the MTC module increases the Dice scores of segmentation by 3.2%, 1.6%, and 2.2%, and those of registration by 6.2%, 4.5%, and 3.0%. Compared with six state-of-the-art segmentation and registration algorithms, BFM-Net achieves superior performance on images of various modalities, demonstrating its effectiveness and generalization.
2. Abd El-Khalek AA, Balaha HM, Alghamdi NS, Ghazal M, Khalil AT, Abo-Elsoud MEA, El-Baz A. A concentrated machine learning-based classification system for age-related macular degeneration (AMD) diagnosis using fundus images. Sci Rep 2024; 14:2434. [PMID: 38287062] [PMCID: PMC10825213] [DOI: 10.1038/s41598-024-52131-2] [Received: 10/30/2023] [Accepted: 01/14/2024] [Indexed: 01/31/2024]
Abstract
The increase in eye disorders among older individuals has raised concerns, necessitating early detection through regular eye examinations. Age-related macular degeneration (AMD), a prevalent condition in individuals over 45, is a leading cause of vision impairment in the elderly. This paper presents a comprehensive computer-aided diagnosis (CAD) framework to categorize fundus images into geographic atrophy (GA), intermediate AMD, normal, and wet AMD categories, which is crucial for early detection and precise diagnosis of AMD, enabling timely intervention and personalized treatment strategies. The system extracts both local and global appearance markers from fundus images, obtained from the entire retina and from iso-regions aligned with the optic disc. Applying weighted majority voting over the best classifiers improves performance, resulting in an accuracy of 96.85%, sensitivity of 93.72%, specificity of 97.89%, precision of 93.86%, F1 score of 93.72%, ROC AUC of 95.85%, balanced accuracy of 95.81%, and weighted sum of 95.38%. The system not only achieves high accuracy but also provides a detailed assessment of the severity of each retinal region, ensuring that the final diagnosis aligns with the physician's understanding of AMD and aiding ongoing treatment and follow-up for AMD patients.
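The weighted-majority-voting step described above can be sketched as follows; the function name, class labels, and weights are illustrative assumptions, not values from the paper:

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Combine class predictions from several classifiers.

    predictions: list of predicted labels, one per classifier.
    weights: list of non-negative weights (e.g. each classifier's
             validation accuracy), same length as predictions.
    """
    scores = defaultdict(float)
    for label, w in zip(predictions, weights):
        scores[label] += w
    # The label with the largest total weight wins.
    return max(scores, key=scores.get)

# Three hypothetical classifiers voting on one fundus image:
labels = ["wet AMD", "intermediate AMD", "wet AMD"]
weights = [0.95, 0.90, 0.85]
print(weighted_majority_vote(labels, weights))  # -> wet AMD
```

In practice the weights would come from each classifier's validation performance, so that stronger classifiers dominate disagreements.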
Affiliation(s)
- Aya A Abd El-Khalek
- Communications and Electronics Engineering Department, Nile Higher Institute for Engineering and Technology, Mansoura, Egypt
- Hossam Magdy Balaha
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Abeer T Khalil
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mohy Eldin A Abo-Elsoud
- Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- BioImaging Lab, Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY, USA.
3. Farahat IS, Sharafeldeen A, Ghazal M, Alghamdi NS, Mahmoud A, Connelly J, van Bogaert E, Zia H, Tahtouh T, Aladrousy W, Tolba AE, Elmougy S, El-Baz A. An AI-based novel system for predicting respiratory support in COVID-19 patients through CT imaging analysis. Sci Rep 2024; 14:851. [PMID: 38191606] [PMCID: PMC10774502] [DOI: 10.1038/s41598-023-51053-9] [Received: 10/09/2023] [Accepted: 12/29/2023] [Indexed: 01/10/2024]
Abstract
The proposed AI-based diagnostic system aims to predict the respiratory support required for COVID-19 patients by analyzing the correlation between COVID-19 lesions and the level of respiratory support provided to the patients. Computed tomography (CT) imaging is used to analyze the three levels of respiratory support received by the patient: Level 0 (minimum support), Level 1 (non-invasive support such as soft oxygen), and Level 2 (invasive support such as mechanical ventilation). The system begins by segmenting the COVID-19 lesions from the CT images and creating an appearance model for each lesion using a 2D, rotation-invariant, Markov-Gibbs random field (MGRF) model. Three MGRF-based models are created, one for each level of respiratory support, enabling the system to differentiate between different levels of severity in COVID-19 patients. The system then makes a decision for each patient using a neural network-based fusion system, which combines the Gibbs energy estimates from the three MGRF-based models. The proposed system was assessed using 307 COVID-19-infected patients, achieving an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a specificity of [Formula: see text], indicating a high level of prediction accuracy.
Affiliation(s)
- Ibrahim Shawky Farahat
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Mohammed Ghazal
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Ali Mahmoud
- Department of Bioengineering, University of Louisville, Louisville, USA
- James Connelly
- Department of Radiology, University of Louisville, Louisville, USA
- Eric van Bogaert
- Department of Radiology, University of Louisville, Louisville, USA
- Huma Zia
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Tania Tahtouh
- College of Health Sciences, Abu Dhabi University, Abu Dhabi, UAE
- Waleed Aladrousy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ahmed Elsaid Tolba
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, Kafr El Sheikh, Egypt
- Samir Elmougy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- Department of Bioengineering, University of Louisville, Louisville, USA.
4. Saleh GA, Batouty NM, Gamal A, Elnakib A, Hamdy O, Sharafeldeen A, Mahmoud A, Ghazal M, Yousaf J, Alhalabi M, AbouEleneen A, Tolba AE, Elmougy S, Contractor S, El-Baz A. Impact of Imaging Biomarkers and AI on Breast Cancer Management: A Brief Review. Cancers (Basel) 2023; 15:5216. [PMID: 37958390] [PMCID: PMC10650187] [DOI: 10.3390/cancers15215216] [Received: 09/08/2023] [Revised: 10/13/2023] [Accepted: 10/21/2023] [Indexed: 11/15/2023]
Abstract
Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.
Affiliation(s)
- Gehad A. Saleh
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt; (G.A.S.)
- Nihal M. Batouty
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Abdelrahman Gamal
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnakib
- Electrical and Computer Engineering Department, School of Engineering, Penn State Erie, The Behrend College, Erie, PA 16563, USA
- Omar Hamdy
- Surgical Oncology Department, Oncology Centre, Mansoura University, Mansoura 35516, Egypt
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Amal AbouEleneen
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elsaid Tolba
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, New Heliopolis, Cairo 11829, Egypt
- Samir Elmougy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
5. Cellina M, Cacioppa LM, Cè M, Chiarpenello V, Costa M, Vincenzo Z, Pais D, Bausano MV, Rossini N, Bruno A, Floridi C. Artificial Intelligence in Lung Cancer Screening: The Future Is Now. Cancers (Basel) 2023; 15:4344. [PMID: 37686619] [PMCID: PMC10486721] [DOI: 10.3390/cancers15174344] [Received: 07/10/2023] [Revised: 08/27/2023] [Accepted: 08/28/2023] [Indexed: 09/10/2023]
Abstract
Lung cancer has one of the worst morbidity and fatality rates of any malignant tumour. Most lung cancers are discovered in the middle and late stages of the disease, when treatment choices are limited, and patients' survival rate is low. The aim of lung cancer screening is the identification of lung malignancies in the early stage of the disease, when more options for effective treatments are available, to improve the patients' outcomes. The desire to improve the efficacy and efficiency of clinical care continues to drive multiple innovations into practice for better patient management, and in this context, artificial intelligence (AI) plays a key role. AI may have a role in each process of the lung cancer screening workflow. First, in the acquisition of low-dose computed tomography for screening programs, AI-based reconstruction allows a further dose reduction, while still maintaining an optimal image quality. AI can help the personalization of screening programs through risk stratification based on the collection and analysis of a huge amount of imaging and clinical data. A computer-aided detection (CAD) system provides automatic detection of potential lung nodules with high sensitivity, working as a concurrent or second reader and reducing the time needed for image interpretation. Once a nodule has been detected, it should be characterized as benign or malignant. Two AI-based approaches are available to perform this task: the first one is represented by automatic segmentation with a consequent assessment of the lesion size, volume, and densitometric features; the second consists of segmentation first, followed by radiomic features extraction to characterize the whole abnormalities providing the so-called "virtual biopsy". This narrative review aims to provide an overview of all possible AI applications in lung cancer screening.
Affiliation(s)
- Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, 20121 Milano, Italy
- Laura Maria Cacioppa
- Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Division of Interventional Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy
- Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Vittoria Chiarpenello
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Marco Costa
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Zakaria Vincenzo
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Daniele Pais
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Maria Vittoria Bausano
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy
- Nicolò Rossini
- Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Alessandra Bruno
- Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Chiara Floridi
- Department of Clinical, Special and Dental Sciences, University Politecnica delle Marche, 60126 Ancona, Italy
- Division of Interventional Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy
- Division of Radiology, Department of Radiological Sciences, University Hospital “Azienda Ospedaliera Universitaria delle Marche”, 60126 Ancona, Italy
6. Zheng S, Kong S, Huang Z, Pan L, Zeng T, Zheng B, Yang M, Liu Z. A Lower False Positive Pulmonary Nodule Detection Approach for Early Lung Cancer Screening. Diagnostics (Basel) 2022; 12:2660. [PMID: 36359503] [PMCID: PMC9689063] [DOI: 10.3390/diagnostics12112660] [Received: 10/07/2022] [Revised: 10/26/2022] [Accepted: 10/27/2022] [Indexed: 09/25/2024]
Abstract
Pulmonary nodule detection with low-dose computed tomography (LDCT) is indispensable in early lung cancer screening. Although existing methods have achieved excellent detection sensitivity, nodule detection still faces challenges such as nodule size variation, uneven nodule distribution, and excessive nodule-like false positive candidates in the detection results. We propose a novel two-stage nodule detection (TSND) method. In the first stage, a multi-scale feature detection network (MSFD-Net) generates nodule candidates, using a proposed feature extraction network to learn multi-scale feature representations of the candidates. In the second stage, a candidate scoring network (CS-Net) estimates the score of each candidate patch to realize false positive reduction (FPR). Finally, we develop an end-to-end nodule computer-aided detection (CAD) system for LDCT scans based on the proposed TSND. Experimental results on the LUNA16 dataset show that TSND obtained an excellent average sensitivity of 90.59% at the seven predefined false-positive (FP) rates of the FROC curve introduced in LUNA16: 0.125, 0.25, 0.5, 1, 2, 4, and 8 FPs per scan. Moreover, comparative experiments indicate that CS-Net can effectively suppress false positives and improve the detection performance of TSND.
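The FROC summary used above (average sensitivity over seven fixed false-positives-per-scan points, as in LUNA16) can be sketched in plain Python; the candidate-list layout and function name are illustrative assumptions, not from the paper:

```python
def froc_average_sensitivity(candidates, n_true, n_scans,
                             fp_rates=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """Average sensitivity at fixed FP-per-scan operating points.

    candidates: list of (score, is_true_positive) pairs over the dataset.
    n_true:     total number of true nodules.
    n_scans:    number of scans.
    """
    # Sweep the confidence threshold from high to low.
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    tp = fp = 0
    best = {r: 0.0 for r in fp_rates}
    for score, hit in ranked:
        tp += hit
        fp += not hit
        sens = tp / n_true
        # For each operating point, keep the best sensitivity
        # reachable without exceeding that FP/scan budget.
        for r in fp_rates:
            if fp / n_scans <= r and sens > best[r]:
                best[r] = sens
    return sum(best.values()) / len(fp_rates)

# Toy example: 4 candidates on 1 scan, 2 true nodules.
cands = [(0.9, True), (0.8, False), (0.7, True), (0.6, False)]
print(froc_average_sensitivity(cands, n_true=2, n_scans=1))
```

A full LUNA16 evaluation additionally matches candidates to reference nodules by distance and averages over cross-validation folds; this sketch only shows the scoring arithmetic.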
Affiliation(s)
- Shaohua Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Shaohua Kong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Zihan Huang
- School of Future Technology, Harbin Institute of Technology, Harbin 150000, China
- Lin Pan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Taidui Zeng
- Key Laboratory of Cardio-Thoracic Surgery (Fujian Medical University), Fujian Province University, Fuzhou 350108, China
- Bin Zheng
- Key Laboratory of Cardio-Thoracic Surgery (Fujian Medical University), Fujian Province University, Fuzhou 350108, China
- Mingjing Yang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Zheng Liu
- School of Engineering, Faculty of Applied Science, University of British Columbia, Kelowna, BC V1V 1V7, Canada
7. Artificial Intelligence in Lung Cancer Imaging: Unfolding the Future. Diagnostics (Basel) 2022; 12:2644. [PMID: 36359485] [PMCID: PMC9689810] [DOI: 10.3390/diagnostics12112644] [Received: 09/28/2022] [Revised: 10/26/2022] [Accepted: 10/29/2022] [Indexed: 11/30/2022]
Abstract
Lung cancer is one of the malignancies with the highest morbidity and mortality. Imaging plays an essential role in each phase of lung cancer management, from detection to assessment of response to treatment. The development of imaging-based artificial intelligence (AI) models has the potential to play a key role in early detection and customized treatment planning. Computer-aided detection of lung nodules in screening programs has revolutionized the early detection of the disease. Moreover, the possibility of using AI approaches to identify patients at risk of developing lung cancer during their lifetime can support more targeted screening programs. The combination of imaging features with clinical and laboratory data through AI models is giving promising results in the prediction of patients’ outcomes, response to specific therapies, and risk of developing toxic reactions. In this review, we provide an overview of the main imaging AI-based tools in lung cancer imaging, including automated lesion detection, characterization, segmentation, prediction of outcome, and treatment response, to provide radiologists and clinicians with a foundation for these applications in a clinical scenario.
8. Liao RQ, Li AW, Yan HH, Lin JT, Liu SY, Wang JW, Fang JS, Liu HB, Hou YH, Song C, Yang HF, Li B, Jiang BY, Dong S, Nie Q, Zhong WZ, Wu YL, Yang XN. Deep learning-based growth prediction for sub-solid pulmonary nodules on CT images. Front Oncol 2022; 12:1002953. [PMID: 36313666] [PMCID: PMC9597322] [DOI: 10.3389/fonc.2022.1002953] [Received: 07/25/2022] [Accepted: 09/20/2022] [Indexed: 11/13/2022]
Abstract
Background: Estimating the growth of pulmonary sub-solid nodules (SSNs) is crucial to managing them successfully during follow-up. The purposes of this study were to (1) investigate the sensitivity of diameter, volume, and mass measurements for identifying SSN growth and (2) establish a deep learning-based model to predict the growth of SSNs. Methods: Records of 2,523 patients with sub-solid nodules and at least two years of examinations were retrospectively collected. A total of 2,358 patients with 3,120 SSNs from the NLST dataset were randomly divided into training and validation sets; patients from the Yibicom Health Management Center and Guangdong Provincial People’s Hospital served as an external test set (165 patients with 213 SSNs). Models trained on the LUNA16 and LNDb19 datasets were employed to automatically obtain the diameter, volume, and mass of each SSN. The increase rate of each measurement was then compared between the cancer and non-cancer groups to determine the most appropriate way to identify growth associated with lung cancer. According to the selected measurement, all SSNs were classified into growth and non-growth groups, and a deep learning-based model (SiamModel) and a radiomics model were developed and verified on these data. Results: The doubling times of diameter, volume, and mass were 711 vs. 963 days (P = 0.20), 552 vs. 621 days (P = 0.04), and 488 vs. 623 days (P < 0.001) in the cancer and non-cancer groups, respectively. The proposed SiamModel performed better than the radiomics model in both the NLST validation set and the external test set, with AUCs of 0.858 (95% CI 0.786–0.921) vs. 0.760 (95% CI 0.646–0.857) in the validation set and 0.862 (95% CI 0.789–0.927) vs. 0.681 (95% CI 0.506–0.841) in the external test set. Furthermore, SiamModel could use data from the first CT alone to predict the growth of SSNs, with an AUC of 0.855 (95% CI 0.793–0.908) in the NLST validation set and 0.821 (95% CI 0.725–0.904) in the external test set. Conclusion: The mass increase rate reflects the growth of SSNs associated with lung cancer more sensitively than the diameter and volume increase rates. A deep learning-based model has great potential to predict the growth of SSNs.
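The doubling times compared above follow from assuming exponential growth between two measurements; a minimal sketch (function name and values are illustrative, not from the paper):

```python
import math

def doubling_time(v1, v2, days):
    """Doubling time (in days) of a volume or mass measurement,
    assuming exponential growth between two scans taken `days` apart."""
    if v2 <= v1:
        raise ValueError("no growth between the two measurements")
    # Solve v2 = v1 * 2**(days / DT) for DT.
    return days * math.log(2) / math.log(v2 / v1)

# A nodule whose mass exactly doubles over 488 days:
print(doubling_time(100.0, 200.0, 488))  # -> 488.0
```

A shorter doubling time means faster growth, which is why the cancer group's 488-day mass doubling time versus 623 days in the non-cancer group is the most discriminative of the three measurements.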
Affiliation(s)
- Ri-qiang Liao
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- An-wei Li
- Guangzhou Shiyuan Electronics Co., Ltd, Guangzhou, China
- Hong-hong Yan
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Jun-tao Lin
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Si-yang Liu
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Jing-wen Wang
- Guangzhou Shiyuan Electronics Co., Ltd, Guangzhou, China
- Hong-bo Liu
- Guangzhou Shiyuan Electronics Co., Ltd, Guangzhou, China
- Yong-he Hou
- Yibicom Health Management Center, CVTE, Guangzhou, China
- Chao Song
- Yibicom Health Management Center, CVTE, Guangzhou, China
- Hui-fang Yang
- Yibicom Health Management Center, CVTE, Guangzhou, China
- Bin Li
- Automation Science and Engineering, South China University of Technology, Guangzhou, China
- Ben-yuan Jiang
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Song Dong
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Qiang Nie
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wen-zhao Zhong
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Yi-long Wu
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- *Correspondence: Xue-ning Yang; Yi-long Wu
- Xue-ning Yang
- Guangdong Lung Cancer Institute, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- *Correspondence: Xue-ning Yang; Yi-long Wu
9. Li Z, Yang L, Shu L, Yu Z, Huang J, Li J, Chen L, Hu S, Shu T, Yu G. Research on CT Lung Segmentation Method of Preschool Children based on Traditional Image Processing and ResUnet. Comput Math Methods Med 2022; 2022:7321330. [PMID: 36262868] [PMCID: PMC9576440] [DOI: 10.1155/2022/7321330] [Received: 07/11/2022] [Revised: 09/13/2022] [Accepted: 09/21/2022] [Indexed: 11/22/2022]
Abstract
Lung segmentation using computed tomography (CT) images is important for diagnosing various lung diseases. Currently, no lung segmentation method has been developed for the CT images of preschool children, which differ from those of adults in (1) the presence of artifacts caused by the shaking of children, (2) the loss of localized lung areas due to a failure to hold the breath, and (3) a smaller chest area on CT. To solve these problems, this study developed an automatic lung segmentation method combining traditional image processing with ResUnet, using the CT images of 60 children aged 0-6 years. First, the CT images were cropped and zoomed through morphological operations to concentrate the segmentation task on the chest area. Then, a ResUnet model with an improved loss was used for lung segmentation, and case-based connected-domain operations were performed to filter the segmentation results and improve segmentation accuracy. The proposed method demonstrated promising results on a test set of 12 cases, with average accuracy, Dice, precision, and recall of 0.9479, 0.9678, 0.9711, and 0.9715, respectively, the best performance among the seven models compared. This study shows that the proposed method can achieve good segmentation results on CT images of preschool children, laying a good foundation for the diagnosis of children's lung diseases.
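The Dice score reported above measures overlap between a predicted mask and the ground truth; a minimal sketch over sets of voxel coordinates (the set representation is an illustrative assumption — implementations usually operate on binary arrays):

```python
def dice(pred, target):
    """Dice coefficient between two binary masks, each given as a
    set of foreground voxel coordinates."""
    pred, target = set(pred), set(target)
    if not pred and not target:
        return 1.0  # two empty masks agree perfectly
    # Dice = 2|A ∩ B| / (|A| + |B|)
    return 2 * len(pred & target) / (len(pred) + len(target))

# Two 3-voxel masks sharing 2 voxels:
a = {(0, 0), (0, 1), (1, 0)}
b = {(0, 1), (1, 0), (1, 1)}
print(dice(a, b))  # 2*2/(3+3) -> 0.666...
```

Precision and recall follow the same set arithmetic (intersection over predicted size and over target size, respectively), which is why the four metrics in the abstract are typically reported together.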
Affiliation(s)
- Zheming Li
- Department of Data and Information, The Children's Hospital Zhejiang University School of Medicine, Hangzhou 310052, China
- Sino-Finland Joint AI Laboratory for Child Health of Zhejiang Province, Hangzhou 310052, China
- National Clinical Research Center for Child Health, Hangzhou 310052, China
- Polytechnic Institute, Zhejiang University, 866 Yuhangtang Rd, Hangzhou 310058, China
- Li Yang
- National Clinical Research Center for Child Health, Hangzhou 310052, China
- Department of Radiology, Children's Hospital, Zhejiang University School of Medicine, Hangzhou 310052, China
- Liqi Shu
- Department of Neurology, The Warren Alpert Medical School of Brown University, USA
- Zhuo Yu
- Huiying Medical Technology (Beijing), Beijing 100192, China
- Jian Huang
- Department of Data and Information, The Children's Hospital Zhejiang University School of Medicine, Hangzhou 310052, China
- Sino-Finland Joint AI Laboratory for Child Health of Zhejiang Province, Hangzhou 310052, China
- National Clinical Research Center for Child Health, Hangzhou 310052, China
- Jing Li
- Department of Data and Information, The Children's Hospital Zhejiang University School of Medicine, Hangzhou 310052, China
- Sino-Finland Joint AI Laboratory for Child Health of Zhejiang Province, Hangzhou 310052, China
- National Clinical Research Center for Child Health, Hangzhou 310052, China
- Lingdong Chen
- Department of Data and Information, The Children's Hospital Zhejiang University School of Medicine, Hangzhou 310052, China
- Sino-Finland Joint AI Laboratory for Child Health of Zhejiang Province, Hangzhou 310052, China
- National Clinical Research Center for Child Health, Hangzhou 310052, China
- Shasha Hu
- The Children's Hospital Zhejiang University School of Medicine, Hangzhou 310052, China
- Ting Shu
- National Institute of Hospital Administration, NHC, Beijing 100044, China
- Gang Yu
- Department of Data and Information, The Children's Hospital Zhejiang University School of Medicine, Hangzhou 310052, China
- Sino-Finland Joint AI Laboratory for Child Health of Zhejiang Province, Hangzhou 310052, China
- National Clinical Research Center for Child Health, Hangzhou 310052, China
- Polytechnic Institute, Zhejiang University, 866 Yuhangtang Rd, Hangzhou 310058, China