1
Levi M, Lazebnik T, Kushnir S, Yosef N, Shlomi D. Machine learning computational model to predict lung cancer using electronic medical records. Cancer Epidemiol 2024; 92:102631. [PMID: 39053365 DOI: 10.1016/j.canep.2024.102631]
Abstract
BACKGROUND Lung cancer (LC) screening using low-dose computed tomography (CT) is recommended according to standard risk criteria or personalized risk calculators. Machine learning (ML) models that can predict disease risk are an emerging method in medicine for identifying hidden associations that are personally unique. MATERIALS AND METHODS Using the tree-based pipeline optimization tool (TPOT), we developed an ML-based model, an ensemble of the Random Forest and XGBoost models, based on known risk factors for LC, as part of a larger trial of ML prediction using electronic medical records and chest CT. We used data from patients with LC and controls (1:2), all aged ≥ 35 years. We developed a model for all LC patients, as well as separate models for patients with and without a smoking history. We included age, gender, body mass index (BMI), smoking history, socioeconomic status (SES), history of chronic obstructive pulmonary disease (COPD)/emphysema/chronic bronchitis (CB), interstitial lung disease (ILD)/pulmonary fibrosis (PF), and family history of LC. RESULTS Of the 4076 patients, 1428 (35%) were in the LC group and 2648 (65%) in the control group. For the entire study population, our model achieved an accuracy of 71.2%, with a sensitivity of 69% and a positive predictive value (PPV) of 74%. Higher accuracy was achieved for the two subgroups: 74.8% (sensitivity 72%, PPV 76%) for the smoking cohort and 73.0% (sensitivity 76%, PPV 72%) for the never-smoking cohort. For the entire population and the smoker cohort, COPD/emphysema/CB was the most important contributor, followed by BMI and age, while in the never-smoking cohort, BMI, age, and SES were the most important contributors. CONCLUSION Known risk factors for LC can be used in ML models to modestly predict LC. Further studies are needed to confirm these results in new patients and to improve on them.
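As a concrete reminder of how the metrics reported in this abstract are defined, the sketch below computes accuracy, sensitivity, and positive predictive value from confusion-matrix counts; the counts used are hypothetical illustrations, not the study's data.

```python
# Minimal sketch: accuracy, sensitivity (recall), and positive predictive
# value (precision) from confusion-matrix counts. Counts are hypothetical.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, sensitivity, and PPV from a binary confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,   # fraction of all calls that are correct
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "ppv": tp / (tp + fp),           # precision among positive calls
    }

# Hypothetical counts for a case-control style split
m = classification_metrics(tp=69, fp=24, tn=140, fn=31)
print({k: round(v, 3) for k, v in m.items()})
```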
Affiliation(s)
- Matanel Levi
- Adelson School of Medicine, Ariel University, Ariel, Israel
- Teddy Lazebnik
- Department of Mathematics, Ariel University, Ariel, Israel; Department of Cancer Biology, Cancer Institute, University College London, London, UK
- Shiri Kushnir
- Research Authority, Rabin Medical Center, Beilinson Campus, Petah-Tiqwa, Israel
- Noga Yosef
- Research Unit, Dan, Petah-Tiqwa District, Clalit Health Services Community Division, Ramat-Gan, Israel
- Dekel Shlomi
- Adelson School of Medicine, Ariel University, Ariel, Israel; Pulmonary Clinic, Dan, Petah-Tiqwa District, Clalit Health Services Community Division, Ramat-Gan, Israel.
2
Gao C, Wu L, Wu W, Huang Y, Wang X, Sun Z, Xu M, Gao C. Deep learning in pulmonary nodule detection and segmentation: a systematic review. Eur Radiol 2024:10.1007/s00330-024-10907-0. [PMID: 38985185 DOI: 10.1007/s00330-024-10907-0]
Abstract
OBJECTIVES The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques, to fill methodological gaps and address biases in the existing literature. METHODS This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching the PubMed, Embase, Web of Science Core Collection, and Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess the risk of bias, adjusted with the Checklist for Artificial Intelligence in Medical Imaging. The study analyzed and extracted model performance, data sources, and task-focus information. RESULTS After screening, we included nine studies meeting our inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, the most common being the Lung Image Database Consortium Image Collection and Image Database Resource Initiative and Lung Nodule Analysis 2016. The studies focused on detection, segmentation, and other tasks, primarily utilizing convolutional neural networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient. CONCLUSIONS This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research. CLINICAL RELEVANCE STATEMENT Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility. KEY POINTS Deep learning shows potential in the detection and segmentation of pulmonary nodules. There are methodological gaps and biases present in the existing literature. Factors such as external validation and transparency affect clinical application.
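The Dice coefficient cited among the performance metrics can be stated compactly in code; this minimal sketch uses flat binary lists as masks, purely for illustration, not as any reviewed study's implementation.

```python
# Sketch of the Dice coefficient, a common overlap metric for
# segmentation: Dice = 2|A∩B| / (|A| + |B|) for binary masks.

def dice_coefficient(pred, truth) -> float:
    """Dice overlap between two equal-length binary masks (flat lists)."""
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    size_sum = sum(pred) + sum(truth)
    return 2.0 * intersection / size_sum if size_sum else 1.0  # empty masks agree

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```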
Affiliation(s)
- Chuan Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Linyu Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Wei Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Yichao Huang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Xinyue Wang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Zhichao Sun
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China.
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China.
- Maosheng Xu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China.
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China.
- Chen Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China.
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China.
3
Strong JS, Furube T, Takeuchi M, Kawakubo H, Maeda Y, Matsuda S, Fukuda K, Nakamura R, Kitagawa Y. Evaluating surgical expertise with AI-based automated instrument recognition for robotic distal gastrectomy. Ann Gastroenterol Surg 2024; 8:611-619. [PMID: 38957567 PMCID: PMC11216797 DOI: 10.1002/ags3.12784]
Abstract
Introduction The complexity of robotic distal gastrectomy (RDG) gives reason to assess surgeons' skill, as varying levels of surgical skill affect patient outcomes. We aim to investigate how a novel artificial intelligence (AI) model can be used to evaluate surgical skill in RDG by recognizing surgical instruments. Methods Fifty-five consecutive robotic surgical videos of RDG for gastric cancer were analyzed. We used DeepLab, a multi-stage temporal convolutional network, trained on 1234 manually annotated images. The model was then tested on 149 annotated images for accuracy. Deep learning metrics such as Intersection over Union (IoU) and accuracy were assessed, and a comparison between experienced and non-experienced surgeons based on instrument usage during infrapyloric lymph node dissection was performed. Results We annotated 540 Cadiere forceps, 898 fenestrated bipolars, 359 suction tubes, 307 Maryland bipolars, 688 Harmonic scalpels, 400 staplers, and 59 large clips. The average IoU and accuracy were 0.82 ± 0.12 and 87.2 ± 11.9%, respectively. Moreover, the percentage of each instrument's usage relative to overall infrapyloric lymphadenectomy duration, as predicted by the AI, was compared between groups. Use of the stapler and large clip was significantly shorter in the experienced group than in the non-experienced group. Conclusions This study is the first to report that surgical skill can be successfully and accurately determined by an AI model for RDG. Our AI provides a way to recognize and automatically generate instance segmentation of the surgical instruments present in this procedure. Use of this technology allows unbiased, more accessible assessment of RDG surgical skill.
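The Intersection over Union metric reported for the segmentation model is straightforward to define; this illustrative sketch represents each mask as a set of pixel indices (a simplifying assumption for brevity, not the authors' implementation).

```python
# Sketch of Intersection over Union (IoU), the overlap metric reported
# above: IoU = |A∩B| / |A∪B|. Masks are sets of pixel indices here.

def iou(pred: set, truth: set) -> float:
    """IoU between two pixel-index sets; defined as 1.0 when both are empty."""
    union = pred | truth
    if not union:
        return 1.0
    return len(pred & truth) / len(union)

pred  = {1, 2, 3, 4}   # pixels the model labels as instrument
truth = {3, 4, 5, 6}   # pixels in the manual annotation
print(iou(pred, truth))  # 2 shared pixels / 6 total ≈ 0.333
```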
Affiliation(s)
- James S. Strong
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Harvard College, Harvard University, Cambridge, Massachusetts, USA
- Tasuku Furube
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Masashi Takeuchi
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yusuke Maeda
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Satoru Matsuda
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Kazumasa Fukuda
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Rieko Nakamura
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Yuko Kitagawa
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
4
Zhang R, Wei Y, Wang D, Chen B, Sun H, Lei Y, Zhou Q, Luo Z, Jiang L, Qiu R, Shi F, Li W. Deep learning for malignancy risk estimation of incidental sub-centimeter pulmonary nodules on CT images. Eur Radiol 2024; 34:4218-4229. [PMID: 38114849 DOI: 10.1007/s00330-023-10518-1]
Abstract
OBJECTIVES To establish deep learning models for malignancy risk estimation of sub-centimeter pulmonary nodules incidentally detected by chest CT and managed in clinical settings. MATERIALS AND METHODS Four deep learning models were trained using CT images of sub-centimeter pulmonary nodules from West China Hospital, internally tested, and externally validated on three cohorts. The four models respectively learned 3D deep features from the baseline whole lung region, the baseline image patch where the nodule was located, the baseline nodule box, and the baseline plus follow-up nodule boxes. All regions of interest were automatically segmented, except that the nodule boxes were additionally checked manually. The performance of the models was compared with each other and with that of three respiratory clinicians. RESULTS There were 1822 nodules (981 malignant) in the training set, 806 (416 malignant) in the testing set, and 357 (253 malignant) in the external sets combined. The area under the curve (AUC) in the testing set was 0.754, 0.855, 0.928, and 0.942, respectively, for the models derived from the baseline whole lung, image patch, nodule box, and baseline plus follow-up nodule boxes. When the baseline models were externally validated (follow-up images were not available), the nodule-box model outperformed the other two, with AUCs of 0.808, 0.848, and 0.939 in the three external datasets. The resident, junior, and senior clinicians achieved an accuracy of 67.0%, 82.5%, and 90.0%, respectively, in the testing set. The follow-up model performed comparably to the senior clinician. CONCLUSION Deep learning algorithms solely mining nodule information can efficiently predict the malignancy of incidental sub-centimeter pulmonary nodules. CLINICAL RELEVANCE STATEMENT The established models may be valuable for supporting clinicians in routine clinical practice, potentially reducing both the number of unnecessary examinations and delays in diagnosis.
KEY POINTS • According to different regions of interest, four deep learning models were developed and compared to evaluate the malignancy of sub-centimeter pulmonary nodules by CT images. • The models derived from baseline nodule box or baseline plus follow-up nodule boxes demonstrated sufficient diagnostic accuracy (86.4% and 90.4% in the testing set), outperforming the respiratory resident (67.0%) and junior clinician (82.5%). • The proposed deep learning methods may aid clinicians in optimizing follow-up recommendations for sub-centimeter pulmonary nodules and may lead to fewer unnecessary diagnostic interventions.
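The AUC values quoted in this abstract can be computed directly from model scores via the rank-sum (Mann-Whitney) identity; the scores and labels below are hypothetical, chosen only to make the calculation concrete.

```python
# Sketch of AUC via the Mann-Whitney identity: the probability that a
# randomly chosen positive case scores higher than a randomly chosen
# negative case, counting ties as one half.

def auc(scores, labels) -> float:
    """AUC from raw scores and 0/1 labels (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores for 3 malignant (1) and 2 benign (0) nodules
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
print(auc(scores, labels))  # 5 of 6 positive/negative pairs ranked correctly
```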
Affiliation(s)
- Rui Zhang
- Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
- General Practice Medical Center, West China Hospital, Sichuan University, Chengdu, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- Denian Wang
- Precision Medicine Center, Precision Medicine Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, China
- Bojiang Chen
- Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
- Huaiqiang Sun
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Yi Lei
- General Practice Medical Center, West China Hospital, Sichuan University, Chengdu, China
- Qing Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- Zhuang Luo
- Department of Pulmonary and Critical Care Medicine, the First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- Li Jiang
- Department of Respiratory and Critical Care Medicine, the Affiliated Hospital of North Sichuan Medical College, Nanchong, Sichuan, China
- Rong Qiu
- Department of Respiratory and Critical Care Medicine, Suining Central Hospital, Suining, Sichuan, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China.
- Weimin Li
- Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China.
5
Su Y, Xia X, Sun R, Yuan J, Hua Q, Han B, Gong J, Nie S. Res-TransNet: A Hybrid Deep Learning Network for Predicting Pathological Subtypes of Lung Adenocarcinoma in CT Images. J Imaging Inform Med 2024:10.1007/s10278-024-01149-z. [PMID: 38861071 DOI: 10.1007/s10278-024-01149-z]
Abstract
This study aims to develop a CT-based hybrid deep learning network to predict pathological subtypes of early-stage lung adenocarcinoma by integrating a residual network (ResNet) with a Vision Transformer (ViT). A total of 1411 pathologically confirmed ground-glass nodules (GGNs) retrospectively collected from two centers were used as internal and external validation sets for model development. 3D ResNet and ViT were applied to investigate two deep learning frameworks to classify three subtypes of lung adenocarcinoma, namely invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma, and adenocarcinoma in situ. To further improve model performance, four Res-TransNet-based models were proposed by integrating ResNet and ViT with different ensemble learning strategies. Two classification tasks, predicting IAC vs. non-IAC (Task 1) and classifying the three subtypes (Task 2), were designed and conducted in this study. For Task 1, the optimal Res-TransNet model yielded area under the receiver operating characteristic curve (AUC) values of 0.986 and 0.933 on the internal and external validation sets, significantly higher than those of the ResNet and ViT models (p < 0.05). For Task 2, the optimal fusion model generated an accuracy and weighted F1 score of 68.3% and 66.1% on the external validation set. The experimental results demonstrate that Res-TransNet can significantly increase classification performance compared with the two basic models and has the potential to assist radiologists in precision diagnosis.
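One simple ensemble strategy of the kind Res-TransNet explores is soft voting over the class probabilities of the two branches; the sketch below is an assumption-laden illustration (the class order, weights, and function names are invented), not the paper's actual fusion scheme.

```python
# Illustrative soft-voting fusion of two subtype-probability vectors,
# e.g. from a ResNet branch and a ViT branch. Class order is assumed.

SUBTYPES = ["AIS", "MIA", "IAC"]  # in situ / minimally invasive / invasive

def soft_vote(p_resnet, p_vit, w=0.5):
    """Weighted average of two probability vectors, then argmax."""
    fused = [w * a + (1 - w) * b for a, b in zip(p_resnet, p_vit)]
    label = SUBTYPES[max(range(len(fused)), key=fused.__getitem__)]
    return label, fused

label, fused = soft_vote([0.2, 0.5, 0.3], [0.1, 0.3, 0.6])
print(label, [round(x, 2) for x in fused])  # fused probs favor IAC
```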
Affiliation(s)
- Yue Su
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xianwu Xia
- Department of Oncology Intervention, Municipal Hospital Affiliated of Taizhou University, Zhejiang, Taizhou, 318000, China
- Rong Sun
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Jianjun Yuan
- Department of Oncology Intervention, Municipal Hospital Affiliated of Taizhou University, Zhejiang, Taizhou, 318000, China
- Qianjin Hua
- Department of Oncology Intervention, Municipal Hospital Affiliated of Taizhou University, Zhejiang, Taizhou, 318000, China
- Baosan Han
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
- Department of Breast Surgery, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China.
- Jing Gong
- Department of Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Shanghai, 200032, China.
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China.
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China.
6
Liu J, Qi L, Xu Q, Chen J, Cui S, Li F, Wang Y, Cheng S, Tan W, Zhou Z, Wang J. A Self-supervised Learning-Based Fine-Grained Classification Model for Distinguishing Malignant From Benign Subcentimeter Solid Pulmonary Nodules. Acad Radiol 2024:S1076-6332(24)00287-3. [PMID: 38777719 DOI: 10.1016/j.acra.2024.05.002]
Abstract
RATIONALE AND OBJECTIVES Diagnosing subcentimeter solid pulmonary nodules (SSPNs) remains challenging in clinical practice. Deep learning may perform better than conventional methods in differentiating benign and malignant pulmonary nodules. This study aimed to develop and validate a model for differentiating malignant and benign SSPNs using CT images. MATERIALS AND METHODS This retrospective study included consecutive patients with SSPNs detected between January 2015 and October 2021 as an internal dataset. Malignancy was confirmed pathologically; benignity was confirmed pathologically or via follow-up evaluations. The SSPNs were segmented manually. A self-supervision pre-training-based fine-grained network was developed for predicting SSPN malignancy. The pre-trained model was established using data from the National Lung Screening Trial, Lung Nodule Analysis 2016, and a database of 5478 pulmonary nodules from the previous study, with subsequent fine-tuning using the internal dataset. The model's efficacy was investigated using an external cohort from another center, and its accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were determined. RESULTS Overall, 1276 patients (mean age, 56 ± 10 years; 497 males) with 1389 SSPNs (mean diameter, 7.5 ± 2.0 mm; 625 benign) were enrolled. The internal dataset was specifically enriched for malignancy. The model's performance in the internal testing set (316 SSPNs) was: AUC, 0.964 (95% confidence interval [CI]: 0.942-0.986); accuracy, 0.934; sensitivity, 0.965; and specificity, 0.908. The model's performance in the external test set (202 SSPNs) was: AUC, 0.945 (95% CI: 0.910-0.979); accuracy, 0.911; sensitivity, 0.977; and specificity, 0.860. CONCLUSION This deep learning model was robust and exhibited good performance in predicting the malignancy of SSPNs, which could help optimize patient management.
Affiliation(s)
- Jianing Liu
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Linlin Qi
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Qian Xu
- Department of Computed Tomography and Magnetic Resonance, The Fourth Hospital of Hebei Medical University, Shijiazhuang, He Bei, China
- Jiaqi Chen
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Shulei Cui
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Fenglan Li
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Yawen Wang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Sainan Cheng
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China
- Weixiong Tan
- Beijing Deepwise & League of PhD Technology Co. Ltd, Beijing, China
- Zhen Zhou
- Beijing Deepwise & League of PhD Technology Co. Ltd, Beijing, China
- Jianwei Wang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 17 Panjiayuan Nanli, Chaoyang District, Beijing 100021, China.
7
Lococo F, Ghaly G, Chiappetta M, Flamini S, Evangelista J, Bria E, Stefani A, Vita E, Martino A, Boldrini L, Sassorossi C, Campanella A, Margaritora S, Mohammed A. Implementation of Artificial Intelligence in Personalized Prognostic Assessment of Lung Cancer: A Narrative Review. Cancers (Basel) 2024; 16:1832. [PMID: 38791910 PMCID: PMC11119930 DOI: 10.3390/cancers16101832]
Abstract
Artificial Intelligence (AI) has revolutionized the management of non-small-cell lung cancer (NSCLC) by enhancing different aspects, including staging, prognosis assessment, treatment prediction, response evaluation, recurrence/prognosis prediction, and personalized prognostic assessment. AI algorithms may accurately classify NSCLC stages using machine learning techniques and deep imaging data analysis. This could potentially improve precision and efficiency in staging, facilitating personalized treatment decisions. Furthermore, there are data suggesting the potential application of AI-based models in predicting prognosis in terms of survival rates and disease progression by integrating clinical, imaging and molecular data. In the present narrative review, we will analyze the preliminary studies reporting on how AI algorithms could predict responses to various treatment modalities, such as surgery, radiotherapy, chemotherapy, targeted therapy, and immunotherapy. There is robust evidence suggesting that AI also plays a crucial role in predicting the likelihood of tumor recurrence after surgery and the pattern of failure, which has significant implications for tailoring adjuvant treatments. The successful implementation of AI in personalized prognostic assessment requires the integration of different data sources, including clinical, molecular, and imaging data. Machine learning (ML) and deep learning (DL) techniques enable AI models to analyze these data and generate personalized prognostic predictions, allowing for a precise and individualized approach to patient care. However, challenges relating to data quality, interpretability, and the ability of AI models to generalize need to be addressed. Collaboration among clinicians, data scientists, and regulators is critical for the responsible implementation of AI and for maximizing its benefits in providing a more personalized prognostic assessment. 
Continued research, validation, and collaboration are essential to fully exploit the potential of AI in NSCLC management and improve patient outcomes. Herein, we summarize the state of the art of AI applications in lung cancer for predicting staging, prognosis, and pattern of recurrence after treatment, in order to provide readers a comprehensive overview of this challenging issue.
Affiliation(s)
- Filippo Lococo
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Thoracic Surgery, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy; (S.F.); (A.C.)
- Galal Ghaly
- Faculty of Medicine and Surgery, Thoracic Surgery Unit, Cairo University, Giza 12613, Egypt; (G.G.); (A.M.)
- Marco Chiappetta
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Thoracic Surgery, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy; (S.F.); (A.C.)
- Sara Flamini
- Thoracic Surgery, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy; (S.F.); (A.C.)
- Jessica Evangelista
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Thoracic Surgery, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy; (S.F.); (A.C.)
- Emilio Bria
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Medical Oncology, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy
- Alessio Stefani
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Medical Oncology, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy
- Emanuele Vita
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Medical Oncology, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy
- Antonella Martino
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Radiotherapy Unit, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy
- Luca Boldrini
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Radiotherapy Unit, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy
- Carolina Sassorossi
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Thoracic Surgery, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy; (S.F.); (A.C.)
- Annalisa Campanella
- Thoracic Surgery, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy; (S.F.); (A.C.)
- Stefano Margaritora
- Faculty of Medicine and Surgery, Catholic University of Sacred Heart, 00168 Rome, Italy; (M.C.); (J.E.); (E.B.); (A.S.); (E.V.); (A.M.); (L.B.); (C.S.); (S.M.)
- Thoracic Surgery, A. Gemelli University Hospital Foundation IRCCS, 00168 Rome, Italy; (S.F.); (A.C.)
- Abdelrahman Mohammed
- Faculty of Medicine and Surgery, Thoracic Surgery Unit, Cairo University, Giza 12613, Egypt; (G.G.); (A.M.)
8
Pan Z, Hu G, Zhu Z, Tan W, Han W, Zhou Z, Song W, Yu Y, Song L, Jin Z. Predicting Invasiveness of Lung Adenocarcinoma at Chest CT with Deep Learning Ternary Classification Models. Radiology 2024; 311:e232057. [PMID: 38591974 DOI: 10.1148/radiol.232057]
Abstract
Background Preoperative discrimination of preinvasive, minimally invasive, and invasive adenocarcinoma at CT informs clinical management decisions but may be challenging for classifying pure ground-glass nodules (pGGNs). Deep learning (DL) may improve ternary classification. Purpose To determine whether a strategy that includes an adjudication approach can enhance the performance of DL ternary classification models in predicting the invasiveness of adenocarcinoma at chest CT and maintain performance in classifying pGGNs. Materials and Methods In this retrospective study, six ternary models for classifying preinvasive, minimally invasive, and invasive adenocarcinoma were developed using a multicenter data set of lung nodules. The DL-based models were progressively modified through framework optimization, joint learning, and an adjudication strategy (simulating a multireader approach to resolving discordant nodule classifications), integrating two binary classification models with a ternary classification model to resolve discordant classifications sequentially. The six ternary models were then tested on an external data set of pGGNs imaged between December 2019 and January 2021. Diagnostic performance including accuracy, specificity, and sensitivity was assessed. The χ2 test was used to compare model performance in different subgroups stratified by clinical confounders. Results A total of 4929 nodules from 4483 patients (mean age, 50.1 years ± 9.5 [SD]; 2806 female) were divided into training (n = 3384), validation (n = 579), and internal (n = 966) test sets. A total of 361 pGGNs from 281 patients (mean age, 55.2 years ± 11.1 [SD]; 186 female) formed the external test set. The proposed strategy improved DL model performance in external testing (P < .001). 
For classifying minimally invasive adenocarcinoma, the accuracy was 85% and 79%, sensitivity was 75% and 63%, and specificity was 89% and 85% for the model with adjudication (model 6) and the model without (model 3), respectively. Model 6 showed a relatively narrow range (maximum minus minimum) across diagnostic indexes (accuracy, 1.7%; sensitivity, 7.3%; specificity, 0.9%) compared with the other models (accuracy, 0.6%-10.8%; sensitivity, 14%-39.1%; specificity, 5.5%-17.9%). Conclusion Combining framework optimization, joint learning, and an adjudication approach improved DL classification of adenocarcinoma invasiveness at chest CT. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Sohn and Fields in this issue.
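The adjudication strategy described above (binary "specialist" models resolving low-confidence ternary calls) can be illustrated with a minimal sketch. The function names, the confidence margin, and the resolution order below are our own illustrative assumptions, not the rules actually used in the paper:

```python
# Hypothetical sketch of an adjudication cascade for ternary nodule
# classification (preinvasive / minimally invasive / invasive).
# The margin, thresholds, and resolution order are illustrative only.

LABELS = ("preinvasive", "minimally_invasive", "invasive")

def adjudicate(p_ternary, p_pre_vs_rest, p_inv_vs_rest, margin=0.1):
    """Resolve a ternary prediction with two binary adjudicator models.

    p_ternary     -- (p_pre, p_mia, p_inv) from the 3-class model
    p_pre_vs_rest -- probability of 'preinvasive' from binary model A
    p_inv_vs_rest -- probability of 'invasive' from binary model B
    margin        -- confidence gap below which we adjudicate
    """
    ranked = sorted(range(3), key=lambda i: p_ternary[i], reverse=True)
    top, second = ranked[0], ranked[1]
    # Confident ternary call: accept it directly.
    if p_ternary[top] - p_ternary[second] >= margin:
        return LABELS[top]
    # Otherwise defer sequentially to the binary specialists.
    if p_pre_vs_rest >= 0.5:
        return "preinvasive"
    if p_inv_vs_rest >= 0.5:
        return "invasive"
    return "minimally_invasive"
```

For example, a ternary output of (0.4, 0.35, 0.25) is a discordant call under a 0.1 margin and would be passed to the binary models for resolution.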
Affiliation(s)
- Zhengsong Pan, Ge Hu, Zhenchen Zhu, Weixiong Tan, Wei Han, Zhen Zhou, Wei Song, Yizhou Yu, Lan Song, Zhengyu Jin
- From the Department of Radiology (Z.P., Z. Zhu, W.S., L.S., Z.J.), Medical Research Center (G.H.), State Key Laboratory of Complex Severe and Rare Disease (G.H.), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuaifuyuan, Dongcheng District, Beijing 100730, China; 4 + 4 Medical Doctor Program (Z.P., Z. Zhu), Department of Epidemiology and Health Statistics (W.H.), Institute of Basic Medicine Sciences (W.H.), Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; Deepwise AI Laboratory, Beijing Deepwise & League of PhD Technology, Beijing, China (W.T., Z. Zhou, Y.Y.); and Department of Computer Science, The University of Hong Kong, Hong Kong, China (Y.Y.)
|
9
|
Yang X, Chu XP, Huang S, Xiao Y, Li D, Su X, Qi YF, Qiu ZB, Wang Y, Tang WF, Wu YL, Zhu Q, Liang H, Zhong WZ. A novel image deep learning-based sub-centimeter pulmonary nodule management algorithm to expedite resection of the malignant and avoid over-diagnosis of the benign. Eur Radiol 2024; 34:2048-2061. [PMID: 37658883 DOI: 10.1007/s00330-023-10026-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 05/08/2023] [Accepted: 06/26/2023] [Indexed: 09/05/2023]
Abstract
OBJECTIVES With the popularization of chest computed tomography (CT) screening, more sub-centimeter (≤ 1 cm) pulmonary nodules (SCPNs) require further diagnostic workup. This represents an important opportunity to optimize the SCPN management algorithm and avoid a "one-size-fits-all" approach. One critical problem is how to learn the discriminative multi-view characteristics and the unique context of each SCPN. METHODS Here, we propose a multi-view coupled self-attention module (MVCS) to capture the global spatial context of the CT image by modeling the association order of space and dimension. Compared with existing self-attention methods, MVCS requires less memory and lower computational complexity, unearths dimension correlations that previous methods have not found, and is easy to integrate with other frameworks. RESULTS In total, the public LUNA16 dataset (from LIDC-IDRI), 1319 SCPNs from 1069 patients presenting to a major referral center, and 160 SCPNs from 137 patients from three other major centers were analyzed to pre-train, train, and validate the model. Experimental results showed that the model outperforms state-of-the-art models in terms of accuracy and stability and is comparable to human experts in classifying precancerous lesions and invasive adenocarcinoma. We also provide a fusion MVCS network (MVCSN) that combines the CT image with the clinical characteristics and radiographic features of patients. CONCLUSION This tool may ultimately aid in expediting resection of malignant SCPNs and avoiding over-diagnosis of benign ones, resulting in improved management outcomes. CLINICAL RELEVANCE STATEMENT In the diagnosis of sub-centimeter lung adenocarcinoma, the fusion MVCSN can help doctors improve work efficiency and, to a certain extent, guide their treatment decisions.
KEY POINTS • Advances in computed tomography (CT) not only increase the number of nodules detected but also mean that smaller nodules, such as sub-centimeter pulmonary nodules (SCPNs), are identified. • We propose a multi-view coupled self-attention module (MVCS) that models spatial and dimensional correlations sequentially to learn global spatial contexts, which outperforms other attention mechanisms. • MVCS requires less memory and computation than existing self-attention methods when dealing with 3D medical image data. Additionally, it reaches promising accuracy for SCPN malignancy evaluation and has a lower training cost than other models.
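The "spatial then dimensional" attention idea can be sketched in a few lines. This is a loose illustration only: it uses identity Q/K/V projections and plain scaled dot-product attention, whereas the actual MVCS module has learned weights and a specific 3D coupling scheme not reproduced here:

```python
# Illustrative sketch: attend over spatial positions, then over feature
# dimensions, in sequence. Identity projections for brevity; the real
# module learns Q/K/V weights and operates on 3D volumes.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention over the first axis of x."""
    d = x.shape[-1]
    scores = softmax(x @ x.T / np.sqrt(d), axis=-1)
    return scores @ x

def coupled_attention(feat):
    """Sequential 'coupled' attention: spatial axis, then channel axis."""
    spatial = self_attention(feat)             # attend across positions
    dimensional = self_attention(spatial.T).T  # attend across channels
    return dimensional
```

The output keeps the input's (positions, channels) shape, so the block can be dropped into an existing feature pipeline, which matches the abstract's claim that MVCS is easy to integrate with other frameworks.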
Affiliation(s)
- Xiongwen Yang, Xiang-Peng Chu, Yi-Fan Qi, Zhen-Bin Qiu, Wen-Zhao Zhong - School of Medicine, South China University of Technology, Guangzhou, China; Guangdong Lung Cancer Institute, Guangdong Provincial Key Laboratory of Translational Medicine in Lung Cancer, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, 106 Zhongshan Er Rd, Guangzhou, 510080, China
- Shaohong Huang, Yi Xiao - Department of Cardio-Thoracic Surgery, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, Guangdong, China
- Dantong Li, Huiying Liang - Medical Big Data Center, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou, China; Guangdong Cardiovascular Institute, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Xiaoyang Su - Department of Thoracic Surgery, Maoming City People's Hospital, Maoming, China
- Yanqing Wang - Department of Gynecology, Renmin Hospital of Wuhan University, Wuhan, China
- Wen-Fang Tang - Department of Cardio-Thoracic Surgery, Zhongshan City People's Hospital, Zhongshan, China
- Yi-Long Wu - Guangdong Lung Cancer Institute, Guangdong Provincial Key Laboratory of Translational Medicine in Lung Cancer, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, 106 Zhongshan Er Rd, Guangzhou, 510080, China
- Qikui Zhu - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106, USA
|
10
|
Zhang Y, Sun B, Yu Y, Lu J, Lou Y, Qian F, Chen T, Zhang L, Yang J, Zhong H, Wu L, Han B. Multimodal fusion of liquid biopsy and CT enhances differential diagnosis of early-stage lung adenocarcinoma. NPJ Precis Oncol 2024; 8:50. [PMID: 38409480 PMCID: PMC10897137 DOI: 10.1038/s41698-024-00551-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 02/15/2024] [Indexed: 02/28/2024] Open
Abstract
This research explores the potential of multimodal fusion for the differential diagnosis of early-stage lung adenocarcinoma (LUAD) (tumor sizes < 2 cm). It combines liquid biopsy biomarkers, specifically extracellular vesicle long RNA (evlRNA), with computed tomography (CT) attributes. The fusion model achieves an area under the receiver operating characteristic curve (AUC) of 91.9% for the four-class classification of adenocarcinoma, along with a benign-malignant AUC of 94.8% (sensitivity: 89.1%, specificity: 94.3%). These outcomes outperform the diagnostic capabilities of the single-modal models and of human experts. A comprehensive SHapley Additive exPlanations (SHAP) analysis is provided to offer deep insights into model predictions. Our findings reveal the complementary interplay between evlRNA and image-based characteristics, underscoring the significance of integrating diverse modalities in diagnosing early-stage LUAD.
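The abstract does not specify the fusion mechanism, but the general idea of combining modality-level outputs can be sketched as simple late fusion of class probabilities; the function name, weighting scheme, and renormalization below are our own illustrative assumptions:

```python
# Illustrative late fusion: weighted average of class-probability
# vectors from two modality-specific models (e.g. evlRNA-based and
# CT-based), renormalized to sum to 1.

def late_fusion(p_rna, p_ct, w=0.5):
    """Blend two probability vectors with weight w on the first model."""
    assert len(p_rna) == len(p_ct)
    fused = [w * a + (1 - w) * b for a, b in zip(p_rna, p_ct)]
    s = sum(fused)
    return [p / s for p in fused]
```

In practice the fusion weight (or a learned fusion layer) would be tuned on validation data rather than fixed at 0.5.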
Affiliation(s)
- Yanwei Zhang, Jun Lu, Yuqing Lou, Fangfei Qian, Hua Zhong, Baohui Han - Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Beibei Sun - Institute for Thoracic Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Tianxiang Chen - Shanghai Lung Cancer Center, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Li Zhang - Dianei Technology, Shanghai, China
- Jiancheng Yang - Dianei Technology, Shanghai, China; Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
- Ligang Wu - State Key Laboratory of Molecular Biology, Shanghai Key Laboratory of Molecular Andrology, Center for Excellence in Molecular Cell Science, Shanghai Institute of Biochemistry and Cell Biology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, China
|
11
|
Boubnovski Martell M, Linton-Reid K, Hindocha S, Chen M, Moreno P, Álvarez-Benito M, Salvatierra Á, Lee R, Posma JM, Calzado MA, Aboagye EO. Deep representation learning of tissue metabolome and computed tomography annotates NSCLC classification and prognosis. NPJ Precis Oncol 2024; 8:28. [PMID: 38310164 PMCID: PMC10838282 DOI: 10.1038/s41698-024-00502-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 01/04/2024] [Indexed: 02/05/2024] Open
Abstract
The rich chemical information from tissue metabolomics provides a powerful means to elaborate tissue physiology or tumor characteristics at cellular and tumor microenvironment levels. However, the process of obtaining such information requires invasive biopsies, is costly, and can delay clinical patient management. Conversely, computed tomography (CT) is a clinical standard of care but does not intuitively harbor histological or prognostic information. Furthermore, the ability to embed metabolome information into CT and subsequently use the learned representation for classification or prognosis has yet to be described. This study develops a deep learning-based framework, tissue-metabolomic-radiomic-CT (TMR-CT), by combining 48 paired CT images and tumor/normal tissue metabolite intensities to generate ten image embeddings that infer metabolite-derived representations from CT alone. In clinical NSCLC settings, we ascertain whether TMR-CT yields an enhanced feature generation model for histology classification and prognosis tasks in an unseen international CT dataset of 742 patients. TMR-CT non-invasively determines histological classes (adenocarcinoma/squamous cell carcinoma) with an F1-score of 0.78 and further predicts patients' prognosis with a c-index of 0.72, surpassing the performance of radiomics models and of deep learning on single-modality CT feature extraction. Additionally, our work shows the potential to generate informative, biology-inspired, CT-led features to explore connections between hard-to-obtain tissue metabolic profiles and routine lesion-derived image data.
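The c-index of 0.72 reported above measures how well predicted risk scores rank patients by observed survival. A minimal pure-Python sketch of Harrell's concordance index (variable names ours; ties in time and censoring subtleties are simplified):

```python
# Harrell's concordance index: over all comparable patient pairs
# (where patient i had an observed event before patient j's time),
# the fraction in which the higher predicted risk matches the
# earlier event. Tied risks count as half-concordant.

def concordance_index(times, events, risk):
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # comparable pair
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den
```

A value of 1.0 means perfect ranking, 0.5 is chance level, so the paper's 0.72 indicates a moderately strong prognostic signal.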
Affiliation(s)
- Sumeet Hindocha - Early Diagnosis and Detection Centre, National Institute for Health and Care Research Biomedical Research Centre at the Royal Marsden and Institute of Cancer Research, London, SW3 6JJ, UK
- Mitchell Chen, Joram M Posma, Eric O Aboagye - Imperial College London Hammersmith Campus, London, SW7 2AZ, UK
- Paula Moreno - Instituto Maimónides de Investigación Biomédica de Córdoba (IMIBIC), Córdoba, 14004, Spain; Departamento de Cirugía Toráxica y Trasplante de Pulmón, Hospital Universitario Reina Sofía, Córdoba, 14014, Spain
- Marina Álvarez-Benito, Ángel Salvatierra - Instituto Maimónides de Investigación Biomédica de Córdoba (IMIBIC), Córdoba, 14004, Spain; Unidad de Radiodiagnóstico y Cáncer de Mama, Hospital Universitario Reina Sofía, Córdoba, 14004, Spain
- Richard Lee - Early Diagnosis and Detection Centre, National Institute for Health and Care Research Biomedical Research Centre at the Royal Marsden and Institute of Cancer Research, London, SW3 6JJ, UK; National Heart and Lung Institute, Imperial College London, Guy Scadding Building, Dovehouse Street, London, SW3 6LY, UK
- Marco A Calzado - Instituto Maimónides de Investigación Biomédica de Córdoba (IMIBIC), Córdoba, 14004, Spain; Departamento de Biología Celular, Fisiología e Inmunología, Universidad de Córdoba, Córdoba, 14014, Spain
|
12
|
Aamir A, Iqbal A, Jawed F, Ashfaque F, Hafsa H, Anas Z, Oduoye MO, Basit A, Ahmed S, Abdul Rauf S, Khan M, Mansoor T. Exploring the current and prospective role of artificial intelligence in disease diagnosis. Ann Med Surg (Lond) 2024; 86:943-949. [PMID: 38333305 PMCID: PMC10849462 DOI: 10.1097/ms9.0000000000001700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 12/28/2023] [Indexed: 02/10/2024] Open
Abstract
Artificial intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems, providing assistance in a variety of patient care and health systems. The aim of this review is to contribute valuable insights to the ongoing discourse on the transformative potential of AI in healthcare, providing a nuanced understanding of its current applications, future possibilities, and associated challenges. The authors conducted a literature search on the current role of AI in disease diagnosis and its possible future applications using PubMed, Google Scholar, and ResearchGate, covering the past 10 years. Our investigation revealed that AI, encompassing machine-learning and deep-learning techniques, has become integral to healthcare, facilitating immediate access to evidence-based guidelines, the latest medical literature, and tools for generating differential diagnoses. However, our research also acknowledges the limitations of current AI methodologies in disease diagnosis and explores uncertainties and obstacles associated with the complete integration of AI into clinical practice. This review has highlighted the critical significance of integrating AI into the medical healthcare framework and meticulously examined the evolutionary trajectory of healthcare-oriented AI from its inception, delving into the current state of development and projecting the extent of reliance on AI in the future. Central to this study is the exploration of how the strategic integration of AI can accelerate the diagnostic process, heighten diagnostic accuracy, and enhance overall operational efficiency, while relieving the burdens faced by healthcare practitioners.
Affiliation(s)
- Ali Aamir, Fareeha Jawed, Faiza Ashfaque, Hafiza Hafsa, Zahra Anas, Abdul Basit, Shaheer Ahmed - Department of Medicine, Dow University of Health Sciences
- Arham Iqbal - Department of Medicine, Dow International Medical College, Karachi, Pakistan
- Malik Olatunde Oduoye - Department of Research, Medical Research Circle, Bukavu, Democratic Republic of Congo
- Mushkbar Khan - Liaquat National Hospital and Medical College, Pakistan
|
13
|
Zhang H, Deng Y, Xiaojie M, Zou Q, Liu H, Tang N, Luo Y, Xiang X. CT radiomics for predicting the prognosis of patients with stage II rectal cancer during the three-year period after surgery, chemotherapy and radiotherapy. Heliyon 2024; 10:e23923. [PMID: 38223741 PMCID: PMC10787243 DOI: 10.1016/j.heliyon.2023.e23923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Revised: 11/29/2023] [Accepted: 12/15/2023] [Indexed: 01/16/2024] Open
Abstract
Objective Pre-treatment enhanced CT image data were used to train and build models to predict treatment response in non-small cell lung cancer (NSCLC) after conventional radiotherapy and chemotherapy, using two classification algorithms, logistic regression (LR) and Gaussian naive Bayes (GNB). Methods In this study, we used pre-treatment enhanced CT image data for region of interest (ROI) sketching and feature extraction. We utilized the least absolute shrinkage and selection operator (LASSO) mutual confidence method for feature screening. We pre-screened the logistic regression (LR) and Gaussian naive Bayes (GNB) classification algorithms and trained models on the screened features. We plotted 5-fold and 10-fold cross-validated receiver operating characteristic (ROC) curves to calculate the area under the curve (AUC). We performed DeLong's test for validation and plotted calibration curves and decision curves to assess model performance. Results A total of 102 patients were included in this study. In a comparative analysis of the two models, LR had only slightly lower specificity than GNB, and higher sensitivity, accuracy, AUC, precision, and F1 score (training set accuracy: 0.787, AUC: 0.851; test set accuracy: 0.772, AUC: 0.849); the LR model also performed better on both the decision curve and the calibration curve. Conclusion CT can be used for efficacy prediction after radiotherapy and chemotherapy in NSCLC patients. When computing speed is not a concern, LR is more suitable for predicting whether NSCLC patients achieve remission.
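The AUC values compared above can be computed without plotting: the ROC AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch, independent of any particular classifier:

```python
# Rank-based ROC AUC: the fraction of positive/negative pairs in which
# the positive case receives the higher score (ties count as 0.5).
# Equivalent to the Mann-Whitney U statistic scaled to [0, 1].

def roc_auc(y_true, scores):
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used inside each cross-validation fold; this sketch just makes the underlying quantity concrete.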
Affiliation(s)
- Hanjing Zhang, M.A. Xiaojie, Qian Zou, Huanhui Liu, Ni Tang, Yuanyuan Luo, Xuejing Xiang - Department of Oncology, Affiliated Hospital of Chuanbei Medical College, Nanchong, Sichuan Province, 637000, China
- Yu Deng - The Affiliated Cancer Hospital of Guizhou Medical University, GuiYang, Guizhou Province, 550000, China
|
14
|
Qi K, Wang K, Wang X, Zhang YD, Lin G, Zhang X, Liu H, Huang W, Wu J, Zhao K, Liu J, Li J, Zhang X. Lung-PNet: An Automated Deep Learning Model for the Diagnosis of Invasive Adenocarcinoma in Pure Ground-Glass Nodules on Chest CT. AJR Am J Roentgenol 2024; 222:e2329674. [PMID: 37493322 DOI: 10.2214/ajr.23.29674] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/27/2023]
Abstract
BACKGROUND. Pure ground-glass nodules (pGGNs) on chest CT representing invasive adenocarcinoma (IAC) warrant lobectomy with lymph node resection. For pGGNs representing other entities, close follow-up or sublobar resection without node dissection may be appropriate. OBJECTIVE. The purpose of this study was to develop and validate an automated deep learning model for differentiation of pGGNs on chest CT representing IAC from those representing atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), and minimally invasive adenocarcinoma (MIA). METHODS. This retrospective study included 402 patients (283 women, 119 men; mean age, 53.2 years) with a total of 448 pGGNs on noncontrast chest CT that were resected from January 2019 to June 2022 and were histologically diagnosed as AAH (n = 29), AIS (n = 83), MIA (n = 235), or IAC (n = 101). Lung-PNet, a 3D deep learning model, was developed for automatic segmentation and classification (probability of IAC vs other entities) of pGGNs on CT. Nodules resected from January 2019 to December 2021 were randomly allocated to training (n = 327) and internal test (n = 82) sets. Nodules resected from January 2022 to June 2022 formed a holdout test set (n = 39). Segmentation performance was assessed with Dice coefficients with radiologists' manual segmentations as reference. Classification performance was assessed by ROC AUC and precision-recall AUC (PR AUC) and compared with that of four readers (three radiologists, one surgeon). The code used is publicly available (https://github.com/XiaodongZhang-PKUFH/Lung-PNet.git). RESULTS. In the holdout test set, Dice coefficients for segmentation of IACs and of other lesions were 0.860 and 0.838, and ROC AUC and PR AUC for classification as IAC were 0.911 and 0.842. At threshold probability of 50.0% or greater for prediction of IAC, Lung-PNet had sensitivity, specificity, accuracy, and F1 score of 50.0%, 92.0%, 76.9%, and 60.9% in the holdout test set. 
In the holdout test set, accuracy and F1 score (p values vs Lung-PNet) for individual readers were as follows: reader 1, 51.3% (p = .02) and 48.6% (p = .008); reader 2, 79.5% (p = .75) and 75.0% (p = .10); reader 3, 66.7% (p = .35) and 68.3% (p < .001); reader 4, 71.8% (p = .48) and 42.1% (p = .18). CONCLUSION. Lung-PNet had robust performance for segmenting and classifying (IAC vs other entities) pGGNs on chest CT. CLINICAL IMPACT. This automated deep learning tool may help guide selection of surgical strategies for pGGN management.
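The Dice coefficients reported for Lung-PNet's segmentation quantify overlap between the model's mask and the radiologists' reference mask. A minimal sketch of the metric over flattened binary masks (the empty-mask convention of returning 1.0 is our own assumption):

```python
# Dice similarity coefficient between two binary masks:
# 2 * |A ∩ B| / (|A| + |B|). Ranges from 0 (no overlap) to 1 (identical).

def dice_coefficient(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # both empty -> 1.0
```

Values around 0.86, as reported for IAC segmentation in the holdout set, indicate strong but not pixel-perfect agreement with the manual reference.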
Affiliation(s)
- Kang Qi, Department of Thoracic Surgery, Peking University First Hospital, Beijing, China
- Kexin Wang, School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Xiaoying Wang, Department of Radiology, Peking University First Hospital, 8 Xishiku St, Beijing 100034, China
- Yu-Dong Zhang, Department of Radiology, First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Gang Lin, Department of Thoracic Surgery, Peking University First Hospital, Beijing, China
- Xining Zhang, Department of Thoracic Surgery, Peking University First Hospital, Beijing, China
- Haibo Liu, Department of Thoracic Surgery, Peking University First Hospital, Beijing, China
- Weiming Huang, Department of Thoracic Surgery, Peking University First Hospital, Beijing, China
- Jingyun Wu, Department of Radiology, Peking University First Hospital, 8 Xishiku St, Beijing 100034, China
- Kai Zhao, Department of Radiology, Peking University First Hospital, 8 Xishiku St, Beijing 100034, China
- Jing Liu, Department of Radiology, Peking University First Hospital, 8 Xishiku St, Beijing 100034, China
- Jian Li, Department of Thoracic Surgery, Peking University First Hospital, Beijing, China
- Xiaodong Zhang, Department of Radiology, Peking University First Hospital, 8 Xishiku St, Beijing 100034, China
15
Lin CY, Guo SM, Lien JJJ, Lin WT, Liu YS, Lai CH, Hsu IL, Chang CC, Tseng YL. Combined model integrating deep learning, radiomics, and clinical data to classify lung nodules at chest CT. LA RADIOLOGIA MEDICA 2024; 129:56-69. [PMID: 37971691 PMCID: PMC10808169 DOI: 10.1007/s11547-023-01730-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Accepted: 09/21/2023] [Indexed: 11/19/2023]
Abstract
OBJECTIVES The study aimed to develop a combined model that integrates deep learning (DL), radiomics, and clinical data to classify lung nodules into benign or malignant categories, and to further classify lung nodules into different pathological subtypes and Lung Imaging Reporting and Data System (Lung-RADS) scores. MATERIALS AND METHODS The proposed model was trained, validated, and tested using three datasets: one public dataset, the Lung Nodule Analysis 2016 (LUNA16) Grand challenge dataset (n = 1004), and two private datasets, the Lung Nodule Received Operation (LNOP) dataset (n = 1027) and the Lung Nodule in Health Examination (LNHE) dataset (n = 1525). The proposed model was a stacked ensemble built with a machine learning (ML) approach using an AutoGluon-Tabular classifier. The input variables were modified 3D convolutional neural network (CNN) features, radiomics features, and clinical features. Three classification tasks were performed: Task 1: classification of lung nodules as benign or malignant in the LUNA16 dataset; Task 2: classification of lung nodules into different pathological subtypes; and Task 3: classification of Lung-RADS score. Classification performance was determined based on accuracy, recall, precision, and F1-score. Ten-fold cross-validation was applied to each task. RESULTS The proposed model achieved high accuracy in classifying lung nodules as benign or malignant in LUNA16 (accuracy 92.8%), as well as in classifying lung nodules into different pathological subtypes (F1-score 75.5%) and Lung-RADS scores (F1-score 80.4%). CONCLUSION Our proposed model provides an accurate classification of lung nodules with respect to benign/malignant status, pathological subtype, and the Lung-RADS system.
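The stacked-ensemble idea described above, base learners whose out-of-fold predictions feed a meta-learner, can be sketched with scikit-learn in place of the authors' AutoGluon-Tabular setup; the synthetic features below are only a stand-in for the CNN, radiomics, and clinical inputs:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for concatenated CNN + radiomics + clinical features
X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold base predictions train the meta-learner
)
scores = cross_val_score(stack, X, y, cv=5, scoring="accuracy")
print(f"mean CV accuracy: {scores.mean():.3f}")
```

AutoGluon automates the choice and tuning of the base learners; the data flow, however, is the same as in this sketch.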
Affiliation(s)
- Chia-Ying Lin, Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan City, Taiwan, R.O.C.
- Shu-Mei Guo, Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan City, Taiwan, R.O.C.
- Jenn-Jier James Lien, Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan City, Taiwan, R.O.C.
- Wen-Tsen Lin, Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan City, Taiwan, R.O.C.
- Yi-Sheng Liu, Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan City, Taiwan, R.O.C.
- Chao-Han Lai, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan City, Taiwan, R.O.C.
- I-Lin Hsu, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan City, Taiwan, R.O.C.
- Chao-Chun Chang, Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No.1, University Road, Tainan City, 701, Taiwan, R.O.C.
- Yau-Lin Tseng, Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No.1, University Road, Tainan City, 701, Taiwan, R.O.C.
16
Zhang L, Shao Y, Chen G, Tian S, Zhang Q, Wu J, Bai C, Yang D. An artificial intelligence-assisted diagnostic system for the prediction of benignity and malignancy of pulmonary nodules and its practical value for patients with different clinical characteristics. Front Med (Lausanne) 2023; 10:1286433. [PMID: 38196835 PMCID: PMC10774219 DOI: 10.3389/fmed.2023.1286433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Accepted: 12/12/2023] [Indexed: 01/11/2024] Open
Abstract
Objectives This study aimed to explore the value of an artificial intelligence (AI)-assisted diagnostic system in the prediction of pulmonary nodules. Methods The AI system made predictions of benign or malignant nodules. A total of 260 cases of solitary pulmonary nodules (SPNs) were divided into 173 malignant cases and 87 benign cases based on the surgical pathological diagnosis. A stratified data analysis was applied to compare the diagnostic effectiveness of the AI system between subgroups with different clinical characteristics. Results The accuracy of the AI system in judging the benignity or malignancy of the nodules was 75.77% (p < 0.05). We created an ROC curve by calculating the true positive rate (TPR) and the false positive rate (FPR) at different threshold values; the AUC was 0.755. Results of the stratified analysis were as follows. (1) By nodule position: the AUC was 0.677, 0.758, 0.744, 0.982, and 0.725, respectively, for nodules in the left upper lobe, left lower lobe, right upper lobe, right middle lobe, and right lower lobe. (2) By nodule size: the AUC was 0.778, 0.771, and 0.686, respectively, for nodules measuring 5-10, 10-20, and 20-30 mm in diameter. (3) The predictive accuracy was higher for subsolid pulmonary nodules than for solid ones (80.54% vs. 66.67%). Conclusion The AI system can be applied to assist in the prediction of benign and malignant pulmonary nodules. It can provide a valuable reference, especially for the diagnosis of subsolid nodules and small nodules measuring 5-10 mm in diameter.
Affiliation(s)
- Lichuan Zhang, Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Yue Shao, Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Guangmei Chen, Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Simiao Tian, Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Qing Zhang, Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Jianlin Wu, Department of Respiratory Medicine, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Chunxue Bai, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital Fudan University, Shanghai, China; Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China; Shanghai Respiratory Research Institution, Shanghai, China
- Dawei Yang, Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital Fudan University, Shanghai, China; Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, China; Shanghai Respiratory Research Institution, Shanghai, China
17
Sun J, Zhang L, Hu B, Du Z, Cho WC, Witharana P, Sun H, Ma D, Ye M, Chen J, Wang X, Yang J, Zhu C, Shen J. Deep learning-based solid component measuring enabled interpretable prediction of tumor invasiveness for lung adenocarcinoma. Lung Cancer 2023; 186:107392. [PMID: 37816297 DOI: 10.1016/j.lungcan.2023.107392] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 08/27/2023] [Accepted: 10/04/2023] [Indexed: 10/12/2023]
Abstract
BACKGROUND The nature of the solid component of subsolid nodules (SSNs) can indicate tumor pathological invasiveness. However, preoperative solid component assessment still lacks a reference standard. METHODS In this retrospective study, an AI algorithm was proposed for measuring the solid components ratio in SSNs, which was used to assess the diameter ratio (1D), area ratio (2D), and volume ratio (3D). The radiologist measured each SSN's consolidation to tumor ratio (CTR) twice, four weeks apart. The area under the receiver-operating characteristic (ROC) curve (AUC) was calculated for each method used to discriminate an Invasive Adenocarcinoma (IA) from a non-IA. The AUC and the time cost of each measurement were compared. Furthermore, we examined the consistency of measurements made by the radiologist on two separate occasions. RESULTS A total of 379 patients (the primary dataset n = 278, the validation dataset n = 101) were included. In the primary dataset, compared to the manual approach (AUC: 0.697), the AI algorithm (AUC: 0.811) had better predictive performance (P =.0027) in measuring solid components ratio in 3D. Algorithm measurement in 3D had an AUC no inferior to 1D (AUC: 0.806) and 2D (AUC: 0.796). In the validation dataset, the AI 3D method also achieved superior diagnostic performance compared to the radiologist (AUC: 0.803 vs 0.682, P =.046). The two measurements of the CTR in the primary dataset, taken 4 weeks apart, have 7.9 % cases in poor consistency. The measurement time cost by the radiologist is about 60 times that of the AI algorithm (P <.001). CONCLUSION The 3D measurement of solid components using AI, is an effective and objective approach to predict the pathological invasiveness of SSNs. It can be a preoperative interpretable indicator of pathological invasiveness in patients with lung adenocarcinoma.
Affiliation(s)
- Jiajing Sun, Taizhou Hospital, Zhejiang University School of Medicine, Taizhou, China
- Li Zhang, Dianei Technology, Shanghai, China
- Bingyu Hu, Department of Thoracic Surgery, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Zhicheng Du, Department of Medical Statistics, School of Public Health, Sun Yat-sen University, Guangzhou, China
- William C Cho, Department of Clinical Oncology, Queen Elizabeth Hospital, Kowloon, Hong Kong, China
- Pasan Witharana, Northern General Hospital, Herries Rd, Sheffield S5 7AU, UK; Imperial College London, London SW7 2BX, UK
- Hua Sun, Taizhou Hospital, Zhejiang University School of Medicine, Taizhou, China
- Dehua Ma, Department of Thoracic Surgery, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Minhua Ye, Department of Thoracic Surgery, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Jiancheng Yang, Dianei Technology, Shanghai, China; Shanghai Jiao Tong University, Shanghai, China; EPFL, Lausanne, Switzerland
- Chengchu Zhu, Taizhou Hospital, Zhejiang University School of Medicine, Taizhou, China; Department of Thoracic Surgery, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
- Jianfei Shen, Taizhou Hospital, Zhejiang University School of Medicine, Taizhou, China; Department of Thoracic Surgery, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Linhai, China
18
Zhu Y, Chen LL, Luo YW, Zhang L, Ma HY, Yang HS, Liu BC, Li LJ, Zhang WB, Li XM, Xie CM, Yang JC, Wang DL, Li Q. Prognostic impact of deep learning-based quantification in clinical stage 0-I lung adenocarcinoma. Eur Radiol 2023; 33:8542-8553. [PMID: 37436506 DOI: 10.1007/s00330-023-09845-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 03/24/2023] [Accepted: 04/21/2023] [Indexed: 07/13/2023]
Abstract
OBJECTIVES To evaluate the performance of an automatic deep learning (DL) algorithm for size, mass, and volume measurements in predicting the prognosis of lung adenocarcinoma (LUAD), compared with manual measurements. METHODS A total of 542 patients with clinical stage 0-I peripheral LUAD and with preoperative CT data of 1-mm slice thickness were included. Maximal solid size on axial image (MSSA) was evaluated by two chest radiologists. MSSA, volume of solid component (SV), and mass of solid component (SM) were evaluated by DL. Consolidation-to-tumor ratios (CTRs) were calculated. For ground glass nodules (GGNs), solid parts were extracted with different density level thresholds. The prognosis prediction efficacy of DL was compared with that of manual measurements. A multivariate Cox proportional hazards model was used to find independent risk factors. RESULTS The prognosis prediction efficacy of T-staging (TS) measured by radiologists was inferior to that of DL. For GGNs, MSSA-based CTR measured by radiologists (RMSSA%) could not stratify RFS and OS risk, whereas the same ratio measured by DL using 0 HU (2D-AIMSSA0HU%) could, using different cutoffs. SM and SV measured by DL using 0 HU (AISM0HU% and AISV0HU%) could effectively stratify survival risk regardless of the cutoff and were superior to 2D-AIMSSA0HU%. AISM0HU% and AISV0HU% were independent risk factors. CONCLUSION The DL algorithm can replace human readers for more accurate T-staging of LUAD. For GGNs, 2D-AIMSSA0HU% could predict prognosis whereas RMSSA% could not. The prediction efficacy of AISM0HU% and AISV0HU% was more accurate than that of 2D-AIMSSA0HU%, and both were independent risk factors. CLINICAL RELEVANCE STATEMENT A deep learning algorithm could replace human readers for size measurements and could better stratify prognosis than manual measurements in patients with lung adenocarcinoma.
KEY POINTS • A deep learning (DL) algorithm could replace human readers for size measurements and could better stratify prognosis than manual measurements in patients with lung adenocarcinoma (LUAD). • For GGNs, maximal solid size on axial image (MSSA)-based consolidation-to-tumor ratio (CTR) measured by DL using 0 HU could stratify survival risk, whereas that measured by radiologists could not. • The prediction efficacy of mass- and volume-based CTRs measured by DL using 0 HU was more accurate than that of MSSA-based CTR, and both were independent risk factors.
Affiliation(s)
- Ying Zhu, Department of Radiology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510080, Province Guangdong, People's Republic of China
- Li-Li Chen, Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, Province Guangdong, People's Republic of China
- Ying-Wei Luo, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, Province Guangdong, People's Republic of China
- Li Zhang, Dianei Technology, Shanghai, 200000, People's Republic of China
- Hui-Yun Ma, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, Province Guangdong, People's Republic of China
- Hao-Shuai Yang, Department of Thoracic Surgery, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510080, Province Guangdong, People's Republic of China
- Bao-Cong Liu, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, Province Guangdong, People's Republic of China
- Lu-Jie Li, Department of Radiology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510080, Province Guangdong, People's Republic of China
- Wen-Biao Zhang, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, Province Guangdong, People's Republic of China
- Xiang-Min Li, Department of Radiology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, 510080, Province Guangdong, People's Republic of China
- Chuan-Miao Xie, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, Province Guangdong, People's Republic of China
- Jian-Cheng Yang, Dianei Technology, Shanghai, 200000, People's Republic of China; Shanghai Jiao Tong University, Shanghai, China; EPFL, Lausanne, Switzerland
- De-Ling Wang, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, Province Guangdong, People's Republic of China
- Qiong Li, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, Province Guangdong, People's Republic of China
19
Chen TF, Yang L, Chen HB, Zhou ZG, Wu ZT, Luo HH, Li Q, Zhu Y. A pairwise radiomics algorithm-lesion pair relation estimation model for distinguishing multiple primary lung cancer from intrapulmonary metastasis. PRECISION CLINICAL MEDICINE 2023; 6:pbad029. [PMID: 38024138 PMCID: PMC10662663 DOI: 10.1093/pcmedi/pbad029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Accepted: 10/25/2023] [Indexed: 12/01/2023] Open
Abstract
Background Distinguishing multiple primary lung cancer (MPLC) from intrapulmonary metastasis (IPM) is critical because of their disparate treatment strategies and prognoses. This study aimed to establish a non-invasive model to make the differentiation pre-operatively. Methods We retrospectively studied 168 patients with multiple lung cancers (307 pairs of lesions), including 118 cases for modeling and internal validation and 50 cases for independent external validation. Radiomic features on computed tomography (CT) were extracted to calculate the absolute deviation of paired lesions. Features were then selected by correlation coefficients and random forest classifier 5-fold cross-validation, based on which the lesion pair relation estimation (PRE) model was developed. A majority voting strategy was used to decide the diagnosis for cases with multiple pairs of lesions. Cases from another institute were included as the external validation set, on which the PRE model was compared with two experienced clinicians. Results Seven radiomic features were selected for the PRE model construction. With the majority voting strategy, the mean area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity of the training versus internal validation versus external validation cohort to distinguish MPLC were 0.983 versus 0.844 versus 0.793, 0.942 versus 0.846 versus 0.760, 0.905 versus 0.728 versus 0.727, and 0.962 versus 0.910 versus 0.769, respectively. AUCs of the two clinicians were 0.619 and 0.580. Conclusions The CT radiomic feature-based lesion PRE model is potentially an accurate diagnostic tool for the differentiation of MPLC and IPM, which could help with clinical decision making.
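The per-case voting step described above, in which each lesion pair votes MPLC or IPM and the case takes the majority label, can be sketched as follows; the tie-breaking rule is an assumption, since the abstract does not specify one:

```python
from collections import Counter

def case_diagnosis(pair_preds: list[str]) -> str:
    """Majority vote over per-pair predictions for one case.
    Ties are broken toward 'MPLC' here; this tie-breaking rule is
    an assumption, not stated in the paper."""
    counts = Counter(pair_preds)
    if counts["MPLC"] >= counts["IPM"]:
        return "MPLC"
    return "IPM"

print(case_diagnosis(["MPLC", "IPM", "MPLC"]))  # MPLC
print(case_diagnosis(["IPM", "IPM", "MPLC"]))   # IPM
```

Because a case with three lesions contributes three pairs, voting makes the case-level label robust to a single misclassified pair.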
Affiliation(s)
- Ting-Fei Chen, Department of Thoracic Surgery, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510000, China
- Lei Yang, Department of Thoracic Surgery, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510000, China
- Hai-Bin Chen, Breax Laboratory, PCAB Research Center of Breath and Metabolism, Beijing 100017, China
- Zhi-Guo Zhou, Reliable Intelligence and Medical Innovation Laboratory (RIMI Lab), Department of Biostatistics & Data Science, University of Kansas Medical Center, and University of Kansas Cancer Center, Kansas City, KS 66160, USA
- Zhen-Tian Wu, Center for Information Technology & Statistics, Statistics Section, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510000, China
- Hong-He Luo, Department of Thoracic Surgery, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510000, China
- Qiong Li, Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510000, China
- Ying Zhu, Department of Radiology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510000, China
20
Kang CC, Lee TY, Lim WF, Yeo WWY. Opportunities and challenges of 5G network technology toward precision medicine. Clin Transl Sci 2023; 16:2078-2094. [PMID: 37702288 PMCID: PMC10651640 DOI: 10.1111/cts.13640] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Revised: 08/31/2023] [Accepted: 09/01/2023] [Indexed: 09/14/2023] Open
Abstract
Moving away from traditional "one-size-fits-all" treatment to precision-based medicine has tremendously improved disease prognosis, diagnostic accuracy, disease progression prediction, and targeted treatment. Cutting-edge 5G network technology is enabling a growing trend in precision medicine, extending its utility and value to the smart healthcare system. 5G network technology will bring together big data, artificial intelligence, and machine learning to provide the essential levels of connectivity for a new health ecosystem oriented toward precision medicine. In the 5G-enabled health ecosystem, applications involve predictive and preventative measures that enable advances in patient personalization. This review discusses the opportunities, challenges, and prospects of 5G network technology in delivering personalized treatments and patient-centric care via a precision medicine approach.
Affiliation(s)
- Chia Chao Kang, School of Electrical Engineering and Artificial Intelligence, Xiamen University Malaysia, Sepang, Selangor, Malaysia
- Tze Yan Lee, School of Liberal Arts, Science and Technology (PUScLST), Perdana University, Kuala Lumpur, Malaysia
- Wai Feng Lim, Sunway Medical Centre, Subang Jaya, Selangor Darul Ehsan, Malaysia
- Wendy Wai Yeng Yeo, School of Pharmacy, Monash University Malaysia, Bandar Sunway, Selangor Darul Ehsan, Malaysia
21
Gandhi Z, Gurram P, Amgai B, Lekkala SP, Lokhandwala A, Manne S, Mohammed A, Koshiya H, Dewaswala N, Desai R, Bhopalwala H, Ganti S, Surani S. Artificial Intelligence and Lung Cancer: Impact on Improving Patient Outcomes. Cancers (Basel) 2023; 15:5236. [PMID: 37958411 PMCID: PMC10650618 DOI: 10.3390/cancers15215236] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Revised: 10/23/2023] [Accepted: 10/24/2023] [Indexed: 11/15/2023] Open
Abstract
Lung cancer remains one of the leading causes of cancer-related deaths worldwide, emphasizing the need for improved diagnostic and treatment approaches. In recent years, the emergence of artificial intelligence (AI) has sparked considerable interest in its potential role in lung cancer. This review aims to provide an overview of the current state of AI applications in lung cancer screening, diagnosis, and treatment. AI algorithms like machine learning, deep learning, and radiomics have shown remarkable capabilities in the detection and characterization of lung nodules, thereby aiding in accurate lung cancer screening and diagnosis. These systems can analyze various imaging modalities, such as low-dose CT scans, PET-CT imaging, and even chest radiographs, accurately identifying suspicious nodules and facilitating timely intervention. AI models have exhibited promise in utilizing biomarkers and tumor markers as supplementary screening tools, effectively enhancing the specificity and accuracy of early detection. These models can accurately distinguish between benign and malignant lung nodules, assisting radiologists in making more accurate and informed diagnostic decisions. Additionally, AI algorithms hold the potential to integrate multiple imaging modalities and clinical data, providing a more comprehensive diagnostic assessment. By utilizing high-quality data, including patient demographics, clinical history, and genetic profiles, AI models can predict treatment responses and guide the selection of optimal therapies. Notably, these models have shown considerable success in predicting the likelihood of response and recurrence following targeted therapies and optimizing radiation therapy for lung cancer patients. Implementing these AI tools in clinical practice can aid in the early diagnosis and timely management of lung cancer and potentially improve outcomes, including the mortality and morbidity of the patients.
Affiliation(s)
- Zainab Gandhi, Department of Internal Medicine, Geisinger Wyoming Valley Medical Center, Wilkes-Barre, PA 18711, USA
- Priyatham Gurram, Department of Medicine, Mamata Medical College, Khammam 507002, India
- Birendra Amgai, Department of Internal Medicine, Geisinger Community Medical Center, Scranton, PA 18510, USA
- Sai Prasanna Lekkala, Department of Medicine, Mamata Medical College, Khammam 507002, India
- Alifya Lokhandwala, Department of Medicine, Jawaharlal Nehru Medical College, Wardha 442001, India
- Suvidha Manne, Department of Medicine, Mamata Medical College, Khammam 507002, India
- Adil Mohammed, Department of Internal Medicine, Central Michigan University College of Medicine, Saginaw, MI 48602, USA
- Hiren Koshiya, Department of Internal Medicine, Prime West Consortium, Inglewood, CA 92395, USA
- Nakeya Dewaswala, Department of Cardiology, University of Kentucky, Lexington, KY 40536, USA
- Rupak Desai, Independent Researcher, Atlanta, GA 30079, USA
- Huzaifa Bhopalwala, Department of Internal Medicine, Appalachian Regional Hospital, Hazard, KY 41701, USA
- Shyam Ganti, Department of Internal Medicine, Appalachian Regional Hospital, Hazard, KY 41701, USA
- Salim Surani, Department of Pulmonary, Critical Care Medicine, Texas A&M University, College Station, TX 77845, USA
22
Liu PM, Feng B, Shi JF, Feng HJ, Hu ZJ, Chen YH, Zhang JP. A deep-learning model using enhanced chest CT images to predict PD-L1 expression in non-small-cell lung cancer patients. Clin Radiol 2023; 78:e689-e697. [PMID: 37460338 DOI: 10.1016/j.crad.2023.05.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 05/11/2023] [Accepted: 05/18/2023] [Indexed: 09/03/2023]
Abstract
AIM To develop a deep-learning model using contrast-enhanced chest computed tomography (CT) images to predict programmed death-ligand 1 (PD-L1) expression in patients with non-small-cell lung cancer (NSCLC). MATERIALS AND METHODS Preoperative enhanced chest CT images and immunohistochemistry results for PD-L1 expression (<1% and ≥1% were defined as negative and positive, respectively) were collected retrospectively from 125 NSCLC patients to train and validate a deep-learning radiomics model (DLRM) for the prediction of PD-L1 expression in tumours. The DLRM was developed by combining the deep-learning signature (DLS) obtained from a convolutional neural network and clinicopathological factors. The indexes of the area under the curve (AUC), integrated discrimination improvement (IDI), and decision curve analysis (DCA) were used to evaluate the efficiency of the DLRM. RESULTS DLS and tumour stage were identified as independent predictors of PD-L1 expression by the DLRM. The AUCs of the DLRM were 0.804 (95% confidence interval: 0.697-0.911) and 0.804 (95% confidence interval: 0.679-0.929) in the training and validation cohorts, respectively. IDI analysis showed the DLRM had better diagnostic accuracy than DLS (0.0028 [p<0.05]) in the validation cohort. Additionally, DCA revealed that the DLRM had more net benefit than the DLS for clinical utility. CONCLUSION The proposed DLRM using enhanced chest CT images could function as a non-invasive diagnostic tool to differentiate PD-L1 expression in NSCLC patients.
Affiliation(s)
- P M Liu, Cancer Center, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- B Feng, School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, 541004, China
- J F Shi, School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, 541004, China
- H J Feng, Cancer Center, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- Z J Hu, School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, 541004, China
- Y H Chen, School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, 541004, China
- J P Zhang, Cancer Center, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
23
Zhao ZR, Yu YH, Lin ZC, Ma DH, Lin YB, Hu J, Luo QQ, Li GF, Chen C, Yang YL, Yang JC, Lin YB, Long H. Invasiveness assessment by artificial intelligence against intraoperative frozen section for pulmonary nodules ≤ 3 cm. J Cancer Res Clin Oncol 2023; 149:7759-7765. [PMID: 37016100; DOI: 10.1007/s00432-023-04713-2]
Abstract
PURPOSE To investigate the performance of an artificial intelligence (AI) algorithm for assessing the malignancy and invasiveness of pulmonary nodules in a multicenter cohort. METHODS A previously developed deep learning system based on a 3D convolutional neural network was used to predict tumor malignancy and invasiveness. A dataset of pulmonary nodules measuring no more than 3 cm, integrating CT images and pathological information, was used. Receiver operating characteristic (ROC) curve analysis was used to evaluate the performance of the system. RESULTS A total of 466 resected pulmonary nodules were included in this study. The areas under the curve (AUCs) of the deep learning system for the prediction of malignancy, as compared with pathological reports, were 0.80, 0.80, and 0.75 for all, subcentimeter, and solid nodules, respectively. Additionally, the AUC for the AI-assisted prediction of invasive adenocarcinoma (IA) among subsolid lesions (n = 184) was 0.88. Most malignancies larger than 1 cm that the AI system misdiagnosed as benign (26/250, 10.4%) presented as solid nodules on CT (19/26, 73.1%). In an exploratory analysis of nodules that underwent intraoperative pathological examination, the concordance rate between the AI model and frozen section examination in identifying IA was 0.69, with a sensitivity of 0.50 and a specificity of 0.97. CONCLUSION The deep learning system can discriminate malignant disease in pulmonary nodules measuring no more than 3 cm. The AI model has a high positive predictive value for invasive adenocarcinoma with respect to intraoperative frozen section examination, which might help determine an individualized surgical strategy.
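The agreement metrics quoted above (sensitivity, specificity, concordance rate) all derive from a 2x2 confusion table. A minimal sketch with hypothetical counts, not the study's data:

```python
# Minimal sketch: sensitivity, specificity, and overall concordance from a
# 2x2 confusion table (tp/fn/tn/fp counts are illustrative only).

def binary_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)            # true positives found
    specificity = tn / (tn + fp)            # true negatives found
    concordance = (tp + tn) / (tp + fn + tn + fp)  # overall agreement rate
    return sensitivity, specificity, concordance

sens, spec, conc = binary_metrics(tp=15, fn=15, tn=97, fp=3)
print(sens, spec)  # 0.5 0.97
```

The asymmetry in the abstract (sensitivity 0.50 vs. specificity 0.97) means the model rarely calls IA when frozen section does not, but misses half of the frozen-section-positive cases.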
Affiliation(s)
- Ze-Rui Zhao: State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
- Zhi-Chao Lin: Department of Thoracic Surgery, Jiangmen Central Hospital, Jiangmen, China
- De-Hua Ma: Department of Thoracic Surgery, Taizhou Hospital, Taizhou, China
- Yao-Bin Lin: State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
- Jian Hu: Department of Thoracic Surgery, School of Medicine, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Qing-Quan Luo: Shanghai Lung Cancer Center, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Gao-Feng Li: Department of Thoracic Surgery, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Chun Chen: Department of Thoracic Surgery, Fujian Medical University Union Hospital, Fuzhou, China
- Yu-Lun Yang: Department of Thoracic Surgery, Fifth Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Jian-Cheng Yang: Dianei Technology, Shanghai, China; Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai, 200240, People's Republic of China; EPFL, Lausanne, Switzerland
- Yong-Bin Lin: State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
- Hao Long: State Key Laboratory of Oncology in Southern China, Department of Thoracic Surgery, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, 510060, Guangdong, People's Republic of China
24
Baheti B, Pati S, Menze B, Bakas S. Leveraging 2D Deep Learning ImageNet-trained models for Native 3D Medical Image Analysis. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes (Workshop) 2023; 13769:68-79. [PMID: 37928819; PMCID: PMC10623403; DOI: 10.1007/978-3-031-33842-7_6]
Abstract
Convolutional neural networks (CNNs) have shown promising performance in various 2D computer vision tasks owing to the availability of large amounts of 2D training data. In contrast, medical imaging deals with 3D data and usually lacks an equivalent extent and diversity of data for developing AI models. Transfer learning provides a means to use models trained for one application as a starting point for another. In this work, we leverage 2D pre-trained models as a starting point in 3D medical applications by exploring the concept of Axial-Coronal-Sagittal (ACS) convolutions. We have incorporated ACS convolutions as an alternative to native 3D convolutions in the Generally Nuanced Deep Learning Framework (GaNDLF), providing various well-established and state-of-the-art network architectures with pre-trained encoders from 2D data. Results of our experimental evaluation on 3D MRI data of brain tumor patients, for i) tumor segmentation and ii) radiogenomic classification, show a model size reduction of ~22% and an improvement in validation accuracy of ~33%. Our findings support the advantage of ACS convolutions in pre-trained 2D CNNs over 3D CNNs without pre-training for 3D segmentation and classification tasks, democratizing existing models trained on datasets of unprecedented size and showing promise in the field of healthcare.
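The data view behind ACS convolutions can be illustrated without any deep learning machinery: the same 3D volume is read as three stacks of 2D slices, one per anatomical plane, so that a single 2D (e.g. ImageNet-pretrained) kernel can be reused on each view. A shape-only sketch (no convolution or pretrained weights here):

```python
# Minimal sketch: one (D, H, W) volume exposed as axial, coronal, and
# sagittal 2D slice stacks, the slicing that ACS convolutions exploit.

def views(vol):
    """vol: nested list of shape (D, H, W); returns the three slice stacks."""
    D, H, W = len(vol), len(vol[0]), len(vol[0][0])
    axial    = vol                                                            # D slices of (H, W)
    coronal  = [[[vol[d][h][w] for w in range(W)] for d in range(D)] for h in range(H)]  # H slices of (D, W)
    sagittal = [[[vol[d][h][w] for h in range(H)] for d in range(D)] for w in range(W)]  # W slices of (D, H)
    return axial, coronal, sagittal

# Toy volume whose voxel value encodes its own (d, h, w) index.
vol = [[[d * 100 + h * 10 + w for w in range(4)] for h in range(3)] for d in range(2)]
a, c, s = views(vol)
print(len(a), len(c), len(s))  # 2 3 4
```

In the actual ACS scheme the output channels of each layer are split across the three views; this sketch only shows the re-slicing that makes 2D kernels applicable to 3D data.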
Affiliation(s)
- Bhakti Baheti: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sarthak Pati: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Informatics, Technical University of Munich, Munich, Germany
- Bjoern Menze: Department of Informatics, Technical University of Munich, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Spyridon Bakas: Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
25
Zhou J, Hu B, Feng W, Zhang Z, Fu X, Shao H, Wang H, Jin L, Ai S, Ji Y. An ensemble deep learning model for risk stratification of invasive lung adenocarcinoma using thin-slice CT. NPJ Digit Med 2023; 6:119. [PMID: 37407729; DOI: 10.1038/s41746-023-00866-z]
Abstract
Lung cancer screening using computed tomography (CT) has increased the detection rate of small pulmonary nodules and early-stage lung adenocarcinoma. Accurate assessment of nodule histology from CT scans using advanced deep learning algorithms would therefore be clinically meaningful. However, recent studies mainly focus on distinguishing benign from malignant nodules and lack models for the risk stratification of invasive adenocarcinoma. We propose an ensemble multi-view 3D convolutional neural network (EMV-3D-CNN) model to study the risk stratification of lung adenocarcinoma. We include 1075 lung nodules (≥4 mm and ≤30 mm) with preoperative thin-section CT scans and definite pathology confirmed by surgery. Our model achieves state-of-the-art performance, with AUCs of 91.3% and 92.9% for the diagnosis of benign/malignant and pre-invasive/invasive nodules, respectively. Importantly, our model outperforms senior doctors in the risk stratification of invasive adenocarcinoma (Grades 1, 2, and 3) with 77.6% accuracy. It provides detailed predictive histological information for the surgical management of pulmonary nodules. Finally, for user-friendly access, the proposed model is implemented as a web-based system (https://seeyourlung.com.cn).
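The "ensemble multi-view" idea reduces, at inference time, to combining per-view class probabilities, most simply by averaging. A minimal sketch with hypothetical probability vectors (the study's actual fusion may differ):

```python
# Minimal sketch: average per-class probabilities from several per-view
# models, the simplest form of ensembling. All numbers are illustrative.

def ensemble(prob_lists):
    """Element-wise mean of equal-length probability vectors."""
    n = len(prob_lists)
    return [sum(p[i] for p in prob_lists) / n for i in range(len(prob_lists[0]))]

view_probs = [
    [0.1, 0.3, 0.6],   # view 1: P(benign), P(pre-invasive), P(invasive)
    [0.2, 0.2, 0.6],   # view 2
    [0.0, 0.4, 0.6],   # view 3
]
avg = ensemble(view_probs)
print([round(p, 2) for p in avg])  # [0.1, 0.3, 0.6]
```

Averaging tends to cancel view-specific errors, which is one reason multi-view ensembles often beat any single view.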
Affiliation(s)
- Jing Zhou: Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Bin Hu: Department of Thoracic Surgery, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Wei Feng: Department of Cardiothoracic Surgery, The Third Xiangya Hospital of Central South University, Changsha, China
- Zhang Zhang: Department of Thoracic Surgery, Changsha Central Hospital, Changsha, China
- Xiaotong Fu: Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Handie Shao: Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, China
- Hansheng Wang: Guanghua School of Management, Peking University, Beijing, China
- Longyu Jin: Department of Cardiothoracic Surgery, The Third Xiangya Hospital of Central South University, Changsha, China
- Siyuan Ai: Department of Thoracic Surgery, Beijing LIANGXIANG Hospital, Beijing, China
- Ying Ji: Department of Thoracic Surgery, Beijing Institute of Respiratory Medicine and Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
26
Zhang Y, Qian F, Teng J, Wang H, Yu H, Chen Q, Wang L, Zhu J, Yu Y, Yuan J, Cai W, Xu N, Zhu H, Lu Y, Yao M, Zhu J, Dong J, Yu L, Ren H, Yang J, Sun J, Zhong H, Han B. China lung cancer screening (CLUS) version 2.0 with new techniques implemented: artificial intelligence, circulating molecular biomarkers and autofluorescence bronchoscopy. Lung Cancer 2023; 181:107262. [PMID: 37263180; DOI: 10.1016/j.lungcan.2023.107262]
Abstract
OBJECTIVE The present study, CLUS version 2.0, was conducted to evaluate the performance of new techniques in improving the implementation of lung cancer screening and to validate the efficacy of LDCT in reducing lung cancer-specific mortality in a high-risk Chinese population. METHODS From July 2018 to February 2019, high-risk participants from six screening centers in Shanghai were enrolled in our study. Artificial intelligence, circulating molecular biomarkers and autofluorescence bronchoscopy were applied during screening. RESULTS A total of 5087 eligible high-risk participants were enrolled in the study; 4490 individuals were invited, and 4395 participants (97.9%) finally underwent LDCT screening. Positive screening results were observed in 857 (19.5%) participants. Solid nodules represented 53.6% of all positive results, while multiple nodules were the most common distribution type (26.8%). Up to December 2020, 77 participants had received lung resection or biopsy, yielding 70 lung cancers, 2 mediastinal tumors, 1 tracheobronchial tumor, 1 malignant pleural mesothelioma and 3 benign nodules. Lung cancer patients accounted for 1.6% of all screened participants, and 91.4% were in the early stage (stage 0-I). CONCLUSIONS LDCT screening can detect a high proportion of early-stage lung cancer patients in a Chinese high-risk population. The utilization of new techniques would be conducive to improving the implementation of LDCT screening.
Affiliation(s)
- Yanwei Zhang: Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fangfei Qian: Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiajun Teng: Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Huimin Wang: Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hong Yu: Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qunhui Chen: Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lan Wang: Xuhui District Health Commission, Shanghai, China
- Jingjing Zhu: Xuhui District Center for Disease Control, Shanghai, China
- Junyi Yuan: Information Center, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Weiming Cai: Department of Outpatient, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ning Xu: Tianlin Community Health Center, Shanghai, China
- Huixian Zhu: Xujiahui Community Health Center, Shanghai, China
- Yun Lu: Hongmei Community Health Center, Shanghai, China
- Mingling Yao: Caohejing Community Health Center, Shanghai, China
- Jiayu Zhu: Xietu Community Health Center, Shanghai, China
- Lingming Yu: Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hua Ren: Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiancheng Yang: Dianei Technology, Shanghai, China; Shanghai Jiao Tong University, Shanghai, China; Computer Vision Laboratory, Swiss Federal Institute of Technology in Lausanne (EPFL), Lausanne, Switzerland
- Jiayuan Sun: Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hua Zhong: Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Baohui Han: Department of Pulmonary Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
27
Mu J, Kuang K, Ao M, Li W, Dai H, Ouyang Z, Li J, Huang J, Guo S, Yang J, Yang L. Deep learning predicts malignancy and metastasis of solid pulmonary nodules from CT scans. Front Med (Lausanne) 2023; 10:1145846. [PMID: 37275359; PMCID: PMC10235703; DOI: 10.3389/fmed.2023.1145846]
Abstract
In the clinic, it is difficult to distinguish the malignancy and aggressiveness of solid pulmonary nodules (PNs). Incorrect assessments may lead to delayed diagnosis and an increased risk of complications. We developed and validated a deep learning-based model for the prediction of malignancy as well as local or distant metastasis in solid PNs, based on CT images of primary lesions at initial diagnosis. In this study, we reviewed data from patients with solid PNs at our institution from 1 January 2019 to 30 April 2022. The patients were divided into three groups: benign, stage-Ia lung cancer, and T1-stage lung cancer with metastasis. Each cohort was further split into training and testing groups. Of 689 cases in total, 134 were held out as a testing set. The deep learning system predicted the malignancy and metastasis status of solid PNs based on CT images, and we then compared the malignancy prediction results across four clinicians of different seniority. Our convolutional neural network model reached an area under the ROC curve (AUC) of 80.37% for malignancy prediction and an AUC of 86.44% for metastasis prediction. In the observer studies, the proposed deep learning method outperformed a junior respiratory clinician and a 5-year respiratory clinician by considerable margins; it was on par with a senior respiratory clinician and only slightly inferior to a senior radiologist. Our human-computer collaboration experiment showed that simply adding a binary human diagnosis into the model's prediction probabilities improved model AUC scores to 81.80-88.70% when combined with three out of the four clinicians, confirming that human-computer collaboration can further enhance diagnostic accuracy. In summary, the deep learning method can accurately diagnose the malignancy of solid PNs, improve its performance when collaborating with human experts, predict local or distant metastasis in patients with T1-stage lung cancer, and facilitate the application of precision medicine.
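The collaboration rule described above, folding a clinician's binary call into the model's probability, can be sketched as a weighted blend. The weighting below is a hypothetical illustration, not the study's exact scheme:

```python
# Minimal sketch: blend a model's malignancy probability with a binary
# human diagnosis (0 = benign, 1 = malignant). The 0.5 weight is illustrative.

def collaborate(model_prob, human_call, weight=0.5):
    """Convex combination of model probability and human binary call."""
    return (1 - weight) * model_prob + weight * human_call

# A borderline model score is pulled up when the clinician calls malignant.
print(round(collaborate(0.40, 1), 2))  # 0.7
```

Because the blended score re-ranks borderline cases using human judgment, it can raise AUC even though the human input is only binary.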
Affiliation(s)
- Junhao Mu: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Kaiming Kuang: Dianei Technology, Shanghai, China; University of California, San Diego, San Diego, CA, United States
- Min Ao: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Weiyi Li: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Haiyun Dai: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Zubin Ouyang: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jingyu Li: Dianei Technology, Shanghai, China; School of Computer Science, Wuhan University, Wuhan, China
- Jing Huang: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Shuliang Guo: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Jiancheng Yang: Dianei Technology, Shanghai, China; Shanghai Jiao Tong University, Shanghai, China; École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Li Yang: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Chongqing Medical University, Chongqing, China
28
Zhang Z, Wei X. Artificial intelligence-assisted selection and efficacy prediction of antineoplastic strategies for precision cancer therapy. Semin Cancer Biol 2023; 90:57-72. [PMID: 36796530; DOI: 10.1016/j.semcancer.2023.02.005]
Abstract
The rapid development of artificial intelligence (AI) technologies, in the context of the vast amount of data obtainable from high-throughput sequencing, has led to an unprecedented understanding of cancer and accelerated the advent of a new era of clinical oncology defined by precision treatment and personalized medicine. However, the gains achieved by a variety of AI models in clinical oncology practice are far from what one would expect, and in particular, many uncertainties remain in the selection of clinical treatment options, posing significant challenges to the application of AI in clinical oncology. In this review, we summarize emerging AI approaches, relevant datasets and open-source software, and show how to integrate them to address problems from clinical oncology and cancer research. We focus on the principles and procedures for identifying different antitumor strategies with the assistance of AI, including targeted cancer therapy, conventional cancer therapy, and cancer immunotherapy. In addition, we highlight the current challenges and directions of AI in clinical oncology translation. Overall, we hope this article will provide researchers and clinicians with a deeper understanding of the role and implications of AI in precision cancer therapy, and help AI move more quickly into accepted cancer guidelines.
Affiliation(s)
- Zhe Zhang: Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy and Cancer Center, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu 610041, PR China; State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, and Collaborative Innovation Center for Biotherapy, Chengdu 610041, PR China
- Xiawei Wei: Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy and Cancer Center, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu 610041, PR China
29
Ding Y, Zhang J, Zhuang W, Gao Z, Kuang K, Tian D, Deng C, Wu H, Chen R, Lu G, Chen G, Mendogni P, Migliore M, Kang MW, Kanzaki R, Tang Y, Yang J, Shi Q, Qiao G. Improving the efficiency of identifying malignant pulmonary nodules before surgery via a combination of artificial intelligence CT image recognition and serum autoantibodies. Eur Radiol 2023; 33:3092-3102. [PMID: 36480027; DOI: 10.1007/s00330-022-09317-x]
Abstract
OBJECTIVE To construct a new pulmonary nodule diagnostic model that is highly efficient, non-invasive, and simple to apply. METHODS This study included 424 patients with radiologically detected pulmonary nodules who underwent preoperative 7-autoantibody (7-AAB) panel testing, CT-based AI diagnosis, and pathological diagnosis by surgical resection. The patients were randomly divided into a training set (n = 212) and a validation set (n = 212). The nomogram was developed through forward stepwise logistic regression based on the predictive factors identified by univariate and multivariate analyses in the training set and was validated internally in the validation set. RESULTS A diagnostic nomogram was constructed based on the statistically significant variables of age as well as the CT-based AI diagnosis, 7-AAB panel, and CEA test results. In the validation set, the sensitivity, specificity, positive predictive value, and AUC were 82.29%, 90.48%, 97.24%, and 0.899 (95% CI, 0.851-0.936), respectively. The nomogram showed significantly higher sensitivity than the 7-AAB panel test alone (82.29% vs. 35.88%, p < 0.001) and CEA alone (82.29% vs. 18.82%, p < 0.001); it also had significantly higher specificity than the AI diagnosis (90.48% vs. 69.04%, p = 0.022). For lesions with a diameter of ≤ 2 cm, the specificity of the nomogram was higher than that of the AI diagnostic system (90.00% vs. 67.50%, p = 0.022). CONCLUSIONS Based on the combination of a 7-AAB panel, an AI diagnostic system, and other clinical features, our nomogram demonstrated good diagnostic performance in distinguishing lung nodules, especially those ≤ 2 cm in diameter. KEY POINTS • A novel diagnostic model of lung nodules was constructed by combining high-specificity tumor markers with a high-sensitivity artificial intelligence diagnostic system. • The diagnostic model shows good performance in distinguishing malignant from benign pulmonary nodules, especially for nodules smaller than 2 cm. • The diagnostic model can assist clinical decision-making for pulmonary nodules, with the advantages of high diagnostic efficiency, non-invasiveness, and simple measurement.
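A logistic-regression nomogram of the kind described above maps each predictor to points on a linear score and converts the total to a probability through the logistic function. The coefficients below are invented for illustration, not the study's fitted values:

```python
# Minimal sketch: how a logistic-regression nomogram turns predictors
# (age, AI call, 7-AAB result, CEA status) into a malignancy probability.
# All coefficients are hypothetical placeholders.

import math

def nomogram_prob(age, ai_positive, aab_positive, cea_high,
                  coef=(-4.0, 0.04, 1.5, 2.0, 1.0)):
    b0, b_age, b_ai, b_aab, b_cea = coef
    z = b0 + b_age * age + b_ai * ai_positive + b_aab * aab_positive + b_cea * cea_high
    return 1 / (1 + math.exp(-z))  # logistic link

p = nomogram_prob(age=62, ai_positive=1, aab_positive=1, cea_high=0)
print(round(p, 3))
```

Each positive marker adds to the linear score z, so a patient flagged by both the AI system and the 7-AAB panel lands well above one flagged by neither, which is exactly the complementarity the authors exploit.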
Affiliation(s)
- Yu Ding: Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou, 510080, China; The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Jingyu Zhang: State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, No. 1, Medical College Road, Yuzhong District, Chongqing, 400016, China
- Weitao Zhuang: Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou, 510080, China
- Zhen Gao: The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Dan Tian: Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou, 510080, China
- Cheng Deng: Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou, 510080, China
- Hansheng Wu: The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China; Department of Thoracic Surgery, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Rixin Chen: Research Center of Medical Sciences, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Guojie Lu: Department of Thoracic Surgery, Guangzhou Panyu Central Hospital, Guangzhou, China
- Gang Chen: Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou, 510080, China
- Paolo Mendogni: Thoracic Surgery and Lung Transplant Unit, Foundation IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Marcello Migliore: Thoracic Surgery, Cardio-Thoracic Department, University Hospital of Wales, Cardiff, UK; Minimally Invasive Surgery and New Technology, University Hospital of Catania, Department of Surgery and Medical Specialties, University of Catania, Catania, Italy
- Min-Woong Kang: Department of Thoracic and Cardiovascular Surgery, Chungnam National University School of Medicine, Daejeon, South Korea
- Ryu Kanzaki: Department of General Thoracic Surgery, Osaka University Graduate School of Medicine, Osaka, Japan
- Yong Tang: Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou, 510080, China
- Jiancheng Yang: Dianei Technology, Shanghai, China; Computer Vision Laboratory (CVLab), Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Qiuling Shi: State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, No. 1, Medical College Road, Yuzhong District, Chongqing, 400016, China
- Guibin Qiao: Department of Thoracic Surgery, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou, 510080, China; The Second School of Clinical Medicine, Southern Medical University, Guangzhou, China
30
Chen X, Wang W, Jiang Y, Qian X. A dual-transformation with contrastive learning framework for lymph node metastasis prediction in pancreatic cancer. Med Image Anal 2023; 85:102753. [PMID: 36682152; DOI: 10.1016/j.media.2023.102753]
Abstract
Pancreatic cancer is a malignant tumor, and its high recurrence rate after surgery is related to lymph node metastasis status. In clinical practice, a preoperative imaging-based prediction method is needed for prognosis assessment and treatment decision-making; however, there are two major challenges: insufficient data and difficulty in extracting discriminative features. This paper proposes a deep learning model to predict lymph node metastasis in pancreatic cancer using multiphase CT, in which a dual-transformation with contrastive learning framework is developed to overcome the challenges of fine-grained prediction with small sample sizes. Specifically, we designed a novel dynamic surface projection method to transform 3D data into 2D images, effectively using the 3D information while preserving the spatial correlation of the original texture information and reducing computational resources. This dynamic surface projection was then combined with the spiral transformation to establish a dual-transformation method that enhances the diversity and complementarity of the dataset. A dual-transformation-based data augmentation method was also developed to produce numerous 2D-transformed images and alleviate the effect of insufficient samples. Finally, a dual-transformation-guided contrastive learning scheme based on intra-space-transformation consistency and inter-class specificity was designed to mine additional supervised information, thereby extracting more discriminative features. Extensive experiments have shown the promising performance of the proposed model for predicting lymph node metastasis in pancreatic cancer. Our dual-transformation with contrastive learning scheme was further confirmed on an external public dataset, representing a potential paradigm for the fine-grained classification of oncological images with small sample sizes. The code will be released at https://github.com/SJTUBME-QianLab/Dual-transformation.
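The consistency objective behind contrastive schemes of this kind can be sketched with a standard InfoNCE loss: two transformed views of the same lesion (a positive pair) are pulled together in embedding space while other samples are pushed away. The toy 2-D embeddings below are illustrative; the paper's exact loss (intra-space-transformation consistency plus inter-class specificity) is more elaborate:

```python
# Minimal sketch: InfoNCE contrastive loss over cosine similarities.
# Embeddings are toy 2-D vectors, not real network outputs.

import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.dist(u, (0,) * len(u)) * math.dist(v, (0,) * len(v)))

def info_nce(anchor, positive, negatives, tau=0.5):
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    exps = [math.exp(s / tau) for s in sims]
    return -math.log(exps[0] / sum(exps))  # low when the positive dominates

# Loss is small when the positive pair is aligned, large when it is not.
loss_good = info_nce([1, 0], [0.9, 0.1], [[-1, 0], [0, 1]])
loss_bad  = info_nce([1, 0], [-1, 0],   [[0.9, 0.1], [0, 1]])
print(loss_good < loss_bad)  # True
```

Because the supervision comes from the transformations themselves, this kind of loss extracts extra training signal from a small labeled cohort, which is the point the abstract makes.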
Affiliation(s)
- Xiahan Chen: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Weishen Wang: Department of General Surgery, Pancreatic Disease Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Yu Jiang: Department of General Surgery, Pancreatic Disease Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Xiaohua Qian: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
31
Evaluation of surgical complexity by automated surgical process recognition in robotic distal gastrectomy using artificial intelligence. Surg Endosc 2023. [PMID: 36823363; PMCID: PMC9949687; DOI: 10.1007/s00464-023-09924-9]
Abstract
BACKGROUND Although radical gastrectomy with lymph node dissection is the standard treatment for gastric cancer, the complication rate remains high. Thus, estimation of surgical complexity is required for safety. We aimed to investigate the association between the surgical process and complexity, such as the risk of complications in robotic distal gastrectomy (RDG), to establish artificial intelligence (AI)-based automated surgical phase recognition by analyzing robotic surgical videos, and to investigate the predictability of surgical complexity by AI. METHOD This study assessed clinical data and robotic surgical videos for 56 patients who underwent RDG for gastric cancer. We investigated (1) the relationship between surgical complexity and perioperative factors (patient characteristics, surgical process); (2) AI training for automated phase recognition, with model performance assessed by comparing predictions to the surgeon-annotated reference; and (3) the AI model's ability to predict surgical complexity, evaluated by the area under the curve. RESULTS The surgical complexity score comprised extended total surgical duration, bleeding, and complications and was strongly associated with the intraoperative surgical process, especially the beginning phases (area under the curve 0.913). We established an AI model that can recognize surgical phases from video with 87% accuracy; the AI can determine intraoperative surgical complexity by calculating the duration of the beginning phases (phases 1-3; area under the curve 0.859). CONCLUSION Surgical complexity, as a surrogate of short-term outcomes, can be predicted by the surgical process, especially an extended duration of the beginning phases. Surgical complexity can also be evaluated automatically using our AI-based model.
32
Sengar N, Joshi RC, Dutta MK, Burget R. EyeDeep-Net: a multi-class diagnosis of retinal diseases using deep neural network. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08249-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
33
Radiomics-Based Machine Learning for Predicting the Injury Time of Rib Fractures in Gemstone Spectral Imaging Scans. Bioengineering (Basel) 2022; 10:bioengineering10010008. [PMID: 36671582 PMCID: PMC9855073 DOI: 10.3390/bioengineering10010008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Revised: 12/07/2022] [Accepted: 12/16/2022] [Indexed: 12/24/2022]
Abstract
This retrospective study aimed to predict the injury time of rib fractures by distinguishing fresh (30 days) from old (90 days) fractures. We enrolled 111 patients with chest trauma who had been scanned for rib fractures at our hospital between January 2018 and December 2018 using gemstone spectral imaging (GSI). The volume of interest of each broken end of the rib fractures was segmented using calcium-based material decomposition images derived from the GSI scans. The training and testing sets were randomly assigned in a 7:3 ratio. All cases were divided into groups by injury time at 30 and 90 days. We constructed radiomics-based models to predict the injury time of rib fractures. Model performance was assessed by the area under the curve (AUC) obtained by receiver operating characteristic analysis. We included 54 patients with 259 rib fracture segmentations (34 men; mean age, 52 ± 12.02 years; range, 19-72 years). Nine features were excluded by least absolute shrinkage and selection operator logistic regression to build the radiomics signature. For distinguishing injury time at 30 days in the testing set, the Support Vector Machine (SVM) model achieved an accuracy of 0.85 and an AUC of 0.871, while the human-model collaboration achieved an accuracy of 0.91 and an AUC of 0.912; at 90 days, the corresponding values were 0.81 and 0.804 for the SVM model and 0.83 and 0.85 for the collaboration. The radiomics-based model displayed good accuracy in differentiating between rib fractures at 30 and 90 days after injury, and the human-model collaboration generated more accurate outcomes, which may add value to clinical practice and help distinguish artificial injury in forensic medicine.
34
Lv Y, Wei Y, Xu K, Zhang X, Hua R, Huang J, Li M, Tang C, Yang L, Liu B, Yuan Y, Li S, Gao Y, Zhang X, Wu Y, Han Y, Shang Z, Yu H, Zhan Y, Shi F, Ye B. 3D deep learning versus the current methods for predicting tumor invasiveness of lung adenocarcinoma based on high-resolution computed tomography images. Front Oncol 2022; 12:995870. [PMID: 36338695 PMCID: PMC9634256 DOI: 10.3389/fonc.2022.995870] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Accepted: 09/30/2022] [Indexed: 11/22/2022] Open
Abstract
Background Different pathological subtypes of lung adenocarcinoma lead to different treatment decisions and prognoses, and it is clinically important to distinguish invasive lung adenocarcinoma from preinvasive adenocarcinoma (adenocarcinoma in situ and minimally invasive adenocarcinoma). This study aims to investigate the performance of a deep learning approach based on high-resolution computed tomography (HRCT) images in the classification of tumor invasiveness and to compare it with the performance of currently available approaches. Methods In this study, we used a deep learning approach based on 3D convolutional networks to automatically predict the invasiveness of pulmonary nodules. A total of 901 early-stage non-small cell lung cancer patients who underwent surgical treatment at Shanghai Chest Hospital between November 2015 and March 2017 were retrospectively included and randomly assigned to a training set (n=814) or testing set 1 (n=87). We subsequently included 116 patients who underwent surgical treatment and intraoperative frozen section between April 2019 and January 2020 to form testing set 2. We compared the performance of our deep learning approach in predicting tumor invasiveness with that of intraoperative frozen section analysis and human experts (radiologists and surgeons). Results The deep learning approach yielded an area under the receiver operating characteristic curve (AUC) of 0.946 for distinguishing preinvasive adenocarcinoma from invasive lung adenocarcinoma in testing set 1, which is significantly higher than the AUCs of human experts (P<0.05). In testing set 2, the deep learning approach distinguished invasive adenocarcinoma from preinvasive adenocarcinoma with an AUC of 0.862, which is higher than that of frozen section analysis (0.755, P=0.043), senior thoracic surgeons (0.720, P=0.006), radiologists (0.766, P>0.05), and junior thoracic surgeons (0.768, P>0.05).
Conclusions We developed a deep learning model that achieved comparable performance to intraoperative frozen section analysis in determining tumor invasiveness. The proposed method may contribute to clinical decisions related to the extent of surgical resection.
Affiliation(s)
- Yilv Lv: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Ying Wei: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Kuan Xu: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaobin Zhang: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Rong Hua: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Jia Huang: Department of Oncologic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Min Li: Department of Radiology, Shanghai Municipal Hospital of Traditional Chinese Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Cui Tang: Department of Radiology, Yangpu Hospital, Tongji University, Shanghai, China
- Long Yang: Department of Thoracic Surgery, Affiliated Hospital of Gansu Medical College, Pingliang, China
- Bingchun Liu: Department of Thoracic Surgery, Weifang People’s Hospital, Weifang, China
- Yonggang Yuan: Department of Thoracic Surgery, Qilu Hospital of Shandong University, Qingdao, China
- Siwen Li: Department of Thoracic Surgery, Qingyuan People’s Hospital, Guangzhou Medical University, Guangzhou, China
- Yaozong Gao: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xianjie Zhang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yifan Wu: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yuchen Han: Department of Pathology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Zhanxian Shang: Department of Pathology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Hong Yu: Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yiqiang Zhan: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China (*Correspondence)
- Bo Ye: Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China (*Correspondence)
35
Huang H, Zheng D, Chen H, Wang Y, Chen C, Xu L, Li G, Wang Y, He X, Li W. Fusion of CT images and clinical variables based on deep learning for predicting invasiveness risk of stage I lung adenocarcinoma. Med Phys 2022; 49:6384-6394. [PMID: 35938604 DOI: 10.1002/mp.15903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 04/01/2022] [Accepted: 07/26/2022] [Indexed: 11/08/2022] Open
Abstract
PURPOSE To develop a novel multimodal data fusion model based on deep learning that incorporates computed tomography (CT) images and clinical variables to predict the invasiveness risk of stage I lung adenocarcinoma manifesting as ground-glass nodules (GGNs), and to compare its diagnostic performance with that of radiologists. METHODS A total of 1946 patients with solitary, histopathologically confirmed GGNs with a maximum diameter of less than 3 cm were retrospectively enrolled. The training dataset containing 1704 GGNs was augmented by resampling, scaling, random cropping, etc., to generate new training data. A multimodal data fusion model based on a residual learning architecture and two multilayer perceptrons with an attention mechanism, combining CT images with patient general data and serum tumor markers, was built. Distance-based confidence scores (DCS) were calculated and compared among multimodal data models with different combinations. An observer study was conducted, and the prediction performance of the fusion algorithms was compared with that of two radiologists using an independent testing dataset of 242 GGNs. RESULTS Of all GGNs, 606 were confirmed as invasive adenocarcinoma (IA) and 1340 as non-IA. The proposed multimodal data fusion model combining CT images, patient general data, and serum tumor markers achieved the highest accuracy (88.5%), area under the ROC curve (AUC) (0.957), F1 (81.5%), weighted F1 (81.9%), and Matthews correlation coefficient (MCC) (73.2%) for classifying IA versus non-IA GGNs, which was even better than the senior radiologist's performance (accuracy, 86.1%). In addition, the DCSs for multimodal data suggested that CT images had a quantitatively stronger influence (0.9540) than general data (0.6726) or tumor markers (0.6971).
CONCLUSION This study demonstrated the feasibility of integrating different types of data, including CT images and clinical variables, and the multimodal data fusion model yielded higher performance for distinguishing IA from non-IA GGNs.
Affiliation(s)
- Haozhe Huang: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Dezhong Zheng: Laboratory for Medical Imaging Informatics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 500 Yutian Road, Hongkou District, Shanghai, 200083, China; University of Chinese Academy of Sciences, 19 Yuquan Road, Shijingshan District, Beijing, 100049, China
- Hong Chen: Department of Medical Imaging, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, 600 South Wanping Road, Xuhui District, Shanghai, 200030, China
- Ying Wang: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Chao Chen: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Lichao Xu: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Guodong Li: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Yaohui Wang: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Xinhong He: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
- Wentao Li: Department of Interventional Radiology, Fudan University Shanghai Cancer Center, 270 Dongan Road, Xuhui District, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Xuhui District, 130 Dongan Road, Shanghai, 200032, China
36
Sun K, He M, He Z, Liu H, Pi X. EfficientNet embedded with spatial attention for recognition of multi-label fundus disease from color fundus photographs. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
37
Zhao W, Sun Y, Kuang K, Yang J, Li G, Ni B, Jiang Y, Jiang B, Liu J, Li M. ViSTA: A Novel Network Improving Lung Adenocarcinoma Invasiveness Prediction from Follow-Up CT Series. Cancers (Basel) 2022; 14:cancers14153675. [PMID: 35954342 PMCID: PMC9367560 DOI: 10.3390/cancers14153675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 07/17/2022] [Accepted: 07/20/2022] [Indexed: 11/21/2022] Open
Abstract
Simple Summary Assessing follow-up computed tomography (CT) series is of great importance in clinical practice for lung nodule diagnosis. Deep learning is a thriving data mining method in medical imaging and has obtained impressive results. However, previous studies mostly focused on the analysis of single static time points instead of the entire follow-up series and required regular intervals between CT examinations. In the current study, we propose a new deep learning framework, named ViSTA, that can better evaluate tumor invasiveness using irregularly sampled serial follow-up CT images, helping to avoid aggressive procedures or delayed diagnosis in clinical practice. ViSTA provides a new solution for irregularly sampled data and delivers superior performance compared with other static or serial deep learning models. The proposed ViSTA framework improves performance close to the human level in predicting the invasiveness of lung adenocarcinoma while being transferable to other tasks analyzing serial medical data. Abstract To investigate the value of a deep learning method in predicting the invasiveness of early lung adenocarcinoma based on irregularly sampled follow-up computed tomography (CT) scans, a total of 351 nodules were enrolled in the study. A new deep learning network based on temporal attention, named Visual Simple Temporal Attention (ViSTA), was proposed to process irregularly sampled follow-up CT scans. We conducted substantial experiments to investigate the supplemental value of serial CTs in predicting invasiveness. A test set composed of 69 lung nodules was reviewed by three radiologists. The performance of the model and the radiologists was compared and analyzed. We also performed a visual investigation to explore the inherent growth pattern of early adenocarcinomas. Among counterpart models, ViSTA showed the best performance (AUC: 86.4% vs. 60.6%, 75.9%, 66.9%, 73.9%, 76.5%, 78.3%).
ViSTA also outperformed the model based on volume doubling time (AUC: 60.6%). ViSTA scored higher than two junior radiologists (accuracy of 81.2% vs. 75.4% and 71.0%) and came close to the senior radiologist (85.5%). Our proposed model using irregularly sampled follow-up CT scans achieved promising accuracy in evaluating the invasiveness of early-stage lung adenocarcinoma. Its performance is comparable with that of senior experts and better than that of junior experts and traditional deep learning models. With further validation, it can potentially be applied in clinical practice.
Affiliation(s)
- Wei Zhao: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Yingli Sun: Department of Radiology, Huadong Hospital, Fudan University, Shanghai 200040, China
- Kaiming Kuang: Dianei Technology, Shanghai 200051, China
- Jiancheng Yang: Dianei Technology, Shanghai 200051, China; Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Ge Li: Department of Radiology, The Xiangya Hospital, Central South University, Changsha 410008, China
- Bingbing Ni: Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yingjia Jiang: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Bo Jiang: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Jun Liu: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; Radiology Quality Control Center, Changsha 410011, China (Correspondence; Tel.: +86-137-8708-5002; Fax: +86-0731-85292116)
- Ming Li: Department of Radiology, Huadong Hospital, Fudan University, Shanghai 200040, China; Institute of Functional and Molecular Medical Imaging, Fudan University, Shanghai 200437, China (Correspondence; Tel.: +86-138-1662-0371; Fax: +86-21-57643271)
38
Bai H, Meng S, Xiong C, Liu Z, Shi W, Ren Q, Xia W, Zhao X, Jian J, Song Y, Ni C, Gao X, Li Z. Preoperative CECT-based Radiomic Signature for Predicting the Response of Transarterial Chemoembolization (TACE) Therapy in Hepatocellular Carcinoma. Cardiovasc Intervent Radiol 2022; 45:1524-1533. [PMID: 35896687 DOI: 10.1007/s00270-022-03221-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Accepted: 06/30/2022] [Indexed: 12/24/2022]
Abstract
PURPOSE To evaluate the performance of radiomics signatures in predicting the response to transarterial chemoembolization (TACE) therapy based on preoperative contrast-enhanced computed tomography (CECT). MATERIALS This study consisted of 111 patients with intermediate-stage hepatocellular carcinoma who underwent CECT at both the arterial phase (AP) and venous phase (VP) before and after TACE. According to mRECIST 1.1, patients were divided into an objective-response group (n = 38) and a non-response group (n = 73). Among them, 79 patients were assigned to the training dataset, and the remaining 32 to the test dataset. METHODS Radiomics features were extracted from CECT images. Two feature-ranking methods and three classifiers were used to find the best single-phase radiomics signatures for both AP and VP on the training set. Meanwhile, multi-phase radiomics signatures were built by integrating images from the two CECT phases through decision-level fusion and feature-level fusion. Finally, multivariable logistic regression was used to develop a nomogram combining the radiomics signatures and clinical-radiologic characteristics. Prediction performance was evaluated by AUC on the test dataset. RESULTS The multi-phase radiomics signature (AUC = 0.883) performed better in predicting TACE therapy response than the best single-phase radiomics signature (AUC = 0.861). The nomogram (AUC = 0.913) outperformed all radiomics signatures. CONCLUSION The radiomics signatures and nomogram were developed and validated for predicting response to TACE therapy, and the radiomics model may play a positive role in identifying patients who may benefit from TACE in clinical practice.
Affiliation(s)
- Honglin Bai: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, 215163, China
- Siyu Meng: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China
- Chuanfeng Xiong: Tandon School of Engineering, New York University, 6 MetroTech Center, Brooklyn, NY, USA
- Zhao Liu: Department of Interventional Radiology, The First Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215006, Jiangsu, China
- Wei Shi: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China
- Qimeng Ren: Department of Interventional Radiology, The First Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215006, Jiangsu, China
- Wei Xia: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China
- XingYu Zhao: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, 215163, China
- Junming Jian: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China
- Yizhi Song: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China
- Caifang Ni: Department of Interventional Radiology, The First Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215006, Jiangsu, China
- Xin Gao: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, No. 88 Keling Road, Suzhou, 215163, Jiangsu, China
- Zhi Li: Department of Interventional Radiology, The First Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215006, Jiangsu, China; People's Hospital of Xinjiang Kizilsu Kirgiz Autonomous Prefecture, West Pamir Road 5, Atush, Xinjiang, 845350, China
39
Zeng Y, Long C, Zhao W, Liu J. Predicting the Severity of Neurological Impairment Caused by Ischemic Stroke Using Deep Learning Based on Diffusion-Weighted Images. J Clin Med 2022; 11:jcm11144008. [PMID: 35887776 PMCID: PMC9325315 DOI: 10.3390/jcm11144008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Revised: 06/23/2022] [Accepted: 07/05/2022] [Indexed: 02/01/2023] Open
Abstract
Purpose: To develop a preliminary deep learning model that uses diffusion-weighted imaging (DWI) images to classify the severity of neurological impairment caused by ischemic stroke. Materials and Methods: This retrospective study included 851 ischemic stroke patients (711 in the training set and 140 in the test set). The patients’ NIHSS scores, which reflect the severity of neurological impairment, were reviewed upon admission and on Day 7 of hospitalization and were classified into two stages (stage 1 for NIHSS < 5 and stage 2 for NIHSS ≥ 5). A 3D-CNN was trained to predict the NIHSS stage based on differently preprocessed DWI images. The performance in predicting the severity of anterior and posterior circulation stroke was also investigated. The AUC, specificity, and sensitivity were calculated to evaluate the performance of the model. Results: Our proposed model obtained better performance in predicting the NIHSS stage on Day 7 of hospitalization than at admission (best AUC 0.895 vs. 0.846). Model D, trained with DWI images normalized with z-score and resized to 256 × 256 × 64 voxels, achieved the best AUC of 0.846 in predicting the NIHSS stage at admission. Model E, trained with DWI images normalized with maximum−minimum and resized to 128 × 128 × 32 voxels, achieved the best AUC of 0.895 in predicting the NIHSS stage on Day 7 of hospitalization. Our model also showed promising performance in predicting the NIHSS stage on Day 7 of hospitalization for anterior and posterior circulation stroke, with best AUCs of 0.905 and 0.903, respectively. Conclusions: Our proposed 3D-CNN model can effectively predict the neurological severity of ischemic stroke using DWI images and performs better in predicting the NIHSS stage on Day 7 of hospitalization. The model also obtained promising performance in subgroup analysis, which can potentially help clinical decision making.
Affiliation(s)
- Ying Zeng: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; Department of Radiology, Xiangtan Central Hospital, Xiangtan 411199, China
- Chen Long: Department of Stroke Unit, Xiangtan Central Hospital, Xiangtan 411199, China
- Wei Zhao: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; Clinical Research Center for Medical Imaging, Changsha 410011, China (Correspondence)
- Jun Liu: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; Clinical Research Center for Medical Imaging, Changsha 410011, China; Department of Radiology Quality Control Center, Changsha 410011, China (Correspondence)
40
Ouyang ML, Zheng RX, Wang YR, Zuo ZY, Gu LD, Tian YQ, Wei YG, Huang XY, Tang K, Wang LX. Deep Learning Analysis Using 18F-FDG PET/CT to Predict Occult Lymph Node Metastasis in Patients With Clinical N0 Lung Adenocarcinoma. Front Oncol 2022; 12:915871. [PMID: 35875089 PMCID: PMC9301998 DOI: 10.3389/fonc.2022.915871] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 06/07/2022] [Indexed: 12/24/2022] Open
Abstract
Introduction The aim of this work was to determine the feasibility of using a deep learning approach to predict occult lymph node metastasis (OLM) based on preoperative FDG-PET/CT images in patients with clinical node-negative (cN0) lung adenocarcinoma. Materials and Methods Dataset 1 (for training and internal validation) included 376 consecutive patients with cN0 lung adenocarcinoma treated at our hospital between May 2012 and May 2021. Dataset 2 (for prospective testing) comprised 58 consecutive patients with cN0 lung adenocarcinoma treated from June 2021 to February 2022 at the same center. Three deep learning models (PET alone, CT alone, and a combined model) were developed for the prediction of OLM. The performance of the models was evaluated on the internal validation and prospective test sets in terms of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results The combined model incorporating PET and CT showed the best performance, achieving an AUC of 0.81 [95% confidence interval (CI): 0.61, 1.00] for the prediction of OLM in the internal validation set (n = 60) and an AUC of 0.87 (95% CI: 0.75, 0.99) in the prospective test set (n = 58). The model achieved 87.50% sensitivity, 80.00% specificity, and 81.00% accuracy in the internal validation set and 75.00% sensitivity, 88.46% specificity, and 86.60% accuracy in the prospective test set. Conclusion This study presented a deep learning approach that enables the prediction of occult nodal involvement from preoperative PET/CT images in cN0 lung adenocarcinoma, which could help clinicians select patients suitable for sublobar resection.
Affiliation(s)
- Ming-li Ouyang: Key Laboratory of Heart and Lung, Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Rui-xuan Zheng: Key Laboratory of Heart and Lung, Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yi-ran Wang: Department of Medical Engineering, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zi-yi Zuo: Key Laboratory of Heart and Lung, Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Liu-dan Gu: Key Laboratory of Heart and Lung, Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yu-qian Tian: Key Laboratory of Heart and Lung, Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yu-guo Wei: Precision Health Institution, General Electric (GE) Healthcare, Hangzhou, China
- Xiao-ying Huang: Key Laboratory of Heart and Lung, Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China (*Correspondence)
- Kun Tang: Department of Nuclear Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China (*Correspondence)
- Liang-xing Wang: Key Laboratory of Heart and Lung, Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China (*Correspondence)
| |
Collapse
|
41
Pei Q, Luo Y, Chen Y, Li J, Xie D, Ye T. Artificial intelligence in clinical applications for lung cancer: diagnosis, treatment and prognosis. Clin Chem Lab Med 2022; 60:1974-1983. [PMID: 35771735 DOI: 10.1515/cclm-2022-0291] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 06/17/2022] [Indexed: 12/12/2022]
Abstract
Artificial intelligence (AI) is a branch of computer science that includes research in robotics, language recognition, image recognition, natural language processing, and expert systems. AI is poised to change medical practice, and oncology is no exception to this trend. Indeed, lung cancer has the highest morbidity and mortality worldwide, driven by the difficulty of linking early pulmonary nodules to neoplastic change and by the many factors that complicate treatment choice and worsen prognosis. AI can effectively enhance the diagnostic efficiency of lung cancer while supporting optimal treatment and prognostic evaluation, thereby reducing mortality. This review provides an overview of AI across the fields of lung cancer. We define the core concepts of AI and cover the basics of natural language processing, image recognition, human-computer interaction, and machine learning. We also discuss the most recent breakthroughs in AI technologies and their clinical applications in the diagnosis, treatment, and prognosis of lung cancer. Finally, we highlight the future challenges of AI in lung cancer and its impact on medical practice.
Affiliation(s)
- Qin Pei, Yanan Luo, Yiyu Chen, Jingyuan Li, Dan Xie, Ting Ye: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, P.R. China
42
Takeuchi M, Kawakubo H, Saito K, Maeda Y, Matsuda S, Fukuda K, Nakamura R, Kitagawa Y. Automated Surgical-Phase Recognition for Robot-Assisted Minimally Invasive Esophagectomy Using Artificial Intelligence. Ann Surg Oncol 2022; 29:6847-6855. [PMID: 35763234 DOI: 10.1245/s10434-022-11996-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 05/11/2022] [Indexed: 11/18/2022]
Abstract
BACKGROUND Although robot-assisted minimally invasive esophagectomy (RAMIE) is increasingly performed owing to its three-dimensional field of view, image stabilization, and flexible joint function, both surgeons and surgical teams require proficiency. This study aimed to establish an artificial intelligence (AI)-based automated surgical-phase recognition system for RAMIE by analyzing robotic surgical videos. METHODS This study enrolled 31 patients who underwent RAMIE. The videos were annotated into the following nine surgical phases to train the AI for automated phase recognition: preparation, lower mediastinal dissection, upper mediastinal dissection, azygos vein division, subcarinal lymph node dissection (LND), right recurrent laryngeal nerve (RLN) LND, left RLN LND, esophageal transection, and post-dissection to completion of surgery. An additional phase ("no step") indicated video sequences during which the camera was removed from the thoracic cavity. The patients were divided into two groups, early period (20 patients) and late period (11 patients), and the relationship between surgical-phase duration and surgical period was assessed. RESULTS Fourfold cross-validation was applied to evaluate the performance of the current model. The AI had an accuracy of 84%. The preparation (p = 0.012), post-dissection to completion of surgery (p = 0.003), and "no step" (p < 0.001) phases predicted by the AI were significantly shorter in the late period than in the early period. CONCLUSIONS A highly accurate automated surgical-phase recognition system for RAMIE was established using deep learning. Specific phase durations were significantly associated with the surgical period at the authors' institution.
Affiliation(s)
- Masashi Takeuchi, Hirofumi Kawakubo, Kosuke Saito, Yusuke Maeda, Satoru Matsuda, Kazumasa Fukuda, Rieko Nakamura, Yuko Kitagawa: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
43
Tao J, Liang C, Yin K, Fang J, Chen B, Wang Z, Lan X, Zhang J. 3D convolutional neural network model from contrast-enhanced CT to predict spread through air spaces in non-small cell lung cancer. Diagn Interv Imaging 2022; 103:535-544. [PMID: 35773100 DOI: 10.1016/j.diii.2022.06.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Revised: 06/11/2022] [Accepted: 06/13/2022] [Indexed: 11/26/2022]
Abstract
PURPOSE The purpose of this study was to compare the efficacy of five non-invasive models, including a three-dimensional (3D) convolutional neural network (CNN) model, for predicting the spread through air spaces (STAS) status of non-small cell lung cancer (NSCLC), and to identify the best prediction model as a basis for clinical surgical planning. MATERIALS AND METHODS A total of 203 patients (112 men, 91 women; mean age, 60 years; age range, 22-80 years) with NSCLC were retrospectively included; 153 formed the training cohort and 50 the validation cohort. Following the image biomarker standardization initiative reference manual, image processing and feature extraction were standardized using PyRadiomics. A logistic regression classifier was used to build the models. Five models (a clinicopathological/CT model, a conventional radiomics model, a computer vision (CV) model, a 3D CNN model, and a combined model) were constructed to predict STAS in NSCLC. Areas under the receiver operating characteristic curve (AUC) were used to validate the ability of the five models to predict STAS. RESULTS For predicting STAS, the 3D CNN model was superior to the clinicopathological/CT, conventional radiomics, CV, and combined models and achieved satisfactory discrimination, with an AUC of 0.93 (95% CI: 0.70-0.82) in the training cohort and 0.80 (95% CI: 0.65-0.86) in the validation cohort. Decision curve analysis indicated that, when the threshold probability exceeded 10%, the 3D CNN model was beneficial for predicting STAS status compared with either treating all or treating none of the patients within certain ranges of risk threshold. CONCLUSION The 3D CNN model can be used for the preoperative prediction of STAS in patients with NSCLC and was superior to the other four models in predicting patients' risk of developing STAS.
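The non-deep-learning arm of this pipeline, standardized feature extraction feeding a logistic regression classifier with a 153/50 split, can be sketched as follows. This is a hedged illustration, not the authors' code: the feature matrix is synthetic, standing in for features that would in practice come from PyRadiomics' `RadiomicsFeatureExtractor.execute()` run on CT image/mask pairs.

```python
# Sketch of a radiomics-style modelling step (assumption: not the paper's code).
# In a real pipeline the features would come from PyRadiomics, roughly:
#   from radiomics import featureextractor
#   fe = featureextractor.RadiomicsFeatureExtractor()
#   features = fe.execute(ct_image_path, nodule_mask_path)
# Here random features with a linear signal stand in so the step is runnable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_patients, n_features = 203, 20           # cohort size from the abstract
X = rng.normal(size=(n_patients, n_features))
w = rng.normal(size=n_features)
y = (X @ w + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)  # STAS label

# 153 training / 50 validation, mirroring the abstract's split.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=50, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1])
print(f"validation AUC: {auc:.2f}")
```

The same AUC-based comparison is then repeated for each of the five candidate models on the fixed validation cohort.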
Affiliation(s)
- Junli Tao, Changyu Liang, Ke Yin, Jiayang Fang, Bohui Chen, Zhenyu Wang, Xiaosong Lan, Jiuquan Zhang: Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, PR China; Key Laboratory for Biorheological Science and Technology of Ministry of Education (Chongqing University), Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400044, PR China
44
Ou X, Gao L, Quan X, Zhang H, Yang J, Li W. BFENet: A two-stream interaction CNN method for multi-label ophthalmic diseases classification with bilateral fundus images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106739. [PMID: 35344766 DOI: 10.1016/j.cmpb.2022.106739] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 02/23/2022] [Accepted: 03/07/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Early fundus screening and timely treatment of ophthalmic diseases can effectively prevent blindness. Previous studies focus on fundus images of a single eye, without exploiting the relevant information shared between the left and right eyes, whereas clinical ophthalmologists usually use binocular fundus images to aid ocular disease diagnosis. In addition, previous works usually target only one ocular disease at a time. Considering the importance of patient-level bilateral-eye diagnosis and multi-label ophthalmic disease classification, we propose a bilateral feature enhancement network (BFENet) to address these two problems. METHODS We propose a two-stream interactive CNN architecture for multi-label ophthalmic disease classification with bilateral fundus images. First, we design a feature enhancement module that exploits the interaction between bilateral fundus images to strengthen the extracted feature information. Specifically, an attention mechanism learns the interdependence between local and global information in the two-stream interactive architecture, which reweights these features and recovers more details. To capture more disease characteristics, we further design a novel multiscale module that enriches the feature maps by superimposing feature information from images of different resolutions extracted through dilated convolution. RESULTS In the off-site set, the Kappa, F1, AUC, and Final scores are 0.535, 0.892, 0.912, and 0.780, respectively. In the on-site set, they are 0.513, 0.886, 0.903, and 0.767, respectively. Compared with existing methods, BFENet achieves the best classification performance. CONCLUSIONS Comprehensive experiments demonstrate the effectiveness of the proposed model. Moreover, our method can be extended to similar tasks in which the correlation between different images is important.
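The Kappa, F1, and AUC figures BFENet is evaluated on are standard multi-label classification measures. A small sketch of how such metrics are computed with scikit-learn on synthetic predictions (an illustration only, not the authors' evaluation code; the 8-label setup is an assumption for the example):

```python
# Hedged sketch: multi-label Kappa / F1 / AUC on synthetic predictions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score, roc_auc_score

rng = np.random.default_rng(1)
n_samples, n_labels = 200, 8               # e.g. 8 ocular-disease labels

y_true = rng.integers(0, 2, size=(n_samples, n_labels))
# Scores correlated with the labels stand in for the network's sigmoid outputs.
scores = np.clip(y_true * 0.5 + rng.normal(0.3, 0.2, size=y_true.shape), 0, 1)
y_pred = (scores >= 0.5).astype(int)       # threshold each label independently

kappa = cohen_kappa_score(y_true.ravel(), y_pred.ravel())  # agreement over all label slots
f1 = f1_score(y_true, y_pred, average="micro")             # micro-averaged over labels
auc = roc_auc_score(y_true, scores, average="macro")       # macro-averaged per-label AUC
print(f"Kappa {kappa:.3f}  F1 {f1:.3f}  AUC {auc:.3f}")
```

Whether Kappa is computed over flattened label slots or per label, and whether F1/AUC are micro- or macro-averaged, are reporting choices; the abstract does not specify, so the averaging modes above are assumptions.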
Affiliation(s)
- Xingyuan Ou, Xiongwen Quan, Han Zhang, Jinglong Yang, Wei Li: College of Artificial Intelligence, Nankai University, Tianjin, China
- Li Gao: Ophthalmology, Tianjin Huanhu Hospital, Tianjin, China
45
Lee JH, Hwang EJ, Kim H, Park CM. A narrative review of deep learning applications in lung cancer research: from screening to prognostication. Transl Lung Cancer Res 2022; 11:1217-1229. [PMID: 35832457 PMCID: PMC9271435 DOI: 10.21037/tlcr-21-1012] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 05/16/2022] [Indexed: 01/17/2023]
Abstract
Background and Objective Deep learning (DL) algorithms have been developed for various tasks, including lung nodule detection on chest radiographs or lung cancer computed tomography (CT) screening, potential candidate selection in lung cancer screening, malignancy prediction for indeterminate pulmonary nodules, lung cancer staging, treatment response prediction, prognostication, and prediction of genetic mutations in lung cancer. Furthermore, these DL algorithms have been applied in various clinical settings so that they may be generalized to real-world clinical practice. Multiple DL algorithms have been shown to be on par with experts or current clinical prediction models for several specific tasks. However, no article has yet comprehensively reviewed DL algorithms dedicated to lung cancer research. This narrative review presents an overview of the literature on DL techniques applied in lung cancer research and briefly summarizes the results according to the algorithms' clinical use cases. Methods We performed a narrative review by searching the Embase and OVID-MEDLINE databases for articles published in English from October 2016 until September 2021 and by reviewing the bibliographies of key references to identify important literature related to DL in lung cancer research. The background, development, results, and clinical implications of each DL algorithm are briefly discussed. Lastly, we highlight future directions in lung cancer research using DL techniques. Key Content and Findings DL algorithms have shown performance comparable to or higher than that of human experts in various clinical settings. Specifically, they have been actively applied to detect lung nodules in chest radiographs or CT examinations, optimize candidate selection for lung cancer screening (LCS), predict the malignancy of lung nodules, stage lung cancer, and predict treatment response, patients' prognoses, and genetic mutations in lung cancers. Conclusions DL algorithms have demonstrated potential value for various tasks, ranging from lung cancer screening to prognostication of lung cancer patients. Future research is warranted for the clinical application of these algorithms in daily practice and verification of their real-world clinical usefulness.
Affiliation(s)
- Jong Hyuk Lee, Eui Jin Hwang, Hyungjin Kim: Department of Radiology, Seoul National University Hospital; Department of Radiology, Seoul National University College of Medicine; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea
- Chang Min Park: Department of Radiology, Seoul National University Hospital; Department of Radiology, Seoul National University College of Medicine; Institute of Radiation Medicine, Seoul National University Medical Research Center; Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, Korea
46
Wang C, Shao J, Xu X, Yi L, Wang G, Bai C, Guo J, He Y, Zhang L, Yi Z, Li W. DeepLN: A Multi-Task AI Tool to Predict the Imaging Characteristics, Malignancy and Pathological Subtypes in CT-Detected Pulmonary Nodules. Front Oncol 2022; 12:683792. [PMID: 35646699 PMCID: PMC9130467 DOI: 10.3389/fonc.2022.683792] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 03/07/2022] [Indexed: 02/05/2023] Open
Abstract
Objectives Distinction of malignant pulmonary nodules from the benign ones based on computed tomography (CT) images can be time-consuming but significant in routine clinical management. The advent of artificial intelligence (AI) has provided an opportunity to improve the accuracy of cancer risk prediction. Methods A total of 8950 detected pulmonary nodules with complete pathological results were retrospectively enrolled. The different radiological manifestations were identified mainly as various nodules densities and morphological features. Then, these nodules were classified into benign and malignant groups, both of which were subdivided into finer specific pathological types. Here, we proposed a deep convolutional neural network for the assessment of lung nodules named DeepLN to identify the radiological features and predict the pathologic subtypes of pulmonary nodules. Results In terms of density, the area under the receiver operating characteristic curves (AUCs) of DeepLN were 0.9707 (95% confidence interval, CI: 0.9645-0.9765), 0.7789 (95%CI: 0.7569-0.7995), and 0.8950 (95%CI: 0.8822-0.9088) for the pure-ground glass opacity (pGGO), mixed-ground glass opacity (mGGO) and solid nodules. As for the morphological features, the AUCs were 0.8347 (95%CI: 0.8193-0.8499) and 0.9074 (95%CI: 0.8834-0.9314) for spiculation and lung cavity respectively. For the identification of malignant nodules, our DeepLN algorithm achieved an AUC of 0.8503 (95%CI: 0.8319-0.8681) in the test set. Pertaining to predicting the pathological subtypes in the test set, the multi-task AUCs were 0.8841 (95%CI: 0.8567-0.9083) for benign tumors, 0.8265 (95%CI: 0.8004-0.8499) for inflammation, and 0.8022 (95%CI: 0.7616-0.8445) for other benign ones, while AUCs were 0.8675 (95%CI: 0.8525-0.8813) for lung adenocarcinoma (LUAD), 0.8792 (95%CI: 0.8640-0.8950) for squamous cell carcinoma (LUSC), 0.7404 (95%CI: 0.7031-0.7782) for other malignant ones respectively in the malignant group. 
Conclusions DeepLN, based on a deep learning algorithm, demonstrated competitive performance in predicting imaging characteristics, malignancy, and pathological subtypes from non-invasive CT images, and thus shows strong potential for use in the routine clinical workflow.
Affiliation(s)
- Chengdi Wang, Jun Shao, Yanqi He, Weimin Li: Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Xiuyuan Xu, Le Yi, Jixiang Guo, Lei Zhang, Zhang Yi: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Gang Wang: Precision Medicine Center, West China Hospital, Sichuan University, Chengdu, China
- Congchen Bai: Department of Medical Informatics, West China Hospital, Sichuan University, Chengdu, China
47
Takeuchi M, Collins T, Ndagijimana A, Kawakubo H, Kitagawa Y, Marescaux J, Mutter D, Perretta S, Hostettler A, Dallemagne B. Automatic surgical phase recognition in laparoscopic inguinal hernia repair with artificial intelligence. Hernia 2022; 26:1669-1678. [PMID: 35536371 DOI: 10.1007/s10029-022-02621-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 04/21/2022] [Indexed: 11/26/2022]
Abstract
BACKGROUND Because of the complexity of the intra-abdominal anatomy in the posterior approach, a longer learning curve has been observed in laparoscopic transabdominal preperitoneal (TAPP) inguinal hernia repair. Consequently, automatic tools using artificial intelligence (AI) to monitor TAPP procedures and assess learning curves are required. The primary objective of this study was to establish a deep learning-based automated surgical phase recognition system for TAPP. A secondary objective was to investigate the relationship between surgical skills and phase duration. METHODS This study enrolled 119 patients who underwent the TAPP procedure. The surgical videos were annotated (delineated in time) and split into the following surgical phases: preparation, peritoneal flap incision, peritoneal flap dissection, hernia dissection, mesh deployment, mesh fixation, peritoneal flap closure, and additional closure. An AI model was trained to automatically recognize surgical phases from the videos. The relationship between phase duration and surgical skill was also evaluated. RESULTS Fourfold cross-validation was used to assess the performance of the AI model. The accuracy was 88.81% and 85.82% in unilateral and bilateral cases, respectively. In unilateral hernia cases, the durations of peritoneal incision (p = 0.003) and hernia dissection (p = 0.014) detected via AI were significantly shorter for experts than for trainees. CONCLUSION An automated surgical phase recognition system for TAPP was established using deep learning, with high accuracy. Our AI-based system can be useful for automatic monitoring of surgical progress, improving operating room efficiency, evaluating surgical skills, and video-based surgical education. Specific phase durations detected via the AI model were significantly associated with the surgeons' learning curve.
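The fourfold cross-validated accuracy used here (and in the RAMIE study above) follows the usual k-fold recipe: partition the cases into four folds, train on three, test on the held-out fold, and average. A minimal sketch with a simple classifier standing in for the deep learning model (synthetic "frame features"; an illustration under stated assumptions, not the paper's code):

```python
# Sketch of fourfold cross-validated accuracy for phase classification.
# A logistic-regression classifier on synthetic features stands in for the
# video-based deep learning model used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(7)
n_frames, n_features, n_phases = 400, 16, 7
X = rng.normal(size=(n_frames, n_features))
centers = rng.normal(scale=2.0, size=(n_phases, n_features))
y = rng.integers(0, n_phases, size=n_frames)
X += centers[y]                            # give each phase a separable signature

cv = KFold(n_splits=4, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"fourfold CV accuracy: {scores.mean():.3f}")
```

Note that in surgical-video work the folds are normally split by patient, not by frame, so that frames from one operation never appear in both training and test sets; the frame-level split above is a simplification.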
Affiliation(s)
- M Takeuchi: IRCAD, Research Institute Against Digestive Cancer (IRCAD) France, 1, place de l'Hôpital, 67091, Strasbourg, France; Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- T Collins, J Marescaux, A Hostettler: IRCAD France, Strasbourg, France; IRCAD Africa, Kigali, Rwanda
- A Ndagijimana: IRCAD Africa, Kigali, Rwanda
- H Kawakubo, Y Kitagawa: Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- D Mutter, S Perretta, B Dallemagne: IRCAD France, Strasbourg, France; Department of Digestive and Endocrine Surgery, University Hospital, Strasbourg, France
48
[Clinical Study of Artificial Intelligence-assisted Diagnosis System in Predicting the Invasive Subtypes of Early-stage Lung Adenocarcinoma Appearing as Pulmonary Nodules]. ZHONGGUO FEI AI ZA ZHI = CHINESE JOURNAL OF LUNG CANCER 2022; 25:245-252. [PMID: 35477188 PMCID: PMC9051300 DOI: 10.3779/j.issn.1009-3419.2022.102.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
BACKGROUND Lung cancer currently has the highest mortality of any cancer in China and worldwide. The detection of lung nodules is a key step in reducing lung cancer mortality. Artificial intelligence-assisted diagnosis systems represent the state of the art in nodule detection, differentiation between benign and malignant nodules, and diagnosis of invasive subtypes; however, validation with clinical data is necessary for further application. Therefore, the aim of this study was to evaluate the effectiveness of an artificial intelligence-assisted diagnosis system in predicting the invasive subtypes of early-stage lung adenocarcinoma appearing as pulmonary nodules. METHODS Clinical data of 223 patients with early-stage lung adenocarcinoma appearing as pulmonary nodules, admitted to the Lanzhou University Second Hospital from January 1st, 2016 to December 31st, 2021, were retrospectively analyzed. Patients were divided into an invasive adenocarcinoma group (n=170) and a non-invasive adenocarcinoma group (n=53); the non-invasive adenocarcinoma group was subdivided into a minimally invasive adenocarcinoma group (n=31) and a preinvasive lesions group (n=22). The malignant probability and imaging characteristics of each group were compared to analyze their ability to predict the invasive subtypes of early-stage lung adenocarcinoma. The concordance between the system's qualitative diagnosis of invasive subtypes and postoperative pathology was then analyzed. RESULTS Across the invasive subtypes of early-stage lung adenocarcinoma, the mean CT value of pulmonary nodules (P<0.001), diameter (P<0.001), volume (P<0.001), malignant probability (P<0.001), pleural retraction sign (P<0.001), lobulation (P<0.001), and spiculation (P<0.001) differed significantly. With increasing invasiveness across the subtypes, the proportion of dominant signs in each group gradually increased. For binary classification, the sensitivity, specificity, and area under the curve (AUC) of the system's qualitative diagnosis of invasive subtypes were 81.76%, 92.45%, and 0.871, respectively. For three-class classification, the accuracy, recall, F1 score, and AUC were 83.86%, 85.03%, 76.46%, and 0.879, respectively. CONCLUSIONS The artificial intelligence-assisted diagnosis system can predict the invasive subtypes of early-stage lung adenocarcinoma appearing as pulmonary nodules, with a certain predictive value. With algorithm optimization and improved data, it may guide individualized treatment of patients.
49
[Chinese Experts Consensus on Artificial Intelligence Assisted Management for Pulmonary Nodule (2022 Version)]. ZHONGGUO FEI AI ZA ZHI = CHINESE JOURNAL OF LUNG CANCER 2022; 25:219-225. [PMID: 35340198 PMCID: PMC9051301 DOI: 10.3779/j.issn.1009-3419.2022.102.08] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Low-dose computed tomography (CT) for lung cancer screening has been proven to reduce lung cancer deaths in the screening group compared with the control group. The increasing number of pulmonary nodules detected by CT scans significantly increases radiologists' workload for scan interpretation. Artificial intelligence (AI) has the potential to increase the efficiency of pulmonary nodule discrimination and has been tested in preliminary studies of nodule management. As more and more AI products are commercialized, this consensus statement has been organized in a collaborative effort by the Thoracic Surgery Committee, Department of Simulated Medicine, Wu Jieping Medical Foundation, to aid clinicians in the application of AI-assisted management for pulmonary nodules.
|
50
|
Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022; 14:cancers14071840. [PMID: 35406614 PMCID: PMC8997734 DOI: 10.3390/cancers14071840] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/29/2022] [Accepted: 03/30/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary Pulmonary nodules are considered a sign of bronchogenic carcinoma; detecting them early can limit their progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans, published in the last two decades, in addition to the limitations and future prospects of the field. Abstract Pulmonary nodules are the precursors of bronchogenic carcinoma; their early detection facilitates early treatment, which saves lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and to be more applicable and easier to use. In this review, we aim to briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy: Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi: Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi: Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal: Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz: Bioengineering Department, University of Louisville, Louisville, KY 40292, USA (Correspondence)
|