1. Ashayeri H, Sobhi N, Pławiak P, Pedrammehr S, Alizadehsani R, Jafarizadeh A. Transfer Learning in Cancer Genetics, Mutation Detection, Gene Expression Analysis, and Syndrome Recognition. Cancers (Basel) 2024; 16:2138. PMID: 38893257; PMCID: PMC11171544; DOI: 10.3390/cancers16112138.
Abstract
Artificial intelligence (AI), encompassing machine learning (ML) and deep learning (DL), has revolutionized medical research, facilitating advancements in drug discovery and cancer diagnosis. ML identifies patterns in data, while DL employs neural networks for intricate processing. Predictive modeling challenges, such as data labeling, are addressed by transfer learning (TL), which leverages pre-existing models for faster training. TL shows potential in genetic research, improving tasks such as gene expression analysis, mutation detection, genetic syndrome recognition, and genotype-phenotype association. This review explores the role of TL in overcoming challenges in mutation detection, genetic syndrome detection, gene expression analysis, and phenotype-genotype association. TL has shown effectiveness in all of these areas: it enhances the accuracy and efficiency of mutation detection, aiding the identification of genetic abnormalities; it can improve the diagnostic accuracy of syndrome-related genetic patterns; it plays a crucial role in gene expression analysis, accurately predicting gene expression levels and their interactions; and it enhances phenotype-genotype association studies by leveraging pre-trained models. In conclusion, TL improves AI efficiency in mutation prediction, gene expression analysis, and genetic syndrome detection. Future studies should focus on increasing domain similarities, expanding databases, and incorporating clinical data for better predictions.
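The freeze-and-retrain idea behind TL can be illustrated with a framework-free sketch: a "pretrained" feature extractor is kept frozen and only a new task head is trained on a small labeled target set. Everything here is hypothetical (the frozen projection stands in for real pretrained weights); it is a sketch of the concept, not any reviewed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: a fixed projection assumed to
# have been learned on a source task. In TL it is frozen and reused as-is.
W_pretrained = rng.normal(size=(10, 4))

def extract_features(x):
    """Frozen backbone: map raw inputs to learned features."""
    return np.tanh(x @ W_pretrained)

# Small labeled target dataset (e.g. scarce genetic/imaging labels).
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new task head (a logistic layer) is trained.
w, b = np.zeros(4), 0.0
feats = extract_features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.5 * feats.T @ (p - y) / len(y)   # gradient step on the head only
    b -= 0.5 * np.mean(p - y)

acc = np.mean((1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5) == y)
print(f"target-task training accuracy: {acc:.2f}")
```

Because the backbone never updates, training is fast and needs far fewer labels than learning the whole model from scratch, which is the practical appeal of TL in data-poor genetic settings.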
Affiliation(s)
- Hamidreza Ashayeri: Student Research Committee, Tabriz University of Medical Sciences, Tabriz 5165665811, Iran
- Navid Sobhi: Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz 5165665811, Iran
- Paweł Pławiak: Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Warszawska 24, 31-155 Krakow, Poland; Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100 Gliwice, Poland
- Siamak Pedrammehr: Faculty of Design, Tabriz Islamic Art University, Tabriz 5164736931, Iran; Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Burwood, VIC 3216, Australia
- Roohallah Alizadehsani: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Burwood, VIC 3216, Australia
- Ali Jafarizadeh: Nikookari Eye Center, Tabriz University of Medical Sciences, Tabriz 5165665811, Iran; Immunology Research Center, Tabriz University of Medical Sciences, Tabriz 5165665811, Iran
2. Nguyen HS, Ho DKN, Nguyen NN, Tran HM, Tam KW, Le NQK. Predicting EGFR Mutation Status in Non-Small Cell Lung Cancer Using Artificial Intelligence: A Systematic Review and Meta-Analysis. Acad Radiol 2024; 31:660-683. PMID: 37120403; DOI: 10.1016/j.acra.2023.03.040.
Abstract
RATIONALE AND OBJECTIVES Recent advancements in artificial intelligence (AI) hold substantial promise for predicting epidermal growth factor receptor (EGFR) mutation status in non-small cell lung cancer (NSCLC). We aimed to evaluate the performance and quality of AI algorithms that use radiomics features to predict EGFR mutation status in patients with NSCLC. MATERIALS AND METHODS We searched PubMed (Medline), EMBASE, Web of Science, and IEEE Xplore for studies published up to February 28, 2022. Studies utilizing an AI algorithm (either conventional machine learning [cML] or deep learning [DL]) to predict EGFR mutations in patients with NSCLC were included. We extracted binary diagnostic accuracy data and constructed a bivariate random-effects model to obtain pooled sensitivity, specificity, and 95% confidence intervals. This study is registered with PROSPERO, CRD42021278738. RESULTS Our search identified 460 studies, of which 42 were included; 35 entered the meta-analysis. The AI algorithms exhibited an overall area under the curve (AUC) of 0.789, with pooled sensitivity and specificity of 72.2% and 73.3%, respectively. The DL algorithms outperformed cML in AUC (0.822 vs. 0.775) and sensitivity (80.1% vs. 71.1%) but had lower specificity (70.0% vs. 73.8%, p < 0.001). Subgroup analysis revealed that the use of positron emission tomography/computed tomography, additional clinical information, deep feature extraction, and manual segmentation can improve diagnostic performance. CONCLUSION DL algorithms can serve as a novel method for increasing predictive accuracy and thus have considerable potential for predicting EGFR mutation status in patients with NSCLC. We also suggest that guidelines on using AI algorithms in medical image analysis be developed with a focus on oncologic radiomics.
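For intuition, pooling per-study accuracy estimates can be sketched with a simple fixed-effect inverse-variance average on the logit scale. This is a deliberate simplification of the bivariate random-effects model the review actually fits, and the per-study counts below are made up for illustration.

```python
import math

# Hypothetical per-study data: (true positives, false negatives) for sensitivity.
studies = [(45, 15), (30, 12), (60, 20), (25, 5)]

def pooled_logit_sensitivity(studies):
    """Fixed-effect inverse-variance pooling on the logit scale
    (a simplification of a bivariate random-effects meta-analysis)."""
    num = den = 0.0
    for tp, fn in studies:
        sens = tp / (tp + fn)
        logit = math.log(sens / (1 - sens))
        var = 1 / tp + 1 / fn           # approximate variance of the logit
        num += logit / var              # weight each study by 1/variance
        den += 1 / var
    pooled_logit = num / den
    return 1 / (1 + math.exp(-pooled_logit))  # back-transform to a proportion

sens = pooled_logit_sensitivity(studies)
print(f"pooled sensitivity: {sens:.3f}")
```

Working on the logit scale keeps the pooled estimate inside (0, 1); a random-effects version would additionally add a between-study variance term to each study's weight.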
Affiliation(s)
- Hung Song Nguyen: International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan; Department of Pediatrics, Pham Ngoc Thach University of Medicine, Ho Chi Minh City, Viet Nam; Intensive Care Unit Department, Children's Hospital 1, Ho Chi Minh City, Viet Nam
- Dang Khanh Ngan Ho: School of Nutrition and Health Sciences, College of Nutrition, Taipei Medical University, Taipei, Taiwan
- Nam Nhat Nguyen: International Ph.D. Program in Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan
- Huy Minh Tran: Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy at Ho Chi Minh City, Ho Chi Minh City, Viet Nam
- Ka-Wai Tam: Center for Evidence-based Health Care, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan; Cochrane Taiwan, Taipei Medical University, Taipei City, Taiwan; Division of General Surgery, Department of Surgery, Shuang Ho Hospital, Taipei Medical University, New Taipei City, Taiwan; Division of General Surgery, Department of Surgery, School of Medicine, College of Medicine, Taipei Medical University, Taipei City, Taiwan
- Nguyen Quoc Khanh Le: Professional Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan; Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei 110, Taiwan; AIBioMed Research Group, Taipei Medical University, Taipei 110, Taiwan; Translational Imaging Research Center, Taipei Medical University Hospital, Taipei 110, Taiwan
3. Xiao Z, Cai H, Wang Y, Cui R, Huo L, Lee EYP, Liang Y, Li X, Hu Z, Chen L, Zhang N. Deep learning for predicting epidermal growth factor receptor mutations of non-small cell lung cancer on PET/CT images. Quant Imaging Med Surg 2023; 13:1286-1299. PMID: 36915325; PMCID: PMC10006109; DOI: 10.21037/qims-22-760.
Abstract
Background Predicting the mutation status of the epidermal growth factor receptor (EGFR) gene from integrated positron emission tomography/computed tomography (PET/CT) images of non-small cell lung cancer (NSCLC) is a noninvasive, low-cost method that is valuable for targeted therapy. Although deep learning has been very successful in robotic vision, predicting gene mutations from PET/CT remains challenging because of the small amount of medical data and the varying parameters of PET/CT devices. Methods We used the advanced EfficientNet-V2 model to predict EGFR mutation status from fused PET/CT images. First, we extracted 3-dimensional (3D) pulmonary nodules from PET and CT as regions of interest (ROIs). We then fused each pair of PET and CT images, with the fusion weighted adaptively, and used the network to predict the mutation status of lung nodules from the fused data. The EfficientNet-V2 model used multiple channels to represent nodules comprehensively. Results We trained the EfficientNet-V2 model with our PET/CT fusion algorithm on a dataset of 150 patients. The prediction accuracy for EGFR versus non-EGFR mutations was 86.25% on the training dataset and 81.92% on the validation set. Conclusions Experiments demonstrated that the proposed PET/CT fusion algorithm outperformed radiomics methods in predicting EGFR and non-EGFR mutations in NSCLC.
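A minimal sketch of voxelwise PET/CT fusion: normalize the two co-registered ROI volumes to a common range, then blend them. The fixed `alpha` below is illustrative only; the paper adapts the weighting during training rather than fixing it.

```python
import numpy as np

def fuse_pet_ct(pet, ct, alpha=0.6):
    """Weighted voxelwise fusion of co-registered PET and CT patches into a
    single volume. alpha is an illustrative fixed weight; an adaptive scheme
    would learn it. Both inputs are min-max normalized first."""
    def norm(v):
        v = v.astype(float)
        return (v - v.min()) / (v.max() - v.min() + 1e-8)
    return alpha * norm(pet) + (1 - alpha) * norm(ct)

rng = np.random.default_rng(1)
pet = rng.uniform(0, 20, size=(32, 32, 32))       # toy 3D nodule ROI (SUV-like)
ct = rng.uniform(-1000, 400, size=(32, 32, 32))   # toy CT patch (HU-like)
fused = fuse_pet_ct(pet, ct)
print(fused.shape, float(fused.min()), float(fused.max()))
```

Normalizing before blending matters because PET and CT intensities live on incompatible scales (SUV vs. Hounsfield units); without it one modality would dominate the fused input.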
Affiliation(s)
- Zhenghui Xiao: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Southern University of Science and Technology, Shenzhen, China
- Haihua Cai: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yue Wang: Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Ruixue Cui: Nuclear Medicine Department, State Key Laboratory of Complex Severe and Rare Diseases, Center for Rare Diseases Research, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Li Huo: Nuclear Medicine Department, State Key Laboratory of Complex Severe and Rare Diseases, Center for Rare Diseases Research, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Elaine Yuen-Phin Lee: Department of Diagnostic Radiology, Clinical School of Medicine, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Ying Liang: Department of Nuclear Medicine, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Xiaomeng Li: Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Long Chen: Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Na Zhang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
4. Texture Analysis of Enhanced MRI and Pathological Slides Predicts EGFR Mutation Status in Breast Cancer. Biomed Res Int 2022; 2022:1376659. PMID: 35663041; PMCID: PMC9162871; DOI: 10.1155/2022/1376659.
Abstract
Objective Image texture information was extracted from enhanced magnetic resonance imaging (MRI) and pathological hematoxylin and eosin- (HE-) stained images of female breast cancer patients. We first established models from each data source individually and then combined the two kinds of data to build a joint model. Through this approach, we verified whether enhanced MRI and pathological slides provide sufficient information to assist in determining epidermal growth factor receptor (EGFR) mutation status. Methods We obtained enhanced MRI data from patients with breast cancer before treatment and selected diffusion-weighted imaging (DWI), T1 fast-spin echo (T1 FSE), and T2 fast-spin echo (T2 FSE) as the data sources for texture extraction. Imaging physicians manually outlined the 3D regions of interest (ROIs), and texture features were extracted from the gray-level co-occurrence matrix (GLCM) of the images. For the HE-stained images, we adopted a normalization algorithm to simulate images dyed with only hematoxylin or only eosin and extracted textures from these. The extracted texture features were used to predict EGFR expression. After evaluating the predictive power of each model, the models from the two data sources were combined for remodeling. Results For enhanced MRI data, a model built on the texture information of T1 FSE predicted EGFR mutation status well. For pathological images, eosin-stained images achieved the better prediction. We selected these two classifiers as the weak classifiers of the final model and obtained good results (training group: AUC, 0.983; 95% CI, 0.95-1.00; accuracy, 0.962; specificity, 0.936; sensitivity, 0.979; test group: AUC, 0.983; 95% CI, 0.94-1.00; accuracy, 0.943; specificity, 1.00; sensitivity, 0.905). Conclusion The EGFR mutation status of patients with breast cancer can be predicted well from enhanced MRI and pathological data. This can help hospitals that do not test EGFR mutation status in patients with breast cancer. The technique gives clinicians more information about breast cancer, helping them make accurate diagnoses and select suitable treatments.
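The GLCM texture step can be sketched in a few lines: count co-occurring gray-level pairs at a fixed pixel offset, normalize the counts to probabilities, and derive Haralick-style statistics. This minimal 2D version (contrast and energy only, one offset) is illustrative; the study extracted fuller feature sets from 3D ROIs.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Build a gray-level co-occurrence matrix for one pixel offset (dx, dy)
    and derive two classic texture features: contrast and energy."""
    # quantize the image to `levels` gray levels
    q = np.floor(img / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1   # count co-occurring pair
    glcm /= glcm.sum()                              # normalize to probabilities
    idx = np.arange(levels)
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    contrast = float(np.sum(glcm * (ii - jj) ** 2))  # local intensity variation
    energy = float(np.sum(glcm ** 2))                # textural uniformity
    return contrast, energy

rng = np.random.default_rng(2)
roi = rng.uniform(0, 255, size=(64, 64))   # toy ROI patch
contrast, energy = glcm_features(roi)
print(f"contrast={contrast:.3f} energy={energy:.4f}")
```

In practice one would use an optimized implementation such as scikit-image's `graycomatrix`, and average features over several offsets and directions to reduce orientation sensitivity.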
5. Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. PMID: 35418051; PMCID: PMC9007400; DOI: 10.1186/s12880-022-00793-7.
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge from similar tasks learned in advance. It has made a major contribution to medical image analysis because it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review attempts to provide guidance for selecting a model and TL approach for medical image classification tasks. METHODS 425 peer-reviewed articles published in English up to December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g., ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
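The four TL configurations the review distinguishes differ mainly in which parameter groups are updated on the target task. The toy mapping below makes that explicit; the layer names are illustrative and not tied to any particular backbone.

```python
# Illustrative layer groups of a pretrained CNN backbone plus a task head.
LAYERS = ["conv_early", "conv_late", "classifier_head"]

def trainable_layers(approach):
    """Map each TL approach from the review to the parts that are trained."""
    if approach == "feature_extractor":
        # backbone frozen; only a newly attached head is trained
        return ["classifier_head"]
    if approach == "feature_extractor_hybrid":
        # frozen backbone feeds a separate classical classifier (e.g. an SVM)
        return ["external_classifier"]
    if approach == "fine_tuning":
        # later, task-specific layers adapt; early generic filters stay frozen
        return ["conv_late", "classifier_head"]
    if approach == "fine_tuning_from_scratch":
        # pretrained weights serve only as initialization; everything updates
        return list(LAYERS)
    raise ValueError(f"unknown TL approach: {approach}")

for a in ["feature_extractor", "fine_tuning", "fine_tuning_from_scratch"]:
    print(a, "->", trainable_layers(a))
```

The practical trade-off: the fewer layers you unfreeze, the less labeled target data and compute you need, but the less the representation can adapt to the new domain.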
Affiliation(s)
- Hee E Kim: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Alejandro Cosa-Linan: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Nandhini Santhanam: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Mahboubeh Jannesari: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Mate E Maros: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Thomas Ganslandt: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058 Erlangen, Germany
6. Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022; 12:480. PMID: 35330479; PMCID: PMC8950137; DOI: 10.3390/jpm12030480.
Abstract
Advancements in the development of computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Considering the high incidence and mortality associated with lung cancer, the most accurate clinical procedures are needed; thus, using artificial intelligence (AI) tools for decision support is becoming a closer reality. At every stage of the lung cancer clinical pathway, specific obstacles can be identified that motivate the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although we focus on lung cancer, we also offer a clearer definition of the path used to integrate AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Affiliation(s)
- Francisco Silva: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
- Tania Pereira: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Inês Neves: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
- Joana Morgado: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Cláudia Freitas: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Mafalda Malafaia: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Joana Sousa: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- João Fonseca: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Eduardo Negrão: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Beatriz Flor de Lima: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Miguel Correia da Silva: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- António J. Madureira: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Isabel Ramos: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- José Luis Costa: FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal; i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal; IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
- Venceslau Hespanhol: CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- António Cunha: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
- Hélder P. Oliveira: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
7. Murugesan M, Kaliannan K, Balraj S, Singaram K, Kaliannan T, Albert JR. A Hybrid deep learning model for effective segmentation and classification of lung nodules from CT images. J Intell Fuzzy Syst 2022. DOI: 10.3233/jifs-212189.
Abstract
Deep learning algorithms can detect lung nodule anomalies at an early stage. The primary goal of this effort is to identify lung cancer accurately, which is critical to preserving a person's life. Lung cancer has been a source of concern worldwide for decades, and several researchers have presented issues and solutions for the various stages of computer-aided systems for diagnosing lung cancer early. Computer vision, a field of artificial intelligence, offers an effective way to detect lung cancer. This study focuses on the stages involved in detecting lung tumor regions: pre-processing, segmentation, and classification. An adaptive median filter is used in pre-processing to remove noise. The novelty of the work lies in a simple yet effective model for rapid identification and U-Net-based segmentation of lung nodules. The approach identifies and segments lung cancer by distinguishing normal from abnormal regions in the images.
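A simplified version of the pre-processing step can be sketched as follows: a pixel is replaced by its neighborhood median only when it sits at the window's extremes, the classic impulse-noise test. This is an illustrative reduction of a true adaptive median filter (which also grows the window when needed), not the authors' implementation.

```python
import numpy as np

def simple_adaptive_median(img, size=3):
    """Replace a pixel with its neighborhood median only when it equals the
    local min or max (likely salt-and-pepper noise); clean pixels are kept."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + size, j:j + size]   # window centered on (i, j)
            if img[i, j] == win.min() or img[i, j] == win.max():
                out[i, j] = np.median(win)
    return out

# toy CT-like slice corrupted with 5% salt-and-pepper noise
rng = np.random.default_rng(3)
clean = np.full((32, 32), 100.0)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())
denoised = simple_adaptive_median(noisy)
err_before = np.abs(noisy - clean).mean()
err_after = np.abs(denoised - clean).mean()
print(f"mean abs error: {err_before:.2f} -> {err_after:.2f}")
```

Unlike a plain median filter, this conditional variant leaves uncorrupted pixels untouched, which preserves nodule edges that matter for the downstream segmentation.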
Affiliation(s)
- Malathi Murugesan: Department of ECE, Vivekanandha College of Engineering for Women (Autonomous), Namakkal, Tamilnadu, India
- Kalaiselvi Kaliannan: Department of Networking and Communications, SRM Institute of Science and Technology, Kattankulathur, Kanchipuram Dt, Tamil Nadu
- Shankarlal Balraj: Department of ECE, Perunthalaivar Kamarajar Institute of Engineering and Technology, Karaikal, Puducherry, India
- Kokila Singaram: Department of ECE, Vivekanandha College of Engineering for Women (Autonomous), Tiruchengode, Namakkal
- Thenmalar Kaliannan: Department of EEE, Vivekanandha College of Engineering for Women (Autonomous), Elayampalayam, Namakkal
- Johny Renoald Albert: Department of EEE, Vivekanandha College of Engineering for Women (Autonomous), Elayampalayam, Namakkal
8.

9. Gui D, Song Q, Song B, Li H, Wang M, Min X, Li A. AIR-Net: A novel multi-task learning method with auxiliary image reconstruction for predicting EGFR mutation status on CT images of NSCLC patients. Comput Biol Med 2021; 141:105157. PMID: 34953355; DOI: 10.1016/j.compbiomed.2021.105157.
Abstract
Automated and accurate EGFR mutation status prediction from computed tomography (CT) imagery is of great value for tailoring optimal treatments for non-small cell lung cancer (NSCLC) patients. However, existing deep learning methods usually adopt a single-task learning strategy to design and train EGFR mutation status prediction models with limited training data, which may be insufficient to learn distinguishable representations that promote prediction performance. In this paper, a novel multi-task learning method named AIR-Net is proposed to precisely predict EGFR mutation status from CT images. First, an auxiliary image reconstruction task is integrated with EGFR mutation status prediction, providing extra supervision during training; in particular, multi-level information in a shared encoder is employed to generate more comprehensive representations of tumors. Second, a feature consistency loss is introduced to constrain the semantic consistency of original and reconstructed images, which contributes to enhanced image reconstruction and offers more effective regularization to AIR-Net during training. Performance analysis indicates that auxiliary image reconstruction plays an essential role in identifying EGFR mutation status. Furthermore, extensive experimental results demonstrate that our method achieves favorable performance against other competitive prediction methods. All results in this study suggest the effectiveness and superiority of AIR-Net in precisely predicting the EGFR mutation status of NSCLC.
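The objective described above combines three terms. The sketch below shows the general shape of such a multi-task loss (classification + auxiliary reconstruction + feature consistency); the weights `lam` and `mu` and all tensor shapes are made up for illustration, not taken from the paper.

```python
import numpy as np

def multitask_loss(cls_pred, cls_true, img_recon, img_true,
                   feat_orig, feat_recon, lam=1.0, mu=0.1):
    """AIR-Net-style objective: classification loss, plus an auxiliary
    image-reconstruction loss, plus a consistency term between features of
    the original and reconstructed images. lam and mu are illustrative."""
    eps = 1e-8
    # binary cross-entropy for EGFR mutation status
    l_cls = -np.mean(cls_true * np.log(cls_pred + eps)
                     + (1 - cls_true) * np.log(1 - cls_pred + eps))
    l_rec = np.mean((img_recon - img_true) ** 2)     # reconstruction (MSE)
    l_feat = np.mean((feat_orig - feat_recon) ** 2)  # feature consistency
    return l_cls + lam * l_rec + mu * l_feat

rng = np.random.default_rng(4)
loss = multitask_loss(
    cls_pred=rng.uniform(0.01, 0.99, size=8),        # toy batch of 8
    cls_true=rng.integers(0, 2, size=8).astype(float),
    img_recon=rng.normal(size=(8, 16, 16)),
    img_true=rng.normal(size=(8, 16, 16)),
    feat_orig=rng.normal(size=(8, 32)),
    feat_recon=rng.normal(size=(8, 32)),
)
print(f"total loss: {loss:.3f}")
```

The auxiliary terms act as regularizers: with scarce labeled CT data, forcing the shared encoder to also support reconstruction discourages it from overfitting to spurious classification shortcuts.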
Affiliation(s)
- Dongqi Gui: School of Information Science and Technology, University of Science and Technology of China, Hefei, 230027, China
- Qilong Song: Department of Radiology, Anhui Chest Hospital, Hefei, 230022, China
- Biao Song: Department of Radiology, Anhui Chest Hospital, Hefei, 230022, China
- Haichun Li: School of Information Science and Technology, University of Science and Technology of China, Hefei, 230027, China
- Minghui Wang: School of Information Science and Technology, University of Science and Technology of China, Hefei, 230027, China
- Xuhong Min: Department of Radiology, Anhui Chest Hospital, Hefei, 230022, China
- Ao Li: School of Information Science and Technology, University of Science and Technology of China, Hefei, 230027, China
10. Zhang T, Wang Y, Sun Y, Yuan M, Zhong Y, Li H, Yu T, Wang J. High-resolution CT image analysis based on 3D convolutional neural network can enhance the classification performance of radiologists in classifying pulmonary non-solid nodules. Eur J Radiol 2021; 141:109810. PMID: 34102564; DOI: 10.1016/j.ejrad.2021.109810.
Abstract
OBJECTIVE To investigate whether a 3D convolutional neural network (CNN) can enhance the performance of radiologists in classifying pulmonary non-solid nodules (NSNs). MATERIALS AND METHODS Data of patients with solitary NSNs pathologically diagnosed as adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), or invasive adenocarcinoma (IAC) after surgical resection were analyzed retrospectively. Ultimately, 532 patients from our institution were included: 427 cases (144 AIS, 167 MIA, 116 IAC) were assigned to the training dataset and 105 cases (36 AIS, 41 MIA, 28 IAC) to the validation dataset. For external validation, 177 patients (60 AIS, 69 MIA, 48 IAC) from another hospital were assigned to the testing dataset. The clinical and morphological characteristics of the NSNs were used to establish the radiologists' model, and a trained 3D CNN classification model was used to identify NSN types automatically. The classification performance of the two models and of a combined CNN + radiologists' model was evaluated and compared via receiver operating characteristic (ROC) analysis and the integrated discrimination improvement (IDI) index; the Akaike information criterion (AIC) was calculated to find the best-fitting model. RESULTS In the external testing dataset, the radiologists' model showed inferior classification performance compared with the CNN model, both in discriminating AIS from MIA-IAC and AIS-MIA from IAC (area under the ROC curve (Az), 0.693 vs 0.820, P = 0.011; 0.746 vs 0.833, P = 0.026, respectively). However, combining the CNN with the radiologists significantly enhanced their classification performance and yielded higher Az values than the CNN model alone (0.893 vs 0.820, P < 0.001; 0.906 vs 0.833, P < 0.001, respectively). The IDI index further confirmed the CNN's contribution to radiologists in classifying NSNs (IDI = 25.8 % (18.3-46.1 %), P < 0.001; IDI = 30.1 % (26.1-45.2 %), P < 0.001, respectively).
The CNN + radiologists' model also provided the best fit, outperforming both the radiologists' model and the CNN model alone (AIC values 63.3 % vs. 29.5 % and 49.5 %, P < 0.001; 69.2 % vs. 34.9 % and 53.6 %, P < 0.001, respectively). CONCLUSION The CNN successfully classified NSNs on CT images, and its classification performance was superior to the radiologists' model; moreover, the classification performance of radiologists was significantly enhanced when combined with the CNN.
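The study's headline comparisons rest on the Az value, i.e. the area under the ROC curve, for two binary splits (AIS vs. MIA-IAC and AIS-MIA vs. IAC). As a minimal sketch of how such an Az value is computed, here is the rank-based Mann-Whitney formulation in plain NumPy, run on toy scores rather than the study's data (`model_a` and `model_b` are illustrative stand-ins, not the paper's models):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-based Mann-Whitney
    U statistic; tied scores receive their average rank."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    n_pos = labels.sum()
    n_neg = labels.size - n_pos
    # 1-based ranks of the scores
    order = np.argsort(scores)
    ranks = np.empty(scores.size)
    ranks[order] = np.arange(1, scores.size + 1)
    # average the ranks of tied scores
    for s in np.unique(scores):
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: 1 = invasive (IAC), 0 = pre/minimally invasive (AIS-MIA)
y = [0, 0, 0, 1, 1, 1, 0, 1]
model_a = [0.1, 0.6, 0.35, 0.8, 0.4, 0.9, 0.5, 0.7]    # weaker classifier
model_b = [0.2, 0.3, 0.25, 0.85, 0.7, 0.95, 0.4, 0.75]  # stronger classifier
print(auc(y, model_a), auc(y, model_b))  # 0.875 vs 1.0 on this toy data
```

A higher Az for the combined model, as reported above, means its scores rank the invasive cases above the non-invasive ones more consistently.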
Affiliation(s)
- Teng Zhang, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Yida Wang, Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China.
- Yingli Sun, Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, 200040, China.
- Mei Yuan, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Yan Zhong, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Hai Li, Department of Pathology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Tongfu Yu, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
- Jie Wang, Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, China.
11
|
Dong Y, Hou L, Yang W, Han J, Wang J, Qiang Y, Zhao J, Hou J, Song K, Ma Y, Kazihise NGF, Cui Y, Yang X. Multi-channel multi-task deep learning for predicting EGFR and KRAS mutations of non-small cell lung cancer on CT images. Quant Imaging Med Surg 2021; 11:2354-2375. [PMID: 34079707 PMCID: PMC8107307 DOI: 10.21037/qims-20-600] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 01/27/2021] [Indexed: 12/17/2022]
Abstract
BACKGROUND Predicting the mutation statuses of 2 essential pathogenic genes [epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma (KRAS)] in non-small cell lung cancer (NSCLC) from CT is valuable for targeted therapy because it is non-invasive and less costly. Although deep learning technology has achieved substantial results in computer vision, using CT imaging to predict gene mutations remains challenging due to small-dataset limitations. METHODS We propose a multi-channel, multi-task deep learning (MMDL) model for the simultaneous prediction of EGFR and KRAS mutation statuses from CT images. First, we decomposed each 3D lung nodule into 9 views. Then, for each view, a pre-trained inception-attention-resnet model learned the features of the nodule. The 9 inception-attention-resnet models were adaptively weighted and combined to predict the gene mutation types of the lung nodules, so the proposed MMDL model could be trained end-to-end. The MMDL model utilized multiple channels to characterize each nodule more comprehensively and integrated patients' personal information into the learning process. RESULTS We trained the proposed MMDL model on a dataset of 363 patients collected by our partner hospital and conducted a multi-center validation on 162 patients from The Cancer Imaging Archive (TCIA) public dataset. The accuracies for predicting EGFR and KRAS mutations were 79.43% and 72.25%, respectively, in the training dataset and 75.06% and 69.64% in the validation dataset. CONCLUSIONS The experimental results demonstrate that the proposed MMDL model outperforms the latest methods in predicting EGFR and KRAS mutations in NSCLC.
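The fusion step this abstract describes, 9 per-view networks combined with adaptively learned weights, can be sketched in isolation. Below, random logits stand in for the inception-attention-resnet outputs, and the names `view_logits` and `fuse_weights` are illustrative, not from the paper; in the actual MMDL model the weights would be trained end-to-end rather than fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Stand-ins for the 9 view-level networks' outputs:
# batch of 4 nodules, 9 views, 2 classes (mutant / wild-type)
view_logits = rng.normal(size=(4, 9, 2))

# Per-view fusion weights; softmax keeps them positive and summing to 1
fuse_weights = softmax(rng.normal(size=9))

# Weighted combination across views, then class probabilities per nodule
fused = np.einsum("v,bvc->bc", fuse_weights, view_logits)
probs = softmax(fused, axis=-1)
print(probs.shape)        # (4, 2)
print(probs.sum(axis=1))  # each row sums to 1
```

Because the weighting is differentiable, gradients flow through `fuse_weights` back into every view branch, which is what makes end-to-end training of such an ensemble possible.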
Affiliation(s)
- Yunyun Dong, School of Software and School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Lina Hou, Department of Radiology, Shanxi Province Cancer Hospital, Taiyuan, China
- Wenkai Yang, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jiahao Han, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jiawen Wang, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yan Qiang, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Juanjuan Zhao, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Jiaxin Hou, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Kai Song, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yulan Ma, School of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yanfen Cui, Department of Radiology, Shanxi Province Cancer Hospital, Taiyuan, China
- Xiaotang Yang, Department of Radiology, Shanxi Province Cancer Hospital, Taiyuan, China
|