1. Martis JE, M S S, R B, Mutawa AM, Murugappan M. Novel Hybrid Quantum Architecture-Based Lung Cancer Detection Using Chest Radiograph and Computerized Tomography Images. Bioengineering (Basel) 2024;11:799. [PMID: 39199758; PMCID: PMC11351577; DOI: 10.3390/bioengineering11080799]
Abstract
Lung cancer, the second most common type of cancer worldwide, presents significant health challenges. Detecting this disease early is essential for improving patient outcomes and simplifying treatment. In this study, we propose a hybrid framework that combines deep learning (DL) with quantum computing to enhance the accuracy of lung cancer detection using chest radiographs (CXR) and computerized tomography (CT) images. Our system utilizes pre-trained models for feature extraction and quantum circuits for classification, achieving state-of-the-art performance in various metrics. Not only does our system achieve an overall accuracy of 92.12%, it also excels in other crucial performance measures, such as sensitivity (94%), specificity (90%), F1-score (93%), and precision (92%). These results demonstrate that our hybrid approach can more accurately identify lung cancer signatures compared to traditional methods. Moreover, the incorporation of quantum computing enhances processing speed and scalability, making our system a promising tool for early lung cancer screening and diagnosis. By leveraging the strengths of quantum computing, our approach surpasses traditional methods in terms of speed, accuracy, and efficiency. This study highlights the potential of hybrid computational technologies to transform early cancer detection, paving the way for wider clinical applications and improved patient care outcomes.
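The quantum classification stage described above can be illustrated with a minimal NumPy statevector sketch of a two-qubit variational circuit: two pooled CNN features are angle-encoded as rotation angles, one trainable rotation-plus-CNOT layer is applied, and the expectation of Pauli-Z on the first qubit serves as the class score. The circuit layout, feature values, weights, and decision threshold here are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT on 2 qubits (control = qubit 0, target = qubit 1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def quantum_score(features, weights):
    """Angle-encode two pooled CNN features, apply one variational
    layer (RY rotations + CNOT), and return <Z> on qubit 0 in [-1, 1]."""
    state = np.zeros(4)
    state[0] = 1.0                                        # start in |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state  # encoding layer
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state    # trainable layer
    state = CNOT @ state
    probs = np.abs(state) ** 2
    # <Z on qubit 0> = P(qubit0 = 0) - P(qubit0 = 1)
    return (probs[0] + probs[1]) - (probs[2] + probs[3])

score = quantum_score(np.array([0.3, 1.2]), np.array([0.5, -0.4]))
label = "cancer" if score < 0.0 else "normal"  # hypothetical threshold at 0
```

In a trained system the weights would be optimized by gradient descent alongside (or after) the CNN feature extractor; here they are fixed placeholders.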
Affiliation(s)
- Jason Elroy Martis
  - Department of ISE, NMAM Institute of Technology, Nitte Deemed to be University, Udupi 574110, Karnataka, India
- Sannidhan M S
  - Department of CSE, NMAM Institute of Technology, Nitte Deemed to be University, Udupi 574110, Karnataka, India
- Balasubramani R
  - Department of ISE, NMAM Institute of Technology, Nitte Deemed to be University, Udupi 574110, Karnataka, India
- A. M. Mutawa
  - Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, Safat 13060, Kuwait
  - Computer Sciences Department, University of Hamburg, 22527 Hamburg, Germany
- M. Murugappan
  - Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha 13133, Kuwait
  - Department of Electronics and Communication Engineering, School of Engineering, Vels Institute of Sciences, Technology, and Advanced Studies, Chennai 600117, Tamil Nadu, India
  - Center of Excellence for Unmanned Aerial Systems (CoEUAS), Universiti Malaysia Perlis, Arau 02600, Malaysia
2. Peng J, Xu Z, Dan H, Li J, Wang J, Luo X, Xu H, Zeng X, Chen Q. Oral epithelial dysplasia detection and grading in oral leukoplakia using deep learning. BMC Oral Health 2024;24:434. [PMID: 38594651; PMCID: PMC11005210; DOI: 10.1186/s12903-024-04191-z]
Abstract
BACKGROUND The grading of oral epithelial dysplasia is often time-consuming for oral pathologists, and the results are poorly reproducible between observers. In this study, we aimed to establish an objective, accurate and useful detection and grading system for oral epithelial dysplasia in whole slides of oral leukoplakia. METHODS Four convolutional neural networks were compared using image patches from 56 whole slides of oral leukoplakia labeled by pathologists as the gold standard. Subsequently, feature detection models were trained, validated and tested with 1,000 image patches using the optimal network. Lastly, a comprehensive system named E-MOD-plus was established by combining the feature detection models with a multiclass logistic model. RESULTS EfficientNet-B0 was selected as the optimal network to build the feature detection models. In the internal dataset of whole-slide images, the prediction accuracy of E-MOD-plus was 81.3% (95% confidence interval: 71.4-90.5%) and the area under the receiver operating characteristic curve was 0.793 (95% confidence interval: 0.650-0.925); in the external dataset of 229 tissue microarray images, the prediction accuracy was 86.5% (95% confidence interval: 82.4-90.0%) and the area under the receiver operating characteristic curve was 0.669 (95% confidence interval: 0.496-0.843). CONCLUSIONS E-MOD-plus was objective and accurate in the detection of pathological features as well as in the grading of oral epithelial dysplasia, and has potential to assist pathologists in clinical practice.
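The final grading step, combining slide-level feature scores through a multiclass logistic model, can be sketched as follows. The four feature scores, the weight matrix, the bias, and the grade labels are hypothetical placeholders for illustration, not values from E-MOD-plus.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical slide-level scores for four dysplasia-related features,
# as would be produced by the (EfficientNet-B0-style) feature detectors.
feature_scores = np.array([0.82, 0.10, 0.55, 0.30])

# Hypothetical trained weights and bias of a multiclass logistic model
# mapping feature scores to grades.
W = np.array([[-2.0,  1.0, -0.5,  0.2],
              [ 0.5, -1.0,  1.2, -0.3],
              [ 1.0,  0.4, -0.8,  0.9],
              [ 1.5, -0.2,  0.3,  1.1]])
b = np.array([0.1, 0.0, -0.1, 0.0])

grade_probs = softmax(W @ feature_scores + b)     # one probability per grade
grades = ["no dysplasia", "mild", "moderate", "severe"]
predicted = grades[int(np.argmax(grade_probs))]
```

In practice the weights would be fit by maximum likelihood on the labeled training slides; the sketch only shows the inference path from feature scores to a grade.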
Affiliation(s)
- Jiakuan Peng
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
  - Department of Stomatology, North Sichuan Medical College, Nanchong, Sichuan, China
- Ziang Xu
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Hongxia Dan
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Jing Li
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Jiongke Wang
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Xiaobo Luo
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Hao Xu
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Xin Zeng
  - State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Qianming Chen
  - Key Laboratory of Oral Biomedical Research of Zhejiang Province, Affiliated Stomatology Hospital, Zhejiang University School of Stomatology, Hangzhou, Zhejiang, China
3. Quanyang W, Yao H, Sicong W, Linlin Q, Zewei Z, Donghui H, Hongjia L, Shijun Z. Artificial intelligence in lung cancer screening: Detection, classification, prediction, and prognosis. Cancer Med 2024;13:e7140. [PMID: 38581113; PMCID: PMC10997848; DOI: 10.1002/cam4.7140]
Abstract
BACKGROUND The exceptional capabilities of artificial intelligence (AI) in extracting image information and processing complex models have led to its recognition across various medical fields. With the continuous evolution of AI technologies based on deep learning, particularly the advent of convolutional neural networks (CNNs), AI presents an expanded horizon of applications in lung cancer screening, including lung segmentation, nodule detection, false-positive reduction, nodule classification, and prognosis. METHODOLOGY This review initially analyzes the current status of AI technologies. It then explores the applications of AI in lung cancer screening, including lung segmentation, nodule detection, and classification, and assesses the potential of AI in enhancing the sensitivity of nodule detection and reducing false-positive rates. Finally, it addresses the challenges and future directions of AI in lung cancer screening. RESULTS AI holds substantial prospects in lung cancer screening. It demonstrates significant potential in improving nodule detection sensitivity, reducing false-positive rates, and classifying nodules, while also showing value in predicting nodule growth and pathological/genetic typing. CONCLUSIONS AI offers a promising supportive approach to lung cancer screening, presenting considerable potential in enhancing nodule detection sensitivity, reducing false-positive rates, and classifying nodules. However, the universality and interpretability of AI results need further enhancement. Future research should focus on the large-scale validation of new deep learning-based algorithms and multi-center studies to improve the efficacy of AI in lung cancer screening.
Affiliation(s)
- Wu Quanyang
  - Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huang Yao
  - Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wang Sicong
  - Magnetic Resonance Imaging Research, General Electric Healthcare (China), Beijing, China
- Qi Linlin
  - Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhang Zewei
  - PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hou Donghui
  - Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Li Hongjia
  - PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhao Shijun
  - Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
4. Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024;169:107777. [PMID: 38104516; DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis and in medical image retrieval and mining. Medical image data mainly include electronic health record data and gene information data. Although intelligent imaging offers a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. This paper analyzes and summarizes the concepts behind the relevant methods, including machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images. We reviewed recent studies to provide a comprehensive overview of how these methods are applied to various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasized the latest progress and contributions of the different methods, summarized by application scenario: classification, segmentation, detection, and image registration. The applications are also summarized by anatomical and clinical area, such as the brain, lung, skin, kidney, breast, vertebrae, musculoskeletal system, digital pathology, and neuromyelitis. A critical discussion of open challenges and directions for future research concludes the review; notably, strong algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
  - School of Information Engineering, Wuhan Business University, Wuhan 430056, China
  - School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
  - Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Pan Jiang
  - School of Information Engineering, Wuhan Business University, Wuhan 430056, China
- Qing An
  - School of Artificial Intelligence, Wuchang University of Technology, Wuhan 430223, China
- Gai-Ge Wang
  - School of Computer Science and Technology, Ocean University of China, Qingdao 266100, China
- Hua-Feng Kong
  - School of Information Engineering, Wuhan Business University, Wuhan 430056, China
5. Rikhari H, Baidya Kayal E, Ganguly S, Sasi A, Sharma S, Dheeksha DS, Saini M, Rangarajan K, Bakhshi S, Kandasamy D, Mehndiratta A. Fully automatic deep learning-based lung parenchyma segmentation and boundary correction in thoracic CT scans. Int J Comput Assist Radiol Surg 2024;19:261-272. [PMID: 37594684; DOI: 10.1007/s11548-023-03010-0]
Abstract
PURPOSE The proposed work aims to develop an algorithm to precisely segment the lung parenchyma in thoracic CT scans. To achieve this goal, the proposed technique combined deep learning with traditional image processing algorithms: a trained convolutional neural network (CNN) first generates preliminary lung masks, followed by the proposed post-processing algorithm for lung boundary correction. METHODS First, the proposed method trained an improved 2D U-Net CNN model with Inception-ResNet-v2 as its backbone. The model was trained on 32 CT scans from two different sources: one from the VESSEL12 grand challenge and the other from AIIMS Delhi. The model's performance was then evaluated on a test dataset of 16 CT scans with juxta-pleural nodules obtained from AIIMS Delhi and the LUNA16 challenge, using the average volumetric Dice coefficient (DSCavg), average IoU score (IoUavg), and average F1 score (F1avg). Finally, the proposed post-processing algorithm was implemented to eliminate false positives from the model's prediction and to include juxta-pleural nodules in the final lung masks. RESULTS The trained model reported a DSCavg of 0.9791 ± 0.008, an IoUavg of 0.9624 ± 0.007, and an F1avg of 0.9792 ± 0.004 on the test dataset. Applying the post-processing algorithm to the predicted lung masks yielded a DSCavg of 0.9713 ± 0.007, an IoUavg of 0.9486 ± 0.007, and an F1avg of 0.9701 ± 0.008, and successfully included juxta-pleural nodules in the final lung mask. CONCLUSIONS Using a CNN model, the proposed method produced precise lung parenchyma segmentations, while the post-processing algorithm addressed false positives and negatives in the model's predictions. Overall, the proposed approach demonstrated promising results and has the potential to be valuable in the advancement of computer-aided diagnosis (CAD) systems for automatic nodule detection.
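The reported segmentation metrics can be computed directly from binary masks, as in the minimal sketch below (the toy 4×4 masks are illustrative). Note that for binary masks the voxel-wise F1 score coincides with the Dice coefficient, which is why DSCavg and F1avg track each other so closely.

```python
import numpy as np

def dice(pred, gt):
    """Volumetric Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy 4x4 "lung masks": the prediction misses one ground-truth voxel
# and adds one spurious voxel.
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]], dtype=bool)
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

print(dice(pred, gt), iou(pred, gt))  # Dice = 5/6, IoU = 5/7
```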
Affiliation(s)
- Himanshu Rikhari
  - Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Esha Baidya Kayal
  - Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Shuvadeep Ganguly
  - Medical Oncology, Dr. B.R.A. IRCH, All India Institute of Medical Sciences, New Delhi, India
- Archana Sasi
  - Medical Oncology, Dr. B.R.A. IRCH, All India Institute of Medical Sciences, New Delhi, India
- Swetambri Sharma
  - Medical Oncology, Dr. B.R.A. IRCH, All India Institute of Medical Sciences, New Delhi, India
- D S Dheeksha
  - Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Manish Saini
  - Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
  - Radiodiagnosis, Dr. B.R.A. IRCH, All India Institute of Medical Sciences, New Delhi, India
- Sameer Bakhshi
  - Medical Oncology, Dr. B.R.A. IRCH, All India Institute of Medical Sciences, New Delhi, India
- Amit Mehndiratta
  - Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
  - Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India
6. Liang H, Hu M, Ma Y, Yang L, Chen J, Lou L, Chen C, Xiao Y. Performance of Deep-Learning Solutions on Lung Nodule Malignancy Classification: A Systematic Review. Life (Basel) 2023;13:1911. [PMID: 37763314; PMCID: PMC10532719; DOI: 10.3390/life13091911]
Abstract
OBJECTIVE For several years, computer technology has been utilized to diagnose lung nodules. Compared with traditional machine learning methods for image processing, deep-learning methods can improve the accuracy of lung nodule diagnosis by avoiding laborious image pre-processing steps (hand-engineered feature extraction, etc.). Our goal is to investigate how well deep-learning approaches classify lung nodule malignancy. METHOD We evaluated the performance of deep-learning methods on lung nodule malignancy classification via a systematic literature search. We searched the PubMed and ISI Web of Science databases and chose articles that employed deep learning to classify or predict lung nodule malignancy. Figures were plotted using SAS version 9.4, and data were extracted using Microsoft Excel 2010. RESULTS Sixteen studies that met the criteria were included. The included articles classified or predicted pulmonary nodule malignancy using convolutional neural networks (CNN), autoencoders (AE), and deep belief networks (DBN). The AUC of the deep-learning models in these articles was typically greater than 90%, demonstrating that deep learning performs well in the diagnosis and prediction of lung nodules. CONCLUSION This review is a thorough analysis of the most recent advancements in deep-learning technologies for lung nodules. Image processing techniques, traditional machine learning techniques, deep-learning techniques, and other techniques have all been applied to pulmonary nodule diagnosis. Although deep-learning models have demonstrated distinct advantages in the detection of pulmonary nodules, they also carry significant drawbacks that warrant additional research.
Affiliation(s)
- Hailun Liang
  - School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Meili Hu
  - Department of Gynecology, Baoding Maternal and Child Health Care Hospital, Baoding 071000, China
- Yuxin Ma
  - School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Lei Yang
  - Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Beijing Office for Cancer Prevention and Control, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Jie Chen
  - School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Liwei Lou
  - School of Statistics, Renmin University of China, Beijing 100872, China
- Chen Chen
  - School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Yuan Xiao
  - Blockchain Research Institute, Renmin University of China, Beijing 100872, China
7. Baidya Kayal E, Ganguly S, Sasi A, Sharma S, DS D, Saini M, Rangarajan K, Kandasamy D, Bakhshi S, Mehndiratta A. A proposed methodology for detecting the malignant potential of pulmonary nodules in sarcoma using computed tomographic imaging and artificial intelligence-based models. Front Oncol 2023;13:1212526. [PMID: 37671060; PMCID: PMC10476362; DOI: 10.3389/fonc.2023.1212526]
Abstract
The presence of lung metastases in patients with primary malignancies is an important criterion for treatment management and prognostication. Computed tomography (CT) of the chest is the preferred method to detect lung metastasis. However, CT has limited efficacy in differentiating metastatic nodules from benign nodules (e.g., granulomas due to tuberculosis), especially at early stages (<5 mm). There is also significant subjectivity in making this distinction, leading to frequent CT follow-ups and additional radiation exposure, along with financial and emotional burden to the patients and family. Even 18F-fluoro-deoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) is not always confirmatory for this clinical problem. While pathological biopsy is the gold standard to demonstrate malignancy, invasive sampling of small lung nodules is often not clinically feasible. Currently, there is no non-invasive imaging technique that can reliably characterize lung metastases. The lung is one of the favored sites of metastasis in sarcomas. Hence, patients with sarcomas, especially from tuberculosis-prevalent developing countries, provide an ideal platform on which to develop a model to differentiate lung metastases from benign nodules. To overcome the limited specificity of CT in detecting pulmonary metastasis, a novel artificial intelligence (AI)-based protocol is proposed that utilizes a combination of radiological and clinical biomarkers to identify lung nodules and characterize them as benign or metastatic. This protocol includes a retrospective cohort of nearly 2,000-2,250 sample nodules (from at least 450 patients) for training and testing and an ambispective cohort of nearly 500 nodules (from 100 patients; 50 patients each from the retrospective and prospective cohorts) for validation. Ground-truth annotation of lung nodules will be performed using an in-house-built segmentation tool, and ground-truth labeling of lung nodules (metastatic/benign) will be based on histopathological results or baseline and/or follow-up radiological findings along with the clinical outcome of the patient. Optimal methods for data handling and statistical analysis are included to develop a robust protocol for early detection and classification of pulmonary metastasis at baseline and at follow-up, and for identification of associated potential clinical and radiological markers.
Affiliation(s)
- Esha Baidya Kayal
  - Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Shuvadeep Ganguly
  - Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, India
- Archana Sasi
  - Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, India
- Swetambri Sharma
  - Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, India
- Dheeksha DS
  - Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Manish Saini
  - Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
  - Radiodiagnosis, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, India
- Sameer Bakhshi
  - Medical Oncology, Dr. B.R. Ambedkar Institute Rotary Cancer Hospital, All India Institute of Medical Sciences, New Delhi, India
- Amit Mehndiratta
  - Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
  - Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India
8. Tong Y, Udupa JK, Chong E, Winchell N, Sun C, Zou Y, Schuster SJ, Torigian DA. Prediction of lymphoma response to CAR T cells by deep learning-based image analysis. PLoS One 2023;18:e0282573. [PMID: 37478073; PMCID: PMC10361488; DOI: 10.1371/journal.pone.0282573]
Abstract
Clinical prognostic scoring systems have limited utility for predicting treatment outcomes in lymphomas. We therefore tested the feasibility of a deep-learning (DL)-based image analysis methodology on pre-treatment diagnostic computed tomography (dCT), low-dose CT (lCT), and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) images and rule-based reasoning to predict treatment response to chimeric antigen receptor (CAR) T-cell therapy in B-cell lymphomas. Pre-treatment images of 770 lymph node lesions from 39 adult patients with B-cell lymphomas treated with CD19-directed CAR T-cells were analyzed. Transfer learning using a pre-trained neural network model, then retrained for a specific task, was used to predict lesion-level treatment responses from separate dCT, lCT, and FDG-PET images. Patient-level response analysis was performed by applying rule-based reasoning to lesion-level prediction results. Patient-level response prediction was also compared to prediction based on the international prognostic index (IPI) for diffuse large B-cell lymphoma. The average accuracy of lesion-level response prediction based on single whole dCT slice-based input was 0.82 ± 0.05, with sensitivity 0.87 ± 0.07, specificity 0.77 ± 0.12, and AUC 0.91 ± 0.03. Patient-level response prediction from dCT, using the "Majority 60%" rule, had accuracy 0.81, sensitivity 0.75, and specificity 0.88 using 12-month post-treatment patient response as the reference standard, and outperformed response prediction based on IPI risk factors (accuracy 0.54, sensitivity 0.38, and specificity 0.61; p = 0.046). Prediction of treatment outcome in B-cell lymphomas from pre-treatment medical images using DL-based image analysis and rule-based reasoning is feasible. This approach can potentially provide clinically useful prognostic information for decision-making in advance of initiating CAR T-cell therapy.
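A patient-level rule of the "Majority 60%" kind can be sketched in a few lines: the patient is called a responder when at least 60% of their lesions are predicted responsive. The exact tie-breaking and any lesion weighting used in the study are not specified here, so this is an assumed simple form.

```python
def patient_response(lesion_preds, threshold=0.60):
    """Aggregate lesion-level predictions into a patient-level call:
    responder if the fraction of lesions predicted responsive meets
    the threshold (a sketch of a 'Majority 60%'-style rule)."""
    responding = sum(1 for p in lesion_preds if p == "responder")
    frac = responding / len(lesion_preds)
    return "responder" if frac >= threshold else "non-responder"

# 7 of 10 lesions predicted responsive -> patient labeled responder.
preds = ["responder"] * 7 + ["non-responder"] * 3
print(patient_response(preds))  # responder
```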
Affiliation(s)
- Yubing Tong
  - Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Jayaram K Udupa
  - Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Emeline Chong
  - Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Nicole Winchell
  - Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Changjian Sun
  - Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Yongning Zou
  - Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Stephen J Schuster
  - Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Drew A Torigian
  - Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
  - Lymphoma Program, Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
9. Fang D, Jiang H, Chen W, Qin Z, Shi J, Zhang J. Pulmonary nodule detection on lung parenchyma images using hyber-deep algorithm. Heliyon 2023;9:e17599. [PMID: 37449096; PMCID: PMC10336504; DOI: 10.1016/j.heliyon.2023.e17599]
Abstract
The incidence of lung cancer has seen a significant increase in recent times, leading to a rise in fatalities. The detection of pulmonary nodules from CT images has emerged as an effective method to aid in the diagnosis of lung cancer. Ensuring information security holds utmost significance in the detection of nodules, with particular attention given to safeguarding patient privacy within the context of the Internet of Things (IoT). In this regard, transfer learning emerges as a potent technique for preserving the confidentiality of patient data. First, we applied several data pre-processing steps, such as K-Means-based lung segmentation, denoising, and lung parenchyma extraction, through a dedicated medical IoT network. We used the Microsoft Common Objects in Context (MS-COCO) dataset to pre-train the detection framework and fine-tuned it with the Lung Nodule Analysis 16 (LUNA16) dataset to adapt it to nodule detection tasks. To evaluate the effectiveness of our proposed pipeline, we conducted extensive experiments that included subjective evaluation of detection results and quantitative data analysis. The results of these experiments demonstrated the efficacy of our approach in accurately detecting pulmonary nodules. Our study provides a promising framework for trustworthy pulmonary nodule detection on lung parenchyma images using a secured hyper-deep algorithm, which has the potential to improve lung cancer diagnosis and reduce associated fatalities.
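The K-Means-based lung segmentation step can be approximated by a two-cluster 1-D k-means on voxel intensities, thresholding dark (air-filled lung) from bright soft tissue. The toy slice, HU values, and cluster initialization below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kmeans_threshold(values, iters=20):
    """Two-cluster 1-D k-means on voxel intensities; returns the midpoint
    between the two cluster centers as a dark/bright threshold."""
    c = np.array([values.min(), values.max()], dtype=float)  # init at extremes
    for _ in range(iters):
        # Assign each voxel to the nearest center, then update the centers.
        assign = np.abs(values[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(assign == k):
                c[k] = values[assign == k].mean()
    return c.mean()

# Toy axial "CT slice": dark lung region (~-800 HU) inside brighter body (~40 HU).
slice_hu = np.full((8, 8), 40.0)
slice_hu[2:6, 2:6] = -800.0

t = kmeans_threshold(slice_hu.ravel())
lung_mask = slice_hu < t   # dark cluster -> candidate lung parenchyma
```

On real scans this rough mask would then be refined (hole filling, connected components, parenchyma extraction) before nodule detection.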
Collapse
Affiliation(s)
- Da Fang
- School of Physics and Electronic Information, Yunnan Normal University, Kunming 650500, China
- Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming 650500, China
| | - Hao Jiang
- School of Physics and Electronic Information, Yunnan Normal University, Kunming 650500, China
- Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming 650500, China
| | - Wenyang Chen
- School of Physics and Electronic Information, Yunnan Normal University, Kunming 650500, China
- Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming 650500, China
| | - Zhibao Qin
- School of Physics and Electronic Information, Yunnan Normal University, Kunming 650500, China
- Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming 650500, China
| | - Junsheng Shi
- School of Physics and Electronic Information, Yunnan Normal University, Kunming 650500, China
- Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming 650500, China
| | - Jun Zhang
- School of Physics and Electronic Information, Yunnan Normal University, Kunming 650500, China
- Yunnan Key Laboratory of Optoelectronic Information Technology, Kunming 650500, China
| |
Collapse
|
10
|
Gasmi I, Calinghen A, Parienti JJ, Belloy F, Fohlen A, Pelage JP. Comparison of diagnostic performance of a deep learning algorithm, emergency physicians, junior radiologists and senior radiologists in the detection of appendicular fractures in children. Pediatr Radiol 2023; 53:1675-1684. [PMID: 36877239 DOI: 10.1007/s00247-023-05621-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 11/21/2022] [Accepted: 01/30/2023] [Indexed: 03/07/2023]
Abstract
BACKGROUND Advances have been made in the use of artificial intelligence (AI) in diagnostic imaging, particularly in the detection of fractures on conventional radiographs. Studies examining fracture detection in the pediatric population are few, and the anatomical variations and developmental changes that occur with a child's age require studies specific to this population. Failure to diagnose fractures early in children may have serious consequences for growth. OBJECTIVE To evaluate the performance of an AI algorithm based on deep neural networks in detecting traumatic appendicular fractures in a pediatric population, and to compare the sensitivity, specificity, positive predictive value, and negative predictive value of different readers and the AI algorithm. MATERIALS AND METHODS This retrospective study of 878 patients younger than 18 years of age evaluated conventional radiographs obtained after recent non-life-threatening trauma. All radiographs of the shoulder, arm, elbow, forearm, wrist, hand, leg, knee, ankle, and foot were evaluated. The diagnostic performance of a consensus of radiology experts in pediatric imaging (the reference standard) was compared with that of pediatric radiologists, emergency physicians, senior residents, and junior residents. The predictions made by the AI algorithm were compared with the annotations made by the different physicians. RESULTS The algorithm predicted 174 fractures out of 182, corresponding to a sensitivity of 95.6%, a specificity of 91.64%, and a negative predictive value of 98.76%. The AI predictions were close to those of pediatric radiologists (sensitivity 98.35%) and senior residents (95.05%) and above those of emergency physicians (81.87%) and junior residents (90.1%). The algorithm identified 3 (1.6%) fractures not initially seen by pediatric radiologists. CONCLUSION This study suggests that deep learning algorithms can be useful in improving fracture detection in children.
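The reader-performance measures reported above derive from simple confusion-matrix counts; for instance, 174 fractures detected out of 182 gives the stated 95.6% sensitivity. A minimal sketch follows; the true-negative and false-positive counts below are illustrative values chosen to be consistent with the reported specificity, not figures taken from the study.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and negative predictive value
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true fractures detected
    specificity = tn / (tn + fp)   # fraction of fracture-free exams cleared
    npv = tn / (tn + fn)           # reliability of a negative call
    return sensitivity, specificity, npv

# tp/fn come from the abstract (174 of 182 fractures found);
# tn/fp are hypothetical counts matching the reported 91.64% specificity.
sens, spec, npv = diagnostic_metrics(tp=174, fn=8, tn=658, fp=60)
```

With these counts, `sens` evaluates to 174/182 ≈ 0.956, matching the abstract's 95.6%.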
Collapse
Affiliation(s)
- Idriss Gasmi
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
| | - Arvin Calinghen
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
| | - Jean-Jacques Parienti
- GRAM 2.0 EA2656 UNICAEN Normandie, University Hospital, Caen, France
- Department of Clinical Research, Caen University Hospital, Caen, France
| | - Frederique Belloy
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
| | - Audrey Fohlen
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France
- UNICAEN CEA CNRS ISTCT- CERVOxy, Normandie University, 14000, Caen, France
| | - Jean-Pierre Pelage
- Department of Radiology, Caen University Medical Center, 14033 Cedex 9, Caen, France.
- UNICAEN CEA CNRS ISTCT- CERVOxy, Normandie University, 14000, Caen, France.
| |
Collapse
|
11
|
V R N, Chandra S S V. ExtRanFS: An Automated Lung Cancer Malignancy Detection System Using Extremely Randomized Feature Selector. Diagnostics (Basel) 2023; 13:2206. [PMID: 37443600 DOI: 10.3390/diagnostics13132206] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Revised: 06/22/2023] [Accepted: 06/25/2023] [Indexed: 07/15/2023] Open
Abstract
Lung cancer is an abnormality in which the body's cells multiply uncontrollably. The disease can be deadly if not detected at an initial stage. To address this issue, an automated lung cancer malignancy detection (ExtRanFS) framework is developed using transfer learning. We used the IQ-OTH/NCCD dataset, gathered from Iraqi hospitals in 2019, encompassing CT scans of patients suffering from various lung cancers as well as healthy subjects. The annotated dataset consists of CT slices from 110 patients, of whom 40 were diagnosed with malignant tumors, 15 with benign tumors, and 55 were determined to be in good health. All CT images are in DICOM format with a 1 mm slice thickness, consisting of 80 to 200 slices taken from various sides and angles. The proposed system utilizes a convolution-based pre-trained VGG16 model as the feature extractor and an Extremely Randomized Trees classifier as the feature selector. The selected features are fed to a Multi-Layer Perceptron (MLP) classifier to detect whether the lung tissue is benign, malignant, or normal. The accuracy, sensitivity, and F1-score of the proposed framework are 99.09%, 98.33%, and 98.33%, respectively. To evaluate the proposed model, a comparison is performed with other pre-trained models as feature extractors and with existing state-of-the-art methodologies as classifiers. The experimental results show that the proposed framework outperforms the existing methodologies. This work would benefit both practitioners and patients in identifying whether a tumor is benign, malignant, or normal.
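The feature-selection stage described above, keeping only the VGG16 features that an Extremely Randomized Trees model ranks as most important before the MLP sees them, reduces to a top-k projection once importance scores are available. The sketch below shows only that selection step; the importance scores would come from a fitted tree ensemble, and the function names and values are illustrative, not the authors' code.

```python
def select_top_features(importances, k):
    """Indices of the k highest-scoring features, returned in ascending
    index order so column selection stays stable."""
    ranked = sorted(range(len(importances)),
                    key=lambda i: importances[i], reverse=True)
    return sorted(ranked[:k])

def reduce_features(matrix, keep):
    """Project each sample (row) onto the kept feature columns."""
    return [[row[i] for i in keep] for row in matrix]

# Toy importances for a 4-feature extractor output; keep the best two.
keep = select_top_features([0.10, 0.40, 0.05, 0.30], k=2)
reduced = reduce_features([[9, 8, 7, 6]], keep)
```

The reduced matrix is what a downstream MLP classifier would be trained on.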
Collapse
Affiliation(s)
- Nitha V R
- Department of Computer Science, University of Kerala, Thiruvananthapuram 695581, India
| | - Vinod Chandra S S
- Department of Computer Science, University of Kerala, Thiruvananthapuram 695581, India
| |
Collapse
|
12
|
Esfandiari MA, Fallah Tafti M, Jafarnia Dabanloo N, Yousefirizi F. Detection of the rotator cuff tears using a novel convolutional neural network from magnetic resonance image (MRI). Heliyon 2023; 9:e15804. [PMID: 37206038 PMCID: PMC10189183 DOI: 10.1016/j.heliyon.2023.e15804] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Revised: 04/20/2023] [Accepted: 04/21/2023] [Indexed: 05/21/2023] Open
Abstract
The rotator cuff tear is a common injury for basketball players, handball players, and other athletes who heavily use their shoulders. This injury can be diagnosed precisely from a magnetic resonance (MR) image. In this paper, a novel deep learning-based framework is proposed to diagnose rotator cuff tears from MRI images of patients suspected of having the injury. First, we collected 150 shoulder MRI images in equal numbers from two classes: rotator cuff tear patients and healthy subjects. These images were reviewed by an orthopedic specialist, then labeled and used as input to various configurations of a Convolutional Neural Network (CNN). At this stage, five different convolutional network configurations were examined. In the next step, the configuration with the highest accuracy was used to extract deep features and classify the two classes of rotator cuff tear and healthy. The MRI images were also fed to two lightweight pre-trained CNNs (MobileNetv2 and SqueezeNet) for comparison with the proposed CNN. Finally, evaluation was performed using 5-fold cross-validation. A dedicated Graphical User Interface (GUI) was designed in the MATLAB environment for ease of use, allowing testing by detecting the image class. The proposed CNN achieved higher accuracy than the two pre-trained CNNs. The average accuracy, precision, sensitivity, and specificity achieved by the best CNN configuration were 92.67%, 91.13%, 91.75%, and 92.22%, respectively. The deep learning algorithm could accurately rule out significant rotator cuff tears based on shoulder MRI.
Collapse
Affiliation(s)
- Mohammad Amin Esfandiari
- Department of Biomedical Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran
| | - Mohammad Fallah Tafti
- Department of Biomedical Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran
- Corresponding author.
| | - Nader Jafarnia Dabanloo
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
| | - Fereshteh Yousefirizi
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
| |
Collapse
|
13
|
Rehman A, Khan A, Fatima G, Naz S, Razzak I. Review on chest pathogies detection systems using deep learning techniques. Artif Intell Rev 2023; 56:1-47. [PMID: 37362896 PMCID: PMC10027283 DOI: 10.1007/s10462-023-10457-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/29/2023]
Abstract
Chest radiography is the standard and most affordable way to diagnose, analyze, and examine different thoracic and chest diseases. Typically, the radiograph is examined by an expert radiologist or physician to determine whether a particular anomaly exists. Computer-aided methods are also used to assist radiologists and make the analysis process accurate, fast, and more automated. A tremendous improvement in automatic chest pathology detection and analysis can be observed with the emergence of deep learning. This survey aims to review, technically evaluate, and synthesize the different computer-aided chest pathology detection systems. State-of-the-art single- and multi-pathology detection systems published in the last five years are thoroughly discussed. A taxonomy of image acquisition, dataset preprocessing, feature extraction, and deep learning models is presented. The mathematical concepts underlying the feature extraction model architectures are discussed. Moreover, the different articles are compared based on their contributions, datasets, methods used, and results achieved. The article ends with the main findings, current trends, challenges, and future recommendations.
Collapse
Affiliation(s)
- Arshia Rehman
- COMSATS University Islamabad, Abbottabad-Campus, Abbottabad, Pakistan
| | - Ahmad Khan
- COMSATS University Islamabad, Abbottabad-Campus, Abbottabad, Pakistan
| | - Gohar Fatima
- The Islamia University of Bahawalpur, Bahawal Nagar Campus, Bahawal Nagar, Pakistan
| | - Saeeda Naz
- Govt Girls Post Graduate College No.1, Abbottabad, Pakistan
| | - Imran Razzak
- School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
| |
Collapse
|
14
|
Role of Ensemble Deep Learning for Brain Tumor Classification in Multiple Magnetic Resonance Imaging Sequence Data. Diagnostics (Basel) 2023; 13:diagnostics13030481. [PMID: 36766587 PMCID: PMC9914433 DOI: 10.3390/diagnostics13030481] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 01/24/2023] [Accepted: 01/26/2023] [Indexed: 01/31/2023] Open
Abstract
The biopsy is the gold-standard method for tumor grading. However, due to its invasive nature, it has sometimes proved fatal for brain tumor patients. As a result, a non-invasive computer-aided diagnosis (CAD) tool is required. Recently, many magnetic resonance imaging (MRI)-based CAD tools have been proposed for brain tumor grading. MRI has several sequences, which can express tumor structure in different ways; however, the most suitable MRI sequence for brain tumor classification is not yet known. The most common brain tumor is the glioma, which is also the most fatal form. Therefore, in the proposed study, to maximize the ability to classify low-grade versus high-grade glioma, three datasets were designed comprising three MRI sequences: T1-weighted (T1W), T2-weighted (T2W), and fluid-attenuated inversion recovery (FLAIR). Five well-established convolutional neural networks, AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, were adopted for tumor classification. An ensemble algorithm was proposed using the majority vote of the above five deep learning (DL) models to produce more consistent and improved results than any individual model. A five-fold cross-validation (K5-CV) protocol was adopted for training and testing. For the proposed ensemble classifier with K5-CV, the highest test accuracies of 98.88 ± 0.63%, 97.98 ± 0.86%, and 94.75 ± 0.61% were achieved for the FLAIR, T2W, and T1W MRI data, respectively. The FLAIR MRI data were found to be the most significant for brain tumor classification, showing accuracy improvements of 4.17% and 0.91% over the T1W and T2W sequence data, respectively. The proposed ensemble algorithm (MajVot) improved the average accuracy across the three datasets by 3.60%, 2.84%, 1.64%, 4.27%, and 1.14% against AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, respectively.
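The majority-vote fusion described above is simple to state in code: for each scan, take the class that most of the networks predicted. The following is a minimal sketch, not the MajVot implementation; the model outputs and class labels are invented for illustration, and ties here fall to the first class encountered, which a real implementation would need to resolve deliberately.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-model class predictions by majority vote.
    `predictions` is a list of prediction lists, one per model,
    aligned so column i holds every model's label for scan i."""
    fused = []
    for votes in zip(*predictions):  # one vote per model for this scan
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Toy labels from five models over three scans (HGG = high-grade glioma,
# LGG = low-grade glioma).
preds = [
    ["HGG", "LGG", "HGG"],
    ["HGG", "LGG", "LGG"],
    ["LGG", "HGG", "HGG"],
    ["HGG", "LGG", "HGG"],
    ["HGG", "HGG", "HGG"],
]
fused = majority_vote(preds)
```

With an odd number of voters and two classes, as in the five-model setup above, a strict majority always exists and no tie-break is needed.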
Collapse
|
15
|
A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:5905230. [PMID: 36569180 PMCID: PMC9788902 DOI: 10.1155/2022/5905230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and its death rate continues to rise. Detecting it early improves the chances of recovery. However, because radiologists are few in number and heavily overworked, the growing volume of image data makes accurate evaluation difficult. As a result, many researchers have proposed automated methods to detect cancerous growth from medical images quickly and accurately. Previously, much work was done on computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goals of effective detection and segmentation of pulmonary nodules and classification of nodules as malignant or benign. Yet no comprehensive review covering all aspects of lung cancer detection has been published. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues and possible solutions.
Collapse
|
16
|
Saad HM, Tourky GF, Al-kuraishy HM, Al-Gareeb AI, Khattab AM, Elmasry SA, Alsayegh AA, Hakami ZH, Alsulimani A, Sabatier JM, Eid MW, Shaheen HM, Mohammed AA, Batiha GES, De Waard M. The Potential Role of MUC16 (CA125) Biomarker in Lung Cancer: A Magic Biomarker but with Adversity. Diagnostics (Basel) 2022; 12:2985. [PMID: 36552994 PMCID: PMC9777200 DOI: 10.3390/diagnostics12122985] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 11/24/2022] [Accepted: 11/24/2022] [Indexed: 12/05/2022] Open
Abstract
Lung cancer is the second most commonly diagnosed cancer in the world. In the diagnosis of lung cancer, combined carcinoembryonic antigen (CEA) and cancer antigen 125 (CA125) detection has higher sensitivity, specificity, and diagnostic odds ratios than CEA detection alone. Most individuals with elevated serum CA125 levels had lung cancer in either stage 3 or stage 4. Serum CA125 levels were similarly elevated in lung cancer patients who also had pleural effusions or ascites. Furthermore, there is strong evidence that human lung cancer produces CA125 in vitro, which suggests that clinical illnesses other than ovarian cancer could also be responsible for a rise in CA125. MUC16 (CA125) is a natural killer cell inhibitor. As a screening test for the diagnosis and prognosis of early-stage lung and ovarian cancer, CA125 has been widely used as a marker in three different clinical settings. MUC16 mRNA levels in lung cancer are increased regardless of gender, and increased expression of mutated MUC16 enhances lung cancer cell proliferation and growth. Additionally, the serum CA125 level is thought to be a key indicator of lung cancer metastasis to the liver. CA125 could also be a useful biomarker in the diagnosis of other cancer types, such as ovarian, breast, and pancreatic cancers. One important limitation of CA125 as a first step in such a screening technique is that up to 20% of ovarian tumors lack antigen expression. Each of the 10 possible serum markers was expressed in 29-100% of ovarian tumors with minimal or no CA125 expression. Therefore, there is controversy regarding CA125 in the diagnosis and prognosis of lung cancer and other cancer types. Accordingly, preclinical and clinical studies are warranted to elucidate the clinical benefit of CA125 in the diagnosis and prognosis of lung cancer.
Collapse
Affiliation(s)
- Hebatallah M. Saad
- Department of Pathology, Faculty of Veterinary Medicine, Matrouh University, Marsa Matruh 51744, Matrouh, Egypt
| | - Ghada F. Tourky
- Faculty of Veterinary Medicine, Damanhour University, Damanhour 22511, AlBeheira, Egypt
| | - Hayder M. Al-kuraishy
- Department of Clinical Pharmacology, Internal Medicine, College of Medicine, Al-Mustansiriyiah University, Baghdad P.O. Box 14132, Iraq
| | - Ali I. Al-Gareeb
- Department of Clinical Pharmacology, Internal Medicine, College of Medicine, Al-Mustansiriyiah University, Baghdad P.O. Box 14132, Iraq
| | - Ahmed M. Khattab
- Pharmacy College, Al-Azhar University, Cairo 11884, Cairo, Egypt
| | - Sohaila A. Elmasry
- Faculty of Science, Damanhour University, Damanhour 22511, AlBeheira, Egypt
| | - Abdulrahman A. Alsayegh
- Clinical Nutrition Department, Applied Medical Sciences College, Jazan University, Jazan 82817, Saudi Arabia
| | - Zaki H. Hakami
Medical Laboratory Technology Department, College of Applied Medical Sciences, Jazan University, Jazan 45142, Saudi Arabia
| | - Ahmad Alsulimani
Medical Laboratory Technology Department, College of Applied Medical Sciences, Jazan University, Jazan 45142, Saudi Arabia
| | - Jean-Marc Sabatier
- Aix-Marseille Université, Institut de Neurophysiopathologie (INP), CNRS UMR 7051, Faculté des Sciences Médicales et Paramédicales, 27 Bd Jean Moulin, 13005 Marseille, France
| | - Marwa W. Eid
- Faculty of Veterinary Medicine, Damanhour University, Damanhour 22511, AlBeheira, Egypt
| | - Hazem M. Shaheen
- Department of Pharmacology and Therapeutics, Faculty of Veterinary Medicine, Damanhour University, Damanhour 22511, AlBeheira, Egypt
| | - Ali A. Mohammed
- Consultant Respiratory & General Physician, The Chest Clinic, Barts Health NHS Trust Whipps Cross University Hospital, London E11 1NR, UK
| | - Gaber El-Saber Batiha
- Department of Pharmacology and Therapeutics, Faculty of Veterinary Medicine, Damanhour University, Damanhour 22511, AlBeheira, Egypt
| | - Michel De Waard
- Smartox Biotechnology, 6 rue des Platanes, 38120 Saint-Egrève, France
- L’institut du Thorax, INSERM, CNRS, UNIV NANTES, 44007 Nantes, France
- Université de Nice Sophia-Antipolis, LabEx «Ion Channels, Science & Therapeutics», 06560 Valbonne, France
| |
Collapse
|
17
|
Chao HS, Wu YH, Siana L, Chen YM. Generating High-Resolution CT Slices from Two Image Series Using Deep-Learning-Based Resolution Enhancement Methods. Diagnostics (Basel) 2022; 12:2725. [PMID: 36359568 PMCID: PMC9689374 DOI: 10.3390/diagnostics12112725] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 10/30/2022] [Accepted: 11/04/2022] [Indexed: 08/30/2023] Open
Abstract
Medical image super-resolution (SR) has mainly been developed for a single image in the literature. However, there is a growing demand for high-resolution, thin-slice medical images. We hypothesized that fusing the two planes of a computed tomography (CT) study and applying the SR model to the third plane could yield high-quality thin-slice SR images. From the same CT study, we collected axial planes of 1 mm and 5 mm in thickness and coronal planes of 5 mm in thickness. Four SR algorithms were then used for SR reconstruction. Quantitative measurements were performed for image quality testing. We also tested the effects of different regions of interest (ROIs). Based on quantitative comparisons, the image quality obtained when the SR models were applied to the sagittal plane was better than that when applying the models to the other planes. The results were statistically significant according to the Wilcoxon signed-rank test. The overall effect of the enhanced deep residual network (EDSR) model was superior to those of the other three resolution-enhancement methods. A maximal ROI containing minimal blank areas was the most appropriate for quantitative measurements. Fusing two series of thick-slice CT images and applying SR models to the third plane can yield high-resolution thin-slice CT images. EDSR provides superior SR performance across all ROI conditions.
Collapse
Affiliation(s)
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei City 112, Taiwan
- Faculty of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei City 112, Taiwan
| | - Yu-Hong Wu
- Research and Development III, V5 Technologies Co., Ltd., Hsinchu 300, Taiwan
| | - Linda Siana
- Research and Development III, V5 Technologies Co., Ltd., Hsinchu 300, Taiwan
| | - Yuh-Min Chen
- Department of Chest Medicine, Taipei Veterans General Hospital, Taipei City 112, Taiwan
- Faculty of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei City 112, Taiwan
| |
Collapse
|
18
|
Chen S, Duan J, Wang H, Wang R, Li J, Qi M, Duan Y, Qi S. Automatic detection of stroke lesion from diffusion-weighted imaging via the improved YOLOv5. Comput Biol Med 2022; 150:106120. [PMID: 36179511 DOI: 10.1016/j.compbiomed.2022.106120] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 08/31/2022] [Accepted: 09/17/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND AND OBJECTIVE Stroke is the second most deadly disease globally and seriously endangers people's lives and health. The automatic detection of stroke lesions from diffusion-weighted imaging (DWI) can improve diagnosis. Recently, automatic detection methods based on YOLOv5 have been applied to medical images. However, most of them struggle to capture stroke lesions because of their small size and fuzzy boundaries. METHODS To address this problem, a novel method for tracing the edge of the stroke lesion based on YOLOv5 (TE-YOLOv5) is proposed. Specifically, we constantly update the high-level features of the lesion using an aggregate pool (AP) module, and we feed the extracted features into a reverse attention (RA) module to promptly trace the edge relationship. Overall, 1681 DWI images of 319 stroke patients were collected, and experienced radiologists marked the lesions. The DWI images were randomly split into training and test sets at a ratio of 8:2. TE-YOLOv5 was compared with related models, and a detailed ablation analysis was conducted to clarify the roles of the RA and AP modules. RESULTS TE-YOLOv5 outperforms its counterparts and achieves competitive performance with a precision of 81.5%, a recall of 75.8%, and a mAP@0.5 of 80.7% (mean average precision at an intersection-over-union threshold of 0.5) under the same backbone. At the patient level, the positive finding rate reaches 98.51% with the confidence threshold set at 80.0%. After ablating RA, the mAP@0.5 decreases to 79.6%; after ablating both RA and AP, it decreases to 78.1%. CONCLUSIONS The proposed TE-YOLOv5 can automatically and effectively detect stroke lesions from DWI images, especially those with an extremely small size and blurred boundaries. The AP and RA modules aggregate multi-layer high-level features and concurrently track the edge relationship of stroke lesions. These detection methods might help radiologists improve stroke diagnosis and have great application potential in clinical practice.
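The mAP@0.5 metric reported above counts a predicted box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch follows; the box format and coordinate values are illustrative, not drawn from the study.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping 2x2 boxes share a 1x1 corner: IoU = 1/7.
score = iou((0, 0, 2, 2), (1, 1, 3, 3))
```

For small lesions with fuzzy boundaries, a box shifted by only a pixel or two can drop the IoU below 0.5, which is exactly why such targets are hard to score under this metric.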
Collapse
Affiliation(s)
- Shannan Chen
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Lab of Advanced Design and Intelligent Computing, Ministry of Education, Dalian University, Dalian, China.
| | - Jinfeng Duan
- Department of General Surgery, General Hospital of Northern Theater Command, Shenyang, China.
| | - Hong Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China.
| | - Rongqiang Wang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China.
| | - Jinze Li
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China.
| | - Miao Qi
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China.
| | - Yang Duan
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China.
| | - Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
| |
Collapse
|
19
|
Karrar A, Mabrouk MS, Abdel Wahed M, Sayed AY. Auto diagnostic system for detecting solitary and juxtapleural pulmonary nodules in computed tomography images using machine learning. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07844-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/07/2022]
Abstract
Lung cancer is one of the most serious cancers in the world, with one of the lowest survival rates after diagnosis; it is typically detected in computed tomography (CT) scans. Lung nodules may be isolated from (solitary) or attached to (juxtapleural) other structures such as blood vessels or the pleura. Diagnosing lung nodules according to their location increases the survival rate, as it supports diagnostic and therapeutic quality assurance. In this paper, a Computer-Aided Diagnosis (CADx) system is proposed to classify solitary and juxtapleural nodules inside the lungs. Two main auto-diagnostic schemes of supervised learning for lung nodule classification are developed. In the first scheme, (bounding box + maximum intensity projection) and (thresholding + K-means clustering) segmentation approaches are proposed, and first- and second-order features are extracted. Fisher score ranking is used in the first scheme as a feature selection method, and the top five, ten, and fifteen ranked features are selected; a Support Vector Machine (SVM) classifier is then applied. In the second scheme, the same segmentation approaches are used with a Deep Convolutional Neural Network (DCNN), a successful tool for deep learning classification. Because of the limited and imbalanced data, tenfold cross-validation and random oversampling are used in both schemes. For diagnosis of the solitary nodule, the first scheme with SVM achieved the highest accuracy and sensitivity, 91.4% and 89.3%, respectively, using a radial basis function kernel, the (thresholding + K-means clustering) segmentation approach, and the top 15 ranked features. In the second scheme, the DCNN achieved the highest accuracy and sensitivity, 96% and 95%, respectively, in detecting the solitary nodule when applying the bounding box and maximum intensity projection segmentation approach. The receiver operating characteristic (ROC) curve was used to evaluate classifier performance; a maximum AUC of 90.3% was achieved with the DCNN classifier for detecting solitary nodules. This CAD system acts as a second opinion for the radiologist to help in the early diagnosis of lung cancer. The accuracy, sensitivity, and specificity of scheme I (SVM) and scheme II (DCNN) showed promising results in comparison with other published studies.
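Fisher score ranking, used above to pick the top 5, 10, and 15 features, rates each feature by its between-class separation relative to its within-class spread. The following is a two-class sketch of the general scoring formula, not the authors' code; the feature values and labels are toy data.

```python
def fisher_score(feature_values, labels):
    """Two-class Fisher score: squared difference of class means over the
    sum of class variances. Higher scores mark more discriminative features."""
    a = [v for v, y in zip(feature_values, labels) if y == 0]
    b = [v for v, y in zip(feature_values, labels) if y == 1]
    mean = lambda xs: sum(xs) / len(xs)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / len(xs)
    ma, mb = mean(a), mean(b)
    # Small epsilon avoids division by zero for constant features.
    return (ma - mb) ** 2 / (var(a, ma) + var(b, mb) + 1e-12)

# A well-separated feature scores far higher than one whose values overlap.
separated = fisher_score([0.0, 0.1, 0.9, 1.0], [0, 0, 1, 1])
overlapping = fisher_score([0.0, 1.0, 0.0, 1.0], [0, 0, 1, 1])
```

Scoring every extracted feature this way and keeping the highest-ranked ones reproduces the top-k selection step that feeds the SVM.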
Collapse
|
20
|
Xie Y, Zaccagna F, Rundo L, Testa C, Agati R, Lodi R, Manners DN, Tonon C. Convolutional Neural Network Techniques for Brain Tumor Classification (from 2015 to 2022): Review, Challenges, and Future Perspectives. Diagnostics (Basel) 2022; 12:diagnostics12081850. [PMID: 36010200 PMCID: PMC9406354 DOI: 10.3390/diagnostics12081850] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Revised: 07/20/2022] [Accepted: 07/28/2022] [Indexed: 12/21/2022] Open
Abstract
Convolutional neural networks (CNNs) constitute a widely used deep learning approach that has frequently been applied to the problem of brain tumor diagnosis. Such techniques still face some critical challenges in moving towards clinical application. The main objective of this work is to present a comprehensive review of studies using CNN architectures to classify brain tumors from MR images, with the aim of identifying useful strategies for, and possible impediments to, the development of this technology. Relevant articles were identified using a predefined, systematic procedure. For each article, data were extracted regarding training data, target problems, the network architecture, validation methods, and the reported quantitative performance criteria. The clinical relevance of the studies was then evaluated to identify limitations by considering the merits of convolutional neural networks and the remaining challenges that need to be solved to promote the clinical application and development of CNN algorithms. Finally, possible directions for future research are discussed for researchers in the biomedical and machine learning communities. A total of 83 studies were identified and reviewed. They differed in terms of the precise classification problem targeted and the strategies used to construct and train the chosen CNN. Consequently, the reported performance varied widely, with accuracies of 91.63–100% in differentiating meningiomas, gliomas, and pituitary tumors (26 articles) and of 60.0–99.46% in distinguishing low-grade from high-grade gliomas (13 articles). The review provides a survey of the state of the art in CNN-based deep learning methods for brain tumor classification. Many networks demonstrated good performance, and it is not evident that any specific methodological choice greatly outperforms the alternatives, especially given the inconsistencies in the reporting of validation methods, performance metrics, and training data encountered. Few studies have focused on clinical usability.
Affiliation(s)
- Yuting Xie: Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Fulvio Zaccagna: Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Leonardo Rundo: Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
- Claudia Testa: Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy; Department of Physics and Astronomy, University of Bologna, 40127 Bologna, Italy
- Raffaele Agati: Programma Neuroradiologia con Tecniche ad elevata complessità, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Raffaele Lodi: Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- David Neil Manners (corresponding author): Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Caterina Tonon: Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy; Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
21.
Sasaki Y, Kondo Y, Aoki T, Koizumi N, Ozaki T, Seki H. Use of deep learning to predict postoperative recurrence of lung adenocarcinoma from preoperative CT. Int J Comput Assist Radiol Surg 2022; 17:1651-1661. [PMID: 35763149] [DOI: 10.1007/s11548-022-02694-0]
Abstract
PURPOSE Although surgery is the primary treatment for lung cancer, some patients nevertheless experience postoperative recurrence. If recurrence can be predicted early, before treatment is initiated, it may be possible to provide individualized treatment. Thus, in this study, we propose a computer-aided diagnosis (CAD) system that predicts postoperative recurrence from computed tomography (CT) images acquired before surgery in patients with lung adenocarcinoma, using a deep convolutional neural network (DCNN). METHODS This retrospective study included 150 patients who underwent curative surgery for primary lung adenocarcinoma. To create the original images, the tumor region was cropped from the preoperative contrast-enhanced CT images. The number of input images to the DCNN was increased to 3000 using data augmentation. We constructed the CAD system by transfer learning from a pretrained VGG19 model. Tenfold cross-validation was performed five times, and cases with an average identification rate of 0.5 or higher were classified as recurrences. RESULTS The median duration of follow-up was 73.2 months. The performance evaluation showed that the sensitivity, specificity, and accuracy of the proposed method were 0.75, 0.87, and 0.82, respectively. The area under the receiver operating characteristic curve was 0.86. CONCLUSION We demonstrated the usefulness of a DCNN in predicting postoperative recurrence of lung adenocarcinoma from preoperative CT images. Because our proposed method uses only CT images, it has the advantage of being able to assess the risk of postoperative recurrence on an individual patient basis, both preoperatively and noninvasively.
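The decision rule described above, averaging each case's identification rate over the five repetitions of tenfold cross-validation and applying the 0.5 cutoff, can be sketched as follows (the scores below are hypothetical, not values from the paper):

```python
def predict_recurrence(identification_rates, threshold=0.5):
    """Average the per-repetition identification rates for one case and
    apply the paper's 0.5 cutoff to call it a recurrence."""
    mean_rate = sum(identification_rates) / len(identification_rates)
    return mean_rate >= threshold

# Hypothetical identification rates for one patient across the five
# repetitions of tenfold cross-validation (mean = 0.604).
print(predict_recurrence([0.62, 0.55, 0.48, 0.71, 0.66]))  # True
```

Tuning the threshold trades sensitivity against specificity; the paper fixes it at 0.5.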
Affiliation(s)
- Yuki Sasaki: Division of Central Radiology, Niigata Cancer Center Hospital, 2-15-3 Kawagishi-cho, Chuo-ku, Niigata-shi, Niigata 951-8566, Japan; Department of Radiological Technology, Graduate School of Health Sciences, Niigata University, Niigata, Japan
- Yohan Kondo: Department of Radiological Technology, Graduate School of Health Sciences, Niigata University, Niigata, Japan
- Tadashi Aoki: Department of Thoracic Surgery, Niigata Cancer Center Hospital, Niigata, Japan
- Naoya Koizumi: Department of Radiology, Niigata Cancer Center Hospital, Niigata, Japan
- Toshiro Ozaki: Department of Radiology, Niigata Cancer Center Hospital, Niigata, Japan
- Hiroshi Seki: Department of Radiology, Niigata Cancer Center Hospital, Niigata, Japan
22.
Prognostic impact of artificial intelligence-based volumetric quantification of the solid part of the tumor in clinical stage 0-I adenocarcinoma. Lung Cancer 2022; 170:85-90. [PMID: 35728481] [DOI: 10.1016/j.lungcan.2022.06.007]
Abstract
INTRODUCTION The size of the solid part of a tumor, as measured using thin-section computed tomography, can help predict disease prognosis in patients with early-stage lung cancer. Although three-dimensional volumetric analysis may be more useful than two-dimensional evaluation, measuring the solid part of some lesions is difficult using this method. We developed artificial intelligence-based analysis software that distinguishes the solid part from the non-solid part (ground-glass opacity) and calculates the solid part volume in a fully automated and reproducible manner. The predictive performance of the artificial intelligence software was evaluated in terms of overall survival and recurrence-free survival. METHODS We analyzed the high-resolution computed tomography images of the primary lesion in 772 consecutive patients with clinical stage 0-I adenocarcinoma. We performed automated measurement of the solid part volume using an artificial intelligence-based algorithm developed in collaboration with FUJIFILM Corporation. The solid part size, the solid part volume based on traditional three-dimensional volumetric analysis, and the solid part volume based on artificial intelligence were compared. RESULTS The artificial intelligence-based method yielded a higher area under the curve for the solid part volume (0.752) than the solid part size (0.722) and the traditional three-dimensional volumetric method (0.723). Multivariate analysis demonstrated that the solid part volume based on artificial intelligence was independently correlated with overall survival (P = 0.019) and recurrence-free survival (P < 0.001). CONCLUSION The solid part volume measured by artificial intelligence was superior to conventional methods in predicting the prognosis of clinical stage 0-I adenocarcinoma.
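The areas under the curve compared above follow the standard Wilcoxon-Mann-Whitney formulation of ROC AUC, which can be computed directly from the predictor values of the two outcome groups. A minimal sketch with made-up solid-part volumes (not the study's data):

```python
def roc_auc(pos_scores, neg_scores):
    """Probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count as half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical solid-part volumes (mL) for cases with vs. without recurrence
print(roc_auc([5.1, 3.8, 4.6], [1.2, 2.9, 4.0]))  # 8/9 ~= 0.889
```

This O(n*m) form is fine for illustration; production code would sort once and rank.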
23.

24.
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge from similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem while saving time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS 425 peer-reviewed articles published in English up until December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest applied only a single approach, of which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
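The "feature extractor" approach the review recommends, freezing the pretrained backbone and training only a lightweight head on its outputs, can be illustrated without any deep learning framework. Everything below is an illustrative sketch: `frozen_backbone` stands in for a pretrained CNN such as ResNet or Inception, and the head is a simple perceptron rather than the classifiers used in the reviewed studies.

```python
def frozen_backbone(x):
    # Stand-in for a pretrained CNN used purely as a feature extractor:
    # its parameters are never updated during training.
    return [x[0] + x[1], x[0] - x[1]]

def train_head(samples, labels, lr=0.1, epochs=50):
    # Only the small linear head is trained on the frozen features.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            f = frozen_backbone(x)
            pred = 1.0 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0.0
            err = y - pred  # perceptron update rule on the head only
            w = [w[0] + lr * err * f[0], w[1] + lr * err * f[1]]
            b += lr * err
    return w, b

def predict(x, w, b):
    f = frozen_backbone(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Toy "images": learn AND of two binary inputs on top of the frozen features.
w, b = train_head([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
print(predict((1, 1), w, b), predict((0, 1), w, b))  # 1 0
```

In the full-scale versions reviewed, "fine-tuning" would additionally unfreeze some or all backbone layers, at a higher computational cost.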
Affiliation(s)
- Hee E Kim: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Alejandro Cosa-Linan: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Nandhini Santhanam: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Mahboubeh Jannesari: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Mate E Maros: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Thomas Ganslandt: Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany; Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058 Erlangen, Germany
25.
Kocher MR, Chamberlin J, Waltz J, Snoddy M, Stringer N, Stephenson J, Kahn J, Mercer M, Baruah D, Aquino G, Kabakus I, Hoelzer P, Sahbaee P, Schoepf UJ, Burt JR. Tumor burden of lung metastases at initial staging in breast cancer patients detected by artificial intelligence as a prognostic tool for precision medicine. Heliyon 2022; 8:e08962. [PMID: 35243082] [PMCID: PMC8873537] [DOI: 10.1016/j.heliyon.2022.e08962]
Abstract
Background Determination of the total number and size of all pulmonary metastases on chest CT is time-consuming and as such has been understudied as an independent metric for disease assessment. A novel artificial intelligence (AI) model may allow automated detection, size determination, and quantification of the number of pulmonary metastases on chest CT. Objective To investigate the utility of a novel AI program applied to initial staging chest CT in breast cancer patients for risk assessment of mortality and survival. Methods Retrospective imaging data from a cohort of 226 subjects with breast cancer were assessed by the novel AI program, and the results were validated by blinded readers. Mean clinical follow-up was 2.5 years for outcomes including cancer-related death and development of extrapulmonary metastatic disease. AI measurements, including the total number of pulmonary metastases and maximum nodule size, were assessed by Cox proportional hazards modeling and adjusted survival. Results 752 lung nodules were identified by the AI program: 689 in 168 subjects with confirmed lung metastases (Lmet+) and 63 in 58 subjects without confirmed lung metastases (Lmet-). Compared with the reader assessment, the AI had a per-patient sensitivity, specificity, PPV, and NPV of 0.952, 0.639, 0.878, and 0.830, respectively. Mortality in the Lmet+ group was four times greater than in the Lmet- group (p = 0.002). In a multivariate analysis, total lung nodule count by AI had a high correlation with overall mortality (OR 1.11 (range 1.07-1.15), p < 0.001), with an AUC of 0.811 (R2 = 0.226, p < 0.0001). When total lung nodule count and maximum nodule diameter were combined, the AUC was 0.826 (R2 = 0.243, p < 0.001). Conclusion Automated AI-based detection of lung metastases in breast cancer patients at initial staging chest CT performed well at identifying pulmonary metastases and demonstrated a strong correlation between the total number and maximum size of lung metastases and future mortality. Clinical impact As a component of precision medicine, AI-based measurements at the time of initial staging may improve prediction of which breast cancer patients will have negative future outcomes. Automated detection software can quantify lung metastases on initial staging chest CT in breast cancer patients. AI-detected lung metastasis number and maximum diameter on CT at initial cancer staging were strong predictors of mortality. The AI detection and segmentation tool contributes to accurate individualized prognostication in breast cancer patients.
Affiliation(s)
- Madison R Kocher, Jordan Chamberlin, Jeffrey Waltz, Madalyn Snoddy, Natalie Stringer, Joseph Stephenson, Jacob Kahn, Megan Mercer, Dhiraj Baruah, Gilberto Aquino, Ismail Kabakus, U Joseph Schoepf, and Jeremy R Burt: Medical University of South Carolina, Department of Radiology, 96 Jonathan Lucas Street Suite 210, MSC 323, Charleston, SC 29425, USA
26.
Ye Q, Gao Y, Ding W, Niu Z, Wang C, Jiang Y, Wang M, Fang EF, Menpes-Smith W, Xia J, Yang G. Robust weakly supervised learning for COVID-19 recognition using multi-center CT images. Appl Soft Comput 2022; 116:108291. [PMID: 34934410] [PMCID: PMC8667427] [DOI: 10.1016/j.asoc.2021.108291]
Abstract
The world is currently experiencing an ongoing pandemic of an infectious disease named coronavirus disease 2019 (COVID-19), which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Computed tomography (CT) plays an important role in assessing the severity of the infection and can also be used to identify symptomatic and asymptomatic COVID-19 carriers. With the surge in the cumulative number of COVID-19 patients, radiologists are increasingly stressed to examine CT scans manually. An automated 3D CT scan recognition tool is therefore in high demand, since manual analysis is time-consuming for radiologists and their fatigue can cause possible misjudgment. However, due to the varying technical specifications of CT scanners located in different hospitals, the appearance of CT images can differ significantly, leading to the failure of many automated image recognition approaches. The multi-domain shift problem in multi-center, multi-scanner studies is therefore nontrivial, and addressing it is crucial for dependable recognition and critical for reproducible and objective diagnosis and prognosis. In this paper, we propose a COVID-19 CT scan recognition model, the coronavirus information fusion and diagnosis network (CIFD-Net), that can efficiently handle the multi-domain shift problem via a new robust weakly supervised learning paradigm. Our model resolves the problem of differing appearance in CT scan images reliably and efficiently while attaining higher accuracy than other state-of-the-art methods.
Affiliation(s)
- Qinghao Ye: Hangzhou Ocean's Smart Boya Co., Ltd, China; University of California, San Diego, La Jolla, CA, USA
- Yuan Gao: Institute of Biomedical Engineering, University of Oxford, UK; Aladdin Healthcare Technologies Ltd, UK
- Chengjia Wang: BHF Center for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Yinghui Jiang: Hangzhou Ocean's Smart Boya Co., Ltd, China; Mind Rank Ltd, China
- Minhao Wang: Hangzhou Ocean's Smart Boya Co., Ltd, China; Mind Rank Ltd, China
- Evandro Fei Fang: Department of Clinical Molecular Biology, University of Oslo, Norway
- Jun Xia: Radiology Department, Shenzhen Second People's Hospital, Shenzhen, China
- Guang Yang: Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
27.
Arimura H, Kodama T, Urakami A, Kamezawa H, Hirose TA, Ninomiya K. [6. Imaging Biopsy for Assisting Cancer Precision Therapy - Information Extracted from Radiomics]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2022; 78:219-224. [PMID: 35185102] [DOI: 10.6009/jjrt.780213]
Affiliation(s)
- Hidetaka Arimura: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University
- Takumi Kodama: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
- Akimasa Urakami: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
- Hidemi Kamezawa: Department of Radiological Technology, Faculty of Fukuoka Medical Technology, Teikyo University
- Taka-Aki Hirose: Division of Radiology, Department of Medical Technology, Kyushu University Hospital
- Kenta Ninomiya: Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University
28.
Abstract
PURPOSE OF REVIEW In this article, we focus on the role of artificial intelligence in the management of lung cancer. We summarize commonly used algorithms, current applications, and challenges of artificial intelligence in lung cancer. RECENT FINDINGS Feature engineering for tabular data and computer vision for image data are the commonly used algorithmic approaches in lung cancer research. Furthermore, the use of artificial intelligence in lung cancer has extended across the entire clinical pathway, including screening, diagnosis, and treatment. Lung cancer screening mainly focuses on two aspects: identifying high-risk populations and the automatic detection of lung nodules. Artificial intelligence diagnosis of lung cancer covers imaging diagnosis, pathological diagnosis, and genetic diagnosis. The artificial intelligence clinical decision-support system is the main application of artificial intelligence in lung cancer treatment. Currently, the main challenges of artificial intelligence applications in lung cancer are the interpretability of models and the limited availability of annotated datasets; recent advances in explainable machine learning, transfer learning, and federated learning may solve these problems. SUMMARY Artificial intelligence shows great potential in many aspects of the management of lung cancer, especially in screening and diagnosis. Further studies on interpretability and privacy are needed for wider application of artificial intelligence in lung cancer.
Affiliation(s)
- Kai Zhang: Department of Thoracic Surgery, Peking University People's Hospital, Beijing, China
29.
Zhang C, Gu J, Zhu Y, Meng Z, Tong T, Li D, Liu Z, Du Y, Wang K, Tian J. AI in spotting high-risk characteristics of medical imaging and molecular pathology. Precision Clinical Medicine 2021; 4:271-286. [PMID: 35692858] [PMCID: PMC8982528] [DOI: 10.1093/pcmedi/pbab026]
Abstract
Medical imaging provides a comprehensive perspective and rich information for disease diagnosis. Combined with artificial intelligence technology, medical imaging can be further mined for detailed pathological information. Many studies have shown that the macroscopic imaging characteristics of tumors are closely related to microscopic gene, protein, and molecular changes. To explore the role of artificial intelligence algorithms in the in-depth analysis of medical imaging information, this paper reviews articles published in recent years from three perspectives: medical imaging analysis methods, clinical applications, and the development of medical imaging toward pathological and molecular prediction. We believe that AI-aided medical imaging analysis will contribute extensively to precise and efficient clinical decision-making.
Affiliation(s)
- Chong Zhang: Department of Big Data Management and Application, School of International Economics and Management, Beijing Technology and Business University, Beijing 100048, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Jionghui Gu, Yangyang Zhu, Zheling Meng, Tong Tong, Dongyang Li, Zhenyu Liu, Yang Du, and Kun Wang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Jie Tian: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing 100191, China
30.
Ahmed HAK, Farghaly Amin M. Impact of lung-RADS classification system on the accurate diagnosis of pulmonary nodular lesions in oncology patients. The Egyptian Journal of Radiology and Nuclear Medicine 2021. [DOI: 10.1186/s43055-021-00551-9]
Abstract
Background
Lung assessment is highly recommended in the management of oncology patients, as the lung is the commonest site of metastatic dissemination. Low-dose CT with a nodule reporting system based on the Lung Reporting and Data System (Lung-RADS) is a promising non-invasive tool for the characterization of incidentally detected pulmonary nodules. The authors aimed to assess the accuracy of the Lung-RADS classification system as a non-invasive tool for the characterization of newly developed pulmonary nodules among oncology patients. Ethics committee approval and informed written consent were obtained from the studied patients. A non-contrast low-dose CT study was performed on all patients, with a nodule reporting system based on the Lung-RADS classification applied to evaluate each detected pulmonary nodule. Diagnoses were established with the help of either histopathology or clinical follow-up results as the gold standard.
Results
In this prospective study, we enrolled 187 patients with known malignancy and 200 suspicious newly developed pulmonary nodules. The mean patient age was 48.4 ± 9.7 years. The 200 pulmonary nodular lesions were categorized into 6 sub-groups using the Lung-RADS-based nodule reporting system; 122 lesions proved malignant and 78 benign. The system showed a high sensitivity of 92.08%, a specificity of 78.79%, and an accuracy of 85.50%, with a positive predictive value of 81.58% and a negative predictive value of 90.70%, in the diagnosis of pulmonary nodules in cancer patients.
Conclusion
Low-dose CT with a nodule reporting system based on the Lung-RADS classification was found to be an accurate non-invasive tool to characterize and risk stratify pulmonary nodules in oncology patients.
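The performance figures quoted above (sensitivity, specificity, accuracy, PPV, NPV) all derive from the same 2x2 confusion matrix. A small sketch with hypothetical counts, not the study's raw data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics reported in diagnostic studies."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for a 200-lesion cohort
m = diagnostic_metrics(tp=112, fp=14, tn=62, fn=12)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of malignancy in the cohort, so they do not transfer directly to populations with a different case mix.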
31.
Shan W, Guo J, Mao X, Zhang Y, Huang Y, Wang S, Li Z, Meng X, Zhang P, Wu Z, Wang Q, Liu Y, He K, Wang Y. Automated Identification of Skull Fractures With Deep Learning: A Comparison Between Object Detection and Segmentation Approach. Front Neurol 2021; 12:687931. [PMID: 34777193] [PMCID: PMC8585755] [DOI: 10.3389/fneur.2021.687931]
Abstract
Objective: Skull fractures caused by head trauma can lead to life-threatening complications, so timely and accurate identification of fractures is of great importance. This study aims to develop a deep learning system for the automated identification of skull fractures from cranial computed tomography (CT) scans. Method: This study retrospectively analyzed CT scans of 4,782 patients (median age, 54 years; 2,583 males, 2,199 females; development set: n = 4,168, test set: n = 614) diagnosed with skull fractures between September 2016 and September 2020. Additional data from 7,856 healthy people were included in the analysis to reduce the probability of false detection. Skull fractures in all scans were manually labeled by seven experienced neurologists. Two deep learning approaches were developed and tested for the identification of skull fractures. In the first approach, fracture identification was treated as an object detection problem, and a YOLOv3 network was trained to identify all instances of skull fracture. In the second approach, the task was treated as a segmentation problem, and a modified attention U-Net was trained to segment all voxels representing skull fracture. The developed models were tested on an external test set of 235 patients (93 with, and 142 without, skull fracture). Results: On the external test set, YOLOv3 achieved an average fracture detection sensitivity and specificity of 80.64% and 85.92%, respectively. On the same dataset, the modified attention U-Net achieved a fracture detection sensitivity and specificity of 82.80% and 88.73%, respectively. Conclusion: Deep learning methods can identify skull fractures with good sensitivity. The segmentation approach to fracture identification may achieve better results.
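For the segmentation approach, a per-patient fracture call still has to be derived from the voxel-level output. One plausible aggregation rule, shown here purely for illustration (the paper does not spell out its exact rule, and the threshold and cluster size below are assumptions), is to flag a patient once enough voxels exceed a probability threshold:

```python
def patient_has_fracture(voxel_probs, prob_threshold=0.5, min_voxels=10):
    """Aggregate per-voxel fracture probabilities into a patient-level call.
    Requiring a minimum number of positive voxels suppresses isolated
    false-positive voxels."""
    positive = sum(1 for p in voxel_probs if p >= prob_threshold)
    return positive >= min_voxels

# 10,000 hypothetical voxel probabilities: mostly background noise plus a
# small high-probability region.
probs = [0.01] * 9980 + [0.9] * 20
print(patient_has_fracture(probs))  # True
```

The detection approach (YOLOv3) sidesteps this step, since each predicted box already carries an instance-level confidence.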
Affiliation(s)
- Wei Shan: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; National Center for Clinical Medicine of Neurological Diseases, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
- Jianwei Guo: Department of Orthopedics, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xuewei Mao: Shandong Key Laboratory of Industrial Control Technology, School of Automation, Qingdao University, Qingdao, China
- Yulei Zhang, Yikun Huang, Shuai Wang, Zixiao Li, Xia Meng, Pingye Zhang, Zhenzhou Wu, and Yaou Liu: National Center for Clinical Medicine of Neurological Diseases, Beijing, China
- Qun Wang: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; National Center for Clinical Medicine of Neurological Diseases, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
- Kunlun He: Laboratory of Translational Medicine, Chinese PLA General Hospital, Beijing, China; Key Laboratory of Ministry of Industry and Information Technology of Biomedical Engineering and Translational Medicine, Chinese PLA General Hospital, Beijing, China
- Yongjun Wang: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; National Center for Clinical Medicine of Neurological Diseases, Beijing, China
|
32
|
Yan Y, Yao XJ, Wang SH, Zhang YD. A Survey of Computer-Aided Tumor Diagnosis Based on Convolutional Neural Network. BIOLOGY 2021; 10:biology10111084. [PMID: 34827077 PMCID: PMC8615026 DOI: 10.3390/biology10111084] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 10/19/2021] [Accepted: 10/20/2021] [Indexed: 01/10/2023]
Abstract
Simple Summary One of the most active areas in deep learning is computerized tumor diagnosis and treatment, which frequently involves the identification of tumor markers, the delineation of tumor growth activity, and the staging of various tumor types. Several deep learning models based on convolutional neural networks offer high performance and accurate identification, with the potential to improve medical tasks. Owing to breakthroughs in computer algorithms and hardware, intelligent algorithms applied to medical images can achieve a diagnostic accuracy that doctors cannot match for some diseases. This paper reviews the progress of tumor detection from traditional computer-aided methods to convolutional neural networks and uses practical cases to demonstrate the potential of convolutional neural networks to move detection models from experiment to clinical application. Abstract Tumors are abnormal tissue growths that are harmful to human health, and malignant tumors are among the main diseases that seriously affect human health and threaten human life. For cancer treatment, early detection of pathological features is essential to reduce cancer mortality effectively. Traditional diagnostic methods include routine laboratory tests of the patient's secretions and serum, as well as immune and genetic tests. Commonly used clinical imaging examinations include X-ray, CT, MRI, and SPECT scans. As new problems of radiation noise reduction have emerged, medical image denoising technology has been increasingly investigated. At the same time, doctors often need to rely on clinical experience and academic background knowledge in the follow-up diagnosis of lesions, and advancing clinical diagnostic technology remains challenging. These medical needs have motivated research on medical imaging technology and computer-aided diagnosis.
The advantages of convolutional neural networks in tumor diagnosis are increasingly evident, and computer-aided diagnosis based on medical images of tumors has become a major focus in the field. Neural networks are commonly used to investigate intelligent methods for assisting medical image diagnosis and have made significant progress. This paper introduces the traditional methods of computer-aided tumor diagnosis, covers the segmentation and classification of tumor images as well as CNN-based diagnostic methods that help doctors identify tumors, and provides a reference for developing CNN-based computer-aided systems for tumor detection in the future.
|
33
|
Li J, Liu J, Wang Y, He Y, Liu K, Raghunathan R, Shen SS, He T, Yu X, Danforth R, Zheng F, Zhao H, Wong STC. Artificial intelligence-augmented, label-free molecular imaging method for tissue identification, cancer diagnosis, and cancer margin detection. BIOMEDICAL OPTICS EXPRESS 2021; 12:5559-5582. [PMID: 34692201 PMCID: PMC8515981 DOI: 10.1364/boe.428738] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 06/17/2021] [Accepted: 06/28/2021] [Indexed: 06/13/2023]
Abstract
Label-free high-resolution molecular and cellular imaging strategies for intraoperative use are much needed but not yet available. To fill this void, we developed an artificial intelligence-augmented molecular vibrational imaging method that integrates label-free, subcellular-resolution coherent anti-Stokes Raman scattering (CARS) imaging with real-time quantitative image analysis via deep learning (artificial intelligence-augmented CARS, or iCARS). The aim of this study was to evaluate the capability of the iCARS system to identify and differentiate the parathyroid gland and recurrent laryngeal nerve (RLN) from surrounding tissues and to detect cancer margins. This goal was successfully met.
Affiliation(s)
- Jiasong Li
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- These authors contributed equally to this work
- Jun Liu
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Department of Breast-thyroid-vascular Surgery, Shanghai General Hospital, Shanghai Jiao Tong University, 201620, Shanghai, China
- These authors contributed equally to this work
- Ye Wang
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Department of Breast-thyroid-vascular Surgery, Shanghai General Hospital, Shanghai Jiao Tong University, 201620, Shanghai, China
- These authors contributed equally to this work
- Yunjie He
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Kai Liu
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Raksha Raghunathan
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Steven S. Shen
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
- Tiancheng He
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Xiaohui Yu
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Rebecca Danforth
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Feibi Zheng
- Department of Surgery, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
- Hong Zhao
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Stephen T. C. Wong
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Weill Cornell Medicine, Houston, TX 77030, USA
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
- Department of Radiology, Houston Methodist Hospital, Weill Cornell Medicine, Houston, TX 77030, USA
|
34
|
Shakir H, Khan T, Rasheed H, Deng Y. Radiomics Based Bayesian Inversion Method for Prediction of Cancer and Pathological Stage. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2021; 9:4300208. [PMID: 34522470 PMCID: PMC8428789 DOI: 10.1109/jtehm.2021.3108390] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 07/23/2021] [Accepted: 08/13/2021] [Indexed: 01/10/2023]
Abstract
OBJECTIVE To develop a Bayesian inversion framework on longitudinal chest CT scans which can perform efficient multi-class classification of lung cancer. METHODS While the unavailability of large numbers of training medical images impedes the performance of lung cancer classifiers, purpose-built deep networks have also not performed well in multi-class classification. The presented framework employs a particle filtering approach to address the non-linear behaviour of radiomic features across benign and cancerous (stage I, II, III, IV) nodules and performs efficient multi-class classification (benign, early-stage cancer, advanced-stage cancer) in terms of a posterior probability function. A joint likelihood function incorporating diagnostic radiomic features is formulated which can compute the likelihood of cancer and its pathological stage. The study also investigates and validates diagnostic features to discriminate accurately between early-stage (I, II) and advanced-stage (III, IV) cancer. RESULTS The proposed stochastic framework achieved 86% accuracy on the benchmark database, better than other prominent cancer detection methods. CONCLUSION The presented classification framework can aid radiologists in accurate interpretation of lung CT images at an early stage and can lead to timely medical treatment of cancer patients.
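The core of a Bayesian inversion framework is turning per-class likelihoods of observed features into a posterior probability over classes. A minimal sketch of that step for the three classes above; the prior and likelihood values are invented for illustration, and the paper's actual joint likelihood over longitudinal radiomic features is far richer:

```python
# Bayes' rule over three nodule classes: p(c | x) ∝ p(x | c) * p(c).
def posterior(prior: dict, likelihood: dict) -> dict:
    """Normalize prior-weighted likelihoods into a posterior distribution."""
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Hypothetical prior over classes and likelihood of the observed features.
prior = {"benign": 0.6, "early": 0.25, "advanced": 0.15}
likelihood = {"benign": 0.02, "early": 0.30, "advanced": 0.10}  # p(x | c)
post = posterior(prior, likelihood)
print(max(post, key=post.get))  # "early" wins despite the benign-heavy prior
```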
Affiliation(s)
- Hina Shakir
- Department of Electrical Engineering, Bahria University, Karachi 75620, Pakistan
- Tariq Khan
- Department of Electrical and Power Engineering, National University of Science and Technology, Islamabad 75350, Pakistan
- Haroon Rasheed
- Department of Electrical Engineering, Bahria University, Karachi 75620, Pakistan
- Yiming Deng
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
|
35
|
Sun R, Meng Z, Hou X, Chen Y, Yang Y, Huang G, Nie S. Prediction of breast cancer molecular subtypes using DCE-MRI based on CNNs combined with ensemble learning. Phys Med Biol 2021; 66. [PMID: 34330117 DOI: 10.1088/1361-6560/ac195a] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 07/30/2021] [Indexed: 12/15/2022]
Abstract
To design an ensemble-learning-based prediction model using different breast DCE-MR post-contrast sequence images to distinguish two breast cancer subtypes (luminal and non-luminal). We retrospectively studied preoperative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and molecular information from 266 breast cancer cases with either the luminal subtype (luminal A and luminal B) or the non-luminal subtype (human epidermal growth factor receptor 2 and triple negative). Multiple bounding boxes covering tumor lesions, determined by radiologists, were acquired from three series of post-contrast DCE-MR sequence images. Three baseline convolutional neural networks (CNNs) with the same architecture were then trained concurrently, followed by preliminary prediction of probabilities on the testing database. Finally, breast subtypes were classified and evaluated by fusing the predicted results from the three CNNs via ensemble learning based on weighted voting. Using 5-fold cross-validation (CV), the average prediction specificity, accuracy, precision and area under the ROC curve on the testing dataset for luminal versus non-luminal were 0.958, 0.852, 0.961, and 0.867, respectively, empirically demonstrating that the proposed ensemble model is highly reliable and robust. DCE-MR post-contrast sequence image analysis using the ensemble CNN model could thus be a valuable and extensible practical tool for breast molecular subtype identification.
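The fusion step described above can be sketched as a weighted vote over the three CNNs' per-case probabilities. The weights, probabilities, and threshold below are illustrative assumptions, not the paper's values:

```python
# Weighted-voting fusion of per-model probabilities for one lesion.
def weighted_vote(probs, weights, threshold=0.5):
    """Fuse model probabilities by weighted average, then threshold."""
    assert len(probs) == len(weights)
    fused = sum(p * w for p, w in zip(probs, weights)) / sum(weights)
    label = "luminal" if fused >= threshold else "non-luminal"
    return label, fused

# Hypothetical luminal-probabilities from the three post-contrast-sequence CNNs.
label, score = weighted_vote([0.62, 0.55, 0.40], [1.0, 1.0, 0.8])
print(label, round(score, 3))  # luminal 0.532
```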
Affiliation(s)
- Rong Sun
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
- Zijun Meng
- School of Information Engineering, China Jiliang University, Hangzhou, People's Republic of China
- Xuewen Hou
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
- Yang Chen
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
- Yifeng Yang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
- Gang Huang
- Shanghai University of Medicine and Health Sciences, Shanghai, People's Republic of China
- Shengdong Nie
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
|
36
|
Park YJ, Choi D, Choi JY, Hyun SH. Performance Evaluation of a Deep Learning System for Differential Diagnosis of Lung Cancer With Conventional CT and FDG PET/CT Using Transfer Learning and Metadata. Clin Nucl Med 2021; 46:635-640. [PMID: 33883488 DOI: 10.1097/rlu.0000000000003661] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE We aimed to evaluate the performance of a deep learning system for differential diagnosis of lung cancer with conventional CT and FDG PET/CT using transfer learning (TL) and metadata. METHODS A total of 359 patients with a lung mass or nodule who underwent noncontrast chest CT and FDG PET/CT prior to treatment were enrolled retrospectively. All pulmonary lesions were classified by pathology (257 malignant, 102 benign). Deep learning classification models based on ResNet-18 were developed using pretrained weights from the ImageNet dataset. We propose a deep TL model for differential diagnosis of lung cancer using CT imaging data together with metadata (SUVmax and lesion size) derived from PET/CT. The area under the receiver operating characteristic curve (AUC) of the deep learning model was measured as a performance metric and verified by 5-fold cross-validation. RESULTS The performance metrics of the conventional CT model were generally better than those of the CT-of-PET/CT model. Introducing metadata with SUVmax and lesion size derived from PET/CT into the baseline CT models improved the diagnostic performance of the CT-of-PET/CT model (AUC = 0.837 vs 0.762) and the conventional CT model (AUC = 0.877 vs 0.817). CONCLUSIONS Deep TL models with CT imaging data provide good diagnostic performance for lung cancer, and the conventional CT model showed overall better performance than the CT-of-PET/CT model. Metadata derived from PET/CT can improve the performance of deep learning systems.
Affiliation(s)
- Dongmin Choi
- Department of Computer Science, Yonsei University, Seoul, South Korea
- Joon Young Choi
- Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
- Seung Hyup Hyun
- Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul
|
37
|
Kawaguchi Y, Matsuura Y, Kondo Y, Ichinose J, Nakao M, Okumura S, Mun M. The predictive power of artificial intelligence on mediastinal lymphnode metastasis. Gen Thorac Cardiovasc Surg 2021; 69:1545-1552. [PMID: 34181182 DOI: 10.1007/s11748-021-01671-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 06/09/2021] [Indexed: 11/30/2022]
Abstract
OBJECTIVE The aim of this study was to create a preoperative artificial intelligence-based predictive model for mediastinal lymph node metastasis in surgically resected lung adenocarcinoma. METHODS We enrolled 301 patients with clinical stage N0-1 lung adenocarcinoma who underwent surgical resection and preoperative positron emission tomography between 2015 and 2019. We randomly assigned the patients into two groups: the training group (n = 201) and the validation group (n = 100). The training group was used to obtain basic data for learning by artificial intelligence, whereas the validation group was used to verify the constructed algorithm. We used an automated machine learning platform to create the artificial intelligence model. For comparison, multivariate analysis was performed in the training group, and significant predictive factors were then applied to the validation group to calculate and verify the prediction accuracy rate. RESULTS Of the 301 patients, 41 were diagnosed with mediastinal lymph node metastasis. In multivariate analysis, the maximum standardized uptake value was an individual predictive factor. The accuracy rate of the artificial intelligence model was 84%, and its specificity was 98%, both higher than those of the maximum standardized uptake value (61% and 57%). However, the sensitivity of the artificial intelligence model was markedly low, at 12%. CONCLUSIONS An artificial intelligence-based diagnostic algorithm showed remarkable specificity compared with the maximum standardized uptake value. Although the model is not ready for practical use and the results are preliminary because of the poor sensitivity, artificial intelligence could complement the shortcomings of existing diagnostic modalities.
Affiliation(s)
- Yohei Kawaguchi
- Department of Thoracic Surgical Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Yosuke Matsuura
- Department of Thoracic Surgical Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Yasuto Kondo
- Department of Thoracic Surgical Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Junji Ichinose
- Department of Thoracic Surgical Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Masayuki Nakao
- Department of Thoracic Surgical Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Sakae Okumura
- Department of Thoracic Surgical Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Mingyon Mun
- Department of Thoracic Surgical Oncology, The Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
|
38
|
Arumuga Maria Devi T, Mebin Jose VI. Three Stream Network Model for Lung Cancer Classification in the CT Images. OPEN COMPUTER SCIENCE 2021. [DOI: 10.1515/comp-2020-0145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Lung cancer is considered one of the deadliest diseases threatening human survival, and identifying it in its early stage from medical images is challenging because of ambiguity in the lung regions. This paper proposes a new architecture to detect lung cancer from CT images. The proposed architecture is a three-stream network that extracts manual and automated features from the images. Two of the streams perform automated feature extraction and classification using a residual deep neural network and a custom deep neural network, whereas the third stream uses handcrafted features obtained from high- and low-frequency sub-bands in the frequency domain, classified with a support vector machine classifier. This makes the architecture robust enough to capture all the important features required to classify lung cancer from the input image, greatly reducing the chance of missing feature information. Finally, all the obtained prediction scores are combined by weighted fusion. The experimental results show 98.2% classification accuracy, which is relatively higher than that of other existing methods.
|
39
|
Gao J, Jiang Q, Zhou B, Chen D. Lung Nodule Detection using Convolutional Neural Networks with Transfer Learning on CT Images. Comb Chem High Throughput Screen 2021; 24:814-824. [PMID: 32664836 DOI: 10.2174/1386207323666200714002459] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Revised: 02/06/2020] [Accepted: 05/21/2020] [Indexed: 11/22/2022]
Abstract
AIM AND OBJECTIVE Lung nodule detection is critical for improving the five-year survival rate and reducing mortality in patients with lung cancer. Numerous methods based on convolutional neural networks (CNNs) have been proposed for lung nodule detection in computed tomography (CT) images, and with continuing advances in computer hardware, detection accuracy and efficiency can still be improved. MATERIALS AND METHODS In this study, an automatic lung nodule detection method using CNNs with transfer learning is presented. We first compared three state-of-the-art CNN models, namely VGG16, VGG19 and ResNet50, to determine the most suitable model for lung nodule detection. We then utilized two different training strategies, namely freezing layers and fine-tuning, to illustrate the effectiveness of transfer learning. Furthermore, hyper-parameters of the CNN model such as the optimizer, batch size and number of epochs were optimized. RESULTS Evaluated on the Lung Nodule Analysis 2016 (LUNA16) challenge, promising results were achieved, with an accuracy of 96.86%, a precision of 91.10%, a sensitivity of 90.78%, a specificity of 98.13%, and an AUC of 99.37%. CONCLUSION Compared with other works, state-of-the-art specificity is obtained, which demonstrates that the proposed method is effective and applicable to lung nodule detection.
Affiliation(s)
- Jun Gao
- College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
- Qian Jiang
- College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
- Bo Zhou
- Shanghai University of Medicine & Health Science, Shanghai 201308, China
- Daozheng Chen
- College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
|
40
|
Abstract
Atrial fibrillation (AF) and ventricular arrhythmia (Arr) are among the most common and fatal cardiac arrhythmias in the world. Electrocardiogram (ECG) data collected as part of the UK Biobank represent an opportunity for analysis and classification of these two diseases in the UK. The main objective of our study is to investigate a two-stage model for the classification of individuals with AF and Arr in the UK Biobank dataset. The current literature addresses heart arrhythmia classification very extensively; however, the data used by most researchers lack enough instances of these common diseases. Moreover, by proposing the two-stage model and separating normal and abnormal cases, we have improved the performance of the classifiers in detecting each specific disease. Our approach consists of two stages of classification. In the first stage, features of the ECG input are classified into two main classes: normal and abnormal. In the second stage, the abnormal cases are further classified into the two diseases, AF and Arr. A diverse set of ECG features, such as the QRS duration, PR interval and RR interval, as well as covariates such as sex, BMI, age and other factors, are used in the modelling process. For both stages, we use the XGBoost classifier algorithm. The healthy population present in the data has been undersampled to tackle the class imbalance. This technique has been applied and evaluated using the UK Biobank resting-ECG repository. The main results of our paper are as follows: classification performance was measured using the F1 score, sensitivity (recall) and specificity, and the proposed system achieved 87.22%, 88.55% and 85.95% for average F1 score, average sensitivity and average specificity, respectively.
Contribution and significance: this performance level indicates that automatic detection of AF and Arr in UK Biobank participants is more precise and efficient when done in a two-stage manner. Automatic detection and classification of AF and Arr in this way would enable early diagnosis and help prevent more serious consequences later in patients' lives.
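The two-stage scheme above can be sketched as two chained decision functions: one separating normal from abnormal records, one splitting abnormal records into AF vs Arr. The threshold rules and feature names below stand in for the paper's two XGBoost classifiers and are purely illustrative:

```python
# Stage 1: flag the ECG as abnormal, e.g. via high RR-interval variability.
def stage1_is_abnormal(features: dict) -> bool:
    return features["rr_std_ms"] > 120  # hypothetical threshold

# Stage 2: among abnormal records, absent P waves suggest AF; otherwise Arr.
def stage2_label(features: dict) -> str:
    return "AF" if not features["p_wave_present"] else "Arr"

def classify(features: dict) -> str:
    """Chain the two stages: normal records never reach stage 2."""
    if not stage1_is_abnormal(features):
        return "normal"
    return stage2_label(features)

print(classify({"rr_std_ms": 40, "p_wave_present": True}))    # normal
print(classify({"rr_std_ms": 180, "p_wave_present": False}))  # AF
print(classify({"rr_std_ms": 150, "p_wave_present": True}))   # Arr
```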
|
41
|
Deep learning model for predicting gestational age after the first trimester using fetal MRI. Eur Radiol 2021; 31:3775-3782. [PMID: 33852048 DOI: 10.1007/s00330-021-07915-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 01/26/2021] [Accepted: 03/19/2021] [Indexed: 12/17/2022]
Abstract
OBJECTIVES To evaluate a deep learning model for predicting gestational age from fetal brain MRI acquired after the first trimester in comparison to biparietal diameter (BPD). MATERIALS AND METHODS Our Institutional Review Board approved this retrospective study, and a total of 184 T2-weighted MRI acquisitions from 184 fetuses (mean gestational age: 29.4 weeks) who underwent MRI between January 2014 and June 2019 were included. The reference standard gestational age was based on the last menstruation and ultrasonography measurements in the first trimester. The deep learning model was trained with T2-weighted images from 126 training cases and 29 validation cases. The remaining 29 cases were used as test data, with fetal age estimated by both the model and BPD measurement. The relationship between the estimated gestational age and the reference standard was evaluated with Lin's concordance correlation coefficient (ρc) and a Bland-Altman plot. The ρc was assessed with McBride's definition. RESULTS The ρc of the model prediction was substantial (ρc = 0.964), but the ρc of the BPD prediction was moderate (ρc = 0.920). Both the model and BPD predictions had greater differences from the reference standard at increasing gestational age. However, the upper limit of the model's prediction (2.45 weeks) was significantly shorter than that of BPD (5.62 weeks). CONCLUSIONS Deep learning can accurately predict gestational age from fetal brain MR acquired after the first trimester. KEY POINTS • The prediction of gestational age using ultrasound is accurate in the first trimester but becomes inaccurate as gestational age increases. • Deep learning can accurately predict gestational age from fetal brain MRI acquired in the second and third trimester. • Prediction of gestational age by deep learning may have benefits for prenatal care in pregnancies that are underserved during the first trimester.
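Lin's concordance correlation coefficient, the agreement metric this study relies on, combines precision and accuracy into one number: rho_c = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A pure-Python sketch using population moments; the sample arrays are invented for illustration, not the study's data:

```python
# Lin's concordance correlation coefficient (rho_c).
def lins_ccc(x, y):
    """Agreement between paired measurements; 1.0 means perfect concordance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n            # population variance of x
    vy = sum((b - my) ** 2 for b in y) / n            # population variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

truth = [22.0, 26.5, 30.0, 34.5, 38.0]   # hypothetical reference ages (weeks)
pred  = [22.4, 26.0, 30.3, 34.0, 38.6]   # hypothetical model estimates
print(round(lins_ccc(truth, pred), 3))   # 0.997
```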
|
42
|
Calheiros JLL, de Amorim LBV, de Lima LL, de Lima Filho AF, Ferreira Júnior JR, de Oliveira MC. The Effects of Perinodular Features on Solid Lung Nodule Classification. J Digit Imaging 2021; 34:798-810. [PMID: 33791910 DOI: 10.1007/s10278-021-00453-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2020] [Revised: 02/11/2021] [Accepted: 03/22/2021] [Indexed: 12/09/2022] Open
Abstract
Lung cancer is the most lethal malignant neoplasm worldwide, with an estimated 1.8 million deaths annually. Computed tomography has been widely used to detect and diagnose lung cancer, but diagnosis remains an intricate and challenging task, even for experienced radiologists. Computer-aided diagnosis and radiomics tools have provided support to the radiologist's decision, acting as a second opinion. The main focus of these tools has been to analyze the intranodular zone; nevertheless, recent works indicate that the interaction between the nodule and its surroundings (the perinodular zone) could be relevant to the diagnostic process. However, only a few works have investigated specific attributes of the perinodular zone and shown how important they are in the classification of lung nodules. In this context, the purpose of this work is to evaluate the impact of using the perinodular zone on the characterization of lung lesions. Motivated by reproducible research, we used a large public dataset of solid lung nodule images and extracted fine-tuned radiomic attributes from the perinodular and intranodular zones. Our best-evaluated model obtained an average AUC of 0.916, an accuracy of 84.26%, a sensitivity of 84.45%, and a specificity of 83.84%. Combining attributes from the perinodular and intranodular zones improved all the metrics analyzed when compared to intranodular-only characterization. Therefore, our results highlight the importance of using the perinodular zone in classifying solid pulmonary nodules.
Affiliation(s)
- Lucas Lins de Lima
- Computing Institute, Federal University of Alagoas (UFAL), Maceió, AL, Brazil
|
43
|
Shivakumar N, Chandrashekar A, Handa AI, Lee R. Use of deep learning for detection, characterisation and prediction of metastatic disease from computerised tomography: a systematic review. Postgrad Med J 2021; 98:e20. [PMID: 33688072 DOI: 10.1136/postgradmedj-2020-139620] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 02/08/2021] [Accepted: 02/20/2021] [Indexed: 11/16/2022]
Abstract
CT is widely used for diagnosis, staging and management of cancer. The presence of metastasis has significant implications on treatment and prognosis. Deep learning (DL), a form of machine learning, where layers of programmed algorithms interpret and recognise patterns, may have a potential role in CT image analysis. This review aims to provide an overview on the use of DL in CT image analysis in the diagnostic evaluation of metastatic disease. A total of 29 studies were included which could be grouped together into three areas of research: the use of deep learning on the detection of metastatic disease from CT imaging, characterisation of lesions on CT into metastasis and prediction of the presence or development of metastasis based on the primary tumour. In conclusion, DL in CT image analysis could have a potential role in evaluating metastatic disease; however, prospective clinical trials investigating its clinical value are required.
Affiliation(s)
- Natesh Shivakumar
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Anirudh Chandrashekar
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Ashok Inderraj Handa
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
- Regent Lee
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, Oxfordshire, UK
44
Improvement in the Convolutional Neural Network for Computed Tomography Images. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11041505]
Abstract
Background and purpose. This study evaluated a modified, specialized convolutional neural network (CNN) to improve the accuracy of medical image classification. Materials and Methods. We defined computed tomography (CT) images as belonging to one of the following 10 classes: head, neck, chest, abdomen, and pelvis, each with and without contrast media, with 10,000 images per class. We modified a CNN based on AlexNet with an input size of 512 × 512, resizing the filter sizes of the convolution and max-pooling layers. Using these modified CNNs, various models were created and evaluated. The improved CNN was also evaluated on classifying the presence or absence of the pancreas in CT images. We compared the overall accuracy, calculated from images not used for training, to that of ResNet. Results. The overall accuracies of the most improved CNN and ResNet on the 10 classes were 94.8% and 89.3%, respectively. The filter sizes of the improved CNN's convolution layers were (13, 13), (7, 7), (5, 5), (5, 5), and (5, 5) in order from the first layer, and that of max-pooling was (7, 7). The calculation times of the most improved CNN and ResNet were 56 and 120 min, respectively. For the classification of the pancreas, the overall accuracies of the most improved CNN and ResNet were 75.75% and 58.25%, respectively, with calculation times of 36 and 55 min. Conclusion. By optimizing the filter sizes of the convolution and max-pooling layers for 512 × 512 images, we quickly obtained a highly accurate medical image classification model. This improved CNN can be useful for classifying lesions and anatomy in related diagnostic-aid applications.
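The effect of resizing filters on a 512 × 512 input can be checked with the standard convolution output-size arithmetic. The abstract reports only the kernel sizes, so the strides below are hypothetical; the sketch demonstrates the formula rather than the exact architecture.

```python
def conv_out(size: int, kernel: int, stride: int = 1, pad: int = 0) -> int:
    """Spatial output size of a conv/pool layer: floor((W + 2P - K) / S) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

# Hypothetical walk-through using the reported kernel sizes with assumed strides.
size = 512
for name, kernel, stride in [("conv1", 13, 4), ("pool1", 7, 2),
                             ("conv2", 7, 1), ("conv3", 5, 1)]:
    size = conv_out(size, kernel, stride)
    print(name, size)
```

The same arithmetic shows why an AlexNet designed for 224 × 224 inputs needs larger first-layer filters (or strides) to keep feature maps at a manageable size when fed 512 × 512 images.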
45
Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. Mobile Networks and Applications 2021; 26:351-380. [DOI: 10.1007/s11036-020-01672-7]
46
Abstract
Lung cancer is one of the most common diseases among humans and a major cause of mortality. Medical experts believe that diagnosing lung cancer at an early stage, through the depiction of lung nodules on computed tomography (CT) screening, can reduce deaths. However, CT scans contain a tremendous amount of information about nodules, and the growing number of images makes their accurate assessment a very challenging task for radiologists. Recently, various methods based on handcrafted and learned features have been developed to assist radiologists. In this paper, we review promising approaches developed for computer-aided diagnosis (CAD) systems that detect and classify nodules through the analysis of CT images, and we present a comprehensive analysis of the different methods.
Affiliation(s)
- Shailesh Kumar Thakur
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Dhirendra Pratap Singh
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
- Jaytrilok Choudhary
- Computer Science and Engineering, Maulana Azad National Institute of Technology, Bhopal, India
47
Ye FY, Lyu GR, Li SQ, You JH, Wang KJ, Cai ML, Su QC. Diagnostic Performance of Ultrasound Computer-Aided Diagnosis Software Compared with That of Radiologists with Different Levels of Expertise for Thyroid Malignancy: A Multicenter Prospective Study. Ultrasound Med Biol 2021; 47:114-124. [PMID: 33239154] [DOI: 10.1016/j.ultrasmedbio.2020.09.019]
Abstract
The aim of the work described here was to evaluate the diagnostic performance of ultrasound thyroid computer-aided diagnosis (CAD) software. This multicenter prospective study included 494 patients (565 thyroid nodules) who underwent surgery or biopsy after ultrasonography at four hospitals from January 2019 to September 2019. The diagnostic performance metrics of the different readers were calculated and compared with the pathologic results. The sensitivity of CAD was outstanding and equivalent to that of a senior radiologist (90.51% vs. 88.47%, p > 0.05). The area under the curve of CAD was equivalent to that of a junior radiologist (0.748 vs. 0.739, p > 0.05). However, the specificity was only 49.63%, lower than those of the three radiologists (75.56%, 85.93% and 90.37% for the junior, intermediate and senior radiologists, respectively). The diagnostic performance of the junior radiologist was significantly improved with the aid of CAD (junior + CAD): the sensitivity and area under the curve improved from 72.20% to 89.93% and from 0.739 to 0.816, respectively (both p-values < 0.05), and the positive predictive value, negative predictive value and κ coefficient improved from 76.3% to 78.6%, 82.0% to 86.8% and 0.394 to 0.511, respectively. Though specificity decreased slightly from 75.56% to 73.33%, the difference was not statistically significant (p > 0.05). In general, the clinical application value of CAD is promising, and its instrumental value for junior radiologists is significant.
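The κ coefficients reported above measure reader agreement with pathology beyond chance, and can be reproduced from a 2 × 2 table with the standard Cohen's kappa formula. The counts in the test below are made up for illustration, not taken from the study.

```python
def cohens_kappa(tp: int, fp: int, fn: int, tn: int) -> float:
    """Cohen's kappa from a 2x2 agreement table:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)
```

Unlike raw accuracy, kappa is 0 for chance-level agreement, which is why it is a common companion to sensitivity/specificity in reader studies like this one.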
Affiliation(s)
- Feng-Ying Ye
- Department of Ultrasound, Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Guo-Rong Lyu
- Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, China
- Shang-Qing Li
- Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, China
- Jian-Hong You
- Department of Ultrasound, Zhongshan Hospital Affiliated to Xiamen University, Xiamen, China
- Kang-Jian Wang
- Department of Ultrasound, Zhangzhou Affiliated Hospital of Fujian Medical University, Zhangzhou, China
- Ming-Li Cai
- Department of Ultrasound, Jinjiang City Hospital, Jinjiang, China
- Qi-Chen Su
- Department of Ultrasound, Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
48
Lee J, Nishikawa RM. Cross-organ, cross-modality transfer learning: feasibility study for segmentation and classification. IEEE Access 2020; 8:210194-210205. [PMID: 33680628] [PMCID: PMC7935042] [DOI: 10.1109/access.2020.3038909]
Abstract
We conducted two analyses comparing the transferability of a traditionally transfer-learned CNN (TL) with that of a CNN first fine-tuned on an unrelated set of medical images (mammograms in this study) and then fine-tuned a second time on the target task, which we call the cross-organ, cross-modality transfer-learned (XTL) network, on 1) multiple sclerosis (MS) segmentation of brain magnetic resonance (MR) images and 2) tumor malignancy classification of multi-parametric prostate MR images. We used 2133 screening mammograms and two public challenge datasets (longitudinal MS lesion segmentation and ProstateX) as the intermediate and target datasets for XTL, respectively. We used two CNN architectures as basis networks for each analysis and fine-tuned them to match the target image types (volumetric) and tasks (segmentation and classification). We evaluated the XTL networks against the traditional TL networks using the Dice coefficient and AUC as figures of merit for the two analyses, respectively. For the segmentation test, XTL networks outperformed TL networks in terms of Dice coefficient (0.72 vs. 0.70-0.71, p-value < 0.0001 for the difference). For the classification test, XTL networks (AUCs = 0.77-0.80) outperformed TL networks (AUCs = 0.73-0.75); the difference in AUCs (0.045-0.047) was statistically significant (p-value < 0.03). We showed that XTL using mammograms improves network performance compared to traditional TL, despite the differences in image characteristics (x-ray vs. MRI, 2D vs. 3D) and imaging tasks (classification vs. segmentation for one of the tasks).
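The two-stage fine-tuning idea behind XTL can be sketched with a toy numpy model: gradient descent on a linear task stands in for "fine-tuning a CNN", and an intermediate task sharing structure with the target plays the role of the mammograms. Everything here (the linear model, the data, the step counts) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

def fine_tune(X, y, w_init, lr=0.1, steps=20):
    """Toy 'fine-tuning': gradient descent on squared error from a given init."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
w_true = np.ones(5)                                   # structure shared across tasks
X_mid = rng.normal(size=(200, 5)); y_mid = X_mid @ w_true   # intermediate task ("mammograms")
X_tgt = rng.normal(size=(30, 5));  y_tgt = X_tgt @ w_true   # small target task

w0 = np.zeros(5)                                      # "generic pretrained" starting point
w_tl  = fine_tune(X_tgt, y_tgt, w0)                   # traditional TL: init -> target
w_mid = fine_tune(X_mid, y_mid, w0, steps=300)        # XTL stage 1: init -> intermediate
w_xtl = fine_tune(X_tgt, y_tgt, w_mid)                # XTL stage 2: intermediate -> target
```

With a fixed budget on the small target task, the model that first visited the related intermediate task ends up closer to the shared structure, which is the intuition for why XTL beat TL in the study.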
Affiliation(s)
- Juhun Lee
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213 USA
49
Morid MA, Borjali A, Del Fiol G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 2020; 128:104115. [PMID: 33227578] [DOI: 10.1016/j.compbiomed.2020.104115]
Abstract
OBJECTIVE Employing transfer learning (TL) with convolutional neural networks (CNNs) well trained on the non-medical ImageNet dataset has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of problem description, input, methodology, and outcome. MATERIALS AND METHODS To identify relevant studies, MEDLINE, IEEE, and the ACM digital library were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. RESULTS After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%) and tooth (57%) studies. AlexNet for brain (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used in studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal-system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). 35% of the studies compared their model with other well-trained CNN models, and 33% provided visualization for interpretation. DISCUSSION This study identified the most prevalent implementation tracks in the literature for data preparation, methodology selection, and output evaluation across various medical image analysis tasks. We also identified several critical research gaps in TL studies on medical image analysis. The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as to identify research gaps and opportunities for innovation.
Affiliation(s)
- Mohammad Amin Morid
- Department of Information Systems and Analytics, Leavey School of Business, Santa Clara University, Santa Clara, CA, USA
- Alireza Borjali
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA; Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA
50
Nishio M, Koyasu S, Noguchi S, Kiguchi T, Nakatsu K, Akasaka T, Yamada H, Itoh K. Automatic detection of acute ischemic stroke using non-contrast computed tomography and two-stage deep learning model. Comput Methods Programs Biomed 2020; 196:105711. [PMID: 32858281] [DOI: 10.1016/j.cmpb.2020.105711]
Abstract
BACKGROUND AND OBJECTIVE It is currently challenging to detect acute ischemic stroke (AIS)-related changes on computed tomography (CT) images. We therefore aimed to develop and evaluate an automatic AIS detection system involving a two-stage deep learning model. METHODS We included 238 cases from two different institutions. AIS-related findings were annotated on each of the 238 sets of head CT images by referring to head magnetic resonance imaging (MRI) examinations performed within 24 h of the CT scan. These 238 annotated cases were divided into a training set of 189 cases and a test set of 49 cases. A two-stage deep learning detection model was constructed from the training set using the You Only Look Once v3 (YOLOv3) model and the Visual Geometry Group 16 (VGG16) classification model, and was then applied to the test set. To assess the detection model's results, a board-certified radiologist also evaluated the test-set head CT images with and without the aid of the detection model. The sensitivity of AIS detection and the number of false positives were calculated to evaluate the test-set detection results, and the radiologist's sensitivity with and without the software's detection results was compared using the McNemar test; a p-value of less than 0.05 was considered statistically significant. RESULTS For the two-stage model and the radiologist without and with the software's results, the sensitivity was 37.3%, 33.3%, and 41.3%, respectively, and the number of false positives per case was 1.265, 0.327, and 0.388, respectively. Using the two-stage detection model's results, the radiologist's detection sensitivity improved significantly (p-value = 0.0313). CONCLUSIONS Our detection system involving the two-stage deep learning model significantly improved the radiologist's sensitivity in AIS detection.
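The metrics above (sensitivity and false positives per case) come from matching predicted boxes to annotated findings. A minimal evaluation sketch is below; the greedy matching rule and the IoU ≥ 0.5 threshold are common conventions assumed here, not necessarily the study's exact protocol.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def evaluate(cases, iou_thr=0.5):
    """cases: list of (gt_boxes, pred_boxes) per scan.
    Returns (sensitivity, false positives per case) via greedy IoU matching."""
    tp = fp = n_gt = 0
    for gts, preds in cases:
        n_gt += len(gts)
        matched = set()
        for p in preds:
            hit = next((i for i, g in enumerate(gts)
                        if i not in matched and iou(p, g) >= iou_thr), None)
            if hit is None:
                fp += 1
            else:
                matched.add(hit)
                tp += 1
    return (tp / n_gt if n_gt else 0.0), fp / len(cases)
```

In a two-stage system of this kind, the second-stage classifier prunes detector proposals before this evaluation, trading a little sensitivity for fewer false positives per case.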
Affiliation(s)
- Mizuho Nishio
- Department of Radiology, Kobe University Hospital, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017, Japan
- Sho Koyasu
- Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904, Japan; Department of Diagnostic Radiology, Ichinomiya Nishi Hospital, 1-Hira Kaimei, Ichinomiya, Aichi 494-0001, Japan
- Shunjiro Noguchi
- Department of Radiology, Osaka Red Cross Hospital, 5-30 Fudegasakicho, Tennoji-ku, Osaka 543-8555, Japan
- Takao Kiguchi
- Department of Diagnostic Radiology, Ichinomiya Nishi Hospital, 1-Hira Kaimei, Ichinomiya, Aichi 494-0001, Japan
- Kanako Nakatsu
- Department of Radiology, Osaka Red Cross Hospital, 5-30 Fudegasakicho, Tennoji-ku, Osaka 543-8555, Japan
- Thai Akasaka
- Department of Radiology, Osaka Red Cross Hospital, 5-30 Fudegasakicho, Tennoji-ku, Osaka 543-8555, Japan
- Hiroki Yamada
- Department of Diagnostic Radiology, Ichinomiya Nishi Hospital, 1-Hira Kaimei, Ichinomiya, Aichi 494-0001, Japan
- Kyo Itoh
- Department of Radiology, Osaka Red Cross Hospital, 5-30 Fudegasakicho, Tennoji-ku, Osaka 543-8555, Japan