1. Peng Z, Wang X, Li J, Sun J, Wang Y, Li Y, Li W, Zhang S, Wang X, Pei Z. Comparative bibliometric analysis of artificial intelligence-assisted polyp diagnosis and AI-assisted digestive endoscopy: trends and growth in AI gastroenterology (2003-2023). Front Med (Lausanne) 2024; 11:1438979. PMID: 39359927; PMCID: PMC11445022; DOI: 10.3389/fmed.2024.1438979.
Abstract
Introduction: Artificial intelligence (AI) is already widely used in gastroenterology. This study comprehensively evaluates research hotspots and development trends in AI for gastroenterology by applying bibliometric techniques to the geographical distribution, authorship, affiliated institutions, keywords, references, and other pertinent data in relevant publications.
Methods: The study compiled all publications on AI in the context of gastrointestinal polyps and digestive endoscopy from 2003 to 2023 in the Web of Science Core Collection database, and used CiteSpace, VOSviewer, GraphPad Prism, and Scimago Graphica for visual data analysis. In total, 2,394 documents were retrieved in the field of AI in digestive endoscopy and 628 documents specifically related to AI in digestive tract polyps.
Results: The United States and China are the primary contributors to research in both fields. Since 2019, studies on AI for digestive tract polyps have constituted approximately 25% of all AI digestive endoscopy studies annually. Six of the 10 most-cited studies in AI digestive endoscopy also rank among the 10 most-cited studies in AI for gastrointestinal polyps. The number of studies on AI-assisted polyp segmentation is growing fastest, with significant increases in AI-assisted polyp diagnosis and real-time systems beginning after 2020.
Discussion: The application of AI in gastroenterology has attracted increasing attention. As theoretical advances have accumulated, real-time diagnosis and detection of gastrointestinal diseases have become feasible in recent years, highlighting the promising potential of AI in this field.
Affiliation(s)
- Ziye Peng: Medical School, Tianjin University, Tianjin, China
- Xiangyu Wang: Medical School, Tianjin University, Tianjin, China
- Jiaxin Li: Medical School, Tianjin University, Tianjin, China
- Jiayi Sun: Department of Endoscopy, Tianjin Union Medical Center, Tianjin, China
- Yuwei Wang: Department of Endoscopy, Tianjin Union Medical Center, Tianjin, China
- Yanru Li: Department of Endoscopy, Tianjin Union Medical Center, Tianjin, China
- Wen Li: Department of Endoscopy, Tianjin Union Medical Center, Tianjin, China
- Shuyi Zhang: Department of Endoscopy, Tianjin Union Medical Center, Tianjin, China
- Ximo Wang: Tianjin Third Central Hospital, Tianjin, China
- Zhengcun Pei: Medical School, Tianjin University, Tianjin, China
2. Yousefpanah K, Ebadi MJ, Sabzekar S, Zakaria NH, Osman NA, Ahmadian A. An emerging network for COVID-19 CT-scan classification using an ensemble deep transfer learning model. Acta Trop 2024; 257:107277. PMID: 38878849; DOI: 10.1016/j.actatropica.2024.107277.
Abstract
Over the past few years, the widespread outbreak of COVID-19 has caused the death of millions of people worldwide. Early diagnosis of the virus is essential to control its spread and provide timely treatment. Artificial intelligence methods are often used as powerful tools for diagnosing COVID-19 from computed tomography (CT) samples. In this paper, artificial intelligence-based methods are introduced to diagnose COVID-19. First, a network called CT6-CNN is designed; then two ensemble deep transfer learning models are developed based on Xception, ResNet-101, DenseNet-169, and CT6-CNN to diagnose COVID-19 from CT samples. The publicly available SARS-CoV-2 CT dataset, comprising 2,481 CT scans, is used for the implementation; it is separated into 2,108 training, 248 validation, and 125 test images. In the experiments, the CT6-CNN model achieved 94.66% accuracy, 94.67% precision, 94.67% sensitivity, and a 94.65% F1-score, while the ensemble learning models reached 99.2% accuracy. These results affirm the effectiveness of the designed models, especially the ensemble deep learning models, for diagnosing COVID-19.
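The abstract does not reproduce the authors' fusion code; as a rough illustration of how an ensemble might combine the per-model class probabilities of networks such as Xception, ResNet-101, DenseNet-169, and CT6-CNN, a soft-voting sketch is shown below (the function name and uniform weighting are assumptions, not the paper's implementation):

```python
def ensemble_predict(prob_outputs, weights=None):
    """Soft-voting fusion: average the class-probability vectors from
    several models and pick the highest-scoring class."""
    n_models = len(prob_outputs)
    if weights is None:
        weights = [1.0 / n_models] * n_models
    n_classes = len(prob_outputs[0])
    avg = [sum(w * p[c] for w, p in zip(weights, prob_outputs))
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Three hypothetical models scoring one CT scan as [COVID, non-COVID]:
label, scores = ensemble_predict([[0.9, 0.1], [0.6, 0.4], [0.8, 0.2]])
```

Averaging calibrated probabilities (rather than majority-voting hard labels) lets a confident model outvote two uncertain ones, which is one reason ensembles of diverse backbones often beat any single member.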
Affiliation(s)
- M J Ebadi: Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186, Roma, Italy
- Sina Sabzekar: Civil Engineering Department, Sharif University of Technology, Tehran, Iran
- Nor Hidayati Zakaria: Azman Hashim International Business School, Universiti Teknologi Malaysia, Kuala Lumpur, 54100, Malaysia
- Nurul Aida Osman: Computer and Information Sciences Department, Faculty of Science and Information Technology, Universiti Teknologi Petronas, Malaysia
- Ali Ahmadian: Decisions Lab, Mediterranea University of Reggio Calabria, Reggio Calabria, Italy; Faculty of Engineering and Natural Sciences, Istanbul Okan University, Istanbul, Turkey
3. Talib MA, Afadar Y, Nasir Q, Nassif AB, Hijazi H, Hasasneh A. A tree-based explainable AI model for early detection of Covid-19 using physiological data. BMC Med Inform Decis Mak 2024; 24:179. PMID: 38915001; PMCID: PMC11194929; DOI: 10.1186/s12911-024-02576-2.
Abstract
With the outbreak of COVID-19 in 2020, countries worldwide faced significant challenges, and various studies emerged that apply Artificial Intelligence (AI) and data science techniques to disease detection. Although COVID-19 cases have declined, cases and deaths still occur around the world, so early detection of COVID-19 before the onset of symptoms remains crucial to reducing its impact. Wearable devices such as smartwatches have proven to be valuable sources of physiological data, including heart rate (HR) and sleep quality, enabling the detection of inflammatory diseases. In this study, we use an existing dataset of individual step counts and heart rate data to predict the probability of COVID-19 infection before the onset of symptoms. We train three main model architectures: a Gradient Boosting (GB) classifier, CatBoost trees, and a TabNet classifier, and compare their performance on the physiological data. We also add an interpretability layer to our best-performing model, which clarifies prediction results and allows a detailed assessment of effectiveness. Moreover, we created a private dataset by gathering physiological data from Fitbit devices to guarantee reliability and avoid bias. The same pre-trained models were then applied to this private dataset and the results documented. Using the CatBoost tree-based method, our best-performing model outperformed previous studies with an accuracy of 85% on the publicly available dataset, and the same pre-trained CatBoost model produced an accuracy of 81% on the private dataset. Source code: https://github.com/OpenUAE-LAB/Covid-19-detection-using-Wearable-data.git
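The abstract does not describe its feature engineering; one common way wearable studies derive a resting-heart-rate signal from minute-level HR and step streams is sketched below (the function name, step threshold, and median rule are illustrative assumptions, not the paper's pipeline):

```python
def daily_resting_hr(heart_rates, steps, step_threshold=10):
    """Proxy resting heart rate: the median HR over minutes with
    little or no step activity (illustrative, not the paper's code)."""
    resting = sorted(hr for hr, s in zip(heart_rates, steps)
                     if s <= step_threshold)
    if not resting:
        return None  # no low-activity minutes in this window
    mid = len(resting) // 2
    if len(resting) % 2:
        return resting[mid]
    return (resting[mid - 1] + resting[mid]) / 2
```

Deviations of such a per-day statistic from an individual's own baseline are the kind of tabular feature that tree ensembles like CatBoost handle well.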
Affiliation(s)
- Manar Abu Talib: Department of Computer Science, College of Computing and Informatics, University of Sharjah, P.O. Box 27272, Sharjah, UAE
- Yaman Afadar: Department of Computer Engineering, College of Computing and Informatics, University of Sharjah, Sharjah, UAE
- Qassim Nasir: Department of Computer Engineering, College of Computing and Informatics, University of Sharjah, Sharjah, UAE
- Ali Bou Nassif: Department of Computer Engineering, College of Computing and Informatics, University of Sharjah, Sharjah, UAE
- Haytham Hijazi: Centre for Informatics and Systems of the University of Coimbra (CISUC), University of Coimbra, Coimbra, P-3030-290, Portugal; Intelligent Systems Department, Ahliya University, Bethlehem, P-150-199, Palestine
- Ahmad Hasasneh: Department of Natural, Engineering and Technology Sciences, Faculty of Graduate Studies, Arab American University, P.O. Box 240, Ramallah, Palestine
4. Shen H, Jin Z, Chen Q, Zhang L, You J, Zhang S, Zhang B. Image-based artificial intelligence for the prediction of pathological complete response to neoadjuvant chemoradiotherapy in patients with rectal cancer: a systematic review and meta-analysis. Radiol Med 2024; 129:598-614. PMID: 38512622; DOI: 10.1007/s11547-024-01796-w.
Abstract
OBJECTIVE: Artificial intelligence (AI) holds enormous potential for noninvasively identifying patients with rectal cancer who could achieve pathological complete response (pCR) following neoadjuvant chemoradiotherapy (nCRT). We conducted a meta-analysis to summarize the diagnostic performance of image-based AI models for predicting pCR to nCRT in patients with rectal cancer.
METHODS: This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A literature search of PubMed, Embase, Cochrane Library, and Web of Science was performed from inception to July 29, 2023. Studies that developed or utilized AI models for predicting pCR to nCRT in rectal cancer from medical images were included. The Quality Assessment of Diagnostic Accuracy Studies-AI tool was used to appraise the methodological quality of the studies. A bivariate random-effects model was used to pool the individual sensitivities, specificities, and areas under the curve (AUCs). Subgroup and meta-regression analyses were conducted to identify potential sources of heterogeneity. The protocol was registered with PROSPERO (CRD42022382374).
RESULTS: Thirty-four studies (9,933 patients) were identified. Pooled sensitivity, specificity, and AUC of AI models for pCR prediction were 82% (95% CI: 76-87%), 84% (95% CI: 79-88%), and 90% (95% CI: 87-92%), respectively. Specificity was higher for Asian populations, studies at low risk of bias, and deep-learning models than for non-Asian populations, studies at high risk of bias, and radiomics models (all P < 0.05). Single-center studies had higher sensitivity than multi-center studies (P = 0.001). Retrospective designs had lower sensitivity (P = 0.012) but higher specificity (P < 0.001) than prospective designs. MRI showed higher sensitivity (P = 0.001) but lower specificity (P = 0.044) than non-MRI imaging. Sensitivity and specificity were higher in internal validation than in external validation (both P = 0.005).
CONCLUSIONS: Image-based AI models exhibited favorable performance for predicting pCR to nCRT in rectal cancer; however, further clinical trials are warranted to verify the findings.
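The review pools accuracy metrics with a bivariate random-effects model; as a simplified, univariate stand-in, study-level sensitivities (or specificities) can be pooled by inverse-variance weighting on the logit scale. The sketch below is a fixed-effect illustration under that assumption, not the authors' model:

```python
import math

def pooled_proportion(events, totals):
    """Inverse-variance pooling of study-level proportions (e.g. true
    positives / diseased patients for sensitivity) on the logit scale."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = (e + 0.5) / (n + 1.0)            # continuity correction
        logits.append(math.log(p / (1.0 - p)))
        # approximate variance of a logit-proportion: 1/e + 1/(n - e)
        weights.append(1.0 / (1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))   # back-transform to [0, 1]
```

Working on the logit scale keeps the pooled estimate inside [0, 1]; the bivariate model used in the paper additionally models the correlation between sensitivity and specificity across studies.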
Affiliation(s)
- Hui Shen: Department of Radiology, The First Affiliated Hospital of Jinan University, No. 613 Huangpu West Road, Tianhe District, Guangzhou, 510627, Guangdong, China
- Zhe Jin: Department of Radiology, The First Affiliated Hospital of Jinan University, No. 613 Huangpu West Road, Tianhe District, Guangzhou, 510627, Guangdong, China
- Qiuying Chen: Department of Radiology, The First Affiliated Hospital of Jinan University, No. 613 Huangpu West Road, Tianhe District, Guangzhou, 510627, Guangdong, China
- Lu Zhang: Department of Radiology, The First Affiliated Hospital of Jinan University, No. 613 Huangpu West Road, Tianhe District, Guangzhou, 510627, Guangdong, China
- Jingjing You: Department of Radiology, The First Affiliated Hospital of Jinan University, No. 613 Huangpu West Road, Tianhe District, Guangzhou, 510627, Guangdong, China
- Shuixing Zhang: Department of Radiology, The First Affiliated Hospital of Jinan University, No. 613 Huangpu West Road, Tianhe District, Guangzhou, 510627, Guangdong, China
- Bin Zhang: Department of Radiology, The First Affiliated Hospital of Jinan University, No. 613 Huangpu West Road, Tianhe District, Guangzhou, 510627, Guangdong, China
5. Wang Y, Liu A, Yang J, Wang L, Xiong N, Cheng Y, Wu Q. Clinical knowledge-guided deep reinforcement learning for sepsis antibiotic dosing recommendations. Artif Intell Med 2024; 150:102811. PMID: 38553154; DOI: 10.1016/j.artmed.2024.102811.
Abstract
Sepsis is the third leading cause of death worldwide, and antibiotics are an important component of its treatment. Antibiotic use currently faces the challenge of increasing antibiotic resistance (Evans et al., 2021). Sepsis medication prediction can be modeled as a Markov decision process, but existing methods fail to integrate medical knowledge, so the decision process can deviate from medical common sense and underperform (Wang et al., 2021). In this paper, we use a Deep Q-Network (DQN) to construct a Sepsis Anti-infection DQN (SAI-DQN) model that addresses the challenge of determining the optimal combination and duration of antibiotics in sepsis treatment. By encoding sepsis clinical knowledge in the reward functions, guiding the DQN to comply with medical guidelines, we form personalized treatment recommendations for antibiotic combinations. The results show that our model achieved a higher average decision value than clinical decisions. For the test set of patients, our model predicts that 79.07% of patients will achieve a favorable prognosis with the recommended combination of antibiotics. Statistical analysis of the decision trajectories and drug selections shows that the model provides reasonable medication recommendations that comply with clinical practice, improving patient outcomes by recommending appropriate antibiotic combinations in line with clinical knowledge.
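The abstract does not reproduce the paper's reward design; one generic way clinical knowledge can enter a DQN reward is as a penalty on guideline violations. The sketch below uses assumed names and a linear penalty purely for illustration:

```python
def shaped_reward(outcome_reward, dose, guideline_range, penalty=1.0):
    """Outcome-based reward minus a penalty proportional to how far the
    chosen dose falls outside a guideline-derived safe range."""
    low, high = guideline_range
    violation = max(0.0, low - dose) + max(0.0, dose - high)
    return outcome_reward - penalty * violation

# A dose inside the guideline range keeps the full outcome reward;
# an out-of-range dose is penalized in proportion to the excess.
```

Shaping the reward this way biases the learned Q-values toward guideline-compliant actions without hard-coding the policy itself.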
Affiliation(s)
- Yuan Wang: Tianjin University of Science and Technology, Tianjin, China
- Anqi Liu: Tianjin University of Science and Technology, Tianjin, China
- Jucheng Yang: Tianjin University of Science and Technology, Tianjin, China
- Lin Wang: Tianjin University of Science and Technology, Tianjin, China
- Ning Xiong: Tianjin University of Science and Technology, Tianjin, China
- Yisong Cheng: Department of Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
- Qin Wu: Department of Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, China
6. Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. PMID: 38667979; PMCID: PMC11050909; DOI: 10.3390/jimaging10040081.
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or sequences of images to recognize content, has been used extensively across industries in recent years. In the healthcare industry, however, its applications are limited by factors such as privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiency while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review the development of CV in hospital, outpatient, and community settings, highlighting recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of in-hospital workload, and monitoring for patient events outside the hospital. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline the processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for their expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA; Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Roshini Raghu: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Charles Busch: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco: Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
7. Yim D, Khuntia J, Parameswaran V, Meyers A. Preliminary Evidence of the Use of Generative AI in Health Care Clinical Services: Systematic Narrative Review. JMIR Med Inform 2024; 12:e52073. PMID: 38506918; PMCID: PMC10993141; DOI: 10.2196/52073.
Abstract
BACKGROUND: Generative artificial intelligence tools and applications (GenAI) are increasingly used in health care. Physicians, specialists, and other providers have started using GenAI primarily as an aid to gather knowledge, provide information, train, or generate suggestive dialogue between physicians and patients or between physicians and patients' families or friends. However, unless GenAI is oriented toward clinical service encounters in ways that improve the accuracy of diagnosis, treatment, and patient outcomes, its expected potential will not be achieved. As adoption continues, it is essential to validate the effectiveness of GenAI as an intelligent technology in service encounters and to understand the gap in its actual clinical use.
OBJECTIVE: This study synthesizes preliminary evidence on how GenAI assists, guides, and automates clinical service rendering and encounters in health care. The review scope was limited to articles published in peer-reviewed medical journals.
METHODS: We screened and selected 0.38% (161/42,459) of articles published between January 1, 2020, and May 31, 2023, identified from PubMed. We followed the protocols outlined in the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to select highly relevant studies with at least one element on clinical use, evaluation, or validation, providing evidence of GenAI use in clinical services. Articles were classified by their relevance to clinical service functions or activities, using the descriptive and analytical information they presented.
RESULTS: Of 161 articles, 141 (87.6%) reported using GenAI to assist services through knowledge access, collation, and filtering. GenAI was used for disease detection (19/161, 11.8%), diagnosis (14/161, 8.7%), and screening processes (12/161, 7.5%), in the areas of radiology (17/161, 10.6%), cardiology (12/161, 7.5%), gastrointestinal medicine (4/161, 2.5%), and diabetes (6/161, 3.7%). The synthesis suggests that GenAI is mainly used for diagnostic processes, improvement of diagnostic accuracy, and screening, all via knowledge access. Although this addresses knowledge access and may improve diagnostic accuracy, it falls short of the higher value creation expected of GenAI in health care.
CONCLUSIONS: GenAI currently informs, rather than assists or automates, clinical service functions in health care. Its potential in clinical services has yet to be actualized. More clinical service-level evidence is needed that GenAI streamlines functions or provides automated help beyond information retrieval. To transform health care as purported, more GenAI applications must automate and guide human-performed services, keeping up with the optimism that forward-thinking health care organizations will take advantage of GenAI.
Affiliation(s)
- Dobin Yim: Loyola University Maryland, MD, United States
- Jiban Khuntia: University of Colorado Denver, Denver, CO, United States
- Arlen Meyers: University of Colorado Denver, Denver, CO, United States
8. Chavoshi M, Zamani S, Mirshahvalad SA. Diagnostic performance of deep learning models versus radiologists in COVID-19 pneumonia: A systematic review and meta-analysis. Clin Imaging 2024; 107:110092. PMID: 38301371; DOI: 10.1016/j.clinimag.2024.110092.
Abstract
PURPOSE: Although several studies have compared the performance of deep learning (DL) models and radiologists in diagnosing COVID-19 pneumonia on chest CT, these results have not been collectively evaluated. We performed a meta-analysis of original articles comparing the performance of DL models versus radiologists in detecting COVID-19 pneumonia.
METHODS: A systematic search of the three main medical literature databases (Scopus, Web of Science, and PubMed) was conducted for articles published as of February 1, 2023. We included original scientific articles that compared DL models trained to detect COVID-19 pneumonia on CT against radiologists. Meta-analysis was performed to determine DL versus radiologist performance in terms of sensitivity and specificity, accounting for inter- and intra-study heterogeneity.
RESULTS: Twenty-two articles met the inclusion criteria. Based on the meta-analytic calculations, DL models had significantly higher pooled sensitivity than radiologists (0.933 vs. 0.829, p < 0.001) with similar pooled specificity (0.905 vs. 0.897, p = 0.746). In differentiating COVID-19 from community-acquired pneumonia, DL models again had significantly higher sensitivity (0.915 vs. 0.836, p = 0.001).
CONCLUSIONS: DL models perform well in screening for COVID-19 pneumonia on chest CT, raising the possibility of using these models to augment radiologists in clinical practice.
Affiliation(s)
- Mohammadreza Chavoshi: Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sara Zamani: School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Seyed Ali Mirshahvalad: Joint Department of Medical Imaging, University Health Network, University of Toronto, Toronto, Canada
9. Garg A, Alag S, Duncan D. CoSev: Data-Driven Optimizations for COVID-19 Severity Assessment in Low-Sample Regimes. Diagnostics (Basel) 2024; 14:337. PMID: 38337853; PMCID: PMC10855975; DOI: 10.3390/diagnostics14030337.
Abstract
Given the pronounced impact COVID-19 continues to have on society, infecting a reported 700 million individuals and causing 6.96 million deaths, many recent deep learning works have focused on diagnosing the virus. Assessing severity, however, remains an open and challenging problem because of the lack of large datasets, the high dimensionality of the images whose weights must be learned, and the compute limitations of modern graphics processing units (GPUs). In this paper, a new, iterative application of transfer learning is demonstrated on the understudied problem of COVID-19 severity analysis from 3D CT scans. The methodology improves performance on the MosMed dataset, a small and challenging dataset of 1,130 patient images spanning five levels of COVID-19 severity (Zero, Mild, Moderate, Severe, and Critical). Specifically, given the high dimensionality of the input images, we create several custom shallow convolutional neural network (CNN) architectures and iteratively refine and optimize them, attending to learning rates, layer types, normalization types, filter sizes, dropout values, and more. After a preliminary architecture design, the models are systematically trained on a simplified version of the dataset, building models for two-class, then three-class, then four-class, and finally five-class classification. The simplified problem structure lets the model learn preliminary features first, which can then be adapted to the more difficult classification tasks. Our final model, CoSev, boosts classification accuracy from below 60% initially to 81.57% with these optimizations, reaching performance similar to the state of the art on the dataset with much simpler setup procedures. Beyond COVID-19 severity diagnosis, the explored methodology can be applied to general image-based disease detection. Overall, this work highlights methodologies for high-dimension, low-sample data, the practicality of data-driven machine learning, and the importance of feature design for training, all of which can be implemented to improve clinical practice.
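The staged two- to five-class training can be pictured as progressively finer label maps over the five MosMed severity levels; the particular groupings below are an illustrative assumption, not the paper's published scheme:

```python
def coarsen_labels(labels, n_classes):
    """Map MosMed severity labels (0=Zero ... 4=Critical) onto a
    coarser n-class problem for curriculum-style staged training."""
    groups = {
        2: [0, 1, 1, 1, 1],   # healthy vs. any abnormality
        3: [0, 1, 1, 2, 2],   # healthy / mild-moderate / severe-critical
        4: [0, 1, 2, 3, 3],   # merge only severe and critical
        5: [0, 1, 2, 3, 4],   # the full five-class problem
    }
    return [groups[n_classes][y] for y in labels]
```

Training first on the two-class map and re-using those weights for each finer map is the curriculum idea the abstract describes: easy distinctions are learned before hard ones.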
Affiliation(s)
- Aksh Garg: Computer Science Department, Stanford University, Stanford, CA 94305, USA
- Shray Alag: Computer Science Department, Stanford University, Stanford, CA 94305, USA
- Dominique Duncan: Laboratory of Neuro Imaging, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA 90033, USA
10. Moosavi AS, Mahboobi A, Arabzadeh F, Ramezani N, Moosavi HS, Mehrpoor G. Segmentation and classification of lungs CT-scan for detecting COVID-19 abnormalities by deep learning technique: U-Net model. J Family Med Prim Care 2024; 13:691-698. PMID: 38605799; PMCID: PMC11006039; DOI: 10.4103/jfmpc.jfmpc_695_23.
Abstract
Background: Artificial intelligence (AI) techniques have proven useful for promptly analyzing and delineating infected areas in radiological images. Our aim in this study was to design a web-based application that detects and labels infected tissue on lung CT (computed tomography) images using deep learning (DL), a type of AI.
Materials and Methods: The U-Net architecture, one of the DL networks, is used as a hybrid model with the pre-trained densely connected convolutional network 121 (DenseNet121) architecture for the segmentation process. The proposed model was built on CT-scan images of 1,031 individuals from Ibn Sina Hospital, Iran, collected in 2021, together with some publicly available datasets. The network was trained on 6,000 slices, validated on 1,000 slices, and tested on 150 slices. Accuracy, sensitivity, specificity, and the area under the receiver operating characteristic (ROC) curve (AUC) were calculated to evaluate model performance.
Results: The results indicate an acceptable ability of the U-Net-DenseNet121 model to detect COVID-19 abnormality (accuracy = 0.88 and AUC = 0.96 at a threshold of 0.13; accuracy = 0.88 and AUC = 0.90 at a threshold of 0.2). Based on this model, we developed the "Imaging-Tech" web-based application for use at hospitals and clinics, to make the project's output more practical and attractive in the market.
Conclusion: We designed a DL-based model for the segmentation of COVID-19 CT-scan images and, based on this model, built a web-based application that, according to the results, reliably detects infected tissue in lung CT-scans. The availability of such tools would help automate, prioritize, accelerate, and broaden the treatment of COVID-19 patients globally.
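The abstract reports accuracy and AUC; for segmentation masks specifically, overlap is usually quantified with the Dice coefficient, sketched here as a complementary illustration (not a metric this paper reports):

```python
def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between two flat binary masks (lists of 0/1):
    2*|intersection| / (|pred| + |true|), with eps guarding empty masks."""
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    return (2.0 * intersection + eps) / (sum(pred_mask) + sum(true_mask) + eps)
```

Unlike pixel accuracy, Dice is insensitive to the large background class, which is why it is the standard headline metric for lesion segmentation.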
Affiliation(s)
| | - Ashraf Mahboobi
- Department of Radiologist, Babol University of Medical Sciences, Babol, Iran
| | - Farzin Arabzadeh
- Department of Radiologist, Dr. Arabzadeh Radiology and Sonography Clinic, Behbahan, Iran
| | - Nazanin Ramezani
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
| | - Helia S. Moosavi
- Computer Science Bachelor Degree, University of Toronto, On, Canada
| | - Golbarg Mehrpoor
- Department of Rheumatologist, Alborz University of Medical Sciences, Karaj, Iran
11
Hoffer O, Brzezinski RY, Ganim A, Shalom P, Ovadia-Blechman Z, Ben-Baruch L, Lewis N, Peled R, Shimon C, Naftali-Shani N, Katz E, Zimmer Y, Rabin N. Smartphone-based detection of COVID-19 and associated pneumonia using thermal imaging and a transfer learning algorithm. JOURNAL OF BIOPHOTONICS 2024:e202300486. [PMID: 38253344 DOI: 10.1002/jbio.202300486] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Revised: 12/28/2023] [Accepted: 12/31/2023] [Indexed: 01/24/2024]
Abstract
COVID-19-related pneumonia is typically diagnosed using chest X-ray or computed tomography images. However, these techniques can only be used in hospitals. In contrast, thermal cameras are portable, inexpensive devices that can be connected to smartphones. Thus, they can be used to detect and monitor medical conditions outside hospitals. Herein, a smartphone-based application using thermal images of the human back was developed for COVID-19 detection. Image analysis using a deep learning algorithm yielded a sensitivity and specificity of 88.7% and 92.3%, respectively. The findings support the future use of noninvasive thermal imaging in primary screening for COVID-19 and associated pneumonia.
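For a primary-screening setting like the one above, a sensitivity/specificity pair only becomes actionable once translated into predictive values at a disease prevalence. A minimal sketch via Bayes' rule: the 88.7%/92.3% figures come from the abstract, while the 10% prevalence is a hypothetical assumption, not a figure from the paper:

```python
def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive value via Bayes' rule,
    expressed as expected fractions per person screened."""
    tp = sens * prevalence              # true positives
    fp = (1 - spec) * (1 - prevalence)  # false positives
    tn = spec * (1 - prevalence)        # true negatives
    fn = (1 - sens) * prevalence        # false negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical 10% prevalence among people presenting for screening
ppv, npv = predictive_values(0.887, 0.923, 0.10)
```

At that assumed prevalence the negative predictive value stays high, which is the property a rule-out screening tool needs.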
Affiliation(s)
- Oshrit Hoffer
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Rafael Y Brzezinski
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Internal Medicine "C" and "E", Tel Aviv Medical Center, Tel Aviv, Israel
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Adam Ganim
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Perry Shalom
- School of Software Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Zehava Ovadia-Blechman
- School of Medical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Lital Ben-Baruch
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Nir Lewis
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Racheli Peled
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Carmi Shimon
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Nili Naftali-Shani
- Neufeld Cardiac Research Institute, Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Tamman Cardiovascular Research Institute, Leviev Heart Center, Sheba Medical Center Tel Hashomer, Ramat Gan, Israel
- Eyal Katz
- School of Electrical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Yair Zimmer
- School of Medical Engineering, Afeka Tel Aviv Academic College of Engineering, Tel Aviv, Israel
- Neta Rabin
- Department of Industrial Engineering, Tel Aviv University, Tel Aviv, Israel
12
Reina-Reina A, Barrera J, Maté A, Trujillo J, Valdivieso B, Gas ME. Developing an interpretable machine learning model for predicting COVID-19 patients deteriorating prior to intensive care unit admission using laboratory markers. Heliyon 2023; 9:e22878. [PMID: 38125502 PMCID: PMC10731083 DOI: 10.1016/j.heliyon.2023.e22878] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2023] [Revised: 11/15/2023] [Accepted: 11/22/2023] [Indexed: 12/23/2023] Open
Abstract
Coronavirus disease (COVID-19) remains a significant global health challenge, prompting a transition from emergency response to comprehensive management strategies. Furthermore, the emergence of new variants of concern, such as BA.2.286, underscores the need for early detection and response to new variants, which continues to be a crucial strategy for mitigating the impact of COVID-19, especially among the vulnerable population. This study aims to anticipate patients requiring intensive care or facing elevated mortality risk throughout their COVID-19 infection, while also identifying predictive laboratory markers for early diagnosis. Therefore, haematological, biochemical, and demographic variables were retrospectively evaluated in 8,844 blood samples obtained from 2,935 patients before intensive care unit admission, using an interpretable machine learning model. Feature selection techniques were applied using precision-recall measures to address data imbalance and evaluate the suitability of the different variables. The model was trained using stratified cross-validation with k=5 and internally validated, achieving an accuracy of 77.27%, a sensitivity of 78.55%, and an area under the receiver operating characteristic curve (AUC) of 0.85, successfully identifying patients at increased risk of severe progression. From a medical perspective, the most important features for the progression or severity of patients with COVID-19 were lactate dehydrogenase, age, red blood cell distribution width (standard deviation), neutrophils, and platelets, which aligns with findings from several prior investigations. In light of these insights, diagnostic processes can be significantly expedited through the use of laboratory tests, with a greater focus on key indicators. This strategic approach not only improves diagnostic efficiency but also extends its reach to a broader spectrum of patients. In addition, it allows healthcare professionals to take early preventive measures for those most at risk of adverse outcomes, thereby optimising patient care and prognosis.
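The stratified k=5 cross-validation scheme described in this abstract can be sketched without any ML library: indices are dealt out per class so every fold preserves the outcome ratio of the full cohort. The fold count and toy labels below are illustrative, not the paper's data:

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Yield (train_idx, test_idx) pairs; each test fold preserves
    the class proportions of the full label list."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        # deal each class's indices round-robin across the k folds
        for pos, idx in enumerate(idxs):
            folds[pos % k].append(idx)
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(idx for j, fold in enumerate(folds)
                       if j != i for idx in fold)
        yield train, test
```

Stratification matters here precisely because the outcome is imbalanced (far more non-ICU than ICU patients); plain k-fold splitting could produce folds with almost no positive cases.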
Affiliation(s)
- A. Reina-Reina
- Lucentia Research, Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain
- Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain
- J.M. Barrera
- Lucentia Research, Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain
- Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain
- A. Maté
- Lucentia Research, Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain
- Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain
- J.C. Trujillo
- Lucentia Research, Department of Software and Computing Systems, University of Alicante, Carretera San Vicente del Raspeig s/n, 03690, Alicante, Spain
- Lucentia Lab, Av. Pintor Pérez Gil, 16, 03540, Alicante, Spain
- B. Valdivieso
- The University and Polytechnic La Fe Hospital of Valencia, Avenida Fernando Abril Martorell, 106 Torre H 1st floor, 46026, Valencia, Spain
- The Medical Research Institute of Hospital La Fe, Avenida Fernando Abril Martorell, 106 Torre F 7th floor, 46026, Valencia, Spain
- María-Eugenia Gas
- The Medical Research Institute of Hospital La Fe, Avenida Fernando Abril Martorell, 106 Torre F 7th floor, 46026, Valencia, Spain
13
Murphy K, Muhairwe J, Schalekamp S, van Ginneken B, Ayakaka I, Mashaete K, Katende B, van Heerden A, Bosman S, Madonsela T, Gonzalez Fernandez L, Signorell A, Bresser M, Reither K, Glass TR. COVID-19 screening in low resource settings using artificial intelligence for chest radiographs and point-of-care blood tests. Sci Rep 2023; 13:19692. [PMID: 37952026 PMCID: PMC10640556 DOI: 10.1038/s41598-023-46461-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 11/01/2023] [Indexed: 11/14/2023] Open
Abstract
Artificial intelligence (AI) systems for the detection of COVID-19 using chest X-ray (CXR) imaging and point-of-care blood tests were applied to data from four low-resource African settings. The performance of these systems in detecting COVID-19 from various input data was analysed and compared with antigen-based rapid diagnostic tests (RDTs). Participants were tested using the gold-standard RT-PCR test (nasopharyngeal swab) to determine whether they were infected with SARS-CoV-2. A total of 3737 (260 RT-PCR positive) participants were included. In our cohort, AI for CXR images was a poor predictor of COVID-19 (AUC = 0.60), since the majority of positive cases had mild symptoms and no visible pneumonia in the lungs. AI systems using differential white blood cell counts (WBC), or a combination of WBC and C-reactive protein (CRP), both achieved an AUC of 0.74, with a suggested optimal cut-off point at 83% sensitivity and 63% specificity. The antigen-RDT tests in this trial obtained 65% sensitivity at 98% specificity. This study is the first to validate AI tools for COVID-19 detection in an African setting. It demonstrates that screening for COVID-19 using AI with point-of-care blood tests is feasible and can operate at a higher sensitivity level than antigen testing.
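One common way to derive a "suggested optimal cut-off point" like the one above is to scan candidate thresholds and maximize Youden's J = sensitivity + specificity - 1. The paper does not state which criterion it used, so this is a hedged illustration with made-up scores:

```python
def best_cutoff(scores, labels):
    """Return (threshold, sensitivity, specificity) maximizing
    Youden's J over the observed score values."""
    best = None
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if best is None or j > best[3]:
            best = (t, sens, spec, j)
    return best[:3]
```

In a screening context one might instead fix a minimum sensitivity and pick the threshold meeting it, which is how the 83%/63% operating point reads.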
Affiliation(s)
- Keelin Murphy
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Steven Schalekamp
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Irene Ayakaka
- SolidarMed, Partnerships for Health, Maseru, Lesotho
- Alastair van Heerden
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- SAMRC/WITS Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
- Shannon Bosman
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Thandanani Madonsela
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Lucia Gonzalez Fernandez
- Department of Infectious Diseases and Hospital Epidemiology, University Hospital Basel, Basel, Switzerland
- SolidarMed, Partnerships for Health, Lucerne, Switzerland
- Aita Signorell
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Moniek Bresser
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Klaus Reither
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Tracy R Glass
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
14
Ghafoori M, Hamidi M, Modegh RG, Aziz-Ahari A, Heydari N, Tavafizadeh Z, Pournik O, Emdadi S, Samimi S, Mohseni A, Khaleghi M, Dashti H, Rabiee HR. Predicting survival of Iranian COVID-19 patients infected by various variants including omicron from CT Scan images and clinical data using deep neural networks. Heliyon 2023; 9:e21965. [PMID: 38058649 PMCID: PMC10696006 DOI: 10.1016/j.heliyon.2023.e21965] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Revised: 10/26/2023] [Accepted: 11/01/2023] [Indexed: 12/08/2023] Open
Abstract
Purpose: The rapid spread of the COVID-19 omicron variant has resulted in an overload of hospitals around the globe. As a result, many patients are deprived of hospital facilities, increasing mortality rates. Mortality rates can therefore be reduced by efficiently assigning facilities to higher-risk patients. It is thus crucial to estimate patients' survival probability based on their condition at the time of admission so that the minimum required facilities can be provided, leaving more capacity available for those who need it. Although radiologic findings in chest computerized tomography scans show various patterns, considering individual risk factors and other underlying diseases, it is difficult to predict patient prognosis through routine clinical or statistical analysis. Method: In this study, a deep neural network model is proposed for predicting survival based on simple clinical features, blood tests, axial computerized tomography scan images of the lungs, and the patient's planned treatment. The model's architecture combines a Convolutional Neural Network and a Long Short-Term Memory network. The model was trained using 390 survivors and 108 deceased patients from the Rasoul Akram Hospital and evaluated on 109 surviving and 36 deceased patients infected by the omicron variant. Results: The proposed model reached an accuracy of 87.5% on the test data, indicating that survival prediction is feasible. The accuracy was significantly higher than the accuracy achieved by classical machine learning methods that did not consider computerized tomography scan images (p-value <= 4E-5). The images were also replaced with hand-crafted features related to the ratio of infected lung lobes for use in classical machine-learning models. The highest-performing such model reached an accuracy of 84.5%, considerably higher than models trained on clinical information alone (p-value <= 0.006). However, the performance was still significantly lower than that of the deep model (p-value <= 0.016). Conclusion: The proposed deep model achieved a higher accuracy than classical machine learning methods trained on features other than computerized tomography scan images, indicating that the images contain additional information. Meanwhile, Artificial Intelligence methods with multimodal inputs can be more reliable and accurate than computerized tomography severity scores.
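The p-values above compare models evaluated on the same patients. One simple way to run such a paired comparison (not necessarily the paper's exact test) is an exact sign-flip test on per-case correctness flags; the data below is synthetic:

```python
import itertools

def paired_sign_test_pvalue(correct_a, correct_b):
    """Exact two-sided sign-flip test on the per-case correctness
    difference between two models scored on the same cases.
    correct_a/correct_b are 0/1 flags per case."""
    diffs = [a - b for a, b in zip(correct_a, correct_b) if a != b]
    n = len(diffs)
    observed = abs(sum(diffs))
    extreme = 0
    for signs in itertools.product((1, -1), repeat=n):  # all 2^n sign flips
        if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed:
            extreme += 1
    return extreme / 2 ** n
```

Exhaustive enumeration is only feasible for a small number of discordant cases; for larger samples one would switch to McNemar's chi-squared approximation or Monte Carlo sampling of sign flips.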
Affiliation(s)
- Mahyar Ghafoori
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Mehrab Hamidi
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Rassa Ghavami Modegh
- Data Science and Machine Learning Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Alireza Aziz-Ahari
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Neda Heydari
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Zeynab Tavafizadeh
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Omid Pournik
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Sasan Emdadi
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Saeed Samimi
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Amir Mohseni
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Mohammadreza Khaleghi
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Hamed Dashti
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Hamid R. Rabiee
- Data Science and Machine Learning Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
15
Ahoor A, Arif F, Sajid MZ, Qureshi I, Abbas F, Jabbar S, Abbas Q. MixNet-LD: An Automated Classification System for Multiple Lung Diseases Using Modified MixNet Model. Diagnostics (Basel) 2023; 13:3195. [PMID: 37892016 PMCID: PMC10606171 DOI: 10.3390/diagnostics13203195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2023] [Revised: 10/03/2023] [Accepted: 10/04/2023] [Indexed: 10/29/2023] Open
Abstract
The lungs are critical components of the respiratory system because they allow for the exchange of oxygen and carbon dioxide within our bodies. However, a variety of conditions can affect the lungs, resulting in serious health consequences. Lung disease treatment aims to control severity, as the damage is usually irreversible. The fundamental objective of this work is to build a consistent and automated approach for establishing the severity of lung illness. This paper describes MixNet-LD, a unique automated approach aimed at identifying and categorizing the severity of lung illnesses using an upgraded pre-trained MixNet model. One of the first steps in developing the MixNet-LD system was to build a pre-processing strategy that uses Grad-CAM to decrease noise, highlight irregularities, and ultimately improve the classification performance for lung illnesses. Data augmentation strategies were used to rectify the dataset's unbalanced distribution of classes and prevent overfitting. Furthermore, dense blocks were used to improve classification outcomes across the four severity categories of lung disorders. In practice, the MixNet-LD model achieves state-of-the-art performance while maintaining manageable model size and complexity. The proposed approach was tested using a variety of datasets gathered from credible internet sources as well as a novel private dataset known as Pak-Lungs. A pre-trained model was used on the dataset to obtain important characteristics from lung disease images. The images were then classified into categories such as normal, COVID-19, pneumonia, tuberculosis, and lung cancer using an SVM classifier with a linear activation function. The MixNet-LD system underwent testing in four distinct experiments and achieved a remarkable accuracy of 98.5% on the difficult lung disease dataset. The acquired findings and comparisons demonstrate the MixNet-LD system's improved performance and learning capabilities. These findings show that the proposed approach may effectively increase the accuracy of classification models in medical image investigations. This research helps to develop new strategies for effective medical image processing in clinical settings.
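The class-balancing augmentation step mentioned in this abstract can be sketched with stdlib-only flips of small image arrays. The transforms, class names, and counts below are illustrative assumptions, not the paper's exact pipeline:

```python
import random

def hflip(img):
    """Horizontal flip of a 2D image given as a list of rows."""
    return [row[::-1] for row in img]

def vflip(img):
    """Vertical flip: reverse the order of rows."""
    return img[::-1]

def balance_with_flips(dataset, rng=None):
    """Oversample minority classes with randomly flipped copies until
    every class matches the majority-class count.
    `dataset` maps label -> list of images."""
    rng = rng or random.Random(0)
    target = max(len(imgs) for imgs in dataset.values())
    out = {}
    for label, imgs in dataset.items():
        augmented = list(imgs)
        while len(augmented) < target:
            base = rng.choice(imgs)
            augmented.append(rng.choice([hflip, vflip])(base))
        out[label] = augmented
    return out
```

Oversampling with label-preserving transforms, rather than duplicating images verbatim, is what gives augmentation its regularizing, anti-overfitting effect.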
Affiliation(s)
- Ayesha Ahoor
- Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan; (A.A.); (F.A.); (M.Z.S.)
- Fahim Arif
- Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan; (A.A.); (F.A.); (M.Z.S.)
- Muhammad Zaheer Sajid
- Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan; (A.A.); (F.A.); (M.Z.S.)
- Imran Qureshi
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; (S.J.); (Q.A.)
- Fakhar Abbas
- Centre for Trusted Internet and Community, National University of Singapore (NUS), Singapore 119228, Singapore
- Sohail Jabbar
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; (S.J.); (Q.A.)
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; (S.J.); (Q.A.)
16
Fan S, Wu E, Cao M, Xu T, Liu T, Yang L, Su J, Liu J. Flexible In-Ga-Zn-N-O synaptic transistors for ultralow-power neuromorphic computing and EEG-based brain-computer interfaces. MATERIALS HORIZONS 2023; 10:4317-4328. [PMID: 37431592 DOI: 10.1039/d3mh00759f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/12/2023]
Abstract
Designing low-power and flexible artificial neural devices with artificial neural networks is a promising avenue for creating brain-computer interfaces (BCIs). Herein, we report the development of flexible In-Ga-Zn-N-O synaptic transistors (FISTs) that can simulate essential and advanced biological neural functions. These FISTs are optimized to achieve ultra-low power consumption under a super-low or even zero channel bias, making them suitable for wearable BCI applications. The effective tunability of synaptic behaviors promotes the realization of associative and non-associative learning, facilitating COVID-19 chest CT edge detection. Importantly, FISTs exhibit high tolerance to long-term exposure to an ambient environment and to bending deformation, indicating their suitability for wearable BCI systems. We demonstrate that an array of FISTs can classify vision-evoked EEG signals with up to ∼87.9% and 94.8% recognition accuracy for EMNIST-Digits and MindBigdata, respectively. Thus, FISTs have enormous potential to significantly impact the development of various BCI techniques.
Affiliation(s)
- Shuangqing Fan
- College of Electronics and Information, Qingdao University, Qingdao 266071, China.
- Enxiu Wu
- State Key Laboratory of Precision Measurement Technology and Instruments, School of Precision Instruments and Opto-electronics Engineering, Tianjin University, No. 92 Weijin Road, Tianjin 300072, China
- Minghui Cao
- College of Electronics and Information, Qingdao University, Qingdao 266071, China
- Ting Xu
- State Key Laboratory of Precision Measurement Technology and Instruments, School of Precision Instruments and Opto-electronics Engineering, Tianjin University, No. 92 Weijin Road, Tianjin 300072, China
- Tong Liu
- State Key Laboratory of Precision Measurement Technology and Instruments, School of Precision Instruments and Opto-electronics Engineering, Tianjin University, No. 92 Weijin Road, Tianjin 300072, China
- Lijun Yang
- Key Laboratory of Radiopharmacokinetics for Innovative Drugs, Chinese Academy of Medical Sciences, Tianjin Key Laboratory of Radiation Medicine and Molecular Nuclear Medicine, Institute of Radiation Medicine, Chinese Academy of Medical Sciences & Peking Union Medical College, Tianjin 300192, P. R. China
- Jie Su
- College of Electronics and Information, Qingdao University, Qingdao 266071, China
- Jing Liu
- State Key Laboratory of Precision Measurement Technology and Instruments, School of Precision Instruments and Opto-electronics Engineering, Tianjin University, No. 92 Weijin Road, Tianjin 300072, China
17
Xu W, Nie L, Chen B, Ding W. Dual-stream EfficientNet with adversarial sample augmentation for COVID-19 computer aided diagnosis. Comput Biol Med 2023; 165:107451. [PMID: 37696184 DOI: 10.1016/j.compbiomed.2023.107451] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 08/17/2023] [Accepted: 09/04/2023] [Indexed: 09/13/2023]
Abstract
Though a series of computer-aided measures have been taken for the rapid and definite diagnosis of 2019 coronavirus disease (COVID-19), they generally fail to achieve high enough accuracy, including the recently popular deep learning-based methods. The main reasons are that: (a) they generally focus on improving the model structures while ignoring important information contained in the medical image itself; (b) the existing small-scale datasets have difficulty in meeting the training requirements of deep learning. In this paper, a dual-stream network based on the EfficientNet is proposed for COVID-19 diagnosis from CT scans. The dual-stream network takes into account the important information in both the spatial and frequency domains of CT scans. Besides, Adversarial Propagation (AdvProp) technology is used to address the insufficient training data usually faced by deep learning-based computer-aided diagnosis, as well as the overfitting issue. A Feature Pyramid Network (FPN) is utilized to fuse the dual-stream features. Experimental results on the public dataset COVIDx CT-2A demonstrate that the proposed method outperforms 12 existing deep learning-based methods for COVID-19 diagnosis, achieving an accuracy of 0.9870 for multi-class classification and 0.9958 for binary classification. The source code is available at https://github.com/imagecbj/covid-efficientnet.
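The dual-stream idea of pairing a spatial-domain view of an image with a frequency-domain view can be illustrated with a stdlib discrete Fourier transform; the concatenated vector below stands in for what the two network streams would consume, and everything here is illustrative rather than the paper's code:

```python
import cmath

def dft_magnitudes(signal):
    """Magnitude spectrum of a 1D signal (naive O(n^2) DFT)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal)))
            for k in range(n)]

def dual_stream_features(row):
    """Concatenate raw (spatial) values with their frequency-domain
    magnitudes, mimicking a two-branch spatial/frequency input."""
    return list(row) + dft_magnitudes(row)
```

The frequency branch exposes texture periodicity that a purely spatial branch must learn indirectly, which is the stated motivation for using both domains.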
Affiliation(s)
- Weijie Xu
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Lina Nie
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Beijing Chen
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China; Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019, China
18
Zhang J, Liu Y, Lei B, Sun D, Wang S, Zhou C, Ding X, Chen Y, Chen F, Wang T, Huang R, Chen K. GIONet: Global information optimized network for multi-center COVID-19 diagnosis via COVID-GAN and domain adversarial strategy. Comput Biol Med 2023; 163:107113. [PMID: 37307643 PMCID: PMC10242645 DOI: 10.1016/j.compbiomed.2023.107113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 05/14/2023] [Accepted: 05/30/2023] [Indexed: 06/14/2023]
Abstract
The outbreak of coronavirus disease (COVID-19) in 2019 has highlighted the need for automatic diagnosis of the disease, which can develop rapidly into a severe condition. Nevertheless, distinguishing between COVID-19 pneumonia and community-acquired pneumonia (CAP) through computed tomography scans can be challenging due to their similar characteristics. Existing methods often perform poorly on the 3-class classification task of healthy, CAP, and COVID-19 pneumonia, and they handle the heterogeneity of multi-center data poorly. To address these challenges, we design a COVID-19 classification model using a global information optimized network (GIONet) and a cross-center domain adversarial learning strategy. Our approach proposes a 3D convolutional neural network with a graph-enhanced aggregation unit and a multi-scale self-attention fusion unit to improve global feature extraction. We also verified that domain adversarial training can effectively reduce the feature distance between different centers to address the heterogeneity of multi-center data, and used specialized generative adversarial networks to balance data distribution and improve diagnostic performance. Our experiments demonstrate satisfactory diagnostic results, with a mixed-dataset accuracy of 99.17% and cross-center task accuracies of 86.73% and 89.61%.
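Domain-adversarial training of the kind described above typically hinges on a gradient reversal layer: identity in the forward pass, sign-flipped gradient in the backward pass, so the feature extractor is pushed toward center-invariant features. A hand-rolled numeric sketch (no autograd framework; all names and the update rule are simplifying assumptions, not GIONet's implementation):

```python
GRL_LAMBDA = 1.0  # strength of the reversed domain gradient

def grl_forward(features):
    """Identity in the forward pass."""
    return features

def grl_backward(grad_from_domain_head):
    """Sign-flipped (and scaled) gradient in the backward pass."""
    return [-GRL_LAMBDA * g for g in grad_from_domain_head]

def feature_update(features, grad_task, grad_domain, lr=0.1):
    """One gradient step on the feature extractor: the task gradient is
    applied as-is, while the domain-classifier gradient passes through
    the reversal layer, pushing features toward domain confusion."""
    grad_domain_rev = grl_backward(grad_domain)
    return [f - lr * (gt + gd)
            for f, gt, gd in zip(features, grad_task, grad_domain_rev)]
```

When task and domain gradients agree, the reversed domain term cancels part of the update; that tension is what drives the two centers' feature distributions together.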
Affiliation(s)
- Jing Zhang
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Yiyao Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518000, China
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518000, China
- Dandan Sun
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Siqi Wang
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Changning Zhou
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Xing Ding
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Yang Chen
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Fen Chen
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518000, China
- Ruidong Huang
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Kuntao Chen
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
19
Reza SMS, Chu WT, Homayounieh F, Blain M, Firouzabadi FD, Anari PY, Lee JH, Worwa G, Finch CL, Kuhn JH, Malayeri A, Crozier I, Wood BJ, Feuerstein IM, Solomon J. Deep-Learning-Based Whole-Lung and Lung-Lesion Quantification Despite Inconsistent Ground Truth: Application to Computerized Tomography in SARS-CoV-2 Nonhuman Primate Models. Acad Radiol 2023; 30:2037-2045. [PMID: 36966070 PMCID: PMC9968618 DOI: 10.1016/j.acra.2023.02.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2023] [Revised: 02/21/2023] [Accepted: 02/22/2023] [Indexed: 03/01/2023]
Abstract
RATIONALE AND OBJECTIVES Animal modeling of infectious diseases such as coronavirus disease 2019 (COVID-19) is important for exploration of natural history, understanding of pathogenesis, and evaluation of countermeasures. Preclinical studies enable rigorous control of experimental conditions as well as pre-exposure baseline and longitudinal measurements, including medical imaging, that are often unavailable in the clinical research setting. Computerized tomography (CT) imaging provides important diagnostic, prognostic, and disease characterization to clinicians and clinical researchers. In that context, automated deep-learning systems for the analysis of CT imaging have been broadly proposed, but their practical utility has been limited. Manual outlining of the ground truth (i.e., lung-lesions) requires accurate distinctions between abnormal and normal tissues that often have vague boundaries and is subject to reader heterogeneity in interpretation. Indeed, this subjectivity is demonstrated as wide inconsistency in manual outlines among experts and from the same expert. The application of deep-learning data-science tools has been less well-evaluated in the preclinical setting, including in nonhuman primate (NHP) models of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection/COVID-19, in which the translation of human-derived deep-learning tools is challenging. The automated segmentation of the whole lung and lung lesions provides a potentially standardized and automated method to detect and quantify disease. MATERIALS AND METHODS We used deep-learning-based quantification of the whole lung and lung lesions on CT scans of NHPs exposed to SARS-CoV-2. We proposed a novel multi-model ensemble technique to address the inconsistency in the ground truths for deep-learning-based automated segmentation of the whole lung and lung lesions. 
Multiple models were obtained by training the convolutional neural network (CNN) on different subsets of the training data instead of having a single model using the entire training dataset. Moreover, we employed a feature pyramid network (FPN), a CNN that provides predictions at different resolution levels, enabling the network to predict objects with wide size variations. RESULTS We achieved an average of 99.4 and 60.2% Dice coefficients for whole-lung and lung-lesion segmentation, respectively. The proposed multi-model FPN outperformed well-accepted methods U-Net (50.5%), V-Net (54.5%), and Inception (53.4%) for the challenging lesion-segmentation task. We show the application of segmentation outputs for longitudinal quantification of lung disease in SARS-CoV-2-exposed and mock-exposed NHPs. CONCLUSION Deep-learning methods should be optimally characterized for and targeted specifically to preclinical research needs in terms of impact, automation, and dynamic quantification independently from purely clinical applications.
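The two quantities at the heart of this abstract, the Dice coefficient used to score segmentation overlap and the multi-model ensemble built from subset-trained networks, can be sketched numerically. The snippet below is an illustrative reconstruction, not the authors' code; in particular, averaging the per-model probability maps before thresholding is an assumed combination rule.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def ensemble_predict(prob_maps, threshold=0.5):
    """Combine models trained on different training subsets by averaging
    their per-voxel probability maps, then thresholding (assumed rule)."""
    return np.mean(prob_maps, axis=0) >= threshold
```

For example, a prediction `[[1, 1], [0, 0]]` against ground truth `[[1, 0], [0, 0]]` overlaps in one voxel, giving Dice = 2·1 / (2 + 1) = 2/3.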
Affiliation(s)
- Syed M S Reza: Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Winston T Chu: Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Fatemeh Homayounieh: Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Maxim Blain: Center for Interventional Oncology, Radiology and Imaging Sciences, NIH Clinical Center and National Cancer Institute, Center for Cancer Research, National Institutes of Health, Bethesda, Maryland
- Fatemeh D Firouzabadi: Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Pouria Y Anari: Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Ji Hyun Lee: Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Gabriella Worwa: Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Courtney L Finch: Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Jens H Kuhn: Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Ashkan Malayeri: Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Ian Crozier: Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research, Frederick, Maryland
- Bradford J Wood: Center for Interventional Oncology, Radiology and Imaging Sciences, NIH Clinical Center and National Cancer Institute, Center for Cancer Research, National Institutes of Health, Bethesda, Maryland
- Irwin M Feuerstein: Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland
- Jeffrey Solomon: Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research, Frederick, Maryland
20.
Ma FQ, He C, Yang HR, Hu ZW, Mao HR, Fan CY, Qi Y, Zhang JX, Xu B. Interpretable machine-learning model for predicting the convalescent COVID-19 patients with pulmonary diffusing capacity impairment. BMC Med Inform Decis Mak 2023;23:169. PMID: 37644543; PMCID: PMC10466769; DOI: 10.1186/s12911-023-02192-6.
Abstract
INTRODUCTION The COVID-19 patients in the convalescent stage noticeably have pulmonary diffusing capacity impairment (PDCI). The pulmonary diffusing capacity is a frequently-used indicator of the COVID-19 survivors' prognosis of pulmonary function, but the current studies focusing on prediction of the pulmonary diffusing capacity of these people are limited. The aim of this study was to develop and validate a machine learning (ML) model for predicting PDCI in the COVID-19 patients using routinely available clinical data, thus assisting the clinical diagnosis. METHODS Collected from a follow-up study from August to September 2021 of 221 hospitalized survivors of COVID-19 18 months after discharge from Wuhan, including the demographic characteristics and clinical examination, the data in this study were randomly separated into a training (80%) data set and a validation (20%) data set. Six popular machine learning models were developed to predict the pulmonary diffusing capacity of patients infected with COVID-19 in the recovery stage. The performance indicators of the model included area under the curve (AUC), Accuracy, Recall, Precision, Positive Predictive Value(PPV), Negative Predictive Value (NPV) and F1. The model with the optimum performance was defined as the optimal model, which was further employed in the interpretability analysis. The MAHAKIL method was utilized to balance the data and optimize the balance of sample distribution, while the RFECV method for feature selection was utilized to select combined features more favorable to machine learning. RESULTS A total of 221 COVID-19 survivors were recruited in this study after discharge from hospitals in Wuhan. Of these participants, 117 (52.94%) were female, with a median age of 58.2 years (standard deviation (SD) = 12). After feature selection, 31 of the 37 clinical factors were finally selected for use in constructing the model. 
Among the six tested ML models, the best performance was accomplished in the XGBoost model, with an AUC of 0.755 and an accuracy of 78.01% after experimental verification. The SHAPELY Additive explanations (SHAP) summary analysis exhibited that hemoglobin (Hb), maximal voluntary ventilation (MVV), severity of illness, platelet (PLT), Uric Acid (UA) and blood urea nitrogen (BUN) were the top six most important factors affecting the XGBoost model decision-making. CONCLUSION The XGBoost model reported here showed a good prognostic prediction ability for PDCI of COVID-19 survivors during the recovery period. Among the interpretation methods based on the importance of SHAP values, Hb and MVV contributed the most to the prediction of PDCI outcomes of COVID-19 survivors in the recovery period.
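The performance indicators named in this abstract (accuracy, recall, precision/PPV, NPV, F1) all follow directly from the binary confusion matrix. The sketch below shows the standard formulas; it is illustrative only, not the study's evaluation code.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts:
    tp/fp/tn/fn = true/false positives and negatives."""
    precision = tp / (tp + fp)                 # identical to PPV
    recall = tp / (tp + fn)                    # sensitivity
    npv = tn / (tn + fn)                       # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "npv": npv, "f1": f1}
```

With tp=50, fp=10, tn=30, fn=10 this gives accuracy 0.80 and precision = recall = F1 = 5/6 ≈ 0.833.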
Affiliation(s)
- Fu-Qiang Ma: Hubei University of Chinese Medicine, Wuhan, 430065, China
- Cong He: Hubei Provincial Hospital of Traditional Chinese Medicine, Wuhan, 430061, China; Affiliated Hospital of Hubei University of Traditional Chinese Medicine, Wuhan, 430061, China; Hubei Province Academy of Traditional Chinese Medicine, Wuhan, 430074, China
- Hao-Ran Yang: School of Software, HuaZhong University of Science and Technology, Wuhan, 430074, China
- Zuo-Wei Hu: Wuhan No.1 Hospital, Wuhan, 430022, China
- He-Rong Mao: Hubei University of Chinese Medicine, Wuhan, 430065, China
- Cun-Yu Fan: Hubei Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Wuhan, 430015, China
- Yu Qi: Hubei University of Chinese Medicine, Wuhan, 430065, China
- Ji-Xian Zhang: Hubei Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Wuhan, 430015, China
- Bo Xu: Hubei University of Chinese Medicine, Wuhan, 430065, China
21.
Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023;11:2388. PMID: 37685422; PMCID: PMC10486542; DOI: 10.3390/healthcare11172388.
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh: 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy: School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi: Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
22.
Zaeri N. Artificial intelligence and machine learning responses to COVID-19 related inquiries. J Med Eng Technol 2023;47:301-320. PMID: 38625639; DOI: 10.1080/03091902.2024.2321846.
Abstract
Researchers and scientists can use computational-based models to turn linked data into useful information, aiding in disease diagnosis, examination, and viral containment due to recent artificial intelligence and machine learning breakthroughs. In this paper, we extensively study the role of artificial intelligence and machine learning in delivering efficient responses to the COVID-19 pandemic almost four years after its start. In this regard, we examine a large number of critical studies conducted by various academic and research communities from multiple disciplines, as well as practical implementations of artificial intelligence algorithms that suggest potential solutions in investigating different COVID-19 decision-making scenarios. We identify numerous areas where artificial intelligence and machine learning can impact this context, including diagnosis (using chest X-ray imaging and CT imaging), severity, tracking, treatment, and the drug industry. Furthermore, we analyse the dilemma's limits, restrictions, and hazards.
Affiliation(s)
- Naser Zaeri: Faculty of Computer Studies, Arab Open University, Kuwait
23.
Zakariaee SS, Naderi N, Ebrahimi M, Kazemi-Arpanahi H. Comparing machine learning algorithms to predict COVID-19 mortality using a dataset including chest computed tomography severity score data. Sci Rep 2023;13:11343. PMID: 37443373; PMCID: PMC10345104; DOI: 10.1038/s41598-023-38133-6.
Abstract
Since the beginning of the COVID-19 pandemic, new and non-invasive digital technologies such as artificial intelligence (AI) had been introduced for mortality prediction of COVID-19 patients. The prognostic performances of the machine learning (ML)-based models for predicting clinical outcomes of COVID-19 patients had been mainly evaluated using demographics, risk factors, clinical manifestations, and laboratory results. There is a lack of information about the prognostic role of imaging manifestations in combination with demographics, clinical manifestations, and laboratory predictors. The purpose of the present study is to develop an efficient ML prognostic model based on a more comprehensive dataset including chest CT severity score (CT-SS). Fifty-five primary features in six main classes were retrospectively reviewed for 6854 suspected cases. The independence test of Chi-square was used to determine the most important features in the mortality prediction of COVID-19 patients. The most relevant predictors were used to train and test ML algorithms. The predictive models were developed using eight ML algorithms including the J48 decision tree (J48), support vector machine (SVM), multi-layer perceptron (MLP), k-nearest neighbourhood (k-NN), Naïve Bayes (NB), logistic regression (LR), random forest (RF), and eXtreme gradient boosting (XGBoost). The performances of the predictive models were evaluated using accuracy, precision, sensitivity, specificity, and area under the ROC curve (AUC) metrics. After applying the exclusion criteria, a total of 815 positive RT-PCR patients were the final sample size, where 54.85% of the patients were male and the mean age of the study population was 57.22 ± 16.76 years. The RF algorithm with an accuracy of 97.2%, the sensitivity of 100%, a precision of 94.8%, specificity of 94.5%, F1-score of 97.3%, and AUC of 99.9% had the best performance. 
Other ML algorithms, with AUCs ranging from 81.2% to 93.9%, also performed well in predicting COVID-19 mortality. The results showed that timely and accurate risk stratification of COVID-19 patients can be performed using ML-based predictive models fed by routine data. The proposed algorithm, trained on the more comprehensive dataset including CT-SS, could efficiently predict the mortality of COVID-19 patients. This could enable prompt targeting of high-risk patients on admission, optimal use of hospital resources, and an increased probability of patient survival.
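The chi-square independence test used above to rank candidate predictors scores each feature against the outcome; for a binary feature versus mortality this reduces to a 2x2 contingency table. A minimal sketch of the Pearson statistic (illustrative, not the study's code):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table,
    e.g. rows = feature present/absent, columns = died/survived.
    Larger values indicate stronger feature-outcome association."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n  # expected count under independence
            observed = table[i][j]
            stat += (observed - expected) ** 2 / expected
    return stat
```

Features can then be ranked by this statistic (or its p-value) and the top-scoring ones passed to the ML algorithms.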
Affiliation(s)
- Negar Naderi: Department of Midwifery, Ilam University of Medical Sciences, Ilam, Iran
- Mahdi Ebrahimi: Department of Emergency Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Hadi Kazemi-Arpanahi: Department of Health Information Technology, Abadan University of Medical Sciences, Abadan, Iran
24.
Casey AE, Ansari S, Nakisa B, Kelly B, Brown P, Cooper P, Muhammad I, Livingstone S, Reddy S, Makinen VP. Application of a Comprehensive Evaluation Framework to COVID-19 Studies: Systematic Review of Translational Aspects of Artificial Intelligence in Health Care. JMIR AI 2023;2:e42313. PMID: 37457747; PMCID: PMC10337329; DOI: 10.2196/42313.
Abstract
Background Despite immense progress in artificial intelligence (AI) models, there has been limited deployment in health care environments. The gap between potential and actual AI applications is likely due to the lack of translatability between controlled research environments (where these models are developed) and clinical environments for which the AI tools are ultimately intended. Objective We previously developed the Translational Evaluation of Healthcare AI (TEHAI) framework to assess the translational value of AI models and to support successful transition to health care environments. In this study, we applied the TEHAI framework to the COVID-19 literature in order to assess how well translational topics are covered. Methods A systematic literature search for COVID-19 AI studies published between December 2019 and December 2020 resulted in 3830 records. A subset of 102 (2.7%) papers that passed the inclusion criteria was sampled for full review. The papers were assessed for translational value and descriptive data collected by 9 reviewers (each study was assessed by 2 reviewers). Evaluation scores and extracted data were compared by a third reviewer for resolution of discrepancies. The review process was conducted on the Covidence software platform. Results We observed a significant trend for studies to attain high scores for technical capability but low scores for the areas essential for clinical translatability. Specific questions regarding external model validation, safety, nonmaleficence, and service adoption received failed scores in most studies. Conclusions Using TEHAI, we identified notable gaps in how well translational topics of AI models are covered in the COVID-19 clinical sphere. These gaps in areas crucial for clinical translatability could, and should, be considered already at the model development stage to increase translatability into real COVID-19 health care environments.
Affiliation(s)
- Aaron Edward Casey: South Australian Health and Medical Research Institute, Adelaide, Australia; Australian Centre for Precision Health, Cancer Research Institute, University of South Australia, Adelaide, Australia
- Saba Ansari: School of Medicine, Deakin University, Geelong, Australia
- Bahareh Nakisa: School of Information Technology, Deakin University, Geelong, Australia
- Paul Cooper: School of Medicine, Deakin University, Geelong, Australia
- Sandeep Reddy: School of Medicine, Deakin University, Geelong, Australia
- Ville-Petteri Makinen: South Australian Health and Medical Research Institute, Adelaide, Australia; Australian Centre for Precision Health, Cancer Research Institute, University of South Australia, Adelaide, Australia; Computational Medicine, Faculty of Medicine, University of Oulu, Oulu, Finland; Centre for Life Course Health Research, Faculty of Medicine, University of Oulu, Oulu, Finland
25.
Bradshaw TJ, Huemann Z, Hu J, Rahmim A. A Guide to Cross-Validation for Artificial Intelligence in Medical Imaging. Radiol Artif Intell 2023;5:e220232. PMID: 37529208; PMCID: PMC10388213; DOI: 10.1148/ryai.220232.
Abstract
Artificial intelligence (AI) is being increasingly used to automate and improve technologies within the field of medical imaging. A critical step in the development of an AI algorithm is estimating its prediction error through cross-validation (CV). The use of CV can help prevent overoptimism in AI algorithms and can mitigate certain biases associated with hyperparameter tuning and algorithm selection. This article introduces the principles of CV and provides a practical guide on the use of CV for AI algorithm development in medical imaging. Different CV techniques are described, as well as their advantages and disadvantages under different scenarios. Common pitfalls in prediction error estimation and guidance on how to avoid them are also discussed. Keywords: Education, Research Design, Technical Aspects, Statistics, Supervised Learning, Convolutional Neural Network (CNN) Supplemental material is available for this article. © RSNA, 2023.
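The fold-splitting step at the core of CV can be sketched as follows; this is an illustrative reconstruction using only the standard library, not code from the article.

```python
import random

def kfold_indices(n_samples, k, seed=0):
    """Yield (train, test) index lists for k-fold cross-validation:
    shuffle once, partition into k disjoint folds, and hold each fold
    out in turn while training on the remaining k-1 folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducible folds
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Because every sample lands in exactly one held-out fold, averaging the k fold-wise error estimates yields the cross-validated estimate of prediction error that the article discusses.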
26.
Dumakude A, Ezugwu AE. Automated COVID-19 detection with convolutional neural networks. Sci Rep 2023;13:10607. PMID: 37391527; PMCID: PMC10313722; DOI: 10.1038/s41598-023-37743-4.
Abstract
This paper focuses on addressing the urgent need for efficient and accurate automated screening tools for COVID-19 detection. Inspired by existing research efforts, we propose two framework models to tackle this challenge. The first model combines a conventional CNN architecture as a feature extractor with XGBoost as the classifier. The second model utilizes a classical CNN architecture with a Feedforward Neural Network for classification. The key distinction between the two models lies in their classification layers. Bayesian optimization techniques are employed to optimize the hyperparameters of both models, enabling a "cheat-start" to the training process with optimal configurations. To mitigate overfitting, regularization techniques such as Dropout and Batch normalization are incorporated. The CovidxCT-2A dataset is used for training, validation, and testing purposes. To establish a benchmark, we compare the performance of our models with state-of-the-art methods reported in the literature. Evaluation metrics including Precision, Recall, Specificity, Accuracy, and F1-score are employed to assess the efficacy of the models. The hybrid model demonstrates impressive results, achieving high precision (98.43%), recall (98.41%), specificity (99.26%), accuracy (99.04%), and F1-score (98.42%). The standalone CNN model exhibits slightly lower but still commendable performance, with precision (98.25%), recall (98.44%), specificity (99.27%), accuracy (98.97%), and F1-score (98.34%). Importantly, both models outperform five other state-of-the-art models in terms of classification accuracy, as demonstrated by the results of this study.
Affiliation(s)
- Aphelele Dumakude: School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, 3201, KwaZulu-Natal, South Africa
- Absalom E Ezugwu: Unit for Data Science and Computing, North-West University, 11 Hoffman Street, Potchefstroom, 2520, South Africa
27.
Yu Y, Cao Y, Wang G, Pang Y, Lang L. Optical Diffractive Convolutional Neural Networks Implemented in an All-Optical Way. Sensors (Basel) 2023;23:5749. PMID: 37420913; DOI: 10.3390/s23125749.
Abstract
Optical neural networks can effectively address hardware constraints and parallel computing efficiency issues inherent in electronic neural networks. However, the inability to implement convolutional neural networks at the all-optical level remains a hurdle. In this work, we propose an optical diffractive convolutional neural network (ODCNN) that is capable of performing image processing tasks in computer vision at the speed of light. We explore the application of the 4f system and the diffractive deep neural network (D2NN) in neural networks. ODCNN is then simulated by combining the 4f system as an optical convolutional layer and the diffractive networks. We also examine the potential impact of nonlinear optical materials on this network. Numerical simulation results show that the addition of convolutional layers and nonlinear functions improves the classification accuracy of the network. We believe that the proposed ODCNN model can be the basic architecture for building optical convolutional networks.
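The 4f optical convolutional layer referenced above exploits the convolution theorem: the first lens produces the Fourier transform of the input, a mask in the Fourier plane multiplies the spectrum, and the second lens transforms back, yielding a convolution at the speed of light. A numerical sketch of that equivalence (illustrative only, not the authors' simulation code):

```python
import numpy as np

def fourier_convolve(image, kernel):
    """Circular convolution via pointwise multiplication in the Fourier
    plane: the numerical analogue of a 4f system with the kernel's
    spectrum as the Fourier-plane mask."""
    spectrum = np.fft.fft2(image) * np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(spectrum))

def circular_convolve_direct(image, kernel):
    """Direct-space circular convolution, to check the Fourier route."""
    h, w = image.shape
    padded = np.zeros((h, w))
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            for m in range(h):
                for n in range(w):
                    out[i, j] += image[m, n] * padded[(i - m) % h, (j - n) % w]
    return out
```

Both routes give the same result, which is exactly what lets an optical Fourier plane stand in for a digital convolutional layer.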
Affiliation(s)
- Yaze Yu: School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China; Center for Advanced Laser Technology, Hebei University of Technology, Tianjin 300401, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin 300401, China
- Yang Cao: Center for Advanced Laser Technology, Hebei University of Technology, Tianjin 300401, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin 300401, China
- Gong Wang: Center for Advanced Laser Technology, Hebei University of Technology, Tianjin 300401, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin 300401, China
- Yajun Pang: Center for Advanced Laser Technology, Hebei University of Technology, Tianjin 300401, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin 300401, China
- Liying Lang: Center for Advanced Laser Technology, Hebei University of Technology, Tianjin 300401, China; Hebei Key Laboratory of Advanced Laser Technology and Equipment, Tianjin 300401, China
28.
Meng F, Kottlors J, Shahzad R, Liu H, Fervers P, Jin Y, Rinneburger M, Le D, Weisthoff M, Liu W, Ni M, Sun Y, An L, Huai X, Móré D, Giannakis A, Kaltenborn I, Bucher A, Maintz D, Zhang L, Thiele F, Li M, Perkuhn M, Zhang H, Persigehl T. AI support for accurate and fast radiological diagnosis of COVID-19: an international multicenter, multivendor CT study. Eur Radiol 2023;33:4280-4291. PMID: 36525088; PMCID: PMC9755771; DOI: 10.1007/s00330-022-09335-9.
Abstract
OBJECTIVES Differentiation between COVID-19 and community-acquired pneumonia (CAP) in computed tomography (CT) is a task that can be performed by human radiologists and artificial intelligence (AI). The present study aims to (1) develop an AI algorithm for differentiating COVID-19 from CAP and (2) evaluate its performance. (3) Evaluate the benefit of using the AI result as assistance for radiological diagnosis and the impact on relevant parameters such as accuracy of the diagnosis, diagnostic time, and confidence. METHODS We included n = 1591 multicenter, multivendor chest CT scans and divided them into AI training and validation datasets to develop an AI algorithm (n = 991 CT scans; n = 462 COVID-19, and n = 529 CAP) from three centers in China. An independent Chinese and German test dataset of n = 600 CT scans from six centers (COVID-19 / CAP; n = 300 each) was used to test the performance of eight blinded radiologists and the AI algorithm. A subtest dataset (180 CT scans; n = 90 each) was used to evaluate the radiologists' performance without and with AI assistance to quantify changes in diagnostic accuracy, reporting time, and diagnostic confidence. RESULTS The diagnostic accuracy of the AI algorithm in the Chinese-German test dataset was 76.5%. Without AI assistance, the eight radiologists' diagnostic accuracy was 79.1% and increased with AI assistance to 81.5%, going along with significantly shorter decision times and higher confidence scores. CONCLUSION This large multicenter study demonstrates that AI assistance in CT-based differentiation of COVID-19 and CAP increases radiological performance with higher accuracy and specificity, faster diagnostic time, and improved diagnostic confidence. KEY POINTS • AI can help radiologists to get higher diagnostic accuracy, make faster decisions, and improve diagnostic confidence. 
• The China-German multicenter study demonstrates the advantages of a human-machine interaction using AI in clinical radiology for diagnostic differentiation between COVID-19 and CAP in CT scans.
Affiliation(s)
- Fanyang Meng
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Jonathan Kottlors
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Innovative Technology, Philips Healthcare, Aachen, Germany
| | - Haifeng Liu
- Department of Radiology, Wuhan No. 1 Hospital, Wuhan, China
| | - Philipp Fervers
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Yinhua Jin
- Department of Radiology, Ningbo Hwamei Hospital, University of Chinese Academy of Sciences, Wuhan, China
| | - Miriam Rinneburger
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Dou Le
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Mathilda Weisthoff
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Wenyun Liu
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Mengzhe Ni
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Ye Sun
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Liying An
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | | | - Dorottya Móré
- Department of Diagnostic and Interventional Radiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Athanasios Giannakis
- Department of Diagnostic and Interventional Radiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Isabel Kaltenborn
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
| | - Andreas Bucher
- Institute for Diagnostic and Interventional Radiology, Frankfurt University Hospital, Frankfurt, Germany
| | - David Maintz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| | - Lei Zhang
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Frank Thiele
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Innovative Technology, Philips Healthcare, Aachen, Germany
| | - Mingyang Li
- Department of Radiology, The First Hospital of Ji Lin University, No. 1 Xinmin Street, Changchun, 130012, China
| | - Michael Perkuhn
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Innovative Technology, Philips Healthcare, Aachen, Germany
| | - Huimao Zhang
- Department of Radiology, The First Hospital of Jilin University, No. 1 Xinmin Street, Changchun, 130012, China.
| | - Thorsten Persigehl
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
| |
Collapse
|
29
|
Xu J, Cao Z, Miao C, Zhang M, Xu X. Predicting omicron pneumonia severity and outcome: a single-center study in Hangzhou, China. Front Med (Lausanne) 2023; 10:1192376. [PMID: 37305146 PMCID: PMC10250627 DOI: 10.3389/fmed.2023.1192376] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 05/08/2023] [Indexed: 06/13/2023] Open
Abstract
Background In December 2022, there was a large Omicron epidemic in Hangzhou, China. Many people were diagnosed with Omicron pneumonia with variable symptom severity and outcome. Computed tomography (CT) imaging has proven to be an important tool for COVID-19 pneumonia screening and quantification. We hypothesized that CT-based machine learning algorithms can predict disease severity and outcome in Omicron pneumonia, and we compared their performance with that of pneumonia severity index (PSI)-related clinical and biological features. Methods Our study included 238 patients with the Omicron variant who were admitted to our hospital in China from 15 December 2022 to 16 January 2023 (the first wave after the dynamic zero-COVID strategy ended). All patients had a positive real-time polymerase chain reaction (PCR) or lateral flow antigen test for SARS-CoV-2 after vaccination and no previous SARS-CoV-2 infection. We recorded patient baseline information pertaining to demographics, comorbid conditions, vital signs, and available laboratory data. All CT images were processed with a commercial artificial intelligence (AI) algorithm to obtain the volume and percentage of consolidation and infiltration related to Omicron pneumonia. A support vector machine (SVM) model was used to predict disease severity and outcome. Results The receiver operating characteristic (ROC) area under the curve (AUC) of the machine learning classifier using PSI-related features was 0.85 (accuracy = 87.40%, p < 0.001) for predicting severity, while that using CT-based features was only 0.70 (accuracy = 76.47%, p = 0.014). When combined, the AUC did not increase (0.84; accuracy = 84.03%, p < 0.001). Trained for outcome prediction, the classifier reached an AUC of 0.85 using PSI-related features (accuracy = 85.29%, p < 0.001), which was higher than that using CT-based features (AUC = 0.67, accuracy = 75.21%, p < 0.001). When combined, the integrated model showed a slightly higher AUC of 0.86 (accuracy = 86.13%, p < 0.001). Oxygen saturation, IL-6, and CT infiltration were highly important for predicting both severity and outcome. Conclusion Our study provided a comprehensive comparison of baseline chest CT and clinical assessment for disease severity and outcome prediction in Omicron pneumonia. The predictive model accurately predicts the severity and outcome of Omicron infection. Oxygen saturation, IL-6, and infiltration on chest CT were found to be important biomarkers. This approach has the potential to provide frontline physicians with an objective tool to manage Omicron patients more effectively in time-sensitive, stressful, and potentially resource-constrained environments.
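The abstract above summarizes classifier discrimination as ROC AUC. As an illustrative sketch only (not the study's code, and with toy scores rather than any real patient data), the AUC of a binary classifier can be computed directly from predicted scores via the rank-based Mann-Whitney formulation:

```python
def roc_auc(labels, scores):
    """Rank-based ROC AUC: fraction of (positive, negative) score pairs
    ranked correctly, with ties counted as half credit.

    labels: iterable of 0/1 ground truth; scores: predicted scores,
    higher meaning more likely positive.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both classes to compute AUC")
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# A perfectly separated toy example yields the maximal AUC of 1.0.
print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # -> 1.0
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the reported 0.85 (PSI features) versus 0.70 (CT features) is a meaningful gap.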
Collapse
Affiliation(s)
- Jingjing Xu
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - Zhengye Cao
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - Chunqin Miao
- Party and Hospital Administration Office, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - Minming Zhang
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - Xiaojun Xu
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
| |
Collapse
|
30
|
Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:jcm12103446. [PMID: 37240552 DOI: 10.3390/jcm12103446] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 04/25/2023] [Accepted: 05/06/2023] [Indexed: 05/28/2023] Open
Abstract
SARS-CoV-2 is a novel virus that has affected the global population by spreading rapidly and causing severe complications that require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could be an important and useful aid. Radiologists and clinicians could potentially rely on interpretable AI technologies for the diagnosis and monitoring of COVID-19 patients. This paper aims to provide a comprehensive analysis of state-of-the-art deep learning techniques for COVID-19 classification. Previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers present a variety of CNN models and architectures developed to provide accurate and fast automatic tools for diagnosing COVID-19 from CT scans or X-ray images. In this systematic review, we focus on the critical components of the deep learning approach, such as network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies published since the virus began to spread, and we summarize their efforts. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics to safely implement current AI studies in medical practice.
Collapse
Affiliation(s)
- Min-Ho Lee
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
| | - Adai Shomanov
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
| | - Madina Kudaibergenova
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
| | - Dmitriy Viderman
- School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan
| |
Collapse
|
31
|
Qiao P, Li H, Song G, Han H, Gao Z, Tian Y, Liang Y, Li X, Zhou SK, Chen J. Semi-Supervised CT Lesion Segmentation Using Uncertainty-Based Data Pairing and SwapMix. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1546-1562. [PMID: 37015649 DOI: 10.1109/tmi.2022.3232572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Semi-supervised learning (SSL) methods have shown powerful performance in dealing with data shortage in medical image segmentation. However, existing SSL methods still suffer from unreliable predictions on unannotated data due to the lack of manual annotations. In this paper, we propose an unreliability-diluted consistency training (UDiCT) mechanism that dilutes the unreliability in SSL by assembling reliable annotated data into unreliable unannotated data. Specifically, we first propose an uncertainty-based data pairing module that pairs annotated data with unannotated data based on a complementary uncertainty pairing rule, which avoids pairing off two hard samples. Secondly, we develop SwapMix, a mixed-sample data augmentation method, to integrate annotated data into unannotated data so that our model is trained in a low-unreliability manner. Finally, UDiCT is trained by minimizing a supervised loss and an unreliability-diluted consistency loss, which makes our model robust to diverse backgrounds. Extensive experiments on three chest CT datasets show the effectiveness of our method for semi-supervised CT lesion segmentation.
Collapse
|
32
|
Karbasi Z, Gohari SH, Sabahi A. Bibliometric analysis of the use of artificial intelligence in COVID-19 based on scientific studies. Health Sci Rep 2023; 6:e1244. [PMID: 37152228 PMCID: PMC10158785 DOI: 10.1002/hsr2.1244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 04/11/2023] [Accepted: 04/16/2023] [Indexed: 05/09/2023] Open
Abstract
Background and Aims Citation analysis is one strategy researchers use for research planning: an article referred to by another article receives a "citation." Bibliometric analysis makes it possible to investigate the development of research areas and authors' influence. The current study aimed to identify and analyze the characteristics of the 100 most highly cited articles on the use of artificial intelligence in relation to COVID-19. Methods On July 27, 2022, this database was searched using the keywords "artificial intelligence" and "COVID-19" in the topic field. All retrieved articles were sorted by number of citations, and the 100 most highly cited articles were included. The following data were extracted: year of publication, type of study, name of journal, country, number of citations, language, and keywords. Results The average number of citations for the 100 highly cited articles was 138.54. The top three articles received 745, 596, and 549 citations, respectively. The top 100 articles were all in English and were published in 2020 and 2021. China was the most prolific country with 19 articles, followed by the United States with 15 articles and India with 10 articles. Conclusion The current bibliometric analysis demonstrated the significant growth of the use of artificial intelligence for COVID-19. Using these results, research priorities can be defined more clearly, and researchers can focus on hot topics.
Collapse
Affiliation(s)
- Zahra Karbasi
- Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Department of Health Information Sciences, Faculty of Management and Medical Information Sciences, Kerman University of Medical Sciences, Kerman, Iran
| | - Sadrieh H. Gohari
- Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
| | - Azam Sabahi
- Department of Health Information Technology, Ferdows School of Health and Allied Medical Sciences, Birjand University of Medical Sciences, Birjand, Iran
| |
Collapse
|
33
|
Rehman A, Xing H, Adnan Khan M, Hussain M, Hussain A, Gulzar N. Emerging technologies for COVID (ET-CoV) detection and diagnosis: Recent advancements, applications, challenges, and future perspectives. Biomed Signal Process Control 2023; 83:104642. [PMID: 36818992 PMCID: PMC9917176 DOI: 10.1016/j.bspc.2023.104642] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 11/29/2022] [Accepted: 01/25/2023] [Indexed: 02/12/2023]
Abstract
In light of the constantly changing terrain of the COVID outbreak, medical specialists have implemented proactive schemes for vaccine production. Despite remarkable COVID-19 vaccine development, the virus has mutated into new variants, including Delta and Omicron. Currently, the situation is critical in many parts of the world, and precautions are being taken to stop the virus from spreading and mutating. Early identification and diagnosis of COVID-19 are the main challenges faced by emerging technologies during the outbreak. In these circumstances, emerging technologies for tackling the coronavirus have proven valuable. Artificial intelligence (AI), big data, the internet of medical things (IoMT), robotics, blockchain technology, telemedicine, smart applications, and additive manufacturing are promising for detecting, classifying, monitoring, and locating COVID-19. This research therefore surveys these COVID-19-defeating technologies, focusing on their strengths and limitations. A CiteSpace-based bibliometric analysis of the emerging technologies was conducted, and the most impactful keywords and ongoing research frontiers were compiled. Emerging technologies were hampered by data inconsistency, redundant and noisy datasets, and the inability to aggregate data across disparate formats. Moreover, the privacy and confidentiality of patient medical records are not guaranteed. Hence, significant data analysis is required to develop an intelligent computational model for effective and quick clinical diagnosis of COVID-19. This article outlines how emerging technology has been used to counteract the virus disaster and identifies ongoing research frontiers, directing readers to the real challenges and thus facilitating further work to amplify emerging technologies.
Collapse
Affiliation(s)
- Amir Rehman
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Huanlai Xing
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Muhammad Adnan Khan
- Pattern Recognition and Machine Learning, Department of Software, Gachon University, Seongnam 13557, Republic of Korea
- Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
| | - Mehboob Hussain
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Abid Hussain
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
| | - Nighat Gulzar
- School of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu, 611756, China
| |
Collapse
|
34
|
Han J, Montagna M, Grammenos A, Xia T, Bondareva E, Siegele-Brown C, Chauhan J, Dang T, Spathis D, Floto A, Cicuta P, Mascolo C. Evaluating Listening Performance for COVID-19 Detection by Clinicians and Machine Learning: A Comparative Study. J Med Internet Res 2023; 25:e44804. [PMID: 37126593 DOI: 10.2196/44804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Revised: 04/26/2023] [Accepted: 04/28/2023] [Indexed: 05/03/2023] Open
Abstract
BACKGROUND To date, performance comparisons between humans and machines have been performed in many health domains. Yet, comparisons between machine learning models and human performance in audio-based respiratory diagnosis remain largely unexplored. OBJECTIVE The primary objective of this study is to compare human clinicians and a machine learning model in predicting COVID-19 from respiratory sound recordings. METHODS Prediction performance on 24 audio samples (12 tested positive), made by 36 clinicians with experience in treating COVID-19 or other respiratory illnesses, is compared with predictions made by a machine learning model trained on 1,162 samples. Each sample consists of voice, cough, and breathing sound recordings from one subject, and each sample is around 20 seconds long. We also investigated whether combining the predictions of the model and the human experts could further enhance performance, in terms of both accuracy and confidence. RESULTS The machine learning model outperformed the clinicians, yielding a sensitivity of 0.75 and a specificity of 0.83, while the best performance achieved by a clinician was a sensitivity of 0.67 and a specificity of 0.75. Integrating the clinicians' and the model's predictions, however, enhanced performance further, achieving a sensitivity of 0.83 and a specificity of 0.92. CONCLUSIONS Our findings suggest that clinicians and the machine learning model could make better clinical decisions via a cooperative approach and achieve higher confidence in audio-based respiratory diagnosis.
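The sensitivity and specificity figures quoted above come straight from a confusion matrix over the 24-sample test set. A minimal sketch (illustrative only; the predictions below are fabricated to reproduce the reported model figures, not the study's actual data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# 12 positive / 12 negative samples, mirroring the study's 24-sample test set;
# 9/12 positives and 10/12 negatives called correctly gives 0.75 / ~0.83.
y_true = [1] * 12 + [0] * 12
y_pred = [1] * 9 + [0] * 3 + [0] * 10 + [1] * 2
sens, spec = sensitivity_specificity(y_true, y_pred)
print(round(sens, 2), round(spec, 2))  # -> 0.75 0.83
```

With only 12 samples per class, each additional correct call moves either metric by about 0.08, which is worth keeping in mind when comparing the clinician and model figures.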
Collapse
Affiliation(s)
- Jing Han
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
| | | | - Andreas Grammenos
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
| | - Tong Xia
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
| | - Erika Bondareva
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
| | | | | | - Ting Dang
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
| | - Dimitris Spathis
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
| | - Andres Floto
- Department of Medicine, University of Cambridge, Cambridge, GB
| | - Pietro Cicuta
- Department of Physics, University of Cambridge, Cambridge, GB
| | - Cecilia Mascolo
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Ave, Cambridge, GB
| |
Collapse
|
35
|
Alqaissi E, Alotaibi F, Ramzan MS. Graph data science and machine learning for the detection of COVID-19 infection from symptoms. PeerJ Comput Sci 2023; 9:e1333. [PMID: 37346701 PMCID: PMC10280642 DOI: 10.7717/peerj-cs.1333] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Accepted: 03/16/2023] [Indexed: 06/23/2023]
Abstract
Background COVID-19 is an infectious disease caused by SARS-CoV-2. Its symptoms range from mild to moderate respiratory illness, and it sometimes requires urgent treatment. It is therefore crucial to detect COVID-19 at an early stage through specific clinical tests, testing kits, and medical devices. However, these tests are not always available during a pandemic. This study therefore developed an automatic, intelligent, rapid, and real-time diagnostic model for the early detection of COVID-19 based on its symptoms. Methods A COVID-19 knowledge graph (KG), constructed from heterogeneous literature data, was imported to understand the different COVID-19 relations. We added the human disease ontology to the COVID-19 KG and applied a node-embedding graph algorithm called fast random projection to extract an extra feature from the COVID-19 dataset. Subsequently, experiments were conducted using two machine learning (ML) pipelines to predict COVID-19 infection from its symptoms, with automatic tuning of the model hyperparameters. Results We compared two graph-based ML models: logistic regression (LR) and random forest (RF). The proposed graph-based RF model achieved a small error rate (0.0064) and the best scores on all performance metrics, including specificity = 98.71%, accuracy = 99.36%, precision = 99.65%, recall = 99.53%, and F1-score = 99.59%. Furthermore, the Matthews correlation coefficient achieved by the RF model was higher than that of the LR model. Comparative analysis with other ML algorithms and with studies from the literature showed that the proposed RF model exhibited the best detection accuracy. Conclusion The graph-based RF model registered high performance in classifying the symptoms of COVID-19 infection, indicating that graph data science, in conjunction with ML techniques, helps improve performance and accelerate innovation.
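The abstract reports specificity, accuracy, precision, recall, F1, and the Matthews correlation coefficient (MCC). As a hedged sketch of how this metric suite falls out of a single binary confusion matrix (pure stdlib Python, not the authors' pipeline; the counts below are made up):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Return common classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    # MCC balances all four cells, so it stays informative on imbalanced data.
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "mcc": mcc}

m = binary_metrics(tp=90, fp=10, tn=90, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

Note how the toy matrix gives 0.9 on every rate-style metric but an MCC of 0.8, which is why the abstract reports MCC separately from accuracy and F1.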
Collapse
Affiliation(s)
- Eman Alqaissi
- Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Information Systems, King Khalid University, Abha, Saudi Arabia
| | - Fahd Alotaibi
- Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
| | - Muhammad Sher Ramzan
- Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
| |
Collapse
|
36
|
Khattab R, Abdelmaksoud IR, Abdelrazek S. Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey. NEW GENERATION COMPUTING 2023; 41:343-400. [PMID: 37229176 PMCID: PMC10071474 DOI: 10.1007/s00354-023-00213-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 02/23/2023] [Indexed: 05/27/2023]
Abstract
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed places of worship and shops, prevented gatherings, and implemented curfews to stand against the spread of COVID-19. Deep learning (DL) and artificial intelligence (AI) can play a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-ray, computed tomography (CT), and ultrasound (US) images. This could help in identifying COVID-19 cases as a first step toward treating them. In this paper, we review the research studies conducted from January 2020 to September 2022 on deep learning models used for COVID-19 detection. This paper describes the three most common imaging modalities (X-ray, CT, and US) as well as the DL approaches used for this detection, and compares these approaches. It also provides future directions for this field in the fight against COVID-19.
Collapse
Affiliation(s)
- Rana Khattab
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
| | - Islam R. Abdelmaksoud
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
| | - Samir Abdelrazek
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
| |
Collapse
|
37
|
Li K, Chen C, Cao W, Wang H, Han S, Wang R, Ye Z, Wu Z, Wang W, Cai L, Ding D, Yuan Z. DeAF: A multimodal deep learning framework for disease prediction. Comput Biol Med 2023; 156:106715. [PMID: 36867898 DOI: 10.1016/j.compbiomed.2023.106715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 02/05/2023] [Accepted: 02/26/2023] [Indexed: 03/05/2023]
Abstract
Multimodal deep learning models have been applied to disease prediction tasks, but training is difficult due to conflicts between sub-models and fusion modules. To alleviate this issue, we propose a framework for decoupling feature alignment and fusion (DeAF), which separates multimodal model training into two stages. In the first stage, unsupervised representation learning is conducted, and a modality adaptation (MA) module is used to align the features from the various modalities. In the second stage, a self-attention fusion (SAF) module combines the medical image features and the clinical data using supervised learning. We apply the DeAF framework to predict the postoperative efficacy of CRS for colorectal cancer and whether patients with mild cognitive impairment (MCI) progress to Alzheimer's disease. The DeAF framework achieves a significant improvement over previous methods. Furthermore, extensive ablation experiments demonstrate the rationality and effectiveness of our framework. In conclusion, our framework enhances the interaction between local medical image features and clinical data and derives more discriminative multimodal features for disease prediction. The framework implementation is available at https://github.com/cchencan/DeAF.
Collapse
Affiliation(s)
- Kangshun Li
- College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China.
| | - Can Chen
- College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China
| | - Wuteng Cao
- Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-Sen University, Guangzhou, 510000, China
| | - Hui Wang
- Department of Colorectal Surgery, Department of General Surgery, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510000, China
| | - Shuai Han
- General Surgery Center, Department of Gastrointestinal Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510000, China
| | - Renjie Wang
- Department of Colorectal Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200000, China
| | - Zaisheng Ye
- Department of Gastrointestinal Surgical Oncology, Fujian Cancer Hospital and Fujian Medical University Cancer Hospital, Fuzhou, 350000, China
| | - Zhijie Wu
- Department of Colorectal Surgery, Department of General Surgery, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510000, China
| | - Wenxiang Wang
- College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China
| | - Leng Cai
- College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China
| | - Deyu Ding
- Department of Economics, University of Konstanz, Konstanz, Germany
| | - Zixu Yuan
- Department of Colorectal Surgery, Department of General Surgery, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510000, China.
| |
Collapse
|
38
|
Gürsoy E, Kaya Y. An overview of deep learning techniques for COVID-19 detection: methods, challenges, and future works. MULTIMEDIA SYSTEMS 2023; 29:1603-1627. [PMID: 37261262 PMCID: PMC10039775 DOI: 10.1007/s00530-023-01083-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 03/20/2023] [Indexed: 06/02/2023]
Abstract
The World Health Organization (WHO) declared a pandemic in response to the coronavirus COVID-19 in 2020, which resulted in numerous deaths worldwide. Although the disease appears to have lost its impact, millions of people have been affected by this virus, and new infections still occur. Identifying COVID-19 requires a reverse transcription-polymerase chain reaction test (RT-PCR) or analysis of medical data. Due to the high cost and time required to scan and analyze medical data, researchers are focusing on using automated computer-aided methods. This review examines the applications of deep learning (DL) and machine learning (ML) in detecting COVID-19 using medical data such as CT scans, X-rays, cough sounds, MRIs, ultrasound, and clinical markers. First, the data preprocessing, the features used, and the current COVID-19 detection methods are divided into two subsections, and the studies are discussed. Second, the reported publicly available datasets, their characteristics, and the potential comparison materials mentioned in the literature are presented. Third, a comprehensive comparison is made by contrasting the similar and different aspects of the studies. Finally, the results, gaps, and limitations are summarized to stimulate the improvement of COVID-19 detection methods, and the study concludes by listing some future research directions for COVID-19 classification.
Collapse
Affiliation(s)
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
| | - Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
| |
Collapse
|
39
|
Han X, Chen J, Chen L, Jia X, Fan Y, Zheng Y, Alwalid O, Liu J, Li Y, Li N, Gu J, Wang J, Shi H. Comparative Analysis of Clinical and CT Findings in Patients with SARS-CoV-2 Original Strain, Delta and Omicron Variants. Biomedicines 2023; 11:biomedicines11030901. [PMID: 36979880 PMCID: PMC10046064 DOI: 10.3390/biomedicines11030901] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 03/07/2023] [Accepted: 03/10/2023] [Indexed: 03/17/2023] Open
Abstract
Objectives: To compare the clinical characteristics and chest CT findings of patients infected with the Omicron and Delta variants and the original strain of COVID-19. Methods: A total of 503 patients infected with the original strain (245 cases), the Delta variant (90 cases), and the Omicron variant (168 cases) were retrospectively analyzed. Differences in clinical severity and chest CT findings were analyzed. We also compared infection severity across vaccination statuses and quantified pneumonia with a deep-learning approach. Results: The rate of severe disease decreased significantly from the original strain to the Delta variant and the Omicron variant (27% vs. 10% vs. 4.8%, p < 0.001). In the Omicron group, 44% (73/168) of CT scans were categorized as abnormal, compared with 81% (73/90) in the Delta group and 96% (235/245, p < 0.05) in the original-strain group. The AI-evaluated total CT score, lesion volume, and lesion CT value all showed gradually decreasing trends across the groups (p < 0.001 for all). Omicron patients who received a booster vaccine had lower clinical severity (p = 0.015) and a lower rate of lung involvement than those without a booster (36% vs. 57%, p = 0.009). Conclusions: Compared with the original strain and the Delta variant, the Omicron variant was associated with lower clinical severity and less lung injury on CT scans.
Collapse
Affiliation(s)
- Xiaoyu Han
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
| | - Jingze Chen
- Department of Pharmacy, Wuhan Jinyintan Hospital, Wuhan 430022, China
| | - Lu Chen
- Department of Radiology, Wuhan Jinyintan Hospital, Wuhan 430022, China
| | - Xi Jia
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
| | - Yanqing Fan
- Department of Radiology, Wuhan Jinyintan Hospital, Wuhan 430022, China
| | - Yuting Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
| | - Osamah Alwalid
- Department of Diagnostic Imaging, Sidra Medicine, Doha 26999, Qatar
| | - Jie Liu
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
| | - Yumin Li
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
| | - Na Li
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
| | - Jin Gu
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
| | - Jiangtao Wang
- Xiangyang Central Hospital, Affiliated Hospital of Hubei University of Arts and Science, Xiangyang 441021, China
- Correspondence: (J.W.); (H.S.)
| | - Heshui Shi
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Correspondence: (J.W.); (H.S.)
| |
Collapse
|
40
|
Patel RK, Kashyap M. Machine learning-based lung disease diagnosis from CT images using Gabor features in Littlewood-Paley empirical wavelet transform (LPEWT) and LLE. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2023. [DOI: 10.1080/21681163.2023.2187244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
Affiliation(s)
- Rajneesh Kumar Patel
- Department of Electronics & Communication, Maulana Azad National Institute of Technology, Bhopal (M.P.), India
- Manish Kashyap
- Department of Electronics & Communication, Maulana Azad National Institute of Technology, Bhopal (M.P.), India
|
41
|
da Silveira TLT, Pinto PGL, Lermen TS, Jung CR. Omnidirectional 2.5D representation for COVID-19 diagnosis using chest CTs. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION 2023; 91:103775. [PMID: 36741546 PMCID: PMC9886432 DOI: 10.1016/j.jvcir.2023.103775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 01/18/2023] [Accepted: 01/27/2023] [Indexed: 06/18/2023]
Abstract
The Coronavirus Disease 2019 (COVID-19) has drastically overwhelmed most countries in the last two years, and image-based approaches using computed tomography (CT) have been used to identify pulmonary infections. Recent methods based on deep learning either require time-consuming per-slice annotations (2D) or are highly data- and hardware-demanding (3D). This work proposes a novel omnidirectional 2.5D representation of volumetric chest CTs that allows exploring efficient 2D deep learning architectures while requiring only volume-level annotations. Our learning approach uses a Siamese feature extraction backbone applied to each lung and combines these features into a classification head that explores a novel combination of Squeeze-and-Excite strategies with Class Activation Maps. We experimented with public and in-house datasets and compared our results with state-of-the-art techniques. Our analyses show that our method provides better or comparable prediction quality and accurately distinguishes COVID-19 infections from other kinds of pneumonia and healthy lungs.
Affiliation(s)
- Thiago L T da Silveira
- Institute of Informatics - Federal University of Rio Grande do Sul, Porto Alegre, 91501-970, Brazil
- Paulo G L Pinto
- Institute of Informatics - Federal University of Rio Grande do Sul, Porto Alegre, 91501-970, Brazil
- Thiago S Lermen
- Institute of Informatics - Federal University of Rio Grande do Sul, Porto Alegre, 91501-970, Brazil
- Cláudio R Jung
- Institute of Informatics - Federal University of Rio Grande do Sul, Porto Alegre, 91501-970, Brazil
|
42
|
Suganya D, Kalpana R. Prognosticating various acute covid lung disorders from COVID-19 patient using chest CT Images. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 2023; 119:105820. [PMID: 36644478 PMCID: PMC9829610 DOI: 10.1016/j.engappai.2023.105820] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 12/12/2022] [Accepted: 01/02/2023] [Indexed: 06/17/2023]
Abstract
The global spread of coronavirus illness has surged dramatically, resulting in a catastrophic pandemic situation. Despite this, accurate screening remains a significant challenge due to difficulties in categorizing infection regions and the minuscule difference between typical pneumonia and COVID (Coronavirus Disease) pneumonia. Diagnosing COVID-19 using the Mask Regional Convolutional Neural Network (Mask R-CNN) is proposed to classify chest computed tomography (CT) images into COVID-positive and COVID-negative. COVID-19 directly affects the lungs, damaging the alveoli and leading to various lung complications. By fusing multi-class data, the severity level of patients can be classified using a meta-learning few-shot learning technique with the 50-layer residual network (ResNet-50) as the base classifier, tested on COVID-positive chest CT image data. From these classes, it is possible to predict the onset of acute COVID lung disorders such as sepsis, acute respiratory distress syndrome (ARDS), COVID pneumonia, and COVID bronchitis. The first classification method diagnoses whether the patient is affected by COVID-19; it achieves a mean Average Precision (mAP) of 91.52%, a G-mean of 97.69%, and a classification accuracy of 98.60%. The second classification method detects various acute lung disorders by severity and performs better in all four stages than cutting-edge techniques, with an average accuracy of 95.4%, a multiclass G-mean of 94.02%, and an AUC of 93.27%. It enables healthcare professionals to correctly detect severity for potential treatments.
Affiliation(s)
- Suganya D
- Department of Computer Science and Engineering, Puducherry Technological University, Puducherry 605014, India
- Kalpana R
- Department of Computer Science and Engineering, Puducherry Technological University, Puducherry 605014, India
|
43
|
An J, Du Y, Hong P, Zhang L, Weng X. Insect recognition based on complementary features from multiple views. Sci Rep 2023; 13:2966. [PMID: 36806209 PMCID: PMC9940688 DOI: 10.1038/s41598-023-29600-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Accepted: 02/07/2023] [Indexed: 02/22/2023] Open
Abstract
Insect pest recognition has always been a significant branch of agriculture and ecology. The slight variance in appearance among different kinds of insects makes them hard for human experts to recognize. It is increasingly imperative to finely recognize specific insects by employing machine learning methods. In this study, we proposed a feature fusion network to synthesize feature representations from different backbone models. First, we employed one CNN-based backbone (ResNet) and two attention-based backbones (Vision Transformer and Swin Transformer) to localize the important regions of insect images with Grad-CAM. During this process, we designed new architectures for the two Transformers so that Grad-CAM becomes applicable to such attention-based models. We then proposed an attention-selection mechanism to reconstruct the attention area by delicately integrating the important regions, enabling these partial but key expressions to complement each other. Only the part of the image scope that carries the most crucial decision-making information is needed for insect recognition. We randomly selected 20 species of insects from the IP102 dataset and then adopted all 102 kinds of insects to test the classification performance. Experimental results show that the proposed approach outperforms other advanced CNN-based models. More importantly, our attention-selection mechanism demonstrates good robustness to augmented images.
Affiliation(s)
- Jingmin An
- School of Life Sciences, Northeast Agricultural University, Harbin, China
- State Key Laboratory of Membrane Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, China
- Yong Du
- College of Intelligence and Computing, Tianjin University, Tianjin, China
- School of Electrical and Information Engineering, Northeast Agricultural University, Harbin, China
- Peng Hong
- Software College, Northeastern University, Shenyang, China
- Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., Shenyang, China
- Lei Zhang
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Xiaogang Weng
- School of Life Sciences, Northeast Agricultural University, Harbin, China
|
44
|
Malik H, Anees T, Naeem A, Naqvi RA, Loh WK. Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans. Bioengineering (Basel) 2023; 10:203. [PMID: 36829697 PMCID: PMC9952069 DOI: 10.3390/bioengineering10020203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 01/30/2023] [Accepted: 02/01/2023] [Indexed: 02/09/2023] Open
Abstract
Due to the rapid rate of SARS-CoV-2 dissemination, an informed and effective strategy must be employed to isolate COVID-19. One of the most significant obstacles researchers must overcome in identifying COVID-19 is the rapid propagation of the virus, in addition to the dearth of trustworthy testing models; this remains the most difficult problem for clinicians to deal with. The use of AI in image processing has made the formerly insurmountable challenge of detecting COVID-19 cases more manageable. In practice, data sharing between hospitals must be handled while honoring the privacy concerns of the organizations. When training a global deep learning (DL) model, it is crucial to handle fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and builds a global model using blockchain-based federated learning (FL). The data is validated through the use of blockchain technology (BCT), and FL trains the model on a global scale while maintaining the secrecy of the organizations. The proposed framework is divided into three parts. First, we provide a method of data normalization that can handle the diversity of data collected from five different sources using several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble the capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests employing chest CT scans were undertaken, comparing the classification performance of the proposed model with that of five DL algorithms for predicting COVID-19 while protecting data privacy for a variety of users. Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in their diagnosis of COVID-19.
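Blockchain-federated training pipelines like the one described above are not published as code here; what such frameworks typically build on is the standard FedAvg aggregation step. A minimal sketch under that assumption (the function name and the plain size-weighted average are illustrative, not the authors' implementation):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: size-weighted average of client model
    parameters. Each element of client_weights is a list of ndarrays
    (one array per layer); client_sizes holds each client's sample count."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        # weight each client's layer by its share of the total data
        acc = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(acc)
    return averaged
```

In a blockchain-federated setup, the ledger would record hashes of the submitted updates so participants can audit that the aggregated model was computed from the claimed contributions.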
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees
- Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
|
45
|
Bhattacharjya U, Sarma KK, Medhi JP, Choudhury BK, Barman G. Automated diagnosis of COVID-19 using radiological modalities and Artificial Intelligence functionalities: A retrospective study based on chest HRCT database. Biomed Signal Process Control 2023; 80:104297. [PMID: 36275840 PMCID: PMC9576693 DOI: 10.1016/j.bspc.2022.104297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Revised: 09/12/2022] [Accepted: 10/08/2022] [Indexed: 11/16/2022]
Abstract
Background and Objective The spread of coronavirus has been challenging for the healthcare system's proper management and diagnosis during the rapid spread and control of the infection. Real-time reverse transcription-polymerase chain reaction (RT-PCR), though considered the standard testing measure, has low sensitivity and is time-consuming, which restricts the fast screening of individuals. Therefore, computed tomography (CT) is used to complement the traditional approaches and provide fast and effective screening compared with other diagnostic methods. This work aims to appraise the importance of chest CT findings of COVID-19 and post-COVID in the diagnosis and prognosis of infected patients and to explore ways to integrate CT findings into the development of advanced Artificial Intelligence (AI) tool-based predictive diagnostic techniques. Methods The retrospective study includes a database of 188 patients with COVID-19 infection confirmed by RT-PCR testing, including post-COVID patients. Patients underwent chest high-resolution computed tomography (HRCT), where the images were evaluated for common COVID-19 findings and for involvement of the lung and its lobes based on the coverage region. The radiological modalities analyzed in this study may help researchers generate a highly reliable AI-based predictive model for further classification. Results Mild to moderate ground glass opacities (GGO) with or without consolidation, crazy paving patterns, and halo signs were common COVID-19 related findings. A CT score was assigned to every patient based on the severity of lung lobe involvement. Conclusion Typical multifocal, bilateral, and peripheral distributions of GGO are the main characteristics of COVID-19 pneumonia. Chest HRCT can be considered a standard method for timely and efficient assessment of disease progression and severity. Fused with AI tools, chest HRCT can serve as a one-stop platform for radiological investigation and automated diagnosis.
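Lobe-based CT scoring of the kind mentioned above is commonly implemented as a 0-5-points-per-lobe, 0-25-total scheme. A minimal sketch under that assumption (the cutoffs below are the widely used ones, not necessarily this study's exact values):

```python
def ct_severity_score(lobe_involvement):
    """Total CT severity score from per-lobe involvement percentages.
    Assumes the common 5-lobe scheme: each lobe scores 0-5, total 0-25."""
    def lobe_score(pct):
        # map percentage of lobe involvement to a 0-5 grade
        if pct == 0:
            return 0
        if pct < 5:
            return 1
        if pct < 25:
            return 2
        if pct < 50:
            return 3
        if pct < 75:
            return 4
        return 5
    return sum(lobe_score(p) for p in lobe_involvement)
```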
Affiliation(s)
- Upasana Bhattacharjya
- Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, India
- Kandarpa Kumar Sarma
- Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, India
- Jyoti Prakash Medhi
- Department of Electronics and Communication Engineering, Gauhati University, Guwahati, Assam, India
- Binoy Kumar Choudhury
- Department of Radio Diagnosis and Imaging, Dr. Bhubaneswar Borooah Cancer Institute, Guwahati, Assam, India
- Geetanjali Barman
- Department of Radio Diagnosis and Imaging, Dr. Bhubaneswar Borooah Cancer Institute, Guwahati, Assam, India
|
46
|
Zorzi G, Berta L, Rizzetto F, De Mattia C, Felisi MMJ, Carrazza S, Nerini Molteni S, Vismara C, Scaglione F, Vanzulli A, Torresin A, Colombo PE. Artificial intelligence for differentiating COVID-19 from other viral pneumonias on CT: comparative analysis of different models based on quantitative and radiomic approaches. Eur Radiol Exp 2023; 7:3. [PMID: 36690869 PMCID: PMC9870776 DOI: 10.1186/s41747-022-00317-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Accepted: 12/15/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND To develop a pipeline for automatic extraction of quantitative metrics and radiomic features from lung computed tomography (CT) and to develop artificial intelligence (AI) models supporting differential diagnosis between coronavirus disease 2019 (COVID-19) and other viral pneumonias (non-COVID-19). METHODS Chest CT scans of 1,031 patients (811 for model building; 220 as an independent validation set (IVS)) with a positive swab for severe acute respiratory syndrome coronavirus 2 (647 COVID-19) or other respiratory viruses (384 non-COVID-19) were segmented automatically. A Gaussian model, based on the HU histogram distribution describing well-aerated and ill portions, was optimised to calculate quantitative metrics (QM, n = 20) in both lungs (2L) and four geometrical subdivisions (GS) (upper front, lower front, upper dorsal, lower dorsal; n = 80). Radiomic features (RF) of first (RF1, n = 18) and second (RF2, n = 120) order were extracted from 2L using the PyRadiomics tool. Extracted metrics were used to develop four multilayer-perceptron classifiers, built with different combinations of QM and RF: Model1 (RF1-2L); Model2 (QM-2L, QM-GS); Model3 (RF1-2L, RF2-2L); Model4 (RF1-2L, QM-2L, GS-2L, RF2-2L). RESULTS The classifiers showed accuracy from 0.71 to 0.80 and area under the receiver operating characteristic curve (AUC) from 0.77 to 0.87 in differentiating COVID-19 from non-COVID-19 pneumonia. The best results were associated with Model3 (AUC 0.867 ± 0.008) and Model4 (AUC 0.870 ± 0.011). For the IVS, the AUC values were 0.834 ± 0.008 for Model3 and 0.828 ± 0.011 for Model4. CONCLUSIONS Four AI-based models for classifying patients as COVID-19 or non-COVID-19 viral pneumonia showed good diagnostic performance that could support clinical decisions.
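Quantitative metrics of the kind described above are derived from the lung HU histogram. As a simplified sketch, the well-aerated fraction can be computed with fixed HU thresholds (the paper optimises a Gaussian histogram model instead, so the range and function name below are assumptions for illustration):

```python
import numpy as np

def lung_quantitative_metrics(hu, aerated_range=(-950, -700)):
    """Toy quantitative metrics from a lung HU array: fraction of
    well-aerated voxels (fixed-threshold approximation) plus the
    mean and standard deviation of lung HU values."""
    hu = np.asarray(hu, dtype=float)
    lo, hi = aerated_range
    well_aerated = np.mean((hu >= lo) & (hu <= hi))
    return {
        "well_aerated_fraction": float(well_aerated),
        "mean_hu": float(hu.mean()),
        "std_hu": float(hu.std()),
    }
```

Per-region metrics (the paper's four geometrical subdivisions) would simply apply the same function to masked sub-volumes of the lung.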
Affiliation(s)
- Giulia Zorzi
- Postgraduate School of Medical Physics, Università degli Studi di Milano, via Giovanni Celoria 16, 20133, Milan, Italy
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Department of Physics, INFN Sezione di Milano, via Giovanni Celoria 16, 20133, Milan, Italy
- Luca Berta
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Francesco Rizzetto
- Postgraduate School of Diagnostic and Interventional Radiology, Università degli Studi di Milano, via Festa del Perdono 7, 20122, Milan, Italy
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Cristina De Mattia
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Marco Maria Jacopo Felisi
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Stefano Carrazza
- Department of Physics, INFN Sezione di Milano, via Giovanni Celoria 16, 20133, Milan, Italy
- Department of Physics, Università degli Studi di Milano, via Giovanni Celoria 16, 20133, Milan, Italy
- Silvia Nerini Molteni
- Chemical-Clinical and Microbiological Analyses, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Chiara Vismara
- Chemical-Clinical and Microbiological Analyses, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Francesco Scaglione
- Chemical-Clinical and Microbiological Analyses, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy
- Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, via Festa del Perdono 7, 20122, Milan, Italy
- Angelo Vanzulli
- Department of Radiology, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, via Festa del Perdono 7, 20122, Milan, Italy
- Alberto Torresin
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Department of Physics, INFN Sezione di Milano, via Giovanni Celoria 16, 20133, Milan, Italy
- Department of Physics, Università degli Studi di Milano, via Giovanni Celoria 16, 20133, Milan, Italy
- Paola Enrica Colombo
- Department of Medical Physics, ASST Grande Ospedale Metropolitano Niguarda, Piazza Ospedale Maggiore 3, 20162, Milan, Italy
- Department of Physics, Università degli Studi di Milano, via Giovanni Celoria 16, 20133, Milan, Italy
|
47
|
Comparison of the Diagnostic Performance of Deep Learning Algorithms for Reducing the Time Required for COVID-19 RT-PCR Testing. Viruses 2023; 15:v15020304. [PMID: 36851519 PMCID: PMC9966023 DOI: 10.3390/v15020304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 01/13/2023] [Accepted: 01/19/2023] [Indexed: 01/24/2023] Open
Abstract
(1) Background: Rapid and accurate negative discrimination enables efficient management of scarce isolated bed resources and adequate patient accommodation in the majority of areas experiencing an explosion of confirmed cases due to Omicron mutations. Until now, methods for artificial intelligence or deep learning to replace time-consuming RT-PCR have relied on CXR, chest CT, blood test results, or clinical information. (2) Methods: We proposed and compared five different types of deep learning algorithms (RNN, LSTM, Bi-LSTM, GRU, and transformer) for reducing the time required for RT-PCR diagnosis by learning the change in fluorescence value derived over time during the RT-PCR process. (3) Results: Among the five deep learning algorithms capable of training time series data, Bi-LSTM and GRU were shown to be able to decrease the time required for RT-PCR diagnosis by half or by 25% without significantly impairing the diagnostic performance of the COVID-19 RT-PCR test. (4) Conclusions: The diagnostic performance of the model developed in this study when 40 cycles of RT-PCR are used for diagnosis shows the possibility of nearly halving the time required for RT-PCR diagnosis.
|
48
|
Topff L, Groot Lipman KBW, Guffens F, Wittenberg R, Bartels-Rutten A, van Veenendaal G, Hess M, Lamerigts K, Wakkie J, Ranschaert E, Trebeschi S, Visser JJ, Beets-Tan RGH, Snoeckx A, Kint P, Van Hoe L, Quattrocchi CC, Dickerscheid D, Lounis S, Schulze E, Sjer AEB, van Vucht N, Tielbeek JA, Raat F, Eijspaart D, Abbas A. Is the generalizability of a developed artificial intelligence algorithm for COVID-19 on chest CT sufficient for clinical use? Results from the International Consortium for COVID-19 Imaging AI (ICOVAI). Eur Radiol 2023; 33:4249-4258. [PMID: 36651954 PMCID: PMC9848031 DOI: 10.1007/s00330-022-09303-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 10/14/2022] [Accepted: 11/18/2022] [Indexed: 01/19/2023]
Abstract
OBJECTIVES Only a few published artificial intelligence (AI) studies for COVID-19 imaging have been externally validated. Assessing the generalizability of developed models is essential, especially when considering clinical implementation. We report the development of the International Consortium for COVID-19 Imaging AI (ICOVAI) model and perform independent external validation. METHODS The ICOVAI model was developed using multicenter data (n = 1286 CT scans) to quantify disease extent and assess COVID-19 likelihood using the COVID-19 Reporting and Data System (CO-RADS). A ResUNet model was modified to automatically delineate lung contours and infectious lung opacities on CT scans, after which a random forest predicted the CO-RADS score. After internal testing, the model was externally validated on a multicenter dataset (n = 400) by independent researchers. CO-RADS classification performance was calculated using linearly weighted Cohen's kappa and segmentation performance using the Dice Similarity Coefficient (DSC). RESULTS Regarding internal versus external testing, segmentation performance of lung contours was equally excellent (DSC = 0.97 vs. DSC = 0.97, p = 0.97). Lung opacities segmentation performance was adequate internally (DSC = 0.76), but significantly worse on external validation (DSC = 0.59, p < 0.0001). For CO-RADS classification, agreement with radiologists on the internal set was substantial (kappa = 0.78), but significantly lower on the external set (kappa = 0.62, p < 0.0001). CONCLUSION In this multicenter study, a model developed for CO-RADS score prediction and quantification of COVID-19 disease extent showed a significant reduction in performance on independent external validation versus internal testing. The limited reproducibility of the model restricted its potential for clinical use. The study demonstrates the importance of independent external validation of AI models.
KEY POINTS • The ICOVAI model for prediction of CO-RADS and quantification of disease extent on chest CT of COVID-19 patients was developed using a large sample of multicenter data. • There was substantial performance on internal testing; however, performance was significantly reduced on external validation, performed by independent researchers. The limited generalizability of the model restricts its potential for clinical use. • Results of AI models for COVID-19 imaging on internal tests may not generalize well to external data, demonstrating the importance of independent external validation.
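The CO-RADS agreement reported above is measured with linearly weighted Cohen's kappa; a self-contained sketch of that statistic (illustrative, not the consortium's code):

```python
import numpy as np

def linear_weighted_kappa(rater_a, rater_b, n_classes):
    """Linearly weighted Cohen's kappa for ordinal labels 0..n_classes-1.
    Disagreements are penalised proportionally to |i - j|."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    # observed joint distribution of the two raters' labels
    obs = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # linear disagreement weights: |i - j| / (k - 1)
    idx = np.arange(n_classes)
    w = np.abs(idx[:, None] - idx[None, :]) / (n_classes - 1)
    # expected joint distribution under rater independence
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

The same quantity is available in scikit-learn as `cohen_kappa_score(a, b, weights="linear")`.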
Affiliation(s)
- Laurens Topff
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER, Maastricht, The Netherlands
- Kevin B W Groot Lipman
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER, Maastricht, The Netherlands
- Department of Thoracic Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Frederic Guffens
- Department of Radiology, University Hospitals Leuven, Herestraat 49, 3000, Leuven, Belgium
- Rianne Wittenberg
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Annemarieke Bartels-Rutten
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Erik Ranschaert
- Department of Radiology, St. Nikolaus Hospital, Hufengasse 4-8, 4700, Eupen, Belgium
- Ghent University, C. Heymanslaan 10, 9000, Ghent, Belgium
- Stefano Trebeschi
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Jacob J Visser
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center Rotterdam, Dr. Molewaterplein 40, 3015 GD, Rotterdam, The Netherlands
- Regina G H Beets-Tan
- Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Reproduction, Maastricht University, Universiteitssingel 40, 6229 ER, Maastricht, The Netherlands
- Institute of Regional Health Research, University of Southern Denmark, Campusvej 55, 5230, Odense, Denmark
|
49
|
Vinod DN, Prabaharan SRS. COVID-19-The Role of Artificial Intelligence, Machine Learning, and Deep Learning: A Newfangled. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2023; 30:2667-2682. [PMID: 36685135 PMCID: PMC9843670 DOI: 10.1007/s11831-023-09882-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 01/05/2023] [Indexed: 05/29/2023]
Abstract
The first case of infection with the novel coronavirus (COVID-19) was found in Wuhan, China, in December 2019. The COVID-19 epidemic has spread to more than 220 nations and territories globally and has influenced every part of our day-to-day lives. As of 9th March 2022, a total of 447,882,185 infected COVID-19 cases (6,007,317 deaths) were reported all over the world. The numbers of infected cases and deaths still increase significantly and do not indicate a controlled situation. The scope of this paper is to address this issue by presenting a comprehensive and comparative analysis of the existing Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI) based approaches used in responding to the COVID-19 epidemic and diagnosing its severe impacts. The paper provides, firstly, an overview of COVID-19 infection and the highlights of this article; secondly, an overview of various executive innovations utilizing different resources to stop the spread of COVID-19; thirdly, a comparison of existing COVID-19 prediction methods in the literature, with a focus on ML, DL and AI-driven techniques and their performance metrics; and finally, a discussion of the results of the work as well as its future scope.
Affiliation(s)
- Dasari Naga Vinod
- Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamil Nadu 600062, India
- S. R. S. Prabaharan
- Sathyabama Centre for Advanced Studies, Sathyabama Institute of Science and Technology, Rajiv Gandhi Salai, Chennai, Tamil Nadu 600119, India
|
50
|
Chen H, Jiang Y, Ko H, Loew M. A teacher-student framework with Fourier Transform augmentation for COVID-19 infection segmentation in CT images. Biomed Signal Process Control 2023; 79:104250. [PMID: 36188130 PMCID: PMC9510070 DOI: 10.1016/j.bspc.2022.104250] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 08/11/2022] [Accepted: 09/18/2022] [Indexed: 11/23/2022]
Abstract
Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer from a reduction in performance when applied to unseen COVID-19 images during the testing phase, caused by the differences in image intensity and object region distribution between the training set and the test set. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that aims to learn domain-invariant features from lung cancer and COVID-19 images, improving the generalization ability of the segmentation network on COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier Transform, which transfers the annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we design a teacher-student network to learn rotation-invariant features for segmentation. The experiments demonstrate that, even without access to the annotations of the COVID-19 CT images during the training phase, the proposed network achieves state-of-the-art segmentation performance on COVID-19 infection.
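Fourier-Transform augmentation of the kind described above follows the general amplitude-swap idea from Fourier Domain Adaptation: keep the source phase (content) and splice in the target's low-frequency amplitude (style). A minimal single-channel sketch under that assumption (the function name and the `beta` band size are illustrative, not the authors' exact module):

```python
import numpy as np

def fourier_style_transfer(source, target, beta=0.1):
    """Transfer the centred low-frequency amplitude band of `target`
    into `source` while keeping the source phase (image content)."""
    fs = np.fft.fft2(source)
    ft = np.fft.fft2(target)
    pha_s = np.angle(fs)
    # centre the amplitude spectra so low frequencies sit in the middle
    amp_s = np.fft.fftshift(np.abs(fs))
    amp_t = np.fft.fftshift(np.abs(ft))
    h, w = source.shape
    b = int(min(h, w) * beta)          # half-width of the swapped band
    ch, cw = h // 2, w // 2
    amp_s[ch - b:ch + b, cw - b:cw + b] = amp_t[ch - b:ch + b, cw - b:cw + b]
    amp_s = np.fft.ifftshift(amp_s)
    # recombine swapped amplitude with the original phase
    return np.real(np.fft.ifft2(amp_s * np.exp(1j * pha_s)))
```

With `beta=0` no band is swapped and the source image is recovered; larger `beta` transfers more of the target's intensity style at the cost of content fidelity.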
Affiliation(s)
- Han Chen
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Yifan Jiang
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Hanseok Ko
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Murray Loew
- Biomedical Engineering, George Washington University, Washington D.C., USA
|