1. Xu Y, Wang T, Xu Z, Su B, Li J, Nie Z. Lightweight triangular mesh deformable reconstruction for low quality 3D organ models: Thickness noise and uneven topology. Comput Biol Med 2025; 193:110328. [PMID: 40409029] [DOI: 10.1016/j.compbiomed.2025.110328]
Abstract
Lightweight triangular mesh models have great potential for real-time 3D visualization of lesions during minimally invasive surgery (MIS). However, blurred tissue boundaries, high imaging noise, and unoriented points in medical images seriously degrade the accuracy and topological quality of surface reconstruction, which can lead to inaccurate lesion localization. In this paper, we present a robust, high-topology-quality triangular mesh reconstruction method that aims to provide a deformable expression model for real-time 3D visualization during surgery. Our approach first approximates the model prototype by simulating inflation under the guidance of an unsigned distance field. We then introduce a variance-controlled cylindrical domain projection search (VC-CDPS) method to achieve the final surface fitting. Additionally, we incorporate topology optimization into the iterative reconstruction process to ensure smoothness and good topology of the reconstructed model. To validate the method, we conduct experiments on a geometric model with high noise and on a human organ model manually segmented by novice doctors. The results demonstrate that our reconstructed model exhibits better surface quality and noise immunity. Furthermore, we conduct a comparison experiment on model deformation and propose a metric to measure the topological quality of a model. Through in vitro tissue experiments, we explore the relationship between topological quality and deformation accuracy. The results reveal a positive correlation between deformation accuracy and topological quality.
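As a rough illustration of one ingredient named in this abstract (an unsigned distance field over an unoriented point cloud), the following Python sketch queries nearest-surface distances with a KD-tree on synthetic data. It is not the authors' VC-CDPS pipeline; the point cloud and query points are stand-ins.

```python
# Minimal sketch (not the authors' implementation): querying an unsigned
# distance field from an unoriented point cloud with a KD-tree, the kind of
# field the abstract says guides the inflation-style prototype fitting.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.normal(size=(5000, 3))                        # stand-in for a noisy organ point cloud
points /= np.linalg.norm(points, axis=1, keepdims=True)    # roughly spherical surface of radius 1

tree = cKDTree(points)

def unsigned_distance(query_xyz):
    """Distance from query positions to the nearest surface sample (always >= 0)."""
    d, _ = tree.query(np.atleast_2d(query_xyz))
    return d

print(unsigned_distance([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]))  # both roughly 1.0
```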
Affiliation(s)
- Yanjie Xu
- Department of Mechanical Engineering, Tsinghua University, Haidian District, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Haidian District, Beijing, 100084, China; Beijing Key Laboratory of Transformative High-end Manufacturing Equipment and Technology, Tsinghua University, Haidian District, Beijing, 100084, China.
- Tianmu Wang
- Department of Mechanical Engineering, Tsinghua University, Haidian District, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Haidian District, Beijing, 100084, China; Beijing Key Laboratory of Transformative High-end Manufacturing Equipment and Technology, Tsinghua University, Haidian District, Beijing, 100084, China.
- Zheng Xu
- Department of Urology, Beijing Tsinghua Changgung Hospital, Changping District, Beijing, 102218, China; School of Clinical Medicine, Tsinghua University, Haidian District, Beijing, 100084, China.
- Boxing Su
- Department of Urology, Beijing Tsinghua Changgung Hospital, Changping District, Beijing, 102218, China; School of Clinical Medicine, Tsinghua University, Haidian District, Beijing, 100084, China.
- Jianxing Li
- Department of Urology, Beijing Tsinghua Changgung Hospital, Changping District, Beijing, 102218, China; School of Clinical Medicine, Tsinghua University, Haidian District, Beijing, 100084, China.
- Zhenguo Nie
- Department of Mechanical Engineering, Tsinghua University, Haidian District, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Haidian District, Beijing, 100084, China; Beijing Key Laboratory of Transformative High-end Manufacturing Equipment and Technology, Tsinghua University, Haidian District, Beijing, 100084, China.
2. Fan W, Jager MJ, Dai W, Heindl LM. Deep learning-based system for automatic identification of benign and malignant eyelid tumours. Br J Ophthalmol 2025:bjo-2025-327127. [PMID: 40348397] [DOI: 10.1136/bjo-2025-327127]
Abstract
AIMS Our aim is to develop a deep learning-based system for automatically identifying and classifying benign and malignant tumours of the eyelid to improve diagnostic accuracy and efficiency. METHODS The dataset includes photographs of normal eyelids, benign and malignant eyelid tumours and was randomly divided into a training and validation dataset in a ratio of 8:2. We used the training dataset to train eight convolutional neural network models to classify normal eyelids, benign and malignant eyelid tumours. These models included VGG16, ResNet50, Inception-v4, EfficientNet-V2-M and their variants. The validation dataset was used to evaluate and compare the performance of the different deep learning models. RESULTS All eight models achieved an average accuracy greater than 0.746 for identifying normal eyelids, benign and malignant eyelid tumours, with an average sensitivity and specificity exceeding 0.790 and 0.866, respectively. The mean area under the receiver operating characteristic curve (AUC) for the eight models was more than 0.904 in correctly identifying normal eyelids, benign and malignant eyelid tumours. The dual-path Inception-v4 network demonstrated the highest performance, with an AUC of 0.930 (95% CI 0.900 to 0.954) and an F1-score of 0.838 (95% CI 0.787 to 0.882). CONCLUSION The deep learning-based system shows significant potential in improving the diagnosis of eyelid tumours, providing a reliable and efficient tool for clinical practice. Future work will validate the model with more extensive and diverse datasets and integrate it into clinical workflows for real-time diagnostic support.
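For orientation, a minimal sketch of the kind of transfer-learning setup described here, adapting one of the listed backbones (ResNet50) to the study's three classes (normal eyelid, benign tumour, malignant tumour). The weight choice and head are assumptions, not the authors' exact configuration, and training code is omitted.

```python
# Minimal sketch: replace the classification head of a pretrained backbone
# with a 3-class output layer; this is illustrative, not the study's setup.
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 3)  # 3 outputs: normal / benign / malignant
```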
Affiliation(s)
- Wanlin Fan
- Department of Ophthalmology, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
- Martine Johanna Jager
- Department of Ophthalmology, Leiden University Medical Center, Leiden, The Netherlands
- Weiwei Dai
- Changsha Aier Eye Hospital, Hunan, China
- Ludwig M Heindl
- Department of Ophthalmology, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne, Germany
- Center for Integrated Oncology (CIO), Aachen-Bonn-Cologne-Duesseldorf, Cologne, Germany
3. Liu F, Zhou H, Wang K, Yu Y, Gao Y, Sun Z, Liu S, Sun S, Zou Z, Li Z, Li B, Miao H, Liu Y, Hou T, Fok M, Patil NG, Xue K, Li T, Oermann E, Yin Y, Duan L, Qu J, Huang X, Jin S, Zhang K. MetaGP: A generative foundation model integrating electronic health records and multimodal imaging for addressing unmet clinical needs. Cell Rep Med 2025; 6:102056. [PMID: 40187356] [PMCID: PMC12047458] [DOI: 10.1016/j.xcrm.2025.102056]
Abstract
Artificial intelligence makes strides in specialized diagnostics but faces challenges in complex clinical scenarios, such as rare disease diagnosis and emergency condition identification. To address these limitations, we develop Meta General Practitioner (MetaGP), a 32-billion-parameter generative foundation model trained on extensive datasets, including over 8 million electronic health records, biomedical literature, and medical textbooks. MetaGP demonstrates robust diagnostic capabilities, achieving accuracy comparable to experienced clinicians. In rare disease cases, it achieves an average diagnostic score of 1.57, surpassing GPT-4's 0.93. For emergency conditions, it improves diagnostic accuracy for junior and mid-level clinicians by 53% and 46%, respectively. MetaGP also excels in generating medical imaging reports, producing high-quality outputs for chest X-rays and computed tomography, often rated comparable to or superior to physician-authored reports. These findings highlight MetaGP's potential to transform clinical decision-making across diverse medical contexts.
Affiliation(s)
- Fei Liu
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China; State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; State Key Laboratory of Quality Research in Chinese Medicine/Macau Institute for Applied Research in Medicine and Health, Macau University of Science and Technology, Macau SAR 999078, China
- Hongyu Zhou
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Kai Wang
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Department of Big Data and Biomedical AI, College of Future Technology, Peking University and Peking-Tsinghua Center for Life Sciences, Beijing, China
- Yunfang Yu
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China; Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Yuanxu Gao
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China; State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Department of Big Data and Biomedical AI, College of Future Technology, Peking University and Peking-Tsinghua Center for Life Sciences, Beijing, China
- Zhuo Sun
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; Department of Ophthalmology, The Third People's Hospital of Changzhou, Changzhou, China
- Sian Liu
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Shanshan Sun
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; Department of Ophthalmology, The Third People's Hospital of Changzhou, Changzhou, China
- Zixing Zou
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China; State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Guangzhou National Laboratory, Guangzhou, China
- Zhuomin Li
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Bingzhou Li
- State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Hanpei Miao
- Dongguan Hospital, Southern Medical University, Dongguan, China
- Yang Liu
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China
- Taiwa Hou
- Conde S. Januário Hospital, Macau, China
- Manson Fok
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China
- Nivritti Gajanan Patil
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China
- Kanmin Xue
- Nuffield Department of Neuroscience, Oxford University, Oxford, UK
- Ting Li
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China; State Key Laboratory of Quality Research in Chinese Medicine/Macau Institute for Applied Research in Medicine and Health, Macau University of Science and Technology, Macau SAR 999078, China
- Eric Oermann
- NYU Langone Medical Center, New York University, New York, NY, USA
- Yun Yin
- Faculty of Health and Wellness, Faculty of Business, City University of Macau, Macau, China
- Lian Duan
- Faculty of Pediatrics and Department of Pediatric Surgery of the Seventh Medical Center, the Chinese PLA General Hospital, Beijing, China.
- Jia Qu
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China.
- Xiaoying Huang
- Division of Pulmonary Medicine, the First Affiliated Hospital, Wenzhou Medical University, Wenzhou Key Laboratory of Interdisciplinary and Translational Medicine, Wenzhou Key Laboratory of Heart and Lung, Wenzhou, Zhejiang, China.
- Shengwei Jin
- Department of Anesthesia and Critical Care, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Key Laboratory of Pediatric Anesthesiology, Ministry of Education, Wenzhou Medical University, Wenzhou, Zhejiang, China.
- Kang Zhang
- Institute for AI in Medicine and Faculty of Medicine, Macau University of Science and Technology, Macau, China; State Key Laboratory of Eye Health, Eye Hospital and Institute for Advanced Study on Eye Health and Diseases, Wenzhou Medical University, Wenzhou, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Guangzhou National Laboratory, Guangzhou, China.
4. Kawata N, Iwao Y, Matsuura Y, Higashide T, Okamoto T, Sekiguchi Y, Nagayoshi M, Takiguchi Y, Suzuki T, Haneishi H. Generation of short-term follow-up chest CT images using a latent diffusion model in COVID-19. Jpn J Radiol 2025; 43:622-633. [PMID: 39585556] [PMCID: PMC11953082] [DOI: 10.1007/s11604-024-01699-w]
Abstract
PURPOSE Despite a global decrease in the number of COVID-19 patients, early prediction of the clinical course for optimal patient care remains challenging. Recently, the usefulness of image generation for medical images has been investigated. This study aimed to generate short-term follow-up chest CT images using a latent diffusion model in patients with COVID-19. MATERIALS AND METHODS We retrospectively enrolled 505 patients with COVID-19 for whom the clinical parameters (patient background, clinical symptoms, and blood test results) upon admission were available and chest CT imaging was performed. The 505 cases were allocated to training (n = 403), and the remaining cases (n = 102) were reserved for evaluation. The images underwent variational autoencoder (VAE) encoding, yielding latent vectors. The initial clinical parameters and radiomic features were formatted as input to a table-data encoder. The initial and follow-up latent vectors and the encoded initial table data were used to train the diffusion model. The evaluation data were used to generate prognostic images. The similarity between the prognostic (generated) images and the follow-up (real) images was then evaluated by zero-mean normalized cross-correlation (ZNCC), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). Visual assessment was also performed using a numerical rating scale. RESULTS Prognostic chest CT images were generated using the diffusion model. Image similarity showed reasonable values of 0.973 ± 0.028 for the ZNCC, 24.48 ± 3.46 for the PSNR, and 0.844 ± 0.075 for the SSIM. Visual evaluation of the images by two pulmonologists and one radiologist yielded a reasonable mean score. CONCLUSIONS The similarity and validity of predictive images of the course of COVID-19-associated pneumonia generated with a diffusion model were reasonable. The generation of prognostic images may prove useful for early prediction of the clinical course in COVID-19-associated pneumonia and other respiratory diseases.
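A minimal sketch of the three similarity metrics used for evaluation (ZNCC, PSNR, SSIM), computed on synthetic stand-in arrays rather than actual CT data; the ZNCC helper follows the standard textbook definition and is not code from the study.

```python
# Minimal sketch: similarity between a generated and a real follow-up slice.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
real = rng.random((512, 512)).astype(np.float32)                              # stand-in "real" slice
generated = np.clip(real + 0.05 * rng.standard_normal((512, 512)), 0, 1).astype(np.float32)

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

print("ZNCC:", zncc(real, generated))
print("PSNR:", peak_signal_noise_ratio(real, generated, data_range=1.0))
print("SSIM:", structural_similarity(real, generated, data_range=1.0))
```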
Affiliation(s)
- Naoko Kawata
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-Ku, Chiba-Shi, Chiba, 260-8677, Japan.
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan.
- Yuma Iwao
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-Cho, Inage-Ku, Chiba-Shi, Chiba, 263-8522, Japan
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba-Shi, Chiba, 263-8555, Japan
- Yukiko Matsuura
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-Cho, Chuo-Ku, Chiba-Shi, Chiba, 260-0852, Japan
- Takashi Higashide
- Department of Radiology, Chiba University Hospital, 1-8-1, Inohana, Chuo-Ku, Chiba-Shi, Chiba, 260-8677, Japan
- Department of Radiology, Japanese Red Cross Narita Hospital, 90-1, Iida-Cho, Narita-Shi, Chiba, 286-8523, Japan
- Takayuki Okamoto
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-Cho, Inage-Ku, Chiba-Shi, Chiba, 263-8522, Japan
- Yuki Sekiguchi
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan
- Masaru Nagayoshi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-Cho, Chuo-Ku, Chiba-Shi, Chiba, 260-0852, Japan
- Yasuo Takiguchi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-Cho, Chuo-Ku, Chiba-Shi, Chiba, 260-0852, Japan
- Takuji Suzuki
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-Ku, Chiba-Shi, Chiba, 260-8677, Japan
- Hideaki Haneishi
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-Cho, Inage-Ku, Chiba-Shi, Chiba, 263-8522, Japan
5. Okada N, Inoue S, Liu C, Mitarai S, Nakagawa S, Matsuzawa Y, Fujimi S, Yamamoto G, Kuroda T. Unified total body CT image with multiple organ specific windowings: validating improved diagnostic accuracy and speed in trauma cases. Sci Rep 2025; 15:5654. [PMID: 39955327] [PMCID: PMC11830084] [DOI: 10.1038/s41598-024-83346-y]
Abstract
Total-body CT scans are useful in saving trauma patients; however, interpreting numerous images with varied window settings slows injury detection. We developed an algorithm for a "unified total-body CT image with multiple organ-specific windowings (Uni-CT)" and assessed its impact on physician accuracy and speed in trauma CT interpretation. From November 7, 2008, to June 19, 2020, 40 cases of total-body CT images for blunt trauma with multiple injuries were collected from the emergency department of Osaka General Medical Center and randomly divided into two groups. In half of the cases, the Uni-CT algorithm, using semantic segmentation, assigned visibility-friendly window settings to each organ. Four physicians with varying levels of experience interpreted 20 cases using the algorithm and 20 cases in conventional settings. Performance was analyzed based on the accuracy, sensitivity, and specificity of the target findings and on diagnostic speed. In the proposed and conventional groups, patients had an average of 2.6 and 2.5 target findings, mean ages of 51.8 and 57.7 years, and male proportions of 60% and 45%, respectively. The agreement rate for physicians' diagnoses was κ = 0.70. Average accuracy, sensitivity, and specificity for the target findings were 84.8%, 74.3%, and 96.9% versus 85.5%, 81.2%, and 91.5%, respectively, with no significant differences. Diagnostic speed per case averaged 71.9 and 110.4 s in the two groups (p < 0.05). The Uni-CT algorithm improved the diagnostic speed of total-body CT for trauma while maintaining accuracy comparable to that of conventional methods.
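A minimal sketch of the CT windowing operation that the Uni-CT idea builds on: each segmented organ gets its own window level/width before the slices are fused into one image. The label map and window values below are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: organ-specific windowing of a Hounsfield-unit slice.
import numpy as np

def apply_window(hu, level, width):
    """Map Hounsfield units to [0, 1] for a given window level/width."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

# Illustrative organ-specific windows (level, width) in HU.
WINDOWS = {"abdomen": (40, 400), "lung": (-600, 1500), "bone": (400, 1800)}

rng = np.random.default_rng(0)
hu_slice = rng.integers(-1000, 1500, size=(512, 512)).astype(np.float32)
labels = np.zeros((512, 512), dtype=np.uint8)   # pretend segmentation: 0=abdomen, 1=lung, 2=bone
labels[:, :170] = 1
labels[:, 340:] = 2

unified = np.zeros_like(hu_slice)
for idx, organ in enumerate(["abdomen", "lung", "bone"]):
    level, width = WINDOWS[organ]
    mask = labels == idx
    unified[mask] = apply_window(hu_slice[mask], level, width)
```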
Affiliation(s)
- Naoki Okada
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan.
- Osaka General Medical Center, Osaka-shi, Osaka, Japan.
- Chang Liu
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Sho Mitarai
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Goshiro Yamamoto
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Tomohiro Kuroda
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
6. Pham NT, Ko J, Shah M, Rakkiyappan R, Woo HG, Manavalan B. Leveraging deep transfer learning and explainable AI for accurate COVID-19 diagnosis: Insights from a multi-national chest CT scan study. Comput Biol Med 2025; 185:109461. [PMID: 39631112] [DOI: 10.1016/j.compbiomed.2024.109461]
Abstract
The COVID-19 pandemic has emerged as a global health crisis, impacting millions worldwide. Although chest computed tomography (CT) scan images are pivotal in diagnosing COVID-19, their manual interpretation by radiologists is time-consuming and potentially subjective. Automated computer-aided diagnostic (CAD) frameworks offer efficient and objective solutions. However, machine or deep learning methods often face challenges in their reproducibility due to underlying biases and methodological flaws. To address these issues, we propose XCT-COVID, an explainable, transferable, and reproducible CAD framework based on deep transfer learning to predict COVID-19 infection from CT scan images accurately. This is the first study to develop three distinct models within a unified framework by leveraging a previously unexplored large dataset and two widely used smaller datasets. We employed five known convolutional neural network architectures, both with and without pretrained weights, on the larger dataset. We optimized hyperparameters through extensive grid search and 5-fold cross-validation (CV), significantly enhancing model performance. Experimental results from the larger dataset showed that the VGG16 architecture with pretrained weights (XCT-COVID-L) consistently outperformed the other architectures, achieving the best performance on both 5-fold CV and the independent test. When evaluated on the external datasets, XCT-COVID-L performed well on data with similar distributions, demonstrating its transferability. However, its performance decreased significantly on smaller datasets with lower-quality images. To address this, we developed two further models, XCT-COVID-S1 and XCT-COVID-S2, specifically for the smaller datasets, which outperformed existing methods. Moreover, eXplainable Artificial Intelligence (XAI) analyses were employed to interpret the models' functionalities. For prediction and reproducibility purposes, the implementation of XCT-COVID is publicly accessible at https://github.com/cbbl-skku-org/XCT-COVID/.
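A minimal sketch of the transfer-learning backbone described here: VGG16 with ImageNet weights and a new two-class head. Hyperparameters and training code are omitted; the authors' actual implementation is in the repository linked above, and the details below are assumptions.

```python
# Minimal sketch: VGG16 backbone with pretrained weights and a binary head.
import torch.nn as nn
from torchvision import models

def build_vgg16_binary(pretrained: bool = True) -> nn.Module:
    weights = models.VGG16_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.vgg16(weights=weights)
    in_features = model.classifier[-1].in_features    # 4096 in the stock classifier
    model.classifier[-1] = nn.Linear(in_features, 2)  # two classes: COVID-19 vs. non-COVID-19
    return model

model = build_vgg16_binary(pretrained=True)
```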
Affiliation(s)
- Nhat Truong Pham
- Department of Integrative Biotechnology, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon, 16419, Gyeonggi-do, Republic of Korea
- Jinsol Ko
- Department of Physiology, Ajou University School of Medicine, Suwon, 16499, Republic of Korea; Department of Biomedical Science, Graduate School, Ajou University, Suwon, Republic of Korea
- Masaud Shah
- Department of Physiology, Ajou University School of Medicine, Suwon, 16499, Republic of Korea
- Rajan Rakkiyappan
- Department of Mathematics, Bharathiar University, Coimbatore, 641046, Tamil Nadu, India
- Hyun Goo Woo
- Department of Physiology, Ajou University School of Medicine, Suwon, 16499, Republic of Korea; Department of Biomedical Science, Graduate School, Ajou University, Suwon, Republic of Korea; Ajou Translational Omics Center (ATOC), Ajou University Medical Center, Republic of Korea.
- Balachandran Manavalan
- Department of Integrative Biotechnology, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon, 16419, Gyeonggi-do, Republic of Korea.
7. Wang A, Li K, Sun H, Wang Y, Liu H. Assessing the impact of comorbidities on disease severity in COVID-19 patients requires consideration of age. Medicine (Baltimore) 2025; 104:e41360. [PMID: 39889259] [PMCID: PMC11789873] [DOI: 10.1097/md.0000000000041360]
Abstract
Older age and comorbidities are risk factors for increased coronavirus disease 2019 (COVID-19) severity, but few studies have explored their interaction. This study aimed to assess the actual impact of these factors on disease severity in COVID-19. The enrolled COVID-19 patients were divided into 4 age subgroups (≤44, 45-59, 60-74, and ≥75 years). Logistic regression analysis was conducted to determine the association between comorbidities and disease severity, and the Kappa consistency test was used to verify the results. Of the 1663 patients with COVID-19, 287 had severe disease. Disease severity was correlated with the age-adjusted Charlson Comorbidity Index in each age group, and across the 4 subgroups the odds ratio of the age-adjusted Charlson Comorbidity Index declined with age. After removing the interference of age, diabetes and cardio-cerebrovascular diseases were the main risk factors for severe disease in patients aged <75 years, whereas only chronic lung disease was associated with disease severity in patients aged ≥75 years. When the high-risk comorbidities listed in the World Health Organization and Chinese guidelines were used alone to predict disease severity, the predictions were consistent with actual outcomes only in patients aged ≥75 years (Kappa 0.106, P < .05). Although older age and comorbidities were risk factors for severe COVID-19, their effects on disease severity varied across age groups. Additionally, comorbidities had a greater impact on COVID-19 severity in younger patients.
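A minimal sketch of the two statistical steps named here: a logistic regression of severity on the age-adjusted comorbidity index within one age stratum, and a kappa agreement check between predicted and observed severity. The data, column names, and the comorbidity-only prediction rule are assumptions for illustration.

```python
# Minimal sketch: stratified logistic regression plus Cohen's kappa on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "severe": rng.integers(0, 2, 500),      # 1 = severe COVID-19
    "acci": rng.integers(0, 10, 500),       # age-adjusted Charlson Comorbidity Index
    "age_group": rng.choice(["<=44", "45-59", "60-74", ">=75"], 500),
})

sub = df[df["age_group"] == ">=75"]
fit = sm.Logit(sub["severe"], sm.add_constant(sub["acci"])).fit(disp=0)
print(np.exp(fit.params["acci"]))           # odds ratio per one-point ACCI increase

predicted = (sub["acci"] >= 4).astype(int)  # illustrative comorbidity-only rule
print(cohen_kappa_score(sub["severe"], predicted))
```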
Affiliation(s)
- Aili Wang
- Department of Infectious Diseases, The First Affiliated Hospital, Kunming Medical University, Kunming, Yunnan Province, China
- Kun Li
- Department of General Surgery, Research Center of Digestive Diseases, Zhongnan Hospital, Wuhan University, Wuhan, Hubei Province, China
- Hui Sun
- Department of Infectious Diseases, The First Affiliated Hospital, Kunming Medical University, Kunming, Yunnan Province, China
- Yuan Wang
- Department of Infectious Diseases, The First Affiliated Hospital, Kunming Medical University, Kunming, Yunnan Province, China
- Huaie Liu
- Department of Infectious Diseases, The First Affiliated Hospital, Kunming Medical University, Kunming, Yunnan Province, China
8. Kondepudi A, Pekmezci M, Hou X, Scotford K, Jiang C, Rao A, Harake ES, Chowdury A, Al-Holou W, Wang L, Pandey A, Lowenstein PR, Castro MG, Koerner LI, Roetzer-Pejrimovsky T, Widhalm G, Camelo-Piragua S, Movahed-Ezazi M, Orringer DA, Lee H, Freudiger C, Berger M, Hervey-Jumper S, Hollon T. Foundation models for fast, label-free detection of glioma infiltration. Nature 2025; 637:439-445. [PMID: 39537921] [PMCID: PMC11711092] [DOI: 10.1038/s41586-024-08169-3]
Abstract
A critical challenge in glioma treatment is detecting tumour infiltration during surgery to achieve safe maximal resection [1-3]. Unfortunately, safely resectable residual tumour is found in the majority of patients with glioma after surgery, causing early recurrence and decreased survival [4-6]. Here we present FastGlioma, a visual foundation model for fast (<10 s) and accurate detection of glioma infiltration in fresh, unprocessed surgical tissue. FastGlioma was pretrained using large-scale self-supervision (around 4 million images) on rapid, label-free optical microscopy, and fine-tuned to output a normalized score that indicates the degree of tumour infiltration within whole-slide optical images. In a prospective, multicentre, international testing cohort of patients with diffuse glioma (n = 220), FastGlioma was able to detect and quantify the degree of tumour infiltration with an average area under the receiver operating characteristic curve of 92.1 ± 0.9%. FastGlioma outperformed image-guided and fluorescence-guided adjuncts for detecting tumour infiltration during surgery by a wide margin in a head-to-head, prospective study (n = 129). The performance of FastGlioma remained high across diverse patient demographics, medical centres and diffuse glioma molecular subtypes as defined by the World Health Organization. FastGlioma shows zero-shot generalization to other adult and paediatric brain tumour diagnoses, demonstrating the potential for our foundation model to be used as a general-purpose adjunct for guiding brain tumour surgeries. These findings represent the transformative potential of medical foundation models to unlock the role of artificial intelligence in the care of patients with cancer.
Affiliation(s)
- Akhil Kondepudi
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Melike Pekmezci
- Department of Pathology, University of California, San Francisco, San Francisco, CA, USA
- Xinhai Hou
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Katie Scotford
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Cheng Jiang
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Akshay Rao
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Edward S Harake
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Asadur Chowdury
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Wajd Al-Holou
- Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Lin Wang
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Aditya Pandey
- Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Maria G Castro
- Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA
- Thomas Roetzer-Pejrimovsky
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, Vienna, Austria
- Division of Neuropathology and Neurochemistry, Department of Neurology, Medical University Vienna, Vienna, Austria
- Georg Widhalm
- Department of Neurosurgery, Medical University Vienna, Vienna, Austria
- Honglak Lee
- Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Mitchel Berger
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Shawn Hervey-Jumper
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA.
- Todd Hollon
- Machine Learning in Neurosurgery Laboratory, Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA.
- Department of Neurosurgery, University of Michigan, Ann Arbor, MI, USA.
9. Guo Z, Lai A, Thygesen JH, Farrington J, Keen T, Li K. Large Language Models for Mental Health Applications: Systematic Review. JMIR Ment Health 2024; 11:e57400. [PMID: 39423368] [PMCID: PMC11530718] [DOI: 10.2196/57400]
Abstract
BACKGROUND Large language models (LLMs) are advanced artificial neural networks trained on extensive datasets to accurately understand and generate natural language. While they have received much attention and demonstrated potential in digital health, their application in mental health, particularly in clinical settings, has generated considerable debate. OBJECTIVE This systematic review aims to critically assess the use of LLMs in mental health, specifically focusing on their applicability and efficacy in early screening, digital interventions, and clinical settings. By systematically collating and assessing the evidence from current studies, our work analyzes models, methodologies, data sources, and outcomes, thereby highlighting the potential of LLMs in mental health, the challenges they present, and the prospects for their clinical use. METHODS Adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, this review searched 5 open-access databases: MEDLINE (accessed by PubMed), IEEE Xplore, Scopus, JMIR, and ACM Digital Library. Keywords used were (mental health OR mental illness OR mental disorder OR psychiatry) AND (large language models). This study included articles published between January 1, 2017, and April 30, 2024, and excluded articles published in languages other than English. RESULTS In total, 40 articles were evaluated, including 15 (38%) articles on mental health conditions and suicidal ideation detection through text analysis, 7 (18%) on the use of LLMs as mental health conversational agents, and 18 (45%) on other applications and evaluations of LLMs in mental health. LLMs show good effectiveness in detecting mental health issues and providing accessible, destigmatized eHealth services. However, assessments also indicate that the current risks associated with clinical use might surpass their benefits. These risks include inconsistencies in generated text; the production of hallucinations; and the absence of a comprehensive, benchmarked ethical framework. CONCLUSIONS This systematic review examines the clinical applications of LLMs in mental health, highlighting their potential and inherent risks. The study identifies several issues: the lack of multilingual datasets annotated by experts, concerns regarding the accuracy and reliability of generated content, challenges in interpretability due to the "black box" nature of LLMs, and ongoing ethical dilemmas. These ethical concerns include the absence of a clear, benchmarked ethical framework; data privacy issues; and the potential for overreliance on LLMs by both physicians and patients, which could compromise traditional medical practices. As a result, LLMs should not be considered substitutes for professional mental health services. However, the rapid development of LLMs underscores their potential as valuable clinical aids, emphasizing the need for continued research and development in this area. TRIAL REGISTRATION PROSPERO CRD42024508617; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=508617.
Affiliation(s)
- Zhijun Guo
- Institute of Health Informatics, University College London, London, United Kingdom
- Alvina Lai
- Institute of Health Informatics, University College London, London, United Kingdom
- Johan H Thygesen
- Institute of Health Informatics, University College London, London, United Kingdom
- Joseph Farrington
- Institute of Health Informatics, University College London, London, United Kingdom
- Thomas Keen
- Institute of Health Informatics, University College London, London, United Kingdom
- Great Ormond Street Institute of Child Health, University College London, London, United Kingdom
- Kezhi Li
- Institute of Health Informatics, University College London, London, United Kingdom
10. Dasa O, Bai C, Sajdeya R, Kimmel SE, Pepine CJ, Gurka MJ, Laubenbacher R, Pearson TA, Mardini MT. Identifying Potential Factors Associated With Racial Disparities in COVID-19 Outcomes: Retrospective Cohort Study Using Machine Learning on Real-World Data. JMIR Public Health Surveill 2024; 10:e54421. [PMID: 39326040] [PMCID: PMC11467607] [DOI: 10.2196/54421]
Abstract
BACKGROUND Racial disparities in COVID-19 incidence and outcomes have been widely reported. Non-Hispanic Black patients disproportionately endured worse outcomes compared with non-Hispanic White patients, but the epidemiological basis for these observations was complex and multifaceted. OBJECTIVE This study aimed to elucidate the potential reasons behind the worse COVID-19 outcomes experienced by non-Hispanic Black patients compared with non-Hispanic White patients, and to examine how these variables interact, using an explainable machine learning approach. METHODS In this retrospective cohort study, we examined 28,943 laboratory-confirmed COVID-19 cases from the OneFlorida Research Consortium's data trust of health care recipients in Florida through April 28, 2021. We assessed the prevalence of pre-existing comorbid conditions, geo-socioeconomic factors, and health outcomes in the structured electronic health records of COVID-19 cases. The primary outcome was a composite of hospitalization, intensive care unit admission, and mortality at index admission. We developed and validated a machine learning model using Extreme Gradient Boosting to evaluate predictors of worse outcomes of COVID-19 and rank them by importance. RESULTS Compared to non-Hispanic White patients, non-Hispanic Black patients were younger, more likely to be uninsured, had a higher prevalence of emergency department and inpatient visits, and were in regions with higher area deprivation index rankings and pollutant concentrations. Non-Hispanic Black patients had the highest burden of comorbidities and rates of the primary outcome. Age was a key predictor in all models, ranking highest in non-Hispanic White patients. However, for non-Hispanic Black patients, congestive heart failure was a primary predictor. Other variables, such as food environment measures and air pollution indicators, also ranked high. When comorbidities were consolidated into the Elixhauser Comorbidity Index, the index became the top predictor, providing a comprehensive risk measure. CONCLUSIONS The study reveals that individual and geo-socioeconomic factors significantly influence the outcomes of COVID-19. It also highlights varying risk profiles among different racial groups. While these findings suggest potential disparities, further causal inference and statistical testing are needed to fully substantiate these observations. Recognizing these relationships is vital for creating effective, tailored interventions that reduce disparities and enhance health outcomes across all racial and socioeconomic groups.
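A minimal sketch of the modelling step described here, an Extreme Gradient Boosting classifier for the composite outcome with features ranked by importance. The feature names and data are illustrative stand-ins, not the study's variables.

```python
# Minimal sketch: XGBoost classifier and importance ranking on synthetic data.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
features = ["age", "elixhauser_index", "chf", "area_deprivation_index", "pm25"]
X = pd.DataFrame(rng.random((1000, len(features))), columns=features)
y = rng.integers(0, 2, 1000)   # 1 = hospitalization, ICU admission, or death

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
model.fit(X, y)

ranking = sorted(zip(features, model.feature_importances_), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```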
Affiliation(s)
- Osama Dasa
- Department of Epidemiology, College of Public Health and Health Professions and College of Medicine, University of Florida, Gainesville, FL, United States
- Division of Cardiovascular Medicine, Department of Medicine, University of Florida, Gainesville, FL, United States
- Chen Bai
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, United States
- Ruba Sajdeya
- Department of Epidemiology, College of Public Health and Health Professions and College of Medicine, University of Florida, Gainesville, FL, United States
- Stephen E Kimmel
- Department of Epidemiology, College of Public Health and Health Professions and College of Medicine, University of Florida, Gainesville, FL, United States
- Carl J Pepine
- Division of Cardiovascular Medicine, Department of Medicine, University of Florida, Gainesville, FL, United States
- Matthew J Gurka
- Department of Public Health Sciences, School of Medicine, University of Virginia, Charlottesville, VA, United States
- Reinhard Laubenbacher
- Laboratory for Systems Medicine, Division of Pulmonary, Critical Care, and Sleep Medicine, Department of Medicine, University of Florida, Gainesville, FL, United States
- Thomas A Pearson
- Department of Epidemiology, College of Public Health and Health Professions and College of Medicine, University of Florida, Gainesville, FL, United States
- Mamoun T Mardini
- Department of Health Outcomes and Biomedical Informatics, University of Florida, Gainesville, FL, United States
11. Yuan S, Yin T, He H, Liu X, Long X, Dong P, Zhu Z. Phenotypic, Metabolic and Genetic Adaptations of the Ficus Species to Abiotic Stress Response: A Comprehensive Review. Int J Mol Sci 2024; 25:9520. [PMID: 39273466] [PMCID: PMC11394708] [DOI: 10.3390/ijms25179520]
Abstract
The Ficus genus, having radiated from the tropics and subtropics to the temperate zone worldwide, is the largest genus among woody plants, comprising over 800 species. Evolution of the Ficus species has resulted in genetic diversity, global radiation and geographical differentiation, suggesting adaptation to diverse environments and an ability to cope with stress. Apart from familiar physiological changes, such as stomatal closure and alterations in plant hormone levels, the Ficus species exhibit unique mechanisms in response to abiotic stress, such as regulation of leaf temperature and retention of drought memory. The stress-resistance genes harbored by Ficus enable effective responses to abiotic stress. Understanding the stress-resistance mechanisms in Ficus provides insights into the genetic breeding of stress-tolerant crop cultivars. Addressing these issues, we comprehensively reviewed recent progress concerning the Ficus genes and relevant mechanisms that play important roles in abiotic stress responses. These findings highlight the potentially important applications of the stress-resistance genes found in Ficus.
Affiliation(s)
- Shengyun Yuan
- School of Life Sciences, Chongqing University, Chongqing 401331, China
- Tianxiang Yin
- School of Life Sciences, Chongqing University, Chongqing 401331, China
- Hourong He
- School of Life Sciences, Chongqing University, Chongqing 401331, China
- Xinyi Liu
- School of Life Sciences, Chongqing University, Chongqing 401331, China
- Xueyan Long
- School of Life Sciences, Chongqing University, Chongqing 401331, China
- Pan Dong
- School of Life Sciences, Chongqing University, Chongqing 401331, China
- Zhenglin Zhu
- School of Life Sciences, Chongqing University, Chongqing 401331, China
12. Lima TE, Ferraz MVF, Brito CAA, Ximenes PB, Mariz CA, Braga C, Wallau GL, Viana IFT, Lins RD. Determination of prognostic markers for COVID-19 disease severity using routine blood tests and machine learning. An Acad Bras Cienc 2024; 96:e20230894. [PMID: 38922277] [DOI: 10.1590/0001-376520242023089]
Abstract
The need to identify risk factors associated with COVID-19 disease severity remains urgent. Patient care and resource allocation can differ and are defined based on the current classification of disease severity. This classification is based on the analysis of clinical parameters and routine blood tests, which are not standardized across the globe. Some laboratory test alterations have been associated with COVID-19 severity, although these data are conflicting, partly owing to the different methodologies used across studies. This study aimed to construct and validate a disease severity prediction model using machine learning (ML). Seventy-two patients admitted to a Brazilian hospital, diagnosed with COVID-19 through RT-PCR and/or ELISA and presenting varying degrees of disease severity, were included in the study. Their electronic medical records and the results of daily blood tests were used to develop an ML model to predict disease severity. Using the above dataset, a combination of five laboratory biomarkers was identified as an accurate predictor of severe COVID-19, with a ROC-AUC of 0.80 ± 0.13. Those biomarkers included prothrombin activity, ferritin, serum iron, ATTP and monocytes. The application of the devised ML model may help rationalize clinical decision-making and care.
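A minimal sketch of reporting a cross-validated ROC-AUC (mean ± standard deviation) for a classifier built on five blood-test biomarkers. The classifier choice and the synthetic data are assumptions for illustration; the study's actual feature selection and model are not reproduced here.

```python
# Minimal sketch: cross-validated ROC-AUC on a five-feature biomarker matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# columns stand in for: prothrombin activity, ferritin, serum iron, ATTP, monocytes
X = rng.random((72, 5))
y = rng.integers(0, 2, 72)   # 1 = severe COVID-19

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=5, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```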
Affiliation(s)
- Tayná E Lima
- Fundação Oswaldo Cruz, Instituto Aggeu Magalhães, Departamento de Virologia, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-465 Recife, PE, Brazil
- Matheus V F Ferraz
- Fundação Oswaldo Cruz, Instituto Aggeu Magalhães, Departamento de Virologia, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-465 Recife, PE, Brazil
- Universidade Federal de Pernambuco, Departamento de Química Fundamental, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-560 Recife, PE, Brazil
- Carlos A A Brito
- Universidade Federal de Pernambuco, Hospital das Clínicas, Av. Professor Moraes Rego, 1235, Cidade Universitária, 50670-901 Recife, PE, Brazil
- Pamella B Ximenes
- Hospital dos Servidores Públicos do Estado de Pernambuco, Av. Conselheiro Rosa e Silva, s/n, Espinheiro, 52020-020 Recife, PE, Brazil
- Carolline A Mariz
- Fundação Oswaldo Cruz, Instituto Aggeu Magalhães, Departamento de Parasitologia, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-465 Recife, PE, Brazil
- Cynthia Braga
- Fundação Oswaldo Cruz, Instituto Aggeu Magalhães, Departamento de Parasitologia, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-465 Recife, PE, Brazil
- Gabriel L Wallau
- Fundação Oswaldo Cruz, Instituto Aggeu Magalhães, Departamento de Entomologia, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-465 Recife, PE, Brazil
- Isabelle F T Viana
- Fundação Oswaldo Cruz, Instituto Aggeu Magalhães, Departamento de Virologia, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-465 Recife, PE, Brazil
- Roberto D Lins
- Fundação Oswaldo Cruz, Instituto Aggeu Magalhães, Departamento de Virologia, Av. Professor Moraes Rego, s/n, Cidade Universitária, 50740-465 Recife, PE, Brazil
13. Okada N, Umemura Y, Shi S, Inoue S, Honda S, Matsuzawa Y, Hirano Y, Kikuyama A, Yamakawa M, Gyobu T, Hosomi N, Minami K, Morita N, Watanabe A, Yamasaki H, Fukaguchi K, Maeyama H, Ito K, Okamoto K, Harano K, Meguro N, Unita R, Koshiba S, Endo T, Yamamoto T, Yamashita T, Shinba T, Fujimi S. "KAIZEN" method realizing implementation of deep-learning models for COVID-19 CT diagnosis in real world hospitals. Sci Rep 2024; 14:1672. [PMID: 38243054] [PMCID: PMC10799049] [DOI: 10.1038/s41598-024-52135-y]
Abstract
Numerous COVID-19 diagnostic imaging Artificial Intelligence (AI) studies exist. However, none of their models were of potential clinical use, primarily owing to methodological defects and the lack of implementation considerations for inference. In this study, all development processes of the deep-learning models are performed based on the strict criteria of the "KAIZEN checklist", which is proposed on the basis of previous AI development guidelines to overcome the deficiencies mentioned above. We develop and evaluate two binary-classification deep-learning models to triage COVID-19: a slice model that examines a computed tomography (CT) slice to find COVID-19 lesions, and a series model that examines a series of CT images to find an infected patient. We collected 2,400,200 CT slices from twelve emergency centers in Japan. Area under the curve (AUC) and accuracy were calculated to assess classification performance, and the inference time of the system that includes these two models was measured. For the validation data, the slice and series models recognized COVID-19 with AUCs of 0.989 and 0.982 and accuracies of 95.9% and 93.0%, respectively. For the test data, the models' AUCs were 0.958 and 0.953, and their accuracies were 90.0% and 91.4%, respectively. The average inference time per case was 2.83 s. Our deep-learning system achieves accuracy and inference speed high enough for practical use. The system has already been implemented in four hospitals, and implementation at eight more is in progress. We have released the application software and implementation code for free in a highly usable state to allow its use in Japan and globally.
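A minimal sketch of the two-stage triage idea described here: a slice-level score for every CT slice, a series-level decision derived from those scores, and wall-clock inference time measured per case. The dummy model and the aggregation rule are assumptions, not the released system.

```python
# Minimal sketch: slice scores, a series-level decision, and per-case timing.
import time
import numpy as np

def slice_model(slice_2d: np.ndarray) -> float:
    """Stand-in for the slice classifier: returns P(COVID-19 lesion on this slice)."""
    return float(np.clip(slice_2d.mean(), 0.0, 1.0))

def series_decision(volume: np.ndarray, threshold: float = 0.5) -> bool:
    """Flag the case if enough slices look abnormal (illustrative rule only)."""
    probs = np.array([slice_model(s) for s in volume])
    return bool((probs > threshold).mean() > 0.1)

volume = np.random.default_rng(0).random((100, 256, 256)).astype(np.float32)
start = time.perf_counter()
flagged = series_decision(volume)
print(flagged, f"{time.perf_counter() - start:.2f} s per case")
```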
Affiliation(s)
- Shoi Shi
- University of Tsukuba, Tsukuba, Japan
- Ken Okamoto
- Juntendo University Urayasu Hospital, Urayasu, Japan
- Ryo Unita
- National Hospital Organization Kyoto Medical Center, Kyoto, Japan
- Takuro Endo
- International University of Health and Welfare, School of Medicine, Narita Hospital, Narita, Japan
14. Rahman MA, Victoros E, Ernest J, Davis R, Shanjana Y, Islam MR. Impact of Artificial Intelligence (AI) Technology in Healthcare Sector: A Critical Evaluation of Both Sides of the Coin. Clin Pathol 2024; 17:2632010X241226887. [PMID: 38264676] [PMCID: PMC10804900] [DOI: 10.1177/2632010x241226887]
Abstract
The influence of artificial intelligence (AI) has risen drastically in recent years, especially in the field of medicine. Its influence has spread so widely that it is set to become a pillar of the future medical world. A comprehensive literature search related to AI in healthcare was performed in the PubMed database, and relevant information was retrieved from suitable articles. AI excels in aspects such as rapid adaptation, high diagnostic accuracy, and data management, which can help improve workforce productivity. With this potential in sight, the FDA has continuously approved more machine learning (ML) software for use by medical workers and scientists. However, there are a few controversies, such as increased chances of data breaches, concerns about clinical implementation, and potential healthcare dilemmas. In this article, the positive and negative aspects of AI implementation in healthcare are discussed, and some potential solutions to the issues at hand are recommended.
Affiliation(s)
- Julianne Ernest
- Nesbitt School of Pharmacy Wilkes University, Wilkes-Barre, PA, USA
- Rob Davis
- Nesbitt School of Pharmacy Wilkes University, Wilkes-Barre, PA, USA
- Yeasna Shanjana
- Department of Environmental Sciences, North South University, Bashundhara, Dhaka, Bangladesh
15. Li C, Ye G, Jiang Y, Wang Z, Yu H, Yang M. Artificial Intelligence in battling infectious diseases: A transformative role. J Med Virol 2024; 96:e29355. [PMID: 38179882] [DOI: 10.1002/jmv.29355]
Abstract
It is widely acknowledged that infectious diseases have wrought immense havoc on human society and are regarded as adversaries that humanity cannot elude. In recent years, the advancement of Artificial Intelligence (AI) technology has ushered in a revolutionary era in the realm of infectious disease prevention and control. This evolution encompasses early warning of outbreaks, contact tracing, infection diagnosis, drug discovery, and the facilitation of drug design, alongside other facets of epidemic management. This article presents an overview of the utilization of AI systems in the field of infectious diseases, with a specific focus on their role during the COVID-19 pandemic. The article also highlights the contemporary challenges that AI confronts within this domain and posits strategies for their mitigation. There is an imperative to further harness the potential applications of AI across multiple domains to augment its capacity to address future disease outbreaks effectively.
Affiliation(s)
- Chunhui Li
- School of Life Science, Advanced Research Institute of Multidisciplinary Science, Key Laboratory of Molecular Medicine and Biotherapy, Beijing Institute of Technology, Beijing, People's Republic of China
- Guoguo Ye
- Shenzhen Key Laboratory of Pathogen and Immunity, National Clinical Research Center for Infectious Disease, The Third People's Hospital of Shenzhen, Second Hospital Affiliated to Southern University of Science and Technology, Shenzhen, China
- Yinghan Jiang
- School of Life Science, Advanced Research Institute of Multidisciplinary Science, Key Laboratory of Molecular Medicine and Biotherapy, Beijing Institute of Technology, Beijing, People's Republic of China
- Zhiming Wang
- School of Life Science, Advanced Research Institute of Multidisciplinary Science, Key Laboratory of Molecular Medicine and Biotherapy, Beijing Institute of Technology, Beijing, People's Republic of China
- Haiyang Yu
- Hangzhou Yalla Information Technology Service Co., Ltd., Hangzhou, People's Republic of China
- Minghui Yang
- School of Life Science, Advanced Research Institute of Multidisciplinary Science, Key Laboratory of Molecular Medicine and Biotherapy, Beijing Institute of Technology, Beijing, People's Republic of China
| |
Collapse
|
16
|
Kawata N, Iwao Y, Matsuura Y, Suzuki M, Ema R, Sekiguchi Y, Sato H, Nishiyama A, Nagayoshi M, Takiguchi Y, Suzuki T, Haneishi H. Prediction of oxygen supplementation by a deep-learning model integrating clinical parameters and chest CT images in COVID-19. Jpn J Radiol 2023; 41:1359-1372. [PMID: 37440160 PMCID: PMC10687147 DOI: 10.1007/s11604-023-01466-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Accepted: 06/28/2023] [Indexed: 07/14/2023]
Abstract
PURPOSE As of March 2023, the number of patients with COVID-19 worldwide is declining, but the early identification of patients requiring inpatient treatment and the appropriate allocation of limited healthcare resources remain unresolved issues. In this study, we constructed a deep-learning (DL) model to predict the need for oxygen supplementation using clinical information and chest CT images of patients with COVID-19. MATERIALS AND METHODS We retrospectively enrolled 738 patients with COVID-19 for whom clinical information (patient background, clinical symptoms, and blood test findings) was available and chest CT imaging was performed. The initial dataset was divided into 591 training and 147 evaluation cases. We developed a DL model that predicted oxygen supplementation by integrating clinical information and CT images. The model was validated at two other facilities (n = 191 and n = 230), and the importance of the clinical information for prediction was assessed. RESULTS The proposed DL model showed an area under the curve (AUC) of 89.9% for predicting oxygen supplementation, and validation at the two other facilities showed an AUC > 80%. With respect to model interpretation, dyspnea and the lactate dehydrogenase level contributed most strongly to the predictions. CONCLUSIONS The DL model integrating clinical information and chest CT images had high predictive accuracy. DL-based prediction of disease severity might be helpful in the clinical management of patients with COVID-19.
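The abstract does not describe the network in detail, so the following is only a minimal PyTorch sketch of the general idea of fusing a CT-image encoder with tabular clinical features; the layer sizes, the concatenation-based fusion, and the single-slice input are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a late-fusion model combining a CT-image encoder with
# tabular clinical features; all layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_clinical: int):
        super().__init__()
        self.cnn = nn.Sequential(              # encodes a single-channel CT slice
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mlp = nn.Sequential(              # encodes clinical variables
            nn.Linear(n_clinical, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, 1)      # fused features -> oxygen-need logit

    def forward(self, ct, clinical):
        z = torch.cat([self.cnn(ct), self.mlp(clinical)], dim=1)
        return self.head(z)                    # train with BCEWithLogitsLoss

model = FusionNet(n_clinical=12)
logit = model(torch.randn(4, 1, 128, 128), torch.randn(4, 12))
print(logit.shape)  # torch.Size([4, 1])
```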
Collapse
Affiliation(s)
- Naoko Kawata
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan.
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan.
- Medical Mycology Research Center (MMRC), Chiba University, Chiba, 260-8673, Japan.
| | - Yuma Iwao
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba-shi, Chiba, 263-8522, Japan
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-ku, Chiba-shi, Chiba, 263-8555, Japan
| | - Yukiko Matsuura
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-cho, Chuo-ku, Chiba-shi, Chiba, 260-0852, Japan
| | - Masaki Suzuki
- Department of Respirology, Kashiwa Kousei General Hospital, 617 Shikoda, Kashiwa-shi, Chiba, 277-8551, Japan
| | - Ryogo Ema
- Department of Respirology, Eastern Chiba Medical Center, 3-6-2, Okayamadai, Togane-shi, Chiba, 283-8686, Japan
| | - Yuki Sekiguchi
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan
| | - Hirotaka Sato
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan
- Department of Radiology, Soka Municipal Hospital, 2-21-1, Souka, Souka-shi, Saitama, 340-8560, Japan
| | - Akira Nishiyama
- Department of Radiology, Chiba University Hospital, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan
| | - Masaru Nagayoshi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-cho, Chuo-ku, Chiba-shi, Chiba, 260-0852, Japan
| | - Yasuo Takiguchi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-cho, Chuo-ku, Chiba-shi, Chiba, 260-0852, Japan
| | - Takuji Suzuki
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-ku, Chiba-shi, Chiba, 260-8677, Japan
| | - Hideaki Haneishi
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-cho, Inage-ku, Chiba-shi, Chiba, 263-8522, Japan
| |
Collapse
|
17
|
Nijiati M, Guo L, Tuersun A, Damola M, Abulizi A, Dong J, Xia L, Hong K, Zou X. Deep learning on longitudinal CT scans: automated prediction of treatment outcomes in hospitalized tuberculosis patients. iScience 2023; 26:108326. [PMID: 37965132 PMCID: PMC10641748 DOI: 10.1016/j.isci.2023.108326] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2023] [Revised: 08/17/2023] [Accepted: 10/20/2023] [Indexed: 11/16/2023] Open
Abstract
Three deep learning (DL)-based prediction models (PMs) using longitudinal CT images were developed to predict tuberculosis (TB) treatment outcomes. The internal dataset consists of 493 bacteriologically confirmed TB patients who completed anti-tuberculosis treatment and underwent three CT scans: a pretreatment scan and two follow-up scans. PM1 was trained using only pretreatment CT scans, whereas PM2 and PM3 were developed by adding follow-up scans. Independent testing was performed on an external dataset comprising 86 TB patients. The area under the curve for classifying treatment success versus drug-resistant (DR) TB improved on both the internal (0.609 vs. 0.625 vs. 0.815) and external (0.627 vs. 0.705 vs. 0.735) datasets as follow-up scans were added. Accuracy and F1-score also showed an increasing tendency in the external test. Regular follow-up CT scans can aid treatment outcome prediction, and special attention should be given to the early intensive phase of treatment to identify high-risk DR-TB patients.
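One simple way to picture how follow-up scans can be added to a pretreatment-only model is to stack the time points as input channels, so that PM1, PM2, and PM3 differ only in the number of channels. The sketch below is a hypothetical PyTorch illustration of that idea, not the authors' architecture.

```python
# Minimal sketch: represent longitudinal CT by stacking the pretreatment and
# follow-up scans as input channels; the backbone itself is a placeholder.
import torch
import torch.nn as nn

def make_model(n_timepoints: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(n_timepoints, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 2),                      # treatment success vs. DR-TB
    )

pm1 = make_model(1)   # pretreatment scan only (analogous to PM1)
pm3 = make_model(3)   # pretreatment + two follow-up scans (analogous to PM3)
x = torch.randn(8, 3, 256, 256)               # batch of 3-timepoint image stacks
print(pm3(x).shape)   # torch.Size([8, 2])
```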
Collapse
Affiliation(s)
- Mayidili Nijiati
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
| | - Lin Guo
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
| | - Abudouresuli Tuersun
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
| | - Maihemitijiang Damola
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
| | | | - Jiake Dong
- Department of Radiology, The First People’s Hospital of Kashi Prefecture, Kashi, China
| | - Li Xia
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
| | - Kunlei Hong
- Shenzhen Zhiying Medical Imaging, Shenzhen, China
| | - Xiaoguang Zou
- Clinical Medical Research Center, The First People’s Hospital of Kashi Prefecture, Kashi, China
| |
Collapse
|
18
|
Tan M, Xia J, Luo H, Meng G, Zhu Z. Applying the digital data and the bioinformatics tools in SARS-CoV-2 research. Comput Struct Biotechnol J 2023; 21:4697-4705. [PMID: 37841328 PMCID: PMC10568291 DOI: 10.1016/j.csbj.2023.09.044] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 09/29/2023] [Accepted: 09/29/2023] [Indexed: 10/17/2023] Open
Abstract
Bioinformatics has played a crucial role in the scientific effort against the pandemic of coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Advances in novel algorithms, big data technology, artificial intelligence, and deep learning have assisted the development of novel bioinformatics tools to analyze the steadily growing volume of SARS-CoV-2 data over the past years. These tools have been applied in genomic analyses, evolutionary tracking, epidemiological analyses, protein structure interpretation, studies of virus-host interaction, and clinical applications. To promote in-silico analysis in the future, we conducted a review that summarizes the databases, web services, and software applied in SARS-CoV-2 research. These digital resources may also contribute to research on other coronaviruses and non-coronavirus viruses.
Collapse
Affiliation(s)
- Meng Tan
- School of Life Sciences, Chongqing University, Chongqing, China
| | - Jiaxin Xia
- School of Life Sciences, Chongqing University, Chongqing, China
| | - Haitao Luo
- School of Life Sciences, Chongqing University, Chongqing, China
| | - Geng Meng
- College of Veterinary Medicine, China Agricultural University, Beijing, China
| | - Zhenglin Zhu
- School of Life Sciences, Chongqing University, Chongqing, China
| |
Collapse
|
19
|
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253 PMCID: PMC10394169 DOI: 10.1016/j.xcrm.2023.101095] [Citation(s) in RCA: 42] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 04/17/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than that of experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, which calls the true value of these systems into question. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Collapse
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| | - Lei Wang
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
| | - Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - He Xie
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Hongjian Zhou
- Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
| | - Shanjun Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Yi Shao
- Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China.
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
| |
Collapse
|
20
|
Lin M, Hou B, Mishra S, Yao T, Huo Y, Yang Q, Wang F, Shih G, Peng Y. Enhancing thoracic disease detection using chest X-rays from PubMed Central Open Access. Comput Biol Med 2023; 159:106962. [PMID: 37094464 PMCID: PMC10349296 DOI: 10.1016/j.compbiomed.2023.106962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/26/2023] [Accepted: 04/18/2023] [Indexed: 04/26/2023]
Abstract
Large chest X-ray (CXR) datasets have been collected to train deep learning models to detect thoracic pathology on CXR. However, most CXR datasets come from single-center studies, and the collected pathologies are often imbalanced. The aim of this study was to automatically construct a public, weakly labeled CXR database from articles in PubMed Central Open Access (PMC-OA) and to assess model performance on CXR pathology classification by using this database as additional training data. Our framework includes text extraction, CXR pathology verification, subfigure separation, and image modality classification. We have extensively validated the utility of the automatically generated image database on thoracic disease detection tasks, including Hernia, Lung Lesion, Pneumonia, and Pneumothorax. We chose these diseases because of their historically poor performance on existing datasets: the NIH-CXR dataset (112,120 CXR) and the MIMIC-CXR dataset (243,324 CXR). We find that classifiers fine-tuned with the additional PMC-CXR data extracted by the proposed framework consistently and significantly achieved better performance than those without (e.g., Hernia: 0.9335 vs. 0.9154; Lung Lesion: 0.7394 vs. 0.7207; Pneumonia: 0.7074 vs. 0.6709; Pneumothorax: 0.8185 vs. 0.7517, all in AUC with p < 0.0001) for CXR pathology detection. In contrast to previous approaches that manually submit medical images to the repository, our framework can automatically collect figures and their accompanying figure legends. Compared with previous studies, the proposed framework improves subfigure segmentation and incorporates an advanced, self-developed NLP technique for CXR pathology verification. We hope it complements existing resources and improves our ability to make biomedical image data findable, accessible, interoperable, and reusable.
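The core evaluation idea, training the same classifier with and without the additional weakly labeled data and comparing AUC on a fixed test set, can be sketched as follows. Synthetic features and a logistic regression stand in for CXR images and the fine-tuned deep models; none of this reproduces the paper's pipeline.

```python
# Minimal sketch: compare per-pathology AUC with and without extra training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
def synth(n):                                  # toy feature/label generator
    X = rng.normal(size=(n, 20))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

X_base, y_base = synth(500)                    # original training set
X_extra, y_extra = synth(300)                  # extra weakly labeled examples
X_test, y_test = synth(1000)

auc_without = roc_auc_score(
    y_test,
    LogisticRegression(max_iter=1000).fit(X_base, y_base).predict_proba(X_test)[:, 1],
)
auc_with = roc_auc_score(
    y_test,
    LogisticRegression(max_iter=1000)
    .fit(np.vstack([X_base, X_extra]), np.hstack([y_base, y_extra]))
    .predict_proba(X_test)[:, 1],
)
print(f"AUC without extra data: {auc_without:.3f}, with extra data: {auc_with:.3f}")
```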
Collapse
Affiliation(s)
- Mingquan Lin
- Department of Population Health Sciences, Weill Cornell Medicine, New York, USA
| | - Bojian Hou
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, USA
| | - Swati Mishra
- Department of Information Science, Cornell University, New York, USA
| | - Tianyuan Yao
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Yuankai Huo
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Qian Yang
- Department of Information Science, Cornell University, New York, USA
| | - Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, New York, USA
| | - George Shih
- Department of Radiology, Weill Cornell Medicine, New York, USA
| | - Yifan Peng
- Department of Population Health Sciences, Weill Cornell Medicine, New York, USA.
| |
Collapse
|
21
|
Predicting gene mutation status via artificial intelligence technologies based on multimodal integration (MMI) to advance precision oncology. Semin Cancer Biol 2023; 91:1-15. [PMID: 36801447 DOI: 10.1016/j.semcancer.2023.02.006] [Citation(s) in RCA: 33] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 01/30/2023] [Accepted: 02/15/2023] [Indexed: 02/21/2023]
Abstract
Personalized treatment strategies for cancer frequently rely on the detection of genetic alterations, which are determined by molecular biology assays. Historically, these processes typically required single-gene sequencing, next-generation sequencing, or visual inspection of histopathology slides by experienced pathologists in a clinical context. In the past decade, advances in artificial intelligence (AI) technologies have demonstrated remarkable potential in assisting physicians with accurate diagnosis in oncology image-recognition tasks. Meanwhile, AI techniques make it possible to integrate multimodal data such as radiology, histology, and genomics, providing critical guidance for the stratification of patients in the context of precision therapy. Given that mutation detection is unaffordable and time-consuming for a considerable number of patients, predicting gene mutations from routine clinical radiological scans or whole-slide tissue images with AI-based methods has become a topic of intense interest in clinical practice. In this review, we synthesized the general framework of multimodal integration (MMI) for molecular intelligent diagnostics beyond standard techniques. We then summarized the emerging applications of AI in the prediction of mutational and molecular profiles of common cancers (lung, brain, breast, and other tumor types) from radiology and histology imaging. Furthermore, we conclude that AI techniques still face multiple challenges on the way to real-world application in the medical field, including data curation, feature fusion, model interpretability, and practice regulations. Despite these challenges, we anticipate the clinical implementation of AI as a promising decision-support tool to aid oncologists in future cancer treatment management.
Collapse
|
22
|
Wang W, Li M, Fan P, Wang H, Cai J, Wang K, Zhang T, Xiao Z, Yan J, Chen C, Lv Q. Prototype early diagnostic model for invasive pulmonary aspergillosis based on deep learning and big data training. Mycoses 2023; 66:118-127. [PMID: 36271699 DOI: 10.1111/myc.13540] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 10/16/2022] [Accepted: 10/19/2022] [Indexed: 01/12/2023]
Abstract
BACKGROUND Currently, the diagnosis of invasive pulmonary aspergillosis (IPA) mainly depends on the integration of clinical, radiological and microbiological data. Artificial intelligence (AI) has shown great advantages in dealing with data-rich biological and medical challenges, but the literature on IPA diagnosis is rare. OBJECTIVE This study aimed to provide a non-invasive, objective and easy-to-use AI approach for the early diagnosis of IPA. METHODS We generated a prototype diagnostic deep learning model (IPA-NET) comprising three interrelated computation modules for the automatic diagnosis of IPA. First, IPA-NET was subjected to transfer learning using 300,000 CT images of non-fungal pneumonia from an online database. Second, training and internal test sets, including clinical features and chest CT images of patients with IPA and non-fungal pneumonia in the early stage of the disease, were independently constructed for model training and internal verification. Third, the model was further validated using an external test set. RESULTS IPA-NET showed a marked diagnostic performance for IPA as verified by the internal test set, with an accuracy of 96.8%, a sensitivity of 0.98, a specificity of 0.96 and an area under the curve (AUC) of 0.99. When further validated using the external test set, IPA-NET showed an accuracy of 89.7%, a sensitivity of 0.88, a specificity of 0.91 and an AUC of 0.95. CONCLUSION This novel deep learning model provides a non-invasive, objective and reliable method for the early diagnosis of IPA.
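The transfer-learning step described above (pre-train on a large pneumonia CT corpus, then fine-tune for IPA) follows a standard pattern that can be sketched with torchvision; here ImageNet weights and a ResNet-18 backbone are stand-ins for the 300,000 non-fungal pneumonia CT images and for IPA-NET itself.

```python
# Minimal sketch of transfer learning for a binary IPA vs. non-fungal pneumonia
# head; the backbone and frozen-feature choice are illustrative assumptions.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():       # freeze pretrained features (optional)
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # new trainable IPA head

# Train backbone.fc (and optionally later blocks) on the IPA training set, then
# report accuracy, sensitivity, specificity, and AUC on internal and external
# test sets, as described in the abstract.
```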
Collapse
Affiliation(s)
- Wei Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Mujiao Li
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Department of Information, Guangzhou First People's Hospital, Guangzhou, China
| | - Peimin Fan
- Department of Information Center, Guangzhou Chest Hospital, Guangzhou, China
| | - Hua Wang
- Department of Critical Care Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Jing Cai
- Department of Critical Care Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Kai Wang
- Department of Critical Care Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Tao Zhang
- Department of Information, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Zelin Xiao
- Department of Surgery, Guangzhou Chest Hospital, Guangzhou, China
| | - Jingdong Yan
- Department of Information, Nanfang Hospital, Southern Medical University, Guangzhou, China
| | - Chaomin Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Qingwen Lv
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| |
Collapse
|
23
|
Ma L, Song S, Guo L, Tan W, Xu L. COVID-19 lung infection segmentation from chest CT images based on CAPA-ResUNet. International Journal of Imaging Systems and Technology 2023; 33:6-17. [PMID: 36713026 PMCID: PMC9874448 DOI: 10.1002/ima.22819] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 09/13/2022] [Accepted: 10/02/2022] [Indexed: 06/18/2023]
Abstract
The coronavirus disease 2019 (COVID-19) epidemic has had devastating effects on personal health around the world. Accurate segmentation of pulmonary infection regions, an early indicator of disease, is therefore important. To address this problem, a deep learning model, the content-aware pre-activated residual UNet (CAPA-ResUNet), was proposed for segmenting COVID-19 lesions from CT slices. In this network, the pre-activated residual block was used for down-sampling to cope with the complex foreground and large fluctuations in data distribution during training and to avoid vanishing gradients. An area loss function based on falsely segmented regions was proposed to address the fuzzy boundary of the lesion area. The model was evaluated on a public dataset (COVID-19 Lung CT Lesion Segmentation Challenge 2020), and its performance was compared with that of classical models. Our method outperforms other models on multiple metrics: CAPA-ResUNet obtained a Dice coefficient of 0.775, a specificity (Spe) of 0.972, and an intersection over union (IoU) of 0.646. Its Dice coefficient was 2.51% higher than that of the content-aware residual UNet (CARes-UNet). The code is available at https://github.com/malu108/LungInfectionSeg.
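For reference, the three overlap metrics quoted above (Dice coefficient, specificity, and IoU) can be computed for a binary lesion mask as in this minimal NumPy sketch; the toy masks are placeholders.

```python
# Minimal sketch of Dice, IoU, and specificity for binary segmentation masks.
import numpy as np

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return inter / (np.logical_or(pred, gt).sum() + 1e-8)

def specificity(pred, gt):
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return tn / (tn + fp + 1e-8)

pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
gt = np.zeros((64, 64), dtype=bool); gt[12:32, 12:32] = True
print(dice(pred, gt), iou(pred, gt), specificity(pred, gt))
```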
Collapse
Affiliation(s)
- Lu Ma
- School of Science, Northeastern University, Shenyang, China
| | - Shuni Song
- Guangdong Peizheng College, Guangzhou, China
| | - Liting Guo
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Wenjun Tan
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, Liaoning, China
| | - Lisheng Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, Liaoning, China
- Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., Shenyang, Liaoning, China
| |
Collapse
|
24
|
Hybrid intelligent model for classifying chest X-ray images of COVID-19 patients using genetic algorithm and neutrosophic logic. Soft comput 2023; 27:3427-3442. [PMID: 34421342 PMCID: PMC8371596 DOI: 10.1007/s00500-021-06103-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/30/2021] [Indexed: 12/23/2022]
Abstract
The rapidly spreading COVID-19 virus created a huge need for an accurate and speedy diagnostic method. The well-known RT-PCR test is costly and not available for many suspected cases. This article proposes a neutrosophic model to diagnose COVID-19 patients based on their chest X-ray images. The proposed model has five main phases. First, the speeded up robust features (SURF) method is applied to each X-ray image to extract robust invariant features. Second, three sampling algorithms are applied to treat the imbalanced dataset. Third, a neutrosophic rule-based classification system is proposed to generate a set of rules based on the three neutrosophic values <T; I; F>, the degrees of truth, indeterminacy, and falsity. Fourth, a genetic algorithm is applied to select the optimal neutrosophic rules and improve classification performance. Fifth, classification based on neutrosophic logic is performed: the testing rule matrix is constructed with no class label, and the class label of each testing rule is determined using the intersection percentage between testing and training rules. The proposed model, referred to as GNRCS, is compared with six state-of-the-art classifiers, namely the multilayer perceptron (MLP), support vector machine (SVM), linear discriminant analysis (LDA), decision tree (DT), naive Bayes (NB), and random forest classifier (RFC), using accuracy, precision, sensitivity, specificity, and F1-score as quality measures. The results show that the proposed model is powerful for COVID-19 recognition, with high specificity, high sensitivity, and low computational complexity. Therefore, the proposed GNRCS model could be used for real-time automatic early recognition of COVID-19.
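Two of the generic steps in this pipeline, rebalancing the training set and benchmarking the six comparison classifiers, can be sketched with scikit-learn as below. SURF feature extraction and the neutrosophic rule system are not reproduced; the synthetic features and the simple oversampling strategy are assumptions for illustration only.

```python
# Minimal sketch: oversample the minority class, then benchmark the comparison
# classifiers named in the abstract on synthetic stand-in features.
import numpy as np
from sklearn.utils import resample
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 30))
y = (rng.random(600) < 0.2).astype(int)        # imbalanced labels (~20% positive)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

pos, neg = Xtr[ytr == 1], Xtr[ytr == 0]        # oversample the minority class
pos_up = resample(pos, n_samples=len(neg), replace=True, random_state=0)
Xbal = np.vstack([neg, pos_up])
ybal = np.hstack([np.zeros(len(neg)), np.ones(len(pos_up))]).astype(int)

for clf in [MLPClassifier(max_iter=500), SVC(), LinearDiscriminantAnalysis(),
            DecisionTreeClassifier(), GaussianNB(), RandomForestClassifier()]:
    clf.fit(Xbal, ybal)
    print(type(clf).__name__, f1_score(yte, clf.predict(Xte)))
```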
Collapse
|
25
|
Aminu M, Yadav D, Hong L, Young E, Edelkamp P, Saad M, Salehjahromi M, Chen P, Sujit SJ, Chen MM, Sabloff B, Gladish G, de Groot PM, Godoy MCB, Cascone T, Vokes NI, Zhang J, Brock KK, Daver N, Woodman SE, Tawbi HA, Sheshadri A, Lee JJ, Jaffray D, D3CODE Team, Wu CC, Chung C, Wu J. Habitat Imaging Biomarkers for Diagnosis and Prognosis in Cancer Patients Infected with COVID-19. Cancers (Basel) 2022; 15:275. [PMID: 36612278 PMCID: PMC9818576 DOI: 10.3390/cancers15010275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/28/2022] [Accepted: 12/29/2022] [Indexed: 01/03/2023] Open
Abstract
OBJECTIVES Cancer patients have worse outcomes from COVID-19 infection, a greater need for ventilator support, and higher mortality rates than the general population. However, previous artificial intelligence (AI) studies developed diagnosis and severity prediction models in patients without cancer, and little is known about how these AI models perform in cancer patients. In this study, we aimed to develop a computational framework for COVID-19 diagnosis and severity prediction specifically in a cancer population and to compare it head-to-head with a general population. METHODS We enrolled multi-center international cohorts comprising 531 CT scans from 502 general patients and 420 CT scans from 414 cancer patients. A habitat imaging pipeline was developed to quantify complex infection patterns by partitioning the whole lung region into phenotypically different subregions. Subsequently, various machine learning models nested with feature selection were built for COVID-19 detection and severity prediction. RESULTS These models showed almost perfect performance in COVID-19 diagnosis and severity prediction during cross-validation. Our analysis revealed that models built separately on the cancer population performed significantly better than models built on the general population and then locked and tested on the cancer population. This may be due to the significant differences in habitat features between the two cohorts. CONCLUSIONS Taken together, this proof-of-concept habitat imaging analysis highlights the unique radiologic features of cancer patients and demonstrates the effectiveness of CT-based machine learning models in informing COVID-19 management in the cancer population.
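The "habitat" idea, clustering lung voxels into phenotypically distinct subregions and summarizing them as patient-level features, can be sketched as follows; the two voxel-level features (intensity and a local-variance proxy), the number of habitats, and the use of k-means are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal sketch: cluster lung voxels into "habitats" and use the fraction of
# lung occupied by each habitat as a patient-level feature vector.
import numpy as np
from sklearn.cluster import KMeans
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
ct = rng.normal(-700, 150, size=(32, 32, 32))        # toy HU volume
lung_mask = np.ones_like(ct, dtype=bool)             # assume lung already segmented

local_mean = uniform_filter(ct, size=5)
local_var = uniform_filter(ct ** 2, size=5) - local_mean ** 2
feats = np.stack([ct[lung_mask], local_var[lung_mask]], axis=1)

k = 4                                                # number of habitats (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
habitat_fractions = np.bincount(labels, minlength=k) / labels.size
print(habitat_fractions)                             # per-patient feature vector
```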
Collapse
Affiliation(s)
- Muhammad Aminu
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
| | - Divya Yadav
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Lingzhi Hong
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
| | - Elliana Young
- Department of Enterprise Data Engineering & Analytics, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Paul Edelkamp
- Department of Enterprise Data Engineering & Analytics, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Maliazurina Saad
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
| | - Morteza Salehjahromi
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
| | - Pingjun Chen
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
| | - Sheeba J. Sujit
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
| | - Melissa M. Chen
- Department of Neuroradiology, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Bradley Sabloff
- Department of Thoracic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Gregory Gladish
- Department of Thoracic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Patricia M. de Groot
- Department of Thoracic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Myrna C. B. Godoy
- Department of Thoracic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Tina Cascone
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Natalie I. Vokes
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX 77054, USA
- Department of Genomic Medicine, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Jianjun Zhang
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX 77054, USA
- Department of Genomic Medicine, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Kristy K. Brock
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
| | - Naval Daver
- Department of Leukemia, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Scott E. Woodman
- Department of Genomic Medicine, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Hussein A. Tawbi
- Department of Melanoma Medical Oncology, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Ajay Sheshadri
- Department of Pulmonary Medicine, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - J. Jack Lee
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77054, USA
| | - David Jaffray
- Office of the Chief Technology and Digital Officer, MD Anderson Cancer Center, Houston, TX 77054, USA
| | | | - Carol C. Wu
- Department of Thoracic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Caroline Chung
- Office of the Chief Data Officer, MD Anderson Cancer Center, Houston, TX 77054, USA
| | - Jia Wu
- Department of Imaging Physics, MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030, USA
- Department of Thoracic/Head and Neck Medical Oncology, MD Anderson Cancer Center, Houston, TX 77054, USA
| |
Collapse
|
26
|
Sun W, Pang Y, Zhang G. CCT: Lightweight compact convolutional transformer for lung disease CT image classification. Front Physiol 2022; 13:1066999. [DOI: 10.3389/fphys.2022.1066999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Accepted: 10/25/2022] [Indexed: 11/06/2022] Open
Abstract
Computed tomography (CT) imaging results are an important criterion for the diagnosis of lung disease, and CT images can clearly show the characteristics of lung lesions. Early and accurate detection of lung diseases helps clinicians improve patient care effectively. Therefore, in this study, we used a lightweight compact convolutional transformer (CCT) to build a prediction model for lung disease classification using chest CT images. We added a position offset term and replaced the attention mechanism of the transformer encoder with an axial attention module, which attends along the height and width dimensions and improved the model's classification performance. We show that the model effectively classifies COVID-19, community pneumonia, and normal conditions on the CC-CCII dataset. The proposed model outperforms other comparable models on the test set, achieving an accuracy of 98.5% and a sensitivity of 98.6%. The results show that our method achieves a larger receptive field on CT images, which benefits the classification of CT images. Thus, the method can provide adequate assistance to clinicians.
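An axial attention block of the kind referred to above applies self-attention along the height axis and then the width axis of a feature map, which is cheaper than full 2-D attention. The PyTorch sketch below illustrates the mechanism only; the dimensions, head count, and the absence of the position offset term are assumptions, not the paper's module.

```python
# Minimal sketch of axial attention over a 2-D feature map.
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn_h = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_w = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        # attend along H: each column becomes a sequence of length H
        t = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        t, _ = self.attn_h(t, t, t)
        x = t.reshape(b, w, h, c).permute(0, 3, 2, 1)
        # attend along W: each row becomes a sequence of length W
        t = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        t, _ = self.attn_w(t, t, t)
        return t.reshape(b, h, w, c).permute(0, 3, 1, 2)

out = AxialAttention(dim=32)(torch.randn(2, 32, 16, 16))
print(out.shape)                                # torch.Size([2, 32, 16, 16])
```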
Collapse
|
27
|
Luo J, Sun Y, Chi J, Liao X, Xu C. A novel deep learning-based method for COVID-19 pneumonia detection from CT images. BMC Med Inform Decis Mak 2022; 22:284. [PMID: 36324135 PMCID: PMC9629767 DOI: 10.1186/s12911-022-02022-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 10/17/2022] [Indexed: 11/07/2022] Open
Abstract
Background The sensitivity of RT-PCR in diagnosing COVID-19 is only 60–70%, and chest CT plays an indispensable role in the auxiliary diagnosis of COVID-19 pneumonia, but the results of CT imaging are highly dependent on professional radiologists. Aims This study aimed to develop a deep learning model to assist radiologists in detecting COVID-19 pneumonia.
Methods The total study population was 437. The training dataset contained 26,477, 2468, and 8104 CT images of normal, CAP, and COVID-19, respectively. The validation dataset contained 14,076, 1028, and 3376 CT images of normal, CAP, and COVID-19 patients, respectively. The test set included 51 normal cases, 28 CAP patients, and 51 COVID-19 patients. We designed and trained a deep learning model to recognize normal, CAP, and COVID-19 patients based on U-Net and ResNet-50. Moreover, the diagnoses of the deep learning model were compared with different levels of radiologists. Results In the test set, the sensitivity of the deep learning model in diagnosing normal cases, CAP, and COVID-19 patients was 98.03%, 89.28%, and 92.15%, respectively. The diagnostic accuracy of the deep learning model was 93.84%. In the validation set, the accuracy was 92.86%, which was better than that of two novice doctors (86.73% and 87.75%) and almost equal to that of two experts (94.90% and 93.88%). The AI model performed significantly better than all four radiologists in terms of time consumption (35 min vs. 75 min, 93 min, 79 min, and 82 min). Conclusion The AI model we obtained had strong decision-making ability, which could potentially assist doctors in detecting COVID-19 pneumonia.
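The per-class sensitivities and the overall accuracy reported above are standard quantities derived from the confusion matrix; a minimal scikit-learn sketch with placeholder labels is shown below.

```python
# Minimal sketch: per-class sensitivity (recall) and overall accuracy for the
# three classes (normal, CAP, COVID-19); the labels here are synthetic.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

classes = ["normal", "CAP", "COVID-19"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=130)
y_pred = np.where(rng.random(130) < 0.9, y_true, rng.integers(0, 3, size=130))

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
sensitivity = cm.diagonal() / cm.sum(axis=1)     # recall per class
for name, s in zip(classes, sensitivity):
    print(f"sensitivity({name}) = {s:.3f}")
print("accuracy =", accuracy_score(y_true, y_pred))
```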
Collapse
Affiliation(s)
- Ju Luo
- Third Xiangya Hospital, Central South University, NO.138, Tongzipo Road, Changsha, 410013, Hunan, China
| | - Yuhao Sun
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
| | - Jingshu Chi
- Third Xiangya Hospital, Central South University, NO.138, Tongzipo Road, Changsha, 410013, Hunan, China
| | - Xin Liao
- College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
| | - Canxia Xu
- Third Xiangya Hospital, Central South University, NO.138, Tongzipo Road, Changsha, 410013, Hunan, China.
| |
Collapse
|
28
|
Zhang X, Jiang R, Huang P, Wang T, Hu M, Scarsbrook AF, Frangi AF. Dynamic feature learning for COVID-19 segmentation and classification. Comput Biol Med 2022; 150:106136. [PMID: 36240599 PMCID: PMC9523910 DOI: 10.1016/j.compbiomed.2022.106136] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/25/2022] [Accepted: 09/18/2022] [Indexed: 11/28/2022]
Abstract
Since December 2019, coronavirus SARS-CoV-2 (COVID-19) has rapidly developed into a global epidemic, with millions of patients affected worldwide. As part of the diagnostic pathway, computed tomography (CT) scans are used to help patient management. However, parenchymal imaging findings in COVID-19 are non-specific and can be seen in other diseases. In this work, we propose to first segment lesions from CT images, and further, classify COVID-19 patients from healthy persons and common pneumonia patients. In detail, a novel Dynamic Fusion Segmentation Network (DFSN) that automatically segments infection-related pixels is first proposed. Within this network, low-level features are aggregated to high-level ones to effectively capture context characteristics of infection regions, and high-level features are dynamically fused to model multi-scale semantic information of lesions. Based on DFSN, Dynamic Transfer-learning Classification Network (DTCN) is proposed to distinguish COVID-19 patients. Within DTCN, a pre-trained DFSN is transferred and used as the backbone to extract pixel-level information. Then the pixel-level information is dynamically selected and used to make a diagnosis. In this way, the pre-trained DFSN is utilized through transfer learning, and clinical significance of segmentation results is comprehensively considered. Thus DTCN becomes more sensitive to typical signs of COVID-19. Extensive experiments are conducted to demonstrate effectiveness of the proposed DFSN and DTCN frameworks. The corresponding results indicate that these two models achieve state-of-the-art performance in terms of segmentation and classification.
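The transfer step from DFSN to DTCN, reusing a pretrained segmentation encoder as the feature extractor of a classification network, follows a common pattern that can be sketched in PyTorch as below; the tiny encoder, the frozen-weights choice, and the weight-file name are hypothetical placeholders, not the published networks.

```python
# Minimal sketch: reuse a pretrained segmentation encoder as the backbone of a
# classifier with a new diagnostic head.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # stands in for a DFSN-like encoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
)
# encoder.load_state_dict(torch.load("segmentation_encoder.pt"))  # hypothetical weights

classifier = nn.Sequential(                   # stands in for a DTCN-like classifier
    encoder, nn.Flatten(),
    nn.Linear(32, 3),                         # healthy / common pneumonia / COVID-19
)
for p in encoder.parameters():                # optionally freeze transferred layers
    p.requires_grad = False

print(classifier(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 3])
```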
Collapse
Affiliation(s)
- Xiaoqin Zhang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China.
| | - Runhua Jiang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
| | - Pengcheng Huang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
| | - Tao Wang
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
| | - Mingjun Hu
- College of Computer Science and Artificial Intelligence, Wenzhou University, China
| | - Andrew F Scarsbrook
- Radiology Department, Leeds Teaching Hospitals NHS Trust, UK; Leeds Institute of Medical Research, University of Leeds, UK
| | - Alejandro F Frangi
- Centre for Computational Imaging and Simulation Technologies in Biomedicine, Leeds Institute for Cardiovascular and Metabolic Medicine, University of Leeds, Leeds, UK; Department of Electrical Engineering, Department of Cardiovascular Sciences, KU Leuven, Belgium
| |
Collapse
|
29
|
Doraiswami PR, Sarveshwaran V, Swamidason ITJ, Sorna SCD. Jaya-tunicate swarm algorithm based generative adversarial network for COVID-19 prediction with chest computed tomography images. Concurrency and Computation: Practice & Experience 2022; 34:e7211. [PMID: 35945987 PMCID: PMC9353441 DOI: 10.1002/cpe.7211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 03/30/2022] [Accepted: 05/06/2022] [Indexed: 06/15/2023]
Abstract
The novel coronavirus disease (COVID-19) recently emerged as a serious respiratory syndrome. Chest computed tomography scanning is an important technology for monitoring and predicting COVID-19, and predicting COVID-19 in patients at an early stage remains an open challenge for the research community. Therefore, an effective prediction mechanism, the Jaya-tunicate swarm algorithm driven generative adversarial network (Jaya-TSA with GAN), is proposed in this research to identify patients with COVID-19 infection. The developed Jaya-TSA combines the Jaya algorithm with the tunicate swarm algorithm (TSA). Lung lobes are segmented using Bayesian fuzzy clustering, which effectively finds the boundary regions of the lung lobes. Based on the extracted features, COVID-19 prediction is performed using a GAN, and the optimal solution is obtained by training the GAN with the proposed Jaya-TSA with respect to a fitness measure. The dimensionality of the features is reduced by extracting only the optimal features, which increases the speed of the training process. Moreover, the developed Jaya-TSA-based GAN attained strong effectiveness, with a specificity of 0.8857, an accuracy of 0.8727, and a sensitivity of 0.85 across varying amounts of training data.
Collapse
Affiliation(s)
| | - Velliangiri Sarveshwaran
- Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur Campus, Chennai, India
| | | | | |
Collapse
|
30
|
Radiogenomic System for Non-Invasive Identification of Multiple Actionable Mutations and PD-L1 Expression in Non-Small Cell Lung Cancer Based on CT Images. Cancers (Basel) 2022; 14:cancers14194823. [PMID: 36230746 PMCID: PMC9563625 DOI: 10.3390/cancers14194823] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 09/29/2022] [Accepted: 09/29/2022] [Indexed: 11/16/2022] Open
Abstract
PURPOSE Personalized treatments such as targeted therapy and immunotherapy have revolutionized the predominantly therapeutic paradigm for non-small cell lung cancer (NSCLC). However, these treatment decisions require the determination of targetable genomic and molecular alterations through invasive genetic or immunohistochemistry (IHC) tests. Numerous previous studies have demonstrated that artificial intelligence can accurately predict the single-gene status of tumors based on radiologic imaging, but few studies have achieved the simultaneous evaluation of multiple genes to reflect more realistic clinical scenarios. METHODS We proposed a multi-label multi-task deep learning (MMDL) system for non-invasively predicting actionable NSCLC mutations and PD-L1 expression utilizing routinely acquired computed tomography (CT) images. This radiogenomic system integrated transformer-based deep learning features and radiomic features of CT volumes from 1096 NSCLC patients based on next-generation sequencing (NGS) and IHC tests. RESULTS For each task cohort, we randomly split the corresponding dataset into training (80%), validation (10%), and testing (10%) subsets. The area under the receiver operating characteristic curves (AUCs) of the MMDL system achieved 0.862 (95% confidence interval (CI), 0.758-0.969) for discrimination of a panel of 8 mutated genes, including EGFR, ALK, ERBB2, BRAF, MET, ROS1, RET and KRAS, 0.856 (95% CI, 0.663-0.948) for identification of a 10-molecular status panel (previous 8 genes plus TP53 and PD-L1); and 0.868 (95% CI, 0.641-0.972) for classifying EGFR / PD-L1 subtype, respectively. CONCLUSIONS To the best of our knowledge, this study is the first deep learning system to simultaneously analyze 10 molecular expressions, which might be utilized as an assistive tool in conjunction with or in lieu of ancillary testing to support precision treatment options.
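The fusion-and-multi-label idea, concatenating deep and radiomic feature vectors and predicting several molecular labels at once, can be sketched with scikit-learn as follows; the synthetic features, one-classifier-per-label setup, and label prevalences are assumptions for illustration and do not reproduce the MMDL system.

```python
# Minimal sketch: fuse deep and radiomic features, fit one binary classifier per
# molecular label, and report per-label and mean AUC.
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

labels = ["EGFR", "ALK", "ERBB2", "BRAF", "MET", "ROS1", "RET", "KRAS"]
rng = np.random.default_rng(0)
deep = rng.normal(size=(400, 64))        # transformer-based deep features (toy)
radiomic = rng.normal(size=(400, 30))    # hand-crafted radiomic features (toy)
X = np.hstack([deep, radiomic])
Y = (rng.random((400, len(labels))) < 0.25).astype(int)   # multi-label status

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.2, random_state=0)
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(Xtr, Ytr)
probs = np.column_stack([p[:, 1] for p in clf.predict_proba(Xte)])
aucs = [roc_auc_score(Yte[:, i], probs[:, i]) for i in range(len(labels))]
print(dict(zip(labels, np.round(aucs, 3))), "mean AUC:", round(float(np.mean(aucs)), 3))
```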
Collapse
|
31
|
Karthik R, Menaka R, Hariharan M, Kathiresan GS. AI for COVID-19 Detection from Radiographs: Incisive Analysis of State of the Art Techniques, Key Challenges and Future Directions. Ing Rech Biomed 2022; 43:486-510. [PMID: 34336141 PMCID: PMC8312058 DOI: 10.1016/j.irbm.2021.07.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 06/14/2021] [Accepted: 07/19/2021] [Indexed: 12/24/2022]
Abstract
Background and objective In recent years, Artificial Intelligence has had an evident impact on the way research addresses challenges in different domains. It has proven to be a huge asset, especially in the medical field, allowing for time-efficient and reliable solutions. This research aims to spotlight the impact of deep learning and machine learning models in the detection of COVID-19 from medical images. This is achieved by conducting a review of the state-of-the-art approaches proposed by the recent works in this field. Methods The main focus of this study is the recent developments of classification and segmentation approaches to image-based COVID-19 detection. The study reviews 140 research papers published in different academic research databases. These papers have been screened and filtered based on specified criteria, to acquire insights prudent to image-based COVID-19 detection. Results The methods discussed in this review include different types of imaging modality, predominantly X-rays and CT scans. These modalities are used for classification and segmentation tasks as well. This review seeks to categorize and discuss the different deep learning and machine learning architectures employed for these tasks, based on the imaging modality utilized. It also hints at other possible deep learning and machine learning architectures that can be proposed for better results towards COVID-19 detection. Along with that, a detailed overview of the emerging trends and breakthroughs in Artificial Intelligence-based COVID-19 detection has been discussed as well. Conclusion This work concludes by stipulating the technical and non-technical challenges faced by researchers and illustrates the advantages of image-based COVID-19 detection with Artificial Intelligence techniques.
Collapse
Affiliation(s)
- R Karthik
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
| | - R Menaka
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
| | - M Hariharan
- School of Computing Sciences and Engineering, Vellore Institute of Technology, Chennai, India
| | - G S Kathiresan
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
| |
Collapse
|
32
|
Solimando AG, Marziliano D, Ribatti D. SARS-CoV-2 and Endothelial Cells: Vascular Changes, Intussusceptive Microvascular Growth and Novel Therapeutic Windows. Biomedicines 2022; 10:2242. [PMID: 36140343 PMCID: PMC9496230 DOI: 10.3390/biomedicines10092242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 09/02/2022] [Accepted: 09/05/2022] [Indexed: 11/16/2022] Open
Abstract
Endothelial activation in infectious diseases plays a crucial role in understanding and predicting the outcomes and future treatments of several clinical conditions, and COVID-19 is no exception. Moving from basic principles to novel approaches, an evolving view of endothelial activation provides insight into the upstream actors in COVID-19 and points to a crucial future direction for managing SARS-CoV-2 and other infections. In assessing the function of resting and damaged endothelial cells in infection, particularly in COVID-19, five critical processes controlling thrombo-resistance emerged: vascular integrity, blood flow regulation, immune cell trafficking, angiogenesis, and intussusceptive microvascular growth. Endothelial cell injury is associated with thrombosis, increased vessel contraction, and a crucial phenomenon identified as intussusceptive microvascular growth, an unprecedented event of vessel splitting into two lumens through the integration of circulating pro-angiogenic cells. An essential awareness of endothelial cells and their phenotypic changes in COVID-19 inflammation is pivotal to understanding the vascular biology of infections and may offer crucial new therapeutic windows.
Collapse
Affiliation(s)
- Antonio Giovanni Solimando
- Guido Baccelli Unit of Internal Medicine, Department of Biomedical Sciences and Human Oncology, School of Medicine, Aldo Moro University of Bari, 70124 Bari, Italy
| | - Donatello Marziliano
- Guido Baccelli Unit of Internal Medicine, Department of Biomedical Sciences and Human Oncology, School of Medicine, Aldo Moro University of Bari, 70124 Bari, Italy
| | - Domenico Ribatti
- Department of Basic Medical Sciences, Neurosciences, and Sensory Organs, University of Bari Medical School, 70124 Bari, Italy
| |
Collapse
|
33
|
M Allayla N, Nazar Ibraheem F, Adnan Jaleel R. Enabling image optimisation and artificial intelligence technologies for better Internet of Things framework to predict COVID. IET Networks 2022. [PMCID: PMC9537994 DOI: 10.1049/ntw2.12052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Sensor technology advancements have provided a viable solution to fight COVID and to develop healthcare systems based on the Internet of Things (IoT). In this study, image processing and Artificial Intelligence (AI) are used to improve the IoT framework. Computed Tomography (CT) image-based forecasting of COVID is an important activity in medicine for assessing disease severity. The optimal gamma correction value for COVID CT images was found using the Whale Optimisation Algorithm (WOA); during the search for the optimal solution, WOA proved to be a highly efficient algorithm with high precision and fast convergence. The WOA is used to find the best gamma correction value so that a lung CT image presents detailed information. This study also analyses important AI techniques for COVID forecasting, namely the Support Vector Machine (SVM) and Deep Learning (DL), in terms of the amount of training data and computational power required. Many experiments were carried out to investigate the optimisation: SVM and DL, with and without WOA, are compared using confusion matrix parameters. From the results, we find that the DL model outperforms the SVM both with and without WOA.
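The gamma-correction step can be illustrated with the short sketch below, in which a plain grid search stands in for the Whale Optimisation Algorithm and image entropy is an assumed fitness measure; neither choice is taken from the paper.

```python
# Minimal sketch: apply a gamma curve to a normalised CT slice and pick the
# gamma that maximises image entropy (grid search stands in for WOA).
import numpy as np

def gamma_correct(img, gamma):
    return np.power(np.clip(img, 0.0, 1.0), gamma)

def entropy(img, bins=64):
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
slice_ = rng.beta(2.0, 5.0, size=(256, 256))      # toy normalised CT slice
candidates = np.linspace(0.2, 3.0, 60)            # search space for gamma
best_gamma = max(candidates, key=lambda g: entropy(gamma_correct(slice_, g)))
print("selected gamma:", round(float(best_gamma), 3))
```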
Collapse
Affiliation(s)
- Noor M Allayla
- Department of Computer Engineering, University of Mosul, Mosul, Iraq
| | | | - Refed Adnan Jaleel
- Department of Information and Communication Engineering, Al‐Nahrain University, Baghdad, Iraq
| |
Collapse
|
34
|
Development and validation of an abnormality-derived deep-learning diagnostic system for major respiratory diseases. NPJ Digit Med 2022; 5:124. [PMID: 35999467 PMCID: PMC9395860 DOI: 10.1038/s41746-022-00648-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Accepted: 07/04/2022] [Indexed: 01/05/2023] Open
Abstract
Respiratory diseases impose a tremendous global health burden on large patient populations. In this study, we aimed to develop DeepMRDTR, a deep learning-based medical image interpretation system for the diagnosis of major respiratory diseases based on the automated identification of a wide range of radiological abnormalities through computed tomography (CT) and chest X-ray (CXR) from real-world, large-scale datasets. DeepMRDTR comprises four networks (two CT-Nets and two CXR-Nets) that exploit contrastive learning to generate pre-training parameters that are fine-tuned on the retrospective dataset collected from a single institution. The performance of DeepMRDTR was evaluated for abnormality identification and disease diagnosis on data from two different institutions: one was an internal testing dataset from the same institution as the training data and the second was collected from an external institution to evaluate the model generalizability and robustness to an unrelated population dataset. In such a difficult multi-class diagnosis task, our system achieved the average area under the receiver operating characteristic curve (AUC) of 0.856 (95% confidence interval (CI):0.843–0.868) and 0.841 (95%CI:0.832–0.887) for abnormality identification, and 0.900 (95%CI:0.872–0.958) and 0.866 (95%CI:0.832–0.887) for major respiratory diseases’ diagnosis on CT and CXR datasets, respectively. Furthermore, to achieve a clinically actionable diagnosis, we deployed a preliminary version of DeepMRDTR into the clinical workflow, which was performed on par with senior experts in disease diagnosis, with an AUC of 0.890 and a Cohen’s k of 0.746–0.877 at a reasonable timescale; these findings demonstrate the potential to accelerate the medical workflow to facilitate early diagnosis as a triage tool for respiratory diseases which supports improved clinical diagnoses and decision-making.
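As an illustration of what "contrastive learning to generate pre-training parameters" can look like, the sketch below implements a generic SimCLR-style NT-Xent loss over two augmented views of the same scans; it is a textbook formulation, not the DeepMRDTR code.

```python
# Minimal sketch of an NT-Xent contrastive loss over two views of N images.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau: float = 0.1):
    """z1, z2: (N, d) embeddings of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, d)
    sim = z @ z.t() / tau                                   # cosine similarities
    sim.fill_diagonal_(float("-inf"))                       # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets.to(z.device))       # pull views together

loss = nt_xent(torch.randn(16, 128), torch.randn(16, 128))
print(float(loss))
```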
Collapse
|
35
|
Liang S, Ma J, Wang G, Shao J, Li J, Deng H, Wang C, Li W. The Application of Artificial Intelligence in the Diagnosis and Drug Resistance Prediction of Pulmonary Tuberculosis. Front Med (Lausanne) 2022; 9:935080. [PMID: 35966878 PMCID: PMC9366014 DOI: 10.3389/fmed.2022.935080] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Accepted: 06/13/2022] [Indexed: 11/30/2022] Open
Abstract
With the increasing incidence and mortality of pulmonary tuberculosis, and in addition to difficult and controversial disease management, conventional approaches to the diagnosis and differential diagnosis of tuberculosis remain time-consuming and resource-limited, especially in countries with a high tuberculosis burden and limited resources. In the meantime, the climbing proportion of drug-resistant tuberculosis poses a significant hazard to public health. Thus, auxiliary diagnostic tools with higher efficiency and accuracy are urgently required. Artificial intelligence (AI), which is not new but has recently grown in popularity, provides researchers with opportunities and technical underpinnings to develop novel, precise, rapid, and automated tools for pulmonary tuberculosis care, including but not limited to tuberculosis detection. In this review, we introduce representative AI methods, focusing on deep learning and radiomics, followed by descriptions of the state-of-the-art AI models developed using medical images and genetic data to detect pulmonary tuberculosis, distinguish the infection from other pulmonary diseases, and identify drug resistance, with the purpose of assisting physicians in deciding on the appropriate therapeutic schedule in the early stage of the disease. We also enumerate the challenges in maximizing the impact of AI in this field, such as the generalization and clinical utility of deep learning models.
Collapse
Affiliation(s)
- Shufan Liang
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
| | - Jiechao Ma
- AI Lab, Deepwise Healthcare, Beijing, China
| | - Gang Wang
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
| | - Jun Shao
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Jingwei Li
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Hui Deng
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Precision Medicine Key Laboratory of Sichuan Province, Precision Medicine Research Center, West China Hospital, Sichuan University, Chengdu, China
- *Correspondence: Hui Deng,
| | - Chengdi Wang
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Chengdi Wang,
| | - Weimin Li
- Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, Frontiers Science Center for Disease-Related Molecular Network, West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Weimin Li,
| |
Collapse
|
36
|
Tan P, Huang W, Wang L, Deng G, Yuan Y, Qiu S, Ni D, Du S, Cheng J. Deep learning predicts immune checkpoint inhibitor-related pneumonitis from pretreatment computed tomography images. Front Physiol 2022; 13:978222. [PMID: 35957985 PMCID: PMC9358138 DOI: 10.3389/fphys.2022.978222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Accepted: 07/05/2022] [Indexed: 11/13/2022] Open
Abstract
Immune checkpoint inhibitors (ICIs) have revolutionized the treatment of lung cancer, including both non-small cell lung cancer and small cell lung cancer. Despite the promising results of immunotherapies, ICI-related pneumonitis (ICIP) is a potentially fatal adverse event. Therefore, early detection of patients at risk for developing ICIP before the initiation of immunotherapy is critical for alleviating future complications with early interventions and improving treatment outcomes. In this study, we present the first reported work that explores the potential of deep learning to predict patients who are at risk for developing ICIP. To this end, we collected the pretreatment baseline CT images and clinical information of 24 patients who developed ICIP after immunotherapy and 24 control patients who did not. A multimodal deep learning model was constructed based on 3D CT images and clinical data. To enhance performance, we employed two-stage transfer learning by pre-training the model sequentially on a large natural image dataset and a large CT image dataset. Extensive experiments were conducted to verify the effectiveness of the key components used in our method. Using five-fold cross-validation, our method accurately distinguished ICIP patients from non-ICIP patients, with an area under the receiver operating characteristic curve of 0.918 and an accuracy of 0.920. This study demonstrates the promising potential of deep learning to identify patients at risk for developing ICIP. The proposed deep learning model enables efficient risk stratification, close monitoring, and prompt management of ICIP, ultimately leading to better treatment outcomes.
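The evaluation protocol described above (five-fold cross-validation with AUC and accuracy on a small, balanced cohort) can be illustrated with a minimal, self-contained sketch. The snippet below is not the authors' multimodal model; it uses synthetic stand-in features and a plain logistic-regression classifier purely to show the mechanics of the protocol.

```python
# Minimal sketch (not the authors' code): five-fold cross-validated AUC/accuracy
# for a binary ICIP-risk classifier built on synthetic stand-in features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
n_patients, n_features = 48, 32                  # 24 ICIP + 24 controls, as in the study
X = rng.normal(size=(n_patients, n_features))    # stand-in CT + clinical features
y = np.array([1] * 24 + [0] * 24)                # 1 = developed ICIP, 0 = did not

aucs, accs = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], prob))
    accs.append(accuracy_score(y[test_idx], (prob > 0.5).astype(int)))

print(f"mean AUC = {np.mean(aucs):.3f}, mean accuracy = {np.mean(accs):.3f}")
```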
Collapse
Affiliation(s)
- Peixin Tan
- Department of Radiation Oncology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Wei Huang
- Department of Radiation Oncology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
| | - Lingling Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Guanhua Deng
- Department of Oncology, Guangdong Sanjiu Brain Hospital, Guangzhou, China
| | - Ye Yuan
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Shili Qiu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Shasha Du
- Department of Radiation Oncology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- *Correspondence: Shasha Du; Jun Cheng
| | - Jun Cheng
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China
- Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- *Correspondence: Shasha Du; Jun Cheng
| |
Collapse
|
37
|
Han X, Yu Z, Zhuo Y, Zhao B, Ren Y, Lamm L, Xue X, Feng J, Marr C, Shan F, Peng T, Zhang XY. The value of longitudinal clinical data and paired CT scans in predicting the deterioration of COVID-19 revealed by an artificial intelligence system. iScience 2022; 25:104227. [PMID: 35434542 PMCID: PMC8989658 DOI: 10.1016/j.isci.2022.104227] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 03/10/2022] [Accepted: 04/05/2022] [Indexed: 01/09/2023] Open
Abstract
The respective value of clinical data and CT examinations in predicting COVID-19 progression is unclear, because the CT scans and clinical data previously used were not synchronized in time. To address this issue, we collected 119 COVID-19 patients with 341 longitudinal CT scans and paired clinical data, and we developed an AI system for the prediction of COVID-19 deterioration. By combining features extracted from CT and clinical data with our system, we can predict whether a patient will develop severe symptoms during hospitalization. Complementary to clinical data, CT examinations show significant add-on value for the prediction of COVID-19 progression in the early stage of COVID-19, especially on the 6th to 8th day after symptom onset, indicating that this is the ideal time window for the introduction of CT examinations. We release our AI system to provide clinicians with additional assistance to optimize CT usage in the clinical workflow. Highlights: 119 COVID-19 patients with 341 longitudinal CT scans and paired clinical data were included; a new AI model for the prediction of COVID-19 progression was developed; CT scans show significant add-on value over clinical data for the prediction; day 6-8 after the onset of COVID-19 symptoms is an ideal time window for a CT scan.
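To make the notion of "add-on value" concrete, the hedged sketch below compares a clinical-only model with a combined clinical-plus-CT-feature model under cross-validated AUC. It is not the authors' system; all features below are synthetic placeholders, and the fusion is a simple feature concatenation chosen for illustration.

```python
# Minimal sketch (not the paper's system): quantifying the add-on value of
# CT-derived features over clinical data alone for deterioration prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 119                                   # patients, as in the study
clinical = rng.normal(size=(n, 10))       # stand-in clinical variables
ct_feats = rng.normal(size=(n, 64))       # stand-in features from longitudinal CT
severe = rng.integers(0, 2, size=n)       # 1 = developed severe symptoms

def mean_auc(X, y):
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

auc_clinical = mean_auc(clinical, severe)
auc_combined = mean_auc(np.hstack([clinical, ct_feats]), severe)
print(f"clinical only: {auc_clinical:.3f}; clinical + CT: {auc_combined:.3f}")
```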
Collapse
Affiliation(s)
- Xiaoyang Han
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, Shanghai 200433, China
| | - Ziqi Yu
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, Shanghai 200433, China
| | - Yaoyao Zhuo
- Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
| | - Botao Zhao
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, Shanghai 200433, China
| | - Yan Ren
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai 200433, China
| | - Lorenz Lamm
- Institute of AI for Health, Helmholtz Zentrum München, Ingolstädter Landstraße 1, D-85764 Neuherberg, Germany
- Helmholtz AI, Helmholtz Zentrum München, Ingolstädter Landstraße 1, D-85764 Neuherberg, Germany
| | - Xiangyang Xue
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
| | - Jianfeng Feng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, Shanghai 200433, China
| | - Carsten Marr
- Institute of AI for Health, Helmholtz Zentrum München, Ingolstädter Landstraße 1, D-85764 Neuherberg, Germany
| | - Fei Shan
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201508, China
| | - Tingying Peng
- Helmholtz AI, Helmholtz Zentrum München, Ingolstädter Landstraße 1, D-85764 Neuherberg, Germany
| | - Xiao-Yong Zhang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Ministry of Education, Shanghai 200433, China
- MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
| |
Collapse
|
38
|
Hasan MK, Alam MA, Dahal L, Roy S, Wahid SR, Elahi MTE, Martí R, Khanal B. Challenges of deep learning methods for COVID-19 detection using public datasets. INFORMATICS IN MEDICINE UNLOCKED 2022; 30:100945. [PMID: 35434261 PMCID: PMC9005223 DOI: 10.1016/j.imu.2022.100945] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 04/03/2022] [Accepted: 04/04/2022] [Indexed: 02/07/2023] Open
Abstract
Since the COVID-19 pandemic began, several research studies have proposed Deep Learning (DL)-based automated COVID-19 detection, reporting high cross-validation accuracy when classifying COVID-19 patients against normal or other common pneumonia cases. Although the reported outcomes are very high in most cases, these results were obtained without an independent test set from a separate data source. DL models are likely to overfit the training data distribution when independent test sets are not utilized, and are prone to learning dataset-specific artifacts rather than the actual disease characteristics and underlying pathology. This study assesses the promise of such DL methods and datasets by examining the compositions of the available public image datasets and designing different experimental setups that expose the key challenges and issues. A convolutional neural network, called CVR-Net (COVID-19 Recognition Network), is proposed for conducting comprehensive experiments to validate our hypothesis. The end-to-end CVR-Net is a multi-scale, multi-encoder ensemble model that aggregates the outputs from two different encoders and their different scales to produce the final prediction probability. Three classification tasks (2-, 3-, and 4-class) are designed in which the train-test datasets come from single, multiple, and independent sources. The obtained binary classification accuracy is 99.8% for a single train-test data source, but the accuracy falls to 98.4% and 88.7% when multiple and independent train-test data sources are utilized, respectively. Similar outcomes are observed in the multi-class categorization tasks for single, multiple, and independent data sources, highlighting the difficulty of developing DL models with the existing public datasets when no independent test set from a separate dataset is used. These results point to the need for better-designed datasets before DL tools can be applied in actual clinical settings: a dataset should include an independent test set, provide a more balanced set of images for all prediction classes even for a single machine or hospital source, and be balanced across several hospitals and demographic groups. Our source code and model are publicly available for the research community for further improvements.
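The central caveat above, that a random split within a single source can look far more optimistic than evaluation on a held-out independent source, can be demonstrated mechanically. The sketch below is not CVR-Net; the features, labels, and source identifiers are synthetic and serve only to show the two evaluation protocols side by side.

```python
# Minimal sketch (not CVR-Net): evaluating a classifier with a random split
# versus leaving out an entire data source, the gap the study highlights.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, LeaveOneGroupOut
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
n, d = 600, 50
y = rng.integers(0, 2, size=n)                       # 1 = COVID-19, 0 = other pneumonia
source = rng.integers(0, 3, size=n)                  # which public dataset the image came from
X = rng.normal(size=(n, d)) + 0.3 * y[:, None]       # weak genuine class signal
X[:, 0] += source                                    # source-specific artifact (scanner/protocol)

# Random split: every source appears in both train and test.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
acc_random = accuracy_score(yte, RandomForestClassifier(random_state=0).fit(Xtr, ytr).predict(Xte))

# Independent-source evaluation: each source is held out entirely once.
acc_held_out = []
for tr, te in LeaveOneGroupOut().split(X, y, groups=source):
    clf = RandomForestClassifier(random_state=0).fit(X[tr], y[tr])
    acc_held_out.append(accuracy_score(y[te], clf.predict(X[te])))

print(f"random split accuracy: {acc_random:.2f}")
print(f"held-out source accuracy: {np.mean(acc_held_out):.2f}")
```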
Collapse
Affiliation(s)
- Md Kamrul Hasan
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
| | - Md Ashraful Alam
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
| | - Lavsen Dahal
- Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
| | - Shidhartho Roy
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
| | - Sifat Redwan Wahid
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
| | - Md Toufick E Elahi
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
| | - Robert Martí
- Computer Vision and Robotics Institute, University of Girona, Spain
| | - Bishesh Khanal
- Nepal Applied Mathematics and Informatics Institute for Research (NAAMII), Nepal
| |
Collapse
|
39
|
Schultze JL, Gabriel M, Nicotera P. Time for a voluntary crisis research service. Cell Death Differ 2022; 29:888-890. [PMID: 35314771 PMCID: PMC8935265 DOI: 10.1038/s41418-022-00968-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 01/27/2022] [Accepted: 01/28/2022] [Indexed: 11/20/2022] Open
Affiliation(s)
- Joachim L Schultze
- Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE) e.V., Bonn, Germany.
- PRECISE Platform for Single Cell Genomics and Epigenomics, DZNE and University of Bonn, Bonn, Germany.
| | - Markus Gabriel
- International Center for Philosophy, University of Bonn, Bonn, Germany
| | - Pierluigi Nicotera
- Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE) e.V., Bonn, Germany
| |
Collapse
|
40
|
Xu F, Lou K, Chen C, Chen Q, Wang D, Wu J, Zhu W, Tan W, Zhou Y, Liu Y, Wang B, Zhang X, Zhang Z, Zhang J, Sun M, Zhang G, Dai G, Hu H. An original deep learning model using limited data for COVID-19 discrimination: A multi-center study. Med Phys 2022; 49:3874-3885. [PMID: 35305027 PMCID: PMC9088453 DOI: 10.1002/mp.15549] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 12/24/2021] [Accepted: 02/07/2022] [Indexed: 11/06/2022] Open
Abstract
OBJECTIVES Artificial intelligence (AI) has proved to be a highly efficient tool for COVID-19 diagnosis, but the large data volumes and heavy annotation effort required for algorithm development, together with the poor generalizability of AI algorithms, to some extent limit the application of AI technology in clinical practice. The aim of this study is to develop an AI algorithm with high robustness using limited chest CT data for COVID-19 discrimination. METHODS A three-dimensional algorithm that combined multi-instance learning (MIL) with the long short-term memory (LSTM) architecture (3DMTM) was developed for differentiating COVID-19 from community-acquired pneumonia (CAP), with logistic regression (LR), k-nearest neighbor (KNN), support vector machine (SVM) and a three-dimensional convolutional neural network (3D CNN) used for comparison. In total, 515 patients with or without COVID-19 between December 2019 and March 2020 from five different hospitals were recruited and divided into a relatively large dataset (150 COVID-19 and 183 CAP cases) and a relatively small dataset (17 COVID-19 and 35 CAP cases) for either training or validation, with another independent dataset (37 COVID-19 and 93 CAP cases) for external testing. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, precision, accuracy, F1 score, and G-mean were utilized for performance evaluation. RESULTS In the external test cohort, the relatively large data-based 3DMTM-LD achieved an AUC of 0.956 (95% CI, 0.929-0.982) with 86.2% sensitivity and 98.0% specificity. 3DMTM-SD achieved an AUC of 0.937 (95% CI, 0.909-0.965), while the AUC of 3DCM-SD decreased dramatically to 0.714 (95% CI, 0.649-0.780) with training data reduction. KNN-MMSD, LR-MMSD, SVM-MMSD and 3DCM-MMSD benefited significantly from the inclusion of clinical information, while models trained with the relatively large dataset gained only slight performance improvement in COVID-19 discrimination. 3DMTM, trained with either CT or multi-modal data, presented comparably excellent performance in COVID-19 discrimination. CONCLUSIONS The 3DMTM algorithm presented excellent robustness for COVID-19 discrimination with limited CT data. 3DMTM based on CT data performed comparably in COVID-19 discrimination with that trained with multi-modal information. Clinical information could improve the performance of KNN, LR, SVM and 3DCM in COVID-19 discrimination, especially in scenarios with limited training data.
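The core architectural idea, treating a CT volume as a sequence of slice-level instances and aggregating their features with an LSTM to obtain one scan-level prediction, can be sketched in a few lines. The toy module below is not the paper's 3DMTM; slice feature extraction is replaced by random tensors and the layer sizes are arbitrary assumptions.

```python
# Toy sketch (not the paper's 3DMTM): aggregating per-slice CT features with an
# LSTM to produce one scan-level COVID-19 vs CAP prediction.
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, slice_feats):            # (batch, n_slices, feat_dim)
        _, (h_n, _) = self.lstm(slice_feats)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])              # scan-level logits

model = SliceSequenceClassifier()
scans = torch.randn(4, 40, 128)                # 4 scans, 40 slices each, 128-d slice features
logits = model(scans)
print(logits.shape)                            # torch.Size([4, 2])
```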
Collapse
Affiliation(s)
- Fangyi Xu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
| | - Kaihua Lou
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
| | - Chao Chen
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
| | - Qingqing Chen
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
| | - Dawei Wang
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd., 18F, Building E. Yuanyang International Center, Chaoyang District, Beijing, China
| | - Jiangfen Wu
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd., 18F, Building E. Yuanyang International Center, Chaoyang District, Beijing, China
| | - Wenchao Zhu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
| | - Weixiong Tan
- Institute of Advanced Research, Infervision Medical Technology Co., Ltd., 18F, Building E. Yuanyang International Center, Chaoyang District, Beijing, China
| | - Yong Zhou
- Department of Pulmonary and Critical Care Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
- China National Respiratory Regional Medical Center (East China), No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
| | - Yongjiu Liu
- Department of Radiology, JINGMEN NO.1 PEOPLE'S HOSPITAL, No.168, Xiangshan Road, Dongbao District, Jingmen, Hubei, China
| | - Bing Wang
- Department of Radiology, JINGMEN NO.1 PEOPLE'S HOSPITAL, No.168, Xiangshan Road, Dongbao District, Jingmen, Hubei, China
| | - Xiaoguo Zhang
- Department of respiratory medicine, Jinan Infectious Disease Hospital, Shandong University, No.22029, Jingshi Road, Shizhong District, Jinan, China
| | - Zhongfa Zhang
- Department of respiratory medicine, Jinan Infectious Disease Hospital, Shandong University, No.22029, Jingshi Road, Shizhong District, Jinan, China
| | - Jianjun Zhang
- Department of Radiology, Zhejiang Hospital, No.12, Lingyin Road, Xihu District, Hangzhou, China
| | - Mingxia Sun
- Department of Radiology, Zhejiang Hospital, No.12, Lingyin Road, Xihu District, Hangzhou, China
| | - Guohua Zhang
- Department of Radiology, TAIZHOU NO.1 PEOPLE'S HOSPITAL, No.218, Hengjie Road, Huangyan District, Taizhou, Zhejiang, China
| | - Guojiao Dai
- Department of Radiology, TAIZHOU NO.1 PEOPLE'S HOSPITAL, No.218, Hengjie Road, Huangyan District, Taizhou, Zhejiang, China
| | - Hongjie Hu
- Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No.3, Qingchun East Road, Shangcheng District, Hangzhou, Zhejiang, China
| |
Collapse
|
41
|
Li Z, Qiang W, Chen H, Pei M, Yu X, Wang L, Li Z, Xie W, Wu X, Jiang J, Wu G. Artificial intelligence to detect malignant eyelid tumors from photographic images. NPJ Digit Med 2022; 5:23. [PMID: 35236921 PMCID: PMC8891262 DOI: 10.1038/s41746-022-00571-3] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 02/04/2022] [Indexed: 11/23/2022] Open
Abstract
Malignant eyelid tumors can invade adjacent structures and pose a threat to vision and even life. Early identification of malignant eyelid tumors is crucial to avoiding substantial morbidity and mortality. However, differentiating malignant eyelid tumors from benign ones can be challenging for primary care physicians and even some ophthalmologists. Here, based on 1,417 photographic images from 851 patients across three hospitals, we developed an artificial intelligence system using a faster region-based convolutional neural network and deep learning classification networks to automatically locate eyelid tumors and then distinguish between malignant and benign eyelid tumors. The system performed well in both internal and external test sets (AUCs ranged from 0.899 to 0.955). The performance of the system is comparable to that of a senior ophthalmologist, indicating that this system has the potential to be used at the screening stage for promoting the early detection and treatment of malignant eyelid tumors.
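The two-stage design described above (lesion localization followed by benign-versus-malignant classification of the detected region) follows a generic detect-then-classify pattern that can be outlined schematically. In the sketch below, `detect_eyelid_tumors` and `malignancy_classifier` are hypothetical placeholders standing in for trained detection and classification networks; they are not the authors' models.

```python
# Schematic sketch of a generic two-stage detect-then-classify screening pipeline.
# Both stage functions are hypothetical placeholders, not the authors' models.
import numpy as np

def detect_eyelid_tumors(image):
    """Placeholder detector: return candidate boxes (x1, y1, x2, y2) with scores."""
    h, w = image.shape[:2]
    return [((w // 4, h // 4, 3 * w // 4, 3 * h // 4), 0.92)]

def malignancy_classifier(crop):
    """Placeholder classifier: return a probability that the lesion is malignant."""
    return float(np.clip(crop.mean() / 255.0, 0.0, 1.0))

def screen(image, det_threshold=0.5, mal_threshold=0.5):
    findings = []
    for (x1, y1, x2, y2), score in detect_eyelid_tumors(image):
        if score < det_threshold:
            continue                                  # discard low-confidence detections
        prob = malignancy_classifier(image[y1:y2, x1:x2])
        findings.append({"box": (x1, y1, x2, y2),
                         "p_malignant": prob,
                         "refer": prob >= mal_threshold})
    return findings

photo = np.random.default_rng(5).integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
print(screen(photo))
```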
Collapse
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China.
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
| | - Wei Qiang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Hongyun Chen
- Zunyi First People's Hospital, Zunyi Medical University, Zunyi, 563000, China
| | - Mengjie Pei
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
| | - Xiaomei Yu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Layi Wang
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Zhen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Weiwei Xie
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
| | - Xuefang Wu
- Guizhou Provincial People's Hospital, Guizhou University, Guizhou, 550002, China
| | - Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China.
| | - Guohai Wu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China.
| |
Collapse
|
42
|
Ye Q, Gao Y, Ding W, Niu Z, Wang C, Jiang Y, Wang M, Fang EF, Menpes-Smith W, Xia J, Yang G. Robust weakly supervised learning for COVID-19 recognition using multi-center CT images. Appl Soft Comput 2022; 116:108291. [PMID: 34934410 PMCID: PMC8667427 DOI: 10.1016/j.asoc.2021.108291] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 10/18/2021] [Accepted: 12/06/2021] [Indexed: 12/20/2022]
Abstract
The world is currently experiencing an ongoing pandemic of an infectious disease named coronavirus disease 2019 (COVID-19), which is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Computed Tomography (CT) plays an important role in assessing the severity of the infection and can also be used to identify symptomatic and asymptomatic COVID-19 carriers. With a surge in the cumulative number of COVID-19 patients, radiologists are increasingly stressed by having to examine CT scans manually. An automated 3D CT scan recognition tool is therefore in high demand, since manual analysis is time-consuming for radiologists and their fatigue can lead to misjudgment. However, because of the varying technical specifications of CT scanners located in different hospitals, the appearance of CT images can differ significantly, leading to the failure of many automated image recognition approaches. The multi-domain shift problem in multi-center, multi-scanner studies is therefore nontrivial: addressing it is crucial for dependable recognition and critical for reproducible and objective diagnosis and prognosis. In this paper, we propose a COVID-19 CT scan recognition model, the coronavirus information fusion and diagnosis network (CIFD-Net), that can efficiently handle the multi-domain shift problem via a new robust weakly supervised learning paradigm. Our model resolves the problem of differing appearance in CT scan images reliably and efficiently while attaining higher accuracy than other state-of-the-art methods.
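Weak supervision in this setting typically means that only scan-level labels are available while the model scores individual slices and pools them into a scan-level decision. The generic sketch below illustrates that pattern with an untrained toy CNN; it is not CIFD-Net, and the pooling rule (top-k averaging) is an assumption chosen purely for illustration.

```python
# Generic sketch (not CIFD-Net): a common weakly supervised pattern for CT,
# where only scan-level labels exist and per-slice scores are pooled into a
# scan-level prediction. The slice scorer is an untrained toy CNN.
import torch
import torch.nn as nn

class SliceScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, slices):                     # (n_slices, 1, H, W)
        return self.net(slices).squeeze(-1)        # one logit per slice

def scan_logit(slice_logits, k=3):
    # Pool the k highest slice logits; a scan is positive if a few slices are.
    topk = torch.topk(slice_logits, k=min(k, slice_logits.numel())).values
    return topk.mean()

scorer = SliceScorer()
scan = torch.randn(32, 1, 64, 64)                  # 32 slices of one CT scan
print(torch.sigmoid(scan_logit(scorer(scan))))     # scan-level COVID-19 probability
```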
Collapse
Affiliation(s)
- Qinghao Ye
- Hangzhou Ocean's Smart Boya Co., Ltd, China
- University of California, San Diego, La Jolla, CA, USA
| | - Yuan Gao
- Institute of Biomedical Engineering, University of Oxford, UK
- Aladdin Healthcare Technologies Ltd, UK
| | | | | | - Chengjia Wang
- BHF Center for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
| | - Yinghui Jiang
- Hangzhou Ocean's Smart Boya Co., Ltd, China
- Mind Rank Ltd, China
| | - Minhao Wang
- Hangzhou Ocean's Smart Boya Co., Ltd, China
- Mind Rank Ltd, China
| | - Evandro Fei Fang
- Department of Clinical Molecular Biology, University of Oslo, Norway
| | | | - Jun Xia
- Radiology Department, Shenzhen Second People's Hospital, Shenzhen, China
| | - Guang Yang
- Royal Brompton Hospital, London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
| |
Collapse
|
43
|
Jia H, Liu C, Li D, Huang Q, Liu D, Zhang Y, Ye C, Zhou D, Wang Y, Tan Y, Li K, Lin F, Zhang H, Lin J, Xu Y, Liu J, Zeng Q, Hong J, Chen G, Zhang H, Zheng L, Deng X, Ke C, Gao Y, Fan J, Di B, Liang H. Metabolomic analyses reveal new stage-specific features of COVID-19. Eur Respir J 2022; 59:2100284. [PMID: 34289974 PMCID: PMC8311281 DOI: 10.1183/13993003.00284-2021] [Citation(s) in RCA: 70] [Impact Index Per Article: 23.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Accepted: 06/28/2021] [Indexed: 01/10/2023]
Abstract
The current pandemic of coronavirus disease 2019 (COVID-19) has affected >160 million individuals to date and has caused millions of deaths worldwide, at least in part because the pathophysiology of this disease remains incompletely understood. Identifying the underlying molecular mechanisms of COVID-19 is critical to overcoming this pandemic. Metabolites mirror the disease progression of an individual and can provide extensive insight into its pathophysiological significance at each stage of disease. We provide a comprehensive metabolic characterisation of sera from COVID-19 patients at all stages using untargeted and targeted metabolomic analysis. Compared with healthy controls, we observed different alteration patterns of circulating metabolites across the mild, severe and recovery stages, in both the discovery cohort and the validation cohort, which suggests that metabolic reprogramming of glucose metabolism and the urea cycle are potential pathological mechanisms for COVID-19 progression. Our findings suggest that targeting glucose metabolism and the urea cycle may be a viable approach to fight COVID-19 at various stages along the disease course.
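Stage-wise comparisons of circulating metabolite levels against healthy controls are commonly run as per-metabolite nonparametric tests with multiple-testing correction. The sketch below illustrates that generic workflow on simulated intensities; it is not the authors' untargeted/targeted pipeline, and the elevated-metabolite fraction is a made-up example.

```python
# Illustrative sketch (not the authors' pipeline): per-metabolite comparison of
# one disease stage against healthy controls, Mann-Whitney U test + BH correction.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_metabolites, n_ctrl, n_severe = 200, 30, 30
controls = rng.lognormal(mean=0.0, sigma=0.5, size=(n_ctrl, n_metabolites))
severe = rng.lognormal(mean=0.0, sigma=0.5, size=(n_severe, n_metabolites))
severe[:, :20] *= 1.8        # pretend the first 20 metabolites are elevated in severe disease

pvals = np.array([
    mannwhitneyu(controls[:, j], severe[:, j], alternative="two-sided").pvalue
    for j in range(n_metabolites)
])
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} metabolites significant at FDR < 0.05")
```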
Collapse
Affiliation(s)
- Hongling Jia
- Guangzhou Center for Disease Control and Prevention, Guangzhou, China
- Dept of Medical Biochemistry and Molecular Biology, School of Medicine, Jinan University, Guangzhou, China
- These authors contributed equally to this study
| | - Chaowu Liu
- Guangdong Institute of Microbiology, Guangdong Academy of Sciences, State Key Laboratory of Applied Microbiology Southern China, Guangzhou, China
- These authors contributed equally to this study
| | - Dantong Li
- Clinical Data Center, Guangdong Provincial People's Hospital/Guangdong Academy of Medical Sciences, Guangzhou, China
- These authors contributed equally to this study
| | - Qingsheng Huang
- Clinical Data Center, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- These authors contributed equally to this study
| | - Dong Liu
- Big Data and Machine Learning Laboratory, Chongqing University of Technology, Chongqing, China
- These authors contributed equally to this study
| | - Ying Zhang
- Guangzhou Center for Disease Control and Prevention, Guangzhou, China
- These authors contributed equally to this study
| | - Chang Ye
- Clinical Data Center, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
| | - Di Zhou
- Metabo-Profile Biotechnology (Shanghai) Co. Ltd, Shanghai, China
| | - Yang Wang
- Metabo-Profile Biotechnology (Shanghai) Co. Ltd, Shanghai, China
| | - Yanlian Tan
- Dept of Medical Biochemistry and Molecular Biology, School of Medicine, Jinan University, Guangzhou, China
| | - Kuibiao Li
- Guangzhou Center for Disease Control and Prevention, Guangzhou, China
| | - Fangqin Lin
- Clinical Data Center, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
| | - Haiqing Zhang
- Dept of Occupational and Environmental Health, School of Public Health, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Jingchao Lin
- Metabo-Profile Biotechnology (Shanghai) Co. Ltd, Shanghai, China
| | - Yang Xu
- Guangzhou Center for Disease Control and Prevention, Guangzhou, China
| | - Jingwen Liu
- Guangzhou Center for Disease Control and Prevention, Guangzhou, China
| | - Qing Zeng
- Guangzhou Center for Disease Control and Prevention, Guangzhou, China
| | - Jian Hong
- Dept of Pathophysiology, School of Medicine, Jinan University, Guangzhou, China
| | - Guobing Chen
- Institute of Geriatric Immunology, Dept of Microbiology and Immunology, School of Medicine, Dept of Neurology, Affiliated Huaqiao Hospital, Jinan University, Guangzhou, China
| | - Hao Zhang
- Institute of Precision Cancer Medicine and Pathology, School of Medicine, Jinan University, Guangzhou, China
| | - Lingling Zheng
- Clinical Data Center, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
| | - Xilong Deng
- Institute of Infectious Diseases, Guangzhou Eighth People's Hospital, Guangzhou, China
| | - Changwen Ke
- Guangdong Provincial Center for Disease Control and Prevention, Guangzhou, China
| | - Yunfei Gao
- Zhuhai Precision Medical Center, Zhuhai People's Hospital (Zhuhai Hospital Affiliated with Jinan University), Jinan University, Zhuhai, China
- The Biomedical Translational Research Institute, Jinan University Faculty of Medical Science, Jinan University, Guangzhou, China
- Yunfei Gao, Jun Fan, Biao Di and Huiying Liang are joint lead authors
| | - Jun Fan
- Dept of Medical Biochemistry and Molecular Biology, School of Medicine, Jinan University, Guangzhou, China
- Yunfei Gao, Jun Fan, Biao Di and Huiying Liang are joint lead authors
| | - Biao Di
- Guangzhou Center for Disease Control and Prevention, Guangzhou, China
- Yunfei Gao, Jun Fan, Biao Di and Huiying Liang are joint lead authors
| | - Huiying Liang
- Clinical Data Center, Guangdong Provincial People's Hospital/Guangdong Academy of Medical Sciences, Guangzhou, China
- Yunfei Gao, Jun Fan, Biao Di and Huiying Liang are joint lead authors
| |
Collapse
|
44
|
Xu Z, Su C, Xiao Y, Wang F. Artificial intelligence for COVID-19: battling the pandemic with computational intelligence. INTELLIGENT MEDICINE 2022; 2:13-29. [PMID: 34697578 PMCID: PMC8529224 DOI: 10.1016/j.imed.2021.09.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Revised: 09/15/2021] [Accepted: 09/29/2021] [Indexed: 12/15/2022]
Abstract
The new coronavirus disease 2019 (COVID-19) has become a global pandemic, leading to over 180 million confirmed cases and nearly 4 million deaths as of June 2021, according to the World Health Organization. Since the initial report in December 2019, COVID-19 has demonstrated a high transmission rate (with an R0 > 2), a diverse set of clinical characteristics (e.g., high hospital and intensive care unit admission rates, and multi-organ dysfunction in critically ill patients due to hyperinflammation, thrombosis, etc.), and a tremendous burden on health care systems around the world. To understand this serious and complex disease and develop effective control, treatment, and prevention strategies, researchers from different disciplines have been making significant efforts in areas including epidemiology and public health, biology and genomic medicine, as well as clinical care and patient management. In recent years, artificial intelligence (AI) has been introduced into the healthcare field to aid clinical decision-making for disease diagnosis and treatment, such as detecting cancer from medical images, and has achieved superior performance in multiple data-rich application scenarios. In the COVID-19 pandemic, AI techniques have likewise been used as a powerful tool to address this complex disease. In this context, the goal of this study is to review existing studies on applications of AI techniques in combating the COVID-19 pandemic. Specifically, these efforts are grouped and summarized across the fields of epidemiology, therapeutics, clinical research, and social and behavioral studies. Potential challenges, directions, and open questions are discussed accordingly, which may provide new insights into addressing the COVID-19 pandemic and help researchers explore more related topics in the post-pandemic era.
Collapse
Affiliation(s)
- Zhenxing Xu
- Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, New York 10065, United States
| | - Chang Su
- Department of Health Service Administration and Policy, Temple University, Philadelphia 19122, United States
| | - Yunyu Xiao
- Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, New York 10065, United States
| | - Fei Wang
- Department of Population Health Sciences, Weill Cornell Medicine, Cornell University, New York 10065, United States
| |
Collapse
|
45
|
Ter-Sarkisov A. One Shot Model For The Prediction of COVID-19 And Lesions Segmentation In Chest CT Scans Through The Affinity Among Lesion Mask Features. Appl Soft Comput 2022; 116:108261. [PMID: 34924896 PMCID: PMC8668605 DOI: 10.1016/j.asoc.2021.108261] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 11/09/2021] [Accepted: 11/27/2021] [Indexed: 01/15/2023]
Abstract
We present a novel framework that integrates segmentation of lesion masks and prediction of COVID-19 in chest CT scans in one shot. In order to classify the whole input image, we introduce a type of association among lesion mask features extracted from the scan slice that we refer to as affinities. First, we map mask features into the affinity space by training an affinity matrix. Next, we map them back into the feature space through a trainable affinity vector. Finally, this feature representation is used for the classification of the whole input scan slice. We achieve a 93.55% COVID-19 sensitivity, 96.93% common pneumonia sensitivity, 99.37% true negative rate and 97.37% F1-score on the test split of the CNCB-NCOV dataset with 21,192 chest CT scan slices. We also achieve a 0.4240 mean average precision on the lesion segmentation task. All source code, models and results are publicly available on https://github.com/AlexTS1980/COVID-Affinity-Model.
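The affinity mechanism is only described at a high level above, so the sketch below should be read as one plausible interpretation rather than the author's implementation (the linked repository is authoritative): per-lesion mask features are projected by a learned matrix, weighted via a learned vector, and pooled into a single slice-level representation for classification. All dimensions are arbitrary assumptions.

```python
# Speculative sketch of an affinity-style pooling of per-lesion mask features
# into one slice-level representation (not the published implementation).
import torch
import torch.nn as nn

class AffinityPooling(nn.Module):
    def __init__(self, feat_dim=256, affinity_dim=32, n_classes=3):
        super().__init__()
        self.affinity_matrix = nn.Linear(feat_dim, affinity_dim, bias=False)
        self.affinity_vector = nn.Parameter(torch.randn(affinity_dim))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, mask_feats):                       # (n_lesions, feat_dim)
        affinities = self.affinity_matrix(mask_feats)    # (n_lesions, affinity_dim)
        weights = torch.softmax(affinities @ self.affinity_vector, dim=0)  # (n_lesions,)
        pooled = (weights.unsqueeze(-1) * mask_feats).sum(dim=0)           # (feat_dim,)
        return self.classifier(pooled)                   # slice-level logits

model = AffinityPooling()
lesion_features = torch.randn(12, 256)                   # 12 lesion mask features from one slice
print(model(lesion_features).shape)                      # torch.Size([3])
```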
Collapse
Affiliation(s)
- Aram Ter-Sarkisov
- CitAI Research Center, Department of Computer Science, City University of London, United Kingdom
| |
Collapse
|
46
|
Ter-Sarkisov A. COVID-CT-Mask-Net: prediction of COVID-19 from CT scans using regional features. APPL INTELL 2022; 52:9664-9675. [PMID: 35035092 PMCID: PMC8741555 DOI: 10.1007/s10489-021-02731-6] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/28/2021] [Indexed: 12/20/2022]
Abstract
We present the COVID-CT-Mask-Net model that predicts COVID-19 in chest CT scans. The model works in two stages: in the first stage, Mask R-CNN is trained to localize and detect two types of lesions in images. In the second stage, these detections are fused to classify the whole input image. To develop the solution for the three-class problem (COVID-19, Common Pneumonia and Control), we used the COVIDx-CT data split derived from the dataset of chest CT scans collected by the China National Center for Bioinformation. We use 3000 images (about 5% of the train split of COVIDx-CT) to train the model. Without any complicated data normalization, balancing and regularization, and training only a small fraction of the model's parameters, we achieve a 90.80% COVID-19 sensitivity, 91.62% Common Pneumonia sensitivity and 92.10% true negative rate (Control sensitivity), an overall accuracy of 91.66% and an F1-score of 91.50% on the test data split with 21,192 images, bringing the ratio of test to train data to 7.06. We also establish an important result that regional predictions (bounding boxes with confidence scores) detected by Mask R-CNN can be used to classify whole images. The full source code, models and pretrained weights are available on https://github.com/AlexTS1980/COVID-CT-Mask-Net.
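The second-stage idea, converting a variable number of ranked box confidence scores into a fixed-length vector that an image-level classifier consumes, can be illustrated without Mask R-CNN at all. The sketch below simulates detections and uses a plain logistic-regression classifier; it is not the released COVID-CT-Mask-Net code, and the fixed vector length is an assumption.

```python
# Minimal sketch (not COVID-CT-Mask-Net): turning a variable number of detected
# boxes with confidence scores into a fixed-length feature vector for a simple
# whole-image classifier. Detections are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def detections_to_feature(scores, top_n=16):
    """Sort confidence scores, keep the top_n, pad with zeros if fewer."""
    scores = np.sort(scores)[::-1][:top_n]
    return np.pad(scores, (0, top_n - len(scores)))

# Simulate per-image detections: each image yields 0-20 boxes with scores.
n_images = 200
features = np.stack([
    detections_to_feature(rng.random(rng.integers(0, 21))) for _ in range(n_images)
])
labels = rng.integers(0, 3, size=n_images)        # COVID-19 / common pneumonia / control

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features[:5]))
```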
Collapse
Affiliation(s)
- Aram Ter-Sarkisov
- CitAI Research Center, Department of Computer Science, City, University of London, Northampton Square, London, UK
| |
Collapse
|
47
|
Abstract
Pneumonia of unknown origin emerged in December 2019 in the city of Wuhan, China. The World Health Organization (WHO) named this condition coronavirus disease 2019 (COVID-19) on February 11, 2020, and declared it a pandemic on March 11, 2020. The virus that causes COVID-19 was found to have a genome similar (about 80% similarity) to that of the previously known severe acute respiratory syndrome coronavirus (SARS-CoV). The novel virus was therefore named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). SARS-CoV-2 belongs to the family Coronaviridae, which lies within the order Nidovirales and contains the subfamily Orthocoronavirinae. The Orthocoronavirinae subfamily comprises four genera of coronaviruses: alphacoronavirus (α-CoV), betacoronavirus (β-CoV), gammacoronavirus (γ-CoV) and deltacoronavirus (δ-CoV). The α-CoVs and β-CoVs mainly infect mammals, whereas γ-CoVs and δ-CoVs are generally found in birds. The β-CoVs include SARS-CoV, the Middle East respiratory syndrome coronavirus (MERS-CoV) identified in the Middle East, and SARS-CoV-2, the cause of the current pandemic. These viruses initially cause pneumonia in patients, which can progress to acute respiratory distress syndrome (ARDS) and other related complications that can be fatal.
Collapse
|
48
|
Shiri I, Arabi H, Salimi Y, Sanaat A, Akhavanallaf A, Hajianfar G, Askari D, Moradi S, Mansouri Z, Pakbin M, Sandoughdaran S, Abdollahi H, Radmard AR, Rezaei‐Kalantari K, Ghelich Oghli M, Zaidi H. COLI-Net: Deep learning-assisted fully automated COVID-19 lung and infection pneumonia lesion detection and segmentation from chest computed tomography images. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2022; 32:12-25. [PMID: 34898850 PMCID: PMC8652855 DOI: 10.1002/ima.22672] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Revised: 09/18/2021] [Accepted: 10/17/2021] [Indexed: 05/17/2023]
Abstract
We present a deep learning (DL)-based automated whole-lung and COVID-19 pneumonia infectious lesion (COLI-Net) detection and segmentation method for chest computed tomography (CT) images. This multicenter/multiscanner study involved 2,368 volumetric CT exams (347,259 2D slices) and 190 exams (17,341 2D slices) along with their corresponding manual segmentations of lungs and lesions, respectively. All images were cropped and resized, and the intensity values were clipped and normalized. A residual network with a non-square Dice loss function built upon TensorFlow was employed. The accuracy of lung and COVID-19 lesion segmentation was evaluated on an external reverse transcription-polymerase chain reaction positive COVID-19 dataset (7,333 2D slices) collected at five different centers. To evaluate the segmentation performance, we calculated different quantitative metrics, including radiomic features. The mean Dice coefficients were 0.98 ± 0.011 (95% CI, 0.98-0.99) and 0.91 ± 0.038 (95% CI, 0.90-0.91) for lung and lesion segmentation, respectively. The mean relative Hounsfield unit differences were 0.03 ± 0.84% (95% CI, -0.12 to 0.18) and -0.18 ± 3.4% (95% CI, -0.8 to 0.44) for the lungs and lesions, respectively. The relative volume differences for lungs and lesions were 0.38 ± 1.2% (95% CI, 0.16-0.59) and 0.81 ± 6.6% (95% CI, -0.39 to 2), respectively. Most radiomic features had a mean relative error of less than 5%, with the largest mean relative errors observed for the range first-order feature of the lung (-6.95%) and the least axis length shape feature of the lesions (8.68%). We developed an automated DL-guided three-dimensional whole-lung and infected-region segmentation framework for COVID-19 patients to provide fast, consistent, robust, and human-error-immune lung and pneumonia lesion detection and quantification.
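The headline metrics above, the Dice coefficient and the relative volume difference between a predicted mask and its reference, are straightforward to compute. The sketch below evaluates them on a toy 3D mask pair and is independent of COLI-Net.

```python
# Minimal sketch (independent of COLI-Net): Dice coefficient and relative
# volume difference for a toy 3D mask pair.
import numpy as np

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def relative_volume_difference(pred, gt):
    return (pred.sum() - gt.sum()) / gt.sum() * 100.0   # in percent

gt = np.zeros((64, 64, 64), dtype=bool)
gt[16:48, 16:48, 16:48] = True                          # ground-truth lesion
pred = np.zeros_like(gt)
pred[18:48, 16:48, 16:48] = True                        # slightly under-segmented prediction

print(f"Dice = {dice(pred, gt):.3f}")
print(f"Relative volume difference = {relative_volume_difference(pred, gt):+.1f}%")
```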
Collapse
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Ghasem Hajianfar
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
| | - Dariush Askari
- Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Shakiba Moradi
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
| | - Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Masoumeh Pakbin
- Clinical Research Development Center, Qom University of Medical Sciences, Qom, Iran
| | - Saleh Sandoughdaran
- Men's Health and Reproductive Health Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Hamid Abdollahi
- Department of Radiologic Technology, Faculty of Allied Medicine, Kerman University of Medical Sciences, Kerman, Iran
| | - Amir Reza Radmard
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
| | - Kiara Rezaei‐Kalantari
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
| | - Mostafa Ghelich Oghli
- Research and Development Department, Med Fanavaran Plus Co., Karaj, Iran
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| |
Collapse
|
49
|
Yang G, Ye Q, Xia J. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. AN INTERNATIONAL JOURNAL ON INFORMATION FUSION 2022; 77:29-52. [PMID: 34980946 PMCID: PMC8459787 DOI: 10.1016/j.inffus.2021.07.016] [Citation(s) in RCA: 195] [Impact Index Per Article: 65.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/25/2021] [Accepted: 07/25/2021] [Indexed: 05/04/2023]
Abstract
Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning aimed at unboxing how AI systems' black-box choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been made; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. XAI is becoming increasingly crucial for deep learning-powered applications, especially for medical and healthcare studies, even though such deep neural networks can generally deliver impressive performance. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons why successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first survey the current progress of XAI and, in particular, its advances in healthcare applications. We then introduce our solutions for XAI that leverage multi-modal and multi-centre data fusion, and subsequently validate them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of our proposed XAI solutions, from which we can envisage successful applications in a broader range of clinical questions.
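As a concrete, if very basic, example of the kind of post-hoc explanation XAI methods produce, the sketch below computes an input-gradient saliency map for an untrained toy classifier. It is not the paper's multi-modal fusion framework; it is included only to ground the idea of attributing a prediction back to input pixels.

```python
# Toy sketch of one basic XAI technique (input-gradient saliency): highlight
# which input pixels most influence the predicted class of a toy classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model.eval()

image = torch.randn(1, 1, 128, 128, requires_grad=True)   # stand-in chest image
logits = model(image)
logits[0, logits.argmax()].backward()                      # gradient of the top class
saliency = image.grad.abs().squeeze()                      # (128, 128) importance map
print(saliency.shape, float(saliency.max()))
```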
Collapse
Affiliation(s)
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, UK
- Royal Brompton Hospital, London, UK
- Imperial Institute of Advanced Technology, Hangzhou, China
| | - Qinghao Ye
- Hangzhou Ocean’s Smart Boya Co., Ltd, China
- University of California, San Diego, La Jolla, CA, USA
| | - Jun Xia
- Radiology Department, Shenzhen Second People’s Hospital, Shenzhen, China
| |
Collapse
|
50
|
Balaha HM, El-Gendy EM, Saafan MM. CovH2SD: A COVID-19 detection approach based on Harris Hawks Optimization and stacked deep learning. EXPERT SYSTEMS WITH APPLICATIONS 2021; 186:115805. [PMID: 34511738 PMCID: PMC8418701 DOI: 10.1016/j.eswa.2021.115805] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/26/2021] [Revised: 08/13/2021] [Accepted: 08/23/2021] [Indexed: 05/14/2023]
Abstract
Starting from Wuhan in China at the end of 2019, coronavirus disease (COVID-19) propagated rapidly all over the world, affecting the lives of billions of people and increasing worldwide mortality within a few months. The cornerstone of containing the invasive spread of COVID-19 is identifying and isolating infected patients, so fast diagnosis of COVID-19 is a critical issue. The common laboratory test for confirming COVID-19 infection is Reverse Transcription Polymerase Chain Reaction (RT-PCR). However, these tests suffer from limitations in turnaround time, accuracy, and availability. Chest images have proven to be a powerful tool in the early detection of COVID-19. In the current study, a hybrid learning and optimization approach named CovH2SD is proposed for COVID-19 detection from chest Computed Tomography (CT) images. CovH2SD uses deep learning and pre-trained models to extract features from the CT images and learn from them. It uses the Harris Hawks Optimization (HHO) algorithm to optimize the hyperparameters. Transfer learning is applied using nine pre-trained convolutional neural networks (i.e., ResNet50, ResNet101, VGG16, VGG19, Xception, MobileNetV1, MobileNetV2, DenseNet121, and DenseNet169). A Fast Classification Stage (FCS) and a Compact Stacking Stage (CSS) are suggested to stack the best models into a single one. Nine experiments are conducted and results are reported in terms of Loss, Accuracy, Precision, Recall, F1-Score, and Area Under the Curve (AUC). Combinations are compared using the Weighted Sum Method (WSM). Six experiments report a WSM value above 96.5%. The top reported WSM and accuracy values are 99.31% and 99.33%, respectively, which are higher than those of the eleven compared state-of-the-art studies.
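The Weighted Sum Method mentioned above is a simple multi-criteria ranking: each candidate's metric scores are multiplied by importance weights and summed. The sketch below shows the arithmetic with made-up scores and weights; neither the numbers nor the weighting reflect the paper's actual configuration.

```python
# Minimal sketch (independent of CovH2SD): ranking model combinations with a
# Weighted Sum Method over several performance metrics. Scores/weights are made up.
import numpy as np

metrics = ["accuracy", "precision", "recall", "f1", "auc"]
weights = np.array([0.3, 0.2, 0.2, 0.2, 0.1])        # assumed importance weights, summing to 1

# Rows: candidate combinations (e.g. backbone + hyperparameter setting).
candidates = {
    "ResNet50 + HHO":    np.array([0.993, 0.991, 0.992, 0.991, 0.998]),
    "VGG16 + HHO":       np.array([0.981, 0.979, 0.980, 0.979, 0.995]),
    "DenseNet121 + HHO": np.array([0.988, 0.986, 0.987, 0.986, 0.997]),
}

wsm = {name: float(scores @ weights) for name, scores in candidates.items()}
for name, score in sorted(wsm.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:20s} WSM = {score:.4f}")
```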
Collapse
Affiliation(s)
- Hossam Magdy Balaha
- Computers Engineering and Systems Department, Faculty of Engineering, Mansoura University, Egypt
| | - Eman M El-Gendy
- Computers Engineering and Systems Department, Faculty of Engineering, Mansoura University, Egypt
| | - Mahmoud M Saafan
- Computers Engineering and Systems Department, Faculty of Engineering, Mansoura University, Egypt
| |
Collapse
|