1
Gu Y, Zheng S, Zhang B, Kang H, Jiang R, Li J. Deep multiple instance learning on heterogeneous graph for drug-disease association prediction. Comput Biol Med 2025; 184:109403. [PMID: 39577348] [DOI: 10.1016/j.compbiomed.2024.109403]
Abstract
Drug repositioning offers promising prospects for accelerating drug discovery by identifying potential drug-disease associations (DDAs) for existing drugs and diseases. Previous methods have generated meta-path-augmented node or graph embeddings for DDA prediction in drug-disease heterogeneous networks. However, these approaches rarely develop end-to-end frameworks for path instance-level representation learning and the subsequent feature selection and aggregation. By leveraging the abundant topological information in path instances, more fine-grained and interpretable predictions can be achieved. To this end, we introduce deep multiple instance learning into drug repositioning by proposing a novel method called MilGNet. MilGNet employs a heterogeneous graph neural network (HGNN)-based encoder to learn drug and disease node embeddings. Treating each drug-disease pair as a bag, we design a special quadruplet meta-path form and implement a pseudo meta-path generator in MilGNet to obtain multiple meta-path instances based on network topology. Additionally, a bidirectional instance encoder enhances the representation of meta-path instances. Finally, MilGNet utilizes a multi-scale interpretable predictor to aggregate bag embeddings with an attention mechanism, providing predictions at both the bag and instance levels for accurate and explainable results. Comprehensive experiments on five benchmarks demonstrate that MilGNet significantly outperforms ten advanced methods. Notably, three case studies on one drug (Methotrexate) and two diseases (Renal Failure and Mismatch Repair Cancer Syndrome) highlight MilGNet's potential for discovering new indications and therapies, and for generating rational meta-path instances to investigate possible treatment mechanisms. The source code is available at https://github.com/gu-yaowen/MilGNet.
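The bag-level aggregation described here is an instance of attention-based multiple instance learning pooling. As a rough sketch of that general pattern (not MilGNet's actual implementation, which is available at the linked repository), the following PyTorch snippet scores each meta-path instance embedding in a bag and pools them into a bag-level prediction; the embedding sizes and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-weighted pooling of instance embeddings into a bag embedding,
    in the style of Ilse et al. (2018). A sketch, not MilGNet's actual code."""

    def __init__(self, embed_dim: int = 128, attn_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(            # per-instance attention score
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, 1)  # bag-level DDA logit

    def forward(self, instances: torch.Tensor):
        # instances: (num_instances, embed_dim), e.g. meta-path instance embeddings
        weights = torch.softmax(self.score(instances), dim=0)  # (N, 1)
        bag = (weights * instances).sum(dim=0)                 # (embed_dim,)
        return self.classifier(bag), weights

# Usage: a bag of 16 hypothetical meta-path instance embeddings for one drug-disease pair
logit, weights = AttentionMILPooling()(torch.randn(16, 128))
print(torch.sigmoid(logit).item())  # predicted association probability
```

The attention weights double as instance-level explanations: high-weight meta-path instances are the ones driving the bag-level prediction.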
Affiliation(s)
- Yaowen Gu
- Institute of Medical Information, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS&PUMC), Beijing, 100020, China; Department of Chemistry, New York University, NY, 10027, USA.
- Si Zheng
- Institute of Medical Information, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS&PUMC), Beijing, 100020, China; Institute for Artificial Intelligence, Department of Computer Science and Technology, BNRist, Tsinghua University, Beijing, 100084, China
- Bowen Zhang
- Beijing StoneWise Technology Co Ltd., Beijing, 100080, China
- Hongyu Kang
- Institute of Medical Information, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS&PUMC), Beijing, 100020, China
- Rui Jiang
- Ministry of Education Key Laboratory of Bioinformatics, Bioinformatics Division at the Beijing National Research Center for Information Science and Technology, Center for Synthetic and Systems Biology, Department of Automation, Tsinghua University, Beijing, 100084, China
- Jiao Li
- Institute of Medical Information, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS&PUMC), Beijing, 100020, China.
2
Sheikh BUH, Zafar A. Removing Adversarial Noise in X-ray Images via Total Variation Minimization and Patch-Based Regularization for Robust Deep Learning-based Diagnosis. J Imaging Inform Med 2024; 37:3282-3303. [PMID: 38886292] [PMCID: PMC11639383] [DOI: 10.1007/s10278-023-00919-5]
Abstract
Deep learning has significantly advanced radiology-based disease diagnosis, offering enhanced accuracy and efficiency in detecting various medical conditions through the analysis of complex medical images such as X-rays. Its ability to discern subtle patterns and anomalies has proven invaluable for swift and accurate disease identification. The relevance of deep learning in radiology was particularly highlighted during the COVID-19 pandemic, where rapid and accurate diagnosis is crucial for effective treatment and containment. However, recent research has uncovered vulnerabilities in deep learning models when exposed to adversarial attacks, leading to incorrect predictions. In response to this critical challenge, we introduce a novel approach that leverages total variation minimization to effectively combat adversarial noise within X-ray images. We focus on COVID-19 diagnosis as a case study, first constructing a classification model through transfer learning designed to accurately classify lung X-ray images into no-pneumonia, COVID-19 pneumonia, and non-COVID pneumonia cases. We then extensively evaluate the model's susceptibility to targeted and untargeted adversarial attacks using the fast gradient sign method (FGSM). Our findings reveal a substantial reduction in the model's performance, with the average accuracy plummeting from 95.56% to 19.83% under adversarial conditions. However, the experimental results demonstrate the efficacy of the proposed denoising approach in enhancing the performance of diagnosis models when applied to adversarial examples. Post-denoising, the model exhibits a remarkable accuracy improvement, surging from 19.83% to 88.23% on adversarial images. These promising outcomes underscore the potential of denoising techniques to fortify the resilience and reliability of AI-based COVID-19 diagnostic systems, laying the foundation for their successful deployment in clinical settings.
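The two building blocks the abstract names, an FGSM perturbation and total variation minimization, can be sketched compactly. The snippet below is a minimal illustration under assumed hyperparameters (epsilon, TV weight, step count), not the paper's full pipeline, which also adds patch-based regularization.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Untargeted FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def tv_denoise(x, weight=0.1, steps=50, lr=0.1):
    """Minimize ||u - x||^2 + weight * TV(u) by gradient descent (anisotropic TV).
    Hyperparameters are illustrative assumptions."""
    u = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        tv = (u[..., 1:, :] - u[..., :-1, :]).abs().mean() \
           + (u[..., :, 1:] - u[..., :, :-1]).abs().mean()
        loss = F.mse_loss(u, x) + weight * tv
        loss.backward()
        opt.step()
    return u.detach().clamp(0.0, 1.0)
```

Applying `tv_denoise` to the output of `fgsm_attack` before classification is the basic clean-up step the abstract evaluates.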
Affiliation(s)
- Burhan Ul Haque Sheikh
- Department of Computer Science, Aligarh Muslim University, Uttar Pradesh, Aligarh, 202002, India.
- Aasim Zafar
- Department of Computer Science, Aligarh Muslim University, Uttar Pradesh, Aligarh, 202002, India
3
Huang Q, Li G. Knowledge graph based reasoning in medical image analysis: A scoping review. Comput Biol Med 2024; 182:109100. [PMID: 39244959] [DOI: 10.1016/j.compbiomed.2024.109100]
Abstract
Automated computer-aided diagnosis (CAD) is becoming more significant in medicine due to advancements in computer hardware performance and the progress of artificial intelligence. The knowledge graph is a structure for visually representing knowledge facts. In the last decade, a large body of work based on knowledge graphs has effectively improved the organization and interpretability of large-scale complex knowledge. Introducing knowledge graph inference into CAD is a research direction with significant potential. In this review, we first briefly introduce the basic principles and application methods of knowledge graphs. We then systematically organize and analyze the research and application of knowledge graphs in medical imaging-assisted diagnosis. We also summarize the shortcomings of current research, such as medical data barriers and deficiencies, low utilization of multimodal information, and weak interpretability. Finally, we propose promising future research directions to address these shortcomings.
Affiliation(s)
- Qinghua Huang
- School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, 127 West Youyi Road, Beilin District, Xi'an, 710072, Shaanxi, China.
- Guanghui Li
- School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, 127 West Youyi Road, Beilin District, Xi'an, 710072, Shaanxi, China; School of Computer Science, Northwestern Polytechnical University, 1 Dongxiang Road, Chang'an District, Xi'an, 710129, Shaanxi, China.
4
Nabrdalik K, Irlik K, Meng Y, Kwiendacz H, Piaśnik J, Hendel M, Ignacy P, Kulpa J, Kegler K, Herba M, Boczek S, Hashim EB, Gao Z, Gumprecht J, Zheng Y, Lip GYH, Alam U. Artificial intelligence-based classification of cardiac autonomic neuropathy from retinal fundus images in patients with diabetes: The Silesia Diabetes Heart Study. Cardiovasc Diabetol 2024; 23:296. [PMID: 39127709] [PMCID: PMC11316981] [DOI: 10.1186/s12933-024-02367-z]
Abstract
BACKGROUND Cardiac autonomic neuropathy (CAN) in diabetes mellitus (DM) is independently associated with cardiovascular (CV) events and CV death. Diagnosis of this complication of DM is time-consuming and not routinely performed in clinical practice, in contrast to fundus retinal imaging, which is accessible and routinely performed. Whether artificial intelligence (AI) utilizing retinal images collected through diabetic eye screening can provide an efficient diagnostic method for CAN is unknown. METHODS This was a single-center, observational study in a cohort of patients with DM as a part of the Cardiovascular Disease in Patients with Diabetes: The Silesia Diabetes-Heart Project (NCT05626413). To diagnose CAN, we used standard CV autonomic reflex tests. In this analysis we implemented AI-based deep learning techniques with non-mydriatic 5-field color fundus imaging to identify patients with CAN. Two experiments were developed, utilizing multiple instance learning with ResNet-18 as the primary backbone network. Models underwent training and validation prior to testing on an unseen image set. RESULTS In an analysis of 2275 retinal images from 229 patients, the ResNet-18 backbone model demonstrated robust diagnostic capabilities in the binary classification of CAN, correctly identifying 93% of CAN cases and 89% of non-CAN cases within the test set. The model achieved an area under the receiver operating characteristic curve (AUROC) of 0.87 (95% CI 0.74-0.97). For distinguishing between definite or severe stages of CAN (dsCAN), the ResNet-18 model accurately classified 78% of dsCAN cases and 93% of cases without dsCAN, with an AUROC of 0.94 (95% CI 0.86-1.00). An alternate backbone model, ResWide 50, showed enhanced sensitivity at 89% for dsCAN, but with a marginally lower AUROC of 0.91 (95% CI 0.73-1.00). CONCLUSIONS AI-based algorithms utilising retinal images can identify patients with CAN with high accuracy. AI analysis of fundus images to detect CAN may be implemented in routine clinical practice to identify patients at the highest CV risk. TRIAL REGISTRATION This is a part of the Silesia Diabetes-Heart Project (ClinicalTrials.gov Identifier: NCT05626413).
Affiliation(s)
- Katarzyna Nabrdalik
- Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland.
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK.
- Krzysztof Irlik
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Doctoral School, Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Yanda Meng
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Hanna Kwiendacz
- Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Julia Piaśnik
- Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Mirela Hendel
- Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Paweł Ignacy
- Doctoral School, Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Justyna Kulpa
- Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Kamil Kegler
- Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Mikołaj Herba
- Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Sylwia Boczek
- Student's Scientific Association at the Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Effendy Bin Hashim
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Zhuangzhi Gao
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Janusz Gumprecht
- Department of Internal Medicine, Diabetology and Nephrology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Katowice, Poland
- Yalin Zheng
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Gregory Y H Lip
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Danish Center for Health Services Research, Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Uazman Alam
- Liverpool Centre for Cardiovascular Science at University of Liverpool, Liverpool John Moores University and Liverpool Heart and Chest Hospital, Liverpool, UK
- Diabetes & Endocrinology Research and Pain Research Institute, Institute of Life Course and Medical Sciences, University of Liverpool and Liverpool University Hospital NHS Foundation Trust, Liverpool, UK
5
Liu X, Tan H, Wang W, Chen Z. Deep learning based retinal vessel segmentation and hypertensive retinopathy quantification using heterogeneous features cross-attention neural network. Front Med (Lausanne) 2024; 11:1377479. [PMID: 38841586] [PMCID: PMC11150614] [DOI: 10.3389/fmed.2024.1377479]
Abstract
Retinal vessels play a pivotal role as biomarkers in the detection of retinal diseases, including hypertensive retinopathy. The manual identification of these retinal vessels is both resource-intensive and time-consuming. The fidelity of automated vessel segmentation depends directly on the quality of the fundus images, and in instances of sub-optimal image quality, deep learning-based methodologies emerge as a more effective approach for precise segmentation. We propose a heterogeneous neural network that combines the local semantic information extraction of convolutional neural networks with the long-range spatial feature mining of transformer structures. This cross-attention structure boosts the model's ability to tackle vessel structures in retinal images. Experiments on four publicly available datasets demonstrate our model's superior performance on vessel segmentation and its considerable potential for hypertensive retinopathy quantification.
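The cross-attention fusion the abstract describes can be illustrated with a generic block in which CNN feature tokens act as queries over transformer tokens. This is a sketch of the general pattern under assumed dimensions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """CNN feature tokens (queries) attend to transformer tokens (keys/values).
    A generic sketch of the pattern, not the paper's actual network."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cnn_feat: torch.Tensor, trans_tokens: torch.Tensor):
        # cnn_feat: (B, C, H, W) local features; trans_tokens: (B, L, C) global features
        b, c, h, w = cnn_feat.shape
        q = cnn_feat.flatten(2).transpose(1, 2)        # (B, H*W, C)
        fused, _ = self.attn(q, trans_tokens, trans_tokens)
        fused = self.norm(q + fused)                   # residual + layer norm
        return fused.transpose(1, 2).reshape(b, c, h, w)

# Usage with hypothetical shapes: a 32x32 CNN map fused with 196 transformer tokens
out = CrossAttentionFusion()(torch.randn(2, 256, 32, 32), torch.randn(2, 196, 256))
```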
Affiliation(s)
- Xinghui Liu
- School of Clinical Medicine, Guizhou Medical University, Guiyang, China
- Department of Cardiovascular Medicine, Guizhou Provincial People's Hospital, Guiyang, China
- Hongwen Tan
- Department of Cardiovascular Medicine, Guizhou Provincial People's Hospital, Guiyang, China
- Wu Wang
- Electrical Engineering College, Guizhou University, Guiyang, China
- Zhangrong Chen
- School of Clinical Medicine, Guizhou Medical University, Guiyang, China
- Department of Cardiovascular Medicine, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
6
Zhang Y, Feng X, Dong Y, Chen Y, Zhao Z, Yang B, Chang Y, Bai Y. SM-GRSNet: sparse mapping-based graph representation segmentation network for honeycomb lung lesion. Phys Med Biol 2024; 69:085020. [PMID: 38417177] [DOI: 10.1088/1361-6560/ad2e6b]
Abstract
Objective. Honeycomb lung is a rare but severe disease characterized by honeycomb-like imaging features and distinct radiological characteristics. This study therefore aims to develop a deep-learning model capable of segmenting honeycomb lung lesions from computed tomography (CT) scans to address the efficacy issue of honeycomb lung segmentation. Methods. This study proposes a sparse mapping-based graph representation segmentation network (SM-GRSNet). SM-GRSNet integrates an attention affinity mechanism to effectively filter redundant features at a coarse-grained region level. The attention encoder generated by this mechanism specifically focuses on the lesion area. Additionally, we introduce a graph representation module based on sparse links in SM-GRSNet. Graph representation operations are then performed on the sparse graph, yielding detailed lesion segmentation results. Finally, we construct a pyramid-structured cascaded decoder in SM-GRSNet, which combines features from the sparse link-based graph representation modules and attention encoders to generate the final segmentation mask. Results. Experimental results demonstrate that the proposed SM-GRSNet achieves state-of-the-art performance on a dataset comprising 7170 honeycomb lung CT images. Our model attains the highest IoU (87.62%) and Dice (93.41%), and also achieves the lowest HD95 (6.95) and ASD (2.47). Significance. The SM-GRSNet method proposed in this paper can be used for the automatic segmentation of honeycomb lung CT images and enhances segmentation performance for honeycomb lung lesions under small-sample datasets. It will help doctors with early screening, accurate diagnosis, and customized treatment. The method maintains high correlation and consistency between the automatic segmentation results and expert manual segmentations. Accurate automatic segmentation of the honeycomb lung lesion area is clinically important.
Affiliation(s)
- Yuanrong Zhang
- School of Software, Taiyuan University of Technology, Taiyuan 030024, People's Republic of China
- Xiufang Feng
- School of Software, Taiyuan University of Technology, Taiyuan 030024, People's Republic of China
- Yunyun Dong
- School of Software, Taiyuan University of Technology, Taiyuan 030024, People's Republic of China
- Ying Chen
- School of International Education, Beijing University of Chemical Technology, Beijing 100029, People's Republic of China
- Zian Zhao
- School of Software, Taiyuan University of Technology, Taiyuan 030024, People's Republic of China
- Bingqian Yang
- School of Software, Taiyuan University of Technology, Taiyuan 030024, People's Republic of China
- Yunqing Chang
- School of Software, Taiyuan University of Technology, Taiyuan 030024, People's Republic of China
- Yujie Bai
- School of Software, Taiyuan University of Technology, Taiyuan 030024, People's Republic of China
7
Fang Y, Xing X, Wang S, Walsh S, Yang G. Post-COVID highlights: Challenges and solutions of artificial intelligence techniques for swift identification of COVID-19. Curr Opin Struct Biol 2024; 85:102778. [PMID: 38364679] [DOI: 10.1016/j.sbi.2024.102778]
Abstract
Since the onset of the COVID-19 pandemic in 2019, there has been a concerted effort to develop cost-effective, non-invasive, and rapid AI-based tools. These tools were intended to alleviate the burden on healthcare systems, control the rapid spread of the virus, and enhance intervention outcomes, all in response to this unprecedented global crisis. As we transition into a post-COVID era, we retrospectively evaluate these proposed studies and offer a review of the techniques employed in AI diagnostic models, with a focus on the solutions proposed for different challenges. This review endeavors to provide insights into the diverse solutions designed to address the multifaceted challenges that arose during the pandemic. By doing so, we aim to prepare the AI community for the development of AI tools tailored to address public health emergencies effectively.
Affiliation(s)
- Yingying Fang
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
- Xiaodan Xing
- Bioengineering Department, Imperial College London, London W12 7SL, UK
- Shiyi Wang
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
- Simon Walsh
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK; Bioengineering Department, Imperial College London, London W12 7SL, UK; Imperial-X, Imperial College London, London W12 7SL, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, London WC2R 2LS, UK.
8
Bougourzi F, Distante C, Dornaika F, Taleb-Ahmed A, Hadid A, Chaudhary S, Yang W, Qiang Y, Anwar T, Breaban ME, Hsu CC, Tai SC, Chen SN, Tricarico D, Chaudhry HAH, Fiandrotti A, Grangetto M, Spatafora MAN, Ortis A, Battiato S. COVID-19 Infection Percentage Estimation from Computed Tomography Scans: Results and Insights from the International Per-COVID-19 Challenge. Sensors (Basel) 2024; 24:1557. [PMID: 38475092] [PMCID: PMC10934842] [DOI: 10.3390/s24051557]
Abstract
COVID-19 analysis from medical imaging is an important task that has been intensively studied in recent years due to the spread of the COVID-19 pandemic. In fact, medical imaging has often been used as a complementary or main tool to identify infected persons. Moreover, medical imaging can provide more details about COVID-19 infection, including its severity and spread, which makes it possible to evaluate the infection and follow up on the patient's state. CT scans are the most informative tool for COVID-19 infection, where the evaluation of the infection is usually performed through infection segmentation. However, segmentation is a tedious task that requires much effort and time from expert radiologists. To deal with this limitation, an efficient framework for estimating COVID-19 infection as a regression task is proposed. The goal of the Per-COVID-19 challenge is to test the efficiency of modern deep learning methods on COVID-19 infection percentage estimation (CIPE) from CT scans. Participants had to develop an efficient deep learning approach that can learn from noisy data. In addition, participants had to cope with many challenges, including those related to COVID-19 infection complexity and cross-dataset scenarios. This paper provides an overview of the COVID-19 infection percentage estimation challenge (Per-COVID-19) held at MIA-COVID-2022. Details of the competition data, challenges, and evaluation metrics are presented. The best-performing approaches and their results are described and discussed.
Affiliation(s)
- Fares Bougourzi
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy
- Laboratoire LISSI, University Paris-Est Creteil, Vitry sur Seine, 94400 Paris, France
- Cosimo Distante
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy
- Fadi Dornaika
- Department of Computer Science and Artificial Intelligence, University of the Basque Country UPV/EHU, Manuel Lardizabal, 1, 20018 San Sebastian, Spain
- IKERBASQUE, Basque Foundation for Science, 48011 Bilbao, Spain
- Abdelmalik Taleb-Ahmed
- Institut d’Electronique de Microélectronique et de Nanotechnologie (IEMN), UMR 8520, Universite Polytechnique Hauts-de-France, Université de Lille, CNRS, 59313 Valenciennes, France
- Abdenour Hadid
- Sorbonne Center for Artificial Intelligence, Sorbonne University of Abu Dhabi, Abu Dhabi P.O. Box 38044, United Arab Emirates
- Suman Chaudhary
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan 030024, China
- Wanting Yang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan 030024, China
- Yan Qiang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan 030024, China
- Talha Anwar
- School of Computing, National University of Computer and Emerging Sciences, Islamabad 44000, Pakistan
- Chih-Chung Hsu
- Institute of Data Science, National Cheng Kung University, No. 1, University Rd., East Dist., Tainan City 701, Taiwan
- Shen-Chieh Tai
- Institute of Data Science, National Cheng Kung University, No. 1, University Rd., East Dist., Tainan City 701, Taiwan
- Shao-Ning Chen
- Institute of Data Science, National Cheng Kung University, No. 1, University Rd., East Dist., Tainan City 701, Taiwan
- Davide Tricarico
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Hafiza Ayesha Hoor Chaudhry
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Attilio Fiandrotti
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Marco Grangetto
- Dipartimento di Informatica, Universita degli Studi di Torino, Corso Svizzera 185, 10149 Torino, Italy
- Alessandro Ortis
- Department of Mathematics and Computer Science, University of Catania, 95125 Catania, Italy
- Sebastiano Battiato
- Department of Mathematics and Computer Science, University of Catania, 95125 Catania, Italy
9
Garg A, Alag S, Duncan D. CoSev: Data-Driven Optimizations for COVID-19 Severity Assessment in Low-Sample Regimes. Diagnostics (Basel) 2024; 14:337. [PMID: 38337853] [PMCID: PMC10855975] [DOI: 10.3390/diagnostics14030337]
Abstract
Given the pronounced impact COVID-19 continues to have on society, with 700 million reported infections and 6.96 million deaths, many deep learning works have recently focused on the virus's diagnosis. However, assessing severity has remained an open and challenging problem due to a lack of large datasets, the high dimensionality of the images, and the compute limitations of modern graphics processing units (GPUs). In this paper, a new, iterative application of transfer learning is demonstrated on the understudied field of 3D CT scans for COVID-19 severity analysis. This methodology allows for enhanced performance on the MosMed dataset, a small and challenging dataset containing 1130 images of patients across five levels of COVID-19 severity (Zero, Mild, Moderate, Severe, and Critical). Specifically, given the high dimensionality of the input images, we create several custom shallow convolutional neural network (CNN) architectures and iteratively refine and optimize them, paying attention to learning rates, layer types, normalization types, filter sizes, dropout values, and more. After a preliminary architecture design, the models are systematically trained on a simplified version of the dataset, building models for two-class, then three-class, then four-class, and finally five-class classification. The simplified problem structure allows the model to start learning preliminary features, which can then be further refined for more difficult classification tasks. Our final model, CoSev, boosts classification accuracy from below 60% initially to 81.57% after the optimizations, reaching performance similar to the state of the art on the dataset with much simpler setup procedures. Beyond COVID-19 severity diagnosis, the explored methodology can be applied to general image-based disease detection. Overall, this work highlights innovative methodologies that advance current computer vision practices for high-dimension, low-sample data, as well as the practicality of data-driven machine learning and the importance of feature design for training, which can then be implemented for improvements in clinical practices.
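The staged training described above, reusing a backbone trained on an easier relabeling of the dataset as the initialization for a harder one, can be sketched as follows. The tiny 2D CNN and the training-loop placeholder are illustrative assumptions; the paper works with 3D CT volumes and several custom architectures.

```python
from typing import Optional

import torch.nn as nn

def make_cnn(num_classes: int, body: Optional[nn.Sequential] = None) -> nn.Sequential:
    """Small 2D CNN stand-in: reuse a previously trained body, attach a fresh head."""
    if body is None:
        body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    return nn.Sequential(body, nn.Linear(32, num_classes))

body = None
for k in (2, 3, 4, 5):  # two-class -> three-class -> four-class -> five-class
    model = make_cnn(k, body)
    # ... train `model` on the k-class relabeling of the severity dataset ...
    body = model[0]     # carry the learned feature extractor into the next stage
```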
Affiliation(s)
- Aksh Garg
- Computer Science Department, Stanford University, Stanford, CA 94305, USA
- Shray Alag
- Computer Science Department, Stanford University, Stanford, CA 94305, USA
- Dominique Duncan
- Laboratory of Neuro Imaging, USC Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA 90033, USA
10
Haque SBU, Zafar A. Robust Medical Diagnosis: A Novel Two-Phase Deep Learning Framework for Adversarial Proof Disease Detection in Radiology Images. J Imaging Inform Med 2024; 37:308-338. [PMID: 38343214] [PMCID: PMC11266337] [DOI: 10.1007/s10278-023-00916-8]
Abstract
In the realm of medical diagnostics, the utilization of deep learning techniques, notably in the context of radiology images, has emerged as a transformative force. The significance of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), lies in their capacity to rapidly and accurately diagnose diseases from radiology images. This capability has been particularly vital during the COVID-19 pandemic, where rapid and precise diagnosis played a pivotal role in managing the spread of the virus. DL models, trained on vast datasets of radiology images, have showcased remarkable proficiency in distinguishing between normal and COVID-19-affected cases, offering a ray of hope amidst the crisis. However, as with any technological advancement, vulnerabilities emerge. Deep learning-based diagnostic models, although proficient, are not immune to adversarial attacks. These attacks, characterized by carefully crafted perturbations to input data, can potentially disrupt the models' decision-making processes. In the medical context, such vulnerabilities could have dire consequences, leading to misdiagnoses and compromised patient care. To address this, we propose a two-phase defense framework that combines advanced adversarial learning and adversarial image filtering techniques. We use a modified adversarial learning algorithm to enhance the model's resilience against adversarial examples during the training phase. During the inference phase, we apply JPEG compression to mitigate perturbations that cause misclassification. We evaluate our approach on three models based on ResNet-50, VGG-16, and Inception-V3. These models perform exceptionally well in classifying radiology images (X-ray and CT) of lung regions into normal, pneumonia, and COVID-19 pneumonia categories. We then assess the vulnerability of these models to three targeted adversarial attacks: fast gradient sign method (FGSM), projected gradient descent (PGD), and basic iterative method (BIM). The results show a significant drop in model performance after the attacks. However, our defense framework greatly improves the models' resistance to adversarial attacks, maintaining high accuracy on adversarial examples. Importantly, our framework ensures the reliability of the models in diagnosing COVID-19 from clean images.
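Of the two phases, the inference-time JPEG defense is simple enough to sketch: each incoming radiograph is round-tripped through lossy JPEG compression before classification, discarding the high-frequency components in which adversarial perturbations tend to live. The quality setting below is an illustrative assumption, not the paper's reported value.

```python
import io

import numpy as np
from PIL import Image

def jpeg_squeeze(img: np.ndarray, quality: int = 75) -> np.ndarray:
    """Round-trip a uint8 image through lossy JPEG before classification.
    quality=75 is an illustrative choice, not the paper's reported setting."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

# Usage: filter each radiograph before it reaches the diagnosis model
filtered = jpeg_squeeze(np.random.randint(0, 256, (224, 224), dtype=np.uint8))
```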
Affiliation(s)
- Sheikh Burhan Ul Haque
- Department of Computer Science, Aligarh Muslim University, Uttar Pradesh, Aligarh, 202002, India.
- Aasim Zafar
- Department of Computer Science, Aligarh Muslim University, Uttar Pradesh, Aligarh, 202002, India
11
Liang Z, Xue Z, Rajaraman S, Feng Y, Antani S. Automatic Quantification of COVID-19 Pulmonary Edema by Self-supervised Contrastive Learning. Medical Image Learning with Limited and Noisy Data: Second International Workshop, MILLanD 2023, Held in Conjunction with MICCAI 2023. 2023; 14307:128-137. [PMID: 38415180] [PMCID: PMC10896252] [DOI: 10.1007/978-3-031-44917-8_12]
Abstract
We proposed a self-supervised machine learning method to automatically rate the severity of pulmonary edema in frontal chest X-ray radiographs (CXRs), which can be potentially related to COVID-19 viral pneumonia. For this we use the modified radiographic assessment of lung edema (mRALE) scoring system. The new model was first optimized with the simple Siamese network (SimSiam) architecture, where a ResNet-50 pretrained on the ImageNet database was used as the backbone. The encoder projected a 2048-dimensional embedding as representation features to a downstream fully connected deep neural network for mRALE score prediction. A 5-fold cross-validation with 2,599 frontal CXRs was used to examine the new model's performance in comparison with a non-pretrained SimSiam encoder and a ResNet-50 trained from scratch. The mean absolute error (MAE) of the new model is 5.05 (95%CI 5.03-5.08), the mean squared error (MSE) is 66.67 (95%CI 66.29-67.06), and the Spearman's correlation coefficient (Spearman ρ) with the expert-annotated scores is 0.77 (95%CI 0.75-0.79). All performance metrics of the new model are superior to those of the two comparators (P<0.01), and the MSE and Spearman ρ scores of the two comparators show no statistically significant difference (P>0.05). The model also achieved a prediction probability concordance of 0.811 and a quadratic weighted kappa of 0.739 with the medical expert annotations in external validation. We conclude that the self-supervised contrastive learning method is an effective strategy for automated mRALE scoring. It provides a new approach to improve machine learning performance and minimize the involvement of expert knowledge in quantitative medical image pattern learning.
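The core SimSiam objective, a symmetric negative cosine similarity with a stop-gradient on the target branch, can be sketched as follows. The projector and predictor sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimSiamHead(nn.Module):
    """Projector + predictor on top of backbone features (e.g. ResNet-50).
    Layer sizes are illustrative, not the paper's exact configuration."""

    def __init__(self, feat_dim: int = 2048, proj_dim: int = 2048, pred_dim: int = 512):
        super().__init__()
        self.project = nn.Sequential(nn.Linear(feat_dim, proj_dim), nn.BatchNorm1d(proj_dim))
        self.predict = nn.Sequential(nn.Linear(proj_dim, pred_dim), nn.ReLU(),
                                     nn.Linear(pred_dim, proj_dim))

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        z1, z2 = self.project(f1), self.project(f2)
        p1, p2 = self.predict(z1), self.predict(z2)

        def d(p, z):
            # stop-gradient on the target branch is the key SimSiam ingredient
            return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

        return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

# f1, f2: backbone embeddings of two augmented views of the same CXR batch
loss = SimSiamHead()(torch.randn(8, 2048), torch.randn(8, 2048))
```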
Affiliation(s)
- Zhaohui Liang
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sivaramakrishnan Rajaraman
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Yang Feng
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
12
Yang W, Wu W, Wang L, Zhang S, Zhao J, Qiang Y. PMSG-Net: A priori-guided multilevel graph transformer fusion network for immunotherapy efficacy prediction. Comput Biol Med 2023; 164:107371. [PMID: 37586204] [DOI: 10.1016/j.compbiomed.2023.107371]
Abstract
Given specific immunotherapy regimens and access to pre-treatment CT scans, developing reliable, interpretable image biomarkers to predict efficacy is essential for physician decision-making and patient treatment selection. However, patients with differing prognoses can present similar appearances on CT scans, and it is challenging for experienced experts and existing prognostic classification methods alike to stratify patients from a single pre-treatment CT scan exhibiting only subtle image differences. In addition, the pattern of peri-tumoural radiological structures also determines the patient's response to ICIs. It is therefore essential to develop a method that focuses on the clinically informed prior features of the tumour edges while also making full use of the rich information within the 3D tumour. This paper proposes a priori-guided multilevel graph transformer fusion network (PMSG-Net). Specifically, a graph convolutional network is first used to obtain a feature representation of the tumour edge, and complementary information from that detailed representation is used to enhance the global representation. In the tumour global representation branch (MSGNet), we designed a cascaded scale-enhanced Swin transformer to obtain the attributes of graph nodes, and efficiently learn and model spatial dependencies and semantic connections at different scales through multi-hop context-aware attention (MCA), yielding a richer global semantic representation. To our knowledge, this is the first attempt to use graph neural networks to predict the efficacy of immunotherapy, and the experimental results show that this method outperforms current mainstream methods.
Affiliation(s)
- Wanting Yang
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Wei Wu
- Department of Clinical Laboratory, Affiliated People's Hospital of Shanxi Medical University, Shanxi Provincial People's Hospital, Taiyuan, Shanxi, China
- Long Wang
- Jinzhong College of Information, 030600, Taiyuan, Shanxi, China
- Shuming Zhang
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Juanjuan Zhao
- College of Software, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China
- Yan Qiang
- College of Information and Computer, Taiyuan University of Technology, 030000, Taiyuan, Shanxi, China.
13
Patefield A, Meng Y, Airaldi M, Coco G, Vaccaro S, Parekh M, Semeraro F, Gadhvi KA, Kaye SB, Zheng Y, Romano V. Deep Learning Using Preoperative AS-OCT Predicts Graft Detachment in DMEK. Transl Vis Sci Technol 2023; 12:14. [PMID: 37184500] [DOI: 10.1167/tvst.12.5.14]
Abstract
Purpose To evaluate a novel deep learning algorithm to distinguish between eyes that may or may not have a graft detachment based on pre-Descemet membrane endothelial keratoplasty (DMEK) anterior segment optical coherence tomography (AS-OCT) images. Methods Retrospective cohort study. A multiple-instance learning artificial intelligence (MIL-AI) model using a ResNet-101 backbone was designed. AS-OCT images were split into training and testing sets. The MIL-AI model was trained and validated on the training set. Model performance and heatmaps were calculated from the testing set. Classification performance metrics included F1 score (the harmonic mean of recall and precision), specificity, sensitivity, and area under the curve (AUC). Finally, MIL-AI performance was compared to manual classification by an experienced ophthalmologist. Results In total, 9466 images of 74 eyes (128 images per eye) were included in the study. Images from 50 eyes were used to train and validate the MIL-AI system, while the remaining 24 eyes were used as the test set to determine its performance and generate heatmaps for visualization. The performance metrics on the test set (95% confidence interval) were as follows: F1 score, 0.77 (0.57-0.91); precision, 0.67 (0.44-0.88); specificity, 0.45 (0.15-0.75); sensitivity, 0.92 (0.73-1.00); and AUC, 0.63 (0.52-0.86). MIL-AI performance was more sensitive (92% vs. 31%) but less specific (45% vs. 64%) than the ophthalmologist's. Conclusions The MIL-AI model predicts with high sensitivity which eyes may have post-DMEK graft detachment requiring rebubbling. Larger-scale clinical trials are warranted to validate the model. Translational Relevance MIL-AI models represent an opportunity for implementation in routine DMEK suitability screening.
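The bag structure here (one eye as a bag of 128 AS-OCT images) lends itself to a simple sketch: extract per-image features with a ResNet-101 backbone and pool them into a single bag-level prediction. Mean pooling is an assumed aggregation for illustration, since the abstract does not specify how MIL-AI combines instances.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet101

class BagClassifier(nn.Module):
    """Bag-level classifier over per-image ResNet-101 features; mean pooling
    is an assumed aggregation, not necessarily the MIL-AI model's."""

    def __init__(self):
        super().__init__()
        self.backbone = resnet101(weights=None)
        self.backbone.fc = nn.Identity()       # expose 2048-d image features
        self.head = nn.Linear(2048, 1)         # graft-detachment logit

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_images, 3, H, W), e.g. the 128 AS-OCT frames of one eye
        # (grayscale scans would be replicated to 3 channels beforehand)
        feats = self.backbone(bag)             # (num_images, 2048)
        return self.head(feats.mean(dim=0))    # (1,) bag-level logit

logit = BagClassifier()(torch.randn(128, 3, 224, 224))
```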
Affiliation(s)
- Alastair Patefield
- Department of Eye and Vision Sciences, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Yanda Meng
- Department of Eye and Vision Sciences, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Matteo Airaldi
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Giulia Coco
- Department of Corneal Diseases, St. Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Department of Clinical Science and Translational Medicine, University of Rome Tor Vergata, Rome, Italy
- Sabrina Vaccaro
- Department of Corneal Diseases, St. Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Department of Ophthalmology, University of "Magna Graecia," Catanzaro, Italy
- Mohit Parekh
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Francesco Semeraro
- Ophthalmology Clinic, Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy
- Kunal A Gadhvi
- Department of Eye and Vision Sciences, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Department of Corneal Diseases, St. Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Stephen B Kaye
- Department of Eye and Vision Sciences, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Department of Corneal Diseases, St. Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Yalin Zheng
- Department of Eye and Vision Sciences, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Department of Corneal Diseases, St. Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK
- Vito Romano
- Department of Eye and Vision Sciences, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Department of Corneal Diseases, St. Paul's Eye Unit, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Ophthalmology Clinic, Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Brescia, Italy