1. Essa HA, Ismaiel E, Hinnawi MFA. Feature-based detection of breast cancer using convolutional neural network and feature engineering. Sci Rep 2024; 14:22215. PMID: 39333731. DOI: 10.1038/s41598-024-73083-7.
Abstract
Breast cancer (BC) is a prominent cause of female mortality on a global scale. Recently, there has been growing interest in using blood- and tissue-based biomarkers to detect and diagnose BC, as they offer a non-invasive approach. To improve the classification and prediction of BC from large biomarker datasets, several machine-learning techniques have been proposed. In this paper, we present a multi-stage approach that computes new features and then arranges them into an input image for the ResNet50 neural network. The method transforms the original values into normalized values based on their membership in the Gaussian distributions of the healthy and BC samples of each feature. To test the effectiveness of the proposed approach, we employed the Coimbra and Wisconsin datasets. The results demonstrate a clear performance improvement, with 100% accuracy on both the Coimbra and Wisconsin datasets. Furthermore, comparison with the existing literature confirms the reliability and effectiveness of the methodology: because of its generality, the normalized value reduces the number of samples misclassified by ML techniques.
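The Gaussian-membership normalization described above can be sketched as follows. The paper's exact formula is not reproduced here; this illustrative version (the function names and the likelihood-ratio form are assumptions) maps each raw feature value to [0, 1] by comparing its likelihood under per-class Gaussians fitted to healthy and BC training samples:

```python
import math

def mean_std(values):
    """Sample mean and (unbiased) standard deviation of a feature."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / (n - 1)
    return mu, math.sqrt(var)

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def membership_normalize(x, healthy, cancer):
    """Normalized value in [0, 1]: 0.5 is ambiguous, values near 1 are
    cancer-like, values near 0 are healthy-like (illustrative rule)."""
    mu_h, sd_h = mean_std(healthy)
    mu_c, sd_c = mean_std(cancer)
    p_h = gaussian_pdf(x, mu_h, sd_h)
    p_c = gaussian_pdf(x, mu_c, sd_c)
    return p_c / (p_h + p_c)
```

A value halfway between the two class distributions comes out near 0.5, which is what makes the representation comparable across features of different scales.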
Affiliation(s)
- Hiba Allah Essa
- Department of Biomedical Engineering, Faculty of Electrical and Mechanical Engineering, Damascus University, Damascus, Syria
- Ebrahim Ismaiel
- Faculty of Biomedical Engineering, Al-Andalus University for Medical Sciences, Tartous, Syria
- Department of Medicine and Surgery, University of Parma, Parma, Italy
- Mhd Firas Al Hinnawi
- Department of Biomedical Engineering, Faculty of Electrical and Mechanical Engineering, Damascus University, Damascus, Syria
2. Jiang B, Bao L, He S, Chen X, Jin Z, Ye Y. Deep learning applications in breast cancer histopathological imaging: diagnosis, treatment, and prognosis. Breast Cancer Res 2024; 26:137. PMID: 39304962. DOI: 10.1186/s13058-024-01895-6.
Abstract
Breast cancer is the most common malignant tumor among women worldwide and remains one of their leading causes of death; its incidence and mortality rates are rising continuously. In recent years, with the rapid advancement of deep learning (DL) technology, DL has demonstrated significant potential in breast cancer diagnosis, prognosis evaluation, and treatment response prediction. This paper reviews relevant research progress and applies DL models to image enhancement, segmentation, and classification based on large-scale datasets from TCGA and multiple centers. We employed foundational models such as ResNet50, Transformer, and Hover-net to investigate the performance of DL models in breast cancer diagnosis, treatment, and prognosis prediction. The results indicate that DL techniques have significantly improved diagnostic accuracy and efficiency, particularly in predicting breast cancer metastasis and clinical prognosis. Furthermore, the study emphasizes the crucial role of robust databases in developing highly generalizable models. Future research will focus on addressing challenges related to data management, model interpretability, and regulatory compliance, ultimately aiming to provide more precise clinical treatment and prognostic evaluation programs for breast cancer patients.
Affiliation(s)
- Bitao Jiang
- Department of Hematology and Oncology, Beilun District People's Hospital, Ningbo, 315800, China
- Department of Hematology and Oncology, Beilun Branch of the First Affiliated Hospital of Zhejiang University, Ningbo, 315800, China
- Lingling Bao
- Department of Hematology and Oncology, Beilun District People's Hospital, Ningbo, 315800, China
- Department of Hematology and Oncology, Beilun Branch of the First Affiliated Hospital of Zhejiang University, Ningbo, 315800, China
- Songqin He
- Department of Oncology, The 906th Hospital of the Joint Logistics Force of the Chinese People's Liberation Army, Ningbo, 315100, China
- Xiao Chen
- Department of Oncology, The 906th Hospital of the Joint Logistics Force of the Chinese People's Liberation Army, Ningbo, 315100, China
- Zhihui Jin
- Department of Hematology and Oncology, Beilun District People's Hospital, Ningbo, 315800, China
- Department of Hematology and Oncology, Beilun Branch of the First Affiliated Hospital of Zhejiang University, Ningbo, 315800, China
- Yingquan Ye
- Department of Oncology, The 906th Hospital of the Joint Logistics Force of the Chinese People's Liberation Army, Ningbo, 315100, China
3. Horasan A, Güneş A. Advancing Prostate Cancer Diagnosis: A Deep Learning Approach for Enhanced Detection in MRI Images. Diagnostics (Basel) 2024; 14:1871. PMID: 39272656. PMCID: PMC11393904. DOI: 10.3390/diagnostics14171871.
Abstract
Prostate cancer remains a leading cause of mortality among men globally, necessitating advancements in diagnostic methodologies to improve detection and treatment outcomes. Magnetic resonance imaging (MRI) has emerged as a crucial technique for the detection of prostate cancer, with current research focusing on the integration of deep learning frameworks to refine this diagnostic process. This study employs a comprehensive approach using multiple deep learning models, including a three-dimensional (3D) convolutional neural network, a residual network, and an Inception network, to enhance the accuracy and robustness of prostate cancer detection. By leveraging the complementary strengths of these models through an ensemble method with a soft voting technique, the study aims to achieve superior diagnostic performance. The proposed methodology demonstrates state-of-the-art results, with the ensemble model achieving an overall accuracy of 91.3%, a sensitivity of 90.2%, a specificity of 92.1%, a precision of 89.8%, and an F1 score of 90.0% when applied to MRI images from the SPIE-AAPM-NCI PROSTATEx dataset. Evaluation of the models involved meticulous pre-processing, data augmentation, and the use of advanced deep-learning architectures to analyze whole MRI slices and volumes. The findings highlight the potential of an ensemble approach to significantly improve prostate cancer diagnostics, offering a robust and precise tool for clinical applications.
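The soft-voting step of such an ensemble can be illustrated with a short sketch. The `soft_vote` helper and the optional per-model weights are assumptions rather than the authors' code; it simply averages the class-probability vectors produced by the three networks and takes the argmax per sample:

```python
import numpy as np

def soft_vote(prob_lists, weights=None):
    """Soft voting: weighted average of each model's class-probability
    vectors, then argmax over classes for every sample."""
    probs = np.stack([np.asarray(p, dtype=float) for p in prob_lists])
    # probs has shape (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_lists))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    avg = np.tensordot(weights, probs, axes=1)  # (n_samples, n_classes)
    return avg.argmax(axis=1), avg
```

With equal weights this is plain probability averaging; unequal weights would let a stronger model (say, the 3D CNN) count for more.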
Affiliation(s)
- Alparslan Horasan
- Computer Engineering Department, Istanbul Aydin University, 34150 Istanbul, Turkey
- Ali Güneş
- Computer Engineering Department, Istanbul Aydin University, 34150 Istanbul, Turkey
4. Rajdeo P, Aronow B, Surya Prasath VB. Deep learning-based multimodal spatial transcriptomics analysis for cancer. Adv Cancer Res 2024; 163:1-38. PMID: 39271260. PMCID: PMC11431148. DOI: 10.1016/bs.acr.2024.08.001.
Abstract
The advent of deep learning (DL) and multimodal spatial transcriptomics (ST) has revolutionized cancer research, offering unprecedented insights into tumor biology. This book chapter explores the integration of DL with ST to advance cancer diagnostics, treatment planning, and precision medicine. DL, a subset of artificial intelligence, employs neural networks to model complex patterns in vast datasets, significantly enhancing diagnostic and treatment applications. In oncology, convolutional neural networks excel in image classification, segmentation, and tumor volume analysis, essential for identifying tumors and optimizing radiotherapy. The chapter also delves into multimodal data analysis, which integrates genomic, proteomic, imaging, and clinical data to offer a holistic understanding of cancer biology. Leveraging diverse data sources, researchers can uncover intricate details of tumor heterogeneity, microenvironment interactions, and treatment responses. Examples include integrating MRI data with genomic profiles for accurate glioma grading and combining proteomic and clinical data to uncover drug resistance mechanisms. DL's integration with multimodal data enables comprehensive and actionable insights for cancer diagnosis and treatment. The synergy between DL models and multimodal data analysis enhances diagnostic accuracy, personalized treatment planning, and prognostic modeling. Notable applications include ST, which maps gene expression patterns within tissue contexts, providing critical insights into tumor heterogeneity and potential therapeutic targets. In summary, the integration of DL and multimodal ST represents a paradigm shift towards more precise and personalized oncology. This chapter elucidates the methodologies and applications of these advanced technologies, highlighting their transformative potential in cancer research and clinical practice.
Affiliation(s)
- Pankaj Rajdeo
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Bruce Aronow
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- V B Surya Prasath
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, United States; Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, OH, United States; Department of Computer Science, University of Cincinnati, Cincinnati, OH, United States
5. Dunenova G, Kalmataeva Z, Kaidarova D, Dauletbaev N, Semenova Y, Mansurova M, Grjibovski A, Kassymbekova F, Sarsembayev A, Semenov D, Glushkova N. The Performance and Clinical Applicability of HER2 Digital Image Analysis in Breast Cancer: A Systematic Review. Cancers (Basel) 2024; 16:2761. PMID: 39123488. PMCID: PMC11311684. DOI: 10.3390/cancers16152761.
Abstract
This systematic review aims to address the research gap in the performance of computational algorithms for the digital image analysis of HER2 images in clinical settings. While numerous studies have explored various aspects of these algorithms, there is a lack of comprehensive evaluation regarding their effectiveness in real-world clinical applications. We conducted a search of the Web of Science and PubMed databases for studies published from 31 December 2013 to 30 June 2024, focusing on performance effectiveness and components such as dataset size, diversity and source, ground truth, annotation, and validation methods. The study was registered with PROSPERO (CRD42024525404). Key questions guiding this review include the following: How effective are current computational algorithms at detecting HER2 status in digital images? What are the common validation methods and dataset characteristics used in these studies? Is there standardization of algorithm evaluations for clinical applications that can improve the clinical utility and reliability of computational tools for HER2 detection in digital image analysis? We identified 6833 publications, with 25 meeting the inclusion criteria. The accuracy achieved on clinical datasets varied from 84.19% to 97.9%; the highest accuracy, 98.8%, was achieved on the publicly available synthesized Warwick dataset. Only 12% of studies used separate datasets for external validation, and 64% of studies used a combination of accuracy, precision, recall, and F1 as performance measures. Despite the high accuracy rates reported in these studies, there is a notable absence of direct evidence supporting their clinical application. To facilitate the integration of these technologies into clinical practice, there is an urgent need to address real-world challenges and the overreliance on internal validation. Standardizing study designs on real clinical datasets can enhance the reliability and clinical applicability of computational algorithms for HER2 detection.
Affiliation(s)
- Gauhar Dunenova
- Department of Epidemiology, Biostatistics and Evidence-Based Medicine, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Zhanna Kalmataeva
- Rector Office, Asfendiyarov Kazakh National Medical University, Almaty 050000, Kazakhstan
- Dilyara Kaidarova
- Kazakh Research Institute of Oncology and Radiology, Almaty 050022, Kazakhstan
- Nurlan Dauletbaev
- Department of Internal, Respiratory and Critical Care Medicine, Philipps University of Marburg, 35037 Marburg, Germany
- Department of Pediatrics, Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H4A 3J1, Canada
- Faculty of Medicine and Health Care, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Yuliya Semenova
- School of Medicine, Nazarbayev University, Astana 010000, Kazakhstan
- Madina Mansurova
- Department of Artificial Intelligence and Big Data, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Andrej Grjibovski
- Central Scientific Research Laboratory, Northern State Medical University, Arkhangelsk 163000, Russia
- Department of Epidemiology and Modern Vaccination Technologies, I.M. Sechenov First Moscow State Medical University, Moscow 105064, Russia
- Department of Biology, Ecology and Biotechnology, Northern (Arctic) Federal University, Arkhangelsk 163000, Russia
- Department of Health Policy and Management, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Fatima Kassymbekova
- Department of Public Health and Social Sciences, Kazakhstan Medical University “KSPH”, Almaty 050060, Kazakhstan
- Aidos Sarsembayev
- School of Digital Technologies, Almaty Management University, Almaty 050060, Kazakhstan
- Health Research Institute, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Daniil Semenov
- Computer Science and Engineering Program, Astana IT University, Astana 020000, Kazakhstan
- Natalya Glushkova
- Department of Epidemiology, Biostatistics and Evidence-Based Medicine, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
- Health Research Institute, Al-Farabi Kazakh National University, Almaty 050040, Kazakhstan
6. Chen W, Li Q, Zhang H, Sun K, Sun W, Jiao Z, Ni X. MR-CT image fusion method of intracranial tumors based on Res2Net. BMC Med Imaging 2024; 24:169. PMID: 38977957. PMCID: PMC11232265. DOI: 10.1186/s12880-024-01329-x.
Abstract
BACKGROUND Information complementarity can be achieved by fusing MR and CT images: fused images contain abundant soft-tissue and bone information, facilitating accurate auxiliary diagnosis and tumor target delineation. PURPOSE The purpose of this study was to construct high-quality fusion images from the MR and CT images of intracranial tumors by using the Residual-Residual Network (Res2Net) method. METHODS This paper proposes an MR and CT image fusion method based on Res2Net. The method comprises three components: a feature extractor, a fusion layer, and a reconstructor. The feature extractor utilizes the Res2Net framework to extract multiscale features from the source images. The fusion layer incorporates a fusion strategy based on spatial mean attention, adaptively adjusting the fusion weights of the feature maps at each position to preserve fine details from the source images. Finally, the fused features are input into the reconstructor to reconstruct a fused image. RESULTS Qualitative results indicate that the proposed fusion method exhibits clear boundary contours and accurate localization of tumor regions. Quantitative results show that the method achieves an average gradient of 4.6771, a spatial frequency of 13.2055, an entropy of 1.8663, and a visual information fidelity of 0.5176. Comprehensive experiments demonstrate that the proposed method preserves more texture details and structural information in fused images than advanced fusion algorithms, reduces spectral artifacts and information loss, and performs better in terms of visual quality and objective metrics. CONCLUSION The proposed method effectively combines MR and CT image information, allowing precise localization of tumor region boundaries and assisting clinicians in clinical diagnosis.
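A minimal sketch of a spatial-mean-attention fusion rule, assuming (since the paper's exact formulation is not given here) that each spatial position's weight comes from the channel-mean activation of the two source feature maps, softmax-normalized across the two sources:

```python
import numpy as np

def spatial_mean_attention_fusion(feat_mr, feat_ct):
    """Fuse two feature maps of shape (C, H, W). The per-position weight
    of each source is a softmax over the two channel-mean activation maps,
    so the more strongly activated source dominates at that position."""
    act_mr = np.abs(feat_mr).mean(axis=0)   # (H, W) mean over channels
    act_ct = np.abs(feat_ct).mean(axis=0)
    exp_mr, exp_ct = np.exp(act_mr), np.exp(act_ct)
    w_mr = exp_mr / (exp_mr + exp_ct)       # softmax over the two sources
    w_ct = 1.0 - w_mr
    # Broadcast the (H, W) weights across the channel axis.
    return w_mr[None] * feat_mr + w_ct[None] * feat_ct
```

When both maps respond equally at a position, each contributes 50%; an all-zero source is suppressed but never fully discarded, which is one way such schemes preserve detail from both modalities.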
Affiliation(s)
- Wei Chen
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, 213164, China
- Department of Radiotherapy, The Affiliated Changzhou NO. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Qixuan Li
- Department of Radiotherapy, The Affiliated Changzhou NO. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, 213164, China
- Heng Zhang
- Department of Radiotherapy, The Affiliated Changzhou NO. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Kangkang Sun
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, 213164, China
- Department of Radiotherapy, The Affiliated Changzhou NO. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Wei Sun
- Department of Radiotherapy, The Affiliated Changzhou NO. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
- Zhuqing Jiao
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, 213164, China
- Xinye Ni
- Department of Radiotherapy, The Affiliated Changzhou NO. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China
7. Ozaki Y, Broughton P, Abdollahi H, Valafar H, Blenda AV. Integrating Omics Data and AI for Cancer Diagnosis and Prognosis. Cancers (Basel) 2024; 16:2448. PMID: 39001510. PMCID: PMC11240413. DOI: 10.3390/cancers16132448.
Abstract
Cancer is one of the leading causes of death, making timely diagnosis and prognosis very important. Artificial intelligence (AI) enables providers to organize and process patient data in ways that can lead to better overall outcomes. This review examines the varying uses of AI for cancer diagnosis and prognosis, as well as their clinical utility. The PubMed and EBSCO databases were searched for publications from 1 January 2020 to 22 December 2023. Articles were collected using key search terms such as "artificial intelligence" and "machine learning." The collection included studies applying AI to cancer diagnosis and prognosis using multi-omics data, radiomics, pathomics, and clinical and laboratory data. The resulting 89 studies were categorized into eight sections based on the type of data utilized and further subdivided into two subsections focusing on cancer diagnosis and prognosis, respectively. Eight studies integrated more than one form of omics, namely genomics, transcriptomics, epigenomics, and proteomics. Incorporating AI into cancer diagnosis and prognosis alongside omics and clinical data represents a significant advancement. Given the considerable potential of AI in this domain, ongoing prospective studies are essential to enhance algorithm interpretability and to ensure safe clinical integration.
Affiliation(s)
- Yousaku Ozaki
- Department of Biomedical Sciences, University of South Carolina School of Medicine Greenville, Greenville, SC 29605, USA
- Phil Broughton
- Department of Biomedical Sciences, University of South Carolina School of Medicine Greenville, Greenville, SC 29605, USA
- Hamed Abdollahi
- Department of Computer Science and Engineering, Molinaroli College of Engineering and Computing, Columbia, SC 29208, USA
- Homayoun Valafar
- Department of Computer Science and Engineering, Molinaroli College of Engineering and Computing, Columbia, SC 29208, USA
- Anna V. Blenda
- Department of Biomedical Sciences, University of South Carolina School of Medicine Greenville, Greenville, SC 29605, USA
- Prisma Health Cancer Institute, Prisma Health, Greenville, SC 29605, USA
8. Dong S, Fu A, Liu J. Prediction of metastases in confusing mediastinal lymph nodes based on fluorine-18 fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) imaging using machine learning. Quant Imaging Med Surg 2024; 14:4723-4734. PMID: 39022286. PMCID: PMC11250303. DOI: 10.21037/qims-24-100.
Abstract
Background For patient management and prognosis, accurate assessment of mediastinal lymph node (LN) status is essential. This study aimed to use machine-learning approaches to assess the status of confusing LNs in the mediastinum using positron emission tomography/computed tomography (PET/CT) images; the results were then compared with the diagnostic conclusions of nuclear medicine physicians. Methods A total of 509 confusing mediastinal LNs that had undergone pathological assessment or follow-up, from 320 patients at three centres, were retrospectively included in the study. LNs from centres I and II were randomised into a training cohort (N=324) and an internal validation cohort (N=81), while those from centre III formed an external validation cohort (N=104). Parameters measured from PET and CT images and extracted radiomic and deep-learning features were used to construct PET/CT-parameter, radiomics, and deep-learning models, respectively. Model performance was compared with the diagnostic results of nuclear medicine physicians using the area under the curve (AUC), sensitivity, specificity, and decision curve analysis (DCA). Results The coupled gradient boosting decision tree-logistic regression (GBDT-LR) model incorporating radiomic features showed AUCs of 92.2% [95% confidence interval (CI), 0.890-0.953], 84.6% (95% CI, 0.761-0.930) and 84.6% (95% CI, 0.770-0.922) across the three cohorts. It significantly outperformed the deep-learning model, the PET/CT-parameter model and the physicians' diagnoses. DCA demonstrated the clinical usefulness of the GBDT-LR model. Conclusions The presented GBDT-LR model performed well in evaluating confusing mediastinal LNs in both the internal and external validation sets. It not only exploits crossed radiomic features but also avoids overfitting.
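GBDT-LR coupling typically follows a well-known pattern: the boosted trees' leaf indices act as automatically crossed features that a logistic regression then weights. A generic scikit-learn sketch of that pattern (the class name `GBDTLR` and all hyperparameters are illustrative, not the authors' configuration):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

class GBDTLR:
    """Train a GBDT, one-hot encode the leaf index each sample lands in
    per tree (implicit feature crosses), and fit an LR on that encoding."""
    def __init__(self, n_trees=50):
        self.gbdt = GradientBoostingClassifier(n_estimators=n_trees)
        self.enc = OneHotEncoder(handle_unknown="ignore")
        self.lr = LogisticRegression(max_iter=1000)

    def fit(self, X, y):
        self.gbdt.fit(X, y)
        leaves = self.gbdt.apply(X)[:, :, 0]   # (n_samples, n_trees)
        self.lr.fit(self.enc.fit_transform(leaves), y)
        return self

    def predict_proba(self, X):
        leaves = self.gbdt.apply(X)[:, :, 0]
        return self.lr.predict_proba(self.enc.transform(leaves))
```

Because each leaf corresponds to a conjunction of feature thresholds, the LR effectively weights crossed radiomic features while its linearity and regularization help limit overfitting.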
Affiliation(s)
- Siqin Dong
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Medical School, Southeast University, Nanjing, China
- Ao Fu
- Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, Nanjing, China
- Jiacheng Liu
- Department of Nuclear Medicine, Jiangsu Key Laboratory of Molecular and Functional Imaging, Zhongda Hospital, Medical School, Southeast University, Nanjing, China
9. Albalawi E, Thakur A, Dorai DR, Bhatia Khan S, Mahesh TR, Almusharraf A, Aurangzeb K, Anwar MS. Enhancing brain tumor classification in MRI scans with a multi-layer customized convolutional neural network approach. Front Comput Neurosci 2024; 18:1418546. PMID: 38933391. PMCID: PMC11199693. DOI: 10.3389/fncom.2024.1418546.
Abstract
Background The need for prompt and accurate brain tumor diagnosis is unquestionable for optimizing treatment strategies and patient prognoses. Traditional reliance on Magnetic Resonance Imaging (MRI) analysis, contingent upon expert interpretation, grapples with challenges such as time-intensive processes and susceptibility to human error. Objective This research presents a novel Convolutional Neural Network (CNN) architecture designed to enhance the accuracy and efficiency of brain tumor detection in MRI scans. Methods The dataset comprises 7,023 brain MRI images from figshare, SARTAJ, and Br35H, categorized into glioma, meningioma, no-tumor, and pituitary classes. A single CNN-based multi-task classification model was employed for the various brain MRI tasks: tumor detection, classification by grade and type, and tumor location identification. Results The proposed CNN model incorporates advanced feature extraction capabilities and deep-learning optimization techniques. With a tumor classification accuracy of 99%, the method surpasses current methodologies, demonstrating the potential of deep learning in medical applications. Conclusion This study represents a significant advancement in the early detection and treatment planning of brain tumors, offering a more efficient and accurate alternative to traditional MRI analysis methods.
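The multi-task idea of one shared backbone feeding several task-specific heads can be reduced to a toy forward pass. Everything below, from the head names to the plain linear heads, is an illustrative assumption rather than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiTaskHead:
    """A shared feature vector (as a CNN backbone would produce) feeds
    one linear-softmax head per task, so all tasks share representation."""
    def __init__(self, feat_dim, task_classes):
        self.heads = {name: rng.normal(0.0, 0.1, (feat_dim, k))
                      for name, k in task_classes.items()}

    def __call__(self, features):
        # features: (batch, feat_dim) -> per-task class probabilities
        return {name: softmax(features @ W) for name, W in self.heads.items()}
```

Training would sum a loss per head; at inference each head answers its own question (tumor present, type/grade, location) from the same backbone features.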
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, King Faisal University, Al-Ahsa, Saudi Arabia
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- D. Ramya Dorai
- Department of Information Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- T. R. Mahesh
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Ahlam Almusharraf
- Department of Management, College of Business Administration, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
10. Mou X, Wang P, Sun J, Chen X, Du L, Zhan Q, Xia J, Yang T, Fang Z. A Novel Approach for the Detection and Severity Grading of Chronic Obstructive Pulmonary Disease Based on Transformed Volumetric Capnography. Bioengineering (Basel) 2024; 11:530. PMID: 38927766. PMCID: PMC11200784. DOI: 10.3390/bioengineering11060530.
Abstract
Chronic obstructive pulmonary disease (COPD), the third leading cause of death worldwide, is a major global health issue. The early detection and grading of COPD are pivotal for effective treatment. Traditional spirometry tests, requiring considerable physical effort and strict adherence to quality standards, pose challenges in COPD diagnosis. Volumetric capnography (VCap), which can be performed during natural breathing without requiring additional compliance, is a promising alternative. In this study, the dataset comprised 279 subjects with normal pulmonary function and 148 patients diagnosed with COPD. We introduce a novel quantitative analysis method for VCap: volumetric capnograms are converted into two-dimensional grayscale images through Gramian Angular Field (GAF) transformation, and a multi-scale convolutional neural network, CapnoNet, is then used to extract features and perform classification. To improve CapnoNet's performance, two data augmentation techniques were implemented. The proposed model exhibited a COPD detection accuracy of 95.83%, with precision, recall, and F1 measures of 95.21%, 95.70%, and 95.45%, respectively. In the task of grading COPD severity, the model attained an accuracy of 96.36%, with precision, recall, and F1 scores of 88.49%, 89.99%, and 89.15%, respectively. This work provides a new perspective on the quantitative analysis of volumetric capnography and demonstrates the strong performance of the proposed CapnoNet in the diagnosis and grading of COPD, offering an effective solution for the clinical application of capnography.
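The Gramian Angular Field transformation mentioned above has a standard definition: rescale the signal to [-1, 1], map each sample to an angle phi = arccos(x), and form pairwise trigonometric sums (GASF) or differences (GADF). A minimal sketch of that standard recipe, not the paper's exact pipeline (grayscale conversion to [0, 255] would follow):

```python
import numpy as np

def gramian_angular_field(series, kind="summation"):
    """Encode a 1-D signal (e.g. one expired breath of a volumetric
    capnogram) as a 2-D matrix via the Gramian Angular Field."""
    x = np.asarray(series, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    if kind == "summation":          # GASF: cos(phi_i + phi_j)
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])  # GADF
```

The resulting matrix preserves temporal dependencies along its diagonals, which is what lets a 2-D CNN such as the described CapnoNet operate on a 1-D waveform.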
Affiliation(s)
- Xiuying Mou: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Peng Wang: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Jie Sun: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Xianxiang Chen: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Lidong Du: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Qingyuan Zhan: Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, China–Japan Friendship Hospital, Beijing 100029, China
- Jingen Xia: Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, China–Japan Friendship Hospital, Beijing 100029, China
- Ting Yang: Department of Pulmonary and Critical Care Medicine, Center of Respiratory Medicine, China–Japan Friendship Hospital, Beijing 100029, China
- Zhen Fang: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China; Research Unit of Personalized Management of Chronic Respiratory Disease, Chinese Academy of Medical Sciences, Beijing 100190, China
11
Thakur GK, Thakur A, Kulkarni S, Khan N, Khan S. Deep Learning Approaches for Medical Image Analysis and Diagnosis. Cureus 2024; 16:e59507. [PMID: 38826977 PMCID: PMC11144045 DOI: 10.7759/cureus.59507] [Received: 03/29/2024] [Accepted: 05/01/2024] [Indexed: 06/04/2024]
Abstract
In addition to enhancing diagnostic accuracy, deep learning techniques offer the potential to streamline workflows, reduce interpretation time, and ultimately improve patient outcomes. The scalability and adaptability of deep learning algorithms enable their deployment across diverse clinical settings, ranging from radiology departments to point-of-care facilities. Furthermore, ongoing research efforts focus on addressing the challenges of data heterogeneity, model interpretability, and regulatory compliance, paving the way for seamless integration of deep learning solutions into routine clinical practice. As the field continues to evolve, collaborations between clinicians, data scientists, and industry stakeholders will be paramount in harnessing the full potential of deep learning for advancing medical image analysis and diagnosis. Furthermore, the integration of deep learning algorithms with other technologies, including natural language processing and computer vision, may foster multimodal medical data analysis and clinical decision support systems to improve patient care. The future of deep learning in medical image analysis and diagnosis is promising. With each success and advancement, the technology moves closer to routine clinical use. Beyond medical image analysis, patient care pathways like multimodal imaging, imaging genomics, and intelligent operating rooms or intensive care units can benefit from deep learning models.
Affiliation(s)
- Gopal Kumar Thakur: Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Abhishek Thakur: Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Shridhar Kulkarni: Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Naseebia Khan: Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Shahnawaz Khan: Department of Computer Application, Bundelkhand University, Jhansi, India
12
Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024; 10:81. [PMID: 38667979 PMCID: PMC11050909 DOI: 10.3390/jimaging10040081] [Received: 01/31/2024] [Revised: 03/08/2024] [Accepted: 03/11/2024] [Indexed: 04/28/2024]
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or a sequence of images to recognize content, has been used extensively across industries in recent years. However, in the healthcare industry, its applications are limited by factors like privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiency while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review the developments of CV in the hospital, outpatient, and community settings. The recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of workload in the hospital, and monitoring for patient events outside the hospital are highlighted. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for its expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA; Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Roshini Raghu: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Charles Busch: Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA; College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco: Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich: Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
13
Hong MP, Zhang R, Fan SJ, Liang YT, Cai HJ, Xu MS, Zhou B, Li LS. Interpretable CT radiomics model for invasiveness prediction in patients with ground-glass nodules. Clin Radiol 2024; 79:e8-e16. [PMID: 37833141 DOI: 10.1016/j.crad.2023.09.016] [Received: 08/07/2023] [Revised: 09/20/2023] [Accepted: 09/21/2023] [Indexed: 10/15/2023]
Abstract
AIM: To evaluate the performance of an interpretable computed tomography (CT) radiomic model in predicting the invasiveness of ground-glass nodules (GGNs).
MATERIALS AND METHODS: The study was conducted retrospectively from 1 August 2017 to 1 August 2022 at three different centres. Two hundred and thirty patients with GGNs were enrolled at centre I as a training cohort; centres II (n=157) and III (n=156) formed two external validation cohorts. Radiomics features extracted from CT were reduced by a coarse-fine feature screening strategy. A radiomic model was developed using the LASSO (least absolute shrinkage and selection operator) and XGBoost algorithms. Then, a radiological model was established through multivariate logistic regression analysis. Finally, the interpretability of the model was explored using SHapley Additive exPlanations (SHAP).
RESULTS: The radiomic XGBoost model outperformed the radiomic logistic model and the radiological model in assessing the invasiveness of GGNs. The area under the curve (AUC) values for the radiomic XGBoost model were 0.885 (95% confidence interval [CI] 0.836-0.923), 0.853 (95% CI 0.790-0.906), and 0.838 (95% CI 0.773-0.902) in the training and the two external validation cohorts, respectively. The SHAP method allowed for both a quantitative and visual representation of how decisions were made by the model for each individual patient, providing a deeper understanding of the model's decision-making mechanisms and the factors that contribute to its predictive performance.
CONCLUSIONS: The present interpretable CT radiomics model has the potential to preoperatively evaluate the invasiveness of GGNs. Furthermore, it can provide personalised, image-based clinical-decision support.
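The screening-then-boosting pipeline described above can be sketched as follows. The features here are random stand-ins for real radiomics data, scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the CV settings are illustrative assumptions; the SHAP step is omitted.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(230, 50))   # 230 nodules x 50 synthetic "radiomic" features
# Invasiveness label driven by two informative features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=230) > 0).astype(int)

# Fine screening step: keep features with non-zero LASSO coefficients.
lasso = LassoCV(cv=5).fit(X, y)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:               # degenerate case: fall back to all features
    keep = np.arange(X.shape[1])

# Boosted-tree classifier on the screened feature subset.
clf = GradientBoostingClassifier().fit(X[:, keep], y)
print(keep.size, clf.score(X[:, keep], y))
```

In practice a coarse screen (e.g. univariate filtering) would precede the LASSO step, and SHAP values would be computed on the fitted booster for per-patient explanations.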
Affiliation(s)
- M P Hong: Department of Radiology, Jiaxing TCM Hospital Affiliated to Zhejiang Chinese Medical University, Jiaxing, China
- R Zhang: Department of Radiology, Shunde Hospital, Southern Medical University (The First People's Hospital of Shunde), Foshan, China
- S J Fan: The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Y T Liang: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- H J Cai: The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- M S Xu: The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- B Zhou: Department of Radiology, Jiaxing TCM Hospital Affiliated to Zhejiang Chinese Medical University, Jiaxing, China
- L S Li: Department of Radiology, Jiaxing TCM Hospital Affiliated to Zhejiang Chinese Medical University, Jiaxing, China
14
Gogoshin G, Rodin AS. Graph Neural Networks in Cancer and Oncology Research: Emerging and Future Trends. Cancers (Basel) 2023; 15:5858. [PMID: 38136405 PMCID: PMC10742144 DOI: 10.3390/cancers15245858] [Received: 10/23/2023] [Revised: 12/09/2023] [Accepted: 12/14/2023] [Indexed: 12/24/2023]
Abstract
Next-generation cancer and oncology research needs to take full advantage of the multimodal structured, or graph, information, with the graph data types ranging from molecular structures to spatially resolved imaging and digital pathology, biological networks, and knowledge graphs. Graph Neural Networks (GNNs) efficiently combine the graph structure representations with the high predictive performance of deep learning, especially on large multimodal datasets. In this review article, we survey the landscape of recent (2020-present) GNN applications in the context of cancer and oncology research, and delineate six currently predominant research areas. We then identify the most promising directions for future research. We compare GNNs with graphical models and "non-structured" deep learning, and devise guidelines for cancer and oncology researchers or physician-scientists, asking the question of whether they should adopt the GNN methodology in their research pipelines.
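The core operation the review surveys, message passing over a graph, can be sketched as a single GCN-style layer in NumPy. The toy graph, feature sizes, and the symmetric normalization below are illustrative choices, not taken from any particular surveyed paper.

```python
import numpy as np

def gcn_layer(adj: np.ndarray, h: np.ndarray, w: np.ndarray) -> np.ndarray:
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # degree normalization
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)         # aggregate, project, ReLU

# Toy 4-node molecular-style graph with 3-dimensional node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [0, 1, 0, 0]], dtype=float)
rng = np.random.default_rng(1)
h0 = rng.normal(size=(4, 3))    # initial node features
w0 = rng.normal(size=(3, 8))    # learnable projection (random here)
h1 = gcn_layer(adj, h0, w0)     # updated 8-dim node embeddings
print(h1.shape)
```

Stacking such layers lets each node embedding absorb information from progressively larger graph neighborhoods, which is what gives GNNs their edge on molecular and network-structured cancer data.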
Affiliation(s)
- Grigoriy Gogoshin: Department of Computational and Quantitative Medicine, Beckman Research Institute, and Diabetes and Metabolism Research Institute, City of Hope National Medical Center, 1500 East Duarte Road, Duarte, CA 91010, USA
- Andrei S. Rodin: Department of Computational and Quantitative Medicine, Beckman Research Institute, and Diabetes and Metabolism Research Institute, City of Hope National Medical Center, 1500 East Duarte Road, Duarte, CA 91010, USA
15
Moretti R, Meffe G, Annunziata S, Capotosti A. Innovations in imaging modalities: a comparative review of MRI, long-axial field-of-view PET, and full-ring CZT-SPECT in detecting bone metastases. Q J Nucl Med Mol Imaging 2023; 67:259-270. [PMID: 37870526 DOI: 10.23736/s1824-4785.23.03537-9] [Indexed: 10/24/2023]
Abstract
The accurate diagnosis of bone metastasis, a condition in which cancer cells have spread to the bone, is essential for optimal patient care and outcome. This review provides a detailed overview of the current medical imaging techniques used to detect and diagnose this critical condition, focusing on three cardinal imaging modalities: positron emission tomography (PET), single photon emission computed tomography (SPECT), and magnetic resonance imaging (MRI). Each of these techniques has unique advantages: PET/CT combines functional imaging with anatomical imaging, allowing precise localization of metabolic abnormalities; SPECT/CT offers a wider range of radiopharmaceuticals for visualizing specific receptors and metabolic pathways; MRI stands out for its unparalleled ability to produce high-resolution images of bone marrow structures. However, as this paper shows, each modality has its own limitations. The comprehensive analysis does not stop at the technical aspects, but ventures into the wider implications of these techniques in a clinical setting. By understanding the synergies and shortcomings of these modalities, healthcare professionals can make better-informed diagnostic and therapeutic decisions. Furthermore, at a time when medical technology is evolving at a breakneck pace, this review casts a speculative eye towards future advances in the field of bone metastasis imaging, bridging the current state with future possibilities. Such insights are essential for both clinicians and researchers navigating the complex landscape of bone metastasis diagnosis.
Affiliation(s)
- Roberto Moretti: Department of Diagnostic Imaging, Radiation Oncology and Hematology, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Guenda Meffe: Department of Diagnostic Imaging, Radiation Oncology and Hematology, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Salvatore Annunziata: Department of Diagnostic Imaging, Radiation Oncology and Hematology, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Amedeo Capotosti: Department of Diagnostic Imaging, Radiation Oncology and Hematology, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
16
He Z, Liu J, Gou F, Wu J. An Innovative Solution Based on TSCA-ViT for Osteosarcoma Diagnosis in Resource-Limited Settings. Biomedicines 2023; 11:2740. [PMID: 37893113 PMCID: PMC10604772 DOI: 10.3390/biomedicines11102740] [Received: 08/21/2023] [Revised: 09/24/2023] [Accepted: 10/08/2023] [Indexed: 10/29/2023]
Abstract
Identifying and managing osteosarcoma pose significant challenges, especially in resource-constrained developing nations. Advanced diagnostic methods involve isolating the nucleus from cancer cells for comprehensive analysis. However, two main challenges persist: mitigating image noise during the capture and transmission of cellular sections, and providing an efficient, accurate, and cost-effective solution for cell nucleus segmentation. To tackle these issues, we introduce the Twin-Self and Cross-Attention Vision Transformer (TSCA-ViT). This pioneering AI-based system employs a directed filtering algorithm for noise reduction and features an innovative transformer architecture with a twin attention mechanism for effective segmentation. The model also incorporates cross-attention-enabled skip connections to augment spatial information. We evaluated our method on a dataset of 1000 osteosarcoma pathology slide images from the Second People's Hospital of Huaihua, achieving a remarkable average precision of 97.7%. This performance surpasses traditional methodologies. Furthermore, TSCA-ViT offers enhanced computational efficiency owing to its fewer parameters, which results in reduced time and equipment costs. These findings underscore the superior efficacy and efficiency of TSCA-ViT, offering a promising approach for addressing the ongoing challenges in osteosarcoma diagnosis and treatment, particularly in settings with limited resources.
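The cross-attention component named in the abstract can be illustrated with a plain scaled dot-product sketch: queries come from one token stream and keys/values from another, as in cross-attention-enabled skip connections. The token counts, dimensions, and single-head formulation are illustrative assumptions; this is not the TSCA-ViT twin-attention implementation.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, wq, wk, wv):
    """Scaled dot-product attention across two token streams."""
    q = q_tokens @ wq                   # queries from decoder-side tokens
    k = kv_tokens @ wk                  # keys from encoder-side skip tokens
    v = kv_tokens @ wv                  # values from encoder-side skip tokens
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(2)
dec = rng.normal(size=(16, 32))         # 16 decoder tokens, dim 32
enc = rng.normal(size=(64, 32))         # 64 encoder skip tokens, dim 32
wq, wk, wv = (rng.normal(size=(32, 32)) for _ in range(3))
out = cross_attention(dec, enc, wq, wk, wv)
print(out.shape)
```

The skip-connection use described in the paper would feed encoder features as the key/value stream so that decoder tokens recover spatial detail lost to downsampling.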
Affiliation(s)
- Zengxiao He: School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Liu: The Second People's Hospital of Huaihua, Huaihua 418000, China
- Fangfang Gou: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jia Wu: School of Computer Science and Engineering, Central South University, Changsha 410083, China; State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; Research Center for Artificial Intelligence, Monash University, Clayton, VIC 3800, Australia
17
García-Domínguez A, Galván-Tejada CE, Magallanes-Quintanar R, Cruz M, Gonzalez-Curiel I, Delgado-Contreras JR, Soto-Murillo MA, Celaya-Padilla JM, Galván-Tejada JI. Optimizing Clinical Diabetes Diagnosis through Generative Adversarial Networks: Evaluation and Validation. Diseases 2023; 11:134. [PMID: 37873778 PMCID: PMC10594466 DOI: 10.3390/diseases11040134] [Received: 09/06/2023] [Revised: 09/24/2023] [Accepted: 09/28/2023] [Indexed: 10/25/2023]
Abstract
The escalating prevalence of Type 2 Diabetes (T2D) represents a substantial burden on global healthcare systems, especially in regions such as Mexico. Existing diagnostic techniques, although effective, often require invasive procedures and labor-intensive efforts. The promise of artificial intelligence and data science for streamlining and enhancing T2D diagnosis is well-recognized; however, these advancements are frequently constrained by the limited availability of comprehensive patient datasets. To mitigate this challenge, the present study investigated the efficacy of Generative Adversarial Networks (GANs) for augmenting existing T2D patient data, with a focus on a Mexican cohort. The researchers utilized a dataset of 1019 Mexican nationals, divided into 499 non-diabetic controls and 520 diabetic cases. GANs were applied to create synthetic patient profiles, which were subsequently used to train a Random Forest (RF) classification model. The study's findings revealed a notable improvement in the model's diagnostic accuracy, validating the utility of GAN-based data augmentation in a clinical context. The results bear significant implications for enhancing the robustness and reliability of Machine Learning tools in T2D diagnosis and management, offering a pathway toward more timely and effective patient care.
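The augment-then-train pipeline described above can be sketched as follows. A per-class Gaussian sampler stands in for the trained GAN generator, and the features are random stand-ins for the clinical variables, so only the pipeline shape, not the generative model, matches the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n_feat = 8
# Synthetic cohort mirroring the paper's class sizes: 499 controls, 520 cases.
X_real = np.vstack([rng.normal(0.0, 1.0, size=(499, n_feat)),
                    rng.normal(1.0, 1.0, size=(520, n_feat))])
y_real = np.array([0] * 499 + [1] * 520)

def sample_synthetic(X, y, label, n):
    """Stand-in generator: sample from a class-conditional Gaussian fit.
    A trained GAN generator would replace this function."""
    Xc = X[y == label]
    return rng.normal(Xc.mean(axis=0), Xc.std(axis=0), size=(n, Xc.shape[1]))

# Augment each class with 500 synthetic patient profiles.
X_aug = np.vstack([X_real,
                   sample_synthetic(X_real, y_real, 0, 500),
                   sample_synthetic(X_real, y_real, 1, 500)])
y_aug = np.concatenate([y_real, np.zeros(500, int), np.ones(500, int)])

# Train the downstream Random Forest on the augmented dataset.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print(clf.score(X_real, y_real))
```

In the study itself, the gain from augmentation would be measured on a held-out split of real patients rather than on the training cohort as done in this toy evaluation.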
Affiliation(s)
- Antonio García-Domínguez: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico
- Carlos E. Galván-Tejada: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico
- Rafael Magallanes-Quintanar: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico
- Miguel Cruz: Medical Research Unit in Biochemistry, National Medical Center Siglo XXI, IMSS, Mexico City 06720, Mexico
- Irma Gonzalez-Curiel: Unidad Académica de Ciencias Químicas, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico
- J. Rubén Delgado-Contreras: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico
- Manuel A. Soto-Murillo: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico
- José M. Celaya-Padilla: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico
- Jorge I. Galván-Tejada: Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro, Zacatecas 98000, Mexico