1. Farahat IS, Sharafeldeen A, Ghazal M, Alghamdi NS, Mahmoud A, Connelly J, van Bogaert E, Zia H, Tahtouh T, Aladrousy W, Tolba AE, Elmougy S, El-Baz A. An AI-based novel system for predicting respiratory support in COVID-19 patients through CT imaging analysis. Sci Rep 2024;14:851. PMID: 38191606; PMCID: PMC10774502; DOI: 10.1038/s41598-023-51053-9.
Abstract
The proposed AI-based diagnostic system predicts the respiratory support required for COVID-19 patients by analyzing the correlation between COVID-19 lesions and the level of respiratory support provided to the patients. Computed tomography (CT) imaging is used to analyze the three levels of respiratory support received by the patient: Level 0 (minimum support), Level 1 (non-invasive support such as soft oxygen), and Level 2 (invasive support such as mechanical ventilation). The system begins by segmenting the COVID-19 lesions from the CT images and creating an appearance model for each lesion using a 2D, rotation-invariant, Markov-Gibbs random field (MGRF) model. Three MGRF-based models are created, one for each level of respiratory support, allowing the system to differentiate between different levels of severity in COVID-19 patients. For each patient, the system makes its decision using a neural network-based fusion system, which combines the Gibbs energy estimates from the three MGRF-based models. The proposed system was assessed on 307 COVID-19-infected patients, achieving an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a specificity of [Formula: see text], indicating a high level of prediction accuracy.
Affiliation(s)
- Ibrahim Shawky Farahat
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Mohammed Ghazal
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Ali Mahmoud
- Department of Bioengineering, University of Louisville, Louisville, USA
- James Connelly
- Department of Radiology, University of Louisville, Louisville, USA
- Eric van Bogaert
- Department of Radiology, University of Louisville, Louisville, USA
- Huma Zia
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Tania Tahtouh
- College of Health Sciences, Abu Dhabi University, Abu Dhabi, UAE
- Waleed Aladrousy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ahmed Elsaid Tolba
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, Kafr El Sheikh, Egypt
- Samir Elmougy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- Department of Bioengineering, University of Louisville, Louisville, USA
2. Saleh GA, Batouty NM, Gamal A, Elnakib A, Hamdy O, Sharafeldeen A, Mahmoud A, Ghazal M, Yousaf J, Alhalabi M, AbouEleneen A, Tolba AE, Elmougy S, Contractor S, El-Baz A. Impact of Imaging Biomarkers and AI on Breast Cancer Management: A Brief Review. Cancers (Basel) 2023;15:5216. PMID: 37958390; PMCID: PMC10650187; DOI: 10.3390/cancers15215216.
Abstract
Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.
Affiliation(s)
- Gehad A. Saleh
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Nihal M. Batouty
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Abdelrahman Gamal
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnakib
- Electrical and Computer Engineering Department, School of Engineering, Penn State Erie, The Behrend College, Erie, PA 16563, USA
- Omar Hamdy
- Surgical Oncology Department, Oncology Centre, Mansoura University, Mansoura 35516, Egypt
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Amal AbouEleneen
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elsaid Tolba
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, New Heliopolis, Cairo 11829, Egypt
- Samir Elmougy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
3. Hemanth SV, Alagarsamy S, Dhiliphan Rajkumar T. Convolutional neural network-based sea lion optimization algorithm for the detection and classification of diabetic retinopathy. Acta Diabetol 2023;60:1377-1389. PMID: 37368025; DOI: 10.1007/s00592-023-02122-y.
Abstract
AIMS Diabetic retinopathy (DR) is a complication of diabetes that damages the blood vessels of the retina's light-sensitive tissue. DR may initially cause mild symptoms or no symptoms, but prolonged DR results in permanent vision loss; hence, it is necessary to detect DR at an early stage. METHODS Manual diagnosis of DR from retinal fundus images is a time-consuming process and sometimes leads to misdiagnosis. Existing DR detection models suffer from several shortcomings, including poor detection accuracy, high loss or error values, high feature dimensionality, unsuitability for large datasets, high computational complexity, poor overall performance, and unbalanced or limited data. To tackle these shortcomings, this paper diagnoses DR through four critical phases. The retinal images are cropped during preprocessing to reduce unwanted noise and redundant data. The images are then segmented using a modified level set algorithm based on pixel characteristics. RESULTS An Aquila optimizer is then applied to the segmented images. Finally, for optimal classification of DR images, the study proposes a convolutional neural network-oriented sea lion optimization (CNN-SLO) algorithm, which classifies the retinal images into five classes (healthy, moderate, mild, proliferative, and severe). CONCLUSION The experimental investigation is performed on Kaggle datasets with respect to diverse evaluation measures to demonstrate the performance of the proposed system.
Affiliation(s)
- S V Hemanth
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputhur, Tamil Nadu, India
- Saravanan Alagarsamy
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputhur, Tamil Nadu, India
- T Dhiliphan Rajkumar
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputhur, Tamil Nadu, India
4. Jin T, Pan S, Li X, Chen S. Metadata and Image Features Co-Aware Personalized Federated Learning for Smart Healthcare. IEEE J Biomed Health Inform 2023;27:4110-4119. PMID: 37220032; DOI: 10.1109/jbhi.2023.3279096.
Abstract
Recently, artificial intelligence has been widely used in intelligent disease diagnosis and has achieved great success. However, most works rely mainly on the extraction of image features and ignore patients' clinical text information, which may fundamentally limit diagnostic accuracy. In this paper, we propose a metadata and image features co-aware personalized federated learning scheme for smart healthcare. Specifically, we construct an intelligent diagnosis model through which users can obtain fast and accurate diagnosis services. Meanwhile, a personalized federated learning scheme is designed to utilize the knowledge learned from other edge nodes with larger contributions and to customize a high-quality personalized classification model for each edge node. Subsequently, a Naïve Bayes classifier is devised for classifying patient metadata, and the image and metadata diagnosis results are then aggregated with different weights to improve the accuracy of intelligent diagnosis. Finally, simulation results illustrate that, compared with existing methods, the proposed algorithm achieves better classification accuracy, reaching about 97.16% on the PAD-UFES-20 dataset.
5. Mohanty C, Mahapatra S, Acharya B, Kokkoras F, Gerogiannis VC, Karamitsos I, Kanavos A. Using Deep Learning Architectures for Detection and Classification of Diabetic Retinopathy. Sensors (Basel) 2023;23:5726. PMID: 37420891; DOI: 10.3390/s23125726.
Abstract
Diabetic retinopathy (DR) is a common complication of long-term diabetes, affecting the human eye and potentially leading to permanent blindness. The early detection of DR is crucial for effective treatment, as symptoms often manifest only in later stages. The manual grading of retinal images is time-consuming, prone to errors, and lacks patient-friendliness. In this study, we propose two deep learning (DL) architectures for DR detection and classification: a hybrid network combining VGG16 and an XGBoost classifier, and the DenseNet 121 network. To evaluate the two DL models, we preprocessed a collection of retinal images obtained from the APTOS 2019 Blindness Detection Kaggle Dataset. This dataset exhibits an imbalanced image class distribution, which we addressed through appropriate balancing techniques. The performance of the considered models was assessed in terms of accuracy. The results showed that the hybrid network achieved an accuracy of 79.50%, while the DenseNet 121 model achieved an accuracy of 97.30%. Furthermore, a comparative analysis with existing methods utilizing the same dataset revealed the superior performance of the DenseNet 121 network. The findings of this study demonstrate the potential of DL architectures for the early detection and classification of DR, and the superior performance of the DenseNet 121 model highlights its effectiveness in this domain. The implementation of such automated methods can significantly improve the efficiency and accuracy of DR diagnosis, benefiting both healthcare providers and patients.
Affiliation(s)
- Cheena Mohanty
- Department of Electronics and Telecommunication, Biju Patnaik University of Technology, Rourkela 769012, Odisha, India
- Sakuntala Mahapatra
- Department of Electronics and Telecommunication Engineering, Trident Academy of Technology, Bhubaneswar 751016, Odisha, India
- Biswaranjan Acharya
- Department of Computer Engineering-AI, Marwadi University, Rajkot 360003, Gujarat, India
- Fotis Kokkoras
- Department of Digital Systems, University of Thessaly, 41500 Larissa, Greece
- Ioannis Karamitsos
- Department of Graduate and Research, Rochester Institute of Technology, Dubai 341055, United Arab Emirates
- Andreas Kanavos
- Department of Informatics, Ionian University, 49100 Corfu, Greece
6. Matten P, Scherer J, Schlegl T, Nienhaus J, Stino H, Niederleithner M, Schmidt-Erfurth UM, Leitgeb RA, Drexler W, Pollreisz A, Schmoll T. Multiple instance learning based classification of diabetic retinopathy in weakly-labeled widefield OCTA en face images. Sci Rep 2023;13:8713. PMID: 37248309; DOI: 10.1038/s41598-023-35713-4.
Abstract
Diabetic retinopathy (DR), a pathologic change of the human retinal vasculature, is the leading cause of blindness in working-age adults with diabetes mellitus. Optical coherence tomography angiography (OCTA), a functional extension of optical coherence tomography, has shown potential as a tool for early diagnosis of DR through its ability to visualize the retinal vasculature in all spatial dimensions. Previously introduced deep learning-based classifiers were able to support the detection of DR in OCTA images but require expert labeling at the pixel level, a labor-intensive and expensive process. We present a multiple instance learning-based network, MIL-ResNet14, that is capable of detecting biomarkers in an OCTA dataset with high accuracy, without the need for annotations other than whether a scan is from a diabetic patient or not. The dataset used for this study was acquired with a diagnostic ultra-widefield swept-source OCT device with an A-scan rate in the MHz range. We were able to show that our proposed method outperforms the previous state-of-the-art networks for this classification task, ResNet14 and VGG16. In addition, our network pays special attention to clinically relevant biomarkers and is robust against adversarial attacks. Therefore, we believe that it could serve as a powerful diagnostic decision support tool for clinical ophthalmic screening.
Affiliation(s)
- Philipp Matten
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Julius Scherer
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Thomas Schlegl
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Jonas Nienhaus
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Heiko Stino
- Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
- Michael Niederleithner
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Ursula M Schmidt-Erfurth
- Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
- Rainer A Leitgeb
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Wolfgang Drexler
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Andreas Pollreisz
- Department of Ophthalmology and Optometry, Medical University of Vienna, Waehringer Guertel 18-20, 1090, Vienna, Austria
- Tilman Schmoll
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20 (4L), 1090, Vienna, Austria
- Carl Zeiss Meditec Inc, 5300 Central Pkwy, Dublin, CA 94568, USA
7. Deep Learning and Medical Image Processing Techniques for Diabetic Retinopathy: A Survey of Applications, Challenges, and Future Trends. J Healthc Eng 2023;2023:2728719. PMID: 36776951; PMCID: PMC9911247; DOI: 10.1155/2023/2728719.
Abstract
Diabetic retinopathy (DR) is a common retinal eye disease that is widespread all over the world. Depending on its severity, it can lead to complete loss of vision, damaging both the retinal blood vessels and the eye's microscopic interior layers. To avoid such outcomes, early detection of DR, in association with routine screening methods, is essential to discover mild cases early; however, these diagnostic procedures are extremely difficult and expensive. The unique contributions of the study include the following: first, a detailed background of DR and the traditional detection techniques is provided. Second, the various imaging techniques and deep learning applications in DR are presented. Third, the different use cases and real-life scenarios relevant to DR detection in which deep learning techniques have been implemented are explored. The study finally highlights potential research opportunities for researchers to explore and deliver effective results in diabetic retinopathy detection.
8. Simović A, Lutovac-Banduka M, Lekić S, Kuleto V. Smart Visualization of Medical Images as a Tool in the Function of Education in Neuroradiology. Diagnostics (Basel) 2022;12:3208. PMID: 36553215; PMCID: PMC9777748; DOI: 10.3390/diagnostics12123208.
Abstract
The smart visualization of medical images (SVMI) model is based on multi-detector computed tomography (MDCT) data sets and can provide a clearer view of changes in the brain, such as tumors (expansive changes), bleeding, and ischemia on native imaging (i.e., a non-contrast MDCT scan). The new SVMI method provides a more precise representation of the brain image by hiding pixels that carry no information and rescaling and coloring the range of pixels essential for detecting and visualizing the disease. In addition, SVMI can be used to avoid exposing patients to additional ionizing radiation and to contrast media administration, which can lead to allergic reactions. Results of the SVMI model were compared with the final diagnosis of the disease after additional diagnostics and confirmation by neuroradiologists, who are highly trained physicians with many years of experience. The application of the presented SVMI model can optimize the engagement of material, medical, and human resources and has the potential for general application in medical training, education, and clinical research.
Affiliation(s)
- Aleksandar Simović
- Department of Information Technology, Information Technology School ITS, 11000 Belgrade, Serbia
- Maja Lutovac-Banduka
- Department of RT-RK Institute, RT-RK for Computer Based Systems, 21000 Novi Sad, Serbia
- Snežana Lekić
- Department of Emergency Neuroradiology, University Clinical Centre of Serbia UKCS, 11000 Belgrade, Serbia
- Valentin Kuleto
- Department of Information Technology, Information Technology School ITS, 11000 Belgrade, Serbia
9. The Role of Medical Image Modalities and AI in the Early Detection, Diagnosis and Grading of Retinal Diseases: A Survey. Bioengineering (Basel) 2022;9:366. PMID: 36004891; PMCID: PMC9405367; DOI: 10.3390/bioengineering9080366.
Abstract
Traditional dilated ophthalmoscopy can reveal diseases, such as age-related macular degeneration (AMD), diabetic retinopathy (DR), diabetic macular edema (DME), retinal tear, epiretinal membrane, macular hole, retinal detachment, retinitis pigmentosa, retinal vein occlusion (RVO), and retinal artery occlusion (RAO). Among these diseases, AMD and DR are the major causes of progressive vision loss, while the latter is recognized as a world-wide epidemic. Advances in retinal imaging have improved the diagnosis and management of DR and AMD. In this review article, we focus on the variable imaging modalities for accurate diagnosis, early detection, and staging of both AMD and DR. In addition, the role of artificial intelligence (AI) in providing automated detection, diagnosis, and staging of these diseases will be surveyed. Furthermore, current works are summarized and discussed. Finally, projected future trends are outlined. The work done on this survey indicates the effective role of AI in the early detection, diagnosis, and staging of DR and/or AMD. In the future, more AI solutions will be presented that hold promise for clinical applications.
10. Segmentation of Infant Brain Using Nonnegative Matrix Factorization. Appl Sci (Basel) 2022;12:5377. DOI: 10.3390/app12115377.
Abstract
This study develops an atlas-based automated framework for segmenting infants' brains from magnetic resonance imaging (MRI). For the accurate segmentation of different structures of an infant's brain at the isointense age (6–12 months), our framework integrates features of diffusion tensor imaging (DTI) (e.g., the fractional anisotropy (FA)). A brain diffusion tensor (DT) image and its region map are considered samples of a Markov–Gibbs random field (MGRF) that jointly models visual appearance, shape, and spatial homogeneity of a goal structure. The visual appearance is modeled with an empirical distribution of the probability of the DTI features, fused by their nonnegative matrix factorization (NMF) and allocation to data clusters. Projecting an initial high-dimensional feature space onto a low-dimensional space of the significant fused features with the NMF allows for better separation of the goal structure and its background. The cluster centers in the latter space are determined at the training stage by the K-means clustering. In order to adapt to large infant brain inhomogeneities and segment the brain images more accurately, appearance descriptors of both the first-order and second-order are taken into account in the fused NMF feature space. Additionally, a second-order MGRF model is used to describe the appearance based on the voxel intensities and their pairwise spatial dependencies. An adaptive shape prior that is spatially variant is constructed from a training set of co-aligned images, forming an atlas database. Moreover, the spatial homogeneity of the shape is described with a spatially uniform 3D MGRF of the second-order for region labels. In vivo experiments on nine infant datasets showed promising results in terms of the accuracy, which was computed using three metrics: the 95-percentile modified Hausdorff distance (MHD), the Dice similarity coefficient (DSC), and the absolute volume difference (AVD). Both the quantitative and visual assessments confirm that integrating the proposed NMF-fused DTI feature and intensity MGRF models of visual appearance, the adaptive shape prior, and the shape homogeneity MGRF model is promising in segmenting the infant brain DTI.
11. Elsharkawy M, Elrazzaz M, Sharafeldeen A, Alhalabi M, Khalifa F, Soliman A, Elnakib A, Mahmoud A, Ghazal M, El-Daydamony E, Atwan A, Sandhu HS, El-Baz A. The Role of Different Retinal Imaging Modalities in Predicting Progression of Diabetic Retinopathy: A Survey. Sensors (Basel) 2022;22:3490. PMID: 35591182; PMCID: PMC9101725; DOI: 10.3390/s22093490.
Abstract
Diabetic retinopathy (DR) is a devastating condition caused by progressive changes in the retinal microvasculature. It is a leading cause of retinal blindness in people with diabetes. Long periods of uncontrolled blood sugar levels result in endothelial damage, leading to macular edema, altered retinal permeability, retinal ischemia, and neovascularization. In order to facilitate rapid screening, diagnosing, and grading of DR, different retinal modalities are utilized. Typically, a computer-aided diagnostic (CAD) system uses retinal images to aid ophthalmologists in the diagnosis process. These CAD systems use a combination of machine learning (ML) models (e.g., deep learning (DL) approaches) to speed up the diagnosis and grading of DR. Accordingly, this survey provides a comprehensive overview of the different imaging modalities used with ML/DL approaches in the DR diagnosis process. The four imaging modalities we focus on are fluorescein angiography, fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We also discuss the limitations of the literature that utilizes such modalities for DR diagnosis, identify research gaps, and suggest solutions for researchers to pursue. Lastly, we provide a thorough discussion of the challenges and future directions of the current state-of-the-art DL/ML approaches, and we elaborate on how integrating different imaging modalities with clinical information and demographic data will lead to promising results when diagnosing and grading DR. As a result of this article's comparative analysis and discussion, it remains necessary to use DL methods over existing ML models to detect DR in multiple modalities.
Affiliation(s)
- Mohamed Elsharkawy
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mostafa Elrazzaz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Marah Alhalabi
- Electrical, Computer and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Fahmi Khalifa
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Soliman
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Elnakib
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Eman El-Daydamony
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Atwan
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Harpal Singh Sandhu
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
12. Toğaçar M, Ergen B, Tümen V. Use of dominant activations obtained by processing OCT images with the CNNs and slime mold method in retinal disease detection. Biocybern Biomed Eng 2022. DOI: 10.1016/j.bbe.2022.05.005.
13
Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022; 14:1840. [PMID: 35406614 PMCID: PMC8997734 DOI: 10.3390/cancers14071840] [Received: 03/03/2022] [Accepted: 03/30/2022] [Indexed: 02/04/2023]
Abstract
Simple Summary
Pulmonary nodules are considered a sign of bronchogenic carcinoma; detecting them early can slow progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the applications of artificial intelligence (AI) in lung segmentation and in pulmonary nodule segmentation and classification using computed tomography (CT) scans, published over the last two decades, along with the limitations and future prospects of AI in this field.
Abstract
Pulmonary nodules are the precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are subject to observer variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable and easily used tools. In this review, we briefly discuss the current applications of AI in lung segmentation, pulmonary nodule detection, and classification.
Affiliation(s)
- Dalia Fahmy
- Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi
- Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Correspondence:
14
The Role of 3D CT Imaging in the Accurate Diagnosis of Lung Function in Coronavirus Patients. Diagnostics (Basel) 2022; 12:696. [PMID: 35328249 PMCID: PMC8947065 DOI: 10.3390/diagnostics12030696] [Received: 01/08/2022] [Accepted: 03/08/2022] [Indexed: 12/04/2022]
Abstract
Early grading of coronavirus disease 2019 (COVID-19), together with timely ventilator support, is a prime way to help the world fight this virus and reduce the mortality rate. To reduce the burden on physicians, we developed an automatic Computer-Aided Diagnostic (CAD) system to grade COVID-19 from Computed Tomography (CT) images. This system segments the lung region from chest CT scans using an unsupervised approach based on an appearance model, followed by 3D rotation-invariant Markov–Gibbs Random Field (MGRF)-based morphological constraints. It then analyzes the segmented lung and generates precise, analytical imaging markers by estimating the MGRF-based analytical potentials. Three Gibbs energy markers were extracted from each CT scan by tuning the MGRF parameters on each lesion class separately, namely healthy/mild, moderate, and severe lesions. To represent these markers more reliably, a Cumulative Distribution Function (CDF) was generated, and statistical markers were extracted from it, namely the 10th through 90th CDF percentiles in 10% increments. The three extracted markers were then combined and fed into a backpropagation neural network to make the diagnosis. The developed system was assessed on 76 COVID-19-infected patients using two metrics, accuracy and kappa. The proposed system was trained and tested in three ways. In the first approach, the MGRF model was trained and tested on the lungs, achieving 95.83% accuracy and 93.39% kappa. In the second, the MGRF model was trained on the lesions and tested on the lungs, achieving 91.67% accuracy and 86.67% kappa. Finally, the MGRF model was trained and tested on the lesions, achieving 100% accuracy and 100% kappa. These results show the ability of the developed system to accurately grade COVID-19 lesions compared with other machine learning classifiers, such as k-Nearest Neighbors (KNN), decision trees, naïve Bayes, and random forests.
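Two computational steps in the abstract above lend themselves to a minimal sketch: the CDF-based statistical markers (the 10th through 90th percentiles, in 10% increments, of the per-lesion Gibbs energy estimates, concatenated across the three lesion-class MGRF models) and the kappa agreement score used alongside accuracy. All function names and the synthetic energy values below are illustrative assumptions, not the authors' released code; kappa is computed here as Cohen's chance-corrected agreement.

```python
import numpy as np

def cdf_percentile_markers(gibbs_energies):
    """Statistical markers from the empirical CDF of per-lesion Gibbs
    energy estimates: the 10th-90th percentiles in 10% steps (9 values)."""
    qs = np.arange(10, 100, 10)  # 10, 20, ..., 90
    return np.percentile(np.asarray(gibbs_energies, dtype=float), qs)

def fused_feature_vector(healthy_e, moderate_e, severe_e):
    """Concatenate the markers from the three lesion-class MGRF models
    into one 27-D vector for the downstream neural network."""
    return np.concatenate([cdf_percentile_markers(e)
                           for e in (healthy_e, moderate_e, severe_e)])

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    p_obs = np.mean(y_true == y_pred)
    p_exp = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

# Illustrative synthetic energies for a single scan
rng = np.random.default_rng(0)
features = fused_feature_vector(rng.normal(0, 1, 500),
                                rng.normal(1, 1, 500),
                                rng.normal(2, 1, 500))
print(features.shape)  # (27,)
```

The percentile sampling turns a variable-length set of lesion energies into a fixed-length feature vector, which is what allows scans with different lesion counts to share one classifier input layer.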