1. Islam MM, Rifat HR, Shahid MSB, Akhter A, Uddin MA. Utilizing Deep Feature Fusion for Automatic Leukemia Classification: An Internet of Medical Things-Enabled Deep Learning Framework. Sensors (Basel) 2024; 24:4420. [PMID: 39001200] [PMCID: PMC11244606] [DOI: 10.3390/s24134420]
Abstract
Acute lymphoblastic leukemia (ALL) is a cancer that can affect both the blood and the bone marrow. Diagnosis is difficult because it often requires specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are time-consuming and expensive. An early diagnosis of ALL is essential so that therapy can begin in a timely and suitable manner. Recent medical diagnostics have advanced substantially through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. This study introduces a new AI-based Internet of Medical Things (IoMT) framework that automatically identifies leukemia from peripheral blood smear (PBS) images, built around a novel deep learning-based fusion model for detecting ALL. The system delivers diagnostic reports to a centralized database, inclusive of patient-specific devices. After blood samples are collected at the hospital, the PBS images are transmitted to a cloud server through a WiFi-enabled microscopic device, where the fusion model classifies ALL from the PBS images. The model is trained on a dataset of 6512 original and segmented images from 89 individuals and uses two input channels for feature extraction, one for the original images and one for the segmented images: VGG16 extracts features from the original images, while DenseNet-121 extracts features from the segmented images. The two feature sets are merged, and dense layers perform the leukemia classification. The proposed fusion model attains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, placing it in an excellent position for leukemia classification. It outperformed several state-of-the-art convolutional neural network (CNN) models and consequently has the potential to save lives and effort. To simulate the entire methodology more comprehensively, a web application (beta version) was developed in this study to determine the presence or absence of leukemia in individuals. These findings hold significant potential for biomedical research, particularly for enhancing the accuracy of computer-aided leukemia detection.
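The two-branch fusion described above (one backbone per input channel, feature vectors concatenated and passed to a dense classification head) can be sketched in miniature. This is an illustration of the fusion pattern, not the authors' implementation: the feature extractors are stand-in functions for the pretrained VGG16/DenseNet-121 backbones, and the feature sizes and layer widths are assumptions.

```python
import math
import random

random.seed(0)

def branch_features(image, dim=8):
    # Stand-in for a pretrained CNN backbone (VGG16 / DenseNet-121):
    # maps an image (here, a flat list of pixels) to a fixed-size feature vector.
    return [sum(image[i::dim]) / len(image) for i in range(dim)]

def dense(x, weights, bias):
    # One fully connected layer (no activation; logits go to softmax below).
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def fusion_forward(original_img, segmented_img):
    # Two input channels -> two feature vectors -> concatenation ("fusion").
    f1 = branch_features(original_img)   # VGG16 stand-in, original image
    f2 = branch_features(segmented_img)  # DenseNet-121 stand-in, segmented image
    fused = f1 + f2                      # feature-level fusion by concatenation
    # Dense classification head (random weights here; trained in practice).
    w = [[random.uniform(-1, 1) for _ in fused] for _ in range(2)]
    b = [0.0, 0.0]
    logits = dense(fused, w, b)
    # Softmax over the two classes (e.g., ALL vs. healthy).
    exps = [math.exp(z - max(logits)) for z in logits]
    return [e / sum(exps) for e in exps]

probs = fusion_forward([0.1] * 64, [0.9] * 64)
```

In the actual framework the backbones would be frozen pretrained networks and the head would be trained end-to-end; only the concatenation step is the "fusion" the abstract refers to.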
Affiliation(s)
- Md Manowarul Islam: Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Habibur Rahman Rifat: Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Md Shamim Bin Shahid: Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Arnisha Akhter: Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
- Md Ashraf Uddin: School of Info Technology, Deakin University, Burwood, VIC 3125, Australia
2. Patel H, Shah H, Patel G, Patel A. Hematologic cancer diagnosis and classification using machine and deep learning: State-of-the-art techniques and emerging research directives. Artif Intell Med 2024; 152:102883. [PMID: 38657439] [DOI: 10.1016/j.artmed.2024.102883]
Abstract
Hematology is the study of the diagnosis and treatment of blood diseases, including cancer, which is considered one of the deadliest diseases across all age categories. Diagnosing such a deadly disease at an early stage is essential to curing it. Hematologists and pathologists rely on microscopic evaluation of blood or bone marrow smear images to diagnose blood-related ailments. The abundance of overlapping cells, cells of varying densities among platelets, uneven illumination, and the varying counts of red and white blood cells make diagnosis from blood cell images difficult, and the traditional approach is time-consuming and demands considerable effort from pathologists. Machine learning and deep learning techniques now make it possible to automate the diagnostic process, categorize microscopic blood cells, and improve both the accuracy and the speed of the procedure, since models developed with these methods can drive an assisting tool. In this article, we acquired, analyzed, scrutinized, and finally selected around 57 research papers on machine learning and deep learning methodologies employed in leukemia diagnosis and classification over the past 20 years, published between 2003 and 2023 and indexed in PubMed, IEEE, ScienceDirect, Google Scholar, and other pertinent sources. Our primary emphasis is on evaluating the advantages and limitations of comparable research endeavors in order to provide a concise and valuable research directive of significant utility to fellow researchers in the field.
Affiliation(s)
- Hema Patel: Smt. Chandaben Mohanbhai Patel Institute of Computer Applications, Charotar University of Science and Technology, CHARUSAT Campus, Changa, 388421 Anand, Gujarat, India
- Himal Shah: QURE Haematology Centre, Ahmedabad 380006, Gujarat, India
- Gayatri Patel: Ramanbhai Patel College of Pharmacy, Charotar University of Science and Technology, CHARUSAT Campus, Changa, 388421 Anand, Gujarat, India
- Atul Patel: Smt. Chandaben Mohanbhai Patel Institute of Computer Applications, Charotar University of Science and Technology, CHARUSAT Campus, Changa, 388421 Anand, Gujarat, India
3. Awais M, Ahmad R, Kausar N, Alzahrani AI, Alalwan N, Masood A. ALL classification using neural ensemble and memetic deep feature optimization. Front Artif Intell 2024; 7:1351942. [PMID: 38655268] [PMCID: PMC11035867] [DOI: 10.3389/frai.2024.1351942]
Abstract
Acute lymphoblastic leukemia (ALL) is a fatal blood disorder characterized by the excessive proliferation of immature white blood cells originating in the bone marrow. Effective prognosis and treatment of ALL call for its accurate and timely detection. Deep convolutional neural networks (CNNs) have shown promising results in digital pathology, but they struggle to classify different subtypes of leukemia because of their subtle morphological differences. This study proposes an improved pipeline for binary detection and subtype classification of ALL from blood smear images. First, a customized, 88-layer deep CNN is proposed and trained using transfer learning alongside the GoogleNet CNN to create an ensemble of features. The study then models feature selection as a combinatorial optimization problem and proposes a memetic version of the binary whale optimization algorithm, incorporating a Differential Evolution-based local search to enhance the exploration and exploitation of the feature search space. The proposed approach is validated on publicly available standard datasets containing peripheral blood smear images of various classes of ALL. An overall best average accuracy of 99.15% is achieved for binary classification of ALL with an 85% reduction in the feature vector, together with 99% precision and 98.8% sensitivity. For B-ALL subtype classification, the best accuracy of 98.69% is attained with 98.7% precision and 99.57% specificity. The proposed methodology shows better performance metrics than several existing studies.
Affiliation(s)
- Muhammad Awais: Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah, Pakistan; Department of Computer Engineering, TED University, Ankara, Türkiye
- Riaz Ahmad: Department of Computer Science, Iqra University Islamabad, Islamabad, Pakistan; Department of Computer Science, COMSATS University Islamabad, Wah, Pakistan
- Nabeela Kausar: Department of Computer Science, Iqra University Islamabad, Islamabad, Pakistan
- Ahmed Ibrahim Alzahrani: Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia
- Nasser Alalwan: Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia
- Anum Masood: Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway; Department of Radiology, Boston Children's Hospital, Boston, MA, United States
4. Awais M, Abdal MN, Akram T, Alasiry A, Marzougui M, Masood A. An efficient decision support system for leukemia identification utilizing nature-inspired deep feature optimization. Front Oncol 2024; 14:1328200. [PMID: 38505591] [PMCID: PMC10949894] [DOI: 10.3389/fonc.2024.1328200]
Abstract
In the field of medicine, decision support systems play a crucial role by harnessing cutting-edge technology and data analysis to assist doctors in disease diagnosis and treatment. Leukemia is a malignancy that emerges from the uncontrolled growth of immature white blood cells within the human body, and an accurate and prompt diagnosis is desired because of its swift progression to distant parts of the body. Acute lymphoblastic leukemia (ALL) is an aggressive type of leukemia that affects both children and adults. Computer vision-based identification of leukemia is challenging due to structural irregularities and morphological similarities among blood entities. Deep neural networks have shown promise in extracting valuable information from image datasets, but they carry high computational costs due to their extensive feature sets. This work presents an efficient pipeline for binary and subtype classification of acute lymphoblastic leukemia. The proposed method first introduces a novel neighborhood pixel transformation method based on differential evolution to improve the clarity and discriminability of blood cell images. Next, a hybrid feature extraction approach leverages transfer learning from two selected deep neural network models, InceptionV3 and DenseNet201, to extract comprehensive feature sets. To optimize feature selection, a customized binary Grey Wolf Algorithm is employed, achieving an impressive 80% reduction in feature size while preserving key discriminative information. These optimized features then feed multiple classifiers, potentially capturing diverse perspectives and amplifying classification accuracy. The proposed pipeline is validated on publicly available standard datasets of ALL images. For binary classification, the best average accuracy of 98.1% is achieved with 98.1% sensitivity and 98% precision. For ALL subtype classification, the best accuracy of 98.14% is attained with 78.5% sensitivity and 98% precision. The proposed feature selection method shows better convergence behavior than classical population-based meta-heuristics, and the suggested solution demonstrates comparable or better performance than several existing techniques.
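The claim worth internalizing here is that an 80% smaller feature subset can match full-feature accuracy when the discarded dimensions are mostly noise. A toy experiment illustrates this with a nearest-centroid classifier on synthetic features; the data, classifier, and subset choice are all hypothetical stand-ins (the paper itself selects among deep InceptionV3/DenseNet201 features with a binary Grey Wolf optimizer).

```python
import random

random.seed(2)

def make_sample(label, n_feat=50, n_informative=10):
    # Only the first n_informative features carry class signal; the rest are noise.
    return [label + random.gauss(0, 0.5) if i < n_informative else random.gauss(0, 1)
            for i in range(n_feat)]

def nearest_centroid(train, labels, x, feats):
    # Classify by squared distance to each class mean, restricted to `feats`.
    cents = {}
    for c in set(labels):
        rows = [t for t, l in zip(train, labels) if l == c]
        cents[c] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    return min(cents, key=lambda c: sum((x[i] - m) ** 2
                                        for i, m in zip(feats, cents[c])))

train = [make_sample(c) for c in (0, 1) for _ in range(50)]
labels = [c for c in (0, 1) for _ in range(50)]
test = [make_sample(c) for c in (0, 1) for _ in range(25)]
true = [c for c in (0, 1) for _ in range(25)]

full = list(range(50))     # all features
reduced = list(range(10))  # 80% reduction, keeping the informative ones

acc = {}
for name, feats in [("full", full), ("reduced", reduced)]:
    preds = [nearest_centroid(train, labels, x, feats) for x in test]
    acc[name] = sum(p == t for p, t in zip(preds, true)) / len(true)
```

With the informative features retained, the reduced subset loses essentially nothing; the optimizer's job in the real pipeline is to find such a subset without knowing it in advance.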
Affiliation(s)
- Muhammad Awais: Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah, Pakistan; Department of Computer Engineering, TED University, Ankara, Türkiye
- Md. Nazmul Abdal: Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka, Bangladesh
- Tallha Akram: Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah, Pakistan
- Areej Alasiry: College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Mehrez Marzougui: College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Anum Masood: Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
5. Holguin-Garcia SA, Guevara-Navarro E, Daza-Chica AE, Patiño-Claro MA, Arteaga-Arteaga HB, Ruz GA, Tabares-Soto R, Bravo-Ortiz MA. A comparative study of CNN-capsule-net, CNN-transformer encoder, and traditional machine learning algorithms to classify epileptic seizure. BMC Med Inform Decis Mak 2024; 24:60. [PMID: 38429718] [PMCID: PMC10908140] [DOI: 10.1186/s12911-024-02460-z]
Abstract
INTRODUCTION: Epilepsy is a disease characterized by excessive neuronal discharges, generally provoked without any external stimulus, that manifest as convulsions. About 2 million people are diagnosed worldwide each year. Diagnosis is carried out by a neurologist using an electroencephalogram (EEG), a lengthy process.
METHOD: To make these processes more efficient, we turned to innovative artificial intelligence methods for classifying EEG signals. Comparing traditional machine learning and deep learning models with cutting-edge architectures, in this case Capsule-Net and a Transformer Encoder, plays a crucial role in finding the most accurate model and helping the doctor reach a faster diagnosis.
RESULT: We compared models for binary and multiclass classification on an epileptic seizure detection database, achieving a binary accuracy of 99.92% with the Capsule-Net model and a multiclass accuracy of 87.30% with the Transformer Encoder model.
CONCLUSION: Artificial intelligence is essential in diagnosing this pathology, and comparing models helps discard inefficient ones. State-of-the-art models overshadow conventional ones, but data processing also plays an essential role in achieving higher model accuracy.
Affiliation(s)
- Ernesto Guevara-Navarro: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Alvaro Eduardo Daza-Chica: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Maria Alejandra Patiño-Claro: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Harold Brayan Arteaga-Arteaga: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia
- Gonzalo A Ruz: Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, 7941169, Chile; Center of Applied Ecology and Sustainability (CAPES), Santiago, 8331150, Chile; Data Observatory Foundation, Santiago, 7510277, Chile
- Reinel Tabares-Soto: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia; Departamento de Sistemas e Informática, Universidad de Caldas, Manizales, 170004, Caldas, Colombia; Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, 7941169, Chile
- Mario Alejandro Bravo-Ortiz: Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Caldas, Colombia; Centro de Bioinformática y Biología Computacional (BIOS), Manizales, 170001, Colombia
6. Arefinia F, Aria M, Rabiei R, Hosseini A, Ghaemian A, Roshanpoor A. Non-invasive fractional flow reserve estimation using deep learning on intermediate left anterior descending coronary artery lesion angiography images. Sci Rep 2024; 14:1818. [PMID: 38245614] [PMCID: PMC10799954] [DOI: 10.1038/s41598-024-52360-5]
Abstract
This study aimed to design an end-to-end deep learning model that estimates fractional flow reserve (FFR) from angiography images, classifying left anterior descending (LAD) branch angiography images with average stenosis between 50 and 70% into two categories: FFR > 80 and FFR ≤ 80. In this study, 3625 images were extracted from the angiography films of 41 patients. Nine pre-trained convolutional neural networks (CNNs), including DenseNet121, InceptionResNetV2, VGG16, VGG19, ResNet50V2, Xception, MobileNetV3Large, DenseNet201, and DenseNet169, were used to extract image features. DenseNet169 showed the highest performance among these networks: its AUC, accuracy, sensitivity, specificity, precision, and F1-score were 0.81, 0.81, 0.86, 0.75, 0.82, and 0.84, respectively. The deep learning-based method proposed in this study can non-invasively and consistently estimate FFR from angiographic images, offering significant clinical potential for diagnosing and treating coronary artery disease by combining anatomical and physiological parameters.
Affiliation(s)
- Farhad Arefinia: Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mehrad Aria: Cancer Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Reza Rabiei: Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azamossadat Hosseini: Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ali Ghaemian: Department of Cardiology, Faculty of Medicine, Cardiovascular Research Center, Mazandaran University of Medical Sciences, Sari, Iran
- Arash Roshanpoor: Department of Computer, Yadegar-e-Imam Khomeini (RAH), Islamic Azad University, Janat-Abad Branch, Tehran, Iran
7. Huang ML, Huang ZB. An ensemble-acute lymphoblastic leukemia model for acute lymphoblastic leukemia image classification. Math Biosci Eng 2024; 21:1959-1978. [PMID: 38454670] [DOI: 10.3934/mbe.2024087]
Abstract
The timely diagnosis of acute lymphoblastic leukemia (ALL) is of paramount importance for enhancing treatment efficacy and patient survival rates. This study introduces an ensemble-ALL model for ALL image classification, with the goal of enhancing early diagnostic capabilities and streamlining diagnosis and treatment for medical practitioners. A publicly available dataset is partitioned into training, validation, and test sets, and a diverse set of convolutional neural networks, including InceptionV3, EfficientNetB4, ResNet50, CONV_POOL-CNN, ALL-CNN, Network in Network, and AlexNet, is trained. The four top-performing individual models are selected and integrated with the squeeze-and-excitation (SE) module, and the two most effective SE-embedded models are then combined to create the proposed ensemble-ALL model, whose performance is further tuned with the Bayesian optimization algorithm. The proposed ensemble-ALL model attains remarkable accuracy, precision, recall, F1-score, and kappa scores of 96.26%, 96.26%, 96.26%, 96.25%, and 91.36%, respectively, surpassing the benchmarks set by state-of-the-art studies in ALL image classification. The model represents a valuable contribution to medical image recognition, particularly for the diagnosis of acute lymphoblastic leukemia, and offers the potential to enhance the efficiency and accuracy of medical professionals in diagnosis and treatment.
Affiliation(s)
- Mei-Ling Huang: Department of Industrial Engineering & Management, National Chin-Yi University of Technology, Taichung, Taiwan
- Zong-Bin Huang: Department of Industrial Engineering & Management, National Chin-Yi University of Technology, Taichung, Taiwan
8. Rahimi M, Khameneh EA, Riazi-Esfahani H, Mahmoudi T, Khalili Pour E, Kafieh R. Application of ImageJ in Optical Coherence Tomography Angiography (OCT-A): A Literature Review. J Ophthalmol 2023; 2023:9479183. [PMID: 38033422] [PMCID: PMC10686712] [DOI: 10.1155/2023/9479183]
Abstract
Background: This study aimed to review the literature on the application of ImageJ to optical coherence tomography angiography (OCT-A) images.
Methods: A general search was performed in the PubMed, Google Scholar, and Scopus databases, and the authors evaluated each of the selected articles to assess how ImageJ was implemented on OCT-A images.
Results: ImageJ can aid in reducing artifacts, enhancing image quality to increase the accuracy of processing and analysis, processing and analyzing images, establishing diagnostic criteria, and generating comparable parameters: parameters that assess perfusion of the layers (vessel density (VD), skeletonized density (SD), and vessel length density (VLD)), parameters that evaluate the structure of the layers (fractal dimension (FD), vessel density index (VDI), and lacunarity (LAC)), and the foveal avascular zone (FAZ), all of which are used widely in retinal and choroidal studies. With numerous plugins and options for image processing and analysis, it can save time on large datasets while producing reliable results. However, different studies implemented distinct binarization and thresholding techniques, resulting in disparate outcomes and incomparable parameters; uniformity in methodology is required to acquire comparable data across studies whose diverse processing and analysis techniques yield varied outcomes.
Conclusion: Researchers and professionals can benefit from ImageJ because of how quickly and correctly it processes and analyzes images; it is highly adaptable and potent software that allows users to evaluate images in a variety of ways. A diverse range of methodologies exists for analyzing OCT-A images with ImageJ, but a standardized strategy is imperative to ensure the reliability and consistency of the method for research purposes.
Affiliation(s)
- Masoud Rahimi: Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid Riazi-Esfahani: Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Tahereh Mahmoudi: Department of Medical Physics and Biomedical Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Elias Khalili Pour: Retina Ward, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Rahele Kafieh: Department of Engineering, Durham University, South Road, Durham DH1 3LE, UK
9. Heo Y, Kim J, Choi SG. Two-Stage Model-Based Predicting PV Generation with the Conjugation of IoT Sensor Data. Sensors (Basel) 2023; 23:9178. [PMID: 38005566] [PMCID: PMC10675006] [DOI: 10.3390/s23229178]
Abstract
This paper proposes a novel short-term photovoltaic (PV) generation prediction scheme that combines IoT sensor data with a two-stage neural network model. It is efficient to use environmental data provided by a meteorological agency to predict future PV generation; however, such data represent the average over a wide area and are limited in detecting environmental changes in the specific area where the solar panel is installed. Detecting such local changes requires IoT sensor data from that area. However, most conventional research focuses only on the efficiency of IoT sensor data without taking into account the timing of data acquisition from the sensors: in real-world scenarios, IoT sensor data are not available precisely when needed for predictions. It is therefore necessary to predict the IoT data first and then use them to forecast PV generation. In this paper, we propose a two-stage model to achieve highly accurate predictions. In the first stage, predicted environmental data are used to estimate the IoT sensor readings at the desired future time point; in the second stage, the predicted IoT sensor data and environmental data are used to predict PV generation. The appropriate prediction scheme at each stage is determined by analyzing the model characteristics to increase prediction accuracy. We show that the proposed scheme increases prediction accuracy by more than 12% compared with a baseline that uses only meteorological-agency data to predict PV generation.
Affiliation(s)
- Youngju Heo: DGB Financial Holding Company, Seoul 04521, Republic of Korea
- Jangkyum Kim: Department of Data Science, Sejong University, Seoul 05006, Republic of Korea
- Seong Gon Choi: School of Information and Communication Engineering, Chungbuk University, Cheongju 28644, Republic of Korea
10. Su Z, Adam A, Nasrudin MF, Ayob M, Punganan G. Skeletal Fracture Detection with Deep Learning: A Comprehensive Review. Diagnostics (Basel) 2023; 13:3245. [PMID: 37892066] [PMCID: PMC10606060] [DOI: 10.3390/diagnostics13203245]
Abstract
Deep learning models have shown great promise in diagnosing skeletal fractures from X-ray images. However, challenges remain that hinder progress in this field. Firstly, a lack of clear definitions for the recognition, classification, detection, and localization tasks hampers the consistent development and comparison of methodologies, and the existing reviews often lack technical depth or have limited scope. Additionally, the absence of explainability facilities undermines clinical application and expert confidence in the results. To address these issues, this comprehensive review analyzes and evaluates 40 out of 337 recent papers identified in prestigious databases, including WOS, Scopus, and EI. The objectives of this review are threefold. Firstly, precise definitions are established for the bone fracture recognition, classification, detection, and localization tasks within deep learning. Secondly, each study is summarized based on key aspects such as the bones involved, research objectives, dataset sizes, methods employed, results obtained, and concluding remarks, distilling the diverse approaches into a generalized processing framework or workflow. Thirdly, this review identifies the crucial areas for future research in deep learning models for bone fracture diagnosis: enhancing network interpretability, integrating multimodal clinical information, providing therapeutic schedule recommendations, and developing advanced visualization methods for clinical application. By addressing these challenges, deep learning models can be made more intelligent and specialized in this domain. In conclusion, this review fills the gap in precise task definitions within deep learning for bone fracture diagnosis and provides a comprehensive analysis of recent research, serving as a foundation for future advancements in interpretability, multimodal integration, clinical decision support, and advanced visualization techniques.
Affiliation(s)
- Zhihao Su: Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Afzan Adam: Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Mohammad Faidzul Nasrudin: Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Masri Ayob: Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Gauthamen Punganan: Department of Orthopedics and Traumatology, Hospital Raja Permaisuri Bainun, Ipoh 30450, Perak, Malaysia
11. Gaidai O, Yakimov V, Balakrishna R. Dementia death rates prediction. BMC Psychiatry 2023; 23:691. [PMID: 37736716] [PMCID: PMC10515261] [DOI: 10.1186/s12888-023-05172-2]
Abstract
BACKGROUND: The prevalence of dementia, which causes considerable morbidity and mortality globally, places a burden on global public health. The primary goal of this study was to assess the future risk of dying from severe dementia, for a given return period, within a selected group of regions or nations.
METHODS: Traditional statistical approaches cannot effectively handle large regional dimensionality along with nonlinear cross-correlations between various regional observations. To produce reliable long-term projections of excessive dementia death rate risks, this study advocates a novel bio-system reliability technique that is particularly suited to multi-regional environmental, biological, and health systems.
DATA: Raw clinical data from medical surveys and several centers were used as input to the suggested population-based bio-statistical technique.
RESULTS: A novel spatiotemporal health system reliability methodology was developed and applied to raw clinical data on dementia death rates. The suggested methodology is shown to deal efficiently with spatiotemporal clinical observations of a multi-regional nature. Accurate multi-regional spatiotemporal predictions of disease risk were made, and the relevant confidence intervals are presented.
CONCLUSIONS: Given an available clinical survey dataset, the proposed approach may be applied in a variety of clinical public health applications. The confidence bands for predicted dementia-associated death rate levels at the return periods of interest were reasonably narrow, indicating the practical value of the advocated prognostics.
Affiliation(s)
- Vladimir Yakimov
- Central Marine Research and Design Institute, Saint Petersburg, Russia

12
Son S, Lee W, Jung H, Lee J, Kim C, Lee H, Park H, Lee H, Jang J, Cho S, Ryu HC. Evaluation of Camera Recognition Performance under Blockage Using Virtual Test Drive Toolchain. SENSORS (BASEL, SWITZERLAND) 2023; 23:8027. [PMID: 37836857 PMCID: PMC10575171 DOI: 10.3390/s23198027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 09/13/2023] [Accepted: 09/20/2023] [Indexed: 10/15/2023]
Abstract
This study is the first to develop technology to evaluate the object recognition performance of camera sensors, which are increasingly important in autonomous vehicles owing to their relatively low price, and to verify the efficiency of camera recognition algorithms in obstruction situations. To this end, the concentration and color of the blockage and the type and color of the object were set as major factors, with their effects on camera recognition performance analyzed using a camera simulator based on a virtual test drive toolkit. The results show that the blockage concentration has the largest impact on object recognition, followed in order by the object type, blockage color, and object color. As for the blockage color, black exhibited better recognition performance than gray and yellow. In addition, changes in the blockage color affected the recognition of object types, resulting in different responses to each object. Through this study, we propose a blockage-based camera recognition performance evaluation method using simulation, and we establish an algorithm evaluation environment for various manufacturers through an interface with an actual camera. By suggesting the necessity and timing of future camera lens cleaning, we provide manufacturers with technical measures to improve the cleaning timing and camera safety.
Affiliation(s)
- Sungho Son
- Department of Future Vehicle Research, Korea Automobile Testing and Research Institute, Hwaseong 18247, Republic of Korea
- Department of Artificial Intelligence Convergence, University of Sahmyook, Seoul 01795, Republic of Korea
- Woongsu Lee
- Department of Future Vehicle Research, Korea Automobile Testing and Research Institute, Hwaseong 18247, Republic of Korea
- Hyungi Jung
- Department of Future Vehicle Research, Korea Automobile Testing and Research Institute, Hwaseong 18247, Republic of Korea
- Jungki Lee
- Department of Future Vehicle Research, Korea Automobile Testing and Research Institute, Hwaseong 18247, Republic of Korea
- Charyung Kim
- Department of Future Vehicle Research, Korea Automobile Testing and Research Institute, Hwaseong 18247, Republic of Korea
- Hyunwoo Lee
- Department of Future Vehicle Research, Korea Automobile Testing and Research Institute, Hwaseong 18247, Republic of Korea
- Hyungwon Park
- Department of Future Vehicle Research, Korea Automobile Testing and Research Institute, Hwaseong 18247, Republic of Korea
- Hyunmi Lee
- TOD Based Transportation Research Center, University of Ajou, Suwon 16499, Republic of Korea
- Jeongah Jang
- TOD Based Transportation Research Center, University of Ajou, Suwon 16499, Republic of Korea
- Sungwan Cho
- Department of Advanced Development, Techways, Yongin 16942, Republic of Korea
- Han-Cheol Ryu
- Department of Artificial Intelligence Convergence, University of Sahmyook, Seoul 01795, Republic of Korea

13
Gabralla LA, Hussien AM, AlMohimeed A, Saleh H, Alsekait DM, El-Sappagh S, Ali AA, Refaat Hassan M. Automated Diagnosis for Colon Cancer Diseases Using Stacking Transformer Models and Explainable Artificial Intelligence. Diagnostics (Basel) 2023; 13:2939. [PMID: 37761306 PMCID: PMC10529133 DOI: 10.3390/diagnostics13182939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 08/23/2023] [Accepted: 08/31/2023] [Indexed: 09/29/2023] Open
Abstract
Colon cancer was the third most common cancer type worldwide in 2020, with almost two million cases diagnosed. As a result, new, highly accurate techniques for detecting colon cancer enable early and successful treatment of this disease. This paper proposes a heterogeneous stacking deep learning model to predict colon cancer. Stacking deep learning integrates pretrained convolutional neural network (CNN) models with a meta-learner to enhance colon cancer prediction performance. The proposed model is compared with VGG16, InceptionV3, ResNet50, and DenseNet121 using different evaluation metrics. Furthermore, the proposed models are evaluated on the LC25000 and WCE binary and multiclass colon cancer image datasets. The results show that the stacking models recorded the highest performance on the two datasets. For the LC25000 dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (100%). For the WCE colon image dataset, the stacked model recorded the highest accuracy, recall, precision, and F1 score (98%). Stacking-SVM achieved the highest performance compared with the existing models (VGG16, InceptionV3, ResNet50, and DenseNet121) because it combines the outputs of multiple single models and trains and evaluates a meta-learner on those outputs to produce better predictions than any single model. The black-box deep learning models are explained using explainable AI (XAI).
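The stacking idea this abstract describes — base learners' outputs becoming the meta-learner's input features — can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the base "models" are stub functions standing in for the pretrained CNNs, and a fixed weighted vote stands in for the trained SVM meta-learner.

```python
# Sketch of stacking: base learners' predicted probabilities are
# concatenated into meta-features, and a meta-learner decides from
# those meta-features instead of from the raw inputs.

def base_model_a(x):
    # stub "CNN": returns a probability that the sample is cancerous
    return min(1.0, 0.1 * sum(x))

def base_model_b(x):
    return min(1.0, 0.05 * max(x))

def make_meta_features(samples):
    """Stack each base model's output into one feature vector per sample."""
    return [[base_model_a(x), base_model_b(x)] for x in samples]

def meta_learner(meta_features, weights=(0.6, 0.4), threshold=0.5):
    """Toy meta-learner: a fixed weighted vote over base-model outputs.
    (The paper trains an SVM here; a vote keeps the sketch short.)"""
    preds = []
    for f in meta_features:
        score = sum(w * p for w, p in zip(weights, f))
        preds.append(1 if score >= threshold else 0)
    return preds

samples = [[1, 2, 3], [5, 5, 5], [0, 0, 1]]
meta = make_meta_features(samples)
print(meta_learner(meta))  # [0, 1, 0]
```

The design point is that the meta-learner sees only the base models' outputs, so it can learn to weight whichever base model is more reliable per region of the input space.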
Affiliation(s)
- Lubna Abdelkareim Gabralla
- Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ali Mohamed Hussien
- Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt
- Abdulaziz AlMohimeed
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia
- Hager Saleh
- Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada 84511, Egypt
- Deema Mohammed Alsekait
- Department of Computer Science and Information Technology, Applied College, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaker El-Sappagh
- Faculty of Computer Science and Engineering, Galala University, Suez 34511, Egypt
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha 13518, Egypt
- Abdelmgeid A. Ali
- Faculty of Computers and Information, Minia University, Minia 61519, Egypt
- Moatamad Refaat Hassan
- Department of Computer Science, Faculty of Science, Aswan University, Aswan 81528, Egypt

14
Kufel J, Bargieł-Łączek K, Koźlik M, Czogalik Ł, Dudek P, Magiera M, Bartnikowska W, Lis A, Paszkiewicz I, Kocot S, Cebula M, Gruszczyńska K, Nawrat Z. Chest X-ray Foreign Objects Detection Using Artificial Intelligence. J Clin Med 2023; 12:5841. [PMID: 37762783 PMCID: PMC10531506 DOI: 10.3390/jcm12185841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2023] [Revised: 09/02/2023] [Accepted: 09/05/2023] [Indexed: 09/29/2023] Open
Abstract
Diagnostic imaging has become an integral part of the healthcare system. In recent years, scientists around the world have been working on artificial intelligence-based tools that help achieve better and faster diagnoses. Their accuracy is crucial for successful treatment, especially in imaging diagnostics. This study used a deep convolutional neural network to detect four categories of objects in digital chest X-ray images. The data were obtained from the publicly available National Institutes of Health (NIH) Chest X-ray (CXR) Dataset. In total, 112,120 CXRs from 30,805 patients were manually checked for foreign objects: vascular port, shoulder endoprosthesis, necklace, and implantable cardioverter-defibrillator (ICD). They were then annotated with the use of a computer program, and the necessary image preprocessing was performed, such as resizing, normalization, and cropping. The object detection model was trained using the You Only Look Once v8 architecture and the Ultralytics framework. The results showed that the model achieved an average precision of 0.815 for foreign object detection on CXRs and that it can be useful in detecting foreign objects in CXR images. Models of this type may serve as a tool for specialists, particularly as the growing popularity of radiology brings an increasing workload. We are optimistic that it could accelerate and facilitate the work needed to provide a faster diagnosis.
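The preprocessing steps the abstract names (resizing, normalization, cropping) can be sketched for a grayscale CXR array. The sizes and the nearest-neighbour resize below are illustrative assumptions, not the authors' exact pipeline, which the abstract does not specify.

```python
import numpy as np

def center_crop(img, size):
    """Crop a square of side `size` from the middle of a 2-D image."""
    h, w = img.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize via integer index sampling."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def preprocess(img, crop=4, out=2):
    img = center_crop(img, crop)
    img = resize_nearest(img, out, out)
    # scale 8-bit pixel intensities to [0, 1]
    return img.astype(float) / 255.0

img = np.arange(36, dtype=np.uint8).reshape(6, 6)  # stand-in "CXR"
print(preprocess(img).shape)  # (2, 2)
```

In a real pipeline these operations feed the detector's expected input size; only the order (crop, resize, normalize) is the point here.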
Affiliation(s)
- Jakub Kufel
- Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Katarzyna Bargieł-Łączek
- Paediatric Radiology Students’ Scientific Association at the Division of Diagnostic Imaging, 40-752 Katowice, Poland
- Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences in Katowice, Medical University of Silesia, 40-752 Katowice, Poland
- Maciej Koźlik
- Division of Cardiology and Structural Heart Disease, Medical University of Silesia, 40-635 Katowice, Poland
- Łukasz Czogalik
- Professor Zbigniew Religa Student Scientific Association at the Department of Biophysic, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Piotr Dudek
- Professor Zbigniew Religa Student Scientific Association at the Department of Biophysic, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Mikołaj Magiera
- Professor Zbigniew Religa Student Scientific Association at the Department of Biophysic, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Wiktoria Bartnikowska
- Paediatric Radiology Students’ Scientific Association at the Division of Diagnostic Imaging, 40-752 Katowice, Poland
- Anna Lis
- Cardiology Students’ Scientific Association at the III Department of Cardiology, Faculty of Medical Sciences in Katowice, Medical University of Silesia, 40-635 Katowice, Poland
- Iga Paszkiewicz
- Professor Zbigniew Religa Student Scientific Association at the Department of Biophysic, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Szymon Kocot
- Bright Coders’ Factory, Technologiczna 2, 45-839 Opole, Poland
- Maciej Cebula
- Individual Specialist Medical Practice, 40-754 Katowice, Poland
- Katarzyna Gruszczyńska
- Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences in Katowice, Medical University of Silesia, 40-752 Katowice, Poland
- Zbigniew Nawrat
- Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Jordana 19, 41-808 Zabrze, Poland
- Foundation of Cardiac Surgery Development, 41-800 Zabrze, Poland

15
Kaur M, AlZubi AA, Jain A, Singh D, Yadav V, Alkhayyat A. DSCNet: Deep Skip Connections-Based Dense Network for ALL Diagnosis Using Peripheral Blood Smear Images. Diagnostics (Basel) 2023; 13:2752. [PMID: 37685290 PMCID: PMC10486457 DOI: 10.3390/diagnostics13172752] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 08/16/2023] [Accepted: 08/18/2023] [Indexed: 09/10/2023] Open
Abstract
Acute lymphoblastic leukemia (ALL) is a life-threatening hematological malignancy that requires early and accurate diagnosis for effective treatment. However, the manual diagnosis of ALL is time-consuming and can delay critical treatment decisions. To address this challenge, researchers have turned to advanced technologies such as deep learning (DL) models. These models leverage the power of artificial intelligence to analyze complex patterns and features in medical images and data, enabling faster and more accurate diagnosis of ALL. However, existing DL-based ALL diagnosis methods suffer from various challenges, such as computational complexity, sensitivity to hyperparameters, and difficulties with noisy or low-quality input images. To address these issues, in this paper, we propose a novel Deep Skip Connections-Based Dense Network (DSCNet) tailored for ALL diagnosis using peripheral blood smear images. The DSCNet architecture integrates skip connections, custom image filtering, Kullback-Leibler (KL) divergence loss, and dropout regularization to enhance its performance and generalization abilities. DSCNet leverages skip connections to address the vanishing gradient problem and capture long-range dependencies, while custom image filtering enhances relevant features in the input data. KL divergence loss serves as the optimization objective, enabling accurate predictions. Dropout regularization is employed to prevent overfitting during training, promoting robust feature representations. The experiments conducted on an augmented dataset for ALL highlight the effectiveness of DSCNet. The proposed DSCNet outperforms competing methods, showcasing significant enhancements in accuracy, sensitivity, specificity, F-score, and area under the curve (AUC), achieving increases of 1.25%, 1.32%, 1.12%, 1.24%, and 1.23%, respectively.
The proposed approach demonstrates the potential of DSCNet as an effective tool for early and accurate ALL diagnosis, with potential applications in clinical settings to improve patient outcomes and advance leukemia detection research.
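Two of the ingredients this abstract names, skip connections and the KL-divergence objective, can be illustrated in a few lines. This is a minimal sketch under simplifying assumptions: a tiny dense ReLU "layer" stands in for DSCNet's convolutional blocks, and the KL term is computed between two discrete distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    return np.maximum(0.0, x @ w)        # ReLU dense layer

def skip_block(x, w):
    # skip (residual) connection: output = input + F(input),
    # which gives gradients a path that bypasses the layer
    return x + layer(x, w)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

x = np.ones(3)
w = rng.standard_normal((3, 3))
print(skip_block(x, w).shape)                           # (3,)
print(round(kl_divergence([0.5, 0.5], [0.5, 0.5]), 6))  # 0.0
```

The KL term is zero when predicted and target distributions match and grows as they diverge, which is what makes it usable as a classification objective.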
Affiliation(s)
- Manjit Kaur
- School of Computer Science and Artificial Intelligence, SR University, Warangal 506371, India
- Ahmad Ali AlZubi
- Department of Computer Science, Community College, King Saud University, Riyadh 11421, Saudi Arabia
- Arpit Jain
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vijayawada 522302, India
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Research and Development Cell, Lovely Professional University, Phagwara 144411, India
- Vaishali Yadav
- School of Computer and Communication Engineering, Manipal University Jaipur, Jaipur 303007, India
- Ahmed Alkhayyat
- College of Technical Engineering, The Islamic University, Najaf 7003, Iraq

16
Atteia G, Alnashwan R, Hassan M. Hybrid Feature-Learning-Based PSO-PCA Feature Engineering Approach for Blood Cancer Classification. Diagnostics (Basel) 2023; 13:2672. [PMID: 37627931 PMCID: PMC10453878 DOI: 10.3390/diagnostics13162672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 08/09/2023] [Accepted: 08/10/2023] [Indexed: 08/27/2023] Open
Abstract
Acute lymphoblastic leukemia (ALL) is a lethal blood cancer characterized by an abnormally increased number of immature lymphocytes in the blood or bone marrow. For effective treatment of ALL, early assessment of the disease is essential. Manual examination of stained blood smear images is the current practice for initial ALL screening. This practice is time-consuming and error-prone. To diagnose ALL effectively, numerous deep-learning-based computer vision systems have been developed for detecting ALL in blood peripheral images (BPIs). Such systems extract a huge number of image features and use them to perform the classification task. The extracted features may contain irrelevant or redundant features that can reduce classification accuracy and increase the running time of the classifier. Feature selection is considered an effective tool to mitigate the curse of dimensionality and alleviate its corresponding shortcomings. One of the most effective dimensionality-reduction tools is principal component analysis (PCA), which maps input features into an orthogonal space and extracts the features that convey the highest variability in the data. Other feature selection approaches utilize evolutionary computation (EC) to search the feature space and localize optimal features. To profit from both feature selection approaches in improving the classification performance for ALL, this study proposes a new hybrid deep-learning-based feature engineering approach. The introduced approach integrates the capability of PCA and particle swarm optimization (PSO) to select informative features from BPI images with the feature-extraction power of pre-trained CNNs. Image features are first extracted through the feature-transfer capability of the GoogleNet convolutional neural network (CNN). PCA is utilized to generate a feature set of the principal components that covers 95% of the variability in the data.
In parallel, bio-inspired particle swarm optimization is used to search for the optimal image features. The PCA and PSO-derived feature sets are then integrated to develop a hybrid set of features that are then used to train a Bayesian-based optimized support vector machine (SVM) and subspace discriminant ensemble-learning (SDEL) classifiers. The obtained results show improved classification performance for the ML classifiers trained by the proposed hybrid feature set over the original PCA, PSO, and all extracted feature sets for ALL multi-class classification. The Bayesian-optimized SVM trained with the proposed hybrid PCA-PSO feature set achieves the highest classification accuracy of 97.4%. The classification performance of the proposed feature engineering approach competes with the state of the art.
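The PCA step described above (keep the smallest number of principal components whose cumulative explained variance reaches 95%) can be sketched with a plain SVD. The "deep features" below are a synthetic rank-2 matrix, not real CNN outputs, and the PSO branch of the method is omitted.

```python
import numpy as np

def pca_95(X, target=0.95):
    """Project X onto the top-k principal components, where k is the
    smallest count whose cumulative explained variance >= target."""
    Xc = X - X.mean(axis=0)                       # center features
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratios = s ** 2 / np.sum(s ** 2)              # explained-variance ratios
    k = int(np.searchsorted(np.cumsum(ratios), target)) + 1
    return Xc @ Vt[:k].T                          # reduced feature matrix

# Synthetic "feature matrix": four columns that span a 2-D subspace,
# so two components capture essentially all the variance.
t = np.arange(100)
s1, s2 = np.cos(2 * np.pi * t / 100), np.sin(2 * np.pi * t / 100)
X = np.column_stack([s1, s2, s1 + s2, s1 - s2])
print(pca_95(X).shape)  # (100, 2)
```

On real GoogleNet features the same call would simply return however many components the 95% threshold requires.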
Affiliation(s)
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Rana Alnashwan
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Malak Hassan
- College of Medicine, Alfaisal University, P.O. Box 50927, Riyadh 11533, Saudi Arabia

17
Lewis JE, Pozdnyakova O. Digital assessment of peripheral blood and bone marrow aspirate smears. Int J Lab Hematol 2023. [PMID: 37211430 DOI: 10.1111/ijlh.14082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 04/20/2023] [Indexed: 05/23/2023]
Abstract
The diagnosis of benign and neoplastic hematologic disorders relies on analysis of peripheral blood and bone marrow aspirate smears. As demonstrated by the widespread laboratory adoption of hematology analyzers for automated assessment of peripheral blood, digital analysis of these samples provides many significant benefits compared to relying solely on manual review. Nonetheless, analogous instruments for digital bone marrow aspirate smear assessment have yet to be clinically implemented. In this review, we first provide a historical overview detailing the implementation of hematology analyzers for digital peripheral blood assessment in the clinical laboratory, including the improvements in accuracy, scope, and throughput of current instruments over prior generations. We also describe recent research in digital peripheral blood assessment, particularly in the development of advanced machine learning models that may soon be incorporated into commercial instruments. Next, we provide an overview of recent research in digital assessment of bone marrow aspirate smears and how these approaches could soon lead to development and clinical adoption of instrumentation for automated bone marrow aspirate smear analysis. Finally, we describe the relative advantages and provide our vision for the future of digital assessment of peripheral blood and bone marrow aspirate smears, including what improvements we can soon expect in the hematology laboratory.
Affiliation(s)
- Joshua E Lewis
- Department of Pathology, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Olga Pozdnyakova
- Department of Pathology, Brigham and Women's Hospital, Boston, Massachusetts, USA

18
Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:jcm12103446. [PMID: 37240552 DOI: 10.3390/jcm12103446] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 04/25/2023] [Accepted: 05/06/2023] [Indexed: 05/28/2023] Open
Abstract
SARS-CoV-2 is a novel virus that has affected the global population by spreading rapidly and causing severe complications, which require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could potentially be an important and useful aid. Radiologists and clinicians could potentially rely on interpretable AI technologies for the diagnosis and monitoring of COVID-19 patients. This paper aims to provide a comprehensive analysis of the state-of-the-art deep learning techniques for COVID-19 classification. Previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers present a variety of CNN models and architectures developed to provide an accurate and quick automatic tool for diagnosing COVID-19 based on CT scan or X-ray images. In this systematic review, we focused on the critical components of the deep learning approach, such as network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies from the period of the virus's spread, and we summarize their efforts. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics to safely implement current AI studies in medical practice.
Affiliation(s)
- Min-Ho Lee
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Adai Shomanov
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Madina Kudaibergenova
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Dmitriy Viderman
- School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan

19
Zhu H, Wang J, Wang SH, Raman R, Górriz JM, Zhang YD. An Evolutionary Attention-Based Network for Medical Image Classification. Int J Neural Syst 2023; 33:2350010. [PMID: 36655400 DOI: 10.1142/s0129065723500107] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Deep learning has become a primary choice in medical image analysis due to its powerful representation capability. However, most existing deep learning models designed for medical image classification can only perform well on a specific disease. The performance drops dramatically when it comes to other diseases. Generalizability remains a challenging problem. In this paper, we propose an evolutionary attention-based network (EDCA-Net), which is an effective and robust network for medical image classification tasks. To extract task-related features from a given medical dataset, we first propose the densely connected attentional network (DCA-Net) where feature maps are automatically channel-wise weighted, and the dense connectivity pattern is introduced to improve the efficiency of information flow. To improve the model capability and generalizability, we introduce two types of evolution: intra- and inter-evolution. The intra-evolution optimizes the weights of DCA-Net, while the inter-evolution allows two instances of DCA-Net to exchange training experience during training. The evolutionary DCA-Net is referred to as EDCA-Net. The EDCA-Net is evaluated on four publicly accessible medical datasets of different diseases. Experiments showed that the EDCA-Net outperforms the state-of-the-art methods on three datasets and achieves comparable performance on the last dataset, demonstrating good generalizability for medical image classification.
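The channel-wise weighting this abstract attributes to DCA-Net (each feature map automatically reweighted by a learned gate) can be sketched in a squeeze-and-excitation style. This is an illustrative assumption about the mechanism: the gate below is a fixed sigmoid over the channel means, whereas the paper's weights are learned.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feature_maps):
    """feature_maps: array of shape (channels, height, width).
    Summarize each channel (global average pooling), gate the
    summaries to (0, 1), and rescale each feature map."""
    squeeze = feature_maps.mean(axis=(1, 2))      # one scalar per channel
    weights = sigmoid(squeeze)                    # per-channel gate
    return feature_maps * weights[:, None, None]  # channel-wise reweighting

fmap = np.stack([np.full((2, 2), 4.0), np.full((2, 2), -4.0)])
out = channel_attention(fmap)
# the strongly positive channel passes almost unchanged;
# the strongly negative channel is suppressed toward zero
print(out[0, 0, 0] > 3.9, abs(out[1, 0, 0]) < 0.1)  # True True
```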
Affiliation(s)
- Hengde Zhu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Jian Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Rajeev Raman
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Juan M Górriz
- Department of Signal Theory, Networking and Communications, University of Granada, Granada 52005, Spain
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK

20
Chen X, Zheng G, Zhou L, Li Z, Fan H. Deep self-supervised transformation learning for leukocyte classification. JOURNAL OF BIOPHOTONICS 2023; 16:e202200244. [PMID: 36377387 DOI: 10.1002/jbio.202200244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 10/03/2022] [Accepted: 11/06/2022] [Indexed: 06/16/2023]
Abstract
The scarcity of training annotations is one of the major challenges for the application of deep learning technology in medical image analysis. Recently, self-supervised learning has provided a powerful way to alleviate this challenge by extracting useful features from a large number of unlabeled training data. In this article, we propose a simple and effective self-supervised learning method for leukocyte classification that identifies the different transformations of leukocyte images, without requiring large batches of negative samples or specialized architectures. Specifically, a convolutional neural network backbone takes different transformations of a leukocyte image as input for feature extraction. Then, a pretext task of self-supervised transformation recognition on the extracted features is conducted by a classifier, which helps the backbone learn useful representations that generalize well across different leukocyte types and datasets. In the experiments, we systematically study the effect of different transformation compositions on useful leukocyte feature extraction. Compared with five typical baselines of self-supervised image classification, experimental results demonstrate that our method performs better under different evaluation protocols, including linear evaluation, domain transfer, and fine-tuning, which proves the effectiveness of the proposed method.
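The pretext task described above can be sketched as follows: each unlabeled image is transformed in a known way, and the transformation index becomes a free pseudo-label for the backbone to predict. Rotation by multiples of 90 degrees is one common choice used here for illustration; the paper studies compositions of several transformations.

```python
import numpy as np

def make_pretext_batch(image):
    """Return (transformed_images, pseudo_labels) for one unlabeled image.
    The label is simply the index of the transformation applied."""
    images, labels = [], []
    for k in range(4):                  # 0, 90, 180, 270 degrees
        images.append(np.rot90(image, k))
        labels.append(k)                # pseudo-label = which transform
    return images, labels

cell = np.array([[1, 2], [3, 4]])       # stand-in "leukocyte image"
imgs, labels = make_pretext_batch(cell)
print(labels)            # [0, 1, 2, 3]
print(imgs[2].tolist())  # 180-degree rotation: [[4, 3], [2, 1]]
```

A classifier trained to recover these labels forces the backbone to encode orientation-sensitive structure, which is what makes the learned features transferable to the real classification task.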
Affiliation(s)
- Xinwei Chen
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China
- Guolin Zheng
- College of Computer and Data Science, Fuzhou University, Fuzhou, China
- Liwei Zhou
- Department of Nutrition, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Zuoyong Li
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, College of Computer and Control Engineering, Minjiang University, Fuzhou, China
- Haoyi Fan
- School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, China

21
Alhares H, Tanha J, Balafar MA. AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19. EVOLVING SYSTEMS 2023; 14:1-15. [PMID: 38625255 PMCID: PMC9838404 DOI: 10.1007/s12530-023-09484-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Accepted: 01/02/2023] [Indexed: 01/13/2023]
Abstract
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as the diagnosis of COVID-19, the model is not properly trained due to insufficient data, and as a result its generalizability decreases. For example, if the model is trained on one CT scan dataset and tested on another CT scan dataset, it produces near-random predictions. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences between existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve transfer learning and achieve better generalizability across multiple data sources, we propose a multi-source adversarial transfer learning model, namely AMTLDC. In AMTLDC, representations are learned that are similar across the sources. In other words, the extracted representations are general and not dependent on a particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network. We show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when using different dataset domains or when there is insufficient data.
Affiliation(s)
- Hadi Alhares
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
- Jafar Tanha
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
- Mohammad Ali Balafar
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran

22
Ebrahimi Tarki F, Zarrabi M, Abdiali A, Sharbatdar M. Integration of Machine Learning and Structural Analysis for Predicting Peptide Antibiofilm Effects: Advancements in Drug Discovery for Biofilm-Related Infections. IRANIAN JOURNAL OF PHARMACEUTICAL RESEARCH : IJPR 2023; 22:e138704. [PMID: 38450220 PMCID: PMC10916117 DOI: 10.5812/ijpr-138704] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Revised: 08/22/2023] [Accepted: 08/26/2023] [Indexed: 03/08/2024]
Abstract
Background: The rise of antibiotic resistance has become a major concern, signaling the end of the golden age of antibiotics. Bacterial biofilms, which exhibit high resistance to antibiotics, contribute significantly to the emergence of antibiotic resistance. There is therefore an urgent need to discover new therapeutic agents with specific characteristics to combat biofilm-related infections effectively. Studies have shown the promising potential of peptides as antimicrobial agents.
Objectives: This study aimed to establish a cost-effective and streamlined computational method for predicting the antibiofilm effects of peptides. Such a method can assist with the intricate challenge of designing peptides with strong antibiofilm properties, a task that can be both difficult and costly.
Methods: A positive library, consisting of peptide sequences with antibiofilm activity exceeding 50%, was assembled, along with a negative library containing quorum-sensing peptides. For each peptide sequence, feature vectors were calculated that capture the primary structure, the order of amino acids, their physicochemical properties, and their distributions. Multiple supervised learning algorithms were used to classify peptides with significant antibiofilm effects for subsequent experimental evaluation.
Results: The computational approach exhibited high accuracy in predicting the antibiofilm effects of peptides, with accuracy, precision, Matthews correlation coefficient (MCC), and F1 score of 99%, 99%, 0.97, and 0.99, respectively. This performance is comparable to that of previous methods. The study also introduced a novel approach by combining the feature space with high antibiofilm activity.
Conclusions: A reliable and cost-effective method was developed for predicting the antibiofilm effects of peptides using a computational approach. It allows peptide sequences with substantial antibiofilm activity to be identified for further experimental investigation. The source code and raw data of this study are available online (hiABF), providing easy access and enabling future updates.
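The Methods section above describes feature vectors built from the primary structure and physicochemical properties of each peptide, fed to supervised classifiers. The exact feature set is not given in the abstract; the sketch below shows the general pattern with two standard descriptors (amino-acid composition and Kyte-Doolittle hydropathy) and a scikit-learn decision tree. The toy sequences and the `peptide_features` helper are illustrative, not the paper's data or code.

```python
from collections import Counter
import numpy as np
from sklearn.tree import DecisionTreeClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# Kyte-Doolittle hydropathy index: one physicochemical property per residue.
KD = {"A": 1.8, "C": 2.5, "D": -3.5, "E": -3.5, "F": 2.8, "G": -0.4,
      "H": -3.2, "I": 4.5, "K": -3.9, "L": 3.8, "M": 1.9, "N": -3.5,
      "P": -1.6, "Q": -3.5, "R": -4.5, "S": -0.8, "T": -0.7, "V": 4.2,
      "W": -0.9, "Y": -1.3}

def peptide_features(seq):
    """20 composition fractions + mean hydropathy + a net-charge proxy."""
    counts = Counter(seq)
    n = len(seq)
    comp = [counts.get(aa, 0) / n for aa in AMINO_ACIDS]
    mean_kd = sum(KD[aa] for aa in seq) / n
    charge = (counts.get("K", 0) + counts.get("R", 0)
              - counts.get("D", 0) - counts.get("E", 0)) / n
    return np.array(comp + [mean_kd, charge])

# Toy data: cationic, Trp-rich "antibiofilm-like" positives vs. acidic negatives.
pos = ["KRWKWWRRK", "RRWKIVVKK", "KKLLRWWRK"]
neg = ["DEGSSTDEG", "QQNDESSTG", "GGDDEESSN"]
X = np.stack([peptide_features(s) for s in pos + neg])
y = [1, 1, 1, 0, 0, 0]
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
```

In practice the paper's pipeline would use far richer descriptors (ordering and distribution features) and compare several algorithms, but the fit/predict structure is the same.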
Affiliation(s)
- Fatemeh Ebrahimi Tarki
- Department of Biotechnology, Faculty of Biological Sciences, Alzahra University, Tehran, Iran
- Mahboobeh Zarrabi
- Department of Biotechnology, Faculty of Biological Sciences, Alzahra University, Tehran, Iran
- Ahya Abdiali
- Department of Microbiology, Faculty of Biological Sciences, Alzahra University, Tehran, Iran
- Mahkame Sharbatdar
- Department of Mechanical Engineering, Khajeh Nasir Toosi University of Technology, Tehran, Iran
23
Fan J, Qin B, Gu F, Wang Z, Liu X, Zhu Q, Yang J. Automatic Detection of Horner Syndrome by Using Facial Images. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:8670350. [PMID: 36451761 PMCID: PMC9705100 DOI: 10.1155/2022/8670350] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 10/14/2022] [Accepted: 10/19/2022] [Indexed: 10/17/2023]
Abstract
Horner syndrome is a clinical constellation that presents with miosis, ptosis, and facial anhidrosis. It is important as a warning sign of damage to the oculosympathetic chain, potentially from serious causes. However, the diagnosis of Horner syndrome is operator-dependent and subjective. This study aims to present an objective method that can recognize signs of Horner syndrome from facial photographs and to verify its accuracy. A total of 173 images were collected, annotated, and divided into training and testing groups. Two types of classifiers were trained: a two-stage classifier and a one-stage classifier. The two-stage method utilized the MediaPipe face mesh to estimate the coordinates of facial landmarks and generate geometric features from them; ten machine learning classifiers were then trained on these features. The one-stage classifier was trained with one of the latest object-detection algorithms, YOLOv5. Classifier performance was evaluated by diagnostic accuracy, sensitivity, and specificity. For the two-stage model, MediaPipe successfully detected 92.2% of images in the testing group, and the Decision Tree classifier achieved the highest accuracy (0.790), with sensitivity and specificity of 0.432 and 0.970, respectively. For the one-stage classifier, the accuracy, sensitivity, and specificity were 0.65, 0.51, and 0.84, respectively. These results demonstrate the feasibility of automatic detection of Horner syndrome from images. This tool could serve as a second advisor for neurologists, reducing subjectivity and increasing accuracy in diagnosing Horner syndrome.
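The two-stage pipeline above converts face-mesh landmark coordinates into geometric features before classification. MediaPipe itself is omitted here; given already-detected landmark points, a sketch of one plausible geometric feature (palpebral fissure height and its left/right asymmetry, a ptosis cue) might look like the following. The landmark keys and the asymmetry ratio are illustrative assumptions; the paper's actual feature set is not specified in the abstract.

```python
def fissure_height(upper_lid, lower_lid):
    """Vertical palpebral fissure height from two (x, y) landmark points."""
    return abs(upper_lid[1] - lower_lid[1])

def ptosis_features(landmarks):
    """landmarks: dict with hypothetical keys for four eyelid landmark points.
    Returns left/right fissure heights and their symmetry ratio (1.0 means
    perfectly symmetric; a low ratio suggests unilateral ptosis)."""
    left = fissure_height(landmarks["left_upper"], landmarks["left_lower"])
    right = fissure_height(landmarks["right_upper"], landmarks["right_lower"])
    return left, right, min(left, right) / max(left, right)
```

Features of this kind, stacked into a vector, are what the ten machine learning classifiers in the two-stage method would consume.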
Affiliation(s)
- Jingyuan Fan
- Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Bengang Qin
- Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China
- Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
- Fanbin Gu
- Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Zhaoyang Wang
- Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Xiaolin Liu
- Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China
- Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
- Qingtang Zhu
- Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China
- Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
- Jiantao Yang
- Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China
- Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
24
Liu Y, Yao L, Li B, Sammut C, Chang X. Interpolation graph convolutional network for 3D point cloud analysis. INT J INTELL SYST 2022. [DOI: 10.1002/int.23087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Affiliation(s)
- Yao Liu
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lina Yao
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Data 61, CSIRO & School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Binghao Li
- School of Minerals and Energy Resources Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Claude Sammut
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Xiaojun Chang
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, New South Wales, Australia
25
Sampathila N, Chadaga K, Goswami N, Chadaga RP, Pandya M, Prabhu S, Bairy MG, Katta SS, Bhat D, Upadya SP. Customized Deep Learning Classifier for Detection of Acute Lymphoblastic Leukemia Using Blood Smear Images. Healthcare (Basel) 2022; 10:1812. [PMID: 36292259 PMCID: PMC9601337 DOI: 10.3390/healthcare10101812] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 09/07/2022] [Accepted: 09/09/2022] [Indexed: 08/27/2023] Open
Abstract
Acute lymphoblastic leukemia (ALL) is a rare type of blood cancer caused by the overproduction of lymphocytes in the bone marrow. It is one of the most common cancers in children, where it has a fair chance of being cured; it can also occur in adults, in whom the chances of a cure are slim if it is diagnosed at a later stage. To aid in the early detection of this deadly disease, an intelligent method to screen white blood cells is proposed in this study. The proposed deep learning algorithm takes microscopic images of blood smears as input and uses a convolutional neural network (CNN) to distinguish leukemic cells from healthy blood cells. The custom ALLNET model was trained and tested on microscopic images available as open-source data; training was carried out on Google Colaboratory using an Nvidia Tesla P100 GPU. This classifier achieved a maximum accuracy of 95.54%, specificity of 95.81%, sensitivity of 95.91%, F1-score of 95.43%, and precision of 96%. The proposed technique may be used for pre-screening to detect leukemic cells during complete blood count (CBC) and peripheral blood tests.
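The abstract names a custom CNN (ALLNET) for classifying smear images but does not describe its layers. As a generic illustration only, a small binary image classifier of this kind can be sketched in PyTorch as follows; the layer counts and channel widths are assumptions, not ALLNET's actual architecture.

```python
import torch
from torch import nn

class SmearCNN(nn.Module):
    """Minimal CNN sketch for two-class blood-smear classification
    (leukemic vs. healthy). Sizes are illustrative stand-ins for ALLNET."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # global pooling: any input size works
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))
```

Such a model would be trained with cross-entropy loss on labeled smear crops, exactly the setup the study describes at a high level.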
Affiliation(s)
- Niranjana Sampathila
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Krishnaraj Chadaga
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Neelankit Goswami
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Rajagopala P. Chadaga
- Department of Mechanical & Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Mayur Pandya
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Srikanth Prabhu
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Muralidhar G. Bairy
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Swathi S. Katta
- Manipal Institute of Management, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Devadas Bhat
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Sudhakara P. Upadya
- Manipal School of Information Science, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
26
Zhao Y, Wang S, Ren Y, Zhang Y. CRANet: a comprehensive residual attention network for intracranial aneurysm image classification. BMC Bioinformatics 2022; 23:322. [PMID: 35931949 PMCID: PMC9356401 DOI: 10.1186/s12859-022-04872-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 08/02/2022] [Indexed: 11/15/2022] Open
Abstract
Rupture of an intracranial aneurysm is the leading cause of subarachnoid hemorrhage and, among cerebrovascular events, ranks behind only cerebral thrombosis and hypertensive cerebral hemorrhage; its mortality rate is very high. MRI plays an irreplaceable role in the early detection and diagnosis of intracranial aneurysms and supports evaluation of aneurysm size and structure. The rapidly growing number of aneurysm images imposes a massive workload on doctors and makes misdiagnosis more likely. We therefore propose a simple and effective comprehensive residual attention network (CRANet) to improve the accuracy of aneurysm detection, using a residual network to extract aneurysm features. Extensive experiments show that the proposed CRANet model detects aneurysms effectively: on the test set, the accuracy and recall reached 97.81% and 94%, significantly improving the aneurysm detection rate.
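The abstract combines residual feature extraction with attention but gives no architectural details. One common way to combine the two, shown purely as an assumption-laden sketch (the block structure and squeeze-and-excitation-style attention are not claimed to be CRANet's actual design), is to gate the residual branch with learned channel weights:

```python
import torch
from torch import nn

class ResidualAttentionBlock(nn.Module):
    """Residual block whose branch output is reweighted by channel attention
    before the skip connection. Illustrative only, not CRANet's design."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(  # squeeze-and-excitation-style gating
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x)
        # Skip connection plus attention-weighted residual features.
        return x + y * self.attn(y)
```

Stacking blocks like this preserves spatial dimensions, so they drop into a standard residual backbone.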
Affiliation(s)
- Yawu Zhao
- College of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Shudong Wang
- College of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Yande Ren
- The Department of Medical Imaging Center, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Yulin Zhang
- College of Mathematics and System Science, Shandong University of Science and Technology, Qingdao, Shandong, China
27
Efficient Framework for Detection of COVID-19 Omicron and Delta Variants Based on Two Intelligent Phases of CNN Models. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:4838009. [PMID: 35495884 PMCID: PMC9050257 DOI: 10.1155/2022/4838009] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 03/10/2022] [Accepted: 03/16/2022] [Indexed: 01/06/2023]
Abstract
Introduction: While the COVID-19 pandemic was waning in most parts of the world, a new wave of the COVID-19 Omicron and Delta variants in Central Asia and the Middle East caused a devastating crisis and the collapse of health-care systems. As diagnostic methods for these variants became more complex, health-care centers faced a dramatic increase in patients, and the need for less expensive and faster diagnostics led researchers and specialists to work on improving diagnostic testing.
Method: Drawing on existing COVID-19 diagnosis methods, the latest and most efficient deep learning algorithms for extracting features from X-ray and CT scan images were used to identify COVID-19 in the early stages of the disease.
Results: We present a general framework consisting of two models developed with convolutional neural networks (CNNs) using transfer learning and parameter optimization. Evaluated on the test dataset, the framework achieved a detection sensitivity, specificity, and accuracy of 0.99, 0.986, and 0.988 for the first phase and 0.997, 0.9976, and 0.997 for the second phase, respectively. In all cases, the framework successfully classified COVID-19 and non-COVID-19 cases from CT scans and X-ray images.
Conclusion: Because the proposed framework is based on two deep learning models that use two radiology modalities, it can significantly assist radiologists in detecting COVID-19 in the early stages. Models with this property can be considered a powerful and reliable tool compared with the models used in past pandemics.
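The Results section above relies on transfer learning: a pretrained backbone is frozen and a new classification head is fine-tuned for the COVID-19 task. The usual first step of that recipe can be sketched as follows; the `prepare_for_transfer` helper and the toy stand-in backbone are illustrative assumptions (the paper would use a pretrained CNN, not this small MLP).

```python
import torch
from torch import nn

def prepare_for_transfer(model: nn.Sequential, n_classes: int) -> nn.Sequential:
    """Freeze all pretrained layers, then replace the final Linear head
    with a fresh, trainable one sized for the new task."""
    for p in model.parameters():
        p.requires_grad = False          # phase 1: backbone stays fixed
    old_head = model[-1]
    assert isinstance(old_head, nn.Linear)
    model[-1] = nn.Linear(old_head.in_features, n_classes)  # new head trains
    return model

# Stand-in "pretrained" backbone; in practice this would be a CNN
# pretrained on a large image corpus, as in the described framework.
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1000))
model = prepare_for_transfer(backbone, n_classes=2)
```

A later fine-tuning phase could then unfreeze some backbone layers with a reduced learning rate, which is one common reading of the "two intelligent phases" with parameter optimization.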