1. Zhao J, Liu J, Wang S, Zhang P, Yu W, Yang C, Zhang Y, Chen Y. PIAA: Pre-imaging all-round assistant for digital radiography. Technol Health Care 2024:THC240639. [PMID: 39240596] [DOI: 10.3233/thc-240639]
Abstract
BACKGROUND In radiography procedures, radiographers' suboptimal positioning and exposure parameter settings may necessitate image retakes, subjecting patients to unnecessary ionizing radiation exposure. Reducing retakes is crucial to minimize patient X-ray exposure and conserve medical resources. OBJECTIVE We propose a Digital Radiography (DR) Pre-imaging All-round Assistant (PIAA) that leverages Artificial Intelligence (AI) technology to enhance traditional DR. METHODS PIAA consists of an RGB-Depth (RGB-D) multi-camera array, an embedded computing platform, and multiple software components. First, an Adaptive RGB-D Image Acquisition (ARDIA) module automatically selects the appropriate RGB camera based on the distance between the cameras and the patient. Second, a 2.5D Selective Skeletal Keypoints Estimation (2.5D-SSKE) module fuses depth information with 2D keypoints to estimate the pose of target body parts. Third, a Domain Expertise (DE) embedded Full-body Exposure Parameter Estimation (DFEPE) module combines 2.5D-SSKE and DE to accurately estimate parameters for full-body DR views. RESULTS PIAA optimizes the DR workflow and significantly enhances operational efficiency: the average time required for positioning patients and preparing exposure parameters was reduced from 73 seconds to 8 seconds. CONCLUSIONS PIAA shows significant promise for extension to full-body examinations.
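As an illustration of the depth-fusion idea behind 2.5D-SSKE, the minimal sketch below lifts 2D keypoints to 3D camera coordinates using a depth map and pinhole intrinsics. The module itself is not public, so the intrinsics, keypoints, and function name are hypothetical; this shows only the general technique of combining depth with 2D keypoints.

```python
# A minimal sketch, assuming a pinhole camera model: back-project 2D keypoints
# to 3D camera coordinates using a depth map. All values below (intrinsics,
# keypoints, depth frame) are hypothetical stand-ins.
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project pixel keypoints (u, v) to camera-space 3D points."""
    points_3d = []
    for u, v in keypoints_2d:
        z = depth_map[int(v), int(u)]  # depth (metres) sampled at the keypoint
        x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return np.array(points_3d)

# Hypothetical example: a 480x640 depth frame and two detected keypoints.
depth = np.full((480, 640), 1.5, dtype=np.float32)  # flat scene 1.5 m away
kps = [(320, 240), (350, 260)]
print(lift_keypoints_to_3d(kps, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0))
```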
Affiliation(s)
- Jie Zhao: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China; Careray Digital Medical Technology Co., Ltd., Suzhou, China
- Jianqiang Liu: Careray Digital Medical Technology Co., Ltd., Suzhou, China
- Shijie Wang: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Pinzheng Zhang: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Wenxue Yu: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Chunfeng Yang: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yudong Zhang: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yang Chen: Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
2. Bigalke A, Hansen L, Diesel J, Hennigs C, Rostalski P, Heinrich MP. Anatomy-guided domain adaptation for 3D in-bed human pose estimation. Med Image Anal 2023; 89:102887. [PMID: 37453235] [DOI: 10.1016/j.media.2023.102887]
Abstract
3D human pose estimation is a key component of clinical monitoring systems. The clinical applicability of deep pose estimation models, however, is limited by their poor generalization under domain shifts along with their need for sufficient labeled training data. As a remedy, we present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain. Our method comprises two complementary adaptation strategies based on prior knowledge about human anatomy. First, we guide the learning process in the target domain by constraining predictions to the space of anatomically plausible poses. To this end, we embed the prior knowledge into an anatomical loss function that penalizes asymmetric limb lengths, implausible bone lengths, and implausible joint angles. Second, we propose to filter pseudo labels for self-training according to their anatomical plausibility and incorporate the concept into the Mean Teacher paradigm. We unify both strategies in a point cloud-based framework applicable to unsupervised and source-free domain adaptation. Evaluation is performed for in-bed pose estimation under two adaptation scenarios, using the public SLP dataset and a newly created dataset. Our method consistently outperforms various state-of-the-art domain adaptation methods, surpasses the baseline model by 31%/66%, and reduces the domain gap by 65%/82%. Source code is available at https://github.com/multimodallearning/da-3dhpe-anatomy.
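The paper's full anatomical loss is not reproduced here, but the sketch below illustrates one of its named ingredients: a differentiable penalty on asymmetric limb lengths between left/right bone pairs. The joint indices and bone pairs are hypothetical placeholders for whatever skeleton convention is in use.

```python
# A minimal sketch of one component of an anatomy-based loss: penalizing
# asymmetric limb lengths. Bone index pairs below are hypothetical; the
# paper's loss also constrains absolute bone lengths and joint angles.
import torch

# Hypothetical (parent, child) bone indices for left and right arms.
LEFT_BONES  = [(5, 7), (7, 9)]   # l_shoulder->l_elbow, l_elbow->l_wrist
RIGHT_BONES = [(6, 8), (8, 10)]  # r_shoulder->r_elbow, r_elbow->r_wrist

def symmetry_loss(joints):  # joints: (B, num_joints, 3) predicted 3D positions
    loss = joints.new_zeros(())
    for (lp, lc), (rp, rc) in zip(LEFT_BONES, RIGHT_BONES):
        left_len  = (joints[:, lc] - joints[:, lp]).norm(dim=-1)
        right_len = (joints[:, rc] - joints[:, rp]).norm(dim=-1)
        loss = loss + (left_len - right_len).abs().mean()
    return loss

pred = torch.randn(4, 17, 3)  # batch of 4 poses with 17 joints
print(symmetry_loss(pred))    # differentiable, so usable as a training penalty
```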
Affiliation(s)
- Alexander Bigalke: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
- Lasse Hansen: EchoScout GmbH, Maria-Goeppert-Str. 3, 23562 Lübeck, Germany
- Jasper Diesel: Drägerwerk AG & Co. KGaA, Moislinger Allee 53-55, 23558 Lübeck, Germany
- Carlotta Hennigs: Institute for Electrical Engineering in Medicine, University of Lübeck, Moislinger Allee 53-55, 23558 Lübeck, Germany
- Philipp Rostalski: Institute for Electrical Engineering in Medicine, University of Lübeck, Moislinger Allee 53-55, 23558 Lübeck, Germany
- Mattias P Heinrich: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
3. Momin MS, Sufian A, Barman D, Dutta P, Dong M, Leo M. In-Home Older Adults' Activity Pattern Monitoring Using Depth Sensors: A Review. Sensors (Basel, Switzerland) 2022; 22:9067. [PMID: 36501769] [PMCID: PMC9735577] [DOI: 10.3390/s22239067]
Abstract
The global population is aging due to many factors, including longer life expectancy through better healthcare, changing diet, physical activity, etc. We are also witnessing frequent epidemics as well as pandemics. The existing healthcare system has failed to deliver the care and support needed by our older adults (seniors) during these frequent outbreaks. Sophisticated sensor-based in-home care systems may offer an effective solution to this global crisis. The monitoring system is the key component of any in-home care system, and the evidence indicates that such systems are most useful when implemented in a non-intrusive manner through different visual and audio sensors. Artificial Intelligence (AI) and Computer Vision (CV) techniques may be ideal for this purpose. Since RGB imagery-based CV techniques may compromise privacy, people often hesitate to utilize in-home care systems that use this technology. Depth, thermal, and audio-based CV techniques could be meaningful substitutes here. Because larger areas often need to be monitored, this review article presents a systematic discussion of the state of the art in using depth sensors as the primary data-capturing technique. We mainly focus on fall detection and other health-related physical patterns. As gait parameters may help to detect these activities, we also consider depth sensor-based gait parameters separately. The article concludes with discussions of terminology, related reviews, a survey of popular datasets, and future research directions.
Affiliation(s)
- Md Sarfaraz Momin: Department of Computer Science, Kaliachak College, University of Gour Banga, Malda 732101, India; Department of Computer & System Sciences, Visva-Bharati University, Bolpur 731235, India
- Abu Sufian: Department of Computer Science, University of Gour Banga, Malda 732101, India
- Debaditya Barman: Department of Computer & System Sciences, Visva-Bharati University, Bolpur 731235, India
- Paramartha Dutta: Department of Computer & System Sciences, Visva-Bharati University, Bolpur 731235, India
- Mianxiong Dong: Department of Science and Informatics, Muroran Institute of Technology, Muroran 050-8585, Hokkaido, Japan
- Marco Leo: National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, 73100 Lecce, Italy
4. Liu T, Siegel E, Shen D. Deep Learning and Medical Image Analysis for COVID-19 Diagnosis and Prediction. Annu Rev Biomed Eng 2022; 24:179-201. [PMID: 35316609] [DOI: 10.1146/annurev-bioeng-110220-012203]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has imposed dramatic challenges on health-care organizations worldwide. To combat the global crisis, thoracic imaging has played a major role in the diagnosis, prediction, and management of COVID-19 patients with moderate to severe symptoms or with evidence of worsening respiratory status. In response, the medical image analysis community acted quickly to develop and disseminate deep learning models and tools to meet the urgent need of managing and interpreting large amounts of COVID-19 imaging data. This review aims not only to summarize existing deep learning and medical image analysis methods but also to offer in-depth discussions and recommendations for future investigations. We believe that the wide availability of high-quality, curated, and benchmarked COVID-19 imaging data sets offers the great promise of a transformative test bed to develop, validate, and disseminate novel deep learning methods at the frontiers of data science and artificial intelligence.
Affiliation(s)
- Tianming Liu: Department of Computer Science, University of Georgia, Athens, Georgia, USA
- Eliot Siegel: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, Maryland, USA
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
5. Bigalke A, Hansen L, Diesel J, Heinrich MP. Seeing under the cover with a 3D U-Net: point cloud-based weight estimation of covered patients. Int J Comput Assist Radiol Surg 2021; 16:2079-2087. [PMID: 34420184] [PMCID: PMC8616862] [DOI: 10.1007/s11548-021-02476-0]
Abstract
PURPOSE Body weight is a crucial parameter for patient-specific treatments, particularly in the context of proper drug dosage. Contactless weight estimation from visual sensor data constitutes a promising approach to overcome challenges arising in emergency situations. Machine learning-based methods have recently been shown to perform accurate weight estimation from point cloud data. The proposed methods, however, are designed for controlled conditions in terms of visibility and position of the patient, which limits their practical applicability. In this work, we aim to decouple accurate weight estimation from such specific conditions by predicting the weight of covered patients from voxelized point cloud data. METHODS We propose a novel deep learning framework comprising two 3D CNN modules that solve the given task in two separate steps. First, we train a 3D U-Net to virtually uncover the patient, i.e. to predict the patient's volumetric surface without a cover. Second, the patient's weight is predicted from this 3D volume by a 3D CNN architecture optimized for weight regression. RESULTS We evaluate our approach on a lying-pose dataset (SLP) under two different cover conditions. The proposed framework considerably improves on the baseline model by up to [Formula: see text] and reduces the gap between the accuracy of weight estimates for covered and uncovered patients by up to [Formula: see text]. CONCLUSION We present a novel pipeline to estimate the weight of patients who are covered by a blanket. Our approach relaxes the specific conditions that were required for accurate weight estimates by previous contactless methods and thus constitutes an important step towards fully automatic weight estimation in clinical practice.
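As context for the pipeline's input representation, the sketch below voxelizes a point cloud into a binary occupancy grid of the kind a 3D CNN or 3D U-Net consumes. The grid resolution, bounds, and function name are illustrative assumptions, not the paper's exact preprocessing.

```python
# A minimal sketch of voxelizing a point cloud into an occupancy volume.
# Grid size and bounds are hypothetical choices for illustration.
import numpy as np

def voxelize(points, grid=(64, 64, 64), bounds=((-1, 1), (-1, 1), (-1, 1))):
    """points: (N, 3) array -> binary occupancy volume of shape `grid`."""
    vol = np.zeros(grid, dtype=np.float32)
    lo = np.array([b[0] for b in bounds], dtype=np.float32)
    hi = np.array([b[1] for b in bounds], dtype=np.float32)
    idx = ((points - lo) / (hi - lo) * np.array(grid)).astype(int)
    idx = idx[((idx >= 0) & (idx < np.array(grid))).all(axis=1)]  # drop out-of-range points
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

cloud = np.random.uniform(-1, 1, size=(5000, 3))  # synthetic stand-in for a depth scan
print(voxelize(cloud).sum())                       # number of occupied voxels
```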
Affiliation(s)
- Alexander Bigalke: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
- Lasse Hansen: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
- Jasper Diesel: Drägerwerk AG & Co. KGaA, Moislinger Allee 53-55, 23558 Lübeck, Germany
- Mattias P Heinrich: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
6. Liu T. Grand Challenges in AI in Radiology. Front Radiol 2021; 1:629992. [PMID: 37492177] [PMCID: PMC10364978] [DOI: 10.3389/fradi.2021.629992]
7. Qiao Z, Bae A, Glass LM, Xiao C, Sun J. FLANNEL (Focal Loss bAsed Neural Network EnsembLe) for COVID-19 detection. J Am Med Inform Assoc 2021; 28:444-452. [PMID: 33125051] [PMCID: PMC7665533] [DOI: 10.1093/jamia/ocaa280]
Abstract
OBJECTIVE The study sought to test the possibility of differentiating chest x-ray images of coronavirus disease 2019 (COVID-19) from those of other pneumonia and healthy patients using deep neural networks. MATERIALS AND METHODS We construct the radiography (x-ray) imaging data from 2 publicly available sources, which include 5508 chest x-ray images across 2874 patients with 4 classes: normal, bacterial pneumonia, non-COVID-19 viral pneumonia, and COVID-19. To identify COVID-19, we propose FLANNEL (Focal Loss bAsed Neural Network EnsembLe), a flexible module that ensembles several convolutional neural network models and fuses them with a focal loss for accurate COVID-19 detection on class-imbalanced data. RESULTS FLANNEL consistently outperforms baseline models on the COVID-19 identification task in all metrics. Compared with the best baseline, FLANNEL shows a higher macro-F1 score, with a 6% relative increase on the COVID-19 identification task, achieving precision of 0.7833 ± 0.07, recall of 0.8609 ± 0.03, and F1 score of 0.8168 ± 0.03. DISCUSSION Ensemble learning that combines multiple independent base classifiers can increase robustness and accuracy. We propose a neural weighting module to learn the importance weight for each base model and combine them via a weighted ensemble to get the final classification results. To handle the class imbalance challenge, we adapt focal loss to our multi-class classification task as the loss function. CONCLUSION FLANNEL effectively combines state-of-the-art convolutional neural network classification models and tackles class imbalance with focal loss to achieve better performance on COVID-19 detection from x-rays.
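For reference, here is a minimal PyTorch sketch of the multi-class focal loss that FLANNEL adapts. The gamma value and tensor shapes are illustrative, and the paper's learned ensemble weighting is not included.

```python
# A minimal sketch of multi-class focal loss: cross-entropy down-weighted for
# easy (high-confidence) examples. gamma=2.0 is an illustrative choice.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """logits: (B, C); targets: (B,) class indices."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log prob of true class
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()            # down-weights easy examples

logits = torch.randn(8, 4)              # 4 classes: normal / bacterial / viral / COVID-19
targets = torch.randint(0, 4, (8,))
print(focal_loss(logits, targets))
```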
Affiliation(s)
- Zhi Qiao: Analytics Center of Excellence, IQVIA, Beijing, China
- Austin Bae: Analytics Center of Excellence, IQVIA, Cambridge, Massachusetts, USA
- Lucas M Glass: Analytics Center of Excellence, IQVIA, Cambridge, Massachusetts, USA
- Cao Xiao: Analytics Center of Excellence, IQVIA, Beijing, China
- Jimeng Sun: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
8. Ma J, Nie Z, Wang C, Dong G, Zhu Q, He J, Gui L, Yang X. Active contour regularized semi-supervised learning for COVID-19 CT infection segmentation with limited annotations. Phys Med Biol 2020; 65:225034. [PMID: 33045699] [DOI: 10.1088/1361-6560/abc04e]
Abstract
Infection segmentation on chest CT plays an important role in the quantitative analysis of COVID-19. Developing automatic segmentation tools in a short period with limited labelled images has become an urgent need. Pseudo label-based semi-supervised methods are a promising way to leverage unlabelled data to improve segmentation performance. Existing methods usually obtain pseudo labels by first training a network with limited labelled images and then inferring unlabelled images. However, these methods may generate obviously inaccurate labels and degrade the subsequent training process. To address these challenges, in this paper an active contour regularized semi-supervised learning framework is proposed to automatically segment infections with few labelled images. The active contour regularization is realized by the region-scalable fitting (RSF) model, which is embedded into the network's loss function to regularize and refine the pseudo labels of the unlabelled images. We further designed a splitting method to separately optimize the RSF regularization term and the segmentation loss term with an iterative convolution-thresholding method and stochastic gradient descent, respectively, enabling fast optimization of each term. Furthermore, we built a statistical atlas to show the spatial distribution of infections. Extensive experiments on a small public dataset and a large-scale dataset showed that the proposed method outperforms state-of-the-art methods by up to 5% in Dice similarity coefficient and normalized surface Dice, 10% in relative absolute volume difference, and 8 mm in 95% Hausdorff distance. Moreover, we observed that infections tend to occur in the dorsal subpleural lung and posterior basal segments, locations that are not mentioned in current radiology reports and are meaningful for advancing our understanding of COVID-19.
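The RSF model itself is not reproduced here; as a simplified stand-in, the sketch below implements a two-region (Chan-Vese style) fitting energy on a soft mask, illustrating how an active-contour term can be added to a segmentation loss. The paper's RSF term instead uses local (region-scalable) means and an iterative convolution-thresholding solver.

```python
# A simplified region-fitting regularizer in the spirit of active-contour
# energies: a Chan-Vese style two-region term on a soft mask, NOT the paper's
# region-scalable fitting (RSF) model.
import torch

def region_fitting_loss(prob, image, eps=1e-6):
    """prob: (B, 1, H, W) soft foreground mask; image: (B, 1, H, W) intensities."""
    w_in, w_out = prob, 1.0 - prob
    c_in  = (w_in  * image).sum() / (w_in.sum()  + eps)   # mean intensity inside
    c_out = (w_out * image).sum() / (w_out.sum() + eps)   # mean intensity outside
    return (w_in * (image - c_in) ** 2 + w_out * (image - c_out) ** 2).mean()

img  = torch.rand(2, 1, 64, 64)
mask = torch.sigmoid(torch.randn(2, 1, 64, 64))
print(region_fitting_loss(mask, img))  # add to the segmentation loss as a regularizer
```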
Affiliation(s)
- Jun Ma: Department of Mathematics, Nanjing University of Science and Technology, Nanjing 210094, People's Republic of China
9. Zhang P, Zhong Y, Deng Y, Tang X, Li X. DRR4Covid: Learning Automated COVID-19 Infection Segmentation From Digitally Reconstructed Radiographs. IEEE Access 2020; 8:207736-207757. [PMID: 34812368] [PMCID: PMC8545269] [DOI: 10.1109/access.2020.3038279]
Abstract
Automated infection measurement and COVID-19 diagnosis based on chest X-ray (CXR) imaging is important for faster examination, and infection segmentation is an essential step for assessment and quantification. However, due to the heterogeneity of X-ray imaging and the difficulty of annotating infected regions precisely, learning automated infection segmentation on CXRs remains a challenging task. We propose a novel approach, called DRR4Covid, to learn COVID-19 infection segmentation on CXRs from digitally reconstructed radiographs (DRRs). DRR4Covid consists of an infection-aware DRR generator, a segmentation network, and a domain adaptation module. Given a labeled computed tomography scan, the infection-aware DRR generator can produce infection-aware DRRs with pixel-level annotations of infected regions for training the segmentation network. The domain adaptation module is designed to enable the segmentation network trained on DRRs to generalize to CXRs. Statistical analyses of the experimental results indicate that our infection-aware DRRs are significantly better than standard DRRs for learning COVID-19 infection segmentation (p < 0.05) and that the domain adaptation module significantly improves infection segmentation performance on CXRs (p < 0.05). Without using any annotations of CXRs, our network achieved a classification score of (accuracy: 0.949, AUC: 0.987, F1-score: 0.947) and a segmentation score of (accuracy: 0.956, AUC: 0.980, F1-score: 0.955) on a test set with 558 normal cases and 558 positive cases. In addition, by adjusting the strength of the radiological signs of COVID-19 infection in infection-aware DRRs, we estimate the detection limit of X-ray imaging for COVID-19 infection. The estimated detection limit, measured by the percent volume of the lung infected by COVID-19, is 19.43% ± 16.29%, and the estimated lower bound of the infected-voxel contribution rate for significant radiological signs of COVID-19 infection is 20.0%. Our code is publicly available at https://github.com/PengyiZhang/DRR4Covid.
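As background for DRR generation, the sketch below computes a toy parallel-beam DRR by integrating attenuation along one axis of a synthetic volume and applying the Beer-Lambert law. Real DRR generators, including the paper's infection-aware one, additionally model cone-beam projection geometry and HU-to-attenuation conversion.

```python
# A toy parallel-beam DRR, assuming a synthetic attenuation volume: sum
# attenuation along one axis and apply the Beer-Lambert law.
import numpy as np

def simple_drr(volume, axis=0):
    """volume: (D, H, W) attenuation coefficients -> normalized 2D radiograph."""
    path_integral = volume.sum(axis=axis)  # line integral of attenuation per ray
    transmitted = np.exp(-path_integral)   # Beer-Lambert transmitted fraction
    img = 1.0 - transmitted                # dense structures render bright
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

ct = np.random.rand(128, 256, 256).astype(np.float32) * 0.02  # synthetic CT stand-in
print(simple_drr(ct).shape)  # (256, 256)
```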
Affiliation(s)
- Pengyi Zhang: School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Yunxin Zhong: School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Yulin Deng: School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Xiaoying Tang: School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Xiaoqiong Li: School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
10. Sakib S, Tazrin T, Fouda MM, Fadlullah ZM, Guizani M. DL-CRC: Deep Learning-Based Chest Radiograph Classification for COVID-19 Detection: A Novel Approach. IEEE Access 2020; 8:171575-171589. [PMID: 34976555] [PMCID: PMC8675549] [DOI: 10.1109/access.2020.3025010]
Abstract
With the exponentially growing COVID-19 (coronavirus disease 2019) pandemic, clinicians continue to seek accurate and rapid diagnosis methods in addition to virus and antibody testing modalities. Because radiographs such as X-rays and computed tomography (CT) scans are cost-effective and widely available at public health facilities, hospital emergency rooms (ERs), and even rural clinics, they could be used for rapid detection of possible COVID-19-induced lung infections. Therefore, toward automating COVID-19 detection, in this paper we propose a viable and efficient deep learning-based chest radiograph classification (DL-CRC) framework to distinguish COVID-19 cases with high accuracy from other abnormal (e.g., pneumonia) and normal cases. A unique dataset is prepared from four publicly available sources containing the posteroanterior (PA) chest view of X-ray data for COVID-19, pneumonia, and normal cases. Our proposed DL-CRC framework leverages a data augmentation of radiograph images (DARI) algorithm for the COVID-19 data by adaptively employing a generative adversarial network (GAN) and generic data augmentation methods to generate synthetic COVID-19-infected chest X-ray images for training a robust model. The training data, consisting of actual and synthetic chest X-ray images, are fed into our customized convolutional neural network (CNN) model in DL-CRC, which achieves a COVID-19 detection accuracy of 93.94%, compared with 54.55% for the scenario without data augmentation (i.e., when only a few actual COVID-19 chest X-ray image samples are available in the original dataset). Furthermore, we justify our customized CNN model by extensively comparing it with widely adopted CNN architectures in the literature, namely ResNet, Inception-ResNet v2, and DenseNet, which represent depth-based, multi-path-based, and hybrid CNN paradigms, respectively. The encouragingly high classification accuracy of our proposal implies that it can efficiently automate COVID-19 detection from radiograph images to provide fast and reliable evidence of COVID-19 infection in the lung that can complement existing COVID-19 diagnostic modalities.
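The GAN half of DARI is not sketched here, but the snippet below shows the kind of generic radiograph augmentation the framework combines it with, using torchvision. The transform parameters are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of generic radiograph augmentation with torchvision;
# parameters are illustrative, and the GAN-based synthesis is not shown.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # small rotation/shift
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),          # mild zoom crop
])

x = torch.rand(1, 224, 224)  # synthetic single-channel chest X-ray stand-in
print(augment(x).shape)      # torch.Size([1, 224, 224])
```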
Affiliation(s)
- Sadman Sakib: Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
- Tahrat Tazrin: Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
- Mostafa M. Fouda: Department of Electrical and Computer Engineering, College of Science and Engineering, Idaho State University, Pocatello, ID 83209, USA; Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11629, Egypt
- Zubair Md. Fadlullah: Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada; Thunder Bay Regional Health Research Institute (TBRHRI), Thunder Bay, ON P7B 7A5, Canada
- Mohsen Guizani: Department of Computer Science and Engineering, College of Engineering, Qatar University, Doha, Qatar