1
Jung S, Yang H, Kim HJ, Roh HG, Kwak JT. 3D mobile regression vision transformer for collateral imaging in acute ischemic stroke. Int J Comput Assist Radiol Surg 2024. [PMID: 39002099] [DOI: 10.1007/s11548-024-03229-5]
Abstract
PURPOSE The accurate and timely assessment of collateral perfusion status is crucial in the diagnosis and treatment of patients with acute ischemic stroke. Previous work has shown that collateral imaging derived from CT angiography, MR perfusion, and MR angiography aids in evaluating collateral status. However, such methods are time-consuming and/or sub-optimal because they rely on manual processing and heuristics. Recently, deep learning approaches have shown promise for generating collateral imaging, but they suffer from high computational complexity and cost. METHODS In this study, we propose a mobile, lightweight deep regression neural network for collateral imaging in acute ischemic stroke, leveraging dynamic susceptibility contrast MR perfusion (DSC-MRP). Built upon lightweight convolution and Transformer architectures, the proposed model balances model complexity against performance. RESULTS We evaluated the proposed model in generating five-phase collateral maps, comprising arterial, capillary, early venous, late venous, and delayed phases, using DSC-MRP from 952 patients. Compared with various deep learning models, the proposed method was superior to competitors of similar complexity and comparable to competitors of higher complexity. CONCLUSION The results suggest that the proposed model can facilitate rapid and precise assessment of the collateral status of patients with acute ischemic stroke, leading to improved patient care and outcomes.
Affiliation(s)
- Sumin Jung
- School of Electrical and Electronic Engineering, Korea University, Seoul, Republic of Korea
- Hyun Yang
- School of Electrical and Electronic Engineering, Korea University, Seoul, Republic of Korea
- Hyun Jeong Kim
- Department of Radiology, St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Daejeon, Republic of Korea
- Hong Gee Roh
- Department of Radiology, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Republic of Korea
- Jin Tae Kwak
- School of Electrical and Electronic Engineering, Korea University, Seoul, Republic of Korea
2
Aldakhil LA, Alhasson HF, Alharbi SS. Attention-Based Deep Learning Approach for Breast Cancer Histopathological Image Multi-Classification. Diagnostics (Basel) 2024; 14:1402. [PMID: 39001292] [PMCID: PMC11241245] [DOI: 10.3390/diagnostics14131402]
Abstract
Breast cancer diagnosis from histopathology images is often time-consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature-refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned on the BreakHis dataset, employing Reinhard stain normalization and image augmentation to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40x, 92.96% at 100x, 88.41% at 200x, and 89.42% at 400x magnification. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability.
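The Reinhard stain normalization named in this abstract works by matching colour statistics between a source slide and a reference tile. A minimal sketch of the core statistic-matching step (not code from the paper; in practice both images are first converted to the Lab colour space, which is omitted here to keep the sketch dependency-free):

```python
import numpy as np

def reinhard_match(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Match the per-channel mean and standard deviation of `source`
    to those of `target` (the core step of Reinhard colour transfer).
    Both inputs are (H, W, C) arrays; channels are matched independently."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    axes = (0, 1)  # spatial axes
    s_mu, s_sd = src.mean(axis=axes), src.std(axis=axes) + 1e-8
    t_mu, t_sd = tgt.mean(axis=axes), tgt.std(axis=axes)
    return (src - s_mu) / s_sd * t_sd + t_mu
```

After this transform, every channel of the output has (approximately) the target's mean and spread, which is what makes stains from different labs comparable.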
Affiliation(s)
- Haifa F. Alhasson
- Department of Information Technology, College of Computer, Qassim University, Buraydah 52571, Saudi Arabia; (L.A.A.); (S.S.A.)
3
Mobarak-Abadi M, Mahmoudi-Aznaveh A, Dehghani H, Zarei M, Vahdat S, Doyon J, Khatibi A. DeepRetroMoCo: deep neural network-based retrospective motion correction algorithm for spinal cord functional MRI. Front Psychiatry 2024; 15:1323109. [PMID: 39006826] [PMCID: PMC11239515] [DOI: 10.3389/fpsyt.2024.1323109]
Abstract
Background and purpose There are distinct challenges in the preprocessing of spinal cord fMRI data, particularly concerning the mitigation of voluntary or involuntary movement artifacts during image acquisition. Despite the notable progress in data processing techniques for movement detection and correction, applying motion correction algorithms developed for the brain cortex to the brainstem and spinal cord remains a challenging endeavor. Methods In this study, we employed a deep learning-based convolutional neural network (CNN) named DeepRetroMoCo, trained using an unsupervised learning algorithm. Our goal was to detect and rectify motion artifacts in axial T2*-weighted spinal cord data. The training dataset consisted of spinal cord fMRI data from 27 participants, comprising 135 runs for training and 81 runs for testing. Results To evaluate the efficacy of DeepRetroMoCo, we compared its performance against the sct_fmri_moco method implemented in the spinal cord toolbox. We assessed the motion-corrected images using two metrics: the average temporal signal-to-noise ratio (tSNR) and Delta Variation Signal (DVARS) for both raw and motion-corrected data. Notably, the average tSNR in the cervical cord was significantly higher when DeepRetroMoCo was utilized for motion correction, compared to the sct_fmri_moco method. Additionally, the average DVARS values were lower in images corrected by DeepRetroMoCo, indicating a superior reduction in motion artifacts. Moreover, DeepRetroMoCo exhibited a significantly shorter processing time compared to sct_fmri_moco. Conclusion Our findings strongly support the notion that DeepRetroMoCo represents a substantial improvement in motion correction procedures for fMRI data acquired from the cervical spinal cord. This novel deep learning-based approach showcases enhanced performance, offering a promising solution to address the challenges posed by motion artifacts in spinal cord fMRI data.
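The two evaluation metrics named in this abstract, tSNR and DVARS, have standard definitions that can be sketched in a few lines (generic definitions, not the authors' code; the small `1e-8` guard against division by zero is an added assumption):

```python
import numpy as np

def tsnr(ts: np.ndarray) -> np.ndarray:
    """Temporal signal-to-noise ratio per voxel: temporal mean divided
    by temporal standard deviation. `ts` has shape (voxels, timepoints)."""
    return ts.mean(axis=1) / (ts.std(axis=1) + 1e-8)

def dvars(ts: np.ndarray) -> np.ndarray:
    """DVARS: root-mean-square (over voxels) of the frame-to-frame
    signal change, giving one value per volume-to-volume transition."""
    diff = np.diff(ts, axis=1)            # (voxels, timepoints - 1)
    return np.sqrt((diff ** 2).mean(axis=0))
```

Higher tSNR and lower DVARS after correction, as reported above, indicate less residual motion contamination.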
Affiliation(s)
- Mahdi Mobarak-Abadi
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Hamed Dehghani
- Neuro Imaging and Analysis Group (NIAG), Research Center for Molecular and Cellular Imaging (RCMCI), Tehran University of Medical Sciences, Tehran, Iran
- Mojtaba Zarei
- Institute of Medical Science and Technology, Shahid Beheshti University, Tehran, Iran
- Shahabeddin Vahdat
- Department of Applied Physiology and Kinesiology (DAPK), University of Florida, Gainesville, FL, United States
- Julien Doyon
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Ali Khatibi
- Centre of Precision Rehabilitation for Spinal Pain, School of Sports Exercise and Rehabilitation Sciences, University of Birmingham, Birmingham, United Kingdom
- Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
4
Asadi F, Angsuwatanakul T, O’Reilly JA. Evaluating synthetic neuroimaging data augmentation for automatic brain tumour segmentation with a deep fully-convolutional network. IBRO Neurosci Rep 2024; 16:57-66. [PMID: 39007088] [PMCID: PMC11240293] [DOI: 10.1016/j.ibneur.2023.12.002]
Abstract
Gliomas observed in medical images require expert neuro-radiologist evaluation for treatment planning and monitoring, motivating the development of intelligent systems capable of automating aspects of tumour evaluation. Deep learning models for automatic image segmentation rely on the amount and quality of training data. In this study, we developed a neuroimaging synthesis technique to augment data for training fully-convolutional networks (U-nets) to perform automatic glioma segmentation. We used StyleGAN2-ADA to simultaneously generate fluid-attenuated inversion recovery (FLAIR) magnetic resonance images and corresponding glioma segmentation masks. Synthetic data were successively added to real training data (n = 2751) in fourteen rounds of 1000 and used to train U-nets that were evaluated on held-out validation (n = 590) and test sets (n = 588). U-nets were trained with and without geometric augmentation (translation, zoom, and shear), and Dice coefficients were computed to evaluate segmentation performance. We also monitored the number of training iterations before stopping, total training time, and time per iteration to evaluate the computational costs of training each U-net. Synthetic data augmentation yielded marginal improvements in Dice coefficients (validation set +0.0409, test set +0.0355), whereas geometric augmentation improved generalization (standard deviation between training, validation, and test set performances of 0.01 with, and 0.04 without, geometric augmentation). Given the modest performance gains for automatic glioma segmentation, we find it hard to justify the computational expense of developing a synthetic image generation pipeline. Future work may seek to optimize the efficiency of synthetic data generation for augmentation of neuroimaging data.
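The Dice coefficient used for evaluation above is the standard overlap measure 2|A∩B| / (|A| + |B|) between a predicted mask A and a reference mask B. A minimal sketch (not the authors' implementation; the smoothing term `eps`, which makes two empty masks score ~1, is an added convention):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```

A Dice gain of +0.04, as reported for the synthetic augmentation, therefore corresponds to a small shift on this 0-to-1 overlap scale.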
Affiliation(s)
- Fawad Asadi
- College of Biomedical Engineering, Rangsit University, Pathum Thani 12000, Thailand
- Jamie A. O’Reilly
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
5
Kolokotroni E, Abler D, Ghosh A, Tzamali E, Grogan J, Georgiadi E, Büchler P, Radhakrishnan R, Byrne H, Sakkalis V, Nikiforaki K, Karatzanis I, McFarlane NJB, Kaba D, Dong F, Bohle RM, Meese E, Graf N, Stamatakos G. A Multidisciplinary Hyper-Modeling Scheme in Personalized In Silico Oncology: Coupling Cell Kinetics with Metabolism, Signaling Networks, and Biomechanics as Plug-In Component Models of a Cancer Digital Twin. J Pers Med 2024; 14:475. [PMID: 38793058] [PMCID: PMC11122096] [DOI: 10.3390/jpm14050475]
Abstract
The massive amount of human biological, imaging, and clinical data produced by multiple and diverse sources necessitates integrative modeling approaches able to summarize all this information into answers to specific clinical questions. In this paper, we present a hypermodeling scheme able to combine models of diverse cancer aspects regardless of their underlying method or scale. Describing tissue-scale cancer cell proliferation, biomechanical tumor growth, nutrient transport, genomic-scale aberrant cancer cell metabolism, and cell-signaling pathways that regulate the cellular response to therapy, the hypermodel integrates mutation, miRNA expression, imaging, and clinical data. The constituting hypomodels, as well as their orchestration and links, are described. Two specific cancer types, Wilms tumor (nephroblastoma) and non-small cell lung cancer, are addressed as proof-of-concept study cases. Personalized simulations of the actual anatomy of a patient have been conducted. The hypermodel has also been applied to predict tumor control after radiotherapy and the relationship between tumor proliferative activity and response to neoadjuvant chemotherapy. Our innovative hypermodel holds promise as a digital twin-based clinical decision support system and as the core of future in silico trial platforms, although additional retrospective adaptation and validation are necessary.
Affiliation(s)
- Eleni Kolokotroni
- In Silico Oncology and In Silico Medicine Group, Institute of Communication and Computer Systems, School of Electrical and Computer Engineering, National Technical University of Athens, 157 80 Zografos, Greece;
- Daniel Abler
- Department of Oncology, Geneva University Hospitals and University of Geneva, 1205 Geneva, Switzerland
- Department of Oncology, Lausanne University Hospital and University of Lausanne, 1011 Lausanne, Switzerland
- Alokendra Ghosh
- Department of Chemical and Biomolecular Engineering, Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Eleftheria Tzamali
- Institute of Computer Science, Foundation for Research and Technology—Hellas, 70013 Heraklion, Greece
- James Grogan
- Irish Centre for High End Computing, University of Galway, H91 TK33 Galway, Ireland
- Eleni Georgiadi
- In Silico Oncology and In Silico Medicine Group, Institute of Communication and Computer Systems, School of Electrical and Computer Engineering, National Technical University of Athens, 157 80 Zografos, Greece
- Biomedical Engineering Department, University of West Attica, 12243 Egaleo, Greece
- Ravi Radhakrishnan
- Department of Chemical and Biomolecular Engineering, Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Helen Byrne
- Mathematical Institute, University of Oxford, Oxford OX1 2JD, UK
- Vangelis Sakkalis
- Institute of Computer Science, Foundation for Research and Technology—Hellas, 70013 Heraklion, Greece
- Katerina Nikiforaki
- Institute of Computer Science, Foundation for Research and Technology—Hellas, 70013 Heraklion, Greece
- Ioannis Karatzanis
- Institute of Computer Science, Foundation for Research and Technology—Hellas, 70013 Heraklion, Greece
- Djibril Kaba
- Department of Computer Science and Technology, University of Bedfordshire, Luton LU1 3JU, UK
- Feng Dong
- Department of Computer & Information Sciences, University of Strathclyde, Glasgow G1 1XH, UK
- Rainer M. Bohle
- Department of Pathology, Saarland University, 66421 Homburg, Germany
- Eckart Meese
- Department of Human Genetics, Saarland University, 66421 Homburg, Germany
- Norbert Graf
- Department of Paediatric Oncology and Haematology, Saarland University, 66421 Homburg, Germany
- Georgios Stamatakos
- In Silico Oncology and In Silico Medicine Group, Institute of Communication and Computer Systems, School of Electrical and Computer Engineering, National Technical University of Athens, 157 80 Zografos, Greece
6
Sukumarran D, Hasikin K, Khairuddin ASM, Ngui R, Sulaiman WYW, Vythilingam I, Divis PCS. An optimised YOLOv4 deep learning model for efficient malarial cell detection in thin blood smear images. Parasit Vectors 2024; 17:188. [PMID: 38627870] [PMCID: PMC11022477] [DOI: 10.1186/s13071-024-06215-7]
Abstract
BACKGROUND Malaria is a serious public health concern worldwide. Early and accurate diagnosis is essential for controlling the disease's spread and avoiding severe health complications. Manual examination of blood smear samples by skilled technicians is a time-consuming step in conventional malaria diagnosis. Malaria persists in many parts of the world, emphasising the urgent need for sophisticated and automated diagnostic instruments to expedite the identification of infected cells, thereby facilitating timely treatment and reducing the risk of disease transmission. This study aims to introduce a lighter and quicker model, but with improved accuracy, for diagnosing malaria using a YOLOv4 (You Only Look Once v. 4) deep learning object detector. METHODS The YOLOv4 model is modified using direct layer pruning and backbone replacement. Layer pruning removes and individually analyses residual blocks within the C3, C4 and C5 (C3-C5) Res-block bodies of the backbone architecture. The CSP-DarkNet53 backbone is simultaneously replaced with a shallower ResNet50 network for enhanced feature extraction. The performance metrics of the models are compared and analysed. RESULTS The modified models outperform the original YOLOv4 model. The YOLOv4-RC3_4 model, with residual blocks pruned from the C3 and C4 Res-block bodies, achieves the highest mean average precision (mAP) of 90.70%. This mAP is > 9% higher than that of the original model, while saving approximately 22% of the billion floating-point operations (B-FLOPS) and 23 MB in model size. The findings also show a 9.27% improvement in detecting infected cells upon pruning the redundant layers from the C3 Res-block bodies of the CSP-DarkNet53 backbone. CONCLUSIONS The results of this study highlight the use of the YOLOv4 model for detecting infected red blood cells. Pruning the residual blocks from the Res-block bodies helps to determine which Res-block bodies contribute the most and least, respectively, to the model's performance. Our method has the potential to revolutionise malaria diagnosis and pave the way for novel deep learning-based bioinformatics solutions. Developing an effective and automated process for diagnosing malaria will contribute considerably to global efforts to combat this debilitating disease. We have shown that removing undesirable residual blocks can reduce the size of the model and its computational complexity without compromising its precision.
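The mAP figures quoted above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes: a detection counts as correct only when its IoU with a ground-truth box exceeds a threshold. A generic sketch of the IoU criterion (standard definition, not code from this study):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates. Returns a value in [0, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty when the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Averaging precision over recall levels for matches above the IoU threshold, and then over classes, yields the mAP values being compared between the pruned and original models.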
Affiliation(s)
- Dhevisha Sukumarran
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Center of Intelligent Systems for Emerging Technology (CISET), Faculty of Engineering, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- Anis Salwa Mohd Khairuddin
- Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Malaria Research Centre, Faculty of Medicine and Health Sciences, Universiti Malaysia Sarawak, Kota Samarahan, Sarawak, Malaysia
- Romano Ngui
- Department of Para-Clinical Sciences, Faculty of Medicine and Health Sciences, Universiti Malaysia Sarawak, Sarawak, Malaysia
- Indra Vythilingam
- Department of Parasitology, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, Malaysia
- Paul Cliff Simon Divis
- Malaria Research Centre, Faculty of Medicine and Health Sciences, Universiti Malaysia Sarawak, Kota Samarahan, Sarawak, Malaysia
7
Chun JW, Kim HS. The Present and Future of Artificial Intelligence-Based Medical Image in Diabetes Mellitus: Focus on Analytical Methods and Limitations of Clinical Use. J Korean Med Sci 2023; 38:e253. [PMID: 37550811] [PMCID: PMC10412032] [DOI: 10.3346/jkms.2023.38.e253]
Abstract
Artificial intelligence (AI)-based diagnostic technology using medical images can be used to increase examination accessibility and support clinical decision-making for screening and diagnosis. To survey machine learning algorithms for diabetes complications, a literature review of studies using medical image-based AI technology was conducted using the National Library of Medicine (PubMed) and Excerpta Medica databases, combining diabetes, diagnostic images, and AI as keywords. In total, 227 appropriate studies were selected. Diabetic retinopathy studies using AI models were the most frequent (85.0%, 193/227 cases), followed by diabetic foot (7.9%, 18/227 cases) and diabetic neuropathy (2.7%, 6/227 cases). The studies used open datasets (42.3%, 96/227 cases) or data constructed directly from fundoscopy or optical coherence tomography (57.7%, 131/227 cases). The major limitations in AI-based detection of diabetes complications using medical images were the lack of datasets (36.1%, 82/227 cases) and severity misclassification (26.4%, 60/227 cases). Although it remains difficult to use and fully trust AI-based imaging analysis clinically, it reduces clinicians' time and labor, and expectations of its decision-support role are high. To resolve data imbalance, further development of data collection and synthetic-data technologies across disease severities is required.
Affiliation(s)
- Ji-Won Chun
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Hun-Sung Kim
- Department of Medical Informatics, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea.
8
Lin C, Chang YC, Chiu HY, Cheng CH, Huang HM. Differentiation between normal and abnormal kidneys using 99mTc-DMSA SPECT with deep learning in paediatric patients. Clin Radiol 2023; 78:584-589. [PMID: 37244824] [DOI: 10.1016/j.crad.2023.04.015]
Abstract
AIM To investigate the feasibility of using deep learning (DL) to differentiate normal from abnormal (scarred) kidneys using technetium-99m dimercaptosuccinic acid (99mTc-DMSA) single-photon-emission computed tomography (SPECT) in paediatric patients. MATERIAL AND METHODS Three hundred and one 99mTc-DMSA renal SPECT examinations were reviewed retrospectively. The 301 patients were split randomly into 261 for training, 20 for validation, and 20 for testing. DL models were trained on three-dimensional (3D) SPECT images, two-dimensional (2D) maximum intensity projections (MIPs), and 2.5-dimensional (2.5D) MIPs (i.e., transverse, sagittal, and coronal views), each classifying renal SPECT images as either normal or abnormal. Consensus reading by two nuclear medicine physicians served as the reference standard. RESULTS The DL model trained on 2.5D MIPs outperformed those trained on 3D SPECT images or 2D MIPs. The accuracy, sensitivity, and specificity of the 2.5D model for differentiating normal from abnormal kidneys were 92.5%, 90%, and 95%, respectively. CONCLUSION The experimental results suggest that DL has the potential to differentiate normal from abnormal kidneys in children using 99mTc-DMSA SPECT imaging.
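A 2.5D MIP input of the kind described above can be formed by taking a maximum-intensity projection of the 3D volume along each anatomical axis and stacking the three projections as channels of one 2D image. A hedged sketch of that idea (the paper's exact preprocessing is not specified here; a cubic volume is assumed so the three projections share one shape):

```python
import numpy as np

def mips_2p5d(volume: np.ndarray) -> np.ndarray:
    """Stack the maximum-intensity projections of a cubic 3D volume
    along its three axes (transverse, coronal, sagittal) into the
    channels of a single 2D image of shape (N, N, 3)."""
    assert volume.ndim == 3 and len(set(volume.shape)) == 1, "cubic volume assumed"
    mips = [volume.max(axis=axis) for axis in range(3)]  # three 2D projections
    return np.stack(mips, axis=-1)
```

The appeal of this representation is that a standard 2D network can see evidence from all three viewing directions at once, without the memory cost of full 3D convolutions.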
Affiliation(s)
- C Lin
- Department of Nuclear Medicine, Chang Gung Memorial Hospital, No. 5, Fuxing Street, Gueishan District, Taoyuan 33305, Taiwan; School of Chinese Medicine, Chang Gung University, No. 259, Wenhua 1st Rd, Guishan District, Taoyuan 33302, Taiwan
- Y-C Chang
- Department of Nuclear Medicine, Chang Gung Memorial Hospital, No. 5, Fuxing Street, Gueishan District, Taoyuan 33305, Taiwan; Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, No. 259, Wenhua 1st Rd, Guishan District, Taoyuan 33302, Taiwan
- H-Y Chiu
- Department of Nuclear Medicine, Chang Gung Memorial Hospital, No. 5, Fuxing Street, Gueishan District, Taoyuan 33305, Taiwan
- C-H Cheng
- Department of Pediatrics, Chang Gung University, No. 259, Wenhua 1st Rd, Guishan District, Taoyuan 33302, Taiwan; Department of Pediatrics, Chang Gung Memorial Hospital, No. 5, Fuxing Street, Gueishan District, Taoyuan 33305, Taiwan
- H-M Huang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, No. 1, Sec. 1, Jen Ai Rd, Zhongzheng District, Taipei City 100, Taiwan
9
Bakasa W, Viriri S. VGG16 Feature Extractor with Extreme Gradient Boost Classifier for Pancreas Cancer Prediction. J Imaging 2023; 9:138. [PMID: 37504815] [PMCID: PMC10381878] [DOI: 10.3390/jimaging9070138]
Abstract
The prognosis of patients with pancreatic ductal adenocarcinoma (PDAC) is greatly improved by an early and accurate diagnosis. Several studies have created automated methods to forecast PDAC development from various medical imaging modalities, generally covering the classification, segmentation, or grading of many cancer types, including pancreatic cancer, with conventional machine learning techniques and hand-engineered features. This study uses deep learning techniques to identify PDAC from computed tomography (CT) imaging. It proposes the hybrid model VGG16-XGBoost (a VGG16 backbone feature extractor combined with an Extreme Gradient Boosting classifier) for PDAC images. The proposed hybrid model performs well, obtaining an accuracy of 0.97 and a weighted F1 score of 0.97 on the dataset under study. The VGG16-XGBoost model is validated experimentally on the public-access Cancer Imaging Archive (TCIA) dataset of pancreas CT images. The results of this study can be extremely helpful for PDAC diagnosis from CT pancreas images, categorising them into the five tumour (T) class labels of the tumour, node, metastasis (TNM) staging system: T0, T1, T2, T3, and T4.
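The VGG16-XGBoost design decouples a frozen, pretrained feature extractor from a separately trained classifier. The toy sketch below illustrates only that decoupling, with deliberately hypothetical stand-ins: a fixed random linear map plays the role of the VGG16 backbone and a nearest-centroid rule plays the role of XGBoost, neither of which is used in the paper itself:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen, pretrained backbone (e.g. VGG16 without its
# classification head): a fixed map from raw inputs to feature vectors.
W_FROZEN = rng.normal(size=(64, 8))

def extract_features(x: np.ndarray) -> np.ndarray:
    """Map flattened images (n, 64) to features (n, 8). The weights are
    never updated -- only the downstream classifier is trained."""
    return np.maximum(x @ W_FROZEN, 0.0)  # ReLU-style non-linearity

def fit_centroids(feats: np.ndarray, labels: np.ndarray) -> dict:
    """Toy stand-in for the boosted classifier: one centroid per class."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(feats: np.ndarray, centroids: dict) -> np.ndarray:
    """Assign each feature vector to its nearest class centroid."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(feats - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]
```

The design choice being illustrated: because the extractor is frozen, the expensive representation is computed once and a cheap, fast-to-train classifier handles the decision boundary, which is why such hybrids are attractive for small medical datasets.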
Affiliation(s)
- Wilson Bakasa
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
10
Yamaoka T, Watanabe S. Artificial intelligence in coronary artery calcium measurement: Barriers and solutions for implementation into daily practice. Eur J Radiol 2023; 164:110855. [PMID: 37167685] [DOI: 10.1016/j.ejrad.2023.110855]
Abstract
Coronary artery calcification (CAC) measurement is a valuable predictor of cardiovascular risk. However, its measurement can be time-consuming and complex, thus driving the desire for artificial intelligence (AI)-based approaches. The aim of this review is to explore the current status of CAC volume measurement using AI-based systems for the automated prediction of cardiovascular events. We also make proposals for the implementation of these systems into clinical practice. Research to date on applying AI to CAC scoring has shown the potential for automation and risk stratification, and, overall, efficacy and a high level of agreement with categorisation by trained clinicians have been demonstrated. However, research in this field has not been uniform or directed. One contributing factor may be a lack of integration and communication between computer scientists and cardiologists. Clinicians, institutions, and organisations should work together towards applying this technology to improve processes, preserve healthcare resources, and improve patient outcomes.
Affiliation(s)
- Toshihide Yamaoka
- Department of Diagnostic Imaging and Interventional Radiology, Kyoto Katsura Hospital, Japan.
- Sachika Watanabe
- Department of Diagnostic Imaging and Interventional Radiology, Kyoto Katsura Hospital, Japan
| |
11
Bukas C, Galter I, da Silva-Buttkus P, Fuchs H, Maier H, Gailus-Durner V, Müller CL, Hrabě de Angelis M, Piraud M, Spielmann N. Echo2Pheno: a deep-learning application to uncover echocardiographic phenotypes in conscious mice. Mamm Genome 2023; 34:200-215. [PMID: 37221250] [PMCID: PMC10290584] [DOI: 10.1007/s00335-023-09996-x]
Abstract
Echocardiography, a rapid and cost-effective imaging technique, assesses cardiac function and structure. Despite its popularity in cardiovascular medicine and clinical research, image-derived phenotypic measurements are performed manually, requiring expert knowledge and training. Notwithstanding great progress in deep-learning applications in small-animal echocardiography, the focus has so far been only on images of anesthetized rodents. We present here Echo2Pheno, a new algorithm specifically designed for echocardiograms acquired in conscious mice: an automatic statistical learning workflow for analyzing and interpreting high-throughput, non-anesthetized, transthoracic murine echocardiographic images in the presence of genetic knockouts. Echo2Pheno comprises a neural network module for echocardiographic image analysis and phenotypic measurement, together with a statistical hypothesis-testing framework for assessing phenotypic differences between populations. Using 2159 images of 16 different knockout mouse strains from the German Mouse Clinic, Echo2Pheno accurately confirms known cardiovascular genotype-phenotype relationships (e.g., Dystrophin) and discovers novel genes (e.g., CCR4-NOT transcription complex subunit 6-like, Cnot6l, and synaptotagmin-like protein 4, Sytl4) that cause altered cardiovascular phenotypes, as verified by H&E-stained histological images. Echo2Pheno provides an important step toward automatic end-to-end learning for linking echocardiographic readouts to cardiovascular phenotypes of interest in conscious mice.
Affiliation(s)
- Christina Bukas
- Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
- Isabella Galter
- Institute of Experimental Genetics, German Research Center for Environmental Health, Neuherberg, Germany
- Patricia da Silva-Buttkus
- Institute of Experimental Genetics, German Mouse Clinic, German Research Center for Environmental Health, Ingolstädter Landstr. 1, 85764, Neuherberg, Germany
- Helmut Fuchs
- Institute of Experimental Genetics, German Mouse Clinic, German Research Center for Environmental Health, Ingolstädter Landstr. 1, 85764, Neuherberg, Germany
- Holger Maier
- Institute of Experimental Genetics, German Research Center for Environmental Health, Neuherberg, Germany
- Valerie Gailus-Durner
- Institute of Experimental Genetics, German Mouse Clinic, German Research Center for Environmental Health, Ingolstädter Landstr. 1, 85764, Neuherberg, Germany
- Christian L Müller
- Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
- Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg, Germany
- Department of Statistics, LMU München, Munich, Germany
- Center for Computational Mathematics, Flatiron Institute, New York, USA
- Martin Hrabě de Angelis
- Institute of Experimental Genetics, German Mouse Clinic, German Research Center for Environmental Health, Ingolstädter Landstr. 1, 85764, Neuherberg, Germany
- Chair of Experimental Genetics, TUM School of Life Sciences, Technische Universität München, Freising, Germany
- German Center for Diabetes Research (DZD), Neuherberg, Germany
- Marie Piraud
- Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
- Nadine Spielmann
- Institute of Experimental Genetics, German Mouse Clinic, German Research Center for Environmental Health, Ingolstädter Landstr. 1, 85764, Neuherberg, Germany
12
Zhang T, Tian X, Liu X, Ye J, Fu F, Shi X, Liu R, Xu C. Advances of deep learning in electrical impedance tomography image reconstruction. Front Bioeng Biotechnol 2022;10:1019531. [PMID: 36588934] [PMCID: PMC9794741] [DOI: 10.3389/fbioe.2022.1019531]
Abstract
Electrical impedance tomography (EIT) has been widely used in biomedical research because of its advantages of real-time imaging and its non-invasive, radiation-free nature. Additionally, it can reconstruct the distribution of, or changes in, electrical properties in the sensing area. Recently, with the significant advancements in the use of deep learning in intelligent medical imaging, EIT image reconstruction based on deep learning has received considerable attention. This study introduces the basic principles of EIT and summarizes the application progress of deep learning in EIT image reconstruction with regard to three aspects: single-network reconstruction, deep learning combined with traditional algorithm reconstruction, and multiple-network hybrid reconstruction. In the future, optimizing the datasets may be the main challenge in applying deep learning for EIT image reconstruction. Adopting a better network structure, focusing on the joint reconstruction of EIT and traditional algorithms, and using multimodal deep learning-based EIT may be the solution to existing problems. In general, deep learning offers a fresh approach for improving the performance of EIT image reconstruction and could be the foundation for building an intelligent integrated EIT diagnostic system in the future.
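The traditional baseline that such deep-learning methods are combined with or compared against is commonly a one-step linearized reconstruction with Tikhonov regularization, built from a sensitivity (Jacobian) matrix. A toy sketch under that assumption, using a random stand-in Jacobian rather than a real EIT forward model:

```python
import numpy as np

def tikhonov_reconstruct(J, dv, lam=1e-2):
    """One-step linearized EIT reconstruction:
    solve (J^T J + lam*I) dsigma = J^T dv for the conductivity change."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ dv)

rng = np.random.default_rng(1)
J = rng.normal(size=(208, 64))               # toy sensitivity matrix: 208 measurements, 64 pixels
true = np.zeros(64)
true[20] = 1.0                               # a single conductivity perturbation
dv = J @ true + 0.01 * rng.normal(size=208)  # simulated boundary-voltage change
est = tikhonov_reconstruct(J, dv)
print(int(np.argmax(est)))                   # -> 20: the perturbed pixel is recovered
```

Hybrid approaches described in the review typically feed such a coarse estimate into a network for refinement.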
Affiliation(s)
- Tao Zhang
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China; Drug and Instrument Supervision and Inspection Station, Xining Joint Logistics Support Center, Lanzhou, China
- Xiang Tian
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China
- XueChao Liu
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China
- JianAn Ye
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China
- Feng Fu
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China
- XueTao Shi
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China
- RuiGang Liu
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China
- CanHua Xu
- Department of Biomedical Engineering, The Fourth Military Medical University, Xi’an, China; Shaanxi Key Laboratory for Bioelectromagnetic Detection and Intelligent Perception, Xi’an, China
13
Loh HW, Ooi CP, Seoni S, Barua PD, Molinari F, Acharya UR. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022). Comput Methods Programs Biomed 2022;226:107161. [PMID: 36228495] [DOI: 10.1016/j.cmpb.2022.107161]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as a black box. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in the model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community. METHODS Multiple journal databases were thoroughly searched following the PRISMA 2020 guidelines. Studies that did not appear in Q1 journals, which are highly credible, were excluded. RESULTS In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others. CONCLUSION We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
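Among the perturbation-based techniques the review covers (LIME and its relatives), the basic mechanism can be shown with a simple occlusion map: slide a patch of zeros over the input and record how much the model's output drops. A sketch with a stand-in "model" (the predictor below is a hypothetical toy function, not any model from the surveyed studies):

```python
import numpy as np

def occlusion_map(predict, image, patch=4):
    """Perturbation-based explanation: score(i, j) = drop in the model's
    output when a patch around (i, j) is zeroed out."""
    H, W = image.shape
    base = predict(image)
    heat = np.zeros((H // patch, W // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - predict(masked)
    return heat

# Stand-in "model": responds only to the top-left 4x4 corner of the image.
predict = lambda x: float(x[:4, :4].sum())
img = np.ones((16, 16))
heat = occlusion_map(predict, img)
print(heat[0, 0], heat[1, 1])   # -> 16.0 0.0: only the corner patch matters
```

High heatmap values mark regions the prediction depends on, which is the kind of evidence XAI tools surface for clinicians.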
Affiliation(s)
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Silvia Seoni
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- Prabal Datta Barua
- Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Filippo Molinari
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- U Rajendra Acharya
- School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia; School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan; Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan
14
Ahuja U, Singh S, Kumar M, Kumar K, Sachdeva M. COVID-19: Social distancing monitoring using faster-RCNN and YOLOv3 algorithms. Multimed Tools Appl 2022;82:7553-7566. [PMID: 36060226] [PMCID: PMC9417929] [DOI: 10.1007/s11042-022-13718-x]
Abstract
As of March 31, 2021, the Coronavirus COVID-19 was affecting 219 countries and territories worldwide, with approximately 129,574,017 confirmed cases and 2,830,220 death cases. Social isolation is the most reliable way to deal with this pandemic situation. Motivated by this notion, this paper proposes a deep learning-based technique for automating the task of monitoring social distancing using surveillance cameras. To separate humans from the background, the proposed system employs object detection models based on F-RCNN (Faster Region-based Convolutional Neural Networks) and YOLO (You Only Look Once) algorithms. In the COVID-19 environment, these models track the percentage of people who violate social distancing norms on a daily basis. The authors compared the performance of both models in experimental work using the MS COCO dataset. Many tests were carried out, and we discovered that YOLOv3 demonstrated efficient performance with balanced FPS (frames per second).
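Once a detector such as Faster R-CNN or YOLOv3 returns person bounding boxes, the social-distancing check itself reduces to pairwise centroid distances. A sketch of that post-processing step (the pixel threshold and boxes below are invented for illustration; the paper's actual camera calibration is not reproduced here):

```python
import numpy as np

def count_violations(boxes, min_dist):
    """Count pairs of detected people whose bounding-box centroids
    are closer than min_dist (in pixels). boxes: (x1, y1, x2, y2)."""
    c = np.array([[(x1 + x2) / 2, (y1 + y2) / 2] for x1, y1, x2, y2 in boxes])
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)  # pairwise distances
    iu = np.triu_indices(len(boxes), k=1)                       # each pair once
    return int((d[iu] < min_dist).sum())

boxes = [(0, 0, 10, 20), (12, 0, 22, 20), (200, 0, 210, 20)]  # two nearby, one far
print(count_violations(boxes, min_dist=50))                    # -> 1
```

The daily violation percentage the paper tracks would then be violations relative to total detected pairs or people per frame.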
Affiliation(s)
- Umang Ahuja
- Department of Information Technology, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
- Sunil Singh
- Department of Information Technology, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
- Munish Kumar
- Department of Computational Sciences, Maharaja Ranjit Singh Punjab Technical University, Bathinda, Punjab, India
- Krishan Kumar
- Department of Information Technology, University Institute of Engineering and Technology, Panjab University, Chandigarh, India
- Monika Sachdeva
- Department of Computer Science and Engineering, I. K. G. Punjab Technical University, Kapurthala, Punjab, India
15
Baroni A, Glukhov A, Pérez E, Wenger C, Calore E, Schifano SF, Olivo P, Ielmini D, Zambelli C. An energy-efficient in-memory computing architecture for survival data analysis based on resistive switching memories. Front Neurosci 2022;16:932270. [PMID: 36017177] [PMCID: PMC9395721] [DOI: 10.3389/fnins.2022.932270]
Abstract
One of the objectives fostered in medical science is so-called precision medicine, which requires the analysis of a large amount of survival data from patients to deeply understand treatment options. Tools like machine learning (ML) and deep neural networks are becoming a de facto standard. Nowadays, computing facilities based on the Von Neumann architecture are devoted to these tasks, yet they are rapidly hitting a bottleneck in performance and energy efficiency. The in-memory computing (IMC) architecture has emerged as a revolutionary approach to overcome that issue. In this work, we propose an IMC architecture based on resistive switching memory (RRAM) crossbar arrays to provide a convenient primitive for matrix-vector multiplication in a single computational step. This opens up a massive performance improvement in the acceleration of a neural network frequently used in the survival analysis of biomedical records, namely DeepSurv. We explored how the synaptic weight mapping strategy and the programming algorithms developed to counter RRAM non-idealities expose a performance/energy trade-off. Finally, we discussed how this application is tailored for the IMC architecture rather than being executed on commodity systems.
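The crossbar primitive described above can be mimicked in software: weights become quantized conductances, and the matrix-vector product happens as one analog current-summation step. A simplified model of the weight-mapping non-ideality (the uniform quantization scheme is an illustrative assumption, not the paper's programming algorithm):

```python
import numpy as np

def crossbar_mvm(W, x, levels=16, g_max=1.0):
    """Idealized RRAM crossbar matrix-vector multiply: weights are mapped
    to a limited number of conductance levels, then currents sum per column
    (Kirchhoff's law) in a single step."""
    half = levels / 2
    Wq = np.round(np.clip(W, -g_max, g_max) / g_max * half) / half * g_max
    return Wq @ x

rng = np.random.default_rng(2)
W = rng.uniform(-1, 1, size=(8, 16))   # synaptic weights of a small layer
x = rng.uniform(0, 1, size=16)         # input activations (applied voltages)
exact = W @ x
approx = crossbar_mvm(W, x, levels=16)
err = float(np.max(np.abs(exact - approx)))
print(round(err, 3))                   # quantization error stays bounded
```

Fewer conductance levels cut programming energy but grow this error, which is the performance/energy trade-off the paper explores.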
Affiliation(s)
- Andrea Baroni
- IHP-Leibniz Institut für Innovative Mikroelektronik, Frankfurt (Oder), Germany
- Artem Glukhov
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, Milano, Italy
- Eduardo Pérez
- IHP-Leibniz Institut für Innovative Mikroelektronik, Frankfurt (Oder), Germany
- Christian Wenger
- IHP-Leibniz Institut für Innovative Mikroelektronik, Frankfurt (Oder), Germany
- BTU Cottbus-Senftenberg, Cottbus, Germany
- Enrico Calore
- Dipartimento di Fisica e Scienze Della Terra, Università Degli Studi di Ferrara, Ferrara, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Ferrara, Italy
- Sebastiano Fabio Schifano
- Istituto Nazionale di Fisica Nucleare (INFN), Ferrara, Italy
- Dipartimento di Scienze Dell'Ambiente e Della Prevenzione, Università Degli Studi di Ferrara, Ferrara, Italy
- Piero Olivo
- Dipartimento di Ingegneria, Università Degli Studi di Ferrara, Ferrara, Italy
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IU.NET, Milano, Italy
- Cristian Zambelli
- Dipartimento di Ingegneria, Università Degli Studi di Ferrara, Ferrara, Italy
16
An Explainable Classification Method of SPECT Myocardial Perfusion Images in Nuclear Cardiology Using Deep Learning and Grad-CAM. Appl Sci (Basel) 2022. [DOI: 10.3390/app12157592]
Abstract
Background: This study targets the development of an explainable deep learning methodology for the automatic classification of coronary artery disease (CAD), utilizing SPECT MPI images. Deep learning is currently judged as non-transparent due to the model's complex non-linear structure, and thus it is considered a "black box", making it hard to gain a comprehensive understanding of its internal processes and explain its behavior. Existing explainable artificial intelligence tools can provide insights into the internal functionality of deep learning, and especially of convolutional neural networks, allowing transparency and interpretation. Methods: This study seeks to address the identification of patients' CAD status (infarction, ischemia, or normal) by developing an explainable deep learning pipeline in the form of a handcrafted convolutional neural network. The proposed RGB-CNN model utilizes various pre- and post-processing tools and deploys a state-of-the-art explainability tool to produce more interpretable predictions in decision making. The dataset includes cases from 625 patients as stress and rest representations, comprising 127 infarction, 241 ischemic, and 257 normal cases previously classified by a doctor. The imaging dataset was split into 20% for testing and 80% for training, of which 15% was further used for validation purposes. Data augmentation was employed to increase generalization. The efficacy of the well-known Grad-CAM-based color visualization approach was also evaluated in this research to provide predictions with interpretability in the detection of infarction and ischemia in SPECT MPI images, counterbalancing any lack of rationale in the results extracted by the CNNs. Results: The proposed model achieved 93.3% accuracy and 94.58% AUC, demonstrating efficient performance and stability. Grad-CAM was shown to be a valuable tool for explaining CNN-based judgments in SPECT MPI images, allowing nuclear physicians to make fast and confident judgments using the visual explanations offered. Conclusions: Prediction results indicate a robust and efficient model based on the proposed deep learning methodology for CAD diagnosis in nuclear medicine.
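The Grad-CAM computation referenced in the abstract reduces to a few array operations: per-channel weights are the spatially averaged gradients of the class score with respect to a convolutional layer's feature maps, and the heatmap is the ReLU of their weighted sum. A minimal NumPy sketch with synthetic feature maps and gradients (not outputs of the paper's RGB-CNN):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM core: per-channel weights are spatially averaged gradients;
    the class-activation map is the ReLU of the weighted sum of feature maps,
    normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))               # (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)   # (H, W)
    cam = np.maximum(cam, 0.0)                          # ReLU keeps positive evidence
    return cam / cam.max() if cam.max() > 0 else cam

C, H, W = 3, 8, 8
fmaps = np.zeros((C, H, W))
fmaps[0, 2, 2] = 5.0            # synthetic activation at one spatial location
grads = np.zeros((C, H, W))
grads[0] = 1.0                  # channel 0 drives the class score
cam = grad_cam(fmaps, grads)
print(np.unravel_index(cam.argmax(), cam.shape))   # -> (2, 2)
```

In practice the map is upsampled to the input resolution and overlaid as a color heatmap on the SPECT image.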
17
Samee NA, Alhussan AA, Ghoneim VF, Atteia G, Alkanhel R, Al-antari MA, Kadah YM. A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms. Sensors 2022;22:4938. [PMID: 35808433] [PMCID: PMC9269713] [DOI: 10.3390/s22134938]
Abstract
One of the most promising research areas in the healthcare industry and the scientific community focuses on AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allow rapid learning progress and improve medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain in investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale input images. To achieve this goal, two different image preprocessing techniques are used in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated from deep learning models. A new hybrid processing technique based on logistic regression (LR) and principal component analysis (PCA), called LR-PCA, is presented. Such a process helps to select the significant principal components (PCs) for subsequent use in classification. The proposed CAD system has been examined using two different public benchmark datasets, INbreast and mini-MIAS. It achieved the highest performance accuracies of 98.60% and 98.80% using the INbreast and mini-MIAS datasets, respectively. Such a CAD system seems to be useful and reliable for breast cancer diagnosis.
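The PCA-then-logistic-regression idea behind LR-PCA can be sketched in a few lines: project the deep features onto the top principal components, then fit a logistic classifier on them to break the multicollinearity. This toy version uses synthetic "deep features" and omits the paper's significance-based PC selection:

```python
import numpy as np

def pca_project(X, k):
    """Project features onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fit_logistic(Z, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression on the selected PCs."""
    w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(Z @ w)))
        w -= lr * Z.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(3)
y = (rng.random(200) > 0.5).astype(float)
# Synthetic correlated "deep features": class signal lives in the first 5 of 50 dims.
X = rng.normal(size=(200, 50)) + y[:, None] * np.r_[np.ones(5), np.zeros(45)]
Z = pca_project(X, k=5)                      # 50 features -> 5 PCs
w = fit_logistic(Z, y)
p = 1.0 / (1.0 + np.exp(-(Z @ w)))
acc = float(((p > 0.5) == y).mean())
print(round(acc, 2))                         # training accuracy on the toy data
```

In the paper the input to this stage would be high-level features from a pretrained backbone CNN rather than random vectors.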
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Amel A. Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Reem Alkanhel
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Korea
- Yasser M. Kadah
- Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia
- Biomedical Engineering Department, Cairo University, Giza 12613, Egypt
18
Deep Learning Applied to Chest Radiograph Classification—A COVID-19 Pneumonia Experience. Appl Sci (Basel) 2022. [DOI: 10.3390/app12083712]
Abstract
Due to the recent COVID-19 pandemic, a large number of reports present deep learning algorithms that support the detection of pneumonia caused by COVID-19 in chest radiographs. Few studies have provided the complete source code, limiting testing and reproducibility on different datasets. This work presents Cimatec_XCOV19, a novel deep learning system inspired by the Inception-V3 architecture that is able to (i) support the identification of abnormal chest radiographs and (ii) classify the abnormal radiographs as suggestive of COVID-19. The training dataset has 44,031 images with 2917 COVID-19 cases, one of the largest datasets in recent literature. We organized and published an external validation dataset of 1158 chest radiographs from a Brazilian hospital. Two experienced radiologists independently evaluated the radiographs. The Cimatec_XCOV19 algorithm obtained a sensitivity of 0.85, specificity of 0.82, and AUC ROC of 0.93. We compared the AUC ROC of our algorithm with a well-known public solution and did not find a statistically relevant difference between both performances. We provide full access to the code and the test dataset, enabling this work to be used as a tool for supporting the fast screening of COVID-19 on chest X-ray exams, serving as a reference for educators, and supporting further algorithm enhancements.
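The reported sensitivity and specificity follow directly from the confusion-matrix counts on the validation set. A small helper (the labels below are made up to show the computation, not the study's radiologist-annotated data):

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity (recall on positives) and specificity (recall on
    negatives) from binary labels and predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = COVID-19 suggestive
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
se, sp = sens_spec(y_true, y_pred)
print(se, sp)   # -> 0.75 0.8333...
```

Sweeping the classification threshold and plotting sensitivity against (1 − specificity) yields the ROC curve whose area the paper reports.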
19
Segmentation and Quantitative Analysis of Photoacoustic Imaging: A Review. Photonics 2022. [DOI: 10.3390/photonics9030176]
Abstract
Photoacoustic imaging is an emerging biomedical imaging technique that combines optical contrast and ultrasound resolution to create unprecedented light absorption contrast in deep tissue. Thanks to its fusional imaging advantages, photoacoustic imaging can provide multiple structural and functional insights into biological tissues such as blood vasculature and tumors and monitor the kinetic movements of hemoglobin and lipids. To better visualize and analyze the regions of interest, segmentation and quantitative analyses are used to extract several biological factors, such as intensity-level changes and the diameter and tortuosity of the tissues. Over the past 10 years, classical segmentation methods and advances in deep learning approaches have been utilized in research investigations. In this review, we provide a comprehensive overview of the segmentation and quantitative methods that have been developed to process photoacoustic imaging in preclinical and clinical experiments. We focus on the parametric reliability of quantitative analysis for semantic and instance-level segmentation. We also introduce the similarities and alternatives of deep learning models in qualitative measurements using classical segmentation methods for photoacoustic imaging.
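A staple quantitative metric for the segmentation methods reviewed here, classical and deep-learning alike, is the Dice similarity coefficient between a predicted mask and the ground truth. A minimal sketch on synthetic masks:

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0   # both empty -> perfect match

gt = np.zeros((8, 8), int)
gt[2:6, 2:6] = 1          # synthetic ground-truth vessel region (4x4)
pr = np.zeros((8, 8), int)
pr[3:7, 3:7] = 1          # prediction shifted by one pixel
print(dice(pr, gt))       # -> 2*9 / (16+16) = 0.5625
```

Instance-level evaluation repeats this per matched object, while factors like diameter and tortuosity are measured on the resulting masks.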
20
Ravikumar A, Sriraman H, Sai Saketh PM, Lokesh S, Karanam A. Effect of neural network structure in accelerating performance and accuracy of a convolutional neural network with GPU/TPU for image analytics. PeerJ Comput Sci 2022;8:e909. [PMID: 35494877] [PMCID: PMC9044238] [DOI: 10.7717/peerj-cs.909]
Abstract
BACKGROUND Convolutional neural networks (CNNs) have produced the most significant breakthroughs in deep learning for image recognition, object detection, and language processing. With the rapid growth of data and neural network sizes, the performance of DNN algorithms depends on the computation power and storage capacity of the devices. METHODS In this paper, the convolutional neural network used for various image applications was studied, and its acceleration on platforms such as the CPU, GPU, and TPU was evaluated. The neural network structure and the computing power and characteristics of the GPU and TPU were analyzed and summarized, and their effect on accelerating the tasks is explained. A cross-platform comparison of the CNN was performed using three image applications: face mask detection (object detection/computer vision), virus detection in plants (image classification, agriculture sector), and pneumonia detection from X-ray images (image classification, medical field). RESULTS The CNN implementation was carried out, and a comprehensive comparison across the platforms was performed to identify performance, throughput, bottlenecks, and training time. The layer-wise execution of the CNN on the GPU and TPU is explained with a layer-wise analysis. The impact of the fully connected layer and the convolutional layer on the network is analyzed. The challenges faced during the acceleration process are discussed, and future work is identified.
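The differing impact of convolutional and fully connected layers noted in the results can be made concrete by counting parameters and multiply-accumulate (MAC) operations. A back-of-the-envelope sketch (the layer shapes are typical VGG-style values chosen for illustration, not the paper's exact networks):

```python
def conv_cost(c_in, c_out, k, h_out, w_out):
    """Parameters and MACs of one k x k convolutional layer."""
    params = c_out * (c_in * k * k + 1)            # weights + biases
    macs = c_out * c_in * k * k * h_out * w_out    # kernel reused at every position
    return params, macs

def fc_cost(n_in, n_out):
    """Parameters and MACs of one fully connected layer."""
    return n_out * (n_in + 1), n_out * n_in        # every weight used exactly once

p_conv, m_conv = conv_cost(64, 128, 3, 56, 56)     # illustrative conv layer
p_fc, m_fc = fc_cost(4096, 4096)                   # illustrative FC layer
print(p_conv, p_fc)   # FC holds far more parameters (memory-bound)
print(m_conv, m_fc)   # conv does far more compute (throughput-bound)
```

This asymmetry is why conv layers dominate GPU/TPU execution time while FC layers dominate model size, matching the layer-wise analysis described in the abstract.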
Affiliation(s)
- Aswathy Ravikumar
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Harini Sriraman
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- P. Maruthi Sai Saketh
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Saddikuti Lokesh
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Abhiram Karanam
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India