1
Alzubaidi L, Al-Dulaimi K, Salhi A, Alammar Z, Fadhel MA, Albahri AS, Alamoodi AH, Albahri OS, Hasan AF, Bai J, Gilliland L, Peng J, Branni M, Shuker T, Cutbush K, Santamaría J, Moreira C, Ouyang C, Duan Y, Manoufali M, Jomaa M, Gupta A, Abbosh A, Gu Y. Comprehensive review of deep learning in orthopaedics: Applications, challenges, trustworthiness, and fusion. Artif Intell Med 2024; 155:102935. [PMID: 39079201 DOI: 10.1016/j.artmed.2024.102935]
Abstract
Deep learning (DL) in orthopaedics has gained significant attention in recent years. Previous studies have shown that DL can be applied to a wide variety of orthopaedic tasks, including fracture detection, bone tumour diagnosis, implant recognition, and evaluation of osteoarthritis severity. The utilisation of DL is expected to increase, owing to its ability to present accurate diagnoses more efficiently than traditional methods in many scenarios. This reduces the time and cost of diagnosis for patients and orthopaedic surgeons. To our knowledge, no exclusive study has comprehensively reviewed all aspects of DL currently used in orthopaedic practice. This review addresses this knowledge gap using articles from Science Direct, Scopus, IEEE Xplore, and Web of Science between 2017 and 2023. The authors begin with the motivation for using DL in orthopaedics, including its ability to enhance diagnosis and treatment planning. The review then covers various applications of DL in orthopaedics, including fracture detection, detection of supraspinatus tears using MRI, osteoarthritis, prediction of types of arthroplasty implants, bone age assessment, and detection of joint-specific soft tissue disease. We also examine the challenges for implementing DL in orthopaedics, including the scarcity of data to train DL and the lack of interpretability, as well as possible solutions to these common pitfalls. Our work highlights the requirements to achieve trustworthiness in the outcomes generated by DL, including the need for accuracy, explainability, and fairness in the DL models. We pay particular attention to fusion techniques as one of the ways to increase trustworthiness, which have also been used to address the common multimodality in orthopaedics. Finally, we have reviewed the approval requirements set forth by the US Food and Drug Administration to enable the use of DL applications. As such, we aim to have this review function as a guide for researchers to develop a reliable DL application for orthopaedic tasks from scratch for use in the market.
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia.
- Khamael Al-Dulaimi
- Computer Science Department, College of Science, Al-Nahrain University, Baghdad, Baghdad 10011, Iraq; School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Asma Salhi
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Zaenab Alammar
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Mohammed A Fadhel
- Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- A S Albahri
- Technical College, Imam Ja'afar Al-Sadiq University, Baghdad, Iraq
- A H Alamoodi
- Institute of Informatics and Computing in Energy, Universiti Tenaga Nasional, Kajang 43000, Malaysia
- O S Albahri
- Australian Technical and Management College, Melbourne, Australia
- Amjad F Hasan
- Faculty of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Luke Gilliland
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Jing Peng
- Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Marco Branni
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Tristan Shuker
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Kenneth Cutbush
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén 23071, Spain
- Catarina Moreira
- Data Science Institute, University of Technology Sydney, Australia
- Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Ye Duan
- School of Computing, Clemson University, Clemson, SC 29631, USA
- Mohamed Manoufali
- CSIRO, Kensington, WA 6151, Australia; School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Mohammad Jomaa
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Ashish Gupta
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Amin Abbosh
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
2
Sarah P, Krishnapriya S, Saladi S, Karuna Y, Bavirisetti DP. A novel approach to brain tumor detection using K-Means++, SGLDM, ResNet50, and synthetic data augmentation. Front Physiol 2024; 15:1342572. [PMID: 39077759 PMCID: PMC11284281 DOI: 10.3389/fphys.2024.1342572]
Abstract
Introduction: Brain tumors are abnormal cell growths in the brain, posing significant treatment challenges. Accurate early detection using non-invasive methods is crucial for effective treatment. This research focuses on improving the early detection of brain tumors in MRI images through advanced deep-learning techniques. The primary goal is to identify the most effective deep-learning model for classifying brain tumors from MRI data, enhancing diagnostic accuracy and reliability. Methods: The proposed method for brain tumor classification integrates segmentation using K-means++, feature extraction from the Spatial Gray Level Dependence Matrix (SGLDM), and classification with ResNet50, along with synthetic data augmentation to enhance model robustness. Segmentation isolates tumor regions, while SGLDM captures critical texture information. The ResNet50 model then classifies the tumors. To improve the interpretability of the classification results, Grad-CAM is employed, providing visual explanations by highlighting influential regions in the MRI images. Results: On the Br35H::BrainTumorDetection2020 dataset, the proposed method outperformed existing state-of-the-art approaches in accuracy, sensitivity, and specificity, indicating higher precision in identifying and classifying brain tumors from MRI data. Discussion: The method's enhanced sensitivity ensures a greater detection rate of true-positive cases, while its improved specificity reduces false positives, thereby supporting clinical decision-making and patient care in neuro-oncology.
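The segmentation and texture stages of the pipeline above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: K-means++ here clusters raw pixel intensities, the co-occurrence statistics only approximate SGLDM (a single horizontal offset, two features), and the ResNet50 classification stage is omitted.

```python
import numpy as np

def kmeans_pp_1d(values, k, iters=20, seed=0):
    """K-means++ on scalar intensities: distance-weighted seeding, then Lloyd updates."""
    rng = np.random.default_rng(seed)
    centroids = [values[rng.integers(len(values))]]
    for _ in range(k - 1):
        # k-means++ seeding: pick the next centre with probability ∝ squared distance
        d2 = np.min((values[:, None] - np.array(centroids)[None, :]) ** 2, axis=1)
        centroids.append(values[rng.choice(len(values), p=d2 / d2.sum())])
    c = np.array(centroids, dtype=float)
    for _ in range(iters):
        labels = np.argmin((values[:, None] - c[None, :]) ** 2, axis=1)
        for j in range(k):
            if np.any(labels == j):
                c[j] = values[labels == j].mean()
    return labels, c

def glcm_features(img, levels=8):
    """Horizontal-neighbour grey-level co-occurrence matrix and two texture statistics."""
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    energy = float((p ** 2).sum())
    return contrast, energy

# Toy demo: cluster a synthetic slice and keep the brightest cluster as the mask
img = np.zeros((16, 16)); img[5:9, 5:9] = 1.0
labels, cent = kmeans_pp_1d(img.ravel(), k=2)
mask = (labels == cent.argmax()).reshape(img.shape)   # candidate tumor region
contrast, energy = glcm_features(img)
```

In the full method the masked region (rather than the whole slice) would feed the texture descriptor and the CNN classifier.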
Affiliation(s)
- Ponuku Sarah
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Srigiri Krishnapriya
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Saritha Saladi
- School of Electronics Engineering, VIT-AP University, Amaravati, India
- Yepuganti Karuna
- School of Electronics Engineering, VIT-AP University, Amaravati, India
- Durga Prasad Bavirisetti
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
3
Yu B, Kaku A, Liu K, Parnandi A, Fokas E, Venkatesan A, Pandit N, Ranganath R, Schambra H, Fernandez-Granda C. Quantifying impairment and disease severity using AI models trained on healthy subjects. NPJ Digit Med 2024; 7:180. [PMID: 38969786 PMCID: PMC11226623 DOI: 10.1038/s41746-024-01173-x]
Abstract
Automatic assessment of impairment and disease severity is a key challenge in data-driven medicine. We propose a framework to address this challenge, which leverages AI models trained exclusively on healthy individuals. The COnfidence-Based chaRacterization of Anomalies (COBRA) score exploits the decrease in confidence of these models when presented with impaired or diseased patients to quantify their deviation from the healthy population. We applied the COBRA score to address a key limitation of current clinical evaluation of upper-body impairment in stroke patients. The gold-standard Fugl-Meyer Assessment (FMA) requires in-person administration by a trained assessor for 30-45 minutes, which restricts monitoring frequency and precludes physicians from adapting rehabilitation protocols to the progress of each patient. The COBRA score, computed automatically in under one minute, is shown to be strongly correlated with the FMA on an independent test cohort for two different data modalities: wearable sensors (ρ = 0.814, 95% CI [0.700, 0.888]) and video (ρ = 0.736, 95% CI [0.584, 0.838]). To demonstrate the generalizability of the approach to other conditions, the COBRA score was also applied to quantify severity of knee osteoarthritis from magnetic-resonance imaging scans, again achieving significant correlation with an independent clinical assessment (ρ = 0.644, 95% CI [0.585, 0.696]).
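The core idea, scoring a subject by how much a healthy-trained model's confidence drops, can be illustrated with a toy numpy sketch. The assumption that "confidence" means the maximum softmax probability, and all logits below, are illustrative choices of mine, not taken from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cobra_score(logits_per_segment):
    """Anomaly score: 1 minus the model's mean top-class confidence
    across the segments (e.g. movements, image patches) of one subject."""
    conf = softmax(logits_per_segment).max(axis=-1)
    return 1.0 - conf.mean()

def spearman(x, y):
    """Spearman rank correlation via Pearson on ranks (no ties assumed)."""
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

healthy = np.array([[5.0, 0.0, 0.0], [4.0, 0.0, 0.0]])   # model is confident
impaired = np.array([[1.0, 0.9, 0.8], [0.5, 0.4, 0.6]])  # confidence collapses
assert cobra_score(healthy) < cobra_score(impaired)
```

In the study, per-subject scores like these are then rank-correlated (Spearman ρ, as in `spearman` above) against the clinical FMA scores.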
Affiliation(s)
- Boyang Yu
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Aakash Kaku
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Kangning Liu
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Avinash Parnandi
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Emily Fokas
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Anita Venkatesan
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Natasha Pandit
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Rajesh Ranganath
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY, 10012, USA
- Heidi Schambra
- Department of Neurology, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Department of Rehabilitation Medicine, NYU Grossman School of Medicine, 550 1st Ave, New York, NY, 10016, USA
- Carlos Fernandez-Granda
- Center for Data Science, New York University, 60 Fifth Ave, New York, NY, 10011, USA
- Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY, 10012, USA
4
Wang Z, Sui X, Song W, Xue F, Han W, Hu Y, Jiang J. Reinforcement learning for individualized lung cancer screening schedules: A nested case-control study. Cancer Med 2024; 13:e7436. [PMID: 38949177 PMCID: PMC11215689 DOI: 10.1002/cam4.7436]
Abstract
BACKGROUND The current guidelines for managing screen-detected pulmonary nodules offer rule-based recommendations for immediate diagnostic work-up or follow-up at intervals of 3, 6, or 12 months. Customized visit plans are lacking. PURPOSE To develop individualized screening schedules using reinforcement learning (RL) and evaluate the effectiveness of RL-based policy models. METHODS Using a nested case-control design, we retrospectively identified 308 patients with cancer who had positive screening results in at least two screening rounds in the National Lung Screening Trial. We established a control group that included cancer-free patients with nodules, matched (1:1) according to the year of cancer diagnosis. By generating 10,164 sequence decision episodes, we trained RL-based policy models, incorporating nodule diameter alone, combined with nodule appearance (attenuation and margin) and/or patient information (age, sex, smoking status, pack-years, and family history). We calculated rates of misdiagnosis, missed diagnosis, and delayed diagnosis, and compared the performance of RL-based policy models with rule-based follow-up protocols (National Comprehensive Cancer Network guideline; China Guideline for the Screening and Early Detection of Lung Cancer). RESULTS We identified significant interactions between certain variables (e.g., nodule shape and patient smoking pack-years, beyond those considered in guideline protocols) and the selection of follow-up testing intervals, thereby impacting the quality of the decision sequence. In validation, one RL-based policy model achieved rates of 12.3% for misdiagnosis, 9.7% for missed diagnosis, and 11.7% for delayed diagnosis. Compared with the two rule-based protocols, the three best-performing RL-based policy models consistently demonstrated optimal performance for specific patient subgroups based on disease characteristics (benign or malignant), nodule phenotypes (size, shape, and attenuation), and individual attributes. CONCLUSIONS This study highlights the potential of using an RL-based approach that is both clinically interpretable and performance-robust to develop personalized lung cancer screening schedules. Our findings present opportunities for enhancing the current cancer screening system.
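The sequential-decision framing can be illustrated with a generic tabular Q-learning sketch. The states (three risk bins), the four actions, and the reward function below are hypothetical stand-ins of mine; the study's actual state space, rewards, and 10,164 training episodes are far richer than this toy:

```python
import numpy as np

ACTIONS = ["workup_now", "follow_3m", "follow_6m", "follow_12m"]
N_STATES = 3          # hypothetical risk bins: low / medium / high
rng = np.random.default_rng(0)

def simulate(state, action):
    """Toy environment: high-risk nodules reward early work-up,
    low-risk nodules reward longer intervals (fewer unnecessary tests)."""
    risk = state / (N_STATES - 1)            # 0.0, 0.5, 1.0
    delay = {"workup_now": 0, "follow_3m": 3, "follow_6m": 6, "follow_12m": 12}[action]
    reward = -risk * delay - (1 - risk) * (2 if delay == 0 else 0)
    next_state = min(N_STATES - 1, state + (rng.random() < risk))
    return reward, next_state

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1
for episode in range(2000):
    s = rng.integers(N_STATES)
    for _ in range(5):                        # a few follow-up rounds per episode
        # epsilon-greedy action selection, then a standard Q-learning update
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        r, s2 = simulate(s, ACTIONS[a])
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```

After training, the greedy policy recommends immediate work-up for the high-risk bin (`Q[2].argmax()` is action 0) and a deferred interval for the low-risk bin, mirroring the kind of individualized schedule the paper learns from real trial data.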
Affiliation(s)
- Zixing Wang
- Peking University People's Hospital, Peking University Hepatology Institute, Beijing Key Laboratory of Hepatitis C and Immunotherapy for Liver Diseases, Beijing, China
- Department of Epidemiology and Biostatistics, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences & School of Basic Medicine, Peking Union Medical College, Beijing, China
- Xin Sui
- Department of Radiology, Peking Union Medical College Hospital, Beijing, China
- Wei Song
- Department of Radiology, Peking Union Medical College Hospital, Beijing, China
- Fang Xue
- Department of Epidemiology and Biostatistics, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences & School of Basic Medicine, Peking Union Medical College, Beijing, China
- Wei Han
- Department of Epidemiology and Biostatistics, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences & School of Basic Medicine, Peking Union Medical College, Beijing, China
- Yaoda Hu
- Department of Epidemiology and Biostatistics, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences & School of Basic Medicine, Peking Union Medical College, Beijing, China
- Jingmei Jiang
- Department of Epidemiology and Biostatistics, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences & School of Basic Medicine, Peking Union Medical College, Beijing, China
5
Yeh WC, Kuo CY, Chen JM, Ku TH, Yao DJ, Ho YC, Lin RY. Pioneering Data Processing for Convolutional Neural Networks to Enhance the Diagnostic Accuracy of Traditional Chinese Medicine Pulse Diagnosis for Diabetes. Bioengineering (Basel) 2024; 11:561. [PMID: 38927797 PMCID: PMC11201186 DOI: 10.3390/bioengineering11060561]
Abstract
Traditional Chinese medicine (TCM) has relied on pulse diagnosis as a cornerstone of healthcare assessment for thousands of years. Despite its long history and widespread use, TCM pulse diagnosis has faced challenges in diagnostic accuracy and consistency owing to its dependence on subjective interpretation and theoretical analysis. This study introduces an approach to enhance the accuracy of TCM pulse diagnosis for diabetes by leveraging deep learning algorithms, specifically LeNet and ResNet models, for pulse waveform analysis. LeNet and ResNet models were applied to analyze TCM pulse waveforms using a diverse dataset comprising both healthy individuals and patients with diabetes. The integration of these advanced algorithms with modern TCM pulse measurement instruments shows great promise in reducing practitioner-dependent variability and improving the reliability of diagnoses. This research bridges the gap between ancient wisdom and cutting-edge technology in healthcare. LeNet-F, which incorporates special feature extraction of the pulse based on TCM, showed improved training and test accuracies (73% and 67%, respectively, compared with LeNet's 70% and 65%). Moreover, the ResNet models consistently outperformed LeNet, with ResNet18-F achieving the highest accuracies: 82% in training and 74% in testing. The advanced preprocessing techniques and additional features contribute significantly to ResNet18-F's superior performance, indicating the importance of feature-engineering strategies. Furthermore, the study identifies potential avenues for future research, including optimizing preprocessing techniques to handle pulse waveform variations and noise levels, integrating additional time-frequency domain features, developing domain-specific feature selection algorithms, and expanding the scope to other diseases. These advancements aim to refine TCM pulse diagnosis, enhancing its accuracy and reliability while integrating it into modern technology for more effective healthcare approaches.
Affiliation(s)
- Wei-Chang Yeh
- Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan
- Chen-Yi Kuo
- Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan
- Da-Jeng Yao
- Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan
- Ya-Chi Ho
- Department of Power Mechanical Engineering, National Tsing Hua University, Hsinchu 30013, Taiwan
- Ruei-Yu Lin
- Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 30013, Taiwan
6
Kang H, Kim N, Ryu J. Attentional decoder networks for chest X-ray image recognition on high-resolution features. Comput Methods Programs Biomed 2024; 251:108198. [PMID: 38718718 DOI: 10.1016/j.cmpb.2024.108198]
Abstract
BACKGROUND AND OBJECTIVE This paper introduces an encoder-decoder-based attentional decoder network to recognize small-size lesions in chest X-ray images. In an encoder-only network, small-size lesions disappear during the down-sampling steps or become indistinguishable in the low-resolution feature maps. To address these issues, the proposed network processes images in an encoder-decoder architecture similar to the U-Net family and classifies lesions by globally pooling high-resolution feature maps. However, two obstacles prevent U-Net-style architectures from being extended to classification: (1) the up-sampling procedure consumes considerable resources, and (2) there is no effective pooling approach for high-resolution feature maps. METHODS Therefore, the proposed network employs a lightweight attentional decoder and a harmonic magnitude transform. The attentional decoder up-samples the given features with the low-resolution features as the key and value and the high-resolution features as the query. Since multi-scale features interact, the up-sampled features embody global context at high resolution while maintaining pathological locality. In addition, the harmonic magnitude transform is devised for pooling high-resolution feature maps in the frequency domain. We borrow the shift theorem of the Fourier transform to preserve the translation-invariant property and further reduce the parameters of the pooling layer with an efficient embedding strategy. RESULTS The proposed network achieves state-of-the-art classification performance on three public chest X-ray datasets: NIH, CheXpert, and MIMIC-CXR. CONCLUSIONS In conclusion, the proposed efficient encoder-decoder network recognizes small-size lesions well in chest X-ray images by efficiently up-sampling feature maps through an attentional decoder and processing high-resolution feature maps with the harmonic magnitude transform. We open-source our implementation at https://github.com/Lab-LVM/ADNet.
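The translation invariance that frequency-domain pooling relies on follows from the Fourier shift theorem: translating the input changes only the phase of the spectrum, never the magnitudes. A small numpy demonstration (illustrative, not the paper's implementation; exact invariance holds for circular shifts):

```python
import numpy as np

rng = np.random.default_rng(42)
feat = rng.random((8, 8))                      # stand-in for one feature map

# Fourier shift theorem: a (circular) translation multiplies the spectrum
# by a unit-modulus phase factor, leaving the magnitudes unchanged.
shifted = np.roll(feat, shift=(3, 5), axis=(0, 1))
mag = np.abs(np.fft.fft2(feat))
mag_shifted = np.abs(np.fft.fft2(shifted))
assert np.allclose(mag, mag_shifted)

# Pooling on the magnitude spectrum therefore yields a translation-invariant
# descriptor; keeping only a low-frequency corner also reduces parameters.
descriptor = mag[:4, :4].ravel()
```

This is why a classifier head built on spectral magnitudes can tolerate a lesion appearing at different positions in the feature map.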
Affiliation(s)
- Hankyul Kang
- Department of Artificial Intelligence, Ajou University, Suwon, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, Ulsan University, Seoul, Republic of Korea
- Jongbin Ryu
- Department of Artificial Intelligence, Ajou University, Suwon, Republic of Korea; Department of Software and Computer Engineering, Ajou University, Suwon, Republic of Korea
7
Zhang M, Ye Z, Yuan E, Lv X, Zhang Y, Tan Y, Xia C, Tang J, Huang J, Li Z. Imaging-based deep learning in kidney diseases: recent progress and future prospects. Insights Imaging 2024; 15:50. [PMID: 38360904 PMCID: PMC10869329 DOI: 10.1186/s13244-024-01636-5]
Abstract
Kidney diseases result from various causes, which can generally be divided into neoplastic and non-neoplastic diseases. Deep learning based on medical imaging is an established methodology for further data mining and an evolving field of expertise, which provides the possibility for precise management of kidney diseases. Recently, imaging-based deep learning has been widely applied to many clinical scenarios of kidney diseases including organ segmentation, lesion detection, differential diagnosis, surgical planning, and prognosis prediction, which can provide support for disease diagnosis and management. In this review, we will introduce the basic methodology of imaging-based deep learning and its recent clinical applications in neoplastic and non-neoplastic kidney diseases. Additionally, we further discuss its current challenges and future prospects and conclude that achieving data balance, addressing heterogeneity, and managing data size remain challenges for imaging-based deep learning. Meanwhile, the interpretability of algorithms, ethical risks, and barriers of bias assessment are also issues that require consideration in future development. We hope to provide urologists, nephrologists, and radiologists with clear ideas about imaging-based deep learning and reveal its great potential in clinical practice.
Critical relevance statement: The wide clinical applications of imaging-based deep learning in kidney diseases can help doctors to diagnose, treat, and manage patients with neoplastic or non-neoplastic renal diseases.
Key points:
- Imaging-based deep learning is widely applied to neoplastic and non-neoplastic renal diseases.
- Imaging-based deep learning improves the accuracy of the delineation, diagnosis, and evaluation of kidney diseases.
- Small datasets, varied lesion sizes, and related issues remain challenges for deep learning.
Affiliation(s)
- Meng Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Medical Equipment Innovation Research Center, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Med+X Center for Manufacturing, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Zheng Ye
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Enyu Yuan
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Xinyang Lv
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Yiteng Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Yuqi Tan
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Chunchao Xia
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Jing Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Jin Huang
- Medical Equipment Innovation Research Center, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Med+X Center for Manufacturing, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Zhenlin Li
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
8
Hassan J, Saeed SM, Deka L, Uddin MJ, Das DB. Applications of Machine Learning (ML) and Mathematical Modeling (MM) in Healthcare with Special Focus on Cancer Prognosis and Anticancer Therapy: Current Status and Challenges. Pharmaceutics 2024; 16:260. [PMID: 38399314 PMCID: PMC10892549 DOI: 10.3390/pharmaceutics16020260]
Abstract
The use of data-driven high-throughput analytical techniques, which has given rise to computational oncology, is undisputed. The widespread use of machine learning (ML) and mathematical modeling (MM)-based techniques is widely acknowledged. These two approaches have fueled the advancement in cancer research and eventually led to the uptake of telemedicine in cancer care. For diagnostic, prognostic, and treatment purposes concerning different types of cancer research, vast databases of varied information with manifold dimensions are required, and indeed, all this information can only be managed by an automated system developed utilizing ML and MM. In addition, MM is being used to probe the relationship between the pharmacokinetics and pharmacodynamics (PK/PD interactions) of anti-cancer substances to improve cancer treatment, and also to refine the quality of existing treatment models by being incorporated at all steps of research and development related to cancer and in routine patient care. This review will serve as a consolidation of the advancement and benefits of ML and MM techniques with a special focus on the area of cancer prognosis and anticancer therapy, leading to the identification of challenges (data quantity, ethical consideration, and data privacy) which are yet to be fully addressed in current studies.
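Where MM meets pharmacokinetics, the usual starting point is a compartment model; below is a minimal one-compartment oral-dosing sketch with first-order absorption and elimination, integrated with a simple Euler step. The dose and rate constants are illustrative, and the closed-form Bateman solution is included as a cross-check:

```python
import math

def one_compartment(dose=100.0, ka=1.0, ke=0.2, v=10.0, t_end=24.0, dt=0.01):
    """C(t) after a single oral dose: gut amount absorbed at rate ka into a
    central compartment of volume v, eliminated at rate ke (first-order)."""
    a_gut, a_central = dose, 0.0
    series = []
    t = 0.0
    while t <= t_end:
        series.append((t, a_central / v))     # concentration = amount / volume
        absorbed = ka * a_gut * dt
        eliminated = ke * a_central * dt
        a_gut -= absorbed
        a_central += absorbed - eliminated
        t += dt
    return series

def bateman(t, dose=100.0, ka=1.0, ke=0.2, v=10.0):
    """Closed-form solution of the same model, for comparison."""
    return dose * ka / (v * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

series = one_compartment()
t_peak = max(series, key=lambda p: p[1])[0]   # near ln(ka/ke)/(ka - ke)
```

Fitting such a model to measured concentrations is the simplest instance of the PK/PD refinement loop the review describes; richer models add compartments and nonlinear (e.g. saturable) elimination.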
Affiliation(s)
- Jasmin Hassan
- Drug Delivery & Therapeutics Lab, Dhaka 1212, Bangladesh
- Lipika Deka
- Faculty of Computing, Engineering and Media, De Montfort University, Leicester LE1 9BH, UK
- Md Jasim Uddin
- Department of Pharmaceutical Technology, Faculty of Pharmacy, Universiti Malaya, Kuala Lumpur 50603, Malaysia
- Diganta B. Das
- Department of Chemical Engineering, Loughborough University, Loughborough LE11 3TU, UK
9
Bekbolatova M, Mayer J, Ong CW, Toma M. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives. Healthcare (Basel) 2024; 12:125. [PMID: 38255014 PMCID: PMC10815906 DOI: 10.3390/healthcare12020125]
Abstract
Artificial intelligence (AI) has emerged as a crucial tool in healthcare with the primary aim of improving patient outcomes and optimizing healthcare delivery. By harnessing machine learning algorithms, natural language processing, and computer vision, AI enables the analysis of complex medical data. The integration of AI into healthcare systems aims to support clinicians, personalize patient care, and enhance population health, all while addressing the challenges posed by rising costs and limited resources. As a subdivision of computer science, AI focuses on the development of advanced algorithms capable of performing complex tasks that were once reliant on human intelligence. The ultimate goal is to achieve human-level performance with improved efficiency and accuracy in problem-solving and task execution, thereby reducing the need for human intervention. Various industries, including engineering, media/entertainment, finance, and education, have already reaped significant benefits by incorporating AI systems into their operations. Notably, the healthcare sector has witnessed rapid growth in the utilization of AI technology. Nevertheless, there remains untapped potential for AI to truly revolutionize the industry. It is important to note that despite concerns about job displacement, AI in healthcare should not be viewed as a threat to human workers. Instead, AI systems are designed to augment and support healthcare professionals, freeing up their time to focus on more complex and critical tasks. By automating routine and repetitive tasks, AI can alleviate the burden on healthcare professionals, allowing them to dedicate more attention to patient care and meaningful interactions. However, legal and ethical challenges must be addressed when embracing AI technology in medicine, alongside comprehensive public education to ensure widespread acceptance.
Affiliation(s)
- Molly Bekbolatova
- Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY 11568, USA
- Jonathan Mayer
- Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY 11568, USA
- Chi Wei Ong
- School of Chemistry, Chemical Engineering, and Biotechnology, Nanyang Technological University, 62 Nanyang Drive, Singapore 637459, Singapore
- Milan Toma
- Department of Osteopathic Manipulative Medicine, College of Osteopathic Medicine, New York Institute of Technology, Old Westbury, NY 11568, USA

10
Li D, Wang J, Yang J, Zhao J, Yang X, Cui Y, Zhang K. RTAU-Net: A novel 3D rectal tumor segmentation model based on dual path fusion and attentional guidance. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107842. [PMID: 37832426 DOI: 10.1016/j.cmpb.2023.107842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 09/18/2023] [Accepted: 10/01/2023] [Indexed: 10/15/2023]
Abstract
BACKGROUND AND OBJECTIVE According to the Global Cancer Statistics 2020, colorectal cancer has the third-highest diagnosis rate (10.0%) and the second-highest mortality rate (9.4%) among the 36 cancer types. Rectal cancer accounts for a large proportion of colorectal cancer. The size and shape of a rectal tumor can directly affect diagnosis and treatment decisions. Existing rectal tumor segmentation methods operate on two-dimensional slices, which cannot analyze a patient's tumor as a whole and lose the correlation between slices of an MRI volume, so their practical value is limited. METHODS In this paper, a three-dimensional rectal tumor segmentation model is proposed. First, image preprocessing is performed to reduce the effect of the unbalanced proportion between background and target regions and to improve image quality. Second, a dual-path fusion network is designed to extract both global features and local detail features of rectal tumors. The network includes two encoders: a residual encoder that enhances the spatial detail information and feature representation of the tumor, and a transformer encoder that extracts the global contour information of the tumor. In the decoding stage, the information extracted from the dual paths is merged and decoded. In addition, to handle the complex morphology and varying sizes of rectal tumors, a multi-scale fusion channel attention mechanism is designed to capture important contextual information at different scales. Finally, the 3D rectal tumor segmentation results are visualized. RESULTS RTAU-Net was evaluated on datasets provided by Shanxi Provincial Cancer Hospital and Xinhua Hospital. The experimental results showed that the Dice scores for tumor segmentation reached 0.7978 and 0.6792, respectively, improvements of 2.78% and 7.02% over the suboptimal model.
CONCLUSIONS Although the morphology of rectal tumors varies, RTAU-Net can precisely localize rectal tumors and learn the contour and details of tumors, which can relieve physicians' workload and improve diagnostic accuracy.
Affiliation(s)
- Dengao Li
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Juan Wang
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Jicheng Yang
- Computer Technology, Ocean University of China, Qingdao 266100, China
- Jumin Zhao
- College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Xiaotang Yang
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
- Yanfen Cui
- Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan 030013, China
- Kenan Zhang
- College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; Key Laboratory of Big Data Fusion Analysis and Application of Shanxi Province, Taiyuan University of Technology, Taiyuan, Shanxi, China; Intelligent Perception Engineering Technology Center of Shanxi, Taiyuan University of Technology, Taiyuan, Shanxi, China

11
Salmanpour MR, Hosseinzadeh M, Rezaeijo SM, Rahmim A. Fusion-based tensor radiomics using reproducible features: Application to survival prediction in head and neck cancer. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107714. [PMID: 37473589 DOI: 10.1016/j.cmpb.2023.107714] [Citation(s) in RCA: 31] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 05/19/2023] [Accepted: 07/07/2023] [Indexed: 07/22/2023]
Abstract
BACKGROUND Numerous features are commonly generated in radiomics applications as applied to medical imaging, and identification of robust radiomics features (RFs) can be an important step to derivation of reliable, reproducible solutions. In this work, we utilize a tensor radiomics (TR) framework, where numerous fusions are explored, to generate different flavours of RFs, and we aimed to identify RFs that are robust to fusion techniques in head and neck cancer. Overall, we aimed to predict progression-free survival (PFS) using Hybrid Machine Learning Systems (HMLS) and reproducible RFs. METHODS The study was performed on 408 patients with head and neck cancer from The Cancer Imaging Archive. After image preprocessing, 15 fusion techniques were employed to combine Positron Emission Tomography (PET) and Computed Tomography (CT) images. Subsequently, 215 RFs were extracted through a standardized radiomics software, with 17 'flavours' generated using PET-only, CT-only, and 15 fused PET&CT images. The variability of RFs across flavours was studied using the Intraclass Correlation Coefficient (ICC). Furthermore, the features were categorized into seven reliability groups, 106 reproducible RFs with ICC>0.75 were selected, highly correlated flavours were removed, Principal Component Analysis was used to convert 17 flavours to 1 attribute, the polynomial function was utilized to increase RFs, and Analysis of variance (ANOVA) was used to select the relevant attributes. Finally, 3 classifiers including Random Forest (RFC), Logistic regression (LR), and Multi-layer perceptron were applied to the preselected relevant attributes to predict binary PFS. In 5-fold cross-validation, 80% of 4 divisions were utilized to train the model, and the remaining 20% was utilized to evaluate the model. Further, the remaining fold was used for external nested testing. RESULTS Reliability analysis indicated that most morphological features belong to the high-reliability category. 
By contrast, local intensity and statistical features extracted from images belong to the low-reliability category. In the tensor framework, the highest 5-fold cross-validation accuracy of 76.7%±3.3% with an external nested testing of 70.6%±6.7% resulted from the reproducible TR+polynomial function+ANOVA+LR algorithm while the accuracy of 70.0%±4.2% with the external nested testing of 67.7%±4.9% was achieved through the PCA fusion+RFC (non-tensor paradigm). CONCLUSIONS This study demonstrated that using reproducible RFs as utilized within a tensor fusion radiomics framework, linked with ANOVA and LR, added value to prediction of progression-free survival outcome in head and neck cancer patients.
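As an illustration only (not the authors' code), the reported tensor-radiomics pipeline of polynomial feature expansion, ANOVA-based selection, and logistic regression under stratified 5-fold cross-validation can be sketched with scikit-learn; the data below are synthetic stand-ins for the 106 reproducible radiomics features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_patients, n_features = 408, 106   # 408 patients, 106 reproducible RFs, per the abstract
X = rng.normal(size=(n_patients, n_features))
y = (X[:, 0] + 0.5 * rng.normal(size=n_patients) > 0).astype(int)  # synthetic PFS label

# Polynomial expansion -> ANOVA feature selector -> logistic regression, as described
model = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, include_bias=False),
    SelectKBest(f_classif, k=50),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(round(scores.mean(), 3))
```

The `k=50` selection size and synthetic label are assumptions for the sketch; the study tuned these choices against real PET/CT radiomics flavours.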
Affiliation(s)
- Mohammad R Salmanpour
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada; Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC, Canada
- Mahdi Hosseinzadeh
- Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC, Canada; Department of Electrical & Computer Engineering, University of Tarbiat Modares, Tehran, Iran
- Seyed Masoud Rezaeijo
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada; Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada

12
Elshahawy M, Elnemr A, Oproescu M, Schiopu AG, Elgarayhi A, Elmogy MM, Sallah M. Early Melanoma Detection Based on a Hybrid YOLOv5 and ResNet Technique. Diagnostics (Basel) 2023; 13:2804. [PMID: 37685342 PMCID: PMC10486497 DOI: 10.3390/diagnostics13172804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 08/11/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
Skin cancer, specifically melanoma, is a serious health issue that arises from melanocytes, the cells that produce melanin, the pigment responsible for skin color. With skin cancer on the rise, the timely identification of skin lesions is crucial for effective treatment. However, the similarity between some skin lesions can result in misclassification, which is a significant problem. It is important to note that benign skin lesions are more prevalent than malignant ones, which can lead to overly cautious algorithms and incorrect results. As a solution, researchers are developing computer-assisted diagnostic tools to detect malignant tumors early. First, a new model based on the combination of "you only look once" (YOLOv5) and ResNet50 is proposed for melanoma detection and grading, trained on the Human Against Machine with 10,000 training images (HAM10000) dataset. Second, the feature maps integrate gradient change, which allows rapid inference, boosts precision, and reduces the number of hyperparameters in the model, making it smaller. Finally, the current YOLOv5 model is modified to obtain the desired outcomes by adding new classes for dermatoscopic images of typical pigmented skin lesions. The proposed approach improves melanoma detection with a real-time speed of 0.4 ms of non-maximum suppression (NMS) per image. The average performance metrics are 99.0%, 98.6%, 98.8%, 99.5%, 98.3%, and 98.7% for precision, recall, Dice similarity coefficient (DSC), accuracy, mean average precision (mAP) from 0.0 to 0.5, and mAP from 0.5 to 0.95, respectively. Compared to current melanoma detection approaches, the proposed approach makes more efficient use of deep features.
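Greedy non-maximum suppression, the post-processing step the abstract times per image, can be sketched in a few lines of NumPy (an illustrative implementation, not the authors' YOLOv5 code):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2)."""
    order = np.argsort(scores)[::-1]   # highest-confidence detections first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        # Intersection of the kept box with every remaining box
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter)
        order = order[1:][iou < iou_thresh]  # drop boxes overlapping the kept one
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)
print(kept)  # the two overlapping boxes collapse to one: [0, 2]
```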
Affiliation(s)
- Manar Elshahawy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnemr
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mihai Oproescu
- Faculty of Electronics, Communication, and Computer Science, University of Pitesti, 110040 Pitesti, Romania
- Adriana-Gabriela Schiopu
- Department of Manufacturing and Industrial Management, Faculty of Mechanics and Technology, University of Pitesti, 110040 Pitesti, Romania
- Ahmed Elgarayhi
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mohammed M. Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Mohammed Sallah
- Department of Physics, College of Sciences, University of Bisha, P.O. Box 344, Bisha 61922, Saudi Arabia

13
Dobrijević D, Vilotijević-Dautović G, Katanić J, Horvat M, Horvat Z, Pastor K. Rapid Triage of Children with Suspected COVID-19 Using Laboratory-Based Machine-Learning Algorithms. Viruses 2023; 15:1522. [PMID: 37515208 PMCID: PMC10383367 DOI: 10.3390/v15071522] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 07/05/2023] [Accepted: 07/06/2023] [Indexed: 07/30/2023] Open
Abstract
In order to limit the spread of the novel betacoronavirus (SARS-CoV-2), it is necessary to detect positive cases as soon as possible and isolate them. For this purpose, machine-learning algorithms, as a field of artificial intelligence, have been recognized as a promising tool. The aim of this study was to assess the utility of the most common machine-learning algorithms in the rapid triage of children with suspected COVID-19 using easily accessible and inexpensive laboratory parameters. A cross-sectional study was conducted on 566 children treated for respiratory diseases: 280 children with PCR-confirmed SARS-CoV-2 infection and 286 children with respiratory symptoms who were SARS-CoV-2 PCR-negative (control group). Six machine-learning algorithms, based on the blood laboratory data, were tested: random forest, support vector machine, linear discriminant analysis, artificial neural network, k-nearest neighbors, and decision tree. The training set was validated through stratified cross-validation, while the performance of each algorithm was confirmed by an independent test set. Random forest and support vector machine models demonstrated the highest accuracy of 85% and 82.1%, respectively. The models demonstrated better sensitivity than specificity and better negative predictive value than positive predictive value. The F1 score was higher for the random forest than for the support vector machine model, 85.2% and 82.3%, respectively. This study might have significant clinical applications, helping healthcare providers identify children with COVID-19 in the early stage, prior to PCR and/or antigen testing. Additionally, machine-learning algorithms could improve overall testing efficiency with no extra costs for the healthcare facility.
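The evaluation protocol described above — stratified cross-validation on a training split, then confirmation on an independent test set — might look as follows with scikit-learn; the data here are a synthetic stand-in for the blood-laboratory table (566 children, ~20 analytes), not the study's data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in for the laboratory features of 566 children
X, y = make_classification(n_samples=566, n_features=20, n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=1)

results = {}
for name, clf in [("random forest", RandomForestClassifier(random_state=1)),
                  ("SVM", SVC(random_state=1))]:
    # Stratified CV validates the training set; the held-out split confirms performance
    cv = cross_val_score(clf, X_tr, y_tr, cv=StratifiedKFold(5, shuffle=True, random_state=1))
    f1 = f1_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    results[name] = (cv.mean(), f1)
    print(f"{name}: CV accuracy {cv.mean():.3f}, held-out F1 {f1:.3f}")
```

The study additionally tested linear discriminant analysis, an artificial neural network, k-nearest neighbors, and a decision tree with the same protocol; the two models shown are the ones it found most accurate.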
Affiliation(s)
- Dejan Dobrijević
- Faculty of Medicine, University of Novi Sad, 21000 Novi Sad, Serbia
- Institute for Child and Youth Health Care of Vojvodina, 21000 Novi Sad, Serbia
- Gordana Vilotijević-Dautović
- Faculty of Medicine, University of Novi Sad, 21000 Novi Sad, Serbia
- Institute for Child and Youth Health Care of Vojvodina, 21000 Novi Sad, Serbia
- Jasmina Katanić
- Faculty of Medicine, University of Novi Sad, 21000 Novi Sad, Serbia
- Institute for Child and Youth Health Care of Vojvodina, 21000 Novi Sad, Serbia
- Mirjana Horvat
- Faculty of Civil Engineering Subotica, University of Novi Sad, 24000 Subotica, Serbia
- Zoltan Horvat
- Faculty of Civil Engineering Subotica, University of Novi Sad, 24000 Subotica, Serbia
- Kristian Pastor
- Faculty of Technology, University of Novi Sad, 21000 Novi Sad, Serbia

14
Apivanichkul K, Phasukkit P, Dankulchai P, Sittiwong W, Jitwatcharakomol T. Enhanced Deep-Learning-Based Automatic Left-Femur Segmentation Scheme with Attribute Augmentation. SENSORS (BASEL, SWITZERLAND) 2023; 23:5720. [PMID: 37420884 PMCID: PMC10305208 DOI: 10.3390/s23125720] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Revised: 05/27/2023] [Accepted: 06/14/2023] [Indexed: 07/09/2023]
Abstract
This research proposes augmenting cropped computed tomography (CT) slices with data attributes to enhance the performance of a deep-learning-based automatic left-femur segmentation scheme. The data attribute is the lying position for the left-femur model. In the study, the scheme was trained, validated, and tested using eight categories of CT input datasets for the left femur (F-I through F-VIII). Segmentation performance was assessed by the Dice similarity coefficient (DSC) and intersection over union (IoU), and the similarity between the predicted 3D reconstruction images and ground-truth images was determined by the spectral angle mapper (SAM) and structural similarity index measure (SSIM). The left-femur segmentation model achieved the highest DSC (88.25%) and IoU (80.85%) under category F-IV (using cropped and augmented CT input datasets with large feature coefficients), with SAM and SSIM values of 0.117-0.215 and 0.701-0.732, respectively. The novelty of this research lies in the use of attribute augmentation in medical image preprocessing to enhance the performance of the deep-learning-based automatic left-femur segmentation scheme.
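The two segmentation metrics used above, DSC and IoU, can be computed directly from binary masks; a self-contained NumPy sketch (illustrative only):

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())  # 2|A∩B| / (|A|+|B|)
    iou = inter / union                            # |A∩B| / |A∪B|
    return dice, iou

# Two partially overlapping 4x4 squares: 16 px each, 9 px overlap
pred = np.zeros((8, 8), int); pred[2:6, 2:6] = 1
truth = np.zeros((8, 8), int); truth[3:7, 3:7] = 1
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 4), round(iou, 4))  # 0.5625 0.3913
```

Note the two metrics are monotonically related (IoU = DSC / (2 - DSC)), which is why papers often report both for the same prediction.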
Affiliation(s)
- Kamonchat Apivanichkul
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- Pattarapong Phasukkit
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- King Mongkut Chaokhunthahan Hospital (KMCH), King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
- Pittaya Dankulchai
- Division of Radiation Oncology, Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
- Wiwatchai Sittiwong
- Division of Radiation Oncology, Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
- Tanun Jitwatcharakomol
- Division of Radiation Oncology, Department of Radiology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand

15
Terzi DS, Azginoglu N. In-Domain Transfer Learning Strategy for Tumor Detection on Brain MRI. Diagnostics (Basel) 2023; 13:2110. [PMID: 37371005 DOI: 10.3390/diagnostics13122110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 06/16/2023] [Accepted: 06/16/2023] [Indexed: 06/29/2023] Open
Abstract
Transfer learning has gained importance in areas where labeled data are scarce. However, it is still debated to what extent natural-image datasets, used as pre-training sources, contribute to success in other fields such as medical imaging. In this study, the effect of transfer learning for medical object detection was quantitatively compared using natural and medical image datasets. Transfer learning strategies based on five different weight-initialization methods were examined. The natural-image dataset MS COCO and the brain tumor dataset BraTS 2020 were used as transfer learning sources, and Gazi Brains 2020 was used as the target. Mask R-CNN was adopted as the deep learning architecture for its capability to handle both object detection and segmentation tasks. The experimental results show that transfer learning from the medical image dataset was 10% more successful and showed 24% better convergence performance than the MS COCO pre-trained model, although it contains less data. While the effect of data augmentation on the natural-image pre-trained model was 5%, it was measured as 2% for the same-domain pre-trained model. According to the most widely used object detection metric, transfer learning strategies using MS COCO weights and random weights showed the same object detection performance as data augmentation. The performance of the most effective strategies identified in the Mask R-CNN model was also tested with YOLOv8. Results showed that even when the source dataset is smaller than the natural dataset, in-domain transfer learning is more efficient than cross-domain transfer learning. Moreover, this study demonstrates the first use of the Gazi Brains 2020 dataset, which was generated to address the lack of labeled, high-quality brain MRI data in the medical field, for in-domain transfer learning.
Thus, knowledge transfer was carried out from the deep neural network, which was trained with brain tumor data and tested on a different brain tumor dataset.
Affiliation(s)
- Duygu Sinanc Terzi
- Department of Computer Engineering, Amasya University, Amasya 05100, Turkey
- Nuh Azginoglu
- Department of Computer Engineering, Kayseri University, Kayseri 38280, Turkey

16
Mohamed AAA, Hançerlioğullari A, Rahebi J, Ray MK, Roy S. Colon Disease Diagnosis with Convolutional Neural Network and Grasshopper Optimization Algorithm. Diagnostics (Basel) 2023; 13:1728. [PMID: 37238212 DOI: 10.3390/diagnostics13101728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 05/10/2023] [Accepted: 05/11/2023] [Indexed: 05/28/2023] Open
Abstract
This paper presents a robust colon cancer diagnosis method based on feature selection. The proposed method can be divided into three steps. In the first step, image features were extracted with convolutional neural networks: SqueezeNet, ResNet-50, AlexNet, and GoogleNet. The extracted feature vectors are large, and using all features is not appropriate for training the system. For this reason, a metaheuristic method is used in the second step to reduce the number of features: the grasshopper optimization algorithm selects the best features from the feature data. Finally, machine learning methods were used for accurate colon disease diagnosis. Two classification methods, the decision tree and the support vector machine, are applied for evaluation. Sensitivity, specificity, accuracy, precision, and the F1 score were used to evaluate the proposed method. For SqueezeNet features with the support vector machine, we obtained 99.34%, 99.41%, 99.12%, 98.91%, and 98.94% for sensitivity, specificity, accuracy, precision, and F1 score, respectively. Finally, we compared the suggested method's performance to that of other methods, including a 9-layer CNN, random forest, a 7-layer CNN, and DropBlock, and demonstrated that our solution outperformed them.
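A rough sketch of the wrapper-style selection the paper describes: candidate feature subsets scored by a classifier's cross-validated accuracy. Here a simple random search stands in for the grasshopper optimization algorithm (GOA updates candidate masks via swarm dynamics instead), and the CNN features are simulated — none of this is the authors' code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Stand-in for CNN features; a real pipeline would extract them from SqueezeNet etc.
X, y = make_classification(n_samples=200, n_features=64, n_informative=10, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

# Random-search stand-in for GOA: propose boolean masks, keep the fittest
best_mask, best_fit = None, -1.0
for _ in range(30):
    mask = rng.random(X.shape[1]) < 0.3   # candidate subset, ~30% of features
    fit = fitness(mask)
    if fit > best_fit:
        best_mask, best_fit = mask, fit

print(int(best_mask.sum()), round(best_fit, 3))
```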
Affiliation(s)
- Amna Ali A Mohamed
- Department of Material Science and Engineering, University of Kastamonu, Kastamonu 37150, Turkey
- Javad Rahebi
- Department of Software Engineering, Istanbul Topkapi University, Istanbul 34087, Turkey
- Mayukh K Ray
- Department of Physics, Amity Institute of Applied Sciences, Amity University, Kolkata 700135, India
- Sudipta Roy
- Artificial Intelligence & Data Science, Jio Institute, Navi Mumbai 410206, India

17
Asif S, Zhao M, Chen X, Zhu Y. BMRI-NET: A Deep Stacked Ensemble Model for Multi-class Brain Tumor Classification from MRI Images. Interdiscip Sci 2023:10.1007/s12539-023-00571-1. [PMID: 37171681 DOI: 10.1007/s12539-023-00571-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 04/26/2023] [Accepted: 04/27/2023] [Indexed: 05/13/2023]
Abstract
Brain tumors are one of the most dangerous health problems for adults and children in many countries. Any failure in the diagnosis of brain tumors may shorten human life. Accurate and timely diagnosis provides appropriate treatment and increases the patient's chances of survival. Due to the different characteristics of tumors, classifying the three types of brain tumors is a challenging problem. With the advent of deep learning (DL) models, three-class brain tumor classification has been addressed. However, the accuracy of these methods requires significant improvement. The main goal of this article is to design a new method for classifying the three types of brain tumors with extremely high accuracy. We propose a novel deep stacked ensemble model called "BMRI-NET" that can detect brain tumors from MR images with high accuracy and recall. The proposed stacked ensemble adapts three pre-trained models, namely DenseNet201, ResNet152V2, and InceptionResNetV2, to improve generalization capability. We combine the decisions of the three models using the stacking technique to obtain final results that are much more accurate than the individual models. The efficacy of the proposed model is evaluated on the Figshare brain MRI dataset of three types of brain tumors, consisting of 3064 images. The experimental results highlight the robustness of the proposed BMRI-NET model, which achieves an overall classification accuracy of 98.69% and an average recall, F1 score, and MCC of 98.33%, 98.40%, and 97.95%, respectively. The results indicate that the proposed BMRI-NET model is superior to existing methods and can assist healthcare professionals in the diagnosis of brain tumors.
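The stacking idea — several base learners whose predictions are combined by a meta-learner — can be sketched with scikit-learn's StackingClassifier; generic classifiers on a toy image dataset stand in for the three CNN backbones (an illustration, not BMRI-NET):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Three diverse base learners stand in for the three pre-trained CNN backbones
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier()),
                ("lr", LogisticRegression(max_iter=5000))],
    final_estimator=LogisticRegression(max_iter=5000),  # meta-learner combining base outputs
)
stack.fit(X_tr, y_tr)  # internally cross-validates base predictions to train the meta-learner
acc = accuracy_score(y_te, stack.predict(X_te))
print(round(acc, 3))
```

The meta-learner sees out-of-fold predictions from each base model rather than raw features, which is what lets the ensemble outperform any single member.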
Affiliation(s)
- Sohaib Asif
- School of Computer Science and Engineering, Central South University, Changsha, China
- Ming Zhao
- School of Computer Science and Engineering, Central South University, Changsha, China
- Xuehan Chen
- School of Computer Science and Engineering, Central South University, Changsha, China
- Yusen Zhu
- School of Mathematics, Hunan University, Changsha, China

18
Salmanpour MR, Rezaeijo SM, Hosseinzadeh M, Rahmim A. Deep versus Handcrafted Tensor Radiomics Features: Prediction of Survival in Head and Neck Cancer Using Machine Learning and Fusion Techniques. Diagnostics (Basel) 2023; 13:1696. [PMID: 37238180 PMCID: PMC10217462 DOI: 10.3390/diagnostics13101696] [Citation(s) in RCA: 28] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Revised: 03/22/2023] [Accepted: 04/21/2023] [Indexed: 05/28/2023] Open
Abstract
BACKGROUND Although handcrafted radiomics features (RF) are commonly extracted via radiomics software, employing deep features (DF) extracted from deep learning (DL) algorithms merits significant investigation. Moreover, a "tensor" radiomics paradigm, where various flavours of a given feature are generated and explored, can provide added value. We aimed to employ conventional and tensor DFs and compare their outcome prediction performance to conventional and tensor RFs. METHODS 408 patients with head and neck cancer were selected from TCIA. PET images were first registered to CT, enhanced, normalized, and cropped. We employed 15 image-level fusion techniques (e.g., the dual-tree complex wavelet transform (DTCWT)) to combine PET and CT images. Subsequently, 215 RFs were extracted from each tumor in 17 images (or flavours), including CT only, PET only, and 15 fused PET-CT images, through the standardized SERA radiomics software. Furthermore, a 3-dimensional autoencoder was used to extract DFs. To predict the binary progression-free survival outcome, an end-to-end CNN algorithm was first employed. Subsequently, we applied conventional and tensor DFs vs. RFs, as extracted from each image, to three standalone classifiers, namely the multilayer perceptron (MLP), random forest, and logistic regression (LR), linked with dimension-reduction algorithms. RESULTS DTCWT fusion linked with the CNN resulted in accuracies of 75.6 ± 7.0% and 63.4 ± 6.7% in five-fold cross-validation and external nested testing, respectively. For the tensor RF framework, polynomial transform algorithms + the analysis of variance feature selector (ANOVA) + LR enabled 76.67 ± 3.3% and 70.6 ± 6.7% in the mentioned tests. For the tensor DF framework, PCA + ANOVA + MLP arrived at 87.0 ± 3.5% and 85.3 ± 5.2% in both tests.
CONCLUSIONS This study showed that tensor DF combined with proper machine learning approaches enhanced survival prediction performance compared to conventional DF, tensor and conventional RF, and end-to-end CNN frameworks.
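The "tensor" step — collapsing the 17 flavours of each feature into a single attribute via PCA — can be sketched in NumPy/scikit-learn. The dimensions follow the abstract; the tensor itself is random stand-in data, and per-feature projection onto one principal component is one plausible reading of the described reduction:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_patients, n_flavours, n_features = 60, 17, 215   # flavours: CT, PET, and 15 fusions
tensor = rng.normal(size=(n_patients, n_flavours, n_features))

# Collapse the flavour axis: for each feature, project its 17 flavours onto 1 component
collapsed = np.empty((n_patients, n_features))
for f in range(n_features):
    collapsed[:, f] = PCA(n_components=1).fit_transform(tensor[:, :, f]).ravel()

print(collapsed.shape)  # one attribute per feature: (60, 215)
```

The collapsed matrix then feeds the ANOVA + classifier stage exactly as a conventional (single-flavour) feature matrix would.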
Affiliation(s)
- Mohammad R. Salmanpour
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Technological Virtual Collaboration (TECVICO CORP.), Vancouver, BC V5E 3J7, Canada
- Seyed Masoud Rezaeijo
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz 6135715794, Iran
- Mahdi Hosseinzadeh
- Technological Virtual Collaboration (TECVICO CORP.), Vancouver, BC V5E 3J7, Canada
- Department of Electrical & Computer Engineering, University of Tarbiat Modares, Tehran 14115111, Iran
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z4, Canada

19
Hosseinzadeh M, Gorji A, Fathi Jouzdani A, Rezaeijo SM, Rahmim A, Salmanpour MR. Prediction of Cognitive Decline in Parkinson's Disease Using Clinical and DAT SPECT Imaging Features, and Hybrid Machine Learning Systems. Diagnostics (Basel) 2023; 13:1691. [PMID: 37238175 PMCID: PMC10217464 DOI: 10.3390/diagnostics13101691] [Citation(s) in RCA: 34] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Revised: 04/28/2023] [Accepted: 05/04/2023] [Indexed: 05/28/2023] Open
Abstract
BACKGROUND We aimed to predict Montreal Cognitive Assessment (MoCA) scores in Parkinson's disease patients at year 4 using handcrafted radiomics (RF), deep (DF), and clinical (CF) features at year 0 (baseline) applied to hybrid machine learning systems (HMLSs). METHODS 297 patients were selected from the Parkinson's Progression Markers Initiative (PPMI) database. The standardized SERA radiomics software and a 3D encoder were employed to extract RFs and DFs from single-photon emission computed tomography (DAT-SPECT) images, respectively. Patients with MoCA scores over 26 were labelled as normal; otherwise, they were labelled as abnormal. Moreover, we applied different combinations of feature sets to HMLSs, including Analysis of Variance (ANOVA) feature selection linked with eight classifiers, including the Multi-Layer Perceptron (MLP), K-Nearest Neighbors (KNN), Extra Trees Classifier (ETC), XGBoost Classifier (XGBC), and others. We employed 80% of the patients to select the best model in a 5-fold cross-validation process, and the remaining 20% were employed for hold-out testing. RESULTS For the sole usage of RFs and DFs, ANOVA and MLP resulted in averaged accuracies of 59 ± 3% and 65 ± 4% in 5-fold cross-validation, respectively, with hold-out testing accuracies of 59 ± 1% and 56 ± 2%, respectively. For sole CFs, a higher performance of 77 ± 8% in 5-fold cross-validation and a hold-out testing performance of 82 ± 2% were obtained from ANOVA and ETC. RF+DF obtained a performance of 64 ± 7%, with a hold-out testing performance of 59 ± 2%, through ANOVA and XGBC. Usage of CF+RF, CF+DF, and RF+DF+CF enabled the highest averaged accuracies of 78 ± 7%, 78 ± 9%, and 76 ± 8% in 5-fold cross-validation, and hold-out testing accuracies of 81 ± 2%, 82 ± 2%, and 83 ± 4%, respectively.
CONCLUSIONS We demonstrated that CFs vitally contribute to predictive performance, and combining them with appropriate imaging features and HMLSs can result in the best prediction performance.
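The ANOVA-plus-classifier pipeline described above can be sketched in miniature. The snippet below implements only the one-way ANOVA F-score ranking step in plain Python on synthetic data; the classifier stage, the SERA/PPMI features, and the 5-fold protocol are omitted, and nothing here reproduces the study's actual models.

```python
# Minimal sketch: one-way ANOVA F-score per feature for a class label,
# then keep the top-k features -- the selection step the study pairs
# with classifiers such as MLP or Extra Trees. Data here is synthetic.

def anova_f(groups):
    """F-statistic for one feature, given per-class lists of values."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_top_k(X, y, k):
    """Rank features by ANOVA F-score; return the k best column indices."""
    scores = []
    for j in range(len(X[0])):
        groups = {}
        for row, label in zip(X, y):
            groups.setdefault(label, []).append(row[j])
        scores.append((anova_f(list(groups.values())), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Feature 0 separates the classes; feature 1 is noise.
X = [[0.1, 5.0], [0.2, 4.9], [0.15, 5.2],   # class 0
     [0.9, 5.1], [1.0, 4.8], [0.95, 5.0]]   # class 1
y = [0, 0, 0, 1, 1, 1]
print(select_top_k(X, y, 1))  # feature 0 has by far the larger F-score
```

The selected columns would then feed whichever downstream classifier is being evaluated.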
Affiliation(s)
- Mahdi Hosseinzadeh
- Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC V5E 3J7, Canada
- Department of Electrical & Computer Engineering, University of Tarbiat Modares, Tehran 14115111, Iran
- Arman Gorji
- Neuroscience and Artificial Intelligence Research Group (NAIRG), Student Research Committee, Hamadan University of Medical Sciences, Hamadan 6517838736, Iran
- Ali Fathi Jouzdani
- Neuroscience and Artificial Intelligence Research Group (NAIRG), Student Research Committee, Hamadan University of Medical Sciences, Hamadan 6517838736, Iran
- Seyed Masoud Rezaeijo
- Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz 6135715794, Iran
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Mohammad R. Salmanpour
- Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC V5E 3J7, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
20. Khalid A, Senan EM, Al-Wagih K, Ali Al-Azzam MM, Alkhraisha ZM. Hybrid Techniques of X-ray Analysis to Predict Knee Osteoarthritis Grades Based on Fusion Features of CNN and Handcrafted. Diagnostics (Basel) 2023; 13:1609. [PMID: 37175000] [PMCID: PMC10178472] [DOI: 10.3390/diagnostics13091609]
Abstract
Knee osteoarthritis (KOA) is a chronic disease that impedes movement, especially in the elderly, affecting more than 5% of people worldwide. KOA progresses through many stages, from mild grades that can be treated to severe grades in which the knee must be replaced. Early diagnosis of KOA is therefore essential to avoid progression to the advanced stages. X-rays are one of the vital techniques for early detection of knee abnormalities, but distinguishing Kellgren-Lawrence (KL) grades requires highly experienced doctors and radiologists; artificial intelligence techniques can address the shortcomings of manual diagnosis. This study developed three methodologies for X-ray analysis of both the Osteoarthritis Initiative (OAI) and Rani Channamma University (RCU) datasets for diagnosing KOA and discriminating between KL grades. In all methodologies, the Principal Component Analysis (PCA) algorithm was applied after the CNN models to delete unimportant and redundant features and keep the essential ones. The first methodology analyses X-rays and diagnoses the KOA grade using the VGG-19-FFNN and ResNet-101-FFNN systems. The second methodology diagnoses the KOA grade with a Feed-Forward Neural Network (FFNN) based on the combined features of VGG-19 and ResNet-101, before and after PCA. The third methodology diagnoses the KOA grade with an FFNN based on the fusion of VGG-19 features with handcrafted features, and of ResNet-101 features with handcrafted features. For the OAI dataset with the fused VGG-19 and handcrafted features, the FFNN obtained an AUC of 99.25%, an accuracy of 99.1%, a sensitivity of 98.81%, a specificity of 100%, and a precision of 98.24%. For the RCU dataset with the fused VGG-19 and handcrafted features, the FFNN obtained an AUC of 99.07%, an accuracy of 98.20%, a sensitivity of 98.16%, a specificity of 99.73%, and a precision of 98.08%.
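The fuse-then-reduce idea above (CNN features concatenated with handcrafted features, then reduced) can be sketched as follows. Power iteration for a single principal axis stands in for full PCA, and every feature vector is a synthetic assumption, not actual VGG-19 or OAI data.

```python
# Minimal sketch of serial feature fusion followed by a one-component
# PCA stand-in (power iteration on the centered data). Vectors are toy.

def fuse(deep, handcrafted):
    """Serial fusion: concatenate the two feature vectors."""
    return deep + handcrafted

def top_pc(X, iters=200):
    """First principal axis of centered data via power iteration."""
    d = len(X[0])
    means = [sum(row[j] for row in X) / len(X) for j in range(d)]
    C = [[r[j] - means[j] for j in range(d)] for r in X]
    v = [1.0] * d
    for _ in range(iters):
        # w = (C^T C) v, then normalise.
        Cv = [sum(c[j] * v[j] for j in range(d)) for c in C]
        w = [sum(C[i][j] * Cv[i] for i in range(len(C))) for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

deep = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # "CNN" features
hand = [[3.0], [2.9], [1.0], [1.1]]                      # "handcrafted"
fused = [fuse(d, h) for d, h in zip(deep, hand)]
axis = top_pc(fused)
scores = [sum(f[j] * axis[j] for j in range(3)) for f in fused]
print(len(fused[0]), len(axis))  # 3-dimensional fused vectors, one axis
```

The reduced scores would then feed the FFNN classifier stage.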
Affiliation(s)
- Ahmed Khalid
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
- Khalil Al-Wagih
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
21. Mohammed AS, Hasanaath AA, Latif G, Bashar A. Knee Osteoarthritis Detection and Severity Classification Using Residual Neural Networks on Preprocessed X-ray Images. Diagnostics (Basel) 2023; 13:1380. [PMID: 37189481] [DOI: 10.3390/diagnostics13081380]
Abstract
Knee osteoarthritis (KOA) is one of the most common and challenging medical conditions in older people. Manual diagnosis involves observing X-ray images of the knee area and classifying them under five grades using the Kellgren-Lawrence (KL) system. This requires the physician's expertise and suitable experience, takes considerable time, and even then the diagnosis can be prone to errors. Therefore, researchers in the ML/DL domain have employed deep neural network (DNN) models to identify and classify KOA images in an automated, faster, and more accurate manner. To this end, we propose the application of six pretrained DNN models, namely VGG16, VGG19, ResNet101, MobileNetV2, InceptionResNetV2, and DenseNet121, for KOA diagnosis using images obtained from the Osteoarthritis Initiative (OAI) dataset. More specifically, we perform two types of classification: a binary classification that detects the presence or absence of KOA, and a three-class classification of KOA severity. For a comparative analysis, we experiment on three datasets (Dataset I, Dataset II, and Dataset III) with five, two, and three classes of KOA images, respectively. We achieved maximum classification accuracies of 69%, 83%, and 89%, respectively, with the ResNet101 DNN model. Our results show an improved performance over existing work in the literature.
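The two labelling granularities the study compares can be sketched by collapsing the five KL grades into binary and three-class targets. The abstract does not state the exact grade groupings used, so the cut-points below are illustrative assumptions only.

```python
# Minimal sketch: the five KL grades (0-4) relabelled for the binary
# "KOA present?" task and for a coarser three-class severity task.
# The cut-points are assumptions, not the authors' published grouping.

def to_binary(kl_grade):
    """Illustrative: KL 0-1 -> no KOA (0); KL 2-4 -> KOA present (1)."""
    return 0 if kl_grade <= 1 else 1

def to_three_class(kl_grade):
    """Illustrative grouping: none (0-1), moderate (2-3), severe (4)."""
    if kl_grade <= 1:
        return "none"
    return "moderate" if kl_grade <= 3 else "severe"

grades = [0, 1, 2, 3, 4]
print([to_binary(g) for g in grades])       # [0, 0, 1, 1, 1]
print([to_three_class(g) for g in grades])  # ['none', 'none', 'moderate', 'moderate', 'severe']
```

Either relabelled target would then be paired with the pretrained-DNN classifiers.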
Affiliation(s)
- Abdul Sami Mohammed
- Computer Engineering Department, Prince Mohammad Bin Fahd University, Al-Khobar 31952, Saudi Arabia
- Ahmed Abul Hasanaath
- Computer Science Department, Prince Mohammad Bin Fahd University, Al-Khobar 31952, Saudi Arabia
- Ghazanfar Latif
- Computer Science Department, Prince Mohammad Bin Fahd University, Al-Khobar 31952, Saudi Arabia
- Abul Bashar
- Computer Engineering Department, Prince Mohammad Bin Fahd University, Al-Khobar 31952, Saudi Arabia
22. Cheng CT, Hsu CP, Ooyang CH, Chou CY, Lin NY, Lin JY, Ku YK, Lin HS, Kao SK, Chen HW, Wu YT, Liao CH. Evaluation of ensemble strategy on the development of multiple view ankle fracture detection algorithm. Br J Radiol 2023; 96:20220924. [PMID: 36930721] [PMCID: PMC10161902] [DOI: 10.1259/bjr.20220924]
Abstract
OBJECTIVE To identify the feasibility and efficiency of deep convolutional neural networks (DCNNs) in the detection of ankle fractures and to explore ensemble strategies that apply multiple radiographic projections. Ankle radiographs (AXRs) are the primary tool used to diagnose ankle fractures; applying DCNN algorithms to AXRs can potentially improve the diagnostic accuracy and efficiency of detecting them. METHODS A DCNN was trained using a trauma image registry of 3102 AXRs. We separately trained the DCNN on anteroposterior (AP) and lateral (Lat) AXRs. Different ensemble methods, "sum-up," "severance-OR," and "severance-Both," were evaluated to incorporate the results of the models trained on different projections of view. RESULTS The AP/Lat model's individual sensitivity, specificity, positive predictive value, accuracy, and F1 score were 79%/84%, 90%/86%, 88%/86%, 83%/85%, and 0.816/0.850, respectively. The area under the receiver operating characteristic curve (AUROC) of the AP/Lat model was 0.890/0.894 (95% CI: 0.826-0.954/0.831-0.953). The sum-up method generated balanced results by applying both models and obtained an AUROC of 0.917 (95% CI: 0.863-0.972) with 87% accuracy. The severance-OR method achieved a better sensitivity of 90%, and the severance-Both method a high specificity of 94%. CONCLUSION Ankle fractures on AXRs can be identified by the trained DCNN algorithm. The choice of ensemble method can depend on the clinical situation, which may help clinicians detect ankle fractures efficiently without interrupting the current clinical pathway. ADVANCES IN KNOWLEDGE This study demonstrated different ensemble strategies for AI algorithms on multiple-view AXRs to optimize performance for various clinical needs.
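The three ensemble rules above can be sketched directly. Each trained view model is reduced here to a single fracture probability; the 0.5 threshold and the exact rule definitions are assumptions inferred from the abstract, not the authors' code.

```python
# Minimal sketch of the three multi-view ensemble rules: average the
# AP and lateral probabilities ("sum-up"), flag if either view is
# positive ("severance-OR"), or only if both are ("severance-Both").

THRESH = 0.5  # assumed decision threshold

def sum_up(p_ap, p_lat):
    """Average the two probabilities, then threshold (balanced rule)."""
    return (p_ap + p_lat) / 2 >= THRESH

def severance_or(p_ap, p_lat):
    """Positive if EITHER view is positive (favours sensitivity)."""
    return p_ap >= THRESH or p_lat >= THRESH

def severance_both(p_ap, p_lat):
    """Positive only if BOTH views are positive (favours specificity)."""
    return p_ap >= THRESH and p_lat >= THRESH

# One view confident, the other not: the rules disagree as expected.
print(sum_up(0.9, 0.2), severance_or(0.9, 0.2), severance_both(0.9, 0.2))
```

This mirrors why the abstract reports OR favouring sensitivity and Both favouring specificity.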
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
- Chih-Po Hsu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
- Chun-Hsiang Ooyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
- Chia-Yi Chou
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
- Nai-Yu Lin
- Department of Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
- Jia-Yen Lin
- School of Medicine, Chang Gung University, Taoyuan, Taiwan
- Yi-Kang Ku
- Department of Medical Imaging and Intervention, New Taipei Municipal TuCheng Hospital, Chang Gung Medical Foundation, New Taipei, Taiwan
- Hou-Shian Lin
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
- Shao-Ku Kao
- Department of Electrical Engineering and Green Technology Research Center, School of Electrical and Computer Engineering, College of Engineering, Chang Gung University, Taoyuan, Taiwan
- Huan-Wu Chen
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan
- Yu-Tung Wu
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
- Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou; Chang Gung University, Taoyuan, Taiwan
23. Local-Ternary-Pattern-Based Associated Histogram Equalization Technique for Cervical Cancer Detection. Diagnostics (Basel) 2023; 13:548. [PMID: 36766652] [PMCID: PMC9914420] [DOI: 10.3390/diagnostics13030548]
Abstract
Cervical cancer is a leading cause of mortality in women worldwide every year. This cancer can be cured if it is detected early and patients are treated promptly. This study proposes a new strategy for the detection of cervical cancer using cervigram images. The associated histogram equalization (AHE) technique is used to improve the edges of the cervical image, and the finite ridgelet transform is then used to generate a multi-resolution image. From this transformed multi-resolution cervical image, features such as ridgelets, gray-level run-length matrices, moment invariants, and the enhanced local ternary pattern are extracted. A feed-forward back-propagation neural network is used to train and test these extracted features in order to classify the cervical images as normal or abnormal. To detect and segment cancer regions, morphological procedures are applied to the abnormal cervical images. The reported performance metrics include 98.11% sensitivity, 98.97% specificity, 99.19% accuracy, a PPV of 98.88%, an NPV of 91.91%, a positive likelihood ratio (LPR) of 141.02, a negative likelihood ratio (LNR) of 0.0836, 98.13% precision, 97.15% FPs, and 90.89% FNs. The simulation outcomes show that the proposed method detects and segments cervical cancer better than the traditional methods.
24. Zhang D, Wang H, Deng J, Wang T, Shen C, Feng J. CAMS-Net: An attention-guided feature selection network for rib segmentation in chest X-rays. Comput Biol Med 2023. [DOI: 10.1016/j.compbiomed.2023.106702]
25. Hung KF, Ai QYH, Wong LM, Yeung AWK, Li DTS, Leung YY. Current Applications of Deep Learning and Radiomics on CT and CBCT for Maxillofacial Diseases. Diagnostics (Basel) 2022; 13:110. [PMID: 36611402] [PMCID: PMC9818323] [DOI: 10.3390/diagnostics13010110]
Abstract
The increasing use of computed tomography (CT) and cone beam computed tomography (CBCT) in oral and maxillofacial imaging has driven the development of deep learning and radiomics applications to assist clinicians in early diagnosis, accurate prognosis prediction, and efficient treatment planning of maxillofacial diseases. This narrative review aimed to provide an up-to-date overview of the current applications of deep learning and radiomics on CT and CBCT for the diagnosis and management of maxillofacial diseases. Based on current evidence, a wide range of deep learning models on CT/CBCT images have been developed for automatic diagnosis, segmentation, and classification of jaw cysts and tumors, cervical lymph node metastasis, salivary gland diseases, temporomandibular joint (TMJ) disorders, maxillary sinus pathologies, mandibular fractures, and dentomaxillofacial deformities, while CT-/CBCT-derived radiomics applications mainly focused on occult lymph node metastasis in patients with oral cancer, malignant salivary gland tumors, and TMJ osteoarthritis. Most of these models showed high performance, and some of them even outperformed human experts. The models with performance on par with human experts have the potential to serve as clinically practicable tools to achieve the earliest possible diagnosis and treatment, leading to a more precise and personalized approach for the management of maxillofacial diseases. Challenges and issues, including the lack of generalizability and explainability of deep learning models and the uncertainty in the reproducibility and stability of radiomic features, should be overcome to gain the trust of patients, providers, and healthcare organizers for daily clinical use of these models.
Affiliation(s)
- Kuo Feng Hung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Qi Yong H. Ai
- Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Lun M. Wong
- Imaging and Interventional Radiology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR, China
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Dion Tik Shun Li
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Yiu Yan Leung
- Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
26. Hamza A, Khan MA, Alhaisoni M, Al Hejaili A, Shaban KA, Alsubai S, Alasiry A, Marzougui M. D2BOF-COVIDNet: A Framework of Deep Bayesian Optimization and Fusion-Assisted Optimal Deep Features for COVID-19 Classification Using Chest X-ray and MRI Scans. Diagnostics (Basel) 2022; 13:101. [PMID: 36611393] [PMCID: PMC9818184] [DOI: 10.3390/diagnostics13010101]
Abstract
BACKGROUND AND OBJECTIVE In 2019, a coronavirus disease (COVID-19) was detected in China that has affected millions of people around the world. On 11 March 2020, the WHO declared the disease a pandemic, and more than 200 countries have since been affected. Manual diagnosis using chest X-ray (CXR) images and magnetic resonance imaging (MRI) is time consuming and always requires an expert; therefore, researchers have introduced several computerized techniques using computer vision methods. Recent computerized techniques face challenges such as low-contrast CXR images, manual initialization of hyperparameters, and redundant features that mislead classification accuracy. METHODS In this paper, we propose a novel framework for COVID-19 classification using deep Bayesian optimization and improved canonical correlation analysis (ICCA). We initially performed data augmentation for better training of the selected deep models. Two pretrained deep models (ResNet50 and InceptionV3) were then trained using transfer learning, with the hyperparameters of both models initialized through Bayesian optimization. Both trained models were used for feature extraction, and their features were fused using an ICCA-based approach. The fused features were further optimized using an improved tree growth optimization algorithm and finally classified with a neural network classifier. RESULTS The experiments were conducted on five publicly available datasets and achieved accuracies of 99.6%, 98.5%, 99.9%, 99.5%, and 100%. CONCLUSION Comparison with recent methods and a t-test-based analysis showed the significance of the proposed framework.
Affiliation(s)
- Ameer Hamza
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Abdullah Al Hejaili
- Faculty of Computers & Information Technology, Computer Science Department, University of Tabuk, Tabuk 71491, Saudi Arabia
- Khalid Adel Shaban
- Computer Science Department, College of Computing and Informatics, Saudi Electronic University, Riyadh 11673, Saudi Arabia
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
- Areej Alasiry
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
27. The Systematic Review of Artificial Intelligence Applications in Breast Cancer Diagnosis. Diagnostics (Basel) 2022; 13:45. [PMID: 36611337] [PMCID: PMC9818874] [DOI: 10.3390/diagnostics13010045]
Abstract
Several studies have demonstrated the value of artificial intelligence (AI) applications in breast cancer diagnosis. Existing comparisons of breast cancer diagnosis and AI, however, lack systematization, and each study appears to be conducted uniquely. The purpose and contribution of this study are to offer elaborative knowledge on AI applications in breast cancer diagnosis through citation analysis, in order to categorize the main areas of specialization that attract the attention of the academic community, and through thematic analysis to identify the topics researched in each category. A total of 17,900 studies addressing breast cancer and AI, published between 2012 and 2022, were obtained from these databases: IEEE, Embase (Excerpta Medica Database Guide-Ovid), PubMed, Springer, Web of Science, and Google Scholar. After applying inclusion and exclusion criteria to the search, 36 studies were identified. The vast majority of AI applications used classification models for the prediction of breast cancer. Accuracy (99%) was the most frequently reported performance metric, followed by specificity (98%) and area under the curve (0.95). Additionally, the Convolutional Neural Network (CNN) was the model of choice in several studies. This study shows that the quantity and caliber of studies that use AI applications in breast cancer diagnosis will continue to rise annually. As a result, AI-based applications are viewed as a supplement to doctors' clinical reasoning, with the ultimate goal of providing quality healthcare that is both affordable and accessible to everyone worldwide.
28. Integrated Deep Learning and Supervised Machine Learning Model for Predictive Fetal Monitoring. Diagnostics (Basel) 2022; 12:2843. [PMID: 36428902] [PMCID: PMC9689398] [DOI: 10.3390/diagnostics12112843]
Abstract
Asphyxiation associated with metabolic acidosis is one of the common causes of fetal death. This paper develops a feature extraction and prediction algorithm capable of identifying most of the features in the SISPORTO software package, as well as late and variable decelerations. The resulting features were used for classification based on umbilical cord pH data, and the algorithms were used to predict cord pH levels. The prediction system assists obstetricians in assessing the state of the fetus better than the category methods do, as only about 30% of the patients in the pathological category suffer from acidosis, while the majority of acidotic babies fall in the suspect category, which is considered lower risk. By predicting the direct indicator of acidosis, umbilical cord pH, this work demonstrates a methodology that uses fetal heart rate and uterine activity to identify acidosis. The paper introduces a forecasting model based on deep learning to predict heart rate and uterine contractions, integrated with the classification algorithm, resulting in a robust tool for predictive fetal monitoring. The hybrid algorithm provides future conditions of the fetus, which obstetricians can use for diagnosis and planning interventions. The ensemble classification algorithm had a test accuracy of 85% (n = 24) in predicting fetal acidosis from the features extracted from the cardiotocography data. When integrated with the classification model, the prediction model (a long short-term memory network) can effectively identify fetal acidosis 2 or 4 min in the future.
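The forecast-then-classify design above can be sketched in heavily simplified form: a least-squares AR(1) predictor stands in for the LSTM, and a heart-rate-floor rule stands in for the ensemble classifier. The signal, the AR order, and the 100 bpm threshold are all synthetic assumptions for illustration.

```python
# Minimal sketch: forecast a fetal-heart-rate series a few steps ahead
# with a toy AR(1) model, then classify the forecast with a simple
# threshold rule. Nothing here reproduces the study's LSTM or ensemble.

def fit_ar1(series):
    """Least-squares AR(1) coefficient: x[t+1] ~ a * x[t]."""
    num = sum(series[t] * series[t + 1] for t in range(len(series) - 1))
    den = sum(x * x for x in series[:-1])
    return num / den

def forecast(series, steps, a):
    """Roll the fitted model forward from the last observation."""
    out, x = [], series[-1]
    for _ in range(steps):
        x = a * x
        out.append(x)
    return out

def classify(fhr_forecast, floor=100.0):
    """Flag risk if the predicted heart rate drops below a floor."""
    return "at-risk" if min(fhr_forecast) < floor else "normal"

fhr = [140.0 * (0.98 ** t) for t in range(20)]  # slowly decelerating FHR
a = fit_ar1(fhr)
future = forecast(fhr, steps=10, a=a)
print(classify(future))
```

The real system would forecast both heart rate and uterine activity and feed the features to the trained ensemble classifier.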