1
Xu X, Yang Y, Tan X, Zhang Z, Wang B, Yang X, Weng C, Yu R, Zhao Q, Quan S. Hepatic encephalopathy post-TIPS: Current status and prospects in predictive assessment. Comput Struct Biotechnol J 2024; 24:493-506. [PMID: 39076168 PMCID: PMC11284497 DOI: 10.1016/j.csbj.2024.07.008]
Abstract
Transjugular intrahepatic portosystemic shunt (TIPS) is an essential procedure for the treatment of portal hypertension but can result in hepatic encephalopathy (HE), a serious complication that worsens patient outcomes. Investigating predictors of HE after TIPS is essential to improve prognosis. This review analyzes risk factors and compares predictive models, weighing traditional scores such as Child-Pugh, Model for End-Stage Liver Disease (MELD), and albumin-bilirubin (ALBI) against emerging artificial intelligence (AI) techniques. While traditional scores provide initial insights into HE risk, they have limitations in dealing with clinical complexity. Advances in machine learning (ML), particularly when integrated with imaging and clinical data, offer refined assessments. These innovations suggest the potential for AI to significantly improve the prediction of post-TIPS HE. The study provides clinicians with a comprehensive overview of current prediction methods, while advocating for the integration of AI to increase the accuracy of post-TIPS HE assessments. By harnessing the power of AI, clinicians can better manage the risks associated with TIPS and tailor interventions to individual patient needs. Future research should therefore prioritize the development of advanced AI frameworks that can assimilate diverse data streams to support clinical decision-making. The goal is not only to more accurately predict HE, but also to improve overall patient care and quality of life.
Affiliation(s)
- Xiaowei Xu
- Department of Gastroenterology Nursing Unit, Ward 192, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
- Yun Yang
- School of Nursing, Wenzhou Medical University, Wenzhou 325001, China
- Xinru Tan
- The First School of Medicine, School of Information and Engineering, Wenzhou Medical University, Wenzhou 325001, China
- Ziyang Zhang
- School of Clinical Medicine, Guizhou Medical University, Guiyang 550025, China
- Boxiang Wang
- The First School of Medicine, School of Information and Engineering, Wenzhou Medical University, Wenzhou 325001, China
- Xiaojie Yang
- Wenzhou Medical University Renji College, Wenzhou 325000, China
- Chujun Weng
- The Fourth Affiliated Hospital, Zhejiang University School of Medicine, Yiwu 322000, China
- Rongwen Yu
- Wenzhou Institute, University of Chinese Academy of Sciences, Wenzhou 325000, China
- Qi Zhao
- School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan 114051, China
- Shichao Quan
- Department of Big Data in Health Science, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
2
Muhammad D, Bendechache M. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis. Comput Struct Biotechnol J 2024; 24:542-560. [PMID: 39252818 PMCID: PMC11382209 DOI: 10.1016/j.csbj.2024.08.005]
Abstract
This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discussing current challenges and future research directions, and exploring the evaluation metrics used to assess XAI approaches. With the growing efficiency of Machine Learning (ML) and Deep Learning (DL) in medical applications, there is a critical need for their adoption in healthcare. However, their "black-box" nature, where decisions are made without clear explanations, hinders acceptance in clinical settings where decisions have significant medicolegal consequences. Our review highlights advanced XAI methods, identifying how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges faced by these methods and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, nurturing a more transparent, trustworthy, and effective use of AI in medical settings. The insights guide both research and industry, promoting innovation and standardisation in XAI implementation in healthcare.
Affiliation(s)
- Dost Muhammad
- ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
- Malika Bendechache
- ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
3
Islam O, Assaduzzaman M, Hasan MZ. An explainable AI-based blood cell classification using optimized convolutional neural network. J Pathol Inform 2024; 15:100389. [PMID: 39161471 PMCID: PMC11332798 DOI: 10.1016/j.jpi.2024.100389]
Abstract
White blood cells (WBCs) are a vital component of the immune system. The efficient and precise classification of WBCs is crucial for medical professionals to diagnose diseases accurately. This study presents an enhanced convolutional neural network (CNN) for detecting blood cells, aided by various image pre-processing techniques such as padding, thresholding, erosion, dilation, and masking, which are utilized to minimize noise and improve feature enhancement. Additionally, performance is further improved by experimenting with various architectural structures and hyperparameters to optimize the proposed model. A comparative evaluation is conducted against three transfer learning models: Inception V3, MobileNetV2, and DenseNet201. The results indicate that the proposed model outperforms existing models, achieving a testing accuracy of 99.12%, precision of 99%, and F1-score of 99%. In addition, we utilized SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) techniques in our study to improve the interpretability of the proposed model, providing valuable insights into how the model makes decisions. Furthermore, the proposed model has been explained using the Grad-CAM and Grad-CAM++ techniques, which are class-discriminative localization approaches, to improve trust and transparency. Grad-CAM++ performed slightly better than Grad-CAM in identifying the location of the predicted area. Finally, the most efficient model has been integrated into an end-to-end (E2E) system, accessible through both web and Android platforms, for medical professionals to classify blood cells.
Affiliation(s)
- Oahidul Islam
- Dept. of EEE, Daffodil International University, Dhaka, Bangladesh
- Md Assaduzzaman
- Health Informatics Research Laboratory (HIRL), Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
4
Wangweera C, Zanini P. Comparison review of image classification techniques for early diagnosis of diabetic retinopathy. Biomed Phys Eng Express 2024; 10:062001. [PMID: 39173657 DOI: 10.1088/2057-1976/ad7267]
Abstract
Diabetic retinopathy (DR) is one of the leading causes of vision loss in adults and is one of the detrimental side effects of the mass prevalence of Diabetes Mellitus (DM). It is crucial to have an efficient screening method for early diagnosis of DR to prevent vision loss. This paper compares and analyzes various Machine Learning (ML) techniques, from traditional ML to advanced Deep Learning (DL) models. We compared and analyzed the efficacy of Convolutional Neural Networks (CNNs), Capsule Networks (CapsNet), K-Nearest Neighbor (KNN), Support Vector Machine (SVM), decision trees, and Random Forests. This paper also considers determining factors in the evaluation, including contrast enhancement, noise reduction, and grayscaling. We analyze recent research studies and compare methodologies and metrics, including accuracy, precision, sensitivity, and specificity. The findings highlight the advanced performance of DL models, with CapsNet achieving a remarkable accuracy of up to 97.98% and a high precision rate, outperforming other traditional ML methods. The Contrast Limited Adaptive Histogram Equalization (CLAHE) preprocessing technique substantially enhanced model efficiency. Each ML method's computational requirements are also considered. While most advanced deep learning methods performed better according to the metrics, they are more computationally complex, requiring more resources and data input. We also discuss how comparatively simple datasets such as MESSIDOR may contribute to inflated reported performance, and note the lack of consistency regarding benchmark datasets across papers in the field. Using DL models facilitates accurate early detection for DR screening, can potentially reduce vision loss risks, and improves the accessibility and cost-efficiency of eye screening. Further research is recommended to extend our findings by building models with public datasets, experimenting with ensembles of DL and traditional ML models, and testing high-performing models like CapsNet.
Affiliation(s)
- Plinio Zanini
- Center of Engineering, Modeling and Applied Social Science, Federal University of ABC (UFABC), Santo André, Brazil
5
Niu S, Dong R, Jiang G, Zhang Y. Identification of diagnostic signature and immune microenvironment subtypes of venous thromboembolism. Cytokine 2024; 181:156685. [PMID: 38945040 DOI: 10.1016/j.cyto.2024.156685]
Abstract
The close link between the immune system and the pathogenesis of venous thromboembolism (VTE) has been recognized but not fully elucidated. The current study was designed to identify an immune microenvironment-related signature and subtypes in VTE using explainable machine learning. We first observed an alteration of the immune microenvironment in VTE patients and identified eight key immune cell types involved in VTE. PTPN6, ITGB2, CR2, FPR2, MMP9 and ISG15 were then determined to be key immune microenvironment-related genes, which could divide VTE patients into two subtypes with different immune and metabolic characteristics. We also found that prunetin and torin-2 may be the most promising agents for treating VTE patients in Clusters 1 and 2, respectively. By comparing six machine learning models in both training and external validation sets, XGBoost was identified as the best model for predicting the risk of VTE, followed by interpretation of how each immune microenvironment-related gene contributes to the model. Moreover, CR2 and FPR2 had high accuracy in distinguishing VTE from control, suggesting they may act as diagnostic biomarkers of VTE; their expression was validated by qPCR. Collectively, the immune microenvironment-related genes PTPN6, ITGB2, CR2, FPR2, MMP9 and ISG15 are key genes involved in the pathogenesis of VTE. The VTE risk prediction model and immune microenvironment subtypes based on these genes might benefit prevention, diagnosis, and individualized treatment strategies in the clinical practice of VTE.
Affiliation(s)
- Shuai Niu
- Department of Vascular Surgery, the Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China; Department of Vascular Surgery, Hebei General Hospital, Shijiazhuang, Hebei, China
- Ruoyu Dong
- Department of Vascular Surgery, Hebei General Hospital, Shijiazhuang, Hebei, China
- Guangwei Jiang
- Department of Vascular Surgery, Hebei General Hospital, Shijiazhuang, Hebei, China
- Yanrong Zhang
- Department of Vascular Surgery, the Third Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
6
Huang W, Wang C, Chen J. Reply-letter to the editor. Clin Nutr 2024; 43:2283-2284. [PMID: 39138078 DOI: 10.1016/j.clnu.2024.07.046]
Affiliation(s)
- Weijia Huang
- Department of Gastrointestinal Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China; Guangxi Key Laboratory of Enhanced Recovery after Surgery for Gastrointestinal Cancer, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Clinical Research Center for Enhanced Recovery after Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Zhuang Autonomous Region Engineering Research Center for Artificial Intelligence Analysis of Multimodal Tumor Images, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Congjun Wang
- Department of Gastrointestinal Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China; Guangxi Key Laboratory of Enhanced Recovery after Surgery for Gastrointestinal Cancer, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Clinical Research Center for Enhanced Recovery after Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Zhuang Autonomous Region Engineering Research Center for Artificial Intelligence Analysis of Multimodal Tumor Images, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Junqiang Chen
- Department of Gastrointestinal Gland Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning 530021, China; Guangxi Key Laboratory of Enhanced Recovery after Surgery for Gastrointestinal Cancer, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Clinical Research Center for Enhanced Recovery after Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, China; Guangxi Zhuang Autonomous Region Engineering Research Center for Artificial Intelligence Analysis of Multimodal Tumor Images, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
7
Lee SB. Development of a chest X-ray machine learning convolutional neural network model on a budget and using artificial intelligence explainability techniques to analyze patterns of machine learning inference. JAMIA Open 2024; 7:ooae035. [PMID: 38699648 PMCID: PMC11064095 DOI: 10.1093/jamiaopen/ooae035]
Abstract
Objective: Machine learning (ML) will have a large impact on medicine, and accessibility is important. This study's model was used to explore various concepts, including how varying features of a model affect its behavior.
Materials and Methods: This study built an ML model that classified chest X-rays as normal or abnormal, using ResNet50 as a base with transfer learning. A contrast enhancement mechanism was implemented to improve performance. After training with a dataset of publicly available chest radiographs, performance metrics were determined with a test set. The ResNet50 base was substituted with deeper architectures (ResNet101/152), and visualization methods were used to help determine patterns of inference.
Results: Performance metrics were an accuracy of 79%, recall of 69%, precision of 96%, and area under the curve of 0.9023. Accuracy improved to 82% and recall to 74% with contrast enhancement. When visualization methods were applied and the ratio of pixels used for inference was measured, deeper architectures resulted in the model using larger portions of the image for inference compared to ResNet50.
Discussion: The model performed on par with many existing models despite consumer-grade hardware and smaller datasets. Individual models vary; thus, a single model's explainability may not be generalizable. Therefore, this study varied the architecture and studied patterns of inference. With deeper ResNet architectures, the model used larger portions of the image to make decisions.
Conclusion: An example using a custom model showed that AI (Artificial Intelligence) can be accessible on consumer-grade hardware, and it also demonstrated an example of studying themes of ML explainability by varying ResNet architectures.
Affiliation(s)
- Stephen B Lee
- Division of Infectious Diseases, Department of Medicine, College of Medicine, University of Saskatchewan, Regina, S4P 0W5, Canada
8
Kothari S, Sharma S, Shejwal S, Kazi A, D'Silva M, Karthikeyan M. An explainable AI-assisted web application in cancer drug value prediction. MethodsX 2024; 12:102696. [PMID: 38633421 PMCID: PMC11022087 DOI: 10.1016/j.mex.2024.102696]
Abstract
In recent years, there has been an increase in interest in adopting Explainable Artificial Intelligence (XAI) for healthcare. The proposed system includes:
- An XAI model for cancer drug value prediction. The model provides data that is easy to understand and explain, which is critical for medical decision-making, and also produces accurate projections.
- A model that outperformed existing models due to extensive training and evaluation on a large dataset of cancer medication chemical compounds.
- Insights into the causation and correlation between the dependent and independent factors in the chemical composition of the cancer cell.
While the model is evaluated on lung cancer data, the architecture offered in the proposed solution is cancer-agnostic and may be scaled out to other cancer cell data if the properties are similar. The work presents a viable route for customizing treatments and improving patient outcomes in oncology by combining XAI with a large dataset. This research attempts to create a framework where a user can upload a test case and receive forecasts with explanations, all in a portable PDF report.
Affiliation(s)
- Sonali Kothari
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Shivanandana Sharma
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Sanskruti Shejwal
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Aqsa Kazi
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- Michela D'Silva
- Symbiosis Institute of Technology – Pune Campus, Symbiosis International (Deemed University), Pune, India
- M. Karthikeyan
- Senior Principal Scientist, Chemical Engineering and Process Development, NCL-CSIR, Pune, India
9
Lo ZJ, Mak MHW, Liang S, Chan YM, Goh CC, Lai T, Tan A, Thng P, Rodriguez J, Weyde T, Smit S. Development of an explainable artificial intelligence model for Asian vascular wound images. Int Wound J 2024; 21:e14565. [PMID: 38146127 PMCID: PMC10961881 DOI: 10.1111/iwj.14565]
Abstract
Chronic wounds contribute to significant healthcare and economic burden worldwide. Wound assessment remains challenging given its complex and dynamic nature. The use of artificial intelligence (AI) and machine learning methods in wound analysis is promising. Explainable modelling can help its integration and acceptance in healthcare systems. We aim to develop an explainable AI model for analysing vascular wound images among an Asian population. Two thousand nine hundred and fifty-seven wound images from a vascular wound image registry from a tertiary institution in Singapore were utilized. The dataset was split into training, validation and test sets. Wound images were classified into four types (neuroischaemic ulcer [NIU], surgical site infection [SSI], venous leg ulcer [VLU], pressure ulcer [PU]), measured with automatic estimation of width, length and depth, and segmented into 18 wound and peri-wound features. Data pre-processing was performed using oversampling and augmentation techniques. Convolutional and deep learning models were utilized for model development. The model was evaluated with accuracy, F1 score and receiver operating characteristic (ROC) curves. Explainability methods were used to interpret the AI decision reasoning. A web browser application was developed to demonstrate the results of the wound AI model with explainability. After development, the model was tested on an additional 15,476 unlabelled images to evaluate effectiveness. After development on the training and validation datasets, the model achieved an AUROC of 0.99 for wound classification on unseen labelled images in the test set, with a mean accuracy of 95.9%. For wound measurements, the model achieved an AUROC of 0.97 with a mean accuracy of 85.0% for depth classification, and an AUROC of 0.92 with a mean accuracy of 87.1% for width and length determination. For wound segmentation, an AUROC of 0.95 and a mean accuracy of 87.8% were achieved. Testing on unlabelled images, the model confidence score for wound classification was 82.8% with an explainability score of 60.6%. The confidence score was 87.6% for depth classification with a 68.0% explainability score, while width and length measurement obtained a 93.0% accuracy score with 76.6% explainability. The confidence score for wound segmentation was 83.9%, while explainability was 72.1%. Using explainable AI models, we have developed an algorithm and application for analysis of vascular wound images from an Asian population with accuracy and explainability. With further development, it can be utilized as a clinical decision support system and integrated into existing healthcare electronic systems.
Affiliation(s)
- Zhiwen Joseph Lo
- Department of Surgery, Woodlands Health, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Yam Meng Chan
- Department of General Surgery, Tan Tock Seng Hospital, Singapore, Singapore
- Cheng Cheng Goh
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
- Tina Lai
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
- Audrey Tan
- Wound and Stoma Care, Nursing Speciality, Tan Tock Seng Hospital, Singapore, Singapore
- Patrick Thng
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
- Jorge Rodriguez
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
- Tillman Weyde
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
- Sylvia Smit
- AITIS ‐ Advanced Intelligence and Technology Innovations, London, United Kingdom
10
Alnahedh TA, Taha M. Role of Machine Learning and Artificial Intelligence in the Diagnosis and Treatment of Refractive Errors for Enhanced Eye Care: A Systematic Review. Cureus 2024; 16:e57706. [PMID: 38711688 PMCID: PMC11071623 DOI: 10.7759/cureus.57706]
Abstract
A significant contributor to blindness and visual impairment globally is uncorrected refractive error. To plan effective interventions, eye care professionals must promptly identify people at high risk of developing myopia and monitor disease progression. Artificial intelligence (AI) and machine learning (ML) have enormous potential to improve diagnosis and treatment. This systematic review explores the current state of ML and AI applications in the diagnosis and treatment of refractive errors in optometry. A systematic review and meta-analysis of studies evaluating the diagnostic performance of AI-based tools in PubMed was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. To find relevant studies on the use of ML or AI in the diagnosis or treatment of refractive errors in optometry, a thorough search was conducted in electronic databases including PubMed, Google Scholar, and Web of Science. The search was limited to studies published between January 2015 and December 2022. The search terms used were "refractive errors," "myopia," "optometry," "machine learning," "ophthalmology," and "artificial intelligence." A total of nine studies met the inclusion criteria and were included in the final analysis. As AI technology progresses, ML is increasingly being utilized to automate clinical data processing, making formerly labor-intensive work feasible. AI models that primarily use a neural network demonstrated exceptional efficiency and performance in the analysis of vast medical data, rivaling board-certified healthcare professionals. Several studies showed that ML models could support diagnosis and clinical decision-making. Moreover, an ML algorithm predicted future refraction values in patients with myopia. AI and ML models have great potential to improve the diagnosis and treatment of refractive errors in optometry.
Affiliation(s)
- Taghreed A Alnahedh
- Optometry, King Abdullah International Medical Research Center (KAIMRC), National Guard Health Affairs, Riyadh, SAU
- Academic Affairs, King Saud Bin Abdulaziz University for Health Sciences College of Medicine, Riyadh, SAU
- Mohammed Taha
- Ophthalmology, King Saud Bin Abdulaziz University for Health Sciences College of Medicine, Riyadh, SAU
11
Xu Z, Liao H, Huang L, Chen Q, Lan W, Li S. IBPGNET: lung adenocarcinoma recurrence prediction based on neural network interpretability. Brief Bioinform 2024; 25:bbae080. [PMID: 38557672 PMCID: PMC10982951 DOI: 10.1093/bib/bbae080]
Abstract
Lung adenocarcinoma (LUAD) is the most common histologic subtype of lung cancer. Early-stage patients have a 30-50% probability of metastatic recurrence after surgical treatment. Here, we propose a new computational framework, Interpretable Biological Pathway Graph Neural Networks (IBPGNET), based on pathway hierarchy relationships to predict LUAD recurrence and explore the internal regulatory mechanisms of LUAD. IBPGNET can integrate different omics data efficiently and provide global interpretability. In addition, our experimental results show that IBPGNET outperforms other classification methods in 5-fold cross-validation. IBPGNET identified PSMC1 and PSMD11 as genes associated with LUAD recurrence, and their expression levels were significantly higher in LUAD cells than in normal cells. The knockdown of PSMC1 and PSMD11 in LUAD cells increased their sensitivity to afatinib and decreased cell migration, invasion and proliferation. In addition, the cells showed significantly lower EGFR expression, indicating that PSMC1 and PSMD11 may mediate therapeutic sensitivity through EGFR expression.
Affiliation(s)
- Zhanyu Xu
- Department of Thoracic and Cardiovascular Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Haibo Liao
- School of Computer, Electronics and Information, Guangxi University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Liuliu Huang
- Department of Thoracic and Cardiovascular Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Qingfeng Chen
- School of Computer, Electronics and Information, Guangxi University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Wei Lan
- School of Computer, Electronics and Information, Guangxi University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
- Shikang Li
- Department of Thoracic and Cardiovascular Surgery, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi Zhuang Autonomous Region 530021, China
12
Dhanalakshmi S, Maanasaa RS, Maalikaa RS, Senthil R. A review of emergent intelligent systems for the detection of Parkinson's disease. Biomed Eng Lett 2023; 13:591-612. [PMID: 37872986 PMCID: PMC10590348 DOI: 10.1007/s13534-023-00319-2]
Abstract
Parkinson's disease (PD) is a neurodegenerative disorder affecting people worldwide. PD symptoms are divided into motor and non-motor symptoms, and early detection of PD is crucial. Diagnostic challenges can be addressed by applying artificial intelligence to diagnose PD, and many studies have proposed computer-aided diagnosis for its detection. This systematic review, conducted according to the PRISMA model, comprehensively analyzed appropriate algorithms for detecting and assessing PD based on the literature from 2012 to 2023. The review focused on motor symptoms, namely handwriting dynamics, voice impairments and gait; multimodal features; and brain observation using single photon emission computed tomography, magnetic resonance and electroencephalogram signals. The significant challenges are critically analyzed, and appropriate recommendations are provided. The critical discussion in this review can help today's PD community by enabling clinicians to provide proper treatment and timely medication.
Affiliation(s)
- Samiappan Dhanalakshmi
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203 India
- Ramesh Sai Maanasaa
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203 India
- Ramesh Sai Maalikaa
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203 India
- Ramalingam Senthil
- Department of Mechanical Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203 India
13
Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M. YOLOv5-FPN: A Robust Framework for Multi-Sized Cell Counting in Fluorescence Images. Diagnostics (Basel) 2023; 13:2280. [PMID: 37443674 DOI: 10.3390/diagnostics13132280]
Abstract
Cell counting in fluorescence microscopy is an essential task in biomedical research for analyzing cellular dynamics and studying disease progression. Traditional methods for cell counting involve manual counting or threshold-based segmentation, which are time-consuming and prone to human error. Recently, deep learning-based object detection methods have shown promising results in automating cell counting tasks. However, the existing methods mainly focus on segmentation-based techniques that require a large amount of labeled data and extensive computational resources. In this paper, we propose a novel approach to detect and count multiple-size cells in a fluorescence image slide using You Only Look Once version 5 (YOLOv5) with a feature pyramid network (FPN). Our proposed method can efficiently detect multiple cells with different sizes in a single image, eliminating the need for pixel-level segmentation. We show that our method outperforms state-of-the-art segmentation-based approaches in terms of accuracy and computational efficiency. The experimental results on publicly available datasets demonstrate that our proposed approach achieves an average precision of 0.8 and a processing time of 43.9 ms per image. Our approach addresses the research gap in the literature by providing a more efficient and accurate method for cell counting in fluorescence microscopy that requires less computational resources and labeled data.
Affiliation(s)
- Bader Aldughayfiq
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Farzeen Ashfaq
- School of Computer Science (SCS), Taylor's University, Subang Jaya 47500, Malaysia
- N Z Jhanjhi
- School of Computer Science (SCS), Taylor's University, Subang Jaya 47500, Malaysia
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia