1
Asteris PG, Gandomi AH, Armaghani DJ, Kokoris S, Papandreadi AT, Roumelioti A, Papanikolaou S, Tsoukalas MZ, Triantafyllidis L, Koutras EI, Bardhan A, Mohammed AS, Naderpour H, Paudel S, Samui P, Ntanasis-Stathopoulos I, Dimopoulos MA, Terpos E. Prognosis of COVID-19 severity using DERGA, a novel machine learning algorithm. Eur J Intern Med 2024; 125:67-73. [PMID: 38458880] [DOI: 10.1016/j.ejim.2024.02.037]
Abstract
It is important to determine the risk of intensive care unit (ICU) admission in patients with COVID-19 presenting at the emergency department. Using artificial neural networks, we propose a new Data Ensemble Refinement Greedy Algorithm (DERGA) based on 15 easily accessible hematological indices. A database of 1596 patients with COVID-19 was used; it was divided into a training set of 1257 records (80% of the database) for training the algorithms and a testing set of 339 records (20% of the database) for checking their reliability. The optimal combination of hematological indicators that gives the best prediction consists of only four: neutrophil-to-lymphocyte ratio (NLR), lactate dehydrogenase, ferritin, and albumin. The best prediction corresponds to a particularly high accuracy of 97.12%. In conclusion, our novel approach provides a robust model based only on basic hematological parameters for predicting the risk of ICU admission and optimizing COVID-19 patient management in clinical practice.
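As a minimal sketch of the data handling this abstract describes, the snippet below computes the NLR feature and an 80/20 shuffle split. The feature names and split procedure are assumptions (DERGA's ensemble refinement itself is not reproduced), and a plain 80% truncation of 1596 records gives 1276/320, slightly different from the paper's reported 1257/339.

```python
import random

def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio, one of the four selected indices."""
    return neutrophils / lymphocytes

def split_80_20(records, seed=42):
    """Shuffle records and split them into training (80%) and testing (20%) sets."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.8)
    return shuffled[:cut], shuffled[cut:]

# 1596 records, as in the paper's database.
train, test = split_80_20(list(range(1596)))
```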
Affiliation(s)
- Panagiotis G Asteris
- Computational Mechanics Laboratory, School of Pedagogical and Technological Education, Athens, Greece
- Amir H Gandomi
- Faculty of Engineering & IT, University of Technology Sydney, Sydney, NSW 2007, Australia; University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
- Danial J Armaghani
- School of Civil and Environmental Engineering, University of Technology Sydney, NSW 2007, Australia
- Styliani Kokoris
- Laboratory of Hematology and Hospital Blood Transfusion Department, University General Hospital "Attikon", National and Kapodistrian University of Athens, Medical School, Greece
- Anastasia T Papandreadi
- Software and Applications Department, University General Hospital "Attikon", National and Kapodistrian University of Athens, Medical School, Greece
- Anna Roumelioti
- Department of Hematology and Lymphoma BMTU, Evangelismos General Hospital, Athens, Greece
- Stefanos Papanikolaou
- NOMATEN Centre of Excellence, National Center for Nuclear Research, ulica A. Sołtana 7, 05-400 Swierk/Otwock, Poland
- Markos Z Tsoukalas
- Computational Mechanics Laboratory, School of Pedagogical and Technological Education, Athens, Greece
- Leonidas Triantafyllidis
- Computational Mechanics Laboratory, School of Pedagogical and Technological Education, Athens, Greece
- Evangelos I Koutras
- Computational Mechanics Laboratory, School of Pedagogical and Technological Education, Athens, Greece
- Abidhan Bardhan
- Civil Engineering Department, National Institute of Technology Patna, Bihar, India
- Ahmed Salih Mohammed
- Engineering Department, American University of Iraq, Sulaimani, Kurdistan-Region, Iraq
- Hosein Naderpour
- Institute of Industrial Science, University of Tokyo, Tokyo, Japan
- Satish Paudel
- Department of Civil and Environmental Engineering, University of Nevada, Reno, US
- Pijush Samui
- Civil Engineering Department, National Institute of Technology Patna, Bihar, India
- Ioannis Ntanasis-Stathopoulos
- Department of Clinical Therapeutics, Medical School, Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Meletios A Dimopoulos
- Department of Clinical Therapeutics, Medical School, Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Evangelos Terpos
- Department of Clinical Therapeutics, Medical School, Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
2
Smithmaitrie P, Khaonualsri M, Sae-Lim W, Wangkulangkul P, Jearanai S, Cheewatanakornkul S. Development of deep learning framework for anatomical landmark detection and guided dissection line during laparoscopic cholecystectomy. Heliyon 2024; 10:e25210. [PMID: 38327394] [PMCID: PMC10847946] [DOI: 10.1016/j.heliyon.2024.e25210]
Abstract
Background: Bile duct injuries during laparoscopic cholecystectomy can arise from misinterpretation of biliary anatomy, leading to dissection in improper areas. The integration of a deep learning framework into laparoscopic procedures offers the potential for real-time anatomical landmark recognition, ensuring accurate dissection. The objective of this study is to develop a deep learning framework that can precisely identify anatomical landmarks, including Rouviere's sulcus and the liver base of segment IV, and provide a guided dissection line during laparoscopic cholecystectomy. Methods: We retrospectively collected 40 laparoscopic cholecystectomy videos and extracted 80 images from each video to establish the dataset. Three surgeons annotated the bounding boxes of anatomical landmarks on a total of 3200 images. The YOLOv7 model was trained to detect Rouviere's sulcus and the liver base of segment IV as anatomical landmarks, and the proposed algorithm generated a guided dissection line between these two landmarks. To evaluate the detection model, mean average precision (mAP), precision, and recall were calculated; the accuracy of the guided dissection line was further evaluated by three surgeons. The detection model was compared with the scaled-YOLOv4 and YOLOv5 models. Finally, the proposed framework was deployed in the operating room for real-time detection and visualization. Results: The overall performance of the YOLOv7 model on the validation and testing sets was 98.1% and 91.3%, respectively. Surgeons accepted the visualization of the guided dissection line at a rate of 95.71%. In the operating room, the trained model accurately identified the anatomical landmarks and generated the guided dissection line in real time. Conclusions: The proposed framework effectively identifies anatomical landmarks and generates a guided dissection line in real time during laparoscopic cholecystectomy. This research underscores the potential of deep learning models as computer-assisted tools in surgery, providing an assistive tool for surgeons.
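The line-generation step between the two detected landmarks can be sketched as below. The paper does not specify its line algorithm, so a straight segment joining the centers of the two bounding boxes is an assumed simplification.

```python
def box_center(box):
    """Center (x, y) of a bounding box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def guided_dissection_line(rouviere_box, segment4_box):
    """Endpoints of a straight guide line joining the centers of the two
    detected landmark boxes (Rouviere's sulcus, liver base of segment IV)."""
    return box_center(rouviere_box), box_center(segment4_box)
```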
Affiliation(s)
- Pruittikorn Smithmaitrie
- Department of Mechanical and Mechatronics Engineering, Faculty of Engineering, Prince of Songkla University, Thailand
- Methasit Khaonualsri
- Department of Mechanical and Mechatronics Engineering, Faculty of Engineering, Prince of Songkla University, Thailand
- Wannipa Sae-Lim
- Department of Computer Science, Faculty of Science, Prince of Songkla University, Thailand
- Piyanun Wangkulangkul
- Minimally Invasive Surgery Unit, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Thailand
- Supakool Jearanai
- Minimally Invasive Surgery Unit, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Thailand
- Siripong Cheewatanakornkul
- Minimally Invasive Surgery Unit, Department of Surgery, Faculty of Medicine, Prince of Songkla University, Thailand
3
Lindow T, Maanja M, Schelbert EB, Ribeiro AH, Ribeiro ALP, Schlegel TT, Ugander M. Heart age gap estimated by explainable advanced electrocardiography is associated with cardiovascular risk factors and survival. Eur Heart J Digit Health 2023; 4:384-392. [PMID: 37794867] [PMCID: PMC10545529] [DOI: 10.1093/ehjdh/ztad045]
Abstract
Aims Deep neural network artificial intelligence (DNN-AI)-based Heart Age estimations have been presented and used to show that the difference between an electrocardiogram (ECG)-estimated Heart Age and chronological age is associated with prognosis. An accurate ECG Heart Age, without DNNs, has been developed using explainable advanced ECG (A-ECG) methods. We aimed to evaluate the prognostic value of the explainable A-ECG Heart Age and compare its performance to a DNN-AI Heart Age. Methods and results Both A-ECG and DNN-AI Heart Age were applied to patients who had undergone clinical cardiovascular magnetic resonance imaging. The association between A-ECG or DNN-AI Heart Age Gap and cardiovascular risk factors was evaluated using logistic regression. The association between Heart Age Gaps and death or heart failure (HF) hospitalization was evaluated using Cox regression adjusted for clinical covariates/comorbidities. Among patients [n = 731, 103 (14.1%) deaths, 52 (7.1%) HF hospitalizations, median (interquartile range) follow-up 5.7 (4.7-6.7) years], A-ECG Heart Age Gap was associated with risk factors and outcomes [unadjusted hazard ratio (HR) (95% confidence interval) (5 year increments): 1.23 (1.13-1.34) and adjusted HR 1.11 (1.01-1.22)]. DNN-AI Heart Age Gap was associated with risk factors and outcomes after adjustments [HR (5 year increments): 1.11 (1.01-1.21)], but not in unadjusted analyses [HR 1.00 (0.93-1.08)], making it less easily applicable in clinical practice. Conclusion A-ECG Heart Age Gap is associated with cardiovascular risk factors and HF hospitalization or death. Explainable A-ECG Heart Age Gap has the potential for improving clinical adoption and prognostic performance compared with existing DNN-AI-type methods.
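The Heart Age Gap and the reported per-5-year hazard ratios can be related as below. The rescaling to other gap sizes relies on the log-linearity of the Cox model; the specific 10-year example is illustrative, not from the paper.

```python
def heart_age_gap(estimated_heart_age, chronological_age):
    """Heart Age Gap: ECG-estimated heart age minus chronological age."""
    return estimated_heart_age - chronological_age

def scale_hazard_ratio(hr_per_increment, gap_years, increment_years=5.0):
    """Rescale a Cox hazard ratio reported per fixed increment (here 5 years)
    to an arbitrary gap, using the log-linearity of the Cox model."""
    return hr_per_increment ** (gap_years / increment_years)

# With the unadjusted A-ECG HR of 1.23 per 5 years, a 10-year gap
# corresponds to a hazard ratio of 1.23 squared.
hr_10y = scale_hazard_ratio(1.23, 10.0)
```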
Affiliation(s)
- Thomas Lindow
- Kolling Institute, Royal North Shore Hospital, University of Sydney, Sydney, Australia
- Department of Clinical Physiology, Research and Development, Växjö Central Hospital, Region Kronoberg, Sweden
- Clinical Physiology, Clinical Sciences, Lund University, Sweden
- Maren Maanja
- Department of Clinical Physiology, Karolinska University Hospital and Karolinska Institutet, Stockholm, Sweden
- Antônio H Ribeiro
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Antonio Luiz P Ribeiro
- Telehealth Center, Hospital das Clínicas, and Internal Medicine Department, Faculdade de Medicina, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Todd T Schlegel
- Department of Clinical Physiology, Karolinska University Hospital and Karolinska Institutet, Stockholm, Sweden
- Nicollier-Schlegel SARL, Trélex, Switzerland
- Martin Ugander
- Kolling Institute, Royal North Shore Hospital, University of Sydney, Sydney, Australia
- Department of Clinical Physiology, Karolinska University Hospital and Karolinska Institutet, Stockholm, Sweden
4
Bhatt A, Ganatra A. Weapon operating pose detection and suspicious human activity classification using skeleton graphs. Math Biosci Eng 2023; 20:2669-2690. [PMID: 36899552] [DOI: 10.3934/mbe.2023125]
Abstract
The recent surge in violent protest and armed conflict in populous civil areas has raised serious concern worldwide. Law enforcement agencies focus their strategy on limiting the impact of violent events, and increased surveillance through widespread visual networks supports state actors in maintaining vigilance. However, continuous, simultaneous monitoring of numerous surveillance feeds by human operators is workforce-intensive, subjective, and inefficient. Significant advancements in machine learning (ML) show potential for building precise models to detect suspicious activities in a crowd, but existing pose estimation techniques have limitations in detecting weapon operation activity. This paper proposes a comprehensive, customized human activity recognition approach using human body skeleton graphs. The VGG-19 backbone extracted 6600 body coordinates from the customized dataset. The methodology categorizes human activities into eight classes observed during violent clashes and triggers alarms for specific activities, i.e., stone pelting or weapon handling, while walking, standing, and kneeling are considered regular activities. The end-to-end pipeline presents a robust model for multiple-person tracking, mapping a skeleton graph for each person in consecutive surveillance video frames and improving the categorization of suspicious human activities for effective crowd management. An LSTM-RNN network, trained on a customized dataset combined with a Kalman filter, attained 89.09% accuracy for real-time pose identification.
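The data-preparation step for such a skeleton-based classifier can be sketched as below: per-frame keypoints are flattened into feature vectors and stacked into a sequence suitable for an LSTM. The joint set and edge list here are hypothetical; the paper's exact skeleton graph (from the VGG-19 keypoint backbone) is not specified in the abstract.

```python
# Hypothetical joint order and skeleton edges (a tree over 8 joints).
JOINTS = ["head", "neck", "r_shoulder", "r_elbow", "r_wrist",
          "l_shoulder", "l_elbow", "l_wrist"]
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7)]

def frame_to_feature(keypoints):
    """Flatten one frame's (x, y) keypoints into a single feature vector."""
    return [coord for point in keypoints for coord in point]

def frames_to_sequence(frames):
    """Stack per-frame feature vectors into an LSTM-ready sequence."""
    return [frame_to_feature(f) for f in frames]
```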
Affiliation(s)
- Anant Bhatt
- Devang Patel Institute of Advance Technology and Research (DEPSTAR), Charotar University of Science and Technology (CHARUSAT), Nadiad Petlad Road, Changa, Gujarat-388421, India
- Amit Ganatra
- Devang Patel Institute of Advance Technology and Research (DEPSTAR), Charotar University of Science and Technology (CHARUSAT), Nadiad Petlad Road, Changa, Gujarat-388421, India
5
Mostafa FA, Elrefaei LA, Fouda MM, Hossam A. A Survey on AI Techniques for Thoracic Diseases Diagnosis Using Medical Images. Diagnostics (Basel) 2022; 12:3034. [PMID: 36553041] [PMCID: PMC9777249] [DOI: 10.3390/diagnostics12123034]
Abstract
Thoracic diseases refer to disorders that affect the lungs, heart, and other parts of the rib cage, such as pneumonia, novel coronavirus disease (COVID-19), tuberculosis, cardiomegaly, and fracture. Millions of people die every year from thoracic diseases. Therefore, early detection of these diseases is essential and can save many lives. Earlier, only highly experienced radiologists examined thoracic diseases, but recent developments in image processing and deep learning techniques are opening the door for the automated detection of these diseases. In this paper, we present a comprehensive review including: types of thoracic diseases; examination types of thoracic images; image pre-processing; models of deep learning applied to the detection of thoracic diseases (e.g., pneumonia, COVID-19, edema, fibrosis, tuberculosis, chronic obstructive pulmonary disease (COPD), and lung cancer); transfer learning background knowledge; ensemble learning; and future initiatives for improving the efficacy of deep learning models in applications that detect thoracic diseases. Through this survey paper, researchers may be able to gain an overall and systematic knowledge of deep learning applications in medical thoracic images. The review investigates a performance comparison of various models and a comparison of various datasets.
Affiliation(s)
- Fatma A. Mostafa
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Lamiaa A. Elrefaei
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, College of Science and Engineering, Idaho State University, Pocatello, ID 83209, USA
- Aya Hossam
- Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
6
Automatic Mushroom Species Classification Model for Foodborne Disease Prevention Based on Vision Transformer. J Food Qual 2022. [DOI: 10.1155/2022/1173102]
Abstract
Mushrooms are the fleshy, spore-bearing structure of certain fungi, produced by a group of mycelia and buried in a substratum. Mushrooms are classified as edible, medicinal, and poisonous. However, many poisoning incidents occur yearly by consuming wild mushrooms. Thousands of poisoning incidents are reported each year globally, and 80% of these are from unidentified species of mushrooms. Mushroom poisoning is one of the most serious food safety issues worldwide. Motivated by this problem, this study uses an open-source mushroom dataset and employs several data augmentation approaches to decrease the probability of model overfitting. We propose a novel deep learning pipeline (ViT-Mushroom) for mushroom classification using the Vision Transformer large network (ViT-L/32). We compared the performance of our method against that of a convolutional neural network (CNN). We visualized the high-dimensional outputs of the ViT-L/32 model to achieve the interpretability of ViT-L/32 using the t-distributed stochastic neighbor embedding (t-SNE) method. The results show that ViT-L/32 is the best on the testing dataset, with an accuracy score of 95.97%. These results surpass previous approaches in reducing intraclass variability and generating well-separated feature embeddings. The proposed method is a promising deep learning model capable of automatically classifying mushroom species, helping wild mushroom consumers avoid eating toxic mushrooms, safeguarding food safety, and preventing public health incidents of food poisoning. The results will offer valuable resources for food scientists, nutritionists, and the public health sector regarding the safety and quality of mushrooms.
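The tokenization step of the ViT-L/32 model named above can be sketched as follows: the input image is cut into non-overlapping 32x32 patches, each flattened into a token. This shows only the patch-embedding input stage, not the transformer itself, and the 224x224 input size is an assumption.

```python
import numpy as np

def image_to_patches(image, patch_size=32):
    """Split an HxWxC image into flattened non-overlapping patches,
    as in the tokenization step of ViT-L/32."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (image
            .reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
            .transpose(0, 2, 1, 3, 4)          # group the two patch-grid axes
            .reshape(-1, patch_size * patch_size * c))

# A 224x224 RGB image yields (224/32)**2 = 49 tokens of length 32*32*3 = 3072.
tokens = image_to_patches(np.zeros((224, 224, 3)))
```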
7
SRIF-RCNN: Sparsely represented inputs fusion of different sensors for 3D object detection. Appl Intell 2022. [DOI: 10.1007/s10489-022-03594-1]
8
Multi-task learning based on geometric invariance discriminative features. Appl Intell 2022. [DOI: 10.1007/s10489-022-03617-x]
9
Face detection and grimace scale prediction of white furred mice. Mach Learn Appl 2022. [DOI: 10.1016/j.mlwa.2022.100312] [Citation(s) in RCA: 1]
10
Abstract
Texture analysis is key to a better understanding of the relationships between the microstructures of materials and their properties, as well as to the use of models in process systems with raw signals or images as input. Recently, new methods based on transfer learning with deep neural networks have become established as highly competitive alternatives to classical texture analysis. In this study, three traditional approaches, based on grey level co-occurrence matrices (GLCM), local binary patterns (LBP), and textons, are compared with five transfer learning approaches based on AlexNet, VGG19, ResNet50, GoogLeNet, and MobileNetV2, using two simulated case studies and one real-world case study. In the simulated case studies, material microstructures were simulated with Voronoi graphic representations; in the real-world case study, the appearance of ultrahigh carbon steel is cast as a textural pattern recognition problem. The ability of random forest models, as well as the convolutional neural networks themselves, to discriminate between different textures with the image features as input was used as the basis for comparison. The texton algorithm performed better than the LBP and GLCM algorithms and similarly to the deep learning approaches when these were used directly, without any retraining. Partial or full retraining of the convolutional neural networks yielded considerably better results, with GoogLeNet and MobileNetV2 performing best.
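Of the traditional descriptors named above, the grey level co-occurrence matrix is the simplest to sketch: for a fixed pixel offset, it counts how often each pair of grey levels co-occurs. The offset and number of grey levels below are illustrative choices.

```python
import numpy as np

def glcm(image, levels=4, dx=1, dy=0):
    """Grey level co-occurrence matrix of a quantized image for one
    non-negative offset (dx, dy); entry (i, j) counts pixel pairs where
    grey level i is followed by grey level j at that offset."""
    m = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m
```

Texture statistics such as contrast or homogeneity are then computed from the normalized matrix.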
11
COVID-19 Patient Detection Based on Fusion of Transfer Learning and Fuzzy Ensemble Models Using CXR Images. Appl Sci (Basel) 2021. [DOI: 10.3390/app112311423] [Citation(s) in RCA: 10]
Abstract
The COVID-19 pandemic has claimed the lives of millions of people and put a significant strain on healthcare facilities. To combat this disease, it is necessary to monitor affected patients in a timely and cost-effective manner. In this work, CXR images were used to identify COVID-19 patients. We compiled a CXR dataset with equal numbers (2313 each) of COVID-positive, pneumonia, and normal CXR images and utilized various transfer learning models as base classifiers, including VGG16, GoogleNet, and Xception. The proposed methodology applies fuzzy ensemble techniques, such as Majority Voting, Sugeno Integral, and Choquet Fuzzy, to adaptively combine the decision scores of the transfer learning models and identify coronavirus infection from CXR images. The proposed fuzzy ensemble methods outperformed each individual transfer learning technique and several state-of-the-art ensemble techniques in terms of accuracy: VGG16 + Choquet Fuzzy, GoogleNet + Choquet Fuzzy, and Xception + Choquet Fuzzy achieved accuracies of 97.04%, 98.48%, and 99.57%, respectively. These results are intended to help medical practitioners detect coronavirus earlier than with other strategies, which can save lives and benefit society.
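The Choquet-integral fusion of decision scores mentioned above can be sketched as follows. This is the textbook discrete Choquet integral; the paper's specific fuzzy measures over its classifiers are not given in the abstract, so the additive measure below is only a sanity-check example.

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of per-classifier decision scores with
    respect to a fuzzy measure defined on frozensets of classifier indices."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    vals = [scores[i] for i in order] + [0.0]
    subset = frozenset()
    total = 0.0
    for rank, idx in enumerate(order):
        subset = subset | {idx}
        # Weight each score decrement by the measure of the top-ranked subset.
        total += (vals[rank] - vals[rank + 1]) * measure[subset]
    return total

# For an additive measure, the integral reduces to a weighted mean.
additive = {frozenset({0}): 0.5, frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
```

A non-additive measure (where the measure of a coalition exceeds the sum of its parts) is what lets the Choquet fusion reward agreement between classifiers.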
12
Taresh MM, Zhu N, Ali TAA, Alghaili M, Hameed AS, Mutar ML. KL-MOB: automated COVID-19 recognition using a novel approach based on image enhancement and a modified MobileNet CNN. PeerJ Comput Sci 2021; 7:e694. [PMID: 34616885] [PMCID: PMC8459788] [DOI: 10.7717/peerj-cs.694]
Abstract
The emergence of the novel coronavirus pneumonia (COVID-19) pandemic at the end of 2019 led to worldwide chaos. However, the world breathed a sigh of relief when a few countries announced the development of a vaccine and gradually began to distribute it. Nevertheless, the emergence of another wave of this pandemic returned us to the starting point. At present, early detection of infected people is the paramount concern of both specialists and health researchers. This paper proposes a method to detect infected patients through chest X-ray images using the large COVID-19 dataset available online (COVIDx), which consists of 2,128 X-ray images of COVID-19 cases, 8,066 normal cases, and 5,575 cases of pneumonia. A hybrid algorithm is applied to improve image quality before neural network training: it combines two different noise-reduction filters, followed by a contrast enhancement algorithm. To detect COVID-19, we propose a novel convolutional neural network (CNN) architecture called KL-MOB (a COVID-19 detection network based on the MobileNet structure). The performance of KL-MOB is boosted by adding the Kullback-Leibler (KL) divergence loss function when trained from scratch. The KL divergence loss function is adopted for content-based image retrieval and fine-grained classification to improve the quality of image representation. The results are impressive: the overall benchmark accuracy, sensitivity, specificity, and precision are 98.7%, 98.32%, 98.82%, and 98.37%, respectively. These promising results should help other researchers develop innovative methods to aid specialists, and the method's potential can be used to detect COVID-19 quickly and safely in patients throughout the world.
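The KL divergence term used as an auxiliary loss above has the standard discrete form, sketched below. How KL-MOB weights this term against its classification loss is not stated in the abstract; the epsilon is a numerical-stability assumption.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete probability distributions p and q,
    usable as an auxiliary loss term; eps guards against log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))
```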
Affiliation(s)
- Ningbo Zhu
- College of Information Science and Engineering, Hunan University, Changsha, Hunan, China
- Talal Ahmed Ali Ali
- College of Information Science and Engineering, Hunan University, Changsha, Hunan, China
- Mohammed Alghaili
- College of Information Science and Engineering, Hunan University, Changsha, Hunan, China
- Asaad Shakir Hameed
- Department of Mathematics, General Directorate of Thi-Qar Education, Ministry of Education, Thi-Qar, Iraq
- Modhi Lafta Mutar
- Department of Mathematics, General Directorate of Thi-Qar Education, Ministry of Education, Thi-Qar, Iraq