201
Machine learning-assisted system using digital facial images to predict the clinical activity score in thyroid-associated orbitopathy. Sci Rep 2022; 12:22085. [PMID: 36543834] [PMCID: PMC9772205] [DOI: 10.1038/s41598-022-25887-8]
Abstract
Although the clinical activity score (CAS) is a validated scoring system for identifying disease activity of thyroid-associated orbitopathy (TAO), it may produce differing results depending on the evaluator, and an experienced ophthalmologist is required for accurate evaluation. In this study, we developed a machine learning (ML)-assisted system to mimic an expert's CAS assessment using digital facial images and evaluated its accuracy for predicting the CAS and diagnosing active TAO (CAS ≥ 3). The ML-assisted system was designed to assess five CAS components related to inflammatory signs (redness of the eyelids, redness of the conjunctiva, swelling of the eyelids, inflammation of the caruncle and/or plica, and conjunctival edema) in patients' facial images and to predict the CAS by additionally considering two components of subjective symptoms (spontaneous retrobulbar pain and pain on gaze). To train and test the system, 3,060 cropped images from 1,020 digital facial images of TAO patients were used. The reference CAS for each image was scored by three ophthalmologists, each with more than 15 years of clinical experience. We repeated the experiments for 30 randomly split training and test sets at a ratio of 8:2. The sensitivity and specificity of the ML-assisted system for diagnosing active TAO were 72.7% and 83.2%, respectively, in the test set constructed from the entire dataset. For the test set constructed from the dataset with consistent results for the three ophthalmologists, the sensitivity and specificity for diagnosing active TAO were 88.1% and 86.9%, respectively. In the test sets from the entire dataset and from the dataset with consistent results, 40.0% and 49.9% of the predicted CAS values were the same as the reference CAS, respectively. The system predicted the CAS within 1 point of the reference CAS in 84.6% and 89.0% of cases when tested on the entire dataset and on the dataset with consistent results, respectively.
The ML-assisted system estimated the clinical activity of TAO and detected inflammatory active TAO with reasonable accuracy, which could be improved further by obtaining more data. This ML-assisted system can help evaluate disease activity consistently as well as accurately, enabling the early diagnosis and timely treatment of active TAO.
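In schematic terms, the prediction step described above amounts to counting positive image-based sign detections and adding the two patient-reported symptom components. The sketch below illustrates that combination; the sign names, the probability outputs, and the 0.5 decision cutoff are our assumptions for illustration, not the authors' published pipeline.

```python
def predict_cas(image_sign_probs, retrobulbar_pain, gaze_pain, threshold=0.5):
    """Combine five image-based inflammatory-sign predictions with two
    patient-reported pain symptoms into a clinical activity score (CAS).

    image_sign_probs: dict mapping a sign name to a predicted probability
    (hypothetical model outputs). Each sign scoring above `threshold`
    contributes one point; each reported symptom contributes one point.
    """
    score = sum(p >= threshold for p in image_sign_probs.values())
    score += int(retrobulbar_pain) + int(gaze_pain)
    active = score >= 3  # CAS >= 3 defines inflammatory active TAO
    return score, active
```

With two signs above the cutoff and one reported symptom, the sketch yields CAS = 3, i.e. active TAO under the study's definition.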
202
Qadri S, Yki-Järvinen H. The quest for the missing links in fatty liver genetics: Deep learning to the rescue! Cell Rep Med 2022; 3:100862. [PMID: 36543096] [PMCID: PMC9798017] [DOI: 10.1016/j.xcrm.2022.100862]
Abstract
Park, MacLean, et al. conduct an exome-wide association study of liver fat content in the Penn Medicine BioBank [1]. By leveraging machine learning-assisted analysis of clinical CT scans to quantify steatosis, they uncover previously undescribed liver fat-associated genetic variants.
Affiliation(s)
- Sami Qadri
- University of Helsinki and Helsinki University Hospital, Helsinki, Finland; Minerva Foundation Institute for Medical Research, Helsinki, Finland
- Hannele Yki-Järvinen
- University of Helsinki and Helsinki University Hospital, Helsinki, Finland; Minerva Foundation Institute for Medical Research, Helsinki, Finland (corresponding author)
203
Using AI and computer vision to analyze technical proficiency in robotic surgery. Surg Endosc 2022; 37:3010-3017. [PMID: 36536082] [DOI: 10.1007/s00464-022-09781-y]
Abstract
BACKGROUND Intraoperative skills assessment is time-consuming and subjective; an efficient and objective computer vision-based approach for feedback is desired. In this work, we aim to design and validate an interpretable automated method to evaluate technical proficiency from colorectal robotic surgery videos using artificial intelligence. METHODS 92 curated clips of peritoneal closure were characterized by both board-certified surgeons and a computer vision AI algorithm to compare the measures of surgical skill. For human ratings, six surgeons graded clips according to the GEARS assessment tool; for AI assessment, deep learning computer vision algorithms for surgical tool detection and tracking were developed and implemented. RESULTS For the GEARS category of efficiency, we observe a strong inverse correlation between human expert ratings of technical efficiency and AI-determined total tool movement (r = -0.72). Additionally, we show that more proficient surgeons perform closure with significantly less tool movement than less proficient surgeons (p < 0.001). For the GEARS category of bimanual dexterity, a positive correlation between expert ratings of bimanual dexterity and the AI model's calculated measure of bimanual movement, based on simultaneous tool movement, was also observed (r = 0.48). On average, higher-skill clips have significantly more simultaneous movement in both hands than lower-skill clips (p < 0.001). CONCLUSIONS In this study, measurements of technical proficiency extracted from AI algorithms are shown to correlate with those given by expert surgeons. Although we target measurements of efficiency and bimanual dexterity, this work suggests that artificial intelligence through computer vision holds promise for efficiently standardizing the grading of surgical technique, which may help in surgical skills training.
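The efficiency analysis described above reduces to correlating a per-clip motion summary with expert ratings. A minimal sketch of that comparison follows; the tool-detection model itself is omitted, and defining "total tool movement" as the sum of frame-to-frame displacements is our assumption, not necessarily the study's exact metric.

```python
import numpy as np

def total_tool_movement(positions):
    """Sum of frame-to-frame tool displacements for one clip.
    `positions` is an (n_frames, 2) array of tracked tool coordinates,
    as produced by a tool-tracking model (not implemented here)."""
    positions = np.asarray(positions, dtype=float)
    return np.linalg.norm(np.diff(positions, axis=0), axis=1).sum()

def skill_correlation(movements, gears_efficiency):
    """Pearson r between per-clip total movement and expert efficiency
    ratings. A negative r means more proficient surgeons move less."""
    return np.corrcoef(movements, gears_efficiency)[0, 1]
```

On hypothetical data where higher-rated clips show uniformly less movement, `skill_correlation` returns a strongly negative r, matching the sign of the reported r = -0.72.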
204
Rouzrokh P, Khosravi B, Vahdati S, Moassefi M, Faghani S, Mahmoudi E, Chalian H, Erickson BJ. Machine Learning in Cardiovascular Imaging: A Scoping Review of Published Literature. Curr Radiol Rep 2022; 11:34-45. [PMID: 36531124] [PMCID: PMC9742664] [DOI: 10.1007/s40134-022-00407-8]
Abstract
Purpose of Review In this study, we planned and carried out a scoping review of the literature to learn how machine learning (ML) has been investigated in cardiovascular imaging (CVI). Recent Findings During our search, we found numerous studies that developed or utilized existing ML models for segmentation, classification, object detection, generation, and regression applications involving cardiovascular imaging data. We first quantitatively investigated the different aspects of study characteristics, data handling, model development, and performance evaluation in all studies included in our review. We then supplemented these findings with a qualitative synthesis to highlight the common themes in the studied literature and provided recommendations to pave the way for upcoming research. Summary ML is a subfield of artificial intelligence (AI) that enables computers to learn human-like decision-making from data. Due to its novel applications, ML is gaining increasing attention from researchers in the healthcare industry. Cardiovascular imaging is an active area of research in medical imaging with considerable room for incorporating new technologies, like ML. Supplementary Information The online version contains supplementary material available at 10.1007/s40134-022-00407-8.
Affiliation(s)
- Pouria Rouzrokh
- Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Bardia Khosravi
- Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Sanaz Vahdati
- Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Mana Moassefi
- Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Shahriar Faghani
- Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Elham Mahmoudi
- Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
- Hamid Chalian
- Department of Radiology, Cardiothoracic Imaging, University of Washington, Seattle, WA, USA
- Bradley J. Erickson
- Artificial Intelligence Laboratory, Mayo Clinic, Rochester, MN 55905, USA
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, USA
205
Sangers TE, Wakkee M, Moolenburgh FJ, Nijsten T, Lugtenberg M. Towards successful implementation of artificial intelligence in skin cancer care: a qualitative study exploring the views of dermatologists and general practitioners. Arch Dermatol Res 2022; 315:1187-1195. [PMID: 36477587] [PMCID: PMC9734890] [DOI: 10.1007/s00403-022-02492-3]
Abstract
Recent studies show promising potential for artificial intelligence (AI) to assist healthcare providers (HCPs) in skin cancer care. The aim of this study is to explore the views of dermatologists and general practitioners (GPs) regarding the successful implementation of AI to assist HCPs in skin cancer care. We performed a qualitative focus group study, consisting of six focus groups with 16 dermatologists and 17 GPs, varying in prior knowledge and experience with AI, gender, and age. An in-depth inductive thematic content analysis was conducted. Perceived benefits, barriers, and preconditions were identified as main themes. Dermatologists and GPs perceive substantial benefits of AI, particularly improved health outcomes and a better care pathway between primary and secondary care. Doubts about accuracy, the risk of health inequalities, and fear of replacement were among the most stressed barriers. Essential preconditions included adequate algorithm content, sufficient usability, and accessibility of AI. In conclusion, dermatologists and GPs perceive significant benefits from implementing AI in skin cancer care. However, to successfully implement AI, key barriers need to be addressed. Efforts should focus on ensuring algorithm transparency, validation, accessibility for all skin types, and adequate regulation of algorithms. Simultaneously, improving knowledge about AI could reduce the fear of replacement.
Affiliation(s)
- Tobias E. Sangers
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Doctor Molewaterplein 40, 3015 GD Rotterdam, The Netherlands
- Marlies Wakkee
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Doctor Molewaterplein 40, 3015 GD Rotterdam, The Netherlands
- Folkert J. Moolenburgh
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Doctor Molewaterplein 40, 3015 GD Rotterdam, The Netherlands
- Tamar Nijsten
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Doctor Molewaterplein 40, 3015 GD Rotterdam, The Netherlands
- Marjolein Lugtenberg
- Department of Dermatology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Doctor Molewaterplein 40, 3015 GD Rotterdam, The Netherlands
206
Howlader K, Liu L. Transfer Learning Pre-training Dataset and Fine-tuning Effect Analysis on Cancer Histopathology Images. 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 2022:3015-3022. [DOI: 10.1109/bibm55620.2022.9995076]
Affiliation(s)
- Lu Liu
- North Dakota State University, ND, USA
207
Ienaga N, Takahata S, Terayama K, Enomoto D, Ishihara H, Noda H, Hagihara H. Development and Verification of Postural Control Assessment Using Deep-Learning-Based Pose Estimators: Towards Clinical Applications. Occup Ther Int 2022; 2022:6952999. [PMID: 36531757] [PMCID: PMC9729024] [DOI: 10.1155/2022/6952999]
Abstract
Occupational therapists evaluate various aspects of a client's occupational performance. Among these, postural control is one of the fundamental skills that needs assessment. Recently, several methods have been proposed to estimate postural control abilities using deep-learning-based approaches. Such techniques have the potential to provide automated, precise, fine-grained quantitative indices simply by evaluating videos of a client engaging in a postural control task. However, the clinical applicability of these assessment tools requires further investigation. In the current study, we compared three deep-learning-based pose estimators to assess their clinical applicability in terms of pose-estimation accuracy and processing speed. In addition, we verified which of the proposed quantitative indices for postural control best reflected the clinical evaluations of occupational therapists. A framework using deep-learning techniques broadens the possibility of quantifying clients' postural control in a more fine-grained way than conventional coarse indices, which can lead to improved occupational therapy practice.
Affiliation(s)
- Naoto Ienaga
- Faculty of Engineering, Information and Systems, University of Tsukuba, Japan
- Shuhei Takahata
- Aino University, Osaka, Japan
- Graduate School of Medical Life Science, Yokohama City University, Kanagawa, Japan
- Kei Terayama
- Graduate School of Medical Life Science, Yokohama City University, Kanagawa, Japan
- Haruka Noda
- LITALICO Inc., Tokyo, Japan
- Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan
- Hiromichi Hagihara
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study, Tokyo, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
208
Huang J, Si H, Guo X, Zhong K. Co-Occurrence Fingerprint Data-Based Heterogeneous Transfer Learning Framework for Indoor Positioning. Sensors (Basel) 2022; 22:9127. [PMID: 36501829] [PMCID: PMC9737723] [DOI: 10.3390/s22239127]
Abstract
Distribution discrepancy is an intrinsic challenge in existing fingerprint-based indoor positioning systems (FIPS) due to real-time environmental variations; thus, the positioning model needs to be reconstructed frequently from newly collected training data. However, it is expensive or impossible to collect adequate training samples to reconstruct the fingerprint database. Fortunately, transfer learning has proven to be an effective solution for mitigating the distribution discrepancy, enabling the positioning model to be updated with newly collected training data in real time. In practical applications, however, traditional transfer learning algorithms no longer cope well with the feature-space heterogeneity caused by different types or holding postures of fingerprint-collection devices (such as smartphones). Moreover, current heterogeneous transfer methods typically require enough accurately labeled samples in the target domain, which are expensive and often unavailable in practice. To solve these problems, a heterogeneous transfer learning framework based on co-occurrence data (HTL-CD) is proposed for FIPS, which achieves higher positioning accuracy and robustness against environmental changes without repeatedly reconstructing the fingerprint database. Specifically, the source-domain samples are mapped into the feature space of the target domain, and the marginal and conditional distributions of the source and target samples are then aligned to minimize the distribution divergence caused by collection-device heterogeneity and environmental changes. Moreover, the co-occurrence fingerprint data enable correlation coefficients between heterogeneous samples to be calculated without accurately labeled target samples. Furthermore, through the adopted correlation-restriction mechanism, more valuable knowledge is transferred to the target domain when the source samples are related to the target ones, which markedly relieves the "negative transfer" issue. Real-world experimental results show that, even without accurately labeled samples in the target domain, the proposed HTL-CD obtains at least 17.15% smaller average localization errors (ALEs) than existing transfer learning-based positioning methods, which further validates the effectiveness and superiority of the algorithm.
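The abstract does not give HTL-CD's mapping in closed form. As an illustration of the general idea of aligning source features to a target distribution, here is a CORAL-style re-coloring of source features to match the target mean and covariance; this is a well-known stand-in for marginal-distribution alignment and is not the authors' algorithm.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-6):
    """Align source features Xs to the target feature distribution of Xt
    (CORAL-style marginal alignment). Illustrative sketch only.

    Xs, Xt: (n_samples, n_features) arrays from source/target domains.
    """
    # Covariances with a small ridge for numerical stability
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # Whiten the centered source features with the source covariance...
    Xs_w = (Xs - Xs.mean(axis=0)) @ np.linalg.inv(np.linalg.cholesky(Cs)).T
    # ...then re-color with the target covariance and shift to the target mean
    return Xs_w @ np.linalg.cholesky(Ct).T + Xt.mean(axis=0)
```

After alignment, the transformed source samples share the target domain's first- and second-order statistics, so a model trained on them transfers more gracefully; conditional alignment (also used in HTL-CD) would additionally match class-conditional distributions.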
Affiliation(s)
- Jian Huang
- Department of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Haonan Si
- Department of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China
- Xiansheng Guo
- Department of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China
- Ke Zhong
- Department of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
209
Smartphone video nystagmography using convolutional neural networks: ConVNG. J Neurol 2022; 270:2518-2530. [PMID: 36422668] [PMCID: PMC10129923] [DOI: 10.1007/s00415-022-11493-1]
Abstract
Background
Eye movement abnormalities are commonplace in neurological disorders. However, unaided eye movement assessments lack granularity. Although videooculography (VOG) improves diagnostic accuracy, its resource intensiveness precludes broad use. To bridge this care gap, we here validate a framework for smartphone video-based nystagmography capitalizing on recent computer vision advances.
Methods
A convolutional neural network was fine-tuned for pupil tracking using > 550 annotated frames: ConVNG. In a cross-sectional approach, the slow-phase velocity (SPV) of optokinetic nystagmus was calculated in 10 subjects using ConVNG and VOG. Equivalence of accuracy and precision was assessed using the "two one-sided tests" (TOST) and Bayesian interval-null approaches. ConVNG was systematically compared to OpenFace and MediaPipe as computer vision (CV) benchmarks for gaze estimation.
Results
ConVNG tracking accuracy reached 9–15% of an average pupil diameter. In a fully independent clinical video dataset, ConVNG robustly detected pupil keypoints (median prediction confidence 0.85). SPV measurement accuracy was equivalent to VOG (TOST p < 0.017; Bayes factors (BF) > 24). ConVNG, but not MediaPipe, achieved equivalence to VOG in all SPV calculations. Median precision was 0.30°/s for ConVNG, 0.7°/s for MediaPipe and 0.12°/s for VOG. ConVNG precision was significantly higher than MediaPipe in vertical planes, but both algorithms’ precision was inferior to VOG.
Conclusions
ConVNG enables offline smartphone video nystagmography with an accuracy comparable to VOG and significantly higher precision than MediaPipe, a benchmark computer vision application for gaze estimation. This serves as a blueprint for highly accessible tools with the potential to accelerate progress toward precise and personalized medicine.
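The TOST equivalence logic used in the methods above can be sketched for paired measurements; the paired-difference formulation and the equivalence bounds below are illustrative assumptions, not the study's exact statistical setup.

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, low, high):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    Equivalence is claimed when the mean difference a - b lies within the
    pre-specified interval (low, high): both one-sided null hypotheses
    (mean <= low, mean >= high) must be rejected. Returns the larger of
    the two one-sided p-values; equivalence if it is below alpha.
    """
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = len(d)
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() - low) / se    # test against the lower bound
    t_high = (d.mean() - high) / se  # test against the upper bound
    p_low = 1 - stats.t.cdf(t_low, n - 1)
    p_high = stats.t.cdf(t_high, n - 1)
    return max(p_low, p_high)
```

Unlike an ordinary t-test, a small p-value here supports *equivalence* within the stated bounds rather than a difference, which is why the study can conclude that ConVNG's SPV accuracy matches VOG.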
210
Al-Garadi MA, Yang YC, Sarker A. The Role of Natural Language Processing during the COVID-19 Pandemic: Health Applications, Opportunities, and Challenges. Healthcare (Basel) 2022; 10:2270. [PMID: 36421593] [PMCID: PMC9690240] [DOI: 10.3390/healthcare10112270]
Abstract
The COVID-19 pandemic is the most devastating public health crisis in at least a century and has affected the lives of billions of people worldwide in unprecedented ways. Compared to pandemics of this scale in the past, societies are now equipped with advanced technologies that can mitigate the impacts of pandemics if utilized appropriately. However, opportunities are currently not fully utilized, particularly at the intersection of data science and health. Health-related big data and technological advances have the potential to significantly aid the fight against such pandemics, including the current pandemic's ongoing and long-term impacts. Specifically, the field of natural language processing (NLP) has enormous potential at a time when vast amounts of text-based data are continuously generated from a multitude of sources, such as health/hospital systems, published medical literature, and social media. Effectively mitigating the impacts of the pandemic requires tackling challenges associated with the application and deployment of NLP systems. In this paper, we review the applications of NLP to address diverse aspects of the COVID-19 pandemic. We outline key NLP-related advances on a chosen set of topics reported in the literature and discuss the opportunities and challenges associated with applying NLP during the current pandemic and future ones. These opportunities and challenges can guide future research aimed at improving the current health and social response systems and pandemic preparedness.
Affiliation(s)
- Mohammed Ali Al-Garadi
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37240, USA
- Yuan-Chi Yang
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Abeed Sarker
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
211
He L, Ai Q, Lei Y, Pan L, Ren Y, Xu Z. Edge Enhancement Improves Adversarial Robustness in Image Classification. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.10.059]
212
Liu X, Flanagan C, Fang J, Lei Y, McGrath L, Wang J, Guo X, Guo J, McGrath H, Han Y. Comparative analysis of popular predictors for difficult laryngoscopy using hybrid intelligent detection methods. Heliyon 2022; 8:e11761. [DOI: 10.1016/j.heliyon.2022.e11761]
213
Akbarzadeh F, Ebrahimi A, Akhlaghi S, Rajai Z, Rezaei Kalat A, Jafarzadeh Esfehani R, Garmehi S, Sangsefidy Z. Implementation of Educational-Interactive-Psychiatric Management Software for Patients with Bipolar Disorder. Med J Islam Repub Iran 2022; 36:126. [PMID: 36447554] [PMCID: PMC9700403] [DOI: 10.47176/mjiri.36.126]
Abstract
Background: Bipolar disorder is a psychiatric disease for which no effective screening questionnaire exists to monitor and manage Iranian patients. This study aimed to implement a researcher-made questionnaire in the form of educational, interactive software for better management of patients with bipolar disorder and the prevention of further complications. Methods: The present cross-sectional study evaluated the efficacy of psychoeducational-interactive-therapeutic software for patients with bipolar disorder, a network-based application that administers a researcher-made questionnaire on a planned schedule. After a training phase covering the occurrence of two mood episodes, the software can predict future bipolar episodes for each patient using artificial intelligence algorithms. Patients with bipolar disorder were asked to use the software for a year, and their mood episodes were compared before and after its use. We evaluated the reliability of the questionnaires in the software with internal consistency using Cronbach's alpha and test-retest analysis; face validity and content validity were also evaluated. Results: The content validity index of the instrument was 93%, and the Cronbach's alpha coefficient of the whole questionnaire was 0.955. The ICC coefficient for the questionnaire was above 0.70, and the correlation coefficient of the answers in all constructs of the questionnaire was more than 0.8. Thirty male patients with bipolar disorder who had experienced four mood episodes per year experienced an average of two mood episodes per year after using the software. Conclusion: Our psychoeducational-interactive-therapeutic software is the first Persian-language software based on artificial intelligence to monitor clinical symptoms in patients with bipolar disorder; it uses a standard questionnaire to predict the incidence of episodes of depression and mania in these patients.
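The internal-consistency statistic reported above can be computed mechanically. A minimal sketch of Cronbach's alpha for a complete respondents-by-items score matrix follows; the matrix layout and complete-response assumption are ours.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for internal consistency.

    `scores` is an (n_respondents, k_items) array; assumes every
    respondent answered every item. alpha = k/(k-1) * (1 - sum of item
    variances / variance of total scores). Illustrative sketch only.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly correlated items yield alpha = 1; values such as the study's 0.955 indicate items that covary strongly across respondents.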
Affiliation(s)
- Farzad Akbarzadeh
- Department of Psychiatry, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Alireza Ebrahimi
- Department of Psychiatry, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Saeid Akhlaghi
- Department of Biostatistics, School of Health, Mashhad University of Medical Sciences, Mashhad, Iran
- Zahra Rajai
- Department of Psychiatry, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran (corresponding author)
- Afsaneh Rezaei Kalat
- Department of Psychiatry, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Reza Jafarzadeh Esfehani
- Blood Born Research Center, Academic Center for Education, Culture and Research – Khorasan Branch, Mashhad, Iran
- Sima Garmehi
- Department of Psychiatry, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Zahra Sangsefidy
- Department of Psychiatry, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
214
Wang A, Xiu X, Liu S, Qian Q, Wu S. Characteristics of Artificial Intelligence Clinical Trials in the Field of Healthcare: A Cross-Sectional Study on ClinicalTrials.gov. Int J Environ Res Public Health 2022; 19:13691. [PMID: 36294269] [PMCID: PMC9602501] [DOI: 10.3390/ijerph192013691]
Abstract
Artificial intelligence (AI) has driven innovative transformation in healthcare service patterns, despite a lack of understanding of its performance in clinical practice. We conducted a cross-sectional analysis of AI-related trials in healthcare based on ClinicalTrials.gov, intending to investigate the trial characteristics and AI's development status. Additionally, the Neo4j graph database and visualization technology were employed to construct an AI technology application graph, achieving a visual representation and analysis of research hotspots in healthcare AI. A total of 1725 eligible trials that were registered in ClinicalTrials.gov up to 31 March 2022 were included in this study. The number of trial registrations has dramatically grown each year since 2016. However, the AI-related trials had some design drawbacks and problems with poor-quality result reporting. The proportion of trials with prospective and randomized designs was insufficient, and most studies did not report results upon completion. Currently, most healthcare AI application studies are based on data-driven learning algorithms, covering various disease areas and healthcare scenarios. As few studies have publicly reported results on ClinicalTrials.gov, there is not enough evidence to support an assessment of AI's actual performance. The widespread implementation of AI technology in healthcare still faces many challenges and requires more high-quality prospective clinical validation.
Affiliation(s)
- Sizhu Wu
- Correspondence; Tel.: +86-10-5232-8760
215
Sheu RK, Pardeshi MS. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System. Sensors (Basel) 2022; 22:8068. [PMID: 36298417] [PMCID: PMC9609212] [DOI: 10.3390/s22208068]
Abstract
The emerging field of eXplainable AI (XAI) is considered to be of utmost importance in the medical domain. Incorporating explanations that satisfy legal and ethical AI requirements is necessary to understand detailed decisions, results, and the current status of a patient's condition. We present a detailed survey of medical XAI covering model enhancements, evaluation methods, an overview of case studies with open-box architectures, medical open datasets, and future improvements. Differences between AI and XAI methods are discussed, with recent XAI methods grouped as (i) local and global methods for preprocessing, (ii) knowledge-base and distillation algorithms, and (iii) interpretable machine learning. Details of XAI characteristics and future healthcare explainability are included prominently, and the prerequisites provide insights for brainstorming sessions before beginning a medical XAI project. A practical case study illustrates recent XAI progress leading to advanced developments within the medical field. Ultimately, this survey proposes critical ideas surrounding a user-in-the-loop approach, with an emphasis on human-machine collaboration, to better produce explainable solutions. The XAI feedback system for human rating-based evaluation offers intelligible insights into a constructive method for producing human-enforced explanation feedback. Limitations of XAI ratings, scores, and grading have long persisted; therefore, a novel XAI recommendation system and XAI scoring system are designed and approached in this work. Additionally, this paper emphasizes the importance of implementing explainable solutions in the high-impact medical field.
Affiliation(s)
- Ruey-Kai Sheu
- Department of Computer Science, Tunghai University, No. 1727, Section 4, Taiwan Blvd, Xitun District, Taichung 407224, Taiwan
- Mayuresh Sunil Pardeshi
- AI Center, Tunghai University, No. 1727, Section 4, Taiwan Blvd, Xitun District, Taichung 407224, Taiwan
|
216
|
Artificial Intelligence Confirming Treatment Success: The Role of Gender- and Age-Specific Scales in Performance Evaluation. Plast Reconstr Surg 2022; 150:34S-40S. [PMID: 36170434 PMCID: PMC9512241 DOI: 10.1097/prs.0000000000009671]
Abstract
In plastic surgery and cosmetic dermatology, photographic data are an invaluable element of research and clinical practice. Additionally, the use of before and after images is a standard documentation method for procedures, and these images are particularly useful in consultations for effective communication with the patient. Artificial intelligence (AI)-based approaches have produced significant results in medical dermatology, plastic surgery, and antiaging procedures in recent years, with applications ranging from skin cancer screening to 3D face reconstruction and the prediction of biological and perceived age. The increasing use of AI and computer vision methods is due to their noninvasive nature and their potential to provide remote diagnostics, which is especially helpful when traveling to a physical office is complicated, as experienced in recent years during the global coronavirus pandemic. However, one question remains: how should the results of AI-based analysis be presented to enable personalization? In this paper, the author investigates the benefit of using gender- and age-specific scales to present skin parameter scores calculated by AI-based systems when analyzing image data.
|
217
|
Kim S, Jeong WK, Choi JH, Kim JH, Chun M. Development of deep learning-assisted overscan decision algorithm in low-dose chest CT: Application to lung cancer screening in Korean National CT accreditation program. PLoS One 2022; 17:e0275531. [PMID: 36174098 PMCID: PMC9522252 DOI: 10.1371/journal.pone.0275531]
Abstract
We propose a deep learning-assisted overscan decision algorithm for chest low-dose computed tomography (LDCT) applicable to lung cancer screening. The algorithm reflects radiologists' subjective evaluation criteria according to the Korea Institute for Accreditation of Medical Imaging (KIAMI) guidelines, judging whether a scan range extends beyond the landmark criteria. The algorithm consists of three stages: deep learning-based landmark segmentation, rule-based logical operations, and overscan determination. A total of 210 cases from a single institution (internal data) and 50 cases from 47 institutions (external data) were used for performance evaluation. Area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and Cohen's kappa served as evaluation metrics. Fisher's exact test was performed to assess the statistical significance of overscan detectability, and univariate logistic regression analyses were performed for validation. Furthermore, the excessive effective dose was estimated from the amount of overscan and the absorbed-dose-to-effective-dose conversion factor. The algorithm yielded AUROC values of 0.976 (95% confidence interval [CI]: 0.925–0.987) and 0.997 (95% CI: 0.800–0.999) for the internal and external datasets, respectively. All metrics showed average performance scores greater than 90% in each evaluation dataset. Agreement between the AI-assisted overscan decision and the radiologist's manual evaluation was statistically significant (p < 0.001, Fisher's exact test). In the logistic regression analysis, demographics (age and sex), data source, CT vendor, and slice thickness had no statistically significant effect on the algorithm (each p-value > 0.05). The estimated excessive effective doses were 0.02 ± 0.01 mSv and 0.03 ± 0.05 mSv for the respective datasets, reflecting only slight deviations from an acceptable scan range and thus no clinical concern. We hope that the proposed overscan decision algorithm enables retrospective scan-range monitoring in LDCT for lung cancer screening programs, in keeping with the as low as reasonably achievable (ALARA) principle.
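All of the evaluation metrics reported above (accuracy, sensitivity, specificity, and Cohen's kappa) can be derived from a 2×2 confusion matrix of algorithm decisions versus the radiologists' reference. A minimal stdlib-Python sketch; the counts in the example call are illustrative, not the study's data:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Derive common binary-classification metrics from confusion counts."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)   # true-positive rate (recall)
    specificity = tn / (tn + fp)   # true-negative rate
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)  # chance agreement on "overscan"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)   # chance agreement on "acceptable"
    p_e = p_yes + p_no
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, kappa

# Illustrative counts: 40 overscans detected, 2 missed, 5 false alarms, 163 correct negatives
acc, sens, spec, kappa = confusion_metrics(tp=40, fp=5, fn=2, tn=163)
```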
Affiliation(s)
- Sihwan Kim
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- ClariPi Research, Seoul, Republic of Korea
- Woo Kyoung Jeong
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Jin Hwa Choi
- Department of Radiation Oncology, Chung-Ang University College of Medicine, Seoul, Republic of Korea
- Jong Hyo Kim
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- ClariPi Research, Seoul, Republic of Korea
- Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Suwon, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Minsoo Chun
- Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Department of Radiation Oncology, Chung-Ang University Gwang Myeong Hospital, Gyeonggi-do, Republic of Korea
|
218
|
Alqudah AM, Qazan S, Obeidat YM. Deep learning models for detecting respiratory pathologies from raw lung auscultation sounds. Soft Comput 2022; 26:13405-13429. [PMID: 36186666 PMCID: PMC9510581 DOI: 10.1007/s00500-022-07499-6]
Abstract
In recent years, deep learning models have improved diagnostic performance for many diseases, especially respiratory diseases. This paper evaluates the performance of different deep learning models for detecting respiratory pathologies from raw lung auscultation sounds, to help provide diagnoses from digitally recorded respiratory sounds, and identifies the best deep learning model for the task. Three different deep learning models were evaluated on non-augmented and augmented datasets, with two source datasets used to generate four sub-datasets. The results show that all of the proposed deep learning methods were successful and achieved high performance in classifying the raw lung sounds, whether applied with or without augmentation. Among all proposed models, the CNN–LSTM model was the best across all datasets for both the augmented and non-augmented cases. Its accuracy without augmentation was 99.6%, 99.8%, 82.4%, and 99.4% for datasets 1, 2, 3, and 4, respectively, and with augmentation was 100%, 99.8%, 98.0%, and 99.5%; the augmentation process thus notably enhanced the models' performance on the testing datasets. Moreover, the hybrid model combining CNN and LSTM performed better than models based on only one of these techniques, mainly because the CNN automatically extracts deep features from the lung sound while the LSTM performs the classification.
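Augmentation of raw audio waveforms of the kind evaluated here is commonly done with simple signal-level transforms such as random time shifts and additive noise. A hedged stdlib-Python sketch of two such transforms; the specific augmentations the authors used are not detailed in the abstract, so these are generic examples:

```python
import random

def time_shift(signal, max_shift):
    """Circularly shift a waveform (list of samples) by a random offset."""
    k = random.randint(-max_shift, max_shift)
    return signal[-k:] + signal[:-k] if k else list(signal)

def add_noise(signal, scale):
    """Inject zero-mean uniform noise scaled to the signal's peak amplitude."""
    peak = max(abs(s) for s in signal) or 1.0
    return [s + random.uniform(-1, 1) * scale * peak for s in signal]

wave = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
augmented = add_noise(time_shift(wave, max_shift=3), scale=0.02)
```

Each augmented copy keeps the original length and overall content, so labels carry over unchanged to the new training examples.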
Affiliation(s)
- Ali Mohammad Alqudah
- Department of Biomedical Systems and Informatics Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Shoroq Qazan
- Department of Computer Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
- Yusra M Obeidat
- Department of Electronic Engineering, Hijjawi Faculty for Engineering Technology, Yarmouk University, Irbid, Jordan
|
219
|
|
220
|
Paderno A, Gennarini F, Sordi A, Montenegro C, Lancini D, Villani FP, Moccia S, Piazza C. Artificial intelligence in clinical endoscopy: Insights in the field of videomics. Front Surg 2022; 9:933297. [PMID: 36171813 PMCID: PMC9510389 DOI: 10.3389/fsurg.2022.933297]
Abstract
Artificial intelligence is increasingly seen as a useful tool in medicine. Specifically, these technologies aim to extract insights from complex datasets that cannot easily be analyzed by conventional statistical methods. While promising results have been obtained for various -omics datasets, radiological images, and histopathologic slides, analysis of videoendoscopic frames still represents a major challenge. In this context, videomics represents a burgeoning field wherein several methods of computer vision are systematically used to organize unstructured data from frames obtained during diagnostic videoendoscopy. Recent studies have focused on five broad tasks of increasing complexity: quality assessment of endoscopic images, classification of pathologic and nonpathologic frames, detection of lesions inside frames, segmentation of pathologic lesions, and in-depth characterization of neoplastic lesions. Herein, we present a broad overview of the field, with a focus on conceptual key points and future perspectives.
Affiliation(s)
- Alberto Paderno
- Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Correspondence: Alberto Paderno
- Francesca Gennarini
- Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Alessandra Sordi
- Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Claudia Montenegro
- Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Davide Lancini
- Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Francesca Pia Villani
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
- Sara Moccia
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
- Cesare Piazza
- Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
|
221
|
Medical Data Classification Assisted by Machine Learning Strategy. Comput Math Methods Med 2022; 2022:9699612. [PMID: 36124172 PMCID: PMC9482495 DOI: 10.1155/2022/9699612]
Abstract
With the development of science and technology, data plays an increasingly important role in our daily life, and much attention has therefore been paid to the field of data mining. Data classification is a prerequisite of data mining, and how well data is classified directly affects the performance of subsequent models. In the medical field in particular, data classification can help accurately locate patients' lesions and reduce doctors' workload during treatment. However, medical data is characterized by high noise, strong correlation, and high dimensionality, which poses great challenges to traditional classification models, so designing an advanced model to improve medical data classification is very important. In this context, this paper first introduces the structure and characteristics of the convolutional neural network (CNN) model and then demonstrates its unique advantages in medical data processing, especially data classification. Secondly, we design a new medical data classification model based on the CNN. Finally, simulation results show that the proposed method achieves higher classification accuracy, faster model convergence, and lower training error than conventional machine learning methods, demonstrating its effectiveness for medical data classification.
|
222
|
Ramzan M, Raza M, Sharif MI, Kadry S. Gastrointestinal Tract Polyp Anomaly Segmentation on Colonoscopy Images Using Graft-U-Net. J Pers Med 2022; 12:1459. [PMID: 36143244 PMCID: PMC9503374 DOI: 10.3390/jpm12091459]
Abstract
Computer-aided polyp segmentation is a crucial task that supports gastroenterologists in examining and resecting anomalous tissue in the gastrointestinal tract. Polyps grow mainly in the colorectal area of the gastrointestinal tract, as protrusions of abnormal tissue in the mucous membrane; some, such as adenomas, can progress to cancer, so early examination of polyps decreases that risk. Deep learning-based diagnostic systems play a vital role in diagnosing diseases at early stages. Here, a deep learning method, Graft-U-Net, is proposed to segment polyps in colonoscopy frames. Graft-U-Net is a modified version of UNet comprising three stages: preprocessing, encoder, and decoder. The preprocessing stage improves the contrast of the colonoscopy frames, while the encoder analyzes features and the decoder synthesizes them. Graft-U-Net offers better segmentation results than existing deep learning models. The experiments were conducted using two open-access datasets, Kvasir-SEG and CVC-ClinicDB, prepared from colonoscopy procedures of the large bowel. The proposed model achieved a mean Dice of 96.61% and a mean Intersection over Union (mIoU) of 82.45% on the Kvasir-SEG dataset, and a mean Dice of 89.95% and an mIoU of 81.38% on the CVC-ClinicDB dataset.
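The Dice coefficient and IoU reported above are standard overlap measures between a predicted binary mask and the ground-truth mask; for any given prediction, Dice is always at least as large as IoU. A minimal stdlib-Python sketch on flattened binary masks (illustrative, not the paper's evaluation code):

```python
def dice_and_iou(pred, truth):
    """Overlap metrics for two equal-length binary masks (values 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred, truth)  # dice = 2*2/6 ≈ 0.667, iou = 2/4 = 0.5
```

The paper's "mean Dice" and "mIoU" average these per-image scores over the test set.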
Affiliation(s)
- Muhammad Ramzan
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 999095, Lebanon
|
223
|
Wei Q, Clark RA, Demer JL. Can Binocular Alignment Distinguish Hypertropia in Sagging Eye Syndrome From Superior Oblique Palsy? Invest Ophthalmol Vis Sci 2022; 63:13. [PMID: 36136043 PMCID: PMC9513738 DOI: 10.1167/iovs.63.10.13]
Abstract
Purpose Although the three-step test (3ST) is typically used to diagnose superior oblique palsy (SOP), sagging eye syndrome (SES) has clinical similarities. We sought to determine if alignment measurements can distinguish unilateral SOP from hypertropia in SES. Methods We studied hypertropic subjects who underwent surface-coil magnetic resonance imaging (MRI) demonstrating either SO cross-section reduction indicative of congenital or acquired palsy (SOP group) or lateral rectus muscle sag (SES group). Alignment was measured by Hess screen and prism-cover testing. Multiple supervised machine learning methods were employed to evaluate diagnostic accuracy. Rectus pulley coordinates were determined in SES cases fulfilling the 3ST. Results Twenty-three subjects had unilateral SOP manifested by SO atrophy. Eighteen others had normal SO size but MRI findings of SES. Maximum cross-section of the palsied SO was much smaller than contralaterally and in SES (P < 2 × 10⁻⁵). Inferior oblique cross-sections were similar in SOP and SES. In both SOP and SES, hypertropia increased in contralateral and decreased in ipsilateral gaze and was greater in ipsilateral than contralateral head tilt. In SES, nine subjects (50%) fulfilled the 3ST and had greater infraplacement of the lateral than the medial rectus pulley in the hypotropic orbit. Supervised machine learning of alignment data distinguished the diagnoses with areas under the receiver operating characteristic curves of up to 0.93, representing excellent yet imperfect differential diagnosis. Conclusions Because the 3ST is often positive in SES, clinical alignment patterns may confound SES with unilateral SOP, particularly acquired SOP. Machine learning substantially, but imperfectly, improves classification accuracy.
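The area under the ROC curve used above to quantify classifier accuracy equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (the normalized Mann–Whitney U statistic). A small stdlib-Python sketch with made-up scores:

```python
def auroc(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg), counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical decision scores for SOP vs. SES cases (illustrative only)
auc = auroc(pos_scores=[0.9, 0.8, 0.7, 0.4], neg_scores=[0.6, 0.3, 0.2])
```

An AUC of 1.0 means perfect separation of the two diagnoses; 0.5 means chance-level discrimination, so the study's 0.93 sits close to, but short of, perfect separation.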
Affiliation(s)
- Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, Virginia, United States
- Robert A. Clark
- Department of Ophthalmology, University of California, Los Angeles, California, United States
- UCLA Stein Eye Institute, University of California, Los Angeles, California, United States
- Joseph L. Demer
- Department of Ophthalmology, University of California, Los Angeles, California, United States
- UCLA Stein Eye Institute, University of California, Los Angeles, California, United States
- Department of Neurology, University of California, Los Angeles, California, United States
|
224
|
González-Gonzalo C, Thee EF, Klaver CCW, Lee AY, Schlingemann RO, Tufail A, Verbraak F, Sánchez CI. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2022; 90:101034. [PMID: 34902546 PMCID: PMC11696120 DOI: 10.1016/j.preteyeres.2021.101034]
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving close or even superior performance to that of experts, there is a critical gap between development and integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI to close that gap. We identify the main aspects or challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those aspects or challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care, i.e., AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not a responsibility of a sole stakeholder. There is an impending necessity for a collaborative approach where the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establish such multi-stakeholder interaction and the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
Affiliation(s)
- Cristina González-Gonzalo
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Diagnostic Image Analysis Group, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Eric F Thee
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands
- Caroline C W Klaver
- Department of Ophthalmology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Epidemiology, Erasmus Medical Center, Rotterdam, the Netherlands; Department of Ophthalmology, Radboud University Medical Center, Nijmegen, the Netherlands; Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Aaron Y Lee
- Department of Ophthalmology, School of Medicine, University of Washington, Seattle, WA, USA
- Reinier O Schlingemann
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Fondation Asile des Aveugles, Lausanne, Switzerland
- Adnan Tufail
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom; Institute of Ophthalmology, University College London, London, United Kingdom
- Frank Verbraak
- Department of Ophthalmology, Amsterdam University Medical Center, Amsterdam, the Netherlands
- Clara I Sánchez
- Eye Lab, qurAI Group, Informatics Institute, University of Amsterdam, Amsterdam, the Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, Amsterdam, the Netherlands
|
225
|
Natarajan R, Matai HD, Raman S, Kumar S, Ravichandran S, Swaminathan S, Rani Alex JS. Advances in the diagnosis of herpes simplex stromal necrotising keratitis: A feasibility study on deep learning approach. Indian J Ophthalmol 2022; 70:3279-3283. [PMID: 36018103 DOI: 10.4103/ijo.ijo_178_22]
Abstract
Purpose Infectious keratitis, especially viral keratitis (VK), can be a challenge to diagnose in resource-limited settings and carries a high risk of misdiagnosis, contributing to significant ocular morbidity. We aimed to employ and study artificial intelligence-based deep learning (DL) algorithms to diagnose VK. Methods A single-center retrospective study was conducted in a tertiary care center from January 2017 to December 2019, employing a DL algorithm to diagnose VK from slit-lamp (SL) photographs. Three hundred and seven diffusely illuminated SL photographs from 285 eyes with polymerase chain reaction-proven herpes simplex viral stromal necrotizing keratitis (HSVNK) or culture-proven nonviral keratitis (NVK) were included. Patients having only HSV epithelial dendrites, endotheliitis, or mixed infection, and those with no SL photographs, were excluded. DenseNet, a convolutional neural network, was used, and each of the two main image datasets was divided into two subsets, one for training and the other for testing the algorithm. The performance of DenseNet was also compared with ResNet and Inception. Sensitivity, specificity, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) were calculated. Results The accuracy of DenseNet on the test dataset was 72%, and it performed better than ResNet and Inception on the given task. The AUC for HSVNK was 0.73, with a sensitivity of 69.6% and a specificity of 76.5%. The results were also validated using gradient-weighted class activation mapping (Grad-CAM), which successfully visualized the regions of the input that are significant for accurate predictions by these DL-based models. Conclusion A DL algorithm can be a valuable diagnostic aid for VK, especially in primary care centers where appropriate laboratory facilities or expert manpower are unavailable.
Affiliation(s)
- Radhika Natarajan
- Department of Cornea and Refractive Surgery, Sankara Nethralaya, Medical Research Foundation, 18 College Road, Nungambakkam, Chennai, Tamil Nadu, India
- Hiren D Matai
- Department of Cornea and Refractive Surgery, Sankara Nethralaya, Medical Research Foundation, 18 College Road, Nungambakkam, Chennai, Tamil Nadu, India
- Sundaresan Raman
- Department of Computer Science and Information Systems, Birla Institute of Technology and Science, Pilani, Rajasthan, India
- Subham Kumar
- Department of Computer Science and Information Systems, Birla Institute of Technology and Science, Pilani, Rajasthan, India
- Swetha Ravichandran
- Department of Cornea and Refractive Surgery, Sankara Nethralaya, Medical Research Foundation, 18 College Road, Nungambakkam, Chennai, Tamil Nadu, India
- Samyuktha Swaminathan
- Department of Computer Science and Engineering, Meenakshi Sundararajan Engineering College, Chennai, Tamil Nadu, India
- John Sahaya Rani Alex
- Centre for Healthcare Advancement, Innovation and Research, VIT, Chennai, Tamil Nadu, India
|
226
|
Ngo B, Nguyen D, vanSonnenberg E. The Cases for and against Artificial Intelligence in the Medical School Curriculum. Radiol Artif Intell 2022; 4:e220074. [PMID: 36204540 PMCID: PMC9530767 DOI: 10.1148/ryai.220074]
Abstract
Although artificial intelligence (AI) has immense potential to shape the future of medicine, its place in undergraduate medical education currently is unclear. Numerous arguments exist both for and against including AI in the medical school curriculum. AI likely will affect all medical specialties, perhaps radiology more so than any other. The purpose of this article is to present a balanced perspective on whether AI should be included officially in the medical school curriculum. After presenting the balanced point-counterpoint arguments, the authors provide a compromise. Keywords: Artificial Intelligence, Medical Education, Medical School Curriculum, Medical Students, Radiology, Use of AI in Education © RSNA, 2022.
Affiliation(s)
- Brandon Ngo
- From the University of Arizona College of Medicine – Phoenix, HSEB C536, 475 N 5th St, Phoenix, AZ 85004
- Diep Nguyen
- From the University of Arizona College of Medicine – Phoenix, HSEB C536, 475 N 5th St, Phoenix, AZ 85004
- Eric vanSonnenberg
- From the University of Arizona College of Medicine – Phoenix, HSEB C536, 475 N 5th St, Phoenix, AZ 85004
|
227
|
Chen HSL, Chen GA, Syu JY, Chuang LH, Su WW, Wu WC, Liu JH, Chen JR, Huang SC, Kang EYC. Early Glaucoma Detection by Using Style Transfer to Predict Retinal Nerve Fiber Layer Thickness Distribution on the Fundus Photograph. Ophthalmol Sci 2022; 2:100180. [PMID: 36245759 PMCID: PMC9559108 DOI: 10.1016/j.xops.2022.100180]
Abstract
Objective We aimed to develop a deep learning (DL)-based algorithm for early glaucoma detection based on color fundus photographs that provides information on retinal nerve fiber layer (RNFL) defects and thickness, learned from the mapping and translation relations of spectral-domain OCT (SD-OCT) thickness maps. Design Development and evaluation of an artificial intelligence detection tool. Subjects Pretraining paired data of color fundus photographs and SD-OCT images from 189 healthy participants and 371 patients with early glaucoma were used. Methods The variational autoencoder (VAE) network architecture was used for training, and the correlation between the fundus photographs and the RNFL thickness distribution was determined through the deep neural network. The reference standard was defined as a vertical cup-to-disc ratio of ≥0.7, other typical changes in glaucomatous optic neuropathy, and RNFL defects. Convergence indicates that the VAE has learned a distribution from which corresponding synthetic OCT scans can be produced. Main Outcome Measures As with wide-field OCT scanning, the proposed model can extract the results of RNFL thickness analysis. The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were used to assess signal strength and the structural similarity of the color fundus images converted to an RNFL thickness distribution model, and the differences between the model-generated and original images were quantified. Results We developed and validated a novel DL-based algorithm that extracts thickness information from the color space of fundus images, similarly to OCT, and uses this information to regenerate RNFL thickness distribution images. The generated thickness map was sufficient for clinical glaucoma detection, and the generated images were similar to the ground truth (PSNR: 19.31 dB; SSIM: 0.44). The inference results were similar to the original OCT-generated images in their ability to predict RNFL thickness distribution. Conclusions The proposed technique may aid clinicians in early glaucoma detection, especially when only color fundus photographs are available.
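PSNR, one of the two image-similarity measures reported above, is computed from the mean squared error between the generated and reference images relative to the maximum possible pixel value. A minimal stdlib-Python sketch for 8-bit images represented as flattened pixel lists (the example values are illustrative, not the paper's data):

```python
import math

def psnr(reference, generated, max_val=255.0):
    """Peak signal-to-noise ratio in decibels; higher means more similar."""
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [52, 55, 61, 59]
gen = [50, 57, 60, 61]
value = psnr(ref, gen)
```

SSIM, the complementary measure, additionally compares local luminance, contrast, and structure rather than raw pixel error alone.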
Affiliation(s)
Henry Shen-Lih Chen
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
Guan-An Chen
- Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
Jhen-Yang Syu
- Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
Lan-Hsin Chuang
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Department of Ophthalmology, Keelung Chang Gung Memorial Hospital, Keelung, Taiwan
Wei-Wen Su
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
Wei-Chi Wu
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
Jian-Hong Liu
- Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
Jian-Ren Chen
- Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
Su-Chen Huang
- Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
Eugene Yu-Chuan Kang
- Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan

228
Warin K, Limprasert W, Suebnukarn S, Jinaporntham S, Jantana P, Vicharueang S. AI-based analysis of oral lesions using novel deep convolutional neural networks for early detection of oral cancer. PLoS One 2022; 17:e0273508. [PMID: 36001628 PMCID: PMC9401150 DOI: 10.1371/journal.pone.0273508] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 08/09/2022] [Indexed: 11/18/2022] Open
Abstract
Artificial intelligence (AI) applications in oncology have developed rapidly, with reported successes in recent years. This work aims to evaluate the performance of deep convolutional neural network (CNN) algorithms for the classification and detection of oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) in oral photographic images. A dataset of 980 oral photographic images was divided into 365 images of OSCC, 315 images of OPMDs and 300 non-pathological images. Multiclass image classification models were created using DenseNet-169, ResNet-101, SqueezeNet and Swin-S, and multiclass object detection models using Faster R-CNN, YOLOv5, RetinaNet and CenterNet2. The best image classification model, DenseNet-169, achieved AUCs of 1.00 and 0.98 on OSCC and OPMDs, respectively, while the best CNN-based object detection model, Faster R-CNN, achieved AUCs of 0.88 and 0.64 on OSCC and OPMDs, respectively. DenseNet-169 therefore yielded the best multiclass image classification performance, with values in line with the performance of experts and superior to those of general practitioners (GPs). In conclusion, CNN-based models have potential for the identification of OSCC and OPMDs in oral photographic images and are expected to become a diagnostic tool to assist GPs in the early detection of oral cancer.
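The per-class AUCs quoted in this entry can be computed as one-vs-rest binary AUCs. A small numpy sketch using the rank (Mann-Whitney) formulation, with a toy three-class example standing in for the OSCC / OPMD / non-pathological classes (the probabilities below are illustrative, not the paper's data):

```python
import numpy as np

def binary_auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive example is scored above a random negative one (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def one_vs_rest_auc(probs, y):
    """Per-class AUC for a multiclass classifier, one class against the rest."""
    return {k: binary_auc(probs[:, k], (y == k).astype(int)) for k in range(probs.shape[1])}

# toy 3-class softmax outputs: columns = OSCC, OPMD, non-pathological
probs = np.array([[0.8, 0.1, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.1, 0.2, 0.7]])
y = np.array([0, 0, 1, 2])
print(one_vs_rest_auc(probs, y))  # each class is perfectly ranked here, so all AUCs are 1.0
```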
Affiliation(s)
Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Khlong Luang, Pathum Thani, Thailand
Wasit Limprasert
- College of Interdisciplinary Studies, Thammasat University, Khlong Luang, Pathum Thani, Thailand
Siriwan Suebnukarn
- Faculty of Dentistry, Thammasat University, Khlong Luang, Pathum Thani, Thailand

229
A lightweight hybrid deep learning system for cardiac valvular disease classification. Sci Rep 2022; 12:14297. [PMID: 35995814 PMCID: PMC9395359 DOI: 10.1038/s41598-022-18293-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Accepted: 08/09/2022] [Indexed: 12/21/2022] Open
Abstract
Cardiovascular diseases (CVDs) are a prominent cause of death globally. The advent of medical big data and artificial intelligence (AI) has encouraged efforts to develop and deploy deep learning models for detecting heart sound abnormalities. These systems employ phonocardiogram (PCG) signals because they are simple to acquire and cost-effective, and automated early diagnosis of CVDs helps avert deadly complications. In this research, a cardiac diagnostic system combining CNN and LSTM components was developed; it uses PCG signals and can be trained on either augmented or non-augmented datasets. The proposed model discriminates five heart valvular conditions: normal, Aortic Stenosis (AS), Mitral Regurgitation (MR), Mitral Stenosis (MS), and Mitral Valve Prolapse (MVP). The findings demonstrate that the suggested end-to-end architecture yields outstanding performance on all important evaluation metrics. For the five-class problem on the open heart sound dataset, accuracy was 98.5%, F1-score 98.501%, and Area Under the Curve (AUC) 0.9978 for the non-augmented dataset, and accuracy was 99.87%, F1-score 99.87%, and AUC 0.9985 for the augmented dataset. Model performance was further evaluated on the PhysioNet/Computing in Cardiology 2016 challenge dataset; for the two-class problem, accuracy was 93.76%, F1-score 85.59%, and AUC 0.9505. The achieved results show that the proposed system outperforms all previous works using the same audio signal databases. In the future, these findings will help build a multimodal structure that uses both PCG and ECG signals.
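A toy, untrained numpy forward pass can illustrate the overall CNN-then-recurrent pipeline this entry describes: 1-D convolution over the heart-sound signal, pooling, a recurrence over the resulting frames, and a 5-way softmax. This is a shape-level sketch only; a plain tanh recurrence stands in for the LSTM, and the weights and input signal are random placeholders, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(42)
CLASSES = ["normal", "AS", "MR", "MS", "MVP"]

def conv1d_relu(x, kernels):
    """Valid 1-D convolution of a mono signal with a bank of kernels, plus ReLU."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)   # (T-k+1, k)
    return np.maximum(windows @ kernels.T, 0.0)                # (T-k+1, n_kernels)

def max_pool(feat, size):
    t = (feat.shape[0] // size) * size
    return feat[:t].reshape(-1, size, feat.shape[1]).max(axis=1)

def rnn_last_state(seq, Wx, Wh):
    """Plain tanh recurrence over the frame sequence (LSTM stand-in)."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq:
        h = np.tanh(x_t @ Wx.T + h @ Wh.T)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

pcg = rng.normal(size=2000)                      # stand-in for a heart-sound recording
kernels = rng.normal(size=(8, 25)) * 0.1         # 8 convolutional filters of width 25
feat = max_pool(conv1d_relu(pcg, kernels), 16)   # framed feature sequence
Wx, Wh = rng.normal(size=(16, 8)) * 0.1, rng.normal(size=(16, 16)) * 0.1
Wout = rng.normal(size=(5, 16)) * 0.1
probs = softmax(Wout @ rnn_last_state(feat, Wx, Wh))
print(dict(zip(CLASSES, probs.round(3))))
```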
230
Xie E, Sung E, Saad E, Trayanova N, Wu KC, Chrispin J. Advanced imaging for risk stratification for ventricular arrhythmias and sudden cardiac death. Front Cardiovasc Med 2022; 9:884767. [PMID: 36072882 PMCID: PMC9441865 DOI: 10.3389/fcvm.2022.884767] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Accepted: 08/02/2022] [Indexed: 11/13/2022] Open
Abstract
Sudden cardiac death (SCD) is a leading cause of mortality, comprising approximately half of all deaths from cardiovascular disease. In the US, the majority of SCD (85%) occurs in patients with ischemic cardiomyopathy (ICM) and a subset in patients with non-ischemic cardiomyopathy (NICM), who tend to be younger and whose risk of mortality is less clearly delineated than in ischemic cardiomyopathies. The conventional means of SCD risk stratification has been the determination of the ejection fraction (EF), typically via echocardiography, which is currently a means of determining candidacy for primary prevention in the form of implantable cardiac defibrillators (ICDs). Advanced cardiac imaging methods such as cardiac magnetic resonance imaging (CMR), single-photon emission computerized tomography (SPECT) and positron emission tomography (PET), and computed tomography (CT) have emerged as promising and non-invasive means of risk stratification for sudden death through their characterization of the underlying myocardial substrate that predisposes to SCD. Late gadolinium enhancement (LGE) on CMR detects myocardial scar, which can inform ICD decision-making. Overall scar burden, region-specific scar burden, and scar heterogeneity have all been studied in risk stratification. PET and SPECT are nuclear methods that determine myocardial viability and innervation, as well as inflammation. CT can be used for assessment of myocardial fat and its association with reentrant circuits. Emerging methodologies include the development of "virtual hearts" using complex electrophysiologic modeling derived from CMR to attempt to predict arrhythmic susceptibility. Recent developments have paired novel machine learning (ML) algorithms with established imaging techniques to improve predictive performance. The use of advanced imaging to augment risk stratification for sudden death is increasingly well-established and may soon have an expanded role in clinical decision-making. 
ML could help shift this paradigm further by advancing variable discovery and data analysis.
Affiliation(s)
Eric Xie
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
Eric Sung
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
Elie Saad
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
Natalia Trayanova
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
Katherine C. Wu
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
Jonathan Chrispin
- Division of Cardiology, Department of Medicine, Section of Cardiac Electrophysiology, Johns Hopkins University School of Medicine, Baltimore, MD, United States

231
Nanni L, Brahnam S, Paci M, Ghidoni S. Comparison of Different Convolutional Neural Network Activation Functions and Methods for Building Ensembles for Small to Midsize Medical Data Sets. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22166129. [PMID: 36015898 PMCID: PMC9415767 DOI: 10.3390/s22166129] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 08/09/2022] [Accepted: 08/12/2022] [Indexed: 05/08/2023]
Abstract
CNNs and other deep learners are now state-of-the-art in medical imaging research. However, the small sample size of many medical data sets dampens performance and results in overfitting; in some medical areas, it is simply too labor-intensive and expensive to amass images numbering in the hundreds of thousands. Building deep ensembles of pre-trained CNNs is one powerful method for overcoming this problem: ensembles combine the outputs of multiple classifiers to improve performance. This method relies on introducing diversity, which can happen at many levels of the classification workflow. A recent ensembling method that has shown promise is to vary the activation functions in a set of CNNs or within different layers of a single CNN. This study examines the performance of both methods using a large set of twenty activation functions, six of which are presented here for the first time: 2D Mexican ReLU, TanELU, MeLU + GaLU, Symmetric MeLU, Symmetric GaLU, and Flexible MeLU. The proposed method was tested on fifteen medical data sets representing various classification tasks. The best-performing ensemble combined two well-known CNNs (VGG16 and ResNet50) whose standard ReLU activation layers were randomly replaced with other activations. The results demonstrate the superior performance of this approach.
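The core ensembling idea here, i.e. networks that differ only in their activation functions and are fused by averaging their outputs, can be sketched with toy multilayer perceptrons and stock activations. The paper's learnable MeLU-family activations are not reproduced; weights are random placeholders, so this shows the fusion mechanics, not trained performance.

```python
import numpy as np

rng = np.random.default_rng(7)
ACTIVATIONS = {
    "relu": lambda z: np.maximum(z, 0.0),
    "elu":  lambda z: np.where(z > 0, z, np.expm1(z)),
    "tanh": np.tanh,
}

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def make_mlp(n_in, n_hidden, n_out):
    """Random two-layer network; only the activation will vary across members."""
    return rng.normal(size=(n_hidden, n_in)) * 0.3, rng.normal(size=(n_out, n_hidden)) * 0.3

def forward(x, params, act):
    W1, W2 = params
    return softmax(ACTIVATIONS[act](x @ W1.T) @ W2.T)

x = rng.normal(size=(4, 10))                                    # 4 samples, 10 features
members = [(make_mlp(10, 32, 3), act) for act in ACTIVATIONS]   # one net per activation
ensemble = np.mean([forward(x, p, a) for p, a in members], axis=0)  # average (sum rule)
print(ensemble.round(3))
```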
Affiliation(s)
Loris Nanni
- Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy
Sheryl Brahnam
- Department of Information Technology and Cybersecurity, Missouri State University, 901 S. National Street, Springfield, MO 65804, USA
Michelangelo Paci
- BioMediTech, Faculty of Medicine and Health Technology, Tampere University, Arvo Ylpön katu 34, D 219, FI-33520 Tampere, Finland
Stefano Ghidoni
- Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy

232
Zhang H, Zhang L, Wang S, Zhang L. Online water quality monitoring based on UV-Vis spectrometry and artificial neural networks in a river confluence near Sherfield-on-Loddon. ENVIRONMENTAL MONITORING AND ASSESSMENT 2022; 194:630. [PMID: 35920913 PMCID: PMC9349112 DOI: 10.1007/s10661-022-10118-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Accepted: 05/15/2022] [Indexed: 06/15/2023]
Abstract
Water quality monitoring is very important in agricultural catchments. UV-Vis spectrometry is widely used in place of traditional analytical methods because it is cost-effective and fast and produces no chemical waste. In recent years, artificial neural networks have been extensively studied and applied in various areas. In this study, we aim to simplify water quality monitoring using UV-Vis spectrometry and artificial neural networks. Samples were collected and immediately taken back to a laboratory for analysis, where absorption spectra were acquired over a wavelength range of 200 to 800 nm. Convolutional neural network (CNN) and partial least squares (PLS) methods were used to estimate water quality parameters. The experimental results show that both methods can obtain accurate results: the correlation coefficient (R2) between predicted and true TOC concentrations is 0.927 with the PLS model and 0.953 with the CNN model, and R2 between predicted and true TSS concentrations is 0.827 with the PLS model and 0.915 with the CNN model. The CNN method can achieve a better R2 even with a small number of samples and can be used for online water quality monitoring combined with UV-Vis spectrometry in agricultural catchments.
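The spectra-to-concentration calibration this entry evaluates can be illustrated with a toy example: synthetic Beer-Lambert-like spectra (absorbance linear in concentration plus noise), an ordinary least-squares fit standing in for the PLS/CNN models, and an R2 computed between predicted and measured values. All data below are synthetic placeholders, not the study's measurements.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination between measured and predicted values."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
# toy spectra: 50 samples x 20 wavelengths, absorbance linear in concentration
conc = rng.uniform(1.0, 10.0, size=50)                  # stand-in "TOC", arbitrary units
response = rng.random(20)                               # per-wavelength sensitivity
spectra = np.outer(conc, response) + rng.normal(0, 0.02, (50, 20))

# calibrate a linear model on the first 40 samples, evaluate on the held-out 10
coef, *_ = np.linalg.lstsq(spectra[:40], conc[:40], rcond=None)
pred = spectra[40:] @ coef
print(round(r2_score(conc[40:], pred), 3))
```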
Affiliation(s)
Hongming Zhang
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, 100101, China
Lifu Zhang
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, 100101, China
Sa Wang
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, 100101, China
LinShan Zhang
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, 100101, China

233
Manickam P, Mariappan SA, Murugesan SM, Hansda S, Kaushik A, Shinde R, Thipperudraswamy SP. Artificial Intelligence (AI) and Internet of Medical Things (IoMT) Assisted Biomedical Systems for Intelligent Healthcare. BIOSENSORS 2022; 12:bios12080562. [PMID: 35892459 PMCID: PMC9330886 DOI: 10.3390/bios12080562] [Citation(s) in RCA: 93] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 07/20/2022] [Accepted: 07/21/2022] [Indexed: 05/05/2023]
Abstract
Artificial intelligence (AI) is a modern approach based on computer science that develops programs and algorithms to make devices intelligent and efficient at performing tasks that usually require skilled human intelligence. AI involves various subsets, including machine learning (ML), deep learning (DL), convolutional neural networks, fuzzy logic, and speech recognition, each with unique capabilities and functionalities that can improve the performance of modern medical sciences. Such intelligent systems simplify human intervention in clinical diagnosis, medical imaging, and decision-making. In the same era, the Internet of Medical Things (IoMT) has emerged as a next-generation bio-analytical tool that combines network-linked biomedical devices with software applications for advancing human health. In this review, we discuss the importance of AI in improving the capabilities of IoMT and point-of-care (POC) devices used in advanced healthcare sectors such as cardiac measurement, cancer diagnosis, and diabetes management. The role of AI in supporting advanced robotic surgeries developed for advanced biomedical applications is also discussed. The position and importance of AI in improving the functionality, detection accuracy, and decision-making ability of IoMT devices, as well as in evaluating associated risks, are discussed carefully and critically. This review also encompasses the technological and engineering challenges of, and prospects for, AI-based cloud-integrated personalized IoMT devices for designing efficient POC biomedical systems suitable for next-generation intelligent healthcare.
Affiliation(s)
Pandiaraj Manickam
- Electrodics and Electrocatalysis Division, CSIR-Central Electrochemical Research Institute (CECRI), Karaikudi, Sivagangai 630003, Tamil Nadu, India
- Academy of Scientific & Innovative Research (AcSIR), Ghaziabad 201002, Uttar Pradesh, India
Siva Ananth Mariappan
- Electrodics and Electrocatalysis Division, CSIR-Central Electrochemical Research Institute (CECRI), Karaikudi, Sivagangai 630003, Tamil Nadu, India
- Academy of Scientific & Innovative Research (AcSIR), Ghaziabad 201002, Uttar Pradesh, India
Sindhu Monica Murugesan
- Electrodics and Electrocatalysis Division, CSIR-Central Electrochemical Research Institute (CECRI), Karaikudi, Sivagangai 630003, Tamil Nadu, India
Shekhar Hansda
- Academy of Scientific & Innovative Research (AcSIR), Ghaziabad 201002, Uttar Pradesh, India
- Corrosion and Materials Protection Division, CSIR-Central Electrochemical Research Institute (CECRI), Karaikudi, Sivagangai 630003, Tamil Nadu, India
Ajeet Kaushik
- School of Engineering, University of Petroleum and Energy Studies (UPES), Dehradun 248001, Uttarakhand, India
- NanoBioTech Laboratory, Department of Environmental Engineering, Florida Polytechnic University, Lakeland, FL 33805-8531, USA
Ravikumar Shinde
- Department of Zoology, Shri Pundlik Maharaj Mahavidyalaya Nandura, Buldana 443404, Maharashtra, India
S. P. Thipperudraswamy
- Academy of Scientific & Innovative Research (AcSIR), Ghaziabad 201002, Uttar Pradesh, India
- Central Instrument Facility, CSIR-Central Electrochemical Research Institute, Karaikudi, Sivagangai 630003, Tamil Nadu, India

234
Krammer S, Li Y, Jakob N, Boehm AS, Wolff H, Tang P, Lasser T, French LE, Hartmann D. Deep learning-based classification of dermatological lesions given a limited amount of labeled data. J Eur Acad Dermatol Venereol 2022; 36:2516-2524. [PMID: 35876737 DOI: 10.1111/jdv.18460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 06/10/2022] [Indexed: 11/28/2022]
Abstract
BACKGROUND Artificial intelligence (AI) techniques are promising for the early diagnosis of skin diseases. However, a precondition for their success is access to large-scale annotated data, which until now has only been feasible with very high personnel and financial resources. OBJECTIVES The aim of this study was to overcome the obstacle posed by the scarcity of labeled data. METHODS To simulate the scenario of label shortage, we discarded a proportion of the labels of the training set, which then consisted of both labeled and unlabeled images. We leveraged a self-supervised learning technique to pre-train the AI model on the unlabeled images, then fine-tuned the pre-trained model on the labeled images. RESULTS When the images in the training dataset were fully labeled, the self-supervised pre-trained model achieved an accuracy of 95.7%, a precision of 91.7% and a sensitivity of 90.7%. When only 10% of the data was labeled, the model still yielded an accuracy of 87.7%, a precision of 81.7% and a sensitivity of 68.6%. In addition, we empirically verified that the AI model and dermatologists are consistent in how they visually inspect the skin images. CONCLUSIONS The experimental results demonstrate the great potential of self-supervised learning for alleviating the scarcity of annotated data.
Affiliation(s)
S Krammer
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
Y Li
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
N Jakob
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
A S Boehm
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
H Wolff
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
P Tang
- Department of Informatics, School of Computations, Information, and Technology, and Munich Institute of Biomedical Engineering, Technical University of Munich, Munich, Germany
T Lasser
- Department of Informatics, School of Computations, Information, and Technology, and Munich Institute of Biomedical Engineering, Technical University of Munich, Munich, Germany
L E French
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
D Hartmann
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany

235
Atteia G, Alhussan AA, Samee NA. BO-ALLCNN: Bayesian-Based Optimized CNN for Acute Lymphoblastic Leukemia Detection in Microscopic Blood Smear Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22155520. [PMID: 35898023 PMCID: PMC9329984 DOI: 10.3390/s22155520] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 07/21/2022] [Accepted: 07/21/2022] [Indexed: 06/12/2023]
Abstract
Acute lymphoblastic leukemia (ALL) is a deadly cancer characterized by the aberrant accumulation of immature lymphocytes in the blood or bone marrow. Effective treatment of ALL is strongly associated with early diagnosis of the disease. Current practice for initial ALL diagnosis is manual evaluation of stained blood smear microscopy images, which is a time-consuming and error-prone process. Deep learning-based human-centric biomedical diagnosis has recently emerged as a powerful tool for assisting physicians in making medical decisions, and numerous computer-aided diagnostic systems have been developed to autonomously identify ALL in blood images. In this study, a new Bayesian-optimized convolutional neural network (CNN) is introduced for the detection of ALL in microscopic smear images. To promote classification performance, the architecture of the proposed CNN and its hyperparameters are customized to the input data through Bayesian optimization, which adopts an informed iterative procedure to search the hyperparameter space for the set of network hyperparameters that minimizes an objective error function. The proposed CNN is trained and validated using a hybrid dataset formed by integrating two public ALL datasets, further supplemented by data augmentation to boost classification performance. The Bayesian search-derived optimal CNN model recorded improved image-based ALL classification performance on the test set. The findings of this study reveal the superiority of the proposed Bayesian-optimized CNN over other optimized deep learning ALL classification models.
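The "informed iterative procedure" of Bayesian optimization can be sketched in one dimension: a Gaussian-process surrogate with an RBF kernel models the objective, and an expected-improvement acquisition picks the next point to evaluate from a grid. The quadratic objective below is a stand-in for a CNN's validation error as a function of a single hyperparameter; the kernel lengthscale, grid, and initial design are illustrative choices, not the authors' settings.

```python
import numpy as np
from math import erf, sqrt, pi

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    return np.exp(-0.5 * z * z) / sqrt(2.0 * pi)

def rbf(a, b, ls=0.15):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Exact GP posterior mean and std at query points Xs given samples (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    imp = best - mu
    z = imp / sigma
    return imp * np.vectorize(norm_cdf)(z) + sigma * norm_pdf(z)

objective = lambda x: (x - 0.3) ** 2          # stand-in for validation error
grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.05, 0.5, 0.95])               # initial design points
y = objective(X)
for _ in range(8):                             # informed iterative search
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.append(X, x_next), np.append(y, objective(x_next))
print(round(X[np.argmin(y)], 3), round(y.min(), 5))
```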
Affiliation(s)
Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
Amel A. Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia

236
Hofmann SM, Beyer F, Lapuschkin S, Goltermann O, Loeffler M, Müller KR, Villringer A, Samek W, Witte AV. Towards the Interpretability of Deep Learning Models for Multi-modal Neuroimaging: Finding Structural Changes of the Ageing Brain. Neuroimage 2022; 261:119504. [PMID: 35882272 DOI: 10.1016/j.neuroimage.2022.119504] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 07/15/2022] [Accepted: 07/21/2022] [Indexed: 11/17/2022] Open
Abstract
Brain-age (BA) estimates based on deep learning are increasingly used as a neuroimaging biomarker for brain health; however, the underlying neural features have remained unclear. We combined ensembles of convolutional neural networks with Layer-wise Relevance Propagation (LRP) to detect which brain features contribute to BA. Trained on magnetic resonance imaging (MRI) data from a population-based study (n=2637, 18-82 years), our models estimated age accurately based on single and multiple modalities, and on regionally restricted and whole-brain images (mean absolute errors 3.37-3.86 years). We find that BA estimates capture aging through both small- and large-scale changes, revealing gross enlargements of ventricles and subarachnoid spaces, as well as white matter lesions and atrophies that appear throughout the brain. Divergence from expected aging reflected cardiovascular risk factors, and accelerated aging was more pronounced in the frontal lobe. Applying LRP, our study demonstrates how deep learning models detect brain aging in healthy and at-risk individuals throughout adulthood.
Affiliation(s)
Simon M Hofmann
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany; Department of Artificial Intelligence, Fraunhofer Institute Heinrich Hertz, 10587 Berlin, Germany; Clinic for Cognitive Neurology, University of Leipzig Medical Center, 04103 Leipzig, Germany
Frauke Beyer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany; Clinic for Cognitive Neurology, University of Leipzig Medical Center, 04103 Leipzig, Germany
Sebastian Lapuschkin
- Department of Artificial Intelligence, Fraunhofer Institute Heinrich Hertz, 10587 Berlin, Germany
Ole Goltermann
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany; Max Planck School of Cognition, 04103 Leipzig, Germany; Institute of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Germany
Klaus-Robert Müller
- Department of Electrical Engineering and Computer Science, Technical University Berlin, 10623 Berlin, Germany; Department of Artificial Intelligence, Korea University, 02841 Seoul, Korea (the Republic of); Brain Team, Google Research, 10117 Berlin, Germany; Max Planck Institute for Informatics, 66123 Saarbrücken, Germany; BIFOLD - Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany
Arno Villringer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany; Clinic for Cognitive Neurology, University of Leipzig Medical Center, 04103 Leipzig, Germany; MindBrainBody Institute, Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany; Center for Stroke Research, Charité - Universitätsmedizin Berlin, 10117 Berlin, Germany
Wojciech Samek
- Department of Artificial Intelligence, Fraunhofer Institute Heinrich Hertz, 10587 Berlin, Germany; Department of Electrical Engineering and Computer Science, Technical University Berlin, 10623 Berlin, Germany; BIFOLD - Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany
A Veronica Witte
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany; Clinic for Cognitive Neurology, University of Leipzig Medical Center, 04103 Leipzig, Germany

237
McBee P, Zulqarnain F, Syed S, Brown DE. Image-Level Uncertainty in Pseudo-Label Selection for Semi-Supervised Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:4740-4744. [PMID: 36086227 PMCID: PMC10445335 DOI: 10.1109/embc48229.2022.9871359] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Advancements in deep learning techniques have proved useful in biomedical image segmentation. However, the large amount of unlabeled data inherent in biomedical imagery, particularly in digital pathology, creates a semi-supervised learning paradigm: because producing pixel-wise annotations is time-consuming and having a pathologist dedicate time to labeling is costly, there is a large amount of unlabeled data that we wish to utilize in training segmentation algorithms. Pseudo-labeling is one method of leveraging unlabeled data to increase overall model performance. We adapt a method used for image-classification pseudo-labeling to select images for segmentation pseudo-labeling and apply it to three digital pathology datasets. To select images for pseudo-labeling, we create and explore different thresholds for confidence and uncertainty on an image-level basis. Furthermore, we study the relationship between image-level uncertainty and confidence and model performance. We find that the certainty metrics do not always correlate intuitively with performance, and that abnormal correlations serve as an indicator of a model's ability to produce pseudo-labels that are useful in training. Clinical relevance - The proposed approach adapts image-level confidence and uncertainty measures for segmentation pseudo-labeling on digital pathology datasets. Increased model performance enables better disease quantification for histopathology.
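The image-level selection rule this entry studies can be sketched directly: average the per-pixel max-softmax probability (confidence) and the per-pixel predictive entropy (uncertainty) over each image, and keep for pseudo-labeling only images passing both thresholds. The threshold values and the toy probability maps below are illustrative, not taken from the paper.

```python
import numpy as np

def image_confidence(prob_map):
    """Mean max-softmax probability over pixels: image-level confidence."""
    return prob_map.max(axis=-1).mean()

def image_uncertainty(prob_map):
    """Mean per-pixel predictive entropy: image-level uncertainty."""
    p = np.clip(prob_map, 1e-12, 1.0)
    return (-(p * np.log(p)).sum(axis=-1)).mean()

def select_for_pseudo_labeling(prob_maps, conf_thr=0.9, unc_thr=0.3):
    """Keep indices of unlabeled images whose predictions are confident AND low-entropy."""
    keep = []
    for i, pm in enumerate(prob_maps):
        if image_confidence(pm) >= conf_thr and image_uncertainty(pm) <= unc_thr:
            keep.append(i)
    return keep

# two toy 8x8 binary-segmentation probability maps (last axis = classes)
confident = np.stack([np.full((8, 8), 0.97), np.full((8, 8), 0.03)], axis=-1)
uniform = np.full((8, 8, 2), 0.5)                # maximally uncertain prediction
print(select_for_pseudo_labeling([confident, uniform]))  # → [0]
```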
238
Loftus TJ, Shickel B, Ozrazgat-Baslanti T, Ren Y, Glicksberg BS, Cao J, Singh K, Chan L, Nadkarni GN, Bihorac A. Artificial intelligence-enabled decision support in nephrology. Nat Rev Nephrol 2022; 18:452-465. [PMID: 35459850 PMCID: PMC9379375 DOI: 10.1038/s41581-022-00562-3] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/16/2022] [Indexed: 12/12/2022]
Abstract
Kidney pathophysiology is often complex, nonlinear and heterogeneous, which limits the utility of hypothetical-deductive reasoning and linear, statistical approaches to diagnosis and treatment. Emerging evidence suggests that artificial intelligence (AI)-enabled decision support systems - which use algorithms based on learned examples - may have an important role in nephrology. Contemporary AI applications can accurately predict the onset of acute kidney injury before notable biochemical changes occur; can identify modifiable risk factors for chronic kidney disease onset and progression; can match or exceed human accuracy in recognizing renal tumours on imaging studies; and may augment prognostication and decision-making following renal transplantation. Future AI applications have the potential to make real-time, continuous recommendations for discrete actions and yield the greatest probability of achieving optimal kidney health outcomes. Realizing the clinical integration of AI applications will require cooperative, multidisciplinary commitment to ensure algorithm fairness, overcome barriers to clinical implementation, and build an AI-competent workforce. AI-enabled decision support should preserve the pre-eminence of wisdom and augment rather than replace human decision-making. By anchoring intuition with objective predictions and classifications, this approach should favour clinician intuition when it is honed by experience.
Affiliation(s)
- Tyler J Loftus
- Department of Surgery, University of Florida Health, Gainesville, FL, USA
- Benjamin Shickel
- Department of Medicine, University of Florida Health, Gainesville, FL, USA
- Yuanfang Ren
- Department of Medicine, University of Florida Health, Gainesville, FL, USA
- Benjamin S Glicksberg
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jie Cao
- Department of Computational Medicine and Bioinformatics, University of Michigan Medical School, Ann Arbor, MI, USA
- Karandeep Singh
- Department of Learning Health Sciences and Internal Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Lili Chan
- The Mount Sinai Clinical Intelligence Center, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Division of Nephrology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Girish N Nadkarni
- The Mount Sinai Clinical Intelligence Center, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Azra Bihorac
- Department of Medicine, University of Florida Health, Gainesville, FL, USA.
|
239
|
Neural Network Detection of Pacemakers for MRI Safety. J Digit Imaging 2022; 35:1673-1680. [PMID: 35768751 DOI: 10.1007/s10278-022-00663-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 04/23/2022] [Accepted: 05/30/2022] [Indexed: 10/17/2022] Open
Abstract
Flagging the presence of cardiac devices such as pacemakers before an MRI scan is essential to allow appropriate safety checks. We assess the accuracy with which a machine learning model can classify the presence or absence of a pacemaker on pre-existing chest radiographs. A total of 7973 chest radiographs were collected, 3996 with pacemakers visible and 3977 without. Images were identified from information available on the radiology information system (RIS) and correlated with report text. Manual review of images by two board-certified radiologists was performed to ensure correct labeling. The data set was divided into training, validation, and a hold-back test set. The data were used to retrain a pre-trained image-classification neural network, and final model performance was assessed on the test set. Accuracy of 99.67% on the test set was achieved. Re-testing the final model on the full training and validation data revealed a few additional misclassified examples, which are analyzed further. Neural network image classification could be used to screen for the presence of cardiac devices, complementing current safety processes by providing notification of device presence in advance of safety questionnaires. The computational power needed to run the model is low. Further work on misclassified examples could improve accuracy on edge cases. The focus of many healthcare applications of computer vision techniques has been diagnosis and guiding management; this work illustrates an application of computer vision image classification to enhance current processes and improve patient safety.
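The training/validation/hold-back partition described above can be sketched as a simple shuffled split; the 70/15/15 fractions and the fixed seed here are illustrative choices, not the study's actual ratios.

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle labeled examples and partition them into
    training, validation, and a hold-back test set."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Every example lands in exactly one partition, and the hold-back set is untouched until final evaluation.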
|
240
|
A Neural Network Model Secret-Sharing Scheme with Multiple Weights for Progressive Recovery. MATHEMATICS 2022. [DOI: 10.3390/math10132231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
With the widespread use of deep-learning models in production environments, the value of deep-learning models has become more prominent. The key issues are the rights of the model trainers and the security of the specific scenarios in which the models are used. In the commercial domain, consumers pay different fees and have access to different levels of service. Dividing the model into several shadow models with multiple weights is therefore necessary. When holders want to use the model, they can recover a model whose performance corresponds to the number and weights of the collected shadow models, so that access to the model can be controlled progressively; i.e., progressive recovery is significant. This paper proposes a neural network model secret-sharing scheme (NNSS) with multiple weights for progressive recovery. The scheme uses Shamir's polynomial to control the sharing and embedding of model parameters, which in turn enables hierarchical performance control in the secret-model recovery phase. First, the important model parameters are extracted. Then, effective shadow parameters are assigned based on the holders' weights in the sharing phase, and t shadow models are generated. During the recovery phase, the holders can recover the secret parameters with a probability that grows with the number and weights of the shadow models obtained; when all t shadow models are collected, the shadow parameters are recovered with probability 1, and the performance of the reconstructed model reaches the performance of the secret model. A series of experiments conducted on VGG19 verify the effectiveness of the scheme.
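The sharing and recovery phases build on standard Shamir secret sharing. The minimal sketch below shares one integer-quantized parameter over a prime field and recovers it by Lagrange interpolation at x = 0; it illustrates the underlying primitive only, not the paper's weighted multi-shadow-model scheme.

```python
import random

PRIME = 2**31 - 1  # Mersenne prime used as the field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them recover it."""
    # Random degree-(k-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

With fewer than k shares the polynomial is underdetermined and the constant term stays information-theoretically hidden, which is what makes per-holder performance control possible.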
|
241
|
Security Evaluation of Financial and Insurance and Ruin Probability Analysis Integrating Deep Learning Models. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1857100. [PMID: 35720881 PMCID: PMC9200529 DOI: 10.1155/2022/1857100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 04/11/2022] [Accepted: 05/21/2022] [Indexed: 11/17/2022]
Abstract
To ensure the safe development of the financial and insurance industry and to promote continuous growth of the social economy, the theory and role of deep learning are first analyzed. Second, the security of finance and insurance and the probability of ruin (bankruptcy) are discussed. Finally, an analytical model of financial and insurance security and ruin probability is designed using a deep learning model, and the model is evaluated comprehensively. The results show that, first, the proposed security evaluation and ruin-probability analysis model not only has strong learning ability but can also effectively reduce its own calculation error after a short period of training. Second, comparison with other models shows that the designed model controls errors of all kinds more tightly, reducing the overall error rate to about 20%. Finally, data training indicates that the model can accurately and effectively predict the basic situation of the financial and insurance industry; the minimum error reaches 0 and the maximum is only about 3. The research provides a technical reference for the development of the financial and insurance industry and contributes to the prosperity of the social economy.
|
242
|
Ullah F, Moon J, Naeem H, Jabbar S. Explainable artificial intelligence approach in combating real-time surveillance of COVID19 pandemic from CT scan and X-ray images using ensemble model. THE JOURNAL OF SUPERCOMPUTING 2022; 78:19246-19271. [PMID: 35754515 PMCID: PMC9206105 DOI: 10.1007/s11227-022-04631-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 05/25/2022] [Indexed: 06/01/2023]
Abstract
Population size has made disease monitoring a major concern in the healthcare system, due to which auto-detection has become a top priority. Intelligent disease-detection frameworks enable doctors to recognize illnesses, provide stable and accurate results, and lower mortality rates. An acute and severe disease known as Coronavirus (COVID19) has suddenly become a global health crisis. The fastest way to limit the spread of COVID19 is to implement an automated detection approach. In this study, an explainable COVID19 detection system for CT scan and chest X-ray images is established using a combination of deep learning and machine learning classification algorithms. A Convolutional Neural Network (CNN) collects deep features from the collected images, and these features are then fed into a machine learning ensemble for COVID19 assessment. To identify COVID19 disease from images, an ensemble model is developed that includes Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Random Forest (RF). The overall performance of the proposed method is interpreted using Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). The proposed method is evaluated using two datasets containing 1,646 and 2,481 CT scan images gathered from COVID19 patients, respectively. Various performance comparisons with state-of-the-art approaches are also shown. The proposed approach beats existing models, with 98.5% accuracy, 99% precision, and 99% recall. Further, t-SNE and explainable Artificial Intelligence (AI) experiments are conducted to validate the proposed approach.
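The ensemble step, combining the class predictions of several classical classifiers trained on CNN deep features, can be illustrated with a plain majority vote. This sketch assumes each base classifier (GNB, SVM, DT, etc.) has already produced a label per sample; the voting rule itself is generic, not necessarily the authors' exact combination scheme.

```python
import numpy as np

def majority_vote(predictions):
    """Combine base-classifier outputs by majority vote.

    predictions: array-like of shape (n_classifiers, n_samples)
    holding each classifier's predicted class labels."""
    preds = np.asarray(predictions)
    n_samples = preds.shape[1]
    out = np.empty(n_samples, dtype=preds.dtype)
    for j in range(n_samples):
        # Most frequent label among the classifiers for sample j.
        labels, counts = np.unique(preds[:, j], return_counts=True)
        out[j] = labels[counts.argmax()]
    return out
```

A sample is assigned the COVID19-positive class only when most base classifiers agree, which dampens the errors of any single model.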
Affiliation(s)
- Farhan Ullah
- School of Software, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, People's Republic of China
- Jihoon Moon
- Department of Industrial Security, Chung-Ang University, Seoul, 06974, Korea
- Hamad Naeem
- School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, 466000, Henan, People's Republic of China
- Sohail Jabbar
- Department of Computational Sciences, The University of Faisalabad, Faisalabad, 38000, Pakistan
|
243
|
Kugener G, Zhu Y, Pangal DJ, Sinha A, Markarian N, Roshannai A, Chan J, Anandkumar A, Hung AJ, Wrobel BB, Zada G, Donoho DA. Deep Neural Networks Can Accurately Detect Blood Loss and Hemorrhage Control Task Success From Video. Neurosurgery 2022; 90:823-829. [PMID: 35319539 DOI: 10.1227/neu.0000000000001906] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 11/24/2021] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND Deep neural networks (DNNs) have not been proven to detect blood loss (BL) or predict surgeon performance from video. OBJECTIVE To train a DNN using video from cadaveric training exercises of surgeons controlling simulated internal carotid hemorrhage to predict clinically relevant outcomes. METHODS Video was input as a series of images; deep learning networks were developed, which predicted BL and task success from images alone (automated model) and images plus human-labeled instrument annotations (semiautomated model). These models were compared against 2 reference models, which used average BL across all trials as its prediction (control 1) and a linear regression with time to hemostasis (a metric with known association with BL) as input (control 2). The root-mean-square error (RMSE) and correlation coefficients were used to compare the models; lower RMSE indicates superior performance. RESULTS One hundred forty-three trials were used (123 for training and 20 for testing). Deep learning models outperformed controls (control 1: RMSE 489 mL, control 2: RMSE 431 mL, R2 = 0.35) at BL prediction. The automated model predicted BL with an RMSE of 358 mL (R2 = 0.4) and correctly classified outcome in 85% of trials. The RMSE and classification performance of the semiautomated model improved to 260 mL and 90%, respectively. CONCLUSION BL and task outcome classification are important components of an automated assessment of surgical performance. DNNs can predict BL and outcome of hemorrhage control from video alone; their performance is improved with surgical instrument presence data. The generalizability of DNNs trained on hemorrhage control tasks should be investigated.
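The comparison metric and the "control 1" reference model described above (predicting the average blood loss across all trials) can be sketched directly:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error; lower indicates superior performance."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mean_baseline_rmse(y_train, y_test):
    """"Control 1": predict the training-set mean BL for every test trial."""
    constant = np.full(len(y_test), np.mean(y_train))
    return rmse(y_test, constant)
```

A learned model is only worth reporting if its RMSE beats this constant-prediction baseline, which is the comparison the study makes.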
Affiliation(s)
- Guillaume Kugener
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Yichao Zhu
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Dhiraj J Pangal
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Aditya Sinha
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Nicholas Markarian
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Arman Roshannai
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Justin Chan
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Animashree Anandkumar
- Computing + Mathematical Sciences, California Institute of Technology, Pasadena, California, USA
- Andrew J Hung
- Center for Robotic Simulation and Education, USC Institute of Urology, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Bozena B Wrobel
- Caruso Department of Otolaryngology-Head and Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Gabriel Zada
- Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Daniel A Donoho
- Division of Neurosurgery, Department of Surgery, Texas Children's Hospital, Baylor College of Medicine, Houston, Texas, USA
- Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, District of Columbia, USA
|
244
|
Ovalle-Magallanes E, Avina-Cervantes JG, Cruz-Aceves I, Ruiz-Pinales J. Improving convolutional neural network learning based on a hierarchical bezier generative model for stenosis detection in X-ray images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106767. [PMID: 35364481 DOI: 10.1016/j.cmpb.2022.106767] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 03/09/2022] [Accepted: 03/19/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Automatic detection of stenosis on X-ray Coronary Angiography (XCA) images may help diagnose early coronary artery disease. Stenosis is manifested by a buildup of plaque in the arteries, decreasing the blood flow to the heart and increasing the risk of a heart attack. Convolutional Neural Networks (CNNs) have been successfully applied to identify pathological, regular, and featured tissues on rich and diverse medical image datasets. Nevertheless, CNNs face operative and performance limitations when working with small and poorly diversified databases. Transfer learning from large natural-image datasets (such as ImageNet) has become a de facto method to improve neural network performance in the medical image domain. METHODS This paper proposes a novel Hierarchical Bezier-based Generative Model (HBGM) to improve the CNN training process for stenosis detection. Herein, artificial image patches are generated to enlarge the original database, speeding up network convergence. The artificial dataset consists of 10,000 images containing 50% stenosis and 50% non-stenosis cases. Besides, the Fréchet Inception Distance (FID) is used to evaluate the generated data quantitatively. Using the proposed framework, the network is pre-trained with the artificial datasets and subsequently fine-tuned using the real XCA training dataset. The real dataset consists of 250 XCA image patches, 125 for stenosis and the remainder for non-stenosis cases. Furthermore, a Convolutional Block Attention Module (CBAM) was included in the network architecture as a self-attention mechanism to improve the efficiency of the network. RESULTS The results showed that networks pre-trained using the proposed generative model outperformed training from scratch. In particular, an accuracy, precision, sensitivity, F1-score, and specificity of 0.8934, 0.9031, 0.8746, 0.8880, and 0.9111, respectively, were achieved.
The generated artificial dataset achieves a mean FID of 84.0886, with visually more realistic XCA images. CONCLUSIONS Different ResNet architectures for stenosis detection were evaluated, including attention modules in the network. Numerical results demonstrated that using the HBGM yields higher performance than training from scratch, even outperforming the ImageNet pre-trained models.
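The pretrain-on-synthetic, fine-tune-on-real workflow can be shown in miniature with a logistic-regression "network" trained by gradient descent: warm-starting from previously learned weights stands in for fine-tuning a CNN. Everything here is a toy stand-in for the paper's ResNet pipeline, with made-up data shapes and hyperparameters.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression.
    Pass w from a previous run to warm-start (i.e. fine-tune)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1]) if w is None else np.asarray(w, dtype=float).copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))   # sigmoid predictions
        w -= lr * Xb.T @ (p - y) / len(y)   # mean log-loss gradient step
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)
```

Usage mirrors the paper's framework: first train on a large synthetic set, then continue training the same weights on the small real set, rather than starting the second stage from scratch.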
Affiliation(s)
- Emmanuel Ovalle-Magallanes
- Telematics and Digital Signal Processing Research groups (CAs), Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carretera Salamanca-Valle de Santiago km 3.5 + 1.8km, Comunidad de Palo Blanco, Salamanca, 36885 Guanajuato, Mexico.
- Juan Gabriel Avina-Cervantes
- Telematics and Digital Signal Processing Research groups (CAs), Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carretera Salamanca-Valle de Santiago km 3.5 + 1.8km, Comunidad de Palo Blanco, Salamanca, 36885 Guanajuato, Mexico.
- Ivan Cruz-Aceves
- CONACYT, Center for Research in Mathematics (CIMAT), A.C., Jalisco S/N, Col. Valenciana, Guanajuato, 36000 Guanajuato, Mexico.
- Jose Ruiz-Pinales
- Telematics and Digital Signal Processing Research groups (CAs), Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carretera Salamanca-Valle de Santiago km 3.5 + 1.8km, Comunidad de Palo Blanco, Salamanca, 36885 Guanajuato, Mexico.
|
245
|
Soto JT, Weston Hughes J, Sanchez PA, Perez M, Ouyang D, Ashley EA. Multimodal deep learning enhances diagnostic precision in left ventricular hypertrophy. EUROPEAN HEART JOURNAL. DIGITAL HEALTH 2022; 3:380-389. [PMID: 36712167 PMCID: PMC9707995 DOI: 10.1093/ehjdh/ztac033] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 04/25/2022] [Indexed: 02/01/2023]
Abstract
Aims Determining the aetiology of left ventricular hypertrophy (LVH) can be challenging due to the similarity in clinical presentation and cardiac morphological features of diverse causes of disease. In particular, distinguishing individuals with hypertrophic cardiomyopathy (HCM) from the much larger set of individuals with manifest or occult hypertension (HTN) is of major importance for family screening and the prevention of sudden death. We hypothesized that an artificial intelligence method based on joint interpretation of 12-lead electrocardiograms and echocardiogram videos could augment physician interpretation. Methods and results We chose not to train on proximate data labels such as physician over-reads of ECGs or echocardiograms but instead took advantage of electronic health record derived clinical blood pressure measurements and diagnostic consensus (often including molecular testing) among physicians in an HCM centre of excellence. Using more than 18 000 combined instances of electrocardiograms and echocardiograms from 2728 patients, we developed LVH-fusion. On held-out test data, LVH-fusion achieved an F1-score of 0.71 in predicting HCM and 0.96 in predicting HTN. In head-to-head comparison with human readers, LVH-fusion had higher sensitivity and specificity rates than its human counterparts. Finally, we use explainability techniques to investigate local and global features that positively and negatively impact LVH-fusion prediction estimates, providing confirmation from unsupervised analysis of the diagnostic power of lateral T-wave inversion on the ECG and proximal septal hypertrophy on the echocardiogram for HCM. Conclusion These results show that deep learning can provide effective physician augmentation in the face of a common diagnostic dilemma, with far-reaching implications for the prevention of sudden cardiac death.
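One common way to jointly interpret two modalities is late fusion of the unimodal models' class probabilities. The weighted-average sketch below is a generic illustration of that idea, not the actual LVH-fusion architecture; the modality weight is an assumed free parameter.

```python
import numpy as np

def late_fusion(p_ecg, p_echo, w_ecg=0.5):
    """Fuse per-class probabilities from an ECG model and an echo model
    by weighted averaging, then pick the arg-max class per sample."""
    fused = w_ecg * np.asarray(p_ecg) + (1.0 - w_ecg) * np.asarray(p_echo)
    return fused.argmax(axis=-1)
```

Shifting `w_ecg` trades off how much each modality drives the final call; in a real system it would be tuned on validation data.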
Affiliation(s)
- Pablo Amador Sanchez
- Department of Medicine, Division of Cardiology, Stanford University, Stanford, California, USA
- Marco Perez
- Department of Medicine, Division of Cardiology, Stanford University, Stanford, California, USA
- David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, USA
- Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, USA
- Euan A Ashley
- Corresponding author. Tel: 650 498-4900, Fax: 650 498-7452
|
246
|
Automated video analysis of emotion and dystonia in epileptic seizures. Epilepsy Res 2022; 184:106953. [DOI: 10.1016/j.eplepsyres.2022.106953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 05/11/2022] [Accepted: 05/25/2022] [Indexed: 11/18/2022]
|
247
|
Wongvibulsin S, Frech TM, Chren MM, Tkaczyk ER. Expanding Personalized, Data-Driven Dermatology: Leveraging Digital Health Technology and Machine Learning to Improve Patient Outcomes. JID INNOVATIONS 2022; 2:100105. [PMID: 35462957 PMCID: PMC9026581 DOI: 10.1016/j.xjidi.2022.100105] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 12/13/2021] [Accepted: 01/07/2022] [Indexed: 11/30/2022] Open
Abstract
The current revolution of digital health technology and machine learning offers enormous potential to improve patient care. Nevertheless, it is essential to recognize that dermatology requires an approach different from those of other specialties. For many dermatological conditions, there is a lack of standardized methodology for quantitatively tracking disease progression and treatment response (clinimetrics). Furthermore, dermatological diseases impact patients in complex ways, some of which can be measured only through patient reports (psychometrics). New tools using digital health technology (e.g., smartphone applications, wearable devices) can aid in capturing both clinimetric and psychometric variables over time. With these data, machine learning can inform efforts to improve health care by, for example, the identification of high-risk patient groups, optimization of treatment strategies, and prediction of disease outcomes. We use the term personalized, data-driven dermatology to refer to the use of comprehensive data to inform individual patient care and improve patient outcomes. In this paper, we provide a framework that includes data from multiple sources, leverages digital health technology, and uses machine learning. Although this framework is applicable broadly to dermatological conditions, we use the example of a serious inflammatory skin condition, chronic cutaneous graft-versus-host disease, to illustrate personalized, data-driven dermatology.
Affiliation(s)
- Shannon Wongvibulsin
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Medicine, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Tracy M. Frech
- Division of Rheumatology and Immunology, Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- VA Tennessee Valley Healthcare System, U.S. Department of Veterans Affairs, Nashville, Tennessee, USA
- Mary-Margaret Chren
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Eric R. Tkaczyk
- VA Tennessee Valley Healthcare System, U.S. Department of Veterans Affairs, Nashville, Tennessee, USA
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Biomedical Engineering, School of Engineering, Vanderbilt University, Nashville, Tennessee, USA
|
248
|
Crowson MG, Moukheiber D, Arévalo AR, Lam BD, Mantena S, Rana A, Goss D, Bates DW, Celi LA. A systematic review of federated learning applications for biomedical data. PLOS DIGITAL HEALTH 2022; 1:e0000033. [PMID: 36812504 PMCID: PMC9931322 DOI: 10.1371/journal.pdig.0000033] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Accepted: 03/30/2022] [Indexed: 11/18/2022]
Abstract
OBJECTIVES Federated learning (FL) allows multiple institutions to collaboratively develop a machine learning algorithm without sharing their data. Organizations instead share model parameters only, allowing them to benefit from a model built with a larger dataset while maintaining the privacy of their own data. We conducted a systematic review to evaluate the current state of FL in healthcare and discuss the limitations and promise of this technology. METHODS We conducted a literature search using PRISMA guidelines. At least two reviewers assessed each study for eligibility and extracted a predetermined set of data. The quality of each study was determined using the TRIPOD guideline and PROBAST tool. RESULTS 13 studies were included in the full systematic review. Most were in the field of oncology (6 of 13; 46.2%), followed by radiology (5 of 13; 38.5%). The majority evaluated imaging results, performed a binary classification prediction task via offline learning (n = 12; 92.3%), and used a centralized topology, aggregation server workflow (n = 10; 76.9%). Most studies were compliant with the major reporting requirements of the TRIPOD guidelines. In all, 6 of 13 studies (46.2%) were judged at high risk of bias using the PROBAST tool, and only 5 studies used publicly available data. CONCLUSION Federated learning is a growing field in machine learning with many promising uses in healthcare. Few studies have been published to date. Our evaluation found that investigators can do more to address the risk of bias and increase transparency by adding steps for data homogeneity or sharing required metadata and code.
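The centralized-topology, aggregation-server workflow used by most of the reviewed studies typically aggregates via federated averaging: the server combines client parameter vectors weighted by local dataset size, so no raw data leaves an institution. A minimal sketch of one aggregation round:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: size-weighted mean of the
    parameter vectors uploaded by each participating client."""
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return np.average(stacked, axis=0, weights=client_sizes)
```

In a full loop, the server broadcasts the averaged parameters back to the clients, which run further local training before the next round.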
Affiliation(s)
- Matthew G. Crowson
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts, United States of America
- Dana Moukheiber
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, United States of America
- Aldo Robles Arévalo
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Data & Analytics, NTT DATA Portugal, Lisbon, Portugal
- Barbara D. Lam
- Department of Hematology & Oncology, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Sreekar Mantena
- Harvard College, Boston, Massachusetts, United States of America
- Aakanksha Rana
- Massachusetts Institute of Technology, Boston, Massachusetts, United States of America
- Deborah Goss
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- David W. Bates
- Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, MA, United States of America
- Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, MA, United States of America
- Leo Anthony Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
|
249
|
Voigt I, Boeckmann M, Bruder O, Wolf A, Schmitz T, Wieneke H. A deep neural network using audio files for detection of aortic stenosis. Clin Cardiol 2022; 45:657-663. [PMID: 35438211 PMCID: PMC9175247 DOI: 10.1002/clc.23826] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 03/10/2022] [Accepted: 03/13/2022] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND Although aortic stenosis (AS) is the most common valvular heart disease in the western world, many affected patients remain undiagnosed. Auscultation is a readily available screening tool for AS. However, it requires a high level of professional expertise. HYPOTHESIS An AI algorithm can detect AS using audio files with the same accuracy as experienced cardiologists. METHODS A deep neural network (DNN) was trained by preprocessed audio files of 100 patients with AS and 100 controls. The DNN's performance was evaluated with a test data set of 40 patients. The primary outcome measures were sensitivity, specificity, and F1-score. Results of the DNN were compared with the performance of cardiologists, residents, and medical students. RESULTS Eighteen percent of patients without AS and 22% of patients with AS showed an additional moderate or severe mitral regurgitation. The DNN showed a sensitivity of 0.90 (0.81-0.99), a specificity of 1, and an F1-score of 0.95 (0.89-1.0) for the detection of AS. In comparison, we calculated an F1-score of 0.94 (0.86-1.0) for cardiologists, 0.88 (0.78-0.98) for residents, and 0.88 (0.78-0.98) for students. CONCLUSIONS The present study shows that deep learning-guided auscultation predicts significant AS with similar accuracy as cardiologists. The results of this pilot study suggest that AI-assisted auscultation may help general practitioners without special cardiology training in daily practice.
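The reported sensitivity, specificity, and F1-score follow directly from confusion-matrix counts; a small sketch of the computation (the example labels below are illustrative, not the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and F1 from binary labels (1 = AS present)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn)                      # recall on diseased patients
    spec = tn / (tn + fp)                      # recall on controls
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)       # harmonic mean of precision/recall
    return {"sensitivity": sens, "specificity": spec, "f1": f1}
```

Note that a specificity of 1, as the DNN achieved, means no control was ever flagged as AS (fp = 0).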
Affiliation(s)
- Ingo Voigt
- Department of Cardiology and Angiology, Contilia Heart and Vascular Center, Elisabeth-Krankenhaus Essen, Essen, Germany
- Marc Boeckmann
- Department of Cardiology and Angiology, Contilia Heart and Vascular Center, Elisabeth-Krankenhaus Essen, Essen, Germany
- Oliver Bruder
- Department of Cardiology and Angiology, Contilia Heart and Vascular Center, Elisabeth-Krankenhaus Essen, Essen, Germany
- Alexander Wolf
- Department of Cardiology and Angiology, Contilia Heart and Vascular Center, Elisabeth-Krankenhaus Essen, Essen, Germany
- Thomas Schmitz
- Department of Cardiology and Angiology, Contilia Heart and Vascular Center, Elisabeth-Krankenhaus Essen, Essen, Germany
- Heinrich Wieneke
- Department of Cardiology and Angiology, Contilia Heart and Vascular Center, Elisabeth-Krankenhaus Essen, Essen, Germany
|
250
|
Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022; 87:43-53. [DOI: 10.1016/j.clinimag.2022.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Revised: 03/09/2022] [Accepted: 04/11/2022] [Indexed: 11/19/2022]
|