1. Karimi A, Stanik A, Kozitza C, Chen A. Integrating Deep Learning with Electronic Health Records for Early Glaucoma Detection: A Multi-Dimensional Machine Learning Approach. Bioengineering (Basel) 2024; 11:577. PMID: 38927813; PMCID: PMC11200568; DOI: 10.3390/bioengineering11060577.
Abstract
BACKGROUND Recent advancements in deep learning have significantly impacted ophthalmology, especially glaucoma, a leading cause of irreversible blindness worldwide. In this study, we developed a reliable predictive model for glaucoma detection using machine learning models trained on clinical data, social and behavioral risk factors, and demographic data from 1652 participants, split evenly between 826 control subjects and 826 glaucoma patients. METHODS We extracted structured data from control and glaucoma patients' electronic health records (EHR). Three distinct machine learning classifiers, the Random Forest and Gradient Boosting algorithms and the Sequential model from the Keras library of TensorFlow, were employed to conduct predictive analyses across our dataset. Key performance metrics such as accuracy, F1 score, precision, recall, and the area under the receiver operating characteristic curve (AUC) were computed to train and optimize these models. RESULTS The Random Forest model achieved an accuracy of 67.5%, with an AUC of 0.67, outperforming the Gradient Boosting and Sequential models, which registered accuracies of 66.3% and 64.5%, respectively. Our results highlighted key predictive factors such as intraocular pressure, family history, and body mass index, substantiating their roles in glaucoma risk assessment. CONCLUSIONS This study demonstrates the potential of using readily available clinical, lifestyle, and demographic data from EHRs for glaucoma detection with machine learning models. While our model, using EHR data alone, has lower accuracy than models incorporating imaging data, it still offers a promising avenue for early glaucoma risk assessment in primary care settings. The observed disparities in model performance and feature significance underscore the importance of tailoring detection strategies to individual patient characteristics, potentially leading to more effective and personalized glaucoma screening and intervention.
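The evaluation metrics this entry reports (accuracy, F1, AUC) are standard and can be reproduced from raw predictions. A stdlib-only sketch; the labels and scores below are illustrative, not data from the paper:

```python
def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall."""
    tp, fp, fn, _ = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def roc_auc(y_true, scores):
    """AUC as the probability a positive outranks a negative (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]       # model probabilities of glaucoma
y_pred = [1 if s >= 0.5 else 0 for s in scores]
print(round(f1_score(y_true, y_pred), 3))      # 0.667
print(round(roc_auc(y_true, scores), 3))       # 0.889
```

Library implementations (e.g. scikit-learn's `roc_auc_score`) would normally be used; the point here is only what the reported numbers measure.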
Affiliation(s)
- Alireza Karimi
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR 97239, USA
- Ansel Stanik
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Cooper Kozitza
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Aiyin Chen
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, OR 97239, USA
2. Huang X, Islam MR, Akter S, Ahmed F, Kazami E, Serhan HA, Abd-Alrazaq A, Yousefi S. Artificial intelligence in glaucoma: opportunities, challenges, and future directions. Biomed Eng Online 2023; 22:126. PMID: 38102597; PMCID: PMC10725017; DOI: 10.1186/s12938-023-01187-8.
Abstract
Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems across many areas of healthcare, including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools for diagnosing glaucoma, retinal conditions, and other ocular diseases. However, designing and implementing AI models using large imaging datasets is challenging. In this study, we review the machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields, for glaucoma detection, progression assessment, and staging. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight key themes from the existing literature that may help guide future studies. Our goal in this systematic review is to help readers and researchers understand the critical aspects of AI related to glaucoma and determine the necessary steps and requirements for the successful development of AI models in glaucoma.
Affiliation(s)
- Xiaoqin Huang
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Md Rafiqul Islam
- Business Information Systems, Australian Institute of Higher Education, Sydney, Australia
- Shanjita Akter
- School of Computer Science, Taylors University, Subang Jaya, Malaysia
- Fuad Ahmed
- Department of Computer Science & Engineering, Islamic University of Technology (IUT), Gazipur, Bangladesh
- Ehsan Kazami
- Ophthalmology, General Hospital of Mahabad, Urmia University of Medical Sciences, Urmia, Iran
- Hashem Abu Serhan
- Department of Ophthalmology, Hamad Medical Corporations, Doha, Qatar
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Doha, Qatar
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, USA
3. Zedan MJM, Zulkifley MA, Ibrahim AA, Moubark AM, Kamari NAM, Abdani SR. Automated Glaucoma Screening and Diagnosis Based on Retinal Fundus Images Using Deep Learning Approaches: A Comprehensive Review. Diagnostics (Basel) 2023; 13:2180. PMID: 37443574; DOI: 10.3390/diagnostics13132180.
Abstract
Glaucoma is a chronic eye disease that may lead to permanent vision loss if it is not diagnosed and treated at an early stage. The disease originates from irregular behavior in the drainage flow of the eye that eventually increases intraocular pressure, which in the severe stage of the disease deteriorates the optic nerve head and leads to vision loss. Periodic medical follow-ups to observe the retinal area are needed, and ophthalmologists require an extensive degree of skill and experience to interpret the results appropriately. To address this, algorithms based on deep learning techniques have been designed to screen and diagnose glaucoma from retinal fundus image input and to analyze images of the optic nerve and retinal structures. The objective of this paper is therefore to provide a systematic analysis of 52 state-of-the-art studies on the screening and diagnosis of glaucoma, covering the datasets used in developing the algorithms, the performance metrics, and the modalities employed in each article. This review also analyzes and evaluates the methods used, compares their strengths and weaknesses in an organized manner, and explores a wide range of diagnostic procedures, such as image pre-processing, localization, classification, and segmentation. In conclusion, automated glaucoma diagnosis has shown considerable promise when deep learning algorithms are applied; such algorithms could make glaucoma diagnosis more accurate and efficient.
Affiliation(s)
- Mohammad J M Zedan
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Computer and Information Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Ahmad Asrul Ibrahim
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Asraf Mohamed Moubark
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Nor Azwan Mohamed Kamari
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Selangor, Malaysia
4. Song B, Li S, Sunny S, Gurushanth K, Mendonca P, Mukhia N, Patrick S, Peterson T, Gurudath S, Raghavan S, Tsusennaro I, Leivon ST, Kolur T, Shetty V, Bushan V, Ramesh R, Pillai V, Wilder-Smith P, Suresh A, Kuriakose MA, Birur P, Liang R. Exploring uncertainty measures in convolutional neural network for semantic segmentation of oral cancer images. J Biomed Opt 2022; 27:115001. PMID: 36329004; PMCID: PMC9630461; DOI: 10.1117/1.jbo.27.11.115001.
Abstract
SIGNIFICANCE Oral cancer is one of the most prevalent cancers, especially in middle- and low-income countries such as India. Automatic segmentation of oral cancer images can improve the diagnostic workflow and is a significant task in oral cancer image analysis. Despite the remarkable success of deep-learning networks in medical segmentation, they rarely provide uncertainty quantification for their output. AIM We aim to estimate uncertainty in a deep-learning approach to semantic segmentation of oral cancer images and to improve the accuracy and reliability of predictions. APPROACH This work introduced a UNet-based Bayesian deep-learning (BDL) model that segments potentially malignant and malignant lesion areas in the oral cavity and quantifies uncertainty in its predictions. We also developed an efficient variant that is almost six times smaller and about twice as fast at inference as the original UNet. The dataset in this study was collected using our customized screening platform and was annotated by oral oncology specialists. RESULTS The proposed approach achieved good segmentation performance as well as good uncertainty estimation performance. In the experiments, we observed an improvement in pixel accuracy and mean intersection over union when uncertain pixels were removed, indicating that the model's predictions were less accurate in uncertain areas, which may need more attention and further inspection. The experiments also showed that, with some performance compromises, the efficient model reduced computation time and model size, which expands the potential for implementation on portable devices used in resource-limited settings. CONCLUSIONS Our study demonstrates that the UNet-based BDL model not only performs potentially malignant and malignant oral lesion segmentation but also provides informative pixel-level uncertainty estimates. With this extra uncertainty information, the accuracy and reliability of the model's predictions can be improved.
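The "remove uncertain pixels" experiment can be illustrated in miniature: average several stochastic forward passes per pixel (as a Bayesian model with Monte Carlo dropout would produce), compute the predictive entropy, and score accuracy only over the confident pixels. All names and sample values below are illustrative, not from the paper:

```python
import math

def predictive_entropy(p):
    """Binary predictive entropy in nats; 0 = certain, ln 2 = maximally uncertain."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

def accuracy_on_confident(samples_per_pixel, labels, max_entropy):
    """Average T stochastic samples per pixel; score only pixels below the entropy cap."""
    kept, correct = 0, 0
    for samples, label in zip(samples_per_pixel, labels):
        p = sum(samples) / len(samples)          # mean foreground probability
        if predictive_entropy(p) > max_entropy:  # skip uncertain pixels
            continue
        kept += 1
        correct += int((p >= 0.5) == bool(label))
    return kept, (correct / kept if kept else float("nan"))

# Three pixels: confidently right, confidently right, uncertain and wrong.
samples = [[0.9, 0.8, 0.9], [0.1, 0.2, 0.1], [0.4, 0.6, 0.5]]
labels = [1, 0, 0]
print(accuracy_on_confident(samples, labels, max_entropy=0.6))  # (2, 1.0)
```

With the entropy cap relaxed, the uncertain third pixel is kept and accuracy drops to 2/3, which mirrors the paper's observation that filtering uncertain pixels improves the scored accuracy.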
Affiliation(s)
- Bofan Song
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Shaobai Li
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Sumsum Sunny
- Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India
- Nirza Mukhia
- KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Tyler Peterson
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
- Shubha Gurudath
- KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Imchen Tsusennaro
- Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Shirley T. Leivon
- Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Trupti Kolur
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Vivek Shetty
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Vidya Bushan
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Rohan Ramesh
- Christian Institute of Health Sciences and Research, Dimapur, Nagaland, India
- Vijay Pillai
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Petra Wilder-Smith
- University of California, Beckman Laser Institute & Medical Clinic, Irvine, California, United States
- Amritha Suresh
- Mazumdar Shaw Medical Centre, Bangalore, Karnataka, India
- Mazumdar Shaw Medical Foundation, Bangalore, Karnataka, India
- Praveen Birur
- KLE Society Institute of Dental Sciences, Bangalore, Karnataka, India
- Biocon Foundation, Bangalore, Karnataka, India
- Rongguang Liang
- The University of Arizona, Wyant College of Optical Sciences, Tucson, Arizona, United States
5. Nunez R, Harris A, Ibrahim O, Keller J, Wikle CK, Robinson E, Zukerman R, Siesky B, Verticchio A, Rowe L, Guidoboni G. Artificial Intelligence to Aid Glaucoma Diagnosis and Monitoring: State of the Art and New Directions. Photonics 2022; 9:810. PMID: 36816462; PMCID: PMC9934292; DOI: 10.3390/photonics9110810.
Abstract
Recent developments in the use of artificial intelligence in the diagnosis and monitoring of glaucoma are discussed. To set the context and fix terminology, a brief historical overview of artificial intelligence is provided, along with some fundamentals of statistical modeling. Next, recent applications of artificial intelligence techniques in glaucoma diagnosis and the monitoring of glaucoma progression are reviewed, including the classification of visual field images and the detection of glaucomatous change in retinal nerve fiber layer thickness. Current challenges in the direct application of artificial intelligence to furthering our understanding of this disease are also outlined. The article also discusses how the combined use of mathematical modeling and artificial intelligence, together with stronger communication between data scientists and clinicians, may help to address these challenges.
Affiliation(s)
- Roberto Nunez
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Alon Harris
- Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Omar Ibrahim
- Department of Electrical Engineering, Tikrit University, Tikrit P.O. Box 42, Iraq
- James Keller
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Erin Robinson
- Department of Social Work, University of Missouri, Columbia, MO 65211, USA
- Ryan Zukerman
- Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Irving Medical Center, New York-Presbyterian Hospital, New York, NY 10034, USA
- Brent Siesky
- Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Alice Verticchio
- Department of Ophthalmology, Icahn School of Medicine at Mt. Sinai, New York, NY 10029, USA
- Lucas Rowe
- Department of Ophthalmology, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Giovanna Guidoboni
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Department of Mathematics, University of Missouri, Columbia, MO 65211, USA
6. Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. PMID: 36388304; PMCID: PMC9650481; DOI: 10.3389/fpubh.2022.971943.
Abstract
Artificial intelligence (AI), also known as machine intelligence, is a branch of science that empowers machines with human-like intelligence through computer programs. From general healthcare to the precise prevention, diagnosis, and management of diseases, AI is progressing rapidly in various interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, which are computational models composed of multiple layers of simulated neurons that can learn representations of data at multiple levels of abstraction. The Inception-v3 algorithm and the transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural (non-medical) images, allowing an AI system to be trained with a fraction (<1%) of the commonly used training data. The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases, and further discuss the current challenges and prospects in ophthalmology.
Affiliation(s)
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Xiaosi Chen
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tingyao Li
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Tianxing Ma
- Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
- Yang Yang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Bi
- School of Computer Science, University of Sydney, Sydney, NSW, Australia
- Xinyuan Zhang
- Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
7. Artificial intelligence-based methods for fusion of electronic health records and imaging data. Sci Rep 2022; 12:17981. PMID: 36289266; PMCID: PMC9605975; DOI: 10.1038/s41598-022-22514-4.
Abstract
Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these multimodal data sources contributes to a better understanding of human health and provides optimal personalized healthcare. The most important question when using multimodal data is how to fuse them, a field of growing interest among researchers. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of these different data modalities to provide multimodal insights. In this scoping review, we synthesize and analyze the literature that uses AI techniques to fuse multimodal medical data for different clinical applications. More specifically, we focus on studies that fused EHR with medical imaging data to develop AI methods for clinical applications. We present a comprehensive analysis of the fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, the ML algorithms used to perform multimodal fusion for each clinical application, and the available multimodal medical datasets. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines and searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies. After pre-processing and screening, we extracted data from 34 studies that fulfilled the inclusion criteria. We found that the number of studies fusing imaging data with EHR is increasing, doubling from 2020 to 2021. In our analysis, a typical workflow was observed: feeding raw data, fusing the different data modalities with conventional ML or deep learning (DL) algorithms, and evaluating the multimodal fusion through clinical outcome predictions. Early fusion was the most used technique (22 out of 34 studies), and multimodal fusion models outperformed traditional single-modality models on the same tasks. Disease diagnosis and prediction were the most common clinical outcomes (reported in 20 and 10 studies, respectively), and neurological disorders were the dominant disease category (16 studies). Conventional ML models were the most used (19 studies), followed by DL models (16 studies). The multimodal data used in the included studies were mostly from private repositories (21 studies). Through this scoping review, we offer new insights to researchers interested in the current state of knowledge within this research field.
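The "early fusion" strategy this review finds most common is, at its simplest, concatenating per-modality features into one input vector before a single model; late fusion instead combines per-modality predictions afterwards. A minimal sketch under that reading, with illustrative feature values and function names:

```python
def early_fusion(image_features, ehr_features):
    """Early fusion: concatenate modality features into one vector for a single model."""
    return list(image_features) + list(ehr_features)

def late_fusion(predictions, weights=None):
    """Late fusion: combine per-modality model outputs, here by weighted averaging."""
    weights = weights or [1.0 / len(predictions)] * len(predictions)
    return sum(w * p for w, p in zip(weights, predictions))

image_features = [0.12, 0.85, 0.33]   # e.g. embeddings from an imaging model
ehr_features = [54, 1, 27.6]          # e.g. age, sex, BMI from the EHR
fused = early_fusion(image_features, ehr_features)   # one 6-dim input vector
combined = late_fusion([0.9, 0.6])                   # average of two model outputs
print(fused, round(combined, 2))
```

In practice the concatenated features would be normalized first, since imaging embeddings and raw EHR values live on very different scales; the sketch only shows where each fusion step sits in the pipeline.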
8. Chai Y, Liu H, Xu J, Samtani S, Jiang Y, Liu H. A Multi-Label Classification with An Adversarial-Based Denoising Autoencoder for Medical Image Annotation. ACM Trans Manag Inf Syst 2022. DOI: 10.1145/3561653.
Abstract
Medical image annotation aims to automatically describe the content of medical images. It helps doctors understand the content of a medical image and make better-informed decisions, such as a diagnosis. Existing methods mainly follow the approach used for natural images and fail to emphasize object abnormalities, which are the essence of medical image annotation. In light of this, we propose transforming medical image annotation into a multi-label classification problem in which object abnormalities are addressed directly. However, extant multi-label classification studies rely on arduous feature engineering or do not handle label correlation in medical images well. To solve these problems, we propose a novel deep learning model that introduces a frequent pattern mining component and an adversarial-based denoising autoencoder component. Extensive experiments were conducted on a real retinal image dataset to evaluate the performance of the proposed model. The results indicate that the proposed model significantly outperforms both image captioning baselines and multi-label classification baselines.
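The frequent pattern mining component targets label correlation: abnormality labels that often co-occur across images form patterns a multi-label classifier can exploit. A toy sketch of mining frequent label pairs; the annotations, label names, and threshold are illustrative, not the paper's dataset:

```python
from collections import Counter
from itertools import combinations

def frequent_label_pairs(label_sets, min_support):
    """Count co-occurring label pairs and keep those at or above a support threshold."""
    counts = Counter()
    for labels in label_sets:
        for pair in combinations(sorted(labels), 2):
            counts[pair] += 1
    n = len(label_sets)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

# Each set lists the abnormalities annotated in one retinal image (illustrative).
annotations = [
    {"hemorrhage", "exudate"},
    {"hemorrhage", "exudate", "microaneurysm"},
    {"hemorrhage", "microaneurysm"},
    {"exudate"},
]
print(frequent_label_pairs(annotations, min_support=0.5))
# {('exudate', 'hemorrhage'): 0.5, ('hemorrhage', 'microaneurysm'): 0.5}
```

Real frequent pattern mining (e.g. Apriori or FP-growth) extends this to itemsets of any size; the support scores can then inform the classifier about which label combinations are plausible.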
Affiliation(s)
- Yidong Chai
- School of Management, Hefei University of Technology, Key Laboratory of Process Optimization and Intelligent Decision Making, Ministry of Education, China
- Hongyan Liu
- Research Center for Contemporary Management, School of Economics and Management, Tsinghua University, China
- Jie Xu
- Beijing Institute of Ophthalmology, Beijing Tongren Hospital, Capital Medical University, China
- Yuanchun Jiang
- School of Management, Hefei University of Technology, Key Laboratory of Process Optimization and Intelligent Decision Making, Ministry of Education, China
- Haoxin Liu
- School of Management, Hefei University of Technology, Key Laboratory of Process Optimization and Intelligent Decision Making, Ministry of Education, China
9. Increasing Women's Knowledge about HPV Using BERT Text Summarization: An Online Randomized Study. Int J Environ Res Public Health 2022; 19:8100. PMID: 35805761; PMCID: PMC9265758; DOI: 10.3390/ijerph19138100.
Abstract
Despite the availability of online educational resources about human papillomavirus (HPV), many women around the world may be prevented from obtaining the necessary knowledge about HPV. One way to mitigate this lack of knowledge is the use of auto-generated text summarization tools. This study compares the level of HPV knowledge between women who read an auto-generated summary of HPV produced with the BERT deep learning model and women who read a long-form text about HPV. We randomly assigned 386 women to two conditions: half read the auto-generated summary text about HPV (n = 193) and half read an original text about HPV (n = 193). We administered a measure of HPV knowledge consisting of 29 questions. Women who read the original text were more likely to correctly answer two questions on the general HPV knowledge subscale than women who read the summarized text. On the HPV testing knowledge subscale, there was a statistically significant difference in favor of women who read the original text for only one question. The final subscale, HPV vaccination knowledge, did not differ significantly across groups. Using BERT for text summarization thus shows promise for increasing women's knowledge and awareness of HPV while saving them time.
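Extractive summarizers such as the BERT-based one studied here score sentences and keep the top-ranked ones. The sketch below substitutes a simple word-frequency score for BERT's learned sentence representations, purely to illustrate the extractive mechanism; the text and scoring scheme are illustrative:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=1):
    """Score each sentence by the document-level frequency of its words; keep the top n."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)  # keep original order

text = ("HPV is a common virus. The HPV vaccine prevents HPV infection. "
        "Screening also helps detect problems early.")
print(extractive_summary(text, n_sentences=1))
# The HPV vaccine prevents HPV infection.
```

A BERT-based summarizer replaces the frequency score with similarity between sentence embeddings and the document embedding, but the select-and-reorder skeleton is the same.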
10. An edge-driven multi-agent optimization model for infectious disease detection. Appl Intell 2022; 52:14362-14373. PMID: 35280108; PMCID: PMC8898659; DOI: 10.1007/s10489-021-03145-0.
Abstract
This research work introduces a new intelligent framework for infectious disease detection that explores several emerging and intelligent paradigms. We propose deep learning architectures, such as entity embedding networks, long short-term memory, and convolutional neural networks, for accurately learning from heterogeneous medical data to identify disease infection. A multi-agent system is also incorporated to increase the autonomy of the proposed framework, with each agent able to share its derived learning outputs with the other agents in the system. Furthermore, evolutionary computation algorithms, such as memetic algorithms and bee swarm optimization, control the exploration of the framework's hyperparameter space. Intensive experimentation was carried out on medical data. The strong results obtained confirm the superiority of our framework over state-of-the-art solutions in both detection rate and runtime performance, with the detection rate reaching 98% on real use cases.
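A memetic algorithm of the kind mentioned above pairs an evolutionary population with local search applied to each offspring. A stdlib-only sketch on a toy one-dimensional objective standing in for a validation-loss surface over one hyperparameter; every name and setting here is illustrative, not the paper's method:

```python
import random

def memetic_minimize(objective, bounds, pop_size=8, generations=30, seed=0):
    """Evolve a population and locally refine each offspring (the 'memetic' step)."""
    rng = random.Random(seed)
    lo, hi = bounds

    def clamp(x):
        return min(hi, max(lo, x))

    def local_search(x, step=0.05, iters=20):
        # Simple hill climbing: accept a random neighbour only if it improves.
        for _ in range(iters):
            cand = clamp(x + rng.uniform(-step, step))
            if objective(cand) < objective(x):
                x = cand
        return x

    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = clamp((a + b) / 2 + rng.gauss(0, 0.1))  # crossover + mutation
            children.append(local_search(child))            # memetic refinement
        pop = parents + children
    return min(pop, key=objective)

# Toy quadratic with its minimum at 0.3; the search converges near it.
best = memetic_minimize(lambda x: (x - 0.3) ** 2, bounds=(0.0, 1.0))
print(round(best, 2))
```

The local-search step is what distinguishes a memetic algorithm from a plain genetic algorithm; in the framework above it would refine candidate hyperparameter settings between generations.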
11. Wang W, Guo L, Wu YJ, Goh M, Wang S. Content-oriented or persona-oriented? A text analytics of endorsement strategies on public willingness to participate in citizen science. Inf Process Manag 2022. DOI: 10.1016/j.ipm.2021.102832.
12. Su K, Wu J, Gu D, Yang S, Deng S, Khakimova AK. An Adaptive Deep Ensemble Learning Method for Dynamic Evolving Diagnostic Task Scenarios. Diagnostics (Basel) 2021; 11:2288. PMID: 34943525; PMCID: PMC8700766; DOI: 10.3390/diagnostics11122288.
Abstract
Increasingly, machine learning methods have been applied to aid diagnosis, with good results. However, some complex models can confuse physicians because they are difficult to understand, while data differences across diagnostic tasks and institutions can cause fluctuations in model performance. To address this challenge, we combined the Deep Ensemble Model (DEM) and the tree-structured Parzen Estimator (TPE) and propose an adaptive deep ensemble learning method (TPE-DEM) for dynamically evolving diagnostic task scenarios. Unlike previous research that focuses on achieving better performance with a fixed-structure model, our proposed model uses TPE to efficiently aggregate simple models that are more easily understood by physicians and require less training data. In addition, the proposed model can choose the optimal number of layers and the type and number of base learners to achieve the best performance in different diagnostic task scenarios, based on the data distribution and characteristics of the current task. We tested our model on one dataset constructed with a partner hospital and on five UCI public datasets with different characteristics and volumes, covering various diagnostic tasks. Our performance evaluation shows that the proposed model outperforms the baseline models on the different datasets. Our study provides a novel approach to building simple and understandable machine learning models for tasks with variable datasets and feature sets, and the findings have important implications for the application of machine learning models in computer-aided diagnosis.
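The adaptive step, choosing the type and number of base learners from validation performance on the current task, can be sketched with exhaustive search over small ensembles. The TPE search itself is not reproduced here, and the learner names and predictions are illustrative:

```python
from itertools import combinations

def vote_accuracy(members, labels):
    """Accuracy of a strict-majority vote over the chosen members' predictions."""
    correct = 0
    for i, label in enumerate(labels):
        votes = sum(preds[i] for preds in members)
        correct += int(int(votes * 2 > len(members)) == label)
    return correct / len(labels)

def pick_ensemble(candidates, labels, max_size=3):
    """Try every odd-sized combination of base learners; keep the most accurate."""
    subsets = [
        subset
        for k in range(1, max_size + 1, 2)  # odd sizes avoid tied votes
        for subset in combinations(candidates, k)
    ]
    best = max(subsets, key=lambda s: vote_accuracy([candidates[n] for n in s], labels))
    return best, vote_accuracy([candidates[n] for n in best], labels)

# Validation predictions of three simple base learners on one diagnostic task.
labels = [1, 0, 1, 1, 0]
candidates = {
    "logreg": [1, 0, 1, 0, 0],  # each learner alone is 80% accurate,
    "svm":    [1, 0, 0, 1, 0],  # but their errors fall on different cases,
    "tree":   [0, 0, 1, 1, 0],  # so the majority vote corrects all of them
}
print(pick_ensemble(candidates, labels))  # (('logreg', 'svm', 'tree'), 1.0)
```

Exhaustive search is only feasible for a handful of candidates; TPE's contribution in the paper is making this selection efficient when the configuration space is large.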
Affiliation(s)
- Kaixiang Su
- School of Management, Hefei University of Technology, Hefei 230009, China
- Jiao Wu
- School of Business, Northern Illinois University, DeKalb, IL 60115, USA
- Dongxiao Gu
- School of Management, Hefei University of Technology, Hefei 230009, China
- Key Laboratory of Process Optimization and Intelligent Decision-Making of Ministry of Education, Hefei 230009, China
- Shanlin Yang
- School of Management, Hefei University of Technology, Hefei 230009, China
- Key Laboratory of Process Optimization and Intelligent Decision-Making of Ministry of Education, Hefei 230009, China
- Aida K. Khakimova
- Scientific-Research Center for Physical-Technical Informatics, Russian New University, Radio St., 22, 105005 Moscow, Russia