1
Principi N, Esposito S. Smartphone-Based Artificial Intelligence for the Detection and Diagnosis of Pediatric Diseases: A Comprehensive Review. Bioengineering (Basel) 2024; 11:628. PMID: 38927864; PMCID: PMC11200698; DOI: 10.3390/bioengineering11060628.
Abstract
In recent years, the use of smartphones and other wireless technology in medical care has developed rapidly. However, in some cases, especially for pediatric medical problems, the reliability of information accessed by mobile health technology remains debatable. The main aim of this paper is to evaluate the relevance of smartphone applications in the detection and diagnosis of the pediatric medical conditions for which the greatest number of applications have been developed. These include applications developed for the diagnosis of acute otitis media, otitis media with effusion, hearing impairment, obesity, amblyopia, and vision screening. In some cases, the information given by these applications has significantly improved the diagnostic ability of physicians. However, distinguishing between applications that can be effective and those that may lead to mistakes can be very difficult. This highlights the importance of careful application selection before including smartphone-based artificial intelligence in everyday clinical practice.
Affiliation(s)
- Susanna Esposito
- Pediatric Clinic, Department of Medicine and Surgery, University of Parma, 43126 Parma, Italy
2
Fang TY, Lin TY, Shen CM, Hsu SY, Lin SH, Kuo YJ, Chen MH, Yin TK, Liu CH, Lo MT, Wang PC. Algorithm-Driven Tele-otoscope for Remote Care for Patients With Otitis Media. Otolaryngol Head Neck Surg 2024; 170:1590-1597. PMID: 38545686; DOI: 10.1002/ohn.738.
Abstract
OBJECTIVE The COVID-19 pandemic has spurred a growing demand for telemedicine. Artificial intelligence and image processing systems with wireless transmission functionalities can facilitate remote care for otitis media (OM). Accordingly, this study developed and validated an algorithm-driven tele-otoscope system equipped with Wi-Fi transmission and a cloud-based automatic OM diagnostic algorithm. STUDY DESIGN Prospective, cross-sectional, diagnostic study. SETTING Tertiary Academic Medical Center. METHODS We designed a tele-otoscope (Otiscan, SyncVision Technology Corp) equipped with digital imaging and processing modules, Wi-Fi transmission capabilities, and an automatic OM diagnostic algorithm. A total of 1137 otoscopic images, comprising 987 images of normal cases and 150 images of cases of acute OM and OM with effusion, were used as the dataset for image classification. Two convolutional neural network models, trained using our dataset, were used for raw image segmentation and OM classification. RESULTS The tele-otoscope delivered images with a resolution of 1280 × 720 pixels. Our tele-otoscope effectively differentiated OM from normal images, achieving a classification accuracy rate of up to 94% (sensitivity, 80%; specificity, 96%). CONCLUSION Our study demonstrated that the developed tele-otoscope has acceptable accuracy in diagnosing OM. This system can assist health care professionals in early detection and continuous remote monitoring, thus mitigating the consequences of OM.
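The reported figures (accuracy up to 94%; sensitivity 80%; specificity 96%) follow directly from a binary confusion matrix over OM versus normal images. As an illustrative sketch only (not the authors' code, and with made-up counts chosen to roughly match the dataset sizes in the abstract):

```python
# Hypothetical sketch: how accuracy, sensitivity, and specificity relate to
# binary confusion-matrix counts. The counts below are illustrative, not the
# study's actual results.
def binary_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Compute standard diagnostic metrics from confusion-matrix counts."""
    total = tp + fn + fp + tn
    return {
        "accuracy": (tp + tn) / total,   # correct predictions / all cases
        "sensitivity": tp / (tp + fn),   # true-positive rate (OM detected)
        "specificity": tn / (tn + fp),   # true-negative rate (normal kept)
    }

# Illustrative counts: 150 OM images (120 detected), 987 normal (948 correct)
m = binary_metrics(tp=120, fn=30, fp=39, tn=948)
print({k: round(v, 3) for k, v in m.items()})
```

With these toy counts the three metrics come out close to the abstract's reported values, which shows how a high accuracy can coexist with a noticeably lower sensitivity when the classes are imbalanced.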
Affiliation(s)
- Te-Yung Fang
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Otolaryngology, Sijhih Cathay General Hospital, New Taipei City, Taiwan
- Tse-Yu Lin
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Chung-Min Shen
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Pediatrics, Cathay General Hospital, Taipei, Taiwan
- Su-Yi Hsu
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Shing-Huey Lin
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Family and Community Medicine, Cathay General Hospital, Taipei, Taiwan
- Yu-Jung Kuo
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Ming-Hsu Chen
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- Tan-Kuei Yin
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- Chih-Hsien Liu
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- Men-Tzung Lo
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Pa-Chun Wang
- Department of Otolaryngology, Cathay General Hospital, Taipei, Taiwan
- School of Medicine, Fu-Jen Catholic University, New Taipei City, Taiwan
- Department of Biomedical Sciences and Engineering, National Central University, Taoyuan, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
3
Dou Z, Li Y, Deng D, Zhang Y, Pang A, Fang C, Bai X, Bing D. Pure tone audiogram classification using deep learning techniques. Clin Otolaryngol 2024. PMID: 38745553; DOI: 10.1111/coa.14170.
Abstract
OBJECTIVE Pure tone audiometry has played a critical role in audiology as the initial diagnostic tool, offering vital insights for subsequent analyses. This study aims to develop a robust deep learning framework capable of accurately classifying audiograms across various commonly encountered tasks. DESIGN, SETTING, AND PARTICIPANTS This single-centre retrospective study was conducted in accordance with the STROBE guidelines. A total of 12 518 audiograms were collected from 6259 patients aged between 4 and 96 years, who underwent pure tone audiometry testing between February 2018 and April 2022 at Tongji Hospital, Tongji Medical College, Wuhan, China. Three experienced audiologists independently annotated the audiograms, labelling the hearing loss in degrees, types and configurations of each audiogram. MAIN OUTCOME MEASURES A deep learning framework was developed and utilised to classify audiograms across three tasks: determining the degrees of hearing loss, identifying the types of hearing loss, and categorising the configurations of audiograms. The classification performance was evaluated using four commonly used metrics: accuracy, precision, recall and F1-score. RESULTS The deep learning method consistently outperformed alternative methods, including K-Nearest Neighbors, ExtraTrees, Random Forest, XGBoost, LightGBM, CatBoost and FastAI Net, across all three tasks. It achieved the highest accuracy rates, ranging from 96.75% to 99.85%. Precision values fell within the range of 88.93% to 98.41%, while recall values spanned from 89.25% to 98.38%. The F1-score also exhibited strong performance, ranging from 88.99% to 98.39%. 
CONCLUSIONS This study demonstrated that a deep learning approach could accurately classify audiograms into their respective categories and could contribute to assisting doctors, particularly those lacking audiology expertise or experience, in better interpreting pure tone audiograms, enhancing diagnostic accuracy in primary care settings, and reducing the misdiagnosis rate of hearing conditions. In scenarios involving large-scale audiological data, the automated classification system could be used as a research tool to efficiently provide a comprehensive overview and statistical analysis. In the era of mobile audiometry, our deep learning framework can also help patients quickly and reliably understand their self-tested audiograms, potentially encouraging timely consultations with audiologists for further evaluation and intervention.
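The four evaluation metrics named in the abstract (accuracy, precision, recall, F1-score) are typically macro-averaged over classes in multi-class tasks like these. A minimal, self-contained sketch with illustrative labels (not the study's data or code):

```python
# Hedged sketch of macro-averaged classification metrics for a multi-class
# task such as grading degrees of hearing loss. Labels are illustrative.
def macro_scores(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    prec, rec = [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec.append(tp / (tp + fp) if tp + fp else 0.0)
        rec.append(tp / (tp + fn) if tp + fn else 0.0)
    p = sum(prec) / len(classes)          # macro precision
    r = sum(rec) / len(classes)           # macro recall
    f1 = 2 * p * r / (p + r) if p + r else 0.0  # F1 of macro p and r
    acc = sum(t == q for t, q in zip(y_true, y_pred)) / len(y_true)
    return {"accuracy": acc, "precision": p, "recall": r, "f1": f1}

# e.g. degrees of hearing loss: mild / moderate / severe
truth = ["mild", "mild", "moderate", "severe", "severe", "moderate"]
pred = ["mild", "moderate", "moderate", "severe", "severe", "moderate"]
print(macro_scores(truth, pred))
```

Note that several macro-F1 conventions exist (averaging per-class F1 versus, as here, taking the harmonic mean of macro precision and recall); the abstract does not say which the authors used.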
Affiliation(s)
- Zhiyong Dou
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Yingqiang Li
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Dongzhou Deng
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yunxue Zhang
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Anran Pang
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cong Fang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Xiang Bai
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Dan Bing
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
4
Wojtera B, Szewczyk M, Pieńkowski P, Golusiński W. Artificial intelligence in head and neck surgery: Potential applications and future perspectives. J Surg Oncol 2024; 129:1051-1055. PMID: 38419212; DOI: 10.1002/jso.27616.
Abstract
Artificial intelligence (AI) has the potential to improve the surgical treatment of patients with head and neck cancer. AI algorithms can analyse a wide range of data, including images, voice, molecular expression and raw clinical data. In the field of oncology, there are numerous AI practical applications, including diagnostics and treatment. AI can also develop predictive models to assess prognosis, overall survival, the likelihood of occult metastases, risk of complications and hospital length of stay.
Affiliation(s)
- Bartosz Wojtera
- Department of Head and Neck Surgery, Greater Poland Cancer Centre, Poznan University of Medical Sciences, Poznań, Poland
- Mateusz Szewczyk
- Department of Head and Neck Surgery, Greater Poland Cancer Centre, Poznan University of Medical Sciences, Poznań, Poland
- Piotr Pieńkowski
- Department of Head and Neck Surgery, Greater Poland Cancer Centre, Poznan University of Medical Sciences, Poznań, Poland
- Wojciech Golusiński
- Department of Head and Neck Surgery, Greater Poland Cancer Centre, Poznan University of Medical Sciences, Poznań, Poland
5
Sundgaard JV, Hannemose MR, Laugesen S, Bray P, Harte J, Kamide Y, Tanaka C, Paulsen RR, Christensen AN. Multi-modal deep learning for joint prediction of otitis media and diagnostic difficulty. Laryngoscope Investig Otolaryngol 2024; 9:e1199. PMID: 38362190; PMCID: PMC10866588; DOI: 10.1002/lio2.1199.
Abstract
Objectives In this study, we propose a diagnostic model for automatic detection of otitis media based on combined input of otoscopy images and wideband tympanometry measurements. Methods We present a neural network-based model for the joint prediction of otitis media and diagnostic difficulty. We use the subclassifications acute otitis media and otitis media with effusion. The proposed approach is based on deep metric learning, and we compare this with the performance of a standard multi-task network. Results The proposed deep metric approach shows good performance on both tasks, and we show that the multi-modal input increases the performance for both classification and difficulty estimation compared to the models trained on the modalities separately. An accuracy of 86.5% is achieved for the classification task, and a Kendall rank correlation coefficient of 0.45 is achieved for difficulty estimation, corresponding to a correct ranking of 72.6% of the cases. Conclusion This study demonstrates the strengths of a multi-modal diagnostic tool using both otoscopy images and wideband tympanometry measurements for the diagnosis of otitis media. Furthermore, we show that deep metric learning improves the performance of the models.
Affiliation(s)
- Morten Rieger Hannemose
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Søren Laugesen
- Interacoustics Research Unit, Technical University of Denmark, Lyngby, Denmark
- James Harte
- Interacoustics Research Unit, Technical University of Denmark, Lyngby, Denmark
- Rasmus R. Paulsen
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
6
Abd-Alrazaq A, Alajlani M, Ahmad R, AlSaad R, Aziz S, Ahmed A, Alsahli M, Damseh R, Sheikh J. The Performance of Wearable AI in Detecting Stress Among Students: Systematic Review and Meta-Analysis. J Med Internet Res 2024; 26:e52622. PMID: 38294846; PMCID: PMC10867751; DOI: 10.2196/52622.
Abstract
BACKGROUND Students usually encounter stress throughout their academic path. Ongoing stressors may lead to chronic stress, adversely affecting their physical and mental well-being. Thus, early detection and monitoring of stress among students are crucial. Wearable artificial intelligence (AI) has emerged as a valuable tool for this purpose. It offers an objective, noninvasive, nonobtrusive, automated approach to continuously monitor biomarkers in real time, thereby addressing the limitations of traditional approaches such as self-reported questionnaires. OBJECTIVE This systematic review and meta-analysis aim to assess the performance of wearable AI in detecting and predicting stress among students. METHODS Search sources in this review included 7 electronic databases (MEDLINE, Embase, PsycINFO, ACM Digital Library, Scopus, IEEE Xplore, and Google Scholar). We also checked the reference lists of the included studies and checked studies that cited the included studies. The search was conducted on June 12, 2023. This review included research articles centered on the creation or application of AI algorithms for the detection or prediction of stress among students using data from wearable devices. In total, 2 independent reviewers performed study selection, data extraction, and risk-of-bias assessment. The Quality Assessment of Diagnostic Accuracy Studies-Revised tool was adapted and used to examine the risk of bias in the included studies. Evidence synthesis was conducted using narrative and statistical techniques. RESULTS This review included 5.8% (19/327) of the studies retrieved from the search sources. A meta-analysis of 37 accuracy estimates derived from 32% (6/19) of the studies revealed a pooled mean accuracy of 0.856 (95% CI 0.70-0.93). 
Subgroup analyses demonstrated that the accuracy of wearable AI was moderated by the number of stress classes (P=.02), type of wearable device (P=.049), location of the wearable device (P=.02), data set size (P=.009), and ground truth (P=.001). The average estimates of sensitivity, specificity, and F1-score were 0.755 (SD 0.181), 0.744 (SD 0.147), and 0.759 (SD 0.139), respectively. CONCLUSIONS Wearable AI shows promise in detecting student stress but currently has suboptimal performance. The results of the subgroup analyses should be carefully interpreted given that many of these findings may be due to other confounding factors rather than the underlying grouping characteristics. Thus, wearable AI should be used alongside other assessments (eg, clinical questionnaires) until further evidence is available. Future research should explore the ability of wearable AI to differentiate types of stress, distinguish stress from other mental health issues, predict future occurrences of stress, consider factors such as the placement of the wearable device and the methods used to assess the ground truth, and report detailed results to facilitate the conduct of meta-analyses. TRIAL REGISTRATION PROSPERO CRD42023435051; http://tinyurl.com/3fzb5rnp.
Affiliation(s)
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Mohannad Alajlani
- Institute of Digital Healthcare, WMG, University of Warwick, Warwick, United Kingdom
- Reham Ahmad
- Institute of Digital Healthcare, WMG, University of Warwick, Warwick, United Kingdom
- Rawan AlSaad
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Sarah Aziz
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Arfan Ahmed
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Mohammed Alsahli
- Health Informatics Department, College of Health Science, Saudi Electronic University, Riyadh, Saudi Arabia
- Rafat Damseh
- Department of Computer Science and Software Engineering, United Arab Emirates University, Al Ain, Abu Dhabi, United Arab Emirates
- Javaid Sheikh
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
7
Kurabi A, Dewan K, Kerschner JE, Leichtle A, Li JD, Santa Maria PL, Preciado D. PANEL 3: Otitis media animal models, cell culture, tissue regeneration & pathophysiology. Int J Pediatr Otorhinolaryngol 2024; 176:111814. PMID: 38101097; DOI: 10.1016/j.ijporl.2023.111814.
Abstract
OBJECTIVE To review and summarize recently published key articles on the topics of animal models, cell culture studies, tissue biomedical engineering and regeneration, and new models in relation to otitis media (OM). DATA SOURCE Electronic databases: PubMed, National Library of Medicine, Ovid Medline. REVIEW METHODS Key topics were assigned to the panel participants for identification and detailed evaluation. The PubMed reviews were focused on the period from June 2019 to June 2023, in any of the objective subject(s) or keywords listed above, noting the relevant references relating to these advances with a global overview and noting areas of recommendation(s). The final manuscript was prepared with input from all panel members. CONCLUSIONS In conclusion, ex vivo and in vivo OM research models have seen great advancements in the past 4 years. From the usage of novel genetic and molecular tools to the refinement of in vivo inducible and spontaneous mouse models, to the introduction of a wide array of reliable middle ear epithelium (MEE) cell culture systems, the next five years are likely to experience exponential growth in OM pathophysiology discoveries. Moreover, advances in these systems will predictably facilitate rapid means for novel molecular therapeutic studies.
Affiliation(s)
- Arwa Kurabi
- Department of Otolaryngology, University of California San Diego, School of Medicine, La Jolla, CA, USA
- Kalyan Dewan
- Department of Infectious Diseases, College of Veterinary Medicine, University of Georgia, Athens, GA, USA
- Joseph E Kerschner
- Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Anke Leichtle
- Department of Otorhinolaryngology, University of Luebeck, Luebeck, Germany
- Jian-Dong Li
- Center for Inflammation, Immunity and Infection, Institute for Biomedical Sciences, Georgia State University, Atlanta, GA, USA
- Peter Luke Santa Maria
- Department of Otolaryngology - Head & Neck Surgery, Stanford University, Stanford, CA, USA
- Diego Preciado
- Children's National Hospital, Division of Pediatric Otolaryngology, Washington, DC, USA
8
Ding X, Huang Y, Zhao Y, Tian X, Feng G, Gao Z. Accurate Segmentation and Tracking of Chorda Tympani in Endoscopic Middle Ear Surgery with Artificial Intelligence. Ear Nose Throat J 2023:1455613231212051. PMID: 38083840; DOI: 10.1177/01455613231212051.
Abstract
Objective: We introduce a novel endoscopic middle ear surgery dataset specifically designed for evaluating deep learning (DL)-based semantic segmentation of the chorda tympani. Methods: We curated a dataset comprising 8240 images from 25 patients, divided into a training set (20%, 1648 images), validation set (5%, 412 images), and test set (75%, 6180 images). We applied data augmentation to expand the training and validation sets fivefold (training set: 8240 images, validation set: 2060 images). Subsequently, we employed a multistage transfer learning method to establish, train, and validate various convolutional neural networks. Results: On the validation set of 2060 labeled images, our proposed networks achieved good results, with the U-net exhibiting the highest effectiveness (mIOU = 0.8737, mPA = 0.9263). Furthermore, when applied to the test dataset of 6180 raw images and compared against otologists' assessments, the overall performance of the U-net was excellent (accuracy = 0.911, precision = 0.9823, sensitivity = 0.8777, specificity = 0.9714). Conclusions: Our findings demonstrate that DL can be successfully employed for automatic segmentation of the chorda tympani in endoscopic middle ear surgery, yielding high-performance results. This study validates the potential feasibility of future intelligent navigation technologies to assist in endoscopic middle ear surgery.
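The segmentation metrics reported above (mIOU, mPA) are per-class averages over the label maps. An illustrative, stdlib-only sketch of mean intersection-over-union and overall pixel accuracy on flat label arrays (toy data, not the study's code; the study additionally reports mean pixel accuracy per class):

```python
# Hedged sketch: mean IoU and pixel accuracy for a 2-class segmentation
# task (0 = background, 1 = chorda tympani). Masks below are toy 1-D data.
def iou_and_pixel_acc(y_true, y_pred, num_classes=2):
    ious = []
    for c in range(num_classes):
        inter = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        union = sum(t == c or p == c for t, p in zip(y_true, y_pred))
        ious.append(inter / union if union else 0.0)
    miou = sum(ious) / num_classes               # mean IoU over classes
    pixel_acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return miou, pixel_acc

# Toy "masks": ground truth vs. prediction
gt = [0, 0, 1, 1, 1, 0, 0, 1]
pred = [0, 0, 1, 1, 0, 0, 1, 1]
print(iou_and_pixel_acc(gt, pred))
```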
Affiliation(s)
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
- Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
- Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
- Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
- Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
- Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, Dongcheng District, Beijing, China
9
Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023; 12:6973. PMID: 38002588; PMCID: PMC10672270; DOI: 10.3390/jcm12226973.
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging in its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative in augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, utilizing the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', and at the same time conducted manual searching in the references section of the articles included in our manuscript. Our search culminated in the retrieval of 121 related articles, which were subsequently subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored for both experienced specialists and aspiring residents in the domain of deep learning algorithms in imaging techniques in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
10
Song D, Kim T, Lee Y, Kim J. Image-Based Artificial Intelligence Technology for Diagnosing Middle Ear Diseases: A Systematic Review. J Clin Med 2023; 12:5831. PMID: 37762772; PMCID: PMC10531728; DOI: 10.3390/jcm12185831.
Abstract
Otolaryngological diagnoses, such as otitis media, are traditionally performed using endoscopy, wherein diagnostic accuracy can be subjective and vary among clinicians. The integration of objective tools, like artificial intelligence (AI), could potentially improve the diagnostic process by minimizing the influence of subjective biases and variability. We systematically reviewed the AI techniques using medical imaging in otolaryngology. Relevant studies related to AI-assisted otitis media diagnosis were extracted from five databases: Google Scholar, PubMed, Medline, Embase, and IEEE Xplore, without date restrictions. Publications that did not relate to AI and otitis media diagnosis or did not utilize medical imaging were excluded. Of the 32 identified studies, 26 used tympanic membrane images for classification, achieving an average diagnosis accuracy of 86% (range: 48.7-99.16%). Another three studies employed both segmentation and classification techniques, reporting an average diagnosis accuracy of 90.8% (range: 88.06-93.9%). These findings suggest that AI technologies hold promise for improving otitis media diagnosis, offering benefits for telemedicine and primary care settings due to their high diagnostic accuracy. However, to ensure patient safety and optimal outcomes, further improvements in diagnostic performance are necessary.
Affiliation(s)
- Dahye Song
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Taewan Kim
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Yeonjoon Lee
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Jaeyoung Kim
- Department of Dermatology and Skin Sciences, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
- Core Research & Development Center, Korea University Ansan Hospital, Ansan 15355, Republic of Korea
11
Petsiou DP, Martinos A, Spinos D. Applications of Artificial Intelligence in Temporal Bone Imaging: Advances and Future Challenges. Cureus 2023; 15:e44591. PMID: 37795060; PMCID: PMC10545916; DOI: 10.7759/cureus.44591.
Abstract
The applications of artificial intelligence (AI) in temporal bone (TB) imaging have gained significant attention in recent years, revolutionizing the field of otolaryngology and radiology. Accurate interpretation of imaging features of TB conditions plays a crucial role in diagnosing and treating a range of ear-related pathologies, including middle and inner ear diseases, otosclerosis, and vestibular schwannomas. According to multiple clinical studies published in the literature, AI-powered algorithms have demonstrated exceptional proficiency in interpreting imaging findings, not only saving time for physicians but also enhancing diagnostic accuracy by reducing human error. Although several challenges remain in routinely relying on AI applications, the collaboration between AI and healthcare professionals holds the key to better patient outcomes and significantly improved patient care. This overview delivers a comprehensive update on the advances of AI in the field of TB imaging, summarizes recent evidence provided by clinical studies, and discusses future insights and challenges in the widespread integration of AI in clinical practice.
Affiliation(s)
- Dioni-Pinelopi Petsiou
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Anastasios Martinos
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Dimitrios Spinos
- Otolaryngology-Head and Neck Surgery, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, GBR
| |
12
Chen SL, Chin SC, Chan KC, Ho CY. A Machine Learning Approach to Assess Patients with Deep Neck Infection Progression to Descending Mediastinitis: Preliminary Results. Diagnostics (Basel) 2023; 13:2736. PMID: 37685275; PMCID: PMC10486957; DOI: 10.3390/diagnostics13172736.
Abstract
BACKGROUND Deep neck infection (DNI) is a serious infectious disease, and descending mediastinitis is a fatal infection of the mediastinum. However, no study has applied artificial intelligence to assess progression to descending mediastinitis in DNI patients. Thus, we developed a model to assess the possible progression of DNI to descending mediastinitis. METHODS Between August 2017 and December 2022, 380 patients with DNI were enrolled; 75% (n = 285) were assigned to the training group, and the remaining 25% (n = 95) were assigned to the test group to determine accuracy. The patients' clinical and computed tomography (CT) parameters were analyzed via the k-nearest neighbor method, and predicted progression to descending mediastinitis was compared with actual progression. RESULTS Between the training and test groups, there were no statistically significant differences (all p > 0.05) in clinical variables (age, gender, chief complaint period, white blood cells, C-reactive protein, diabetes mellitus, and blood sugar), deep neck space involvement (parapharyngeal, submandibular, retropharyngeal, and multiple spaces, ≥3), tracheostomy, imaging parameters (maximum diameter of abscess and nearest distance from abscess to the level of the sternal notch), or progression to mediastinitis. The model had a predictive accuracy of 82.11% (78/95 patients), with sensitivity and specificity of 41.67% and 87.95%, respectively. CONCLUSIONS Our model can assess the progression of DNI to descending mediastinitis from clinical and imaging parameters, and it can be used to identify DNI patients who will benefit from prompt treatment.
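The abstract above describes a k-nearest-neighbor classifier evaluated with accuracy, sensitivity, and specificity. A minimal, self-contained sketch of that pipeline is shown below; the feature values are invented toy data, not the study's actual clinical or CT parameters:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training
    points (Euclidean distance), as in a basic kNN model."""
    dists = sorted((math.dist(p, x), label) for p, label in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def sensitivity_specificity(y_true, y_pred, positive=1):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: two hypothetical clinical/imaging features per patient;
# label 1 = progressed to descending mediastinitis, 0 = did not.
train_X = [(1.0, 0.5), (1.2, 0.7), (0.9, 0.6), (4.0, 3.5), (4.2, 3.8), (3.9, 3.6)]
train_y = [0, 0, 0, 1, 1, 1]
test_X = [(1.1, 0.6), (4.1, 3.7), (0.8, 0.4), (3.8, 3.9)]
test_y = [0, 1, 0, 1]

preds = [knn_predict(train_X, train_y, x) for x in test_X]
sens, spec = sensitivity_specificity(test_y, preds)
```

The study's 75%/25% train/test split corresponds to partitioning the enrolled cohort before calling `knn_predict` on the held-out quarter.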
Affiliations:
- Shih-Lung Chen: Department of Otorhinolaryngology & Head and Neck Surgery, Chang Gung Memorial Hospital, New Taipei City 333, Taiwan; School of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Shy-Chyi Chin: School of Medicine, Chang Gung University, Taoyuan 333, Taiwan; Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital, New Taipei City 333, Taiwan
- Kai-Chieh Chan: Department of Otorhinolaryngology & Head and Neck Surgery, Chang Gung Memorial Hospital, New Taipei City 333, Taiwan; School of Medicine, Chang Gung University, Taoyuan 333, Taiwan
- Chia-Ying Ho: School of Medicine, Chang Gung University, Taoyuan 333, Taiwan; Division of Chinese Internal Medicine, Center for Traditional Chinese Medicine, Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
13
Siggaard LD, Jacobsen H, Hougaard DD, Høgsbro M. Digital vs. physical ear-nose-and-throat specialist assessment screening for complicated hearing loss and serious ear disorders in hearing-impaired adults prior to hearing aid treatment: a randomized controlled trial. Front Digit Health 2023; 5:1182421. PMID: 37363275; PMCID: PMC10285396; DOI: 10.3389/fdgth.2023.1182421.
Abstract
Introduction This study introduces a digital assessment tool for asynchronous, remote ear-nose-and-throat (ENT) specialist assessment screening for complicated hearing loss and serious ear disorders in hearing-impaired adults prior to hearing aid (HA) treatment. The 60+ population will nearly double from 12% to 22% between 2015 and 2050, increasing the incidence of age-induced hearing impairment and the need for hearing rehabilitation. If undiagnosed, age-related hearing loss negatively affects quality of life by accelerating social distancing and early retirement and by increasing the risk of anxiety, depression, and dementia. Therefore, innovative measures are essential to provide timely diagnostics and treatment. Methods A total of 751 hearing-impaired adults without previous HA usage or experience were randomly assigned to digital or physical ENT specialist assessment screening prior to HA treatment initiation in 20 public and private hearing rehabilitation and ENT specialist clinics in the North Denmark Region: 501 test group participants received digital assessment screening, and 250 control group participants received physical assessment screening. Results In all, 658 (88%) participants completed the trial and were eligible for analysis. Digital screening sensitivity (0.85, 95% confidence interval (CI): 0.71-0.94) was significantly higher than physical screening sensitivity (0.2, 95% CI: 0.03-0.56). Screening specificity was high for both assessment methods. Discussion In a setting where hearing-impaired adults were assessed for HA treatment, digital ENT specialist assessment screening did not compromise patient safety or increase the risk of misdiagnosis in patients with complicated hearing loss and/or serious ear disorders compared to physical ENT specialist assessment screening. Clinical Trial registration https://clinicaltrials.gov/ct2/show/NCT05154539, identifier: NCT05154539.
Affiliations:
- Lene Dahl Siggaard: Department of Otorhinolaryngology, Head and Neck Surgery, and Audiology, Aalborg University Hospital, Aalborg, Denmark; Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Henrik Jacobsen: Department of Otorhinolaryngology, Head and Neck Surgery, and Audiology, Aalborg University Hospital, Aalborg, Denmark
- Dan Dupont Hougaard: Department of Otorhinolaryngology, Head and Neck Surgery, and Audiology, Aalborg University Hospital, Aalborg, Denmark; Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Morten Høgsbro: Department of Otorhinolaryngology, Head and Neck Surgery, and Audiology, Aalborg University Hospital, Aalborg, Denmark; Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
14
El Feghaly RE, Nedved A, Katz SE, Frost HM. New insights into the treatment of acute otitis media. Expert Rev Anti Infect Ther 2023; 21:523-534. PMID: 37097281; PMCID: PMC10231305; DOI: 10.1080/14787210.2023.2206565.
Abstract
INTRODUCTION Acute otitis media (AOM) affects most (80%) children by 5 years of age and is the most common reason children are prescribed antibiotics. The epidemiology of AOM has changed considerably since the widespread use of pneumococcal conjugate vaccines, which has broad-reaching implications for management. AREAS COVERED In this narrative review, we cover the epidemiology of AOM, best practices for diagnosis and management, new diagnostic technology, effective stewardship interventions, and future directions of the field. Literature review was performed using PubMed and ClinicalTrials.gov. EXPERT OPINION Inaccurate diagnoses, unnecessary antibiotic use, and increasing antimicrobial resistance remain major challenges in AOM management. Fortunately, effective tools and interventions to improve diagnostic accuracy, de-implement unnecessary antibiotic use, and individualize care are on the horizon. Successful scaling of these tools and interventions will be critical to improving overall care for children.
Affiliations:
- Rana E. El Feghaly: Department of Pediatrics, Children’s Mercy Kansas City, Kansas City, MO, USA; Department of Pediatrics, University of Missouri-Kansas City, Kansas City, MO, USA
- Amanda Nedved: Department of Pediatrics, Children’s Mercy Kansas City, Kansas City, MO, USA; Department of Pediatrics, University of Missouri-Kansas City, Kansas City, MO, USA
- Sophie E. Katz: Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Holly M. Frost: Department of Pediatrics, Denver Health and Hospital Authority, Denver, CO, USA; Center for Health Systems Research, Denver Health and Hospital Authority, Denver, CO, USA; Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
15
Habib AR, Xu Y, Bock K, Mohanty S, Sederholm T, Weeks WB, Dodhia R, Ferres JL, Perry C, Sacks R, Singh N. Evaluating the generalizability of deep learning image classification algorithms to detect middle ear disease using otoscopy. Sci Rep 2023; 13:5368. PMID: 37005441; PMCID: PMC10067817; DOI: 10.1038/s41598-023-31921-0.
Abstract
This study evaluated the generalizability of artificial intelligence (AI) algorithms that use deep learning methods to identify middle ear disease from otoscopic images, comparing internal with external performance. 1842 otoscopic images were collected from three independent sources: (a) Van, Turkey; (b) Santiago, Chile; and (c) Ohio, USA. Diagnostic categories consisted of (i) normal or (ii) abnormal. Deep learning methods were used to develop models and to evaluate internal and external performance using area under the curve (AUC) estimates. A pooled assessment was performed by combining all cohorts with fivefold cross-validation. AI-otoscopy algorithms achieved high internal performance (mean AUC: 0.95, 95%CI: 0.80-1.00). However, performance was reduced when tested on external otoscopic images not used for training (mean AUC: 0.76, 95%CI: 0.61-0.91). Overall, external performance was significantly lower than internal performance (mean difference in AUC: -0.19, p ≤ 0.04). Combining cohorts achieved substantial pooled performance (AUC: 0.96, standard error: 0.01). Internally applied algorithms performed well in identifying middle ear disease from otoscopic images, but performance was reduced when applied to new test cohorts. Further efforts are required to explore data augmentation and pre-processing techniques that might improve external performance and yield a robust, generalizable algorithm for real-world clinical applications.
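The internal-versus-external comparison above rests on AUC estimates. As a minimal illustration (with invented toy scores, not the study's data), the AUC can be computed directly from the Mann-Whitney U interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. Distribution shift in an external cohort typically degrades this separation:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    random positive outranks a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Internal test set: scores separate the classes cleanly -> high AUC.
internal = auc([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], [1, 1, 1, 0, 0, 0])
# External cohort: overlapping score distributions -> lower AUC.
external = auc([0.6, 0.4, 0.7, 0.5, 0.3, 0.6], [1, 1, 1, 0, 0, 0])
```

The study's "mean difference in AUC" is then simply `external - internal` averaged over cohorts.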
Affiliations:
- Al-Rahim Habib: Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia; Department of Otolaryngology, Head and Neck Surgery, Westmead Hospital, Sydney, NSW, Australia
- Yixi Xu: AI for Good Lab, Microsoft, Redmond, WA, USA
- Kris Bock: Azure FastTrack Engineering, Brisbane, QLD, Australia
- Chris Perry: University of Queensland Medical School, Brisbane, QLD, Australia
- Raymond Sacks: Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Narinder Singh: Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia; Department of Otolaryngology, Head and Neck Surgery, Westmead Hospital, Sydney, NSW, Australia
16
Habib AR, Perry C, Crossland G, Patel H, Kong K, Whitfield B, North H, Walton J, Da Cruz M, Suruliraj A, Smith M, Harris R, Hasan Z, Gunaratne DA, Sacks R, Singh N. Inter-rater agreement between 13 otolaryngologists to diagnose otitis media in Aboriginal and Torres Strait Islander children using a telehealth approach. Int J Pediatr Otorhinolaryngol 2023; 168:111494. PMID: 37003013; DOI: 10.1016/j.ijporl.2023.111494.
Abstract
INTRODUCTION Telehealth programs are important for delivering otolaryngology services to Aboriginal and Torres Strait Islander children living in rural and remote areas, where distance and access to specialists are critical factors. OBJECTIVE To evaluate the inter-rater agreement and the value of increasing levels of clinical data (otoscopy with or without audiometry and in-field nurse impressions) for diagnosing otitis media using a telehealth approach. DESIGN Blinded, inter-rater reliability study. SETTING Ear health and hearing assessments collected from a statewide telehealth program for Indigenous children living in rural and remote areas of Queensland, Australia. PARTICIPANTS Thirteen board-certified otolaryngologists independently reviewed 80 telehealth assessments from 65 Indigenous children (mean age 5.7 ± 3.1 years, 33.8% female). INTERVENTIONS Raters were provided increasing tiers of clinical data to assess concordance with the reference standard diagnosis: Tier A) otoscopic images alone; Tier B) otoscopic images plus tympanometry and category of hearing loss; and Tier C) as B plus static compliance, canal volume, pure-tone audiometry, and nurse impressions (otoscopic findings and presumed diagnosis). For each tier, raters determined which of four diagnostic categories applied: normal aerated ear, acute otitis media (AOM), otitis media with effusion (OME), or chronic otitis media (COM). MAIN OUTCOME MEASURES Proportion of agreement with the reference standard, prevalence-and-bias adjusted κ coefficients, and mean difference in accuracy estimates between each tier of clinical data. RESULTS Accuracy between raters and the reference standard increased with increasing provision of clinical data (Tier A: 65% (95%CI: 63-68%), κ = 0.53 (95%CI: 0.48-0.57); Tier B: 77% (95%CI: 74-79%), κ = 0.68 (95%CI: 0.65-0.72); Tier C: 85% (95%CI: 82-87%), κ = 0.79 (95%CI: 0.76-0.82)). Classification accuracy significantly improved from Tier A to B (mean difference: 12%, p < 0.001) and from Tier B to C (mean difference: 8%, p < 0.001); the largest improvement was observed between Tier A and C (mean difference: 20%, p < 0.001). Inter-rater agreement similarly improved with increasing provision of clinical data. CONCLUSIONS There is substantial agreement between otolaryngologists diagnosing ear disease from electronically stored clinical data collected during telehealth assessments. The addition of audiometry, tympanometry, and nurse impressions significantly improved expert accuracy and inter-rater agreement compared to reviewing otoscopic images alone.
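The prevalence-and-bias adjusted kappa (PABAK) reported above reduces, for k diagnostic categories, to (k·Po − 1)/(k − 1), where Po is the observed proportion of agreement. A small sketch with invented toy ratings over the study's four categories:

```python
def percent_agreement(r1, r2):
    """Observed proportion of agreement Po between two raters."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def pabak(r1, r2, n_categories):
    """Prevalence-and-bias adjusted kappa: (k*Po - 1) / (k - 1)."""
    po = percent_agreement(r1, r2)
    return (n_categories * po - 1) / (n_categories - 1)

# Toy ratings over the four diagnostic categories used in the study:
# 0 = normal aerated ear, 1 = AOM, 2 = OME, 3 = COM.
rater = [0, 1, 2, 3, 0, 2, 1, 3, 0, 2]
reference = [0, 1, 2, 3, 0, 2, 2, 3, 0, 1]
po = percent_agreement(rater, reference)  # 8 of 10 assessments agree
k = pabak(rater, reference, 4)
```

Unlike Cohen's kappa, PABAK does not penalize raters for skewed category prevalence, which matters here because normal ears and OME dominate the telehealth sample.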
Affiliations:
- Al-Rahim Habib: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, 2006, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Chris Perry: University of Queensland Medical School, St Lucia, Queensland, 4072, Australia
- Graeme Crossland: Royal Darwin Hospital, Top End Health Service, Department of Health, Tiwi, Northern Territory, 0810, Australia
- Hemi Patel: Royal Darwin Hospital, Top End Health Service, Department of Health, Tiwi, Northern Territory, 0810, Australia
- Kelvin Kong: School of Medicine and Public Health, University of Newcastle, Callaghan, New South Wales, 2308, Australia
- Bernard Whitfield: Griffith Medical School, Griffith University, Southport, Queensland, 4215, Australia
- Hannah North: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Joanna Walton: Department of Otolaryngology - Head and Neck Surgery, The Children's Hospital at Westmead, Westmead, New South Wales, 2145, Australia
- Melville Da Cruz: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, 2006, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Anand Suruliraj: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Murray Smith: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Rhydian Harris: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Zubair Hasan: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Dakshika A Gunaratne: Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
- Raymond Sacks: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, 2006, Australia
- Narinder Singh: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, 2006, Australia; Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Westmead, New South Wales, 2145, Australia
17
Byun H, Lee SH, Kim TH, Oh J, Chung JH. Feasibility of the Machine Learning Network to Diagnose Tympanic Membrane Lesions without Coding Experience. J Pers Med 2022; 12:1855. PMID: 36579584; PMCID: PMC9697619; DOI: 10.3390/jpm12111855.
Abstract
A machine learning platform that can be operated without coding knowledge (Teachable Machine®) has been introduced. The aim of the present study was to assess the performance of Teachable Machine® for diagnosing tympanic membrane lesions. A total of 3024 tympanic membrane images were used to train and validate the diagnostic performance of the network. Tympanic membrane images were labeled as normal, otitis media with effusion (OME), chronic otitis media (COM), or cholesteatoma. According to the complexity of the categorization, Level I refers to normal versus abnormal tympanic membrane; Level II was defined as normal, OME, or COM + cholesteatoma; and Level III distinguishes between all four pathologies. In addition, eighty representative test images were used to assess performance. Teachable Machine® automatically creates a classification network and presents diagnostic performance when images are uploaded. The mean accuracy of Teachable Machine® for classifying tympanic membranes as normal or abnormal (Level I) was 90.1%. For Level II, the mean accuracy was 89.0%, and for Level III it was 86.2%. The overall accuracy of the classification of the 80 representative tympanic membrane images was 78.75%, and the hit rates for normal, OME, COM, and cholesteatoma were 95.0%, 70.0%, 90.0%, and 60.0%, respectively. Teachable Machine® successfully generated a diagnostic network for classifying tympanic membrane images.
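The evaluation above reports both overall accuracy and per-class "hit rates" (per-class recall). A minimal sketch of how those figures are derived from predicted and true labels, using invented toy labels rather than the study's 80 test images:

```python
from collections import defaultdict

def per_class_hit_rates(y_true, y_pred):
    """Overall accuracy plus per-class hit rate (recall): the fraction
    of images of each true class that the network labels correctly."""
    totals, hits = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    accuracy = sum(hits.values()) / len(y_true)
    return accuracy, {c: hits[c] / totals[c] for c in totals}

# Toy labels over the study's four categories.
truth = ["normal"] * 4 + ["OME"] * 4 + ["COM"] * 4 + ["cholesteatoma"] * 4
preds = (["normal"] * 4 + ["OME"] * 3 + ["COM"]
         + ["COM"] * 4 + ["cholesteatoma"] * 2 + ["OME"] * 2)

acc, rates = per_class_hit_rates(truth, preds)
```

Reporting hit rates alongside overall accuracy exposes weak classes (here cholesteatoma, as in the study) that a single aggregate figure would hide.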
Affiliations:
- Hayoung Byun: Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea; Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea
- Seung Hwan Lee: Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea
- Tae Hyun Kim: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; Department of Computer Science, Hanyang University, Seoul 04763, Korea
- Jaehoon Oh: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; Department of Emergency Medicine, College of Medicine, Hanyang University, Seoul 04763, Korea
- Jae Ho Chung (corresponding author): Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea; Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; Department of HY-KIST Bio-Convergence, College of Medicine, Hanyang University, Seoul 04763, Korea
18
A Machine Learning Approach to Screen for Otitis Media Using Digital Otoscope Images Labelled by an Expert Panel. Diagnostics (Basel) 2022; 12:1318. PMID: 35741128; PMCID: PMC9222011; DOI: 10.3390/diagnostics12061318.
Abstract
Background: Otitis media includes several common inflammatory conditions of the middle ear that can have severe complications if left untreated. Correctly identifying otitis media can be difficult, so a screening system supported by machine learning would be valuable for this prevalent disease. This study investigated the performance of a convolutional neural network in screening for otitis media using digital otoscopic images labelled by an expert panel. Methods: Five experienced otologists diagnosed 347 tympanic membrane images captured with a digital otoscope. Images with a majority expert diagnosis (n = 273) were categorized into three screening groups: Normal, Pathological, and Wax; the same images were used for training and testing of the convolutional neural network. Expert panel diagnoses were compared to the convolutional neural network classification, and different approaches to the network were tested to identify the best-performing model. Results: Overall accuracy of the convolutional neural network was above 0.9 in all but one approach. Sensitivity for identifying ears with wax or pathology was above 93% in all cases, and specificity was 100%. Adding more images to train the network had no positive impact on the results; modifications such as normalization of datasets and image augmentation enhanced performance in some instances. Conclusions: A machine learning approach could be used on digital otoscopic images to accurately screen for otitis media.