1. Dubois C, Eigen D, Simon F, Couloigner V, Gormish M, Chalumeau M, Schmoll L, Cohen JF. Development and validation of a smartphone-based deep-learning-enabled system to detect middle-ear conditions in otoscopic images. NPJ Digit Med 2024; 7:162. [PMID: 38902477] [PMCID: PMC11189910] [DOI: 10.1038/s41746-024-01159-9]
Abstract
Middle-ear conditions are common causes of primary care visits, hearing impairment, and inappropriate antibiotic use. Deep learning (DL) may assist clinicians in interpreting otoscopic images. This study included patients over 5 years old from an ambulatory ENT practice in Strasbourg, France, between 2013 and 2020. Digital otoscopic images were obtained using a smartphone-attached otoscope (Smart Scope, Karl Storz, Germany) and labeled by a senior ENT specialist across 11 diagnostic classes (reference standard). An Inception-v2 DL model was trained using 41,664 otoscopic images, and its diagnostic accuracy was evaluated by calculating class-specific estimates of sensitivity and specificity. The model was then incorporated into a smartphone app called i-Nside. The DL model was evaluated on a validation set of 3,962 images and a held-out test set comprising 326 images. On the validation set, all class-specific estimates of sensitivity and specificity exceeded 98%. On the test set, the DL model achieved a sensitivity of 99.0% (95% confidence interval: 94.5-100) and a specificity of 95.2% (91.5-97.6) for the binary classification of normal vs. abnormal images; wax plugs were detected with a sensitivity of 100% (94.6-100) and specificity of 97.7% (95.0-99.1); other class-specific estimates of sensitivity and specificity ranged from 33.3% to 92.3% and 96.0% to 100%, respectively. We present an end-to-end DL-enabled system able to achieve expert-level diagnostic accuracy for identifying normal tympanic aspects and wax plugs within digital otoscopic images. However, the system's performance varied for other middle-ear conditions. Further prospective validation is necessary before wider clinical deployment.
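Class-specific interval estimates like those above can be sanity-checked with a standard proportion interval. A minimal sketch using the Wilson score interval and hypothetical counts (the counts and the interval method are illustrative assumptions, not the study's actual data):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - spread) / denom, (center + spread) / denom

# Hypothetical: 99 of 100 abnormal ears flagged correctly (sensitivity 99%)
lo, hi = wilson_interval(99, 100)
print(f"sensitivity 0.99, 95% CI ({lo:.3f}, {hi:.3f})")
```

Note how the interval is asymmetric near 100%, which is why reported bounds such as 94.5-100 are not centered on the point estimate.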
Affiliation(s)
- François Simon
- Department of Pediatric Otolaryngology, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Vincent Couloigner
- Department of Pediatric Otolaryngology, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Martin Chalumeau
- Inserm UMR1153 (CRESS), Université Paris Cité, Paris, France
- Department of General Pediatrics and Pediatric Infectious Diseases, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Jérémie F Cohen
- Inserm UMR1153 (CRESS), Université Paris Cité, Paris, France.
- Department of General Pediatrics and Pediatric Infectious Diseases, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France.
2. Lee JC, Hamill CS, Shnayder Y, Buczek E, Kakarala K, Bur AM. Exploring the Role of Artificial Intelligence Chatbots in Preoperative Counseling for Head and Neck Cancer Surgery. Laryngoscope 2024; 134:2757-2761. [PMID: 38126511] [DOI: 10.1002/lary.31243]
Abstract
OBJECTIVE To evaluate the potential use of artificial intelligence (AI) chatbots, such as ChatGPT, in preoperative counseling for patients undergoing head and neck cancer surgery. STUDY DESIGN Cross-sectional survey study. SETTING Single-institution tertiary care center. METHODS ChatGPT was used to generate presurgical educational information, including indications, risks, and recovery time, for five common head and neck surgeries. Chatbot-generated information was compared with information gathered from a simple browser search (first publicly available website, excluding scholarly articles). The accuracy of the information, readability, thoroughness, and number of errors were compared by five experienced head and neck surgeons in a blinded fashion. Each surgeon then chose a preference between the two information sources for each surgery. RESULTS With the exception of total word count, ChatGPT-generated presurgical information had similar readability, knowledge content, accuracy, thoroughness, and number of medical errors compared with publicly available websites. Additionally, ChatGPT was preferred 48% of the time by the experienced head and neck surgeons. CONCLUSION Head and neck surgeons rated ChatGPT-generated and readily available online educational materials similarly. Further refinement in AI technology may soon open more avenues for patient counseling. Future investigations into the medical safety of AI counseling and patients' perspectives would be of strong interest. LEVEL OF EVIDENCE N/A. Laryngoscope, 134:2757-2761, 2024.
Affiliation(s)
- Jason C Lee
- Department of Otolaryngology, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Chelsea S Hamill
- Department of Otolaryngology, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Yelizaveta Shnayder
- Department of Otolaryngology, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Erin Buczek
- Department of Otolaryngology, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Kiran Kakarala
- Department of Otolaryngology, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
- Andrés M Bur
- Department of Otolaryngology, University of Kansas Medical Center, Kansas City, Kansas, U.S.A
3. Nwosu O, Suresh K, Lee DJ, Crowson MG. Proof-of-Concept Computer Vision Model for Instrument and Anatomy Detection During Transcanal Endoscopic Ear Surgery. Otolaryngol Head Neck Surg 2024; 170:1602-1604. [PMID: 38104321] [DOI: 10.1002/ohn.613]
Abstract
High-definition video captured during transcanal endoscopic ear surgery (TEES) can serve as imaging data for computer vision algorithms. This report describes a proof-of-concept model for automated anatomy and instrument detection during TEES.
Affiliation(s)
- Obinna Nwosu
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Krish Suresh
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Daniel J Lee
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Matthew G Crowson
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
4. Fung E, Patel D, Tatum S. Artificial intelligence in maxillofacial and facial plastic and reconstructive surgery. Curr Opin Otolaryngol Head Neck Surg 2024:00020840-990000000-00130. [PMID: 38837245] [DOI: 10.1097/moo.0000000000000983]
Abstract
PURPOSE OF REVIEW To provide a current review of artificial intelligence and its subtypes in maxillofacial and facial plastic surgery, including a discussion of implications and ethical concerns. RECENT FINDINGS Artificial intelligence has gained popularity in recent years due to technological advancements. The current literature has begun to explore the use of artificial intelligence in various medical fields, but contributions in maxillofacial and facial plastic surgery remain limited due to the wide variance in anatomical facial features as well as subjective influences. In this review article, we found that the roles of artificial intelligence so far include automatically updating patient records, producing 3D models for preoperative planning, performing cephalometric analyses, and providing diagnostic evaluation of oropharyngeal malignancies. SUMMARY Artificial intelligence has solidified a role in maxillofacial and facial plastic surgery within the past few years. As high-quality databases expand with more patients, the role for artificial intelligence to assist in more complicated and unique cases becomes apparent. Despite its potential, ethical questions have been raised that should be noted as artificial intelligence continues to thrive. These questions include concerns such as compromise of the physician-patient relationship and healthcare justice.
Affiliation(s)
- Sherard Tatum
- Department of Otolaryngology
- Department of Pediatrics, SUNY Upstate Medical University, Syracuse, New York, USA
5. Crowson MG, Nwosu OI. The Integration and Impact of Artificial Intelligence in Otolaryngology-Head and Neck Surgery: Navigating the Last Mile. Otolaryngol Clin North Am 2024:S0030-6665(24)00058-6. [PMID: 38705741] [DOI: 10.1016/j.otc.2024.04.001]
Abstract
Incorporating artificial Intelligence and machine learning into otolaryngology requires careful data handling, security, and ethical considerations. Success depends on interdisciplinary cooperation, consistent innovation, and regulatory compliance to improve clinical outcomes, provider experience, and operational effectiveness.
Affiliation(s)
- Matthew G Crowson
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear Hospital, Boston, MA, USA; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, MA, USA.
- Obinna I Nwosu
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear Hospital, Boston, MA, USA; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, MA, USA
6. Alter IL, Chan K, Lechien J, Rameau A. An introduction to machine learning and generative artificial intelligence for otolaryngologists-head and neck surgeons: a narrative review. Eur Arch Otorhinolaryngol 2024; 281:2723-2731. [PMID: 38393353] [DOI: 10.1007/s00405-024-08512-4]
Abstract
PURPOSE Despite the robust expansion of research surrounding artificial intelligence (AI) and machine learning (ML) and their applications to medicine, these methodologies often remain opaque and inaccessible to many otolaryngologists. In particular, with the increasing ubiquity of large language models (LLMs) such as ChatGPT and their potential implementation in clinical practice, clinicians may benefit from a baseline understanding of some aspects of AI. In this narrative review, we seek to clarify underlying concepts, illustrate applications to otolaryngology, and highlight future directions and limitations of these tools. METHODS Recent literature regarding AI principles and otolaryngologic applications of ML and LLMs was reviewed via search in PubMed and Google Scholar. RESULTS Significant recent strides have been made in otolaryngology research utilizing AI and ML across all subspecialties, including neurotology, head and neck oncology, laryngology, rhinology, and sleep surgery. Potential applications suggested by recent publications include screening and diagnosis, predictive tools, clinical decision support, and clinical workflow improvement via LLMs. Ongoing concerns regarding AI in medicine include ethical concerns around bias and data sharing, as well as the "black box" problem and limitations in explainability. CONCLUSIONS Potential implementations of AI in otolaryngology are rapidly expanding. While implementation in clinical practice remains theoretical for most of these tools, their potential power to influence the practice of otolaryngology is substantial. LEVEL OF EVIDENCE: 4
Affiliation(s)
- Isaac L Alter
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medical College, 240 E 59 St, New York, NY, 10022, USA
- Karly Chan
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medical College, 240 E 59 St, New York, NY, 10022, USA
- Jérôme Lechien
- Department of Otorhinolaryngology, Head and Neck Surgery, Hôpital Foch, School of Medicine, UFR Simone Veil, Université Versailles Saint-Quentin-en-Yvelines (Paris Saclay University), Paris, France
- Department of Human Anatomy and Experimental Oncology, Faculty of Medicine, UMONS Research Institute for Health and Sciences Technology, University of Mons (UMons), Mons, Belgium
- Anaïs Rameau
- Department of Otolaryngology-Head and Neck Surgery, Sean Parker Institute for the Voice, Weill Cornell Medical College, 240 E 59 St, New York, NY, 10022, USA.
7. Irfan B. Beyond the Scope: Advancing Otolaryngology With Artificial Intelligence Integration. Cureus 2024; 16:e54248. [PMID: 38496161] [PMCID: PMC10944311] [DOI: 10.7759/cureus.54248]
Abstract
The integration of artificial intelligence (AI) into otolaryngology heralds a new era of enhanced diagnostic precision, improved treatment strategies, and better patient outcomes. This advancement, however, brings to the fore the essential role of education and training in maximizing AI's potential within the field. The diverse spectrum of otolaryngology, encompassing audiology, rhinology, and sleep medicine, presents numerous opportunities for AI applications, from predicting hearing loss progression and optimizing cochlear implant settings to managing chronic sinusitis and predicting the success of treatments for obstructive sleep apnea. Such innovations necessitate a paradigm shift in educational frameworks, merging traditional clinical skills with AI literacy. This involves introducing AI concepts, tools, and applications specific to otolaryngology in the curriculum, ensuring practitioners are equipped to leverage AI for diagnostics, patient monitoring, and surgical planning. Exploring the potential of large language models (LLMs) in medical education, for example by simulating clinical scenarios for risk-free diagnostic practice and decision-making, is imperative. Continuous education for established otolaryngologists, through workshops and seminars on the latest AI tools, is another essential goal. Moreover, a collaborative, multidisciplinary educational strategy is needed to address ethical considerations and ensure the responsible integration of AI. As we navigate this transition, the commitment to training and education becomes paramount, preparing the otolaryngology community to embrace AI-driven healthcare innovations.
Affiliation(s)
- Bilal Irfan
- Microbiology and Immunology, University of Michigan, Ann Arbor, USA
8. Anam K, Swasono DI, Triono A, Muttaqin AZ, Hanggara FS. Random forest-based simultaneous and proportional myoelectric control system for finger movements. Comput Methods Biomech Biomed Engin 2023; 26:2057-2069. [PMID: 36649195] [DOI: 10.1080/10255842.2023.2165068]
Abstract
A classification scheme for myoelectric control systems (MCS) cannot mimic complex hand movements. This paper presents a simultaneous and proportional MCS that estimates the angles of fourteen finger joints using time-domain feature extraction and random forest regression. The experimental results show that the best feature was the root mean square (RMS). Furthermore, the random forest attained an average coefficient of determination (R2) of 0.85, whereas the other regressors performed below 0.75. ANOVA tests indicated that the performance of the proposed system differed significantly from that of the other regressors. Therefore, the proposed system is a strong candidate for real-time MCS applications in the future.
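The coefficient of determination used to compare the regressors above is straightforward to compute from predicted and measured joint angles. A minimal sketch with invented angle values (not data from the paper):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)      # total variance around the mean
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residual error
    return 1 - ss_res / ss_tot

# Hypothetical finger-joint angles (degrees) and one regressor's estimates
actual = [10.0, 35.0, 60.0, 85.0]
predicted = [12.0, 33.0, 61.0, 83.0]
print(r_squared(actual, predicted))
```

An R2 of 0.85, as reported for the random forest, means the model's estimates account for 85% of the variance in the measured joint angles.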
Affiliation(s)
- Khairul Anam
- Department of Electrical Engineering, University of Jember, Jember, Indonesia
- Intelligent System and Robotics Laboratory, CDAST, University of Jember, Jember, Indonesia
- Artificial Intelligence for Industrial Agriculture Research Group, University of Jember, Jember, Indonesia
- Agus Triono
- Department of Mechanical Engineering, University of Jember, Jember, Indonesia
- Aris Z Muttaqin
- Department of Mechanical Engineering, University of Jember, Jember, Indonesia
- Faruq S Hanggara
- Intelligent System and Robotics Laboratory, CDAST, University of Jember, Jember, Indonesia
9. Petsiou DP, Martinos A, Spinos D. Applications of Artificial Intelligence in Temporal Bone Imaging: Advances and Future Challenges. Cureus 2023; 15:e44591. [PMID: 37795060] [PMCID: PMC10545916] [DOI: 10.7759/cureus.44591]
Abstract
The applications of artificial intelligence (AI) in temporal bone (TB) imaging have gained significant attention in recent years, revolutionizing the field of otolaryngology and radiology. Accurate interpretation of imaging features of TB conditions plays a crucial role in diagnosing and treating a range of ear-related pathologies, including middle and inner ear diseases, otosclerosis, and vestibular schwannomas. According to multiple clinical studies published in the literature, AI-powered algorithms have demonstrated exceptional proficiency in interpreting imaging findings, not only saving time for physicians but also enhancing diagnostic accuracy by reducing human error. Although several challenges remain in routinely relying on AI applications, the collaboration between AI and healthcare professionals holds the key to better patient outcomes and significantly improved patient care. This overview delivers a comprehensive update on the advances of AI in the field of TB imaging, summarizes recent evidence provided by clinical studies, and discusses future insights and challenges in the widespread integration of AI in clinical practice.
Affiliation(s)
- Dioni-Pinelopi Petsiou
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Anastasios Martinos
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Dimitrios Spinos
- Otolaryngology-Head and Neck Surgery, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, GBR
10. Amanian A, Heffernan A, Ishii M, Creighton FX, Thamboo A. The Evolution and Application of Artificial Intelligence in Rhinology: A State of the Art Review. Otolaryngol Head Neck Surg 2023; 169:21-30. [PMID: 35787221] [PMCID: PMC11110957] [DOI: 10.1177/01945998221110076]
Abstract
OBJECTIVE To provide a comprehensive overview of the applications of artificial intelligence (AI) in rhinology, highlight its limitations, and propose strategies for its integration into surgical practice. DATA SOURCES Medline, Embase, CENTRAL, Ei Compendex, IEEE, and Web of Science. REVIEW METHODS English studies from inception until January 2022 and those focusing on any application of AI in rhinology were included. Study selection was independently performed by 2 authors; discrepancies were resolved by the senior author. Studies were categorized by rhinology theme, and data collection comprised type of AI utilized, sample size, and outcomes, including accuracy and precision among others. CONCLUSIONS Overall, 5435 articles were identified. Following abstract and title screening, 130 articles underwent full-text review, and 59 articles were selected for analysis. Eleven studies were from the gray literature. Articles were stratified into image processing, segmentation, and diagnostics (n = 27); rhinosinusitis classification (n = 14); treatment and disease outcome prediction (n = 8); optimizing surgical navigation and phase assessment (n = 3); robotic surgery (n = 2); olfactory dysfunction (n = 2); and diagnosis of allergic rhinitis (n = 3). Most AI studies were published from 2016 onward (n = 45). IMPLICATIONS FOR PRACTICE This state of the art review aimed to highlight the increasing applications of AI in rhinology. Next steps will entail multidisciplinary collaboration to ensure data integrity, ongoing validation of AI algorithms, and integration into clinical practice. Future research should be tailored to the interplay of AI with robotics and surgical education.
Affiliation(s)
- Ameen Amanian
- Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Austin Heffernan
- Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Masaru Ishii
- Department of Otolaryngology–Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Andrew Thamboo
- Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
11. Saeed HS, Rajai A, Dixon R, Kapadia T, Bruce IA, Stivaros S. Can MRI biomarkers for hearing loss in enlarged vestibular aqueduct be measured reproducibly? Br J Radiol 2023:20220274. [PMID: 37162001] [DOI: 10.1259/bjr.20220274]
Abstract
OBJECTIVE Morphological features of an enlarged endolymphatic duct (ED) and sac (ES) are imaging biomarkers for genotype and hearing loss phenotype. We determine which biomarkers can be measured in a reproducible manner, facilitating further clinical prediction studies in enlarged vestibular aqueduct hearing loss. METHODS A rater reproducibility study. Three consultant radiologists independently measured previously reported MRI ED & ES biomarkers (ED midpoint width, maximal ED diameter closest to the vestibule, ES length, ES width and presence of ES signal heterogeneity) and presence of incomplete partition Type 2 from 80 ears (T2 weighted axial MRI). Intraclass correlation coefficients (ICC) and Gwet's Agreement Coefficients (AC) were generated to give a measure of reproducibility for both continuous and categorical feature measures respectively. RESULTS ES length, width and sac signal heterogeneity showed adequate reproducibility (ICC 95% confidence intervals 0.77-0.95, Gwet's AC for sac heterogeneity 0.64). When determining ED midpoint width, measurements from multiple raters are required for "good" reliability (ICC 95% CI 0.75-0.89). Agreement on the presence of incomplete partition Type 2 ranged from "moderate" to "substantial". CONCLUSIONS Regarding MR imaging, the opinion of multiple expert raters should be sought when determining the presence of an enlarged ED defined by midpoint width. ED midpoint, ES length, width and signal heterogeneity have adequate reproducibility to be further explored as clinical predictors for audiological phenotype. ADVANCES IN KNOWLEDGE We report which ED & ES biomarkers are reproducibly measured. Researchers can confidently utilise these specific biomarkers when modelling progressive hearing loss associated with enlarged vestibular aqueduct.
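Intraclass correlation coefficients such as those above come from a rater-by-subject ANOVA decomposition. A minimal sketch of one common variant, ICC(2,1) (two-way random effects, absolute agreement, single rater), with invented ratings; the study's measurements and its specific ICC form are not reproduced here:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is a list of subjects, each a list of k rater scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical endolymphatic-sac lengths (mm): 3 raters measuring 4 ears
print(icc_2_1([[8, 8, 9], [4, 5, 4], [7, 7, 8], [2, 2, 2]]))
```

High agreement between raters drives the between-subject mean square far above the error term, pushing the coefficient toward 1.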
Affiliation(s)
- Haroon S Saeed
- Department of Paediatric Otolaryngology, Royal Manchester Children's Hospital, Manchester University Hospitals NHS Foundation Trust, Oxford Road, Manchester, UK
- Azita Rajai
- Research & Innovation, Manchester University NHS Foundation Trust, Oxford Road, Manchester, UK
- Centre of Biostatistics, Division of Population Health, University of Manchester, Oxford Road, Manchester, UK
- Rachel Dixon
- Academic Unit of Paediatric Radiology, Royal Manchester Children's Hospital, Manchester University Hospitals NHS Foundation Trust, Oxford Road, Manchester, UK
- Tejas Kapadia
- Academic Unit of Paediatric Radiology, Royal Manchester Children's Hospital, Manchester University Hospitals NHS Foundation Trust, Oxford Road, Manchester, UK
- Iain A Bruce
- Department of Paediatric Otolaryngology, Royal Manchester Children's Hospital, Manchester University Hospitals NHS Foundation Trust, Oxford Road, Manchester, UK
- Division of Infection, Immunity and Respiratory Medicine, Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester, UK
- Stavros Stivaros
- Academic Unit of Paediatric Radiology, Royal Manchester Children's Hospital, Manchester University Hospitals NHS Foundation Trust, Oxford Road, Manchester, UK
- Division of Informatics, Imaging and Data Sciences, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Oxford Road, Manchester, UK
12. Ngombu S, Binol H, Gurcan MN, Moberly AC. Advances in Artificial Intelligence to Diagnose Otitis Media: State of the Art Review. Otolaryngol Head Neck Surg 2023; 168:635-642. [PMID: 35290142] [DOI: 10.1177/01945998221083502]
Abstract
OBJECTIVE Otitis media (OM) is a model disease for developing, validating, and implementing artificial intelligence (AI) techniques. We aim to review the state of the art applications of AI used to diagnose OM in pediatric and adult populations. DATA SOURCES Several comprehensive databases were searched to identify all articles that applied AI technologies to diagnose OM. REVIEW METHODS Relevant articles from January 2010 through May 2021 were identified by title and abstract. Articles were excluded if they did not discuss AI in conjunction with diagnosing OM. References of included studies and relevant review articles were cross-referenced to identify any additional studies. CONCLUSION Title and abstract screening resulted in full-text retrieval of 40 articles that met initial screening parameters. Of this total, secondary review articles (n = 7) and commentary-based articles (n = 2) were removed, as were articles that did not specifically discuss AI and OM diagnosis (n = 5), leaving 25 articles for review. Applications of AI technologies specific to diagnosing OM included machine learning and natural language processing (n = 23) and prototype approaches (n = 2). IMPLICATIONS FOR PRACTICE This review emphasizes the utility of AI techniques to automate and aid in diagnosing OM. Although these techniques are still in the development and testing stages, AI has the potential to improve the practice of otolaryngologists and primary care clinicians by increasing the efficiency and accuracy of diagnoses.
Affiliation(s)
- Stephany Ngombu
- Department of Otolaryngology-Head and Neck Surgery, Wexner Medical Center at The Ohio State University, Columbus, Ohio, USA
- Hamidullah Binol
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Metin N Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, North Carolina, USA
- Aaron C Moberly
- Department of Otolaryngology-Head and Neck Surgery, Wexner Medical Center at The Ohio State University, Columbus, Ohio, USA
13. Suresh K, Cohen MS, Hartnick CJ, Bartholomew RA, Lee DJ, Crowson MG. Generation of synthetic tympanic membrane images: Development, human validation, and clinical implications of synthetic data. PLOS Digital Health 2023; 2:e0000202. [PMID: 36827244] [PMCID: PMC9956018] [DOI: 10.1371/journal.pdig.0000202]
Abstract
Synthetic clinical images could augment real medical image datasets, a novel approach in otolaryngology-head and neck surgery (OHNS). Our objective was to develop a generative adversarial network (GAN) for tympanic membrane images and to validate the quality of the synthetic images with human reviewers. Our model was built on a state-of-the-art GAN architecture, StyleGAN2-ADA. The network was trained on intraoperative high-definition (HD) endoscopic images of tympanic membranes collected from pediatric patients undergoing myringotomy with possible tympanostomy tube placement. A human validation survey was administered to a cohort of OHNS and pediatrics trainees at our institution. The primary measure of model quality was the Frechet Inception Distance (FID), a metric comparing the distribution of generated images with that of real images. The measures used for human reviewer validation were the sensitivity, specificity, and area under the curve (AUC) of humans' ability to discern synthetic from real images. Our dataset comprised 202 images. The best GAN was trained at 512x512 resolution and achieved an FID of 47.0. The progression of images through training showed stepwise "learning" of the anatomic features of a tympanic membrane. The validation survey was completed by 65 reviewers who assessed 925 images. Human reviewers demonstrated a sensitivity of 66%, a specificity of 73%, and an AUC of 0.69 for the detection of synthetic images. In summary, we successfully developed a GAN to produce synthetic tympanic membrane images and validated it with human reviewers. These images could be used to bolster real datasets with various pathologies and to develop more robust deep learning models, such as those used for diagnostic prediction from otoscopic images. However, caution should be exercised when using synthetic data, given issues regarding data diversity and performance validation. Any model trained on synthetic data will require robust external validation to ensure validity and generalizability.
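The study's primary quality metric, the Frechet Inception Distance, is the Frechet distance between two Gaussians fitted to features of real and generated images. As an illustrative sketch only (the metric as actually used fits full covariance matrices over Inception-v3 activations and needs a matrix square root; the toy statistics below are hypothetical), the formula reduces to a simple closed form when the covariances are assumed diagonal:

```python
import math

def fid_diagonal(mu_real, var_real, mu_fake, var_fake):
    """Frechet distance between two Gaussians with diagonal covariances.

    Full form: ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1*C2)^(1/2)); with
    diagonal covariances the trace term reduces to an elementwise sum.
    """
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_real, mu_fake))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var_real, var_fake))
    return mean_term + cov_term

# Hypothetical 2-D feature statistics: shifted means, differing spread.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 1.0], [4.0, 4.0]))  # → 4.0
```

Lower values mean the generated distribution is closer to the real one, which is why the trained GAN's FID of 47.0 is reported as a single scalar summary.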
Affiliation(s)
- Krish Suresh
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Michael S. Cohen
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Christopher J. Hartnick
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Ryan A. Bartholomew
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Daniel J. Lee
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Matthew G. Crowson
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America
- Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
14
Kwak C, Han W, Bahng J. Systematic Review and Meta-Analysis of the Application of Virtual Reality in Hearing Disorders. J Audiol Otol 2022; 26:169-181. [PMID: 36285466 PMCID: PMC9597270 DOI: 10.7874/jao.2022.00234]
Abstract
Background and Objectives: Emerging technologies such as artificial intelligence, virtual reality (VR), and augmented reality (AR) are increasingly used for hearing loss, tinnitus, and vestibular disease. We therefore conducted this systematic review and meta-analysis to identify the possible benefits of VR and AR technologies for patients with hearing loss, tinnitus, and/or vestibular dysfunction, with the aim of suggesting potential applications of these technologies for both researchers and clinicians. Materials and Methods: Published articles from 1968 to 2022 were gathered from six electronic journal databases. After applying our inclusion and exclusion criteria, 23 studies were analyzed. Because only one article on hearing loss and two articles on tinnitus were found, only the 20 studies on vestibular dysfunction were included in the meta-analysis. Standardized mean differences (SMDs) were chosen as the effect estimates for comparing studies. A funnel plot and Egger's regression analysis were used to assess the risk of bias. Results: High heterogeneity (I2: 83%, τ2: 0.5431, p<0.01) was identified across the studies on vestibular dysfunction. VR-based rehabilitation was significantly effective for individuals with vestibular disease (SMDs: 0.03, 95% confidence interval [CI]: -0.08 to 0.15, p<0.05). A subgroup analysis revealed that only the improvement in subjective questionnaire scores was meaningful and statistically significant (SMDs: -0.66, 95% CI: -1.10 to -0.22). Conclusions: VR-based vestibular rehabilitation showed potential for subjective rating measures such as the Dizziness Handicap Inventory. The negative effect of aging on vestibular disease was indirectly confirmed. More clinical trials and an evidence-based approach are needed before state-of-the-art technology can be implemented for hearing loss and tinnitus, the representative diseases of neurotology.
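The pooled effect estimates above are standardized mean differences. As a minimal illustrative sketch (not the authors' meta-analysis code; the group summary statistics below are hypothetical), an SMD with Hedges' small-sample correction can be computed from group means, standard deviations, and sample sizes:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Hedges' g) from group summaries."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    cohens_d = (mean1 - mean2) / pooled_sd
    correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # small-sample bias factor
    return correction * cohens_d

# Hypothetical trial arms: VR rehabilitation (n=20) vs. control (n=20).
print(round(hedges_g(10.0, 2.0, 20, 8.0, 2.0, 20), 3))  # → 0.98
```

Standardizing by the pooled SD is what lets the meta-analysis combine studies that used different outcome scales.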
Affiliation(s)
- Chanbeom Kwak
- Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Korea; Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Korea
- Woojae Han
- Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Korea; Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Korea
- Junghwa Bahng
- Department of Audiology and Speech Language Pathology, Hallym University of Graduate Studies, Seoul, Korea; Center for Hearing and Speech Research, Hallym University of Graduate Studies, Seoul, Korea
15
Ren G, Yu K, Xie Z, Wang P, Zhang W, Huang Y, Wang Y, Wu X. Current Applications of Machine Learning in Spine: From Clinical View. Global Spine J 2022; 12:1827-1840. [PMID: 34628966 PMCID: PMC9609532 DOI: 10.1177/21925682211035363]
Abstract
STUDY DESIGN Narrative review. OBJECTIVES This review aims to present current applications of machine learning (ML) in the spine domain to clinicians. METHODS We conducted a comprehensive PubMed search of peer-reviewed articles published between 2006 and 2020 using the terms (spine, spinal, lumbar, cervical, thoracic, machine learning) to examine ML in spine care. We then excluded research from other domains, case reports, reviews and meta-analyses, and articles without an available abstract or full text. RESULTS A total of 1738 articles were retrieved from the database, and 292 studies were finally included. Key findings of current applications were compiled and summarized in this review. The main clinical applications of these techniques include image processing, diagnosis, decision support, operative assistance, rehabilitation, surgical outcomes, complications, hospitalization, and cost. CONCLUSIONS ML has achieved excellent performance and holds immense potential in spine care. ML could help clinical staff improve the standard of care, enhance work efficiency, and reduce adverse events. However, more randomized controlled trials and improvements in interpretability are essential before clinicians accept model assistance in routine work.
Affiliation(s)
- GuanRui Ren
- Southeast University Medical College, Nanjing, Jiangsu, China
- Kun Yu
- Nanjing Jiangbei Hospital, Nanjing, Jiangsu, China
- ZhiYang Xie
- Department of Spine Surgery, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, Jiangsu, China
- PeiYang Wang
- Southeast University Medical College, Nanjing, Jiangsu, China
- Wei Zhang
- Southeast University Medical College, Nanjing, Jiangsu, China
- Yong Huang
- Southeast University Medical College, Nanjing, Jiangsu, China
- YunTao Wang
- Department of Spine Surgery, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, Jiangsu, China
- XiaoTao Wu
- Department of Spine Surgery, Zhongda Hospital, School of Medicine, Southeast University, Nanjing, Jiangsu, China
16
Machine Learning in the Management of Lateral Skull Base Tumors: A Systematic Review. Journal of Otorhinolaryngology, Hearing and Balance Medicine 2022. [DOI: 10.3390/ohbm3040007]
Abstract
The application of machine learning (ML) techniques to otolaryngology remains a topic of interest and prevalence in the literature, though no previous article has summarized the current state of ML applications to the management and diagnosis of lateral skull base (LSB) tumors. Accordingly, we present a systematic overview of previous applications of ML techniques to the management of LSB tumors. Independent searches were conducted on PubMed and Web of Science between August 2020 and February 2021 to identify literature on the use of ML techniques in LSB tumor surgery written in the English language. All articles were assessed with regard to their application task, ML methodology, and outcomes. A total of 32 articles were examined. The number of articles applying ML techniques to LSB tumor surgery has increased significantly since the first article relevant to this field was published in 1994. The most commonly employed ML category was tree-based algorithms. Most articles fell into the category of surgical management (13; 40.6%), followed by disease classification (8; 25%). Overall, the application of ML techniques to the management of LSB tumors has evolved rapidly over the past two decades, and the anticipated future growth could significantly augment the surgical outcomes and management of LSB tumors.
17
Prediction of hearing recovery in unilateral sudden sensorineural hearing loss using artificial intelligence. Sci Rep 2022; 12:3977. [PMID: 35273267 PMCID: PMC8913667 DOI: 10.1038/s41598-022-07881-2]
Abstract
Despite the significance of predicting the prognosis of idiopathic sudden sensorineural hearing loss (ISSNHL), no predictive models have been established. This study used artificial intelligence to develop prognostic models to predict recovery from ISSNHL. We retrospectively reviewed the medical data of 453 patients with ISSNHL (men, 220; women, 233; mean age, 50.3 years) who underwent treatment at a tertiary hospital between January 2021 and December 2019 and were followed up after 1 month. According to Siegel's criteria, 203 patients recovered within 1 month. Demographic characteristics, clinical and laboratory data, and pure-tone audiometry were analyzed. Logistic regression (the baseline), a support vector machine, extreme gradient boosting, a light gradient boosting machine, and a multilayer perceptron were used. The primary outcome was the area under the receiver operating characteristic curve (AUROC); secondary outcomes were the area under the precision-recall curve, the Brier score, balanced accuracy, and the F1 score. The light gradient boosting machine model had the best AUROC and balanced accuracy. Together with the multilayer perceptron, it was also significantly superior to logistic regression in terms of AUROC. Using the SHapley Additive exPlanations (SHAP) method, we found that the initial audiogram shape is the most important prognostic factor. Machine/deep learning methods were successfully established to predict the prognosis of ISSNHL.
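AUROC, the primary outcome above, equals the probability that a randomly chosen recovered patient receives a higher predicted score than a randomly chosen non-recovered patient. A minimal sketch (illustrative only, with hypothetical prediction scores, not the study's code) computes it directly from that rank interpretation:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the probability a positive outranks a negative (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted recovery probabilities.
recovered = [0.9, 0.8, 0.6]
not_recovered = [0.7, 0.3, 0.2]
print(auroc(recovered, not_recovered))  # 8 of 9 pairs correctly ordered
```

This pairwise form makes clear why AUROC is insensitive to the choice of a single decision threshold, unlike balanced accuracy or the F1 score.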
18
Seol HY, Moon IJ. Hearables as a gateway to hearing health care: A review. Clin Exp Otorhinolaryngol 2022; 15:127-134. [PMID: 35249320 PMCID: PMC9149229 DOI: 10.21053/ceo.2021.01662]
Abstract
The market for hearing technology is evolving: with the emergence of hearables, it now extends beyond hearing aids to include any ear-level device with wireless connectivity (i.e., wireless earbuds). However, will this evolving marketplace bring opportunities or challenges to individuals' hearing health care and to the professions of audiology and otolaryngology? The debate is ongoing. This study explores the wide spectrum of hearables available on the market and discusses the necessity of high-quality clinical evidence prior to the implementation of over-the-counter devices in clinical practice.
19
George MM, Tolley NS. AIM in Otolaryngology and Head and Neck Surgery. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_198]
20
Abdi S, Kitsara I, Hawley MS, de Witte LP. Emerging technologies and their potential for generating new assistive technologies. Assist Technol 2021; 33:17-26. [PMID: 34951831 DOI: 10.1080/10400435.2021.1945704]
Abstract
Limited access to assistive technology (AT) is a well-recognized global challenge. Emerging technologies have the potential to yield new assistive products and bridge some of the gaps in access to AT. However, few analyses exist of the potential of these technologies in the AT field. This paper describes a study that aimed to provide an overview of emerging technological developments and their potential for the AT field. It involved a gray literature review and a patent analysis to create an overview of the emerging enabling technologies that may foster the development of new AT products and services, and to identify emerging AT applications. The analysis identified seven enabling technologies relevant to the AT field: artificial intelligence, emerging human-computer interfaces, sensor technology, robotics, advances in connectivity and computing, additive manufacturing, and new materials. While there are over 3.7 million patents related to these enabling technologies, only a fraction of them (approximately 11,000 patents, or 0.3%) were identified as specifically related to AT. The paper presents some of the promising examples. Overall, the results indicate an enormous potential for new AT solutions that capitalize on emerging technological advances.
Affiliation(s)
- Sarah Abdi
- Centre for Assistive Technology and Connected Healthcare, School of Health and Related Research, University of Sheffield, Sheffield, UK
- Irene Kitsara
- Technology and Innovation Support Division, IP for Innovators Department, IP and Innovation Ecosystems Sector, World Intellectual Property Organisation (WIPO), Geneva, Switzerland
- Mark S Hawley
- Centre for Assistive Technology and Connected Healthcare, School of Health and Related Research, University of Sheffield, Sheffield, UK
- L P de Witte
- Centre for Assistive Technology and Connected Healthcare, School of Health and Related Research, University of Sheffield, Sheffield, UK
21
Standiford TC, Farlow JL, Brenner MJ, Conte ML, Terrell JE. Clinical Decision Support Systems in Otolaryngology-Head and Neck Surgery: A State of the Art Review. Otolaryngol Head Neck Surg 2021; 166:35-47. [PMID: 33874795 DOI: 10.1177/01945998211004529]
Abstract
OBJECTIVE To offer practical, evidence-informed knowledge on clinical decision support systems (CDSSs) and their utility in improving care and reducing costs in otolaryngology-head and neck surgery. This primer on CDSSs introduces clinicians to both the capabilities and the limitations of this technology, reviews the literature on current state, and seeks to spur further progress in this area. DATA SOURCES PubMed/MEDLINE, Embase, and Web of Science. REVIEW METHODS Scoping review of CDSS literature applicable to otolaryngology clinical practice. Investigators identified articles that incorporated knowledge-based computerized CDSSs to aid clinicians in decision making and workflow. Data extraction included level of evidence, Osheroff classification of CDSS intervention type, otolaryngology subspecialty or domain, and impact on provider performance or patient outcomes. CONCLUSIONS Of 3191 studies retrieved, 11 articles met formal inclusion criteria. CDSS interventions included guideline or protocols support (n = 8), forms and templates (n = 5), data presentation aids (n = 2), and reactive alerts, reference information, or order sets (all n = 1); 4 studies had multiple interventions. CDSS studies demonstrated effectiveness across diverse domains, including antibiotic stewardship, cancer survivorship, guideline adherence, data capture, cost reduction, and workflow. Implementing CDSSs often involved collaboration with health information technologists. IMPLICATIONS FOR PRACTICE While the published literature on CDSSs in otolaryngology is finite, CDSS interventions are proliferating in clinical practice, with roles in preventing medical errors, streamlining workflows, and improving adherence to best practices for head and neck disorders. Clinicians may collaborate with information technologists and health systems scientists to develop, implement, and investigate the impact of CDSSs in otolaryngology.
Affiliation(s)
| | - Janice L Farlow
- Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan, USA
| | - Michael J Brenner
- Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan, USA
| | - Marisa L Conte
- Department of Research and Informatics, University of Michigan Medical School, Ann Arbor, Michigan, USA
| | - Jeffrey E Terrell
- Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, Michigan, USA
| |
22
Crowson MG, Hartnick CJ, Diercks GR, Gallagher TQ, Fracchia MS, Setlur J, Cohen MS. Machine Learning for Accurate Intraoperative Pediatric Middle Ear Effusion Diagnosis. Pediatrics 2021; 147:peds.2020-034546. [PMID: 33731369 DOI: 10.1542/peds.2020-034546]
Abstract
OBJECTIVES Misdiagnosis of acute and chronic otitis media in children can have significant consequences from either undertreatment or overtreatment. Our objective was to develop and train an artificial intelligence algorithm to accurately predict the presence of middle ear effusion in pediatric patients presenting to the operating room for myringotomy and tube placement. METHODS We trained a neural network to classify images as "normal" (no effusion) or "abnormal" (effusion present) using tympanic membrane images from children taken to the operating room with the intent of performing myringotomy and possible tube placement for recurrent acute otitis media or otitis media with effusion. Model performance was tested on held-out cases and with fivefold cross-validation. RESULTS The mean training time for the neural network model was 76.0 (SD ± 0.01) seconds. Our model achieved a mean image classification accuracy of 83.8% (95% confidence interval [CI]: 82.7-84.8). Consistent with this classification accuracy, the model produced an area under the receiver operating characteristic curve of 0.93 (95% CI: 0.91-0.94) and an F1 score of 0.80 (95% CI: 0.77-0.82). CONCLUSIONS Artificial intelligence-assisted diagnosis of acute or chronic otitis media in children may generate value for patients, families, and the health care system by improving point-of-care diagnostic accuracy. With a small training data set composed of intraoperative images obtained at the time of tympanostomy tube insertion, our neural network was accurate in predicting the presence of a middle ear effusion in pediatric ear cases. This diagnostic accuracy is considerably higher than the human-expert otoscopy-based diagnostic performance reported in previous studies.
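The accuracy, AUC, and F1 figures above all derive from a standard binary ("normal" vs. "abnormal") confusion matrix. As a minimal sketch with hypothetical counts (not the study's data), the reported metric types can be computed as:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and F1 score from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall on effusion-present cases
    specificity = tn / (tn + fp)   # recall on effusion-absent cases
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity, "f1": f1}

# Hypothetical counts for an "effusion present" classifier on 200 ears.
print(binary_metrics(tp=80, fp=10, tn=90, fn=20))
```

Note that F1 ignores true negatives entirely, which is why it is reported alongside AUC rather than in place of it.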
Affiliation(s)
- Matthew G Crowson
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Christopher J Hartnick
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Gillian R Diercks
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Thomas Q Gallagher
- Department of Otolaryngology-Head and Neck Surgery, Eastern Virginia Medical School, Norfolk, Virginia
- Mary S Fracchia
- Department of Pediatrics, Massachusetts General Hospital for Children, Boston, Massachusetts; Department of Pediatrics, Harvard Medical School, Harvard University, Boston, Massachusetts
- Jennifer Setlur
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Michael S Cohen
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
|
23
|
George MM, Tolley NS. AIM in Otolaryngology and Head & Neck Surgery. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_198-1]
24
Wu Z, Lin Z, Li L, Pan H, Chen G, Fu Y, Qiu Q. Deep Learning for Classification of Pediatric Otitis Media. Laryngoscope 2020; 131:E2344-E2351. [PMID: 33369754 DOI: 10.1002/lary.29302]
Abstract
OBJECTIVES/HYPOTHESIS To create a new strategy for monitoring pediatric otitis media (OM), we developed a brief, reliable, and objective method for automated classification using convolutional neural networks (CNNs) with otoscopic images. STUDY DESIGN Prospective study. METHODS An otoscopic image classifier for pediatric OM was built using deep transfer learning with two widely used CNN architectures, Xception and MobileNet-V2. Otoscopic images of acute otitis media (AOM), otitis media with effusion (OME), and normal ears were obtained from our institution. Among the qualified otoendoscopic images, 10,703 were used for training and 1,500 for testing. In addition, 102 images captured by a smartphone with a Wi-Fi-connected otoscope were used as a prospective test set to evaluate the model for home screening and monitoring. RESULTS For all diagnoses combined in the test set, the Xception and MobileNet-V2 models had similar overall accuracies of 97.45% (95% CI 96.81%-97.94%) and 95.72% (95% CI 95.12%-96.16%). The overall accuracies of the two models on smartphone images were 90.66% (95% CI 90.21%-90.98%) and 88.56% (95% CI 87.86%-90.05%). Class activation maps showed that the features extracted from smartphone images were the same as those from otoendoscopic images. CONCLUSIONS We have developed deep learning algorithms for the successful automated classification of pediatric AOM and OME from otoscopic images. With a smartphone-enabled wireless otoscope, artificial intelligence may assist parents in early detection and continuous monitoring at home to decrease visit frequency. LEVEL OF EVIDENCE NA. Laryngoscope, 131:E2344-E2351, 2021.
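The accuracies above are reported with 95% confidence intervals. As a minimal sketch (with hypothetical counts, and without claiming this is the interval method the authors used), one common way to compute such an interval for a proportion is the Wilson score interval:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% when z=1.96)."""
    p = successes / n
    denom = 1.0 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))) / denom
    return center - half, center + half

# Hypothetical: 1372 of 1500 test images classified correctly.
low, high = wilson_ci(1372, 1500)
print(round(low, 4), round(high, 4))
```

Unlike the simpler normal-approximation interval, the Wilson interval stays inside [0, 1] and behaves sensibly when accuracy is close to 100%, as in this study.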
Affiliation(s)
- Zebin Wu
- Department of Otolaryngology, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Zheqi Lin
- Department of R&D, Shenzhen Accurate Technology Co., Ltd, Shenzhen, China
- Lan Li
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Hongguang Pan
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Guowei Chen
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Yuqing Fu
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Qianhui Qiu
- Department of Otolaryngology, Zhujiang Hospital, Southern Medical University, Guangzhou, China