1. Huang AE, Valdez TA. Artificial Intelligence and Pediatric Otolaryngology. Otolaryngol Clin North Am 2024:S0030-6665(24)00069-0. PMID: 39033065. DOI: 10.1016/j.otc.2024.04.011.
Abstract
Artificial intelligence (AI) is the study of programming computers to simulate human intelligence and to perform data interpretation, learning, and adaptive decision-making. Within pediatric otolaryngology, there is a growing body of evidence for the role of AI in the diagnosis and triage of acute otitis media and middle ear effusion, pediatric sleep disorders, and syndromic craniofacial anomalies. The intraoperative use of automated machine learning with robotic devices is an evolving field of study, particularly in pediatric otologic surgery and computer-aided planning for maxillofacial reconstruction, and we will likely continue to see novel applications of machine learning in otolaryngologic surgery.
Affiliation(s)
- Alice E Huang
- Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
- Tulio A Valdez
- Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Stanford, CA, USA
2. Eitan DN, Wolter NE, Scheffler P. Using Machine Learning for Endoscopic Detection of Low-Grade Subglottic Stenosis: A Proof of Principle. Otolaryngol Head Neck Surg 2024. PMID: 39015068. DOI: 10.1002/ohn.901.
Abstract
The current study trains, tests, and evaluates a deep learning algorithm to detect subglottic stenosis (SGS) on endoscopy. A retrospective review of patients undergoing microlaryngoscopy-bronchoscopy was performed. A pretrained image classifier (ResNet50) was retrained and tested on 159 images of airways taken at the glottis, 106 normal-sized airways, and 122 with SGS. Given the small sample size, data augmentation was performed to prevent overfitting. Overall model accuracy was 73.3% (SD: 3.8). Precision and recall for stenosis were 77.3% (SD: 4.0) and 72.7% (SD: 4.0), respectively. The F1 score for the detection of stenosis was 0.75 (SD: 0.04). Precision and recall for normal-sized airways were lower at 69% (SD: 4.35) and 74% (SD: 4), with an F1 score of 0.71 (SD: 0.04). This study demonstrates that an image classification algorithm can identify SGS on endoscopic images. Further work is needed to improve diagnostic accuracy before the algorithm can be deployed into clinical care.
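The F1 scores quoted above are simply the harmonic mean of precision and recall, so they can be checked directly from the reported point estimates; a minimal sketch in Python (the input values are the rounded estimates from the abstract):

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported stenosis-class estimates: precision 77.3%, recall 72.7%
print(round(f1_score(0.773, 0.727), 2))  # 0.75, matching the reported F1
# Reported normal-airway estimates: precision 69%, recall 74%
print(round(f1_score(0.69, 0.74), 2))    # 0.71, matching the reported F1
```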
Affiliation(s)
- Dana N Eitan
- Creighton University School of Medicine, Phoenix, Arizona, USA
- Nikolaus E Wolter
- Department of Otolaryngology-Head and Neck Surgery, Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Patrick Scheffler
- Creighton University School of Medicine, Phoenix, Arizona, USA
- Department of Otolaryngology-Head and Neck Surgery, Phoenix Children's Hospital, Phoenix, Arizona, USA
- University of Arizona-Phoenix College of Medicine, Phoenix, Arizona, USA
3. Chang KM, Surapaneni SS, Shaikh N, Marston AP, Vecchiotti MA, Rangarajan N, Hill CA, Scott AR. Pediatric tympanostomy tube assessment via deep learning. Am J Otolaryngol 2024; 45:104334. PMID: 38723380. DOI: 10.1016/j.amjoto.2024.104334.
Abstract
PURPOSE Tympanostomy tube (TT) placement is the most frequently performed ambulatory surgery in children under 15. After the procedure, patients are advised to follow up regularly for "tube checks" until TT extrusion. Such visits incur direct and indirect costs to families in the form of days off from work, copays, and travel expenses. This pilot study compares the efficacy of tympanic membrane (TM) evaluation by an artificial intelligence algorithm with that of clinical staff for determining the presence or absence of a tympanostomy tube within the TM. METHODS Using a digital otoscope, we performed a prospective study in children (ages 10 months-10 years) with a history of TTs who were being seen for follow-up in a pediatric otolaryngology clinic. Nonphysician study personnel used a smartphone otoscope to capture ear exam images; each ear was then assessed by a clinician via conventional otoscopy to determine whether the tube was in place or had extruded from the TM. We trained and tested a deep learning (artificial intelligence) algorithm to assess the images and compared its output with the clinician's assessment. RESULTS A total of 123 images were obtained from 28 subjects. The algorithm classified images as TM with or without a tube in place. Overall classification accuracy was 97.7%. Recall and precision were 100% and 96%, respectively, for TM without a tube present, and 95% and 100%, respectively, for TM with a tube in place. DISCUSSION This is a promising deep learning algorithm for classifying ear tube presence in the TM using images obtained in awake children with an over-the-counter otoscope available to the lay population. We are continuing enrollment, with the goal of building an algorithm to assess tube patency and extrusion.
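Per-class recall and precision of the kind reported above fall out of a two-class confusion matrix; a brief sketch with hypothetical counts (not the study's raw data) that reproduce the reported pattern for the tube-in-place class:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts for the "tube in place" class: no false positives,
# a few tubes missed by the classifier
precision, recall = precision_recall(tp=57, fp=0, fn=3)
print(f"precision={precision:.2f}, recall={recall:.2f}")  # precision=1.00, recall=0.95
```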
Affiliation(s)
- K M Chang
- Tufts University School of Medicine, Boston, MA, United States of America
- N Shaikh
- Tufts Medical Center, Boston, MA, United States of America
- A P Marston
- Tufts Medical Center, Boston, MA, United States of America
- M A Vecchiotti
- Tufts Medical Center, Boston, MA, United States of America
- N Rangarajan
- COHI Group, St. Paul, MN, United States of America
- C A Hill
- COHI Group, St. Paul, MN, United States of America
- A R Scott
- Tufts University School of Medicine, Boston, MA, United States of America; Tufts Medical Center, Boston, MA, United States of America
4. Dubois C, Eigen D, Simon F, Couloigner V, Gormish M, Chalumeau M, Schmoll L, Cohen JF. Development and validation of a smartphone-based deep-learning-enabled system to detect middle-ear conditions in otoscopic images. NPJ Digit Med 2024; 7:162. PMID: 38902477. PMCID: PMC11189910. DOI: 10.1038/s41746-024-01159-9.
Abstract
Middle-ear conditions are common causes of primary care visits, hearing impairment, and inappropriate antibiotic use. Deep learning (DL) may assist clinicians in interpreting otoscopic images. This study included patients over 5 years old from an ambulatory ENT practice in Strasbourg, France, between 2013 and 2020. Digital otoscopic images were obtained using a smartphone-attached otoscope (Smart Scope, Karl Storz, Germany) and labeled by a senior ENT specialist across 11 diagnostic classes (reference standard). An Inception-v2 DL model was trained using 41,664 otoscopic images, and its diagnostic accuracy was evaluated by calculating class-specific estimates of sensitivity and specificity. The model was then incorporated into a smartphone app called i-Nside. The DL model was evaluated on a validation set of 3,962 images and a held-out test set comprising 326 images. On the validation set, all class-specific estimates of sensitivity and specificity exceeded 98%. On the test set, the DL model achieved a sensitivity of 99.0% (95% confidence interval: 94.5-100) and a specificity of 95.2% (91.5-97.6) for the binary classification of normal vs. abnormal images; wax plugs were detected with a sensitivity of 100% (94.6-100) and specificity of 97.7% (95.0-99.1); other class-specific estimates of sensitivity and specificity ranged from 33.3% to 92.3% and 96.0% to 100%, respectively. We present an end-to-end DL-enabled system able to achieve expert-level diagnostic accuracy for identifying normal tympanic aspects and wax plugs within digital otoscopic images. However, the system's performance varied for other middle-ear conditions. Further prospective validation is necessary before wider clinical deployment.
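Binomial confidence intervals like those reported above can be reproduced from the underlying counts; a minimal sketch of a Wilson score interval (the counts below are illustrative, not the study's data):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# e.g. a sensitivity estimate of 96 correctly flagged out of 100 abnormal images
lo, hi = wilson_ci(96, 100)
print(f"0.96 (95% CI {lo:.3f}-{hi:.3f})")
```

The Wilson interval behaves better than the simple normal approximation near 0 or 1, which matters for proportions close to 100% such as the sensitivities reported here.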
Affiliation(s)
- François Simon
- Department of Pediatric Otolaryngology, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Vincent Couloigner
- Department of Pediatric Otolaryngology, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Martin Chalumeau
- Inserm UMR1153 (CRESS), Université Paris Cité, Paris, France
- Department of General Pediatrics and Pediatric Infectious Diseases, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Jérémie F Cohen
- Inserm UMR1153 (CRESS), Université Paris Cité, Paris, France
- Department of General Pediatrics and Pediatric Infectious Diseases, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
5. Suresh K, Wu MP, Benboujja F, Christakis B, Newton A, Hartnick CJ, Cohen MS. AI Model Versus Clinician Otoscopy in the Operative Setting for Otitis Media Diagnosis. Otolaryngol Head Neck Surg 2024; 170:1598-1601. PMID: 37822130. DOI: 10.1002/ohn.559.
Abstract
Prior work has demonstrated improved accuracy in otitis media diagnosis based on otoscopy using artificial intelligence (AI)-based approaches compared to clinician evaluation. However, this difference in accuracy has not been shown in a setting resembling the point-of-care. In this study, we compare the diagnostic accuracy of a machine-learning model to that of pediatricians using standard handheld otoscopes. We find that the model is more accurate than clinicians (90.6% vs 59.4%, P = .01). This is a step towards validation of AI-based diagnosis under more real-world conditions. With further validation, for example on different patient populations and in deployment, this technology could be a useful addition to the clinician's toolbox in accurately diagnosing otitis media.
Affiliation(s)
- Krish Suresh
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head & Neck Surgery, Boston, Massachusetts, USA
- Michael P Wu
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head & Neck Surgery, Boston, Massachusetts, USA
- Fouzi Benboujja
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head & Neck Surgery, Boston, Massachusetts, USA
- Barbara Christakis
- Department of Pediatrics, Massachusetts General Hospital, Boston, Massachusetts, USA
- Alice Newton
- Department of Pediatrics, Massachusetts General Hospital, Boston, Massachusetts, USA
- Christopher J Hartnick
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head & Neck Surgery, Boston, Massachusetts, USA
- Michael S Cohen
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- Department of Otolaryngology-Head & Neck Surgery, Boston, Massachusetts, USA
6. Shaikh N, Conway SJ, Kovačević J, Condessa F, Shope TR, Haralam MA, Campese C, Lee MC, Larsson T, Cavdar Z, Hoberman A. Development and Validation of an Automated Classifier to Diagnose Acute Otitis Media in Children. JAMA Pediatr 2024; 178:401-407. PMID: 38436941. PMCID: PMC10985552. DOI: 10.1001/jamapediatrics.2024.0011.
Abstract
Importance Acute otitis media (AOM) is a frequently diagnosed illness in children, yet the accuracy of diagnosis has been consistently low. Multiple neural networks have been developed to recognize the presence of AOM with limited clinical application. Objective To develop and internally validate an artificial intelligence decision-support tool to interpret videos of the tympanic membrane and enhance accuracy in the diagnosis of AOM. Design, Setting, and Participants This diagnostic study analyzed otoscopic videos of the tympanic membrane captured using a smartphone during outpatient clinic visits at 2 sites in Pennsylvania between 2018 and 2023. Eligible participants included children who presented for sick visits or wellness visits. Exposure Otoscopic examination. Main Outcomes and Measures Using the otoscopic videos that were annotated by validated otoscopists, a deep residual-recurrent neural network was trained to predict both features of the tympanic membrane and the diagnosis of AOM vs no AOM. The accuracy of this network was compared with a second network trained using a decision tree approach. A noise quality filter was also trained to prompt users that the video segment acquired may not be adequate for diagnostic purposes. Results Using 1151 videos from 635 children (majority younger than 3 years of age), the deep residual-recurrent neural network had almost identical diagnostic accuracy as the decision tree network. The finalized deep residual-recurrent neural network algorithm classified tympanic membrane videos into AOM vs no AOM categories with a sensitivity of 93.8% (95% CI, 92.6%-95.0%) and specificity of 93.5% (95% CI, 92.8%-94.3%) and the decision tree model had a sensitivity of 93.7% (95% CI, 92.4%-94.9%) and specificity of 93.3% (92.5%-94.1%). 
Of the tympanic membrane features output by the network, bulging of the tympanic membrane most closely aligned with the predicted diagnosis; bulging was present in 230 of 230 cases (100%) in which the diagnosis was predicted to be AOM in the test set. Conclusions and Relevance These findings suggest that, given its high accuracy, the algorithm and the medical-grade application that facilitates image acquisition and quality filtering could reasonably be used in primary care or acute care settings to aid automated diagnosis of AOM and decisions regarding treatment.
Affiliation(s)
- Nader Shaikh
- Department of Pediatrics, Division of General Academic Pediatrics, University of Pittsburgh School of Medicine, University of Pittsburgh Medical Center Children’s Hospital of Pittsburgh, Pennsylvania
- Shannon J. Conway
- Department of Pediatrics, Division of General Academic Pediatrics, University of Pittsburgh School of Medicine, University of Pittsburgh Medical Center Children’s Hospital of Pittsburgh, Pennsylvania
- Jelena Kovačević
- Tandon School of Engineering, New York University, New York, New York
- Filipe Condessa
- Bosch Center for Artificial Intelligence, Pittsburgh, Pennsylvania
- Timothy R. Shope
- Department of Pediatrics, Division of General Academic Pediatrics, University of Pittsburgh School of Medicine, University of Pittsburgh Medical Center Children’s Hospital of Pittsburgh, Pennsylvania
- Mary Ann Haralam
- Department of Pediatrics, Division of General Academic Pediatrics, University of Pittsburgh School of Medicine, University of Pittsburgh Medical Center Children’s Hospital of Pittsburgh, Pennsylvania
- Catherine Campese
- Department of Pediatrics, Division of General Academic Pediatrics, University of Pittsburgh School of Medicine, University of Pittsburgh Medical Center Children’s Hospital of Pittsburgh, Pennsylvania
- Matthew C. Lee
- Department of Pediatrics, Division of General Academic Pediatrics, University of Pittsburgh School of Medicine, University of Pittsburgh Medical Center Children’s Hospital of Pittsburgh, Pennsylvania
- Alejandro Hoberman
- Department of Pediatrics, Division of General Academic Pediatrics, University of Pittsburgh School of Medicine, University of Pittsburgh Medical Center Children’s Hospital of Pittsburgh, Pennsylvania
7. Shim JH, Sunwoo W, Choi BY, Kim KG, Kim YJ. Improving the Accuracy of Otitis Media with Effusion Diagnosis in Pediatric Patients Using Deep Learning. Bioengineering (Basel) 2023; 10:1337. PMID: 38002461. PMCID: PMC10669592. DOI: 10.3390/bioengineering10111337.
Abstract
Otitis media with effusion (OME), primarily seen in children aged 2 years and younger, is characterized by the presence of fluid in the middle ear, often resulting in hearing loss and aural fullness. While deep learning networks have been explored to aid OME diagnosis, prior work often did not specify whether pediatric images were used for training, causing uncertainty about clinical relevance, especially given important distinctions between the tympanic membranes of small children and adults. We trained cross-validated ResNet50, DenseNet201, InceptionV3, and InceptionResNetV2 models on 1150 pediatric tympanic membrane images from otoendoscopes to classify OME. When assessed on a separate dataset of 100 pediatric tympanic membrane images, the models achieved mean accuracies of 92.9% (ResNet50), 97.2% (DenseNet201), 96.0% (InceptionV3), and 94.8% (InceptionResNetV2), compared with seven otolaryngologists who achieved accuracies between 69.0% and 84.0%. Even the worst-performing model, trained on fold 3 of InceptionResNetV2 with an accuracy of 88.0%, exceeded the accuracy of the highest-performing otolaryngologist at 84.0%. Our findings suggest that these specifically trained deep learning models can potentially enhance the clinical diagnosis of OME using pediatric otoendoscopic tympanic membrane images.
Affiliation(s)
- Jae-Hyuk Shim
- Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
- Woongsang Sunwoo
- Department of Otorhinolaryngology-Head and Neck Surgery, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
- Byung Yoon Choi
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seongnam 13620, Republic of Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
- Young Jae Kim
- Department of Biomedical Engineering, Gil Medical Center, Gachon University College of Medicine, Incheon 21565, Republic of Korea
8. Song D, Kim T, Lee Y, Kim J. Image-Based Artificial Intelligence Technology for Diagnosing Middle Ear Diseases: A Systematic Review. J Clin Med 2023; 12:5831. PMID: 37762772. PMCID: PMC10531728. DOI: 10.3390/jcm12185831.
Abstract
Otolaryngological diagnoses, such as otitis media, are traditionally made using endoscopy, where diagnostic accuracy can be subjective and vary among clinicians. The integration of objective tools, like artificial intelligence (AI), could improve the diagnostic process by minimizing the influence of subjective biases and variability. We systematically reviewed AI techniques using medical imaging in otolaryngology. Relevant studies related to AI-assisted otitis media diagnosis were extracted from five databases: Google Scholar, PubMed, Medline, Embase, and IEEE Xplore, without date restrictions. Publications that did not relate to AI and otitis media diagnosis or did not utilize medical imaging were excluded. Of the 32 identified studies, 26 used tympanic membrane images for classification, achieving an average diagnostic accuracy of 86% (range: 48.7-99.16%). Another three studies employed both segmentation and classification techniques, reporting an average diagnostic accuracy of 90.8% (range: 88.06-93.9%). These findings suggest that AI technologies hold promise for improving otitis media diagnosis, offering benefits for telemedicine and primary care settings due to their high diagnostic accuracy. However, to ensure patient safety and optimal outcomes, further improvements in diagnostic performance are necessary.
Affiliation(s)
- Dahye Song
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Taewan Kim
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Yeonjoon Lee
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Jaeyoung Kim
- Department of Dermatology and Skin Sciences, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
- Core Research & Development Center, Korea University Ansan Hospital, Ansan 15355, Republic of Korea
9. Ding X, Huang Y, Tian X, Zhao Y, Feng G, Gao Z. Diagnosis, Treatment, and Management of Otitis Media with Artificial Intelligence. Diagnostics (Basel) 2023; 13:2309. PMID: 37443702. DOI: 10.3390/diagnostics13132309.
Abstract
Otitis media (OM) is a common infectious disease with a low rate of early diagnosis, which significantly increases the difficulty of treatment and the likelihood of serious complications, including hearing loss, speech impairment, and even intracranial infection. Artificial intelligence (AI) systems have shown great promise in several areas of healthcare, such as the accurate detection of diseases, the automated interpretation of images, and the prediction of patient outcomes. Several articles have reported that machine learning (ML) algorithms such as ResNet, InceptionV3, and U-Net have been applied successfully to the diagnosis of OM. The use of these techniques in OM is still in its infancy, but their potential is enormous. In this review, we present important concepts related to ML and AI, describe how these technologies are currently being applied to diagnosing, treating, and managing OM, and discuss the challenges associated with developing AI-assisted OM technologies in the future.
Affiliation(s)
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
10. Suresh K, Cohen MS, Hartnick CJ, Bartholomew RA, Lee DJ, Crowson MG. Making Use of Artificial Intelligence-Generated Synthetic Tympanic Membrane Images. JAMA Otolaryngol Head Neck Surg 2023; 149:555-556. PMID: 36995729. PMCID: PMC10064279. DOI: 10.1001/jamaoto.2023.0218.
Abstract
This diagnostic study examines the application of generative artificial intelligence in clinical tool research and development.
Affiliation(s)
- Krish Suresh
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye and Ear, Boston
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Michael S. Cohen
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye and Ear, Boston
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Christopher J. Hartnick
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye and Ear, Boston
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Ryan A. Bartholomew
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye and Ear, Boston
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Daniel J. Lee
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye and Ear, Boston
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Matthew G. Crowson
- Department of Otolaryngology–Head & Neck Surgery, Massachusetts Eye and Ear, Boston
- Department of Otolaryngology–Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts
11. El Feghaly RE, Nedved A, Katz SE, Frost HM. New insights into the treatment of acute otitis media. Expert Rev Anti Infect Ther 2023; 21:523-534. PMID: 37097281. PMCID: PMC10231305. DOI: 10.1080/14787210.2023.2206565.
Abstract
INTRODUCTION Acute otitis media (AOM) affects most (80%) children by 5 years of age and is the most common reason children are prescribed antibiotics. The epidemiology of AOM has changed considerably since the widespread use of pneumococcal conjugate vaccines, which has broad-reaching implications for management. AREAS COVERED In this narrative review, we cover the epidemiology of AOM, best practices for diagnosis and management, new diagnostic technology, effective stewardship interventions, and future directions of the field. Literature review was performed using PubMed and ClinicalTrials.gov. EXPERT OPINION Inaccurate diagnoses, unnecessary antibiotic use, and increasing antimicrobial resistance remain major challenges in AOM management. Fortunately, effective tools and interventions to improve diagnostic accuracy, de-implement unnecessary antibiotic use, and individualize care are on the horizon. Successful scaling of these tools and interventions will be critical to improving overall care for children.
Affiliation(s)
- Rana E. El Feghaly
- Department of Pediatrics, Children’s Mercy Kansas City, Kansas City, MO, USA
- Department of Pediatrics, University of Missouri-Kansas City, Kansas City, MO, USA
- Amanda Nedved
- Department of Pediatrics, Children’s Mercy Kansas City, Kansas City, MO, USA
- Department of Pediatrics, University of Missouri-Kansas City, Kansas City, MO, USA
- Sophie E. Katz
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Holly M. Frost
- Department of Pediatrics, Denver Health and Hospital Authority, Denver, CO, USA
- Center for Health Systems Research, Denver Health and Hospital Authority, Denver, CO, USA
- Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
12. Cao Z, Chen F, Grais EM, Yue F, Cai Y, Swanepoel DW, Zhao F. Machine Learning in Diagnosing Middle Ear Disorders Using Tympanic Membrane Images: A Meta-Analysis. Laryngoscope 2023; 133:732-741. PMID: 35848851. DOI: 10.1002/lary.30291.
Abstract
OBJECTIVE To systematically evaluate the development of machine learning (ML) models and compare their diagnostic accuracy for the classification of middle ear disorders (MED) using tympanic membrane (TM) images. METHODS PubMed, EMBASE, CINAHL, and CENTRAL were searched up until November 30, 2021. Studies on the development of ML approaches for diagnosing MED using TM images were selected according to the inclusion criteria. PRISMA guidelines were followed, with study design, analysis method, and outcomes extracted. Sensitivity, specificity, and area under the curve (AUC) were used to summarize the performance metrics of the meta-analysis. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool in combination with the Prediction Model Risk of Bias Assessment Tool. RESULTS Sixteen studies were included, encompassing 20,254 TM images (7,025 normal TM and 13,229 MED). The sample size ranged from 45 to 6,066 per study. The accuracy of the 25 included ML approaches ranged from 76.00% to 98.26%. Eleven studies (68.8%) were rated as having a low risk of bias, with the reference standard as the domain most often at high risk of bias (37.5%). Sensitivity and specificity were 93% (95% CI, 90%-95%) and 85% (95% CI, 82%-88%), respectively. The AUC across all TM images was 94% (95% CI, 91%-96%). A greater AUC was found using otoendoscopic images than otoscopic images. CONCLUSIONS ML approaches perform robustly in distinguishing between normal ears and MED; however, a standardized TM image acquisition and annotation protocol should be developed. LEVEL OF EVIDENCE NA Laryngoscope, 133:732-741, 2023.
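The AUC summarized above has a simple rank interpretation: it is the probability that a randomly chosen diseased ear receives a higher model score than a randomly chosen normal ear. A small sketch computing AUC that way from toy scores (not data from the included studies):

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic; ties count as half a win."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Toy scores: hypothetical model outputs for MED (positive) vs normal (negative) images
print(auc([0.9, 0.8, 0.7, 0.6], [0.65, 0.5, 0.4, 0.2]))  # 0.9375
```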
Affiliation(s)
- Zuwei Cao: Center for Rehabilitative Auditory Research, Guizhou Provincial People's Hospital, Guiyang City, China
- Feifan Chen: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Emad M Grais: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Fengjuan Yue: Medical Examination Center, Guizhou Provincial People's Hospital, Guiyang City, China
- Yuexin Cai: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, China
- De Wet Swanepoel: Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Fei Zhao: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
13
Habib AR, Xu Y, Bock K, Mohanty S, Sederholm T, Weeks WB, Dodhia R, Ferres JL, Perry C, Sacks R, Singh N. Evaluating the generalizability of deep learning image classification algorithms to detect middle ear disease using otoscopy. Sci Rep 2023; 13:5368. [PMID: 37005441 PMCID: PMC10067817 DOI: 10.1038/s41598-023-31921-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 03/20/2023] [Indexed: 04/04/2023] Open
Abstract
To evaluate the generalizability of artificial intelligence (AI) algorithms that use deep learning methods to identify middle ear disease from otoscopic images by comparing internal and external performance. A total of 1842 otoscopic images were collected from three independent sources: (a) Van, Turkey; (b) Santiago, Chile; and (c) Ohio, USA. Diagnostic categories consisted of (i) normal or (ii) abnormal. Deep learning methods were used to develop models and to evaluate internal and external performance using area under the curve (AUC) estimates. A pooled assessment was performed by combining all cohorts with fivefold cross-validation. AI-otoscopy algorithms achieved high internal performance (mean AUC: 0.95, 95% CI: 0.80-1.00). However, performance was reduced when tested on external otoscopic images not used for training (mean AUC: 0.76, 95% CI: 0.61-0.91). Overall, external performance was significantly lower than internal performance (mean difference in AUC: -0.19, p ≤ 0.04). Combining cohorts achieved substantial pooled performance (AUC: 0.96, standard error: 0.01). Internally applied algorithms performed well in identifying middle ear disease from otoscopic images. However, performance was reduced when the algorithms were applied to new test cohorts. Further efforts are required to explore data augmentation and pre-processing techniques that might improve external performance and to develop a robust, generalizable algorithm for real-world clinical applications.
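The internal-versus-external gap reported above is the generic signature of distribution shift. A minimal sketch with synthetic cohorts (not the study's data, images, or models) shows how a classifier fit on one cohort loses AUC on a shifted one:

```python
# A minimal sketch (synthetic data, not the study's cohorts) of why
# internally validated performance can overstate external performance:
# the model is fit on one cohort and evaluated on a distribution-shifted one.
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    """Rank-based AUC: P(random positive scores above random negative)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

def make_cohort(n, signal):
    """Two image-derived features; label driven by feature 0 at given strength."""
    x = rng.normal(size=(n, 2))
    y = (signal * x[:, 0] + rng.normal(size=n) > 0).astype(int)
    return x, y

x_train, y_train = make_cohort(2000, signal=3.0)   # training cohort
x_int, y_int = make_cohort(2000, signal=3.0)       # held-out, same site
x_ext, y_ext = make_cohort(2000, signal=0.5)       # external site, shifted

# Nearest-class-mean linear classifier fit on the training cohort only.
w = x_train[y_train == 1].mean(0) - x_train[y_train == 0].mean(0)

internal_auc = auc(x_int @ w, y_int)
external_auc = auc(x_ext @ w, y_ext)
print(f"internal AUC={internal_auc:.2f}, external AUC={external_auc:.2f}")
```

Here the feature-label relationship is weaker in the external cohort, so the same fitted model scores a markedly lower AUC, mirroring the 0.95 vs 0.76 pattern in the abstract.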
Affiliation(s)
- Al-Rahim Habib: Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia; Department of Otolaryngology, Head and Neck Surgery, Westmead Hospital, Sydney, NSW, Australia
- Yixi Xu: AI for Good Lab, Microsoft, Redmond, WA, USA
- Kris Bock: Azure FastTrack Engineering, Brisbane, QLD, Australia
- Chris Perry: University of Queensland Medical School, Brisbane, QLD, Australia
- Raymond Sacks: Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia
- Narinder Singh: Faculty of Medicine and Health, University of Sydney, Sydney, NSW, Australia; Department of Otolaryngology, Head and Neck Surgery, Westmead Hospital, Sydney, NSW, Australia
14
Lovecchio F, Lafage R, Line B, Bess S, Shaffrey C, Kim HJ, Ames C, Burton D, Gupta M, Smith JS, Eastlack R, Klineberg E, Mundis G, Schwab F, Lafage V. Optimizing the Definition of Proximal Junctional Kyphosis: A Sensitivity Analysis. Spine (Phila Pa 1976) 2023; 48:414-420. [PMID: 36728798 DOI: 10.1097/brs.0000000000004564] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Accepted: 11/27/2022] [Indexed: 02/03/2023]
Abstract
STUDY DESIGN Diagnostic binary threshold analysis. OBJECTIVE (1) Perform a sensitivity analysis demonstrating the test performance metrics for any combination of proximal junctional angle (PJA) magnitude and change; (2) propose new proximal junctional kyphosis (PJK) criteria. SUMMARY OF BACKGROUND DATA Previous definitions of PJK have been arbitrarily selected and then tested through retrospective case series, often showing little correlation with clinical outcomes. MATERIALS AND METHODS Surgically treated adult spinal deformity patients (≥4 levels fused) enrolled in a prospective, multicenter database were evaluated at a minimum 2-year follow-up for proximal junctional failure (PJF). Using PJF as the outcome of interest, test performance metrics, including sensitivity, positive predictive value, and the F1 score (harmonic mean of precision and recall), were calculated for all combinations of PJA magnitude and change using different combinations of perijunctional vertebrae. The combination with the highest F1 score was selected as the new PJK criteria. Performance metrics of previous PJK definitions and the new PJK definition were compared. RESULTS In total, 669 patients were reviewed. The PJF rate was 10%. Overall, the highest F1 scores were achieved when the angle from one level below the upper instrumented vertebra (UIV-1) to two levels above it (UIV+2) was measured. For lower thoracic cases, out of all the PJA magnitude/change combinations tested, a UIV-1/UIV+2 magnitude of -28° and a change of -20° were associated with the highest F1 score. For upper thoracic cases, a UIV-1/UIV+2 magnitude of -30° and a change of -24° were associated with the highest F1 score. Using PJF as the outcome, patients meeting this new criterion (11.5%) at 6 weeks had the lowest survival rate (74.7%) at 2 years postoperative, compared with the Glattes (84.4%) and Bridwell (77.4%) criteria.
CONCLUSIONS Out of all possible PJA magnitude and change combinations, without stratifying by upper thoracic versus lower thoracic fusions, a magnitude of ≤-28° and a change of ≤-22° provide the best test performance metrics for predicting PJF.
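The threshold selection described above amounts to a grid search over (angle magnitude, angle change) cutoff pairs, keeping the pair whose binary "PJK" call maximizes F1 against the failure outcome. The data, outcome model, and grid below are synthetic, not the study's cohort:

```python
# Hypothetical sketch of the paper's threshold-selection idea: sweep all
# (angle magnitude, angle change) cutoff pairs and keep the pair whose
# binary "PJK" call maximizes F1 against the failure outcome (PJF).
# Patients and thresholds below are synthetic, not the study's cohort.
import random

random.seed(1)
patients = []
for _ in range(600):
    magnitude = random.uniform(-45, 0)   # proximal junctional angle, degrees
    change = random.uniform(-35, 0)      # change from baseline, degrees
    # Synthetic outcome: failure is likely only with large negative angles.
    pjf = (magnitude < -28 and change < -20 and random.random() < 0.8)
    patients.append((magnitude, change, pjf))

def f1_for(mag_cut, chg_cut):
    tp = fp = fn = 0
    for magnitude, change, pjf in patients:
        flagged = magnitude <= mag_cut and change <= chg_cut
        tp += flagged and pjf
        fp += flagged and not pjf
        fn += (not flagged) and pjf
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

grid = [(m, c) for m in range(-40, -9, 2) for c in range(-30, -4, 2)]
best_mag, best_chg = max(grid, key=lambda mc: f1_for(*mc))
best_f1 = f1_for(best_mag, best_chg)
print(best_mag, best_chg, round(best_f1, 2))
```

Because F1 balances precision and recall, the selected cutoff pair penalizes both over-flagging (false positives) and missed failures (false negatives), which is the rationale for using it instead of accuracy on a 10%-prevalence outcome.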
Affiliation(s)
- Francis Lovecchio: Department of Orthopedic Surgery, Hospital for Special Surgery, New York, NY
- Renaud Lafage: Department of Orthopedic Surgery, Northwell Health, Lenox Hill Hospital, New York, NY
- Breton Line: Denver International Spine Center, Presbyterian St. Luke's/Rocky Mountain Hospital for Children, Denver, CO
- Shay Bess: Denver International Spine Center, Presbyterian St. Luke's/Rocky Mountain Hospital for Children, Denver, CO
- Han Jo Kim: Department of Orthopedic Surgery, Hospital for Special Surgery, New York, NY
- Christopher Ames: Department of Neurosurgery, University of California School of Medicine, San Francisco, CA
- Douglas Burton: Department of Orthopedic Surgery, University of Kansas Medical Center, Kansas City, KS
- Munish Gupta: Department of Orthopedic Surgery, Washington University, St Louis, MO
- Justin S Smith: Department of Neurosurgery, University of Virginia Medical Center, Charlottesville, VA
- Robert Eastlack: Department of Orthopedic Surgery, Scripps Clinic Torrey Pines, La Jolla, CA
- Eric Klineberg: Department of Orthopedic Surgery, University of California, Davis, Sacramento, CA
- Gregory Mundis: Department of Orthopedic Surgery, Scripps Clinic Torrey Pines, La Jolla, CA
- Frank Schwab: Department of Orthopedic Surgery, Northwell Health, Lenox Hill Hospital, New York, NY
- Virginie Lafage: Department of Orthopedic Surgery, Northwell Health, Lenox Hill Hospital, New York, NY
15
Suresh K, Cohen MS, Hartnick CJ, Bartholomew RA, Lee DJ, Crowson MG. Generation of synthetic tympanic membrane images: Development, human validation, and clinical implications of synthetic data. PLOS DIGITAL HEALTH 2023; 2:e0000202. [PMID: 36827244 PMCID: PMC9956018 DOI: 10.1371/journal.pdig.0000202] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Accepted: 01/24/2023] [Indexed: 02/25/2023]
Abstract
Synthetic clinical images could augment real medical image datasets, a novel approach in otolaryngology-head and neck surgery (OHNS). Our objective was to develop a generative adversarial network (GAN) for tympanic membrane images and to validate the quality of synthetic images with human reviewers. Our model was developed using a state-of-the-art GAN architecture, StyleGAN2-ADA. The network was trained on intraoperative high-definition (HD) endoscopic images of tympanic membranes collected from pediatric patients undergoing myringotomy with possible tympanostomy tube placement. A human validation survey was administered to a cohort of OHNS and pediatrics trainees at our institution. The primary measure of model quality was the Fréchet Inception Distance (FID), a metric comparing the distribution of generated images with the distribution of real images. The measures used for human reviewer validation were the sensitivity, specificity, and area under the curve (AUC) for reviewers' ability to discern synthetic from real images. Our dataset comprised 202 images. The best GAN was trained at 512x512 image resolution with an FID of 47.0. The progression of images through training showed stepwise "learning" of the anatomic features of a tympanic membrane. The validation survey was completed by 65 reviewers who evaluated 925 images. Human reviewers demonstrated a sensitivity of 66%, a specificity of 73%, and an AUC of 0.69 for the detection of synthetic images. In summary, we successfully developed a GAN to produce synthetic tympanic membrane images and validated it with human reviewers. These images could be used to bolster real datasets with various pathologies and to develop more robust deep learning models, such as those used for diagnostic predictions from otoscopic images. However, caution should be exercised with the use of synthetic data given issues regarding data diversity and performance validation. Any model trained using synthetic data will require robust external validation to ensure validity and generalizability.
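The FID reported above compares Gaussian statistics fitted to real and generated image embeddings. A sketch of the metric itself follows; the mean vectors and covariances here are toy values, not Inception-v3 embedding statistics:

```python
# Sketch of the Fréchet Inception Distance (FID) between two Gaussians
# fitted to real and synthetic image embeddings. Toy statistics only,
# not Inception embeddings; lower FID means closer distributions.
import numpy as np

def matrix_sqrt(a):
    """Matrix square root via eigendecomposition (adequate for PSD products)."""
    w, v = np.linalg.eig(a)
    return ((v * np.sqrt(w.astype(complex))) @ np.linalg.inv(v)).real

def fid(mu1, cov1, mu2, cov2):
    """FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1 C2)^(1/2))."""
    diff = mu1 - mu2
    covmean = matrix_sqrt(cov1 @ cov2)
    return float(diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean))

mu_real = np.array([0.0, 0.0])
cov = np.array([[1.0, 0.2], [0.2, 1.0]])

print(fid(mu_real, cov, mu_real, cov))               # identical stats -> ~0
print(fid(mu_real, cov, np.array([3.0, 4.0]), cov))  # mean shift only -> ~25
```

With equal covariances the trace term vanishes and the FID reduces to the squared distance between the mean embeddings, which makes the two printed cases easy to check by hand.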
Affiliation(s)
- Krish Suresh: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Michael S. Cohen: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Christopher J. Hartnick: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Ryan A. Bartholomew: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Daniel J. Lee: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
- Matthew G. Crowson: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, United States of America; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Boston, Massachusetts, United States of America
16
Miller LE, Goedicke W, Crowson MG, Rathi VK, Naunheim MR, Agarwala AV. Using Machine Learning to Predict Operating Room Case Duration: A Case Study in Otolaryngology. Otolaryngol Head Neck Surg 2023; 168:241-247. [PMID: 35133897 DOI: 10.1177/01945998221076480] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 01/07/2022] [Indexed: 12/14/2022]
Abstract
OBJECTIVE Optimizing operating room (OR) efficiency depends on accurate case duration estimates. Machine learning (ML) methods have been used to predict OR case durations in other subspecialties. We hypothesized that ML methods improve projected case lengths over existing non-ML techniques for otolaryngology-head and neck surgery cases. METHODS Deidentified patient information from otolaryngology surgical cases at 1 academic institution was reviewed from 2016 to 2020. Variables collected included patient, surgeon, procedure, and facility data known preoperatively, so as to capture all realistic contributors. Available case data were divided into training and testing data sets. Several ML algorithms were evaluated based on the best performance of predicted case duration when compared to actual case duration. The performance of all models was compared by the average root mean squared error and mean absolute error (MAE). RESULTS In total, 50,888 otolaryngology surgical cases were evaluated, with an average case duration of 98.3 ± 86.9 minutes. Most cases were general otolaryngology (n = 16,620). Case features closely associated with OR duration included the procedure performed, surgeon, subspecialty of the case, and postoperative destination of the patient. The best-performing ML models were CatBoost and XGBoost, which reduced the operative time MAE by 9.6 and 8.5 minutes, respectively, compared to current methods. DISCUSSION The incorporation of other easily identifiable features beyond procedure performed and surgeon meaningfully improved operative duration prediction accuracy. CatBoost provided the best-performing ML model. IMPLICATIONS FOR PRACTICE ML algorithms to predict OR case duration in otolaryngology can improve case duration accuracy and result in financial benefit.
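The MAE gain from richer features can be illustrated with a toy historical-average predictor. Everything below is synthetic (invented procedures, surgeons, and durations), not the study's records or its CatBoost/XGBoost models:

```python
# Toy illustration (synthetic data, not the study's records) of why adding
# features beyond the procedure code can cut mean absolute error: predict
# case duration from historical averages at two levels of granularity.
import random
from collections import defaultdict
from statistics import mean

random.seed(0)
procedures = {"tonsillectomy": 45, "thyroidectomy": 120, "FESS": 90}
surgeon_offset = {"A": -15, "B": 0, "C": 20}   # surgeon-specific tendencies

cases = [(proc, surg,
          procedures[proc] + surgeon_offset[surg] + random.gauss(0, 10))
         for proc in procedures for surg in surgeon_offset for _ in range(50)]
random.shuffle(cases)
train, test = cases[:350], cases[350:]

def fit(keyfunc):
    """Average historical duration per key; falls back to the overall mean."""
    buckets = defaultdict(list)
    for proc, surg, minutes in train:
        buckets[keyfunc(proc, surg)].append(minutes)
    avgs = {k: mean(v) for k, v in buckets.items()}
    overall = mean(m for _, _, m in train)
    return lambda proc, surg: avgs.get(keyfunc(proc, surg), overall)

def mae(predict):
    return mean(abs(predict(p, s) - m) for p, s, m in test)

proc_only = fit(lambda p, s: p)            # procedure-only baseline
proc_surgeon = fit(lambda p, s: (p, s))    # richer feature set
print(f"MAE by procedure: {mae(proc_only):.1f} min, "
      f"by procedure+surgeon: {mae(proc_surgeon):.1f} min")
```

Gradient-boosted models like those in the study generalize this idea: they learn such interactions (and many more) automatically from the preoperative feature set rather than from hand-built lookup tables.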
Affiliation(s)
- Lauren E Miller: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
- William Goedicke: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
- Matthew G Crowson: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
- Vinay K Rathi: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
- Matthew R Naunheim: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
- Aalok V Agarwala: Department of Anesthesia, Massachusetts Eye and Ear Infirmary, Boston, Massachusetts, USA
17
El Feghaly RE, Jackson MA. Predicting Recurrent Acute Otitis Media and the Need for Tympanostomy: A Powerful Tool. Pediatrics 2023; 151:190441. [PMID: 36617973 DOI: 10.1542/peds.2022-060110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/26/2022] [Indexed: 01/10/2023] Open
Affiliation(s)
- Rana E El Feghaly: Division of Infectious Diseases, Department of Pediatrics, Children's Mercy Kansas City, Kansas City, Missouri; Department of Pediatrics, University of Missouri-Kansas City, Kansas City, Missouri
- Mary Anne Jackson: Division of Infectious Diseases, Department of Pediatrics, Children's Mercy Kansas City, Kansas City, Missouri; Department of Pediatrics, University of Missouri-Kansas City, Kansas City, Missouri
18
Byun H, Lee SH, Kim TH, Oh J, Chung JH. Feasibility of the Machine Learning Network to Diagnose Tympanic Membrane Lesions without Coding Experience. J Pers Med 2022; 12:jpm12111855. [PMID: 36579584 PMCID: PMC9697619 DOI: 10.3390/jpm12111855] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Revised: 09/29/2022] [Accepted: 10/31/2022] [Indexed: 11/10/2022] Open
Abstract
A machine learning platform that can be operated without coding knowledge (Teachable Machine®) has been introduced. The aim of the present study was to assess the performance of Teachable Machine® for diagnosing tympanic membrane lesions. A total of 3024 tympanic membrane images were used to train and validate the diagnostic performance of the network. Tympanic membrane images were labeled as normal, otitis media with effusion (OME), chronic otitis media (COM), or cholesteatoma. According to the complexity of the categorization, Level I refers to normal versus abnormal tympanic membrane; Level II was defined as normal, OME, or COM + cholesteatoma; and Level III distinguishes between all four pathologies. In addition, eighty representative test images were used to assess performance. Teachable Machine® automatically creates a classification network and reports diagnostic performance when images are uploaded. The mean accuracy of Teachable Machine® for classifying tympanic membranes as normal or abnormal (Level I) was 90.1%. For Level II, the mean accuracy was 89.0%, and for Level III it was 86.2%. The overall accuracy of the classification of the 80 representative tympanic membrane images was 78.75%, and the hit rates for normal, OME, COM, and cholesteatoma were 95.0%, 70.0%, 90.0%, and 60.0%, respectively. Teachable Machine® successfully generated a diagnostic network for classifying tympanic membrane lesions.
Affiliation(s)
- Hayoung Byun: Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea; Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea
- Seung Hwan Lee: Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea
- Tae Hyun Kim: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; Department of Computer Science, Hanyang University, Seoul 04763, Korea
- Jaehoon Oh: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; Department of Emergency Medicine, College of Medicine, Hanyang University, Seoul 04763, Korea
- Jae Ho Chung: Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea; Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; Department of HY-KIST Bio-Convergence, College of Medicine, Hanyang University, Seoul 04763, Korea
19
Ezzibdeh R, Munjal T, Ahmad I, Valdez TA. Artificial intelligence and tele-otoscopy: A window into the future of pediatric otology. Int J Pediatr Otorhinolaryngol 2022; 160:111229. [PMID: 35816971 DOI: 10.1016/j.ijporl.2022.111229] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Revised: 06/30/2022] [Accepted: 07/01/2022] [Indexed: 10/17/2022]
Abstract
Telehealth in otolaryngology is gaining popularity as a potential tool for increased access for rural populations, decreased specialist wait times, and overall savings to the healthcare system. The adoption of telehealth has been dramatically increased by the COVID-19 pandemic limiting patients' physical access to hospitals and clinics. One of the key challenges to telehealth in general otolaryngology and otology specifically is the limited physical examination possible on the ear canal and middle ear. This is compounded in pediatric populations who commonly present with middle ear pathologies which can be challenging to diagnose even in the clinic. To address this need, various otoscopes have been designed to allow patients, their parents, or primary care providers to image the tympanic membrane and middle ear, and send data to otolaryngologists for review. Furthermore, the ability of these devices to capture images in digital format has opened the possibility of using artificial intelligence for quick and reliable diagnostic workup. In this manuscript, we provide a concise review of the literature regarding the efficacy of remote otoscopy, as well as recent efforts on the use of artificial intelligence in aiding otologic diagnoses.
Affiliation(s)
- Rami Ezzibdeh: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
- Tina Munjal: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
- Iram Ahmad: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
- Tulio A Valdez: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
20
Crowson MG, Bates DW, Suresh K, Cohen MS, Hartnick CJ. "Human vs Machine" Validation of a Deep Learning Algorithm for Pediatric Middle Ear Infection Diagnosis. Otolaryngol Head Neck Surg 2022:1945998221119156. [PMID: 35972815 PMCID: PMC9931938 DOI: 10.1177/01945998221119156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
OBJECTIVE We compared the diagnostic performance of human clinicians with that of a neural network algorithm developed using a library of tympanic membrane images derived from children taken to the operating room with the intent of performing myringotomy and possible tube placement for recurrent acute otitis media (AOM) or otitis media with effusion (OME). STUDY DESIGN Retrospective cohort study. SETTING Tertiary academic medical center from 2018 to 2021. METHODS A training set of 639 images of tympanic membranes representing normal, OME, and AOM was used to train a neural network as well as a proprietary commercial image classifier from Google. Model diagnostic prediction performance in differentiating normal vs nonpurulent vs purulent effusion was scored based on classification accuracy. A web-based survey was developed to test human clinicians' diagnostic accuracy on a novel image set, and this was compared head to head against our model. RESULTS Our model achieved a mean prediction accuracy of 80.8% (95% CI, 77.0%-84.6%). The Google model achieved a prediction accuracy of 85.4%. In a validation survey of 39 clinicians analyzing a sample of 22 endoscopic ear images, the average diagnostic accuracy was 65.0%. On the same data set, our model achieved an accuracy of 95.5%. CONCLUSION Our model outperformed certain groups of human clinicians in assessing images of tympanic membranes for effusions in children. Machine learning models with lower diagnostic error rates may reduce misdiagnosis, potentially leading to fewer missed diagnoses, unnecessary antibiotic prescriptions, and surgical procedures.
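The accuracies quoted above carry 95% confidence intervals for binomial proportions. As a sketch, here is one standard way to compute such an interval (the Wilson score interval; the paper's exact CI method is not stated, and the counts below are illustrative, not the study's):

```python
# Sketch: 95% Wilson score interval for a classification accuracy.
# The method choice and the 21-of-22 count are illustrative assumptions,
# not taken from the study.
from math import sqrt

def wilson_ci(correct, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 for 95%)."""
    p = correct / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 21 of 22 images classified correctly (about 95.5% accuracy).
lo, hi = wilson_ci(21, 22)
print(f"accuracy 95.5%, 95% CI {lo:.1%}-{hi:.1%}")
```

Note how wide the interval is at n = 22: small validation sets, like the 22-image survey sample, leave substantial uncertainty around any point estimate of accuracy.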
Affiliation(s)
- Matthew G. Crowson: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
- David W. Bates: Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, MA; Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, MA
- Krish Suresh: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
- Michael S. Cohen: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
- Christopher J. Hartnick: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
21
Habib AR, Crossland G, Patel H, Wong E, Kong K, Gunasekera H, Richards B, Caffery L, Perry C, Sacks R, Kumar A, Singh N. An Artificial Intelligence Computer-vision Algorithm to Triage Otoscopic Images From Australian Aboriginal and Torres Strait Islander Children. Otol Neurotol 2022; 43:481-488. [PMID: 35239622 DOI: 10.1097/mao.0000000000003484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE To develop an artificial intelligence image classification algorithm to triage otoscopic images from rural and remote Australian Aboriginal and Torres Strait Islander children. STUDY DESIGN Retrospective observational study. SETTING Tertiary referral center. PATIENTS Rural and remote Aboriginal and Torres Strait Islander children who underwent tele-otology ear health screening in the Northern Territory, Australia between 2010 and 2018. INTERVENTIONS Otoscopic images were labeled by otolaryngologists to classify the ground truth. Deep and transfer learning methods were used to develop an image classification algorithm. MAIN OUTCOME MEASURES Accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve (AUC) of the resultant algorithm compared with the ground truth. RESULTS Six thousand five hundred twenty-seven images were used (5927 images for training and 600 for testing). The algorithm achieved an accuracy of 99.3% for acute otitis media, 96.3% for chronic otitis media, 77.8% for otitis media with effusion (OME), and 98.2% to classify wax/obstructed canal. To differentiate between multiple diagnoses, the algorithm achieved 74.4 to 92.8% accuracy and an AUC of 0.963 to 0.997. The most common incorrect classification pattern was OME misclassified as normal tympanic membranes. CONCLUSIONS The paucity of access to tertiary otolaryngology care for rural and remote Aboriginal and Torres Strait Islander communities may contribute to an under-identification of ear disease. Computer vision image classification algorithms can accurately classify ear disease from otoscopic images of Indigenous Australian children. In the future, a validated algorithm may integrate with existing telemedicine initiatives to support effective triage and facilitate early treatment and referral.
Affiliation(s)
- Al-Rahim Habib: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia; Department of Otolaryngology-Head and Neck Surgery, Princess Alexandra Hospital, Brisbane, Queensland, Australia; Department of Otolaryngology-Head and Neck Surgery, Westmead Hospital, Sydney, New South Wales, Australia
- Graeme Crossland: Department of Otolaryngology-Head and Neck Surgery, Royal Darwin Hospital, Darwin, Northern Territory, Australia
- Hemi Patel: Department of Otolaryngology-Head and Neck Surgery, Royal Darwin Hospital, Darwin, Northern Territory, Australia
- Eugene Wong: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia; Department of Otolaryngology-Head and Neck Surgery, Westmead Hospital, Sydney, New South Wales, Australia
- Kelvin Kong: School of Medicine and Public Health, University of Newcastle, Newcastle, New South Wales, Australia; Department of Linguistics, Faculty of Medicine, Macquarie University, Sydney, New South Wales, Australia; School of Population Health, Faculty of Medicine, University of New South Wales, Sydney, Australia
- Hasantha Gunasekera: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia; The Children's Hospital at Westmead, Sydney, New South Wales, Australia
- Brent Richards: Division of Medical Services, Gold Coast University Hospital, Gold Coast, Queensland, Australia; Griffith Health, Griffith University, Queensland, Australia
- Liam Caffery: Centre for Online Health, University of Queensland, Australia
- Chris Perry: Centre for Online Health, University of Queensland, Australia
- Raymond Sacks: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Ashnil Kumar: School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Camperdown, New South Wales, Australia
- Narinder Singh: Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia; Department of Otolaryngology-Head and Neck Surgery, Westmead Hospital, Sydney, New South Wales, Australia
22
Habib AR, Kajbafzadeh M, Hasan Z, Wong E, Gunasekera H, Perry C, Sacks R, Kumar A, Singh N. Artificial intelligence to classify ear disease from otoscopy: A systematic review and meta-analysis. Clin Otolaryngol 2022; 47:401-413. [PMID: 35253378 PMCID: PMC9310803 DOI: 10.1111/coa.13925] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 01/08/2022] [Accepted: 02/27/2022] [Indexed: 11/29/2022]
Abstract
Objectives To summarise the accuracy of artificial intelligence (AI) computer vision algorithms to classify ear disease from otoscopy. Design Systematic review and meta-analysis. Methods Using the PRISMA guidelines, nine online databases were searched for articles that used AI computer vision algorithms developed from various methods (convolutional neural networks, artificial neural networks, support vector machines, decision trees and k-nearest neighbours) to classify otoscopic images. Diagnostic classes of interest: normal tympanic membrane, acute otitis media (AOM), otitis media with effusion (OME), chronic otitis media (COM) with or without perforation, cholesteatoma and canal obstruction. Main outcome measures Accuracy to correctly classify otoscopic images compared to otolaryngologists (ground truth). The Quality Assessment of Diagnostic Accuracy Studies Version 2 tool was used to assess the quality of methodology and risk of bias. Results Thirty-nine articles were included. Algorithms achieved 90.7% (95% CI: 90.1-91.3%) accuracy to differentiate between normal and abnormal otoscopy images in 14 studies. The most common multiclassification algorithm (3 or more diagnostic classes) achieved 97.6% (95% CI: 97.3-97.9%) accuracy to differentiate between normal, AOM and OME in three studies. AI algorithms outperformed human assessors in classifying otoscopy images, achieving 93.4% (95% CI: 90.5-96.4%) versus 73.2% (95% CI: 67.9-78.5%) accuracy in three studies. Convolutional neural networks achieved the highest accuracy compared to other classification methods. Conclusion AI can classify ear disease from otoscopy. A concerted effort is required to establish a comprehensive and reliable otoscopy database for algorithm training. An AI-supported otoscopy system may assist health care workers, trainees and primary care practitioners with less otology experience in identifying ear disease.
Affiliation(s)
- Al-Rahim Habib: Faculty of Medicine and Health, University of Sydney, New South Wales, Australia; Department of Otolaryngology-Head and Neck Surgery, Princess Alexandra Hospital, Queensland, Australia; Department of Otolaryngology-Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
- Majid Kajbafzadeh: Faculty of Medicine and Health, University of Sydney, New South Wales, Australia
- Zubair Hasan: Department of Otolaryngology-Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
- Eugene Wong: Department of Otolaryngology-Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
- Hasantha Gunasekera: Faculty of Medicine and Health, University of Sydney, New South Wales, Australia; The Children's Hospital at Westmead, New South Wales, Australia
- Chris Perry: Department of Otolaryngology-Head and Neck Surgery, Princess Alexandra Hospital, Queensland, Australia; University of Queensland Medical School, Queensland, Australia
- Raymond Sacks: Faculty of Medicine and Health, University of Sydney, New South Wales, Australia
- Ashnil Kumar: School of Biomedical Engineering, Faculty of Engineering, University of Sydney, New South Wales, Australia
- Narinder Singh: Faculty of Medicine and Health, University of Sydney, New South Wales, Australia; Department of Otolaryngology-Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
Collapse
|
23
|
Chawdhary G, Shoman N. Emerging artificial intelligence applications in otological imaging. Curr Opin Otolaryngol Head Neck Surg 2021; 29:357-364. [PMID: 34459798 DOI: 10.1097/moo.0000000000000754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW To highlight the recent literature on artificial intelligence (AI) pertaining to otological imaging and to discuss future directions, obstacles and opportunities. RECENT FINDINGS The main themes in the recent literature centre around automated otoscopic image diagnosis and automated image segmentation for application in virtual reality surgical simulation and planning. Other applications that have been studied include identification of tinnitus MRI biomarkers, facial palsy analysis, intraoperative augmented reality systems, vertigo diagnosis and endolymphatic hydrops ratio calculation in Meniere's disease. Studies are presently at a preclinical, proof-of-concept stage. SUMMARY The recent literature on AI in otological imaging is promising and demonstrates the future potential of this technology in automating certain imaging tasks in a healthcare environment of ever-increasing demand and workload. Some studies have shown equivalence or superiority of the algorithm over physicians, albeit in narrowly defined realms. Future challenges in developing this technology include the compilation of large, high-quality annotated datasets, fostering strong collaborations between the health and technology sectors, testing the technology within real-world clinical pathways and bolstering trust among patients and physicians in this new method of delivering healthcare.
Collapse
Affiliation(s)
- Gaurav Chawdhary
- ENT Department, Royal Hallamshire Hospital, Broomhall, Sheffield, UK
| | - Nael Shoman
- ENT Department, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia, Canada
| |
Collapse
|
24
|
Byun H, Yu S, Oh J, Bae J, Yoon MS, Lee SH, Chung JH, Kim TH. An Assistive Role of a Machine Learning Network in Diagnosis of Middle Ear Diseases. J Clin Med 2021; 10:jcm10153198. [PMID: 34361982 PMCID: PMC8347824 DOI: 10.3390/jcm10153198] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 07/16/2021] [Accepted: 07/19/2021] [Indexed: 02/07/2023] Open
Abstract
The present study aimed to develop a machine learning network to diagnose middle ear diseases from tympanic membrane images and to identify its assistive role in the diagnostic process. The medical records of subjects who underwent ear endoscopy tests were reviewed. From these records, 2272 diagnostic tympanic membrane images were labeled as normal, otitis media with effusion (OME), chronic otitis media (COM), or cholesteatoma and were used for training. We developed the “ResNet18 + Shuffle” network and validated the model performance. Seventy-one representative cases were selected to test the final accuracy of the network and resident physicians. We asked 10 resident physicians to make diagnoses from tympanic membrane images with and without the help of the machine learning network, and the change in the diagnostic performance of resident physicians with the aid of the answers from the machine learning network was assessed. The devised network achieved a peak accuracy of 97.18%. A five-fold validation showed that the network successfully diagnosed ear diseases with an accuracy greater than 93%. All resident physicians were able to diagnose middle ear diseases more accurately with the help of the machine learning network; the increase in diagnostic accuracy ranged from 1.4% to 18.4%. The machine learning network successfully classified middle ear diseases and was assistive to clinicians in the interpretation of tympanic membrane images.
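The five-fold validation described above can be sketched as a plain index split: shuffle the sample indices once, partition them into five folds, and hold each fold out in turn. This is an illustrative scheme, not the authors' code (their 2272-image dataset and training loop are stand-ins here):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle n sample indices and yield (train, val) index lists
    for each of the k cross-validation folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # k near-equal partitions
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# With n = 2272 images, each fold holds out roughly 454-455 images
for fold, (train, val) in enumerate(kfold_indices(2272, k=5)):
    print(f"fold {fold}: train={len(train)} val={len(val)}")
```

In practice one would also stratify the split so each fold preserves the class balance (normal, OME, COM, cholesteatoma); the paper does not specify its exact splitting procedure.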
Collapse
Affiliation(s)
- Hayoung Byun
- Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea; (H.B.); (S.H.L.)
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; (S.Y.); (J.O.); (J.B.); (M.S.Y.)
| | - Sangjoon Yu
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; (S.Y.); (J.O.); (J.B.); (M.S.Y.)
- Department of Computer Science, Hanyang University, Seoul 04763, Korea
| | - Jaehoon Oh
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; (S.Y.); (J.O.); (J.B.); (M.S.Y.)
- Department of Emergency Medicine, College of Medicine, Hanyang University, Seoul 04763, Korea
| | - Junwon Bae
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; (S.Y.); (J.O.); (J.B.); (M.S.Y.)
- Department of Emergency Medicine, College of Medicine, Hanyang University, Seoul 04763, Korea
| | - Myeong Seong Yoon
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; (S.Y.); (J.O.); (J.B.); (M.S.Y.)
- Department of Emergency Medicine, College of Medicine, Hanyang University, Seoul 04763, Korea
| | - Seung Hwan Lee
- Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea; (H.B.); (S.H.L.)
| | - Jae Ho Chung
- Department of Otolaryngology & Head and Neck Surgery, College of Medicine, Hanyang University, Seoul 04763, Korea; (H.B.); (S.H.L.)
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; (S.Y.); (J.O.); (J.B.); (M.S.Y.)
- Department of HY-KIST Bio-Convergence, College of Medicine, Hanyang University, Seoul 04763, Korea
- Correspondence: (J.H.C.); (T.H.K.)
| | - Tae Hyun Kim
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04763, Korea; (S.Y.); (J.O.); (J.B.); (M.S.Y.)
- Department of Computer Science, Hanyang University, Seoul 04763, Korea
- Correspondence: (J.H.C.); (T.H.K.)
| |
Collapse
|
25
|
Kerbl R. [Pediatrics up to date-Brief notes on research]. Monatsschr Kinderheilkd 2021; 169:681-683. [PMID: 34305179 PMCID: PMC8287283 DOI: 10.1007/s00112-021-01240-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/14/2021] [Indexed: 12/02/2022]
Affiliation(s)
- Reinhold Kerbl
- Abteilung für Kinder und Jugendliche, LKH Hochsteiermark/Leoben, Vordernbergerstraße 42, 8700 Leoben, Österreich
| |
Collapse
|
26
|
García-Domínguez A, Galván-Tejada CE, Brena RF, Aguileta AA, Galván-Tejada JI, Gamboa-Rosales H, Celaya-Padilla JM, Luna-García H. Children's Activity Classification for Domestic Risk Scenarios Using Environmental Sound and a Bayesian Network. Healthcare (Basel) 2021; 9:healthcare9070884. [PMID: 34356262 PMCID: PMC8307924 DOI: 10.3390/healthcare9070884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 06/26/2021] [Accepted: 07/06/2021] [Indexed: 11/16/2022] Open
Abstract
Children’s healthcare, and especially the prevention of domestic accidents, is a pressing issue that has been defined as a global health problem. Children’s activity classification generally uses sensors embedded in children’s clothing, which can produce erroneous measurements owing to possible damage or mishandling. Having a non-invasive data source for a children’s activity classification model lends reliability to the monitoring system in which it is applied. This work proposes the use of environmental sound as a data source for generating children’s activity classification models, implementing feature selection methods and classification techniques based on Bayesian networks, focused on recognising activities that can potentially trigger domestic accidents, applicable in child monitoring systems. Two feature selection techniques were used: the Akaike criterion and genetic algorithms. Models were generated using three classifiers: naive Bayes, semi-naive Bayes and tree-augmented naive Bayes. Most of the generated models, combining these feature selection methods and classifiers, achieve accuracy greater than 97%, demonstrating the effectiveness of the proposed approach for recognising activities that can potentially trigger domestic accidents.
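Naive Bayes, the simplest of the three classifiers compared above, assumes features are conditionally independent given the class and multiplies per-feature likelihoods with the class prior. A minimal categorical version with add-one smoothing, using toy made-up features rather than the paper's audio pipeline:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Categorical naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = Counter(y)            # class counts
        self.n = len(y)
        self.vocab = [set() for _ in X[0]]  # distinct values per feature
        # counts[class][feature_index][value] = occurrences
        self.counts = {c: defaultdict(Counter) for c in self.classes}
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[yi][j][v] += 1
                self.vocab[j].add(v)
        return self

    def predict(self, x):
        def log_posterior(c):
            lp = math.log(self.priors[c] / self.n)
            for j, v in enumerate(x):
                num = self.counts[c][j][v] + 1        # add-one smoothing
                den = self.priors[c] + len(self.vocab[j])
                lp += math.log(num / den)
            return lp
        return max(self.classes, key=log_posterior)

# Toy categorical "sound event" features: (loudness band, duration band)
X = [("loud", "short"), ("loud", "long"), ("quiet", "long"), ("quiet", "short")]
y = ["risk", "risk", "safe", "safe"]
model = NaiveBayes().fit(X, y)
```

Semi-naive and tree-augmented variants relax the independence assumption by allowing selected dependencies between features, which is where the paper's Bayesian-network framing comes in.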
Collapse
Affiliation(s)
- Antonio García-Domínguez
- Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro 98000, Zacatecas, Mexico; (A.G.-D.); (J.I.G.-T.); (H.G.-R.); (J.M.C.-P.); (H.L.-G.)
| | - Carlos E. Galván-Tejada
- Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro 98000, Zacatecas, Mexico; (A.G.-D.); (J.I.G.-T.); (H.G.-R.); (J.M.C.-P.); (H.L.-G.)
- Correspondence:
| | - Ramón F. Brena
- Tecnológico de Monterrey, School of Engineering and Sciences, Av. Eugenio Garza Sada 2501 Sur, Monterrey 64849, Nuevo León, Mexico;
| | - Antonio A. Aguileta
- Facultad de Matemáticas, Universidad Autónoma de Yucatán, Anillo Periférico Norte, Tablaje Cat. 13615, Colonia Chuburná Hidalgo Inn, Mérida 97110, Yucatan, Mexico;
| | - Jorge I. Galván-Tejada
- Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro 98000, Zacatecas, Mexico; (A.G.-D.); (J.I.G.-T.); (H.G.-R.); (J.M.C.-P.); (H.L.-G.)
| | - Hamurabi Gamboa-Rosales
- Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro 98000, Zacatecas, Mexico; (A.G.-D.); (J.I.G.-T.); (H.G.-R.); (J.M.C.-P.); (H.L.-G.)
| | - José M. Celaya-Padilla
- Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro 98000, Zacatecas, Mexico; (A.G.-D.); (J.I.G.-T.); (H.G.-R.); (J.M.C.-P.); (H.L.-G.)
| | - Huizilopoztli Luna-García
- Unidad Académica de Ingeniería Eléctrica, Universidad Autónoma de Zacatecas, Jardín Juárez 147, Centro 98000, Zacatecas, Mexico; (A.G.-D.); (J.I.G.-T.); (H.G.-R.); (J.M.C.-P.); (H.L.-G.)
| |
Collapse
|
27
|
Won J, Monroy GL, Dsouza RI, Spillman DR, McJunkin J, Porter RG, Shi J, Aksamitiene E, Sherwood M, Stiger L, Boppart SA. Handheld Briefcase Optical Coherence Tomography with Real-Time Machine Learning Classifier for Middle Ear Infections. BIOSENSORS-BASEL 2021; 11:bios11050143. [PMID: 34063695 PMCID: PMC8147830 DOI: 10.3390/bios11050143] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 04/29/2021] [Accepted: 04/30/2021] [Indexed: 12/13/2022]
Abstract
Middle ear infection is a prevalent inflammatory disease, most common in the pediatric population, and its financial burden remains substantial. Current diagnostic methods are highly subjective, relying on visual cues gathered by an otoscope. To address this shortcoming, optical coherence tomography (OCT) has been integrated into a handheld imaging probe. This system can non-invasively and quantitatively assess middle ear effusions and identify the presence of bacterial biofilms in the middle ear cavity during ear infections. Furthermore, the complete OCT system is housed in a standard briefcase to maximize its portability as a diagnostic device. Nonetheless, interpreting OCT images of the middle ear often requires expertise in both OCT and middle ear infections, making it difficult for an untrained user to operate the system as an accurate stand-alone diagnostic tool in clinical settings. Here, we present a briefcase OCT system implemented with a real-time machine learning platform for middle ear infections. A random forest-based classifier can categorize images based on the presence of middle ear effusions and biofilms. This study demonstrates that our briefcase OCT system coupled with machine learning can provide user-invariant classification results of middle ear conditions, which may greatly improve the utility of this technology for the diagnosis and management of middle ear infections.
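The random forest idea behind the classifier above — train many small trees on bootstrap resamples and classify by majority vote — can be sketched with one-split "stumps" in place of full trees. The feature names and data below are hypothetical stand-ins, not the paper's OCT features:

```python
import random
from collections import Counter

def train_stump(X, y):
    """Fit a one-split decision tree: choose the (feature, threshold)
    pair whose two sides are best predicted by their majority labels."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            left = [yi for xi, yi in zip(X, y) if xi[j] <= t]
            right = [yi for xi, yi in zip(X, y) if xi[j] > t]
            if not left or not right:
                continue
            l_lab = Counter(left).most_common(1)[0][0]
            r_lab = Counter(right).most_common(1)[0][0]
            acc = (sum(yi == l_lab for yi in left)
                   + sum(yi == r_lab for yi in right)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, j, t, l_lab, r_lab)
    if best is None:  # degenerate bootstrap sample: constant predictor
        maj = Counter(y).most_common(1)[0][0]
        return lambda x: maj
    _, j, t, l_lab, r_lab = best
    return lambda x: l_lab if x[j] <= t else r_lab

def random_forest(X, y, n_trees=25, seed=0):
    """Bootstrap-aggregate stumps; classify by majority vote."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]  # bootstrap resample
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda x: Counter(s(x) for s in stumps).most_common(1)[0][0]

# Hypothetical per-image features: (effusion signal, biofilm signal)
X = [(0.1, 0.0), (0.2, 0.1), (0.8, 0.7), (0.9, 0.9), (0.7, 0.8), (0.15, 0.05)]
y = ["normal", "normal", "infected", "infected", "infected", "normal"]
predict = random_forest(X, y)
```

A production random forest grows deeper trees and randomly subsamples features at each split; the stump version keeps only the bagging-plus-voting core that makes the ensemble's output stable across users, which is the property the study emphasizes.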
Collapse
Affiliation(s)
- Jungeun Won
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA;
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (G.L.M.); (R.I.D.); (D.R.S.J.); (J.S.); (E.A.)
| | - Guillermo L. Monroy
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (G.L.M.); (R.I.D.); (D.R.S.J.); (J.S.); (E.A.)
| | - Roshan I. Dsouza
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (G.L.M.); (R.I.D.); (D.R.S.J.); (J.S.); (E.A.)
| | - Darold R. Spillman
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (G.L.M.); (R.I.D.); (D.R.S.J.); (J.S.); (E.A.)
| | - Jonathan McJunkin
- Department of Otolaryngology, Carle Foundation Hospital, Champaign, IL 61822, USA; (J.M.); (R.G.P.)
- Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
| | - Ryan G. Porter
- Department of Otolaryngology, Carle Foundation Hospital, Champaign, IL 61822, USA; (J.M.); (R.G.P.)
- Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
| | - Jindou Shi
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (G.L.M.); (R.I.D.); (D.R.S.J.); (J.S.); (E.A.)
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
| | - Edita Aksamitiene
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (G.L.M.); (R.I.D.); (D.R.S.J.); (J.S.); (E.A.)
| | - MaryEllen Sherwood
- Stephens Family Clinical Research Institute, Carle Foundation Hospital, Urbana, IL 61801, USA; (M.S.); (L.S.)
| | - Lindsay Stiger
- Stephens Family Clinical Research Institute, Carle Foundation Hospital, Urbana, IL 61801, USA; (M.S.); (L.S.)
| | - Stephen A. Boppart
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA;
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (G.L.M.); (R.I.D.); (D.R.S.J.); (J.S.); (E.A.)
- Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Correspondence:
| |
Collapse
|
28
|
Pichichero ME. Can Machine Learning and AI Replace Otoscopy for Diagnosis of Otitis Media? Pediatrics 2021; 147:peds.2020-049584. [PMID: 33731368 DOI: 10.1542/peds.2020-049584] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/19/2021] [Indexed: 11/24/2022] Open
Affiliation(s)
- Michael E Pichichero
- Research Institute at Rochester General Hospital, Center for Infectious Diseases and Immunology, Rochester, New York; and Center for Immunology and Infectious Diseases, University of California, Davis, Davis, California
| |
Collapse
|