1
Machado P, Tahmasebi A, Fallon S, Liu JB, Dogan BE, Needleman L, Lazar M, Willis AI, Brill K, Nazarian S, Berger A, Forsberg F. Characterizing Sentinel Lymph Node Status in Breast Cancer Patients Using a Deep-Learning Model Compared With Radiologists' Analysis of Grayscale Ultrasound and Lymphosonography. Ultrasound Q 2024; 40:e00683. [PMID: 38958999] [DOI: 10.1097/ruq.0000000000000683]
Abstract
The objective of the study was to use a deep-learning model to differentiate between benign and malignant sentinel lymph nodes (SLNs) in patients with breast cancer, compared with radiologists' assessments. Seventy-nine women with breast cancer were enrolled and underwent lymphosonography and contrast-enhanced ultrasound (CEUS) examination after subcutaneous injection of ultrasound contrast agent around their tumor to identify SLNs. Google AutoML was used to develop an image classification model. Grayscale and CEUS images acquired during the ultrasound examination were uploaded with a data distribution of 80% for training and 20% for testing. The performance metric used was the area under the precision/recall curve (AuPRC). In addition, 3 radiologists assessed SLNs as normal or abnormal based on an established clinical classification. Two hundred seventeen SLNs were divided into 2 datasets for model development; model 1 included all SLNs and model 2 had an equal number of benign and malignant SLNs. Validation results showed an AuPRC of 0.84 (grayscale) and 0.91 (CEUS) for model 1, and 0.91 (grayscale) and 0.87 (CEUS) for model 2. The comparison between artificial intelligence (AI) and readers showed statistically significant differences for all models and ultrasound modes: model 1 grayscale AI versus readers, P = 0.047; model 1 CEUS AI versus readers, P < 0.001; model 2 grayscale AI versus readers, P = 0.032; and model 2 CEUS AI versus readers, P = 0.041. Overall interreader agreement showed κ values of 0.20 for grayscale and 0.17 for CEUS. In conclusion, AutoML showed improved diagnostic performance on the balanced dataset, whereas radiologist performance was not influenced by the dataset's distribution.
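The interreader agreement reported here (κ = 0.20 for grayscale, 0.17 for CEUS) is Cohen's kappa, which discounts the agreement two readers would reach by chance. A minimal illustrative sketch of the calculation (the `cohens_kappa` helper and the toy ratings below are invented, not the study's data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    cats = set(r1) | set(r2)
    # Observed agreement: fraction of items the raters label identically.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of each rater's marginal frequencies.
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Two readers labelling 4 lymph nodes as normal (0) or abnormal (1).
print(cohens_kappa([0, 0, 1, 1], [0, 1, 1, 1]))  # 0.5
```

Values near 0, as in the study, indicate agreement hardly better than chance.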
Affiliation(s)
- Priscilla Machado
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
- Aylin Tahmasebi
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
- Samuel Fallon
- Sidney Kimmel Medical College, Thomas Jefferson University, Philadelphia, PA
- Ji-Bin Liu
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
- Basak E Dogan
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX
- Melissa Lazar
- Department of Surgery, Thomas Jefferson University, Philadelphia, PA
- Alliric I Willis
- Department of Surgery, Thomas Jefferson University, Philadelphia, PA
- Kristin Brill
- Department of Surgery, Thomas Jefferson University, Philadelphia, PA
- Susanna Nazarian
- Department of Surgery, Thomas Jefferson University, Philadelphia, PA
- Adam Berger
- Chief, Department of Melanoma and Soft Tissue Surgical Oncology, Rutgers University, New Brunswick, NJ
- Flemming Forsberg
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA
2
Dubois C, Eigen D, Simon F, Couloigner V, Gormish M, Chalumeau M, Schmoll L, Cohen JF. Development and validation of a smartphone-based deep-learning-enabled system to detect middle-ear conditions in otoscopic images. NPJ Digit Med 2024; 7:162. [PMID: 38902477] [PMCID: PMC11189910] [DOI: 10.1038/s41746-024-01159-9]
Abstract
Middle-ear conditions are common causes of primary care visits, hearing impairment, and inappropriate antibiotic use. Deep learning (DL) may assist clinicians in interpreting otoscopic images. This study included patients over 5 years old from an ambulatory ENT practice in Strasbourg, France, between 2013 and 2020. Digital otoscopic images were obtained using a smartphone-attached otoscope (Smart Scope, Karl Storz, Germany) and labeled by a senior ENT specialist across 11 diagnostic classes (reference standard). An Inception-v2 DL model was trained using 41,664 otoscopic images, and its diagnostic accuracy was evaluated by calculating class-specific estimates of sensitivity and specificity. The model was then incorporated into a smartphone app called i-Nside. The DL model was evaluated on a validation set of 3,962 images and a held-out test set comprising 326 images. On the validation set, all class-specific estimates of sensitivity and specificity exceeded 98%. On the test set, the DL model achieved a sensitivity of 99.0% (95% confidence interval: 94.5-100) and a specificity of 95.2% (91.5-97.6) for the binary classification of normal vs. abnormal images; wax plugs were detected with a sensitivity of 100% (94.6-100) and specificity of 97.7% (95.0-99.1); other class-specific estimates of sensitivity and specificity ranged from 33.3% to 92.3% and 96.0% to 100%, respectively. We present an end-to-end DL-enabled system able to achieve expert-level diagnostic accuracy for identifying normal tympanic aspects and wax plugs within digital otoscopic images. However, the system's performance varied for other middle-ear conditions. Further prospective validation is necessary before wider clinical deployment.
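The class-specific sensitivities and specificities above are binomial proportions with 95% confidence intervals. The abstract does not state which interval method was used; as one common choice, a Wilson score interval can be computed from raw counts like this (the counts below are invented for illustration):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half) / denom, (center + half) / denom

# e.g. a test set where 99 of 100 abnormal ears are flagged: sensitivity 0.99
lo, hi = wilson_ci(99, 100)
print(f"sensitivity 0.99, 95% CI {lo:.3f}-{hi:.3f}")
```

For 99 successes out of 100 this gives roughly 0.946 to 0.998, illustrating why small test sets yield wide intervals around extreme proportions.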
Affiliation(s)
- François Simon
- Department of Pediatric Otolaryngology, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Vincent Couloigner
- Department of Pediatric Otolaryngology, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Martin Chalumeau
- Inserm UMR1153 (CRESS), Université Paris Cité, Paris, France
- Department of General Pediatrics and Pediatric Infectious Diseases, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
- Jérémie F Cohen
- Inserm UMR1153 (CRESS), Université Paris Cité, Paris, France
- Department of General Pediatrics and Pediatric Infectious Diseases, Necker-Enfants malades Hospital, APHP, Université Paris Cité, Paris, France
3
Principi N, Esposito S. Smartphone-Based Artificial Intelligence for the Detection and Diagnosis of Pediatric Diseases: A Comprehensive Review. Bioengineering (Basel) 2024; 11:628. [PMID: 38927864] [PMCID: PMC11200698] [DOI: 10.3390/bioengineering11060628]
Abstract
In recent years, the use of smartphones and other wireless technology in medical care has developed rapidly. However, in some cases, especially for pediatric medical problems, the reliability of information accessed by mobile health technology remains debatable. The main aim of this paper is to evaluate the relevance of smartphone applications in the detection and diagnosis of the pediatric medical conditions for which the greatest number of applications have been developed: acute otitis media, otitis media with effusion, hearing impairment, obesity, amblyopia, and vision screening. In some cases, the information given by these applications has significantly improved the diagnostic ability of physicians. However, distinguishing between applications that are effective and those that may lead to mistakes can be very difficult. This highlights the importance of careful application selection before including smartphone-based artificial intelligence in everyday clinical practice.
Affiliation(s)
- Susanna Esposito
- Pediatric Clinic, Department of Medicine and Surgery, University of Parma, 43126 Parma, Italy
4
O'Neill S, Begg S, Hyett N, Spelten E. Primary Health Care Interventions for Potentially Preventable Ear, Nose, and Throat Conditions in Rural and Remote Areas: A Systematic Review. Ear Nose Throat J 2024:1455613241245198. [PMID: 38646793] [DOI: 10.1177/01455613241245198]
Abstract
Background: Primary and secondary preventive primary health care programs providing early detection and timely management of ear, nose, and throat (ENT) conditions in rural and remote regions are fundamental to preventing downstream impacts on health, social, and educational outcomes. However, the range and quality of evidence is yet to be reviewed. Objectives: The study objectives were to identify and synthesize the evidence on primary health care interventions for the detection and management of ENT conditions in rural and remote areas, and to evaluate the quality of the research and the effectiveness of the interventions. Methods: A systematic literature search of 6 databases was conducted (February 2023). The review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, and the quality of studies was appraised using the Mixed Methods Appraisal Tool (initial screening questions: Are there clear research questions? Do the collected data allow to address the research questions?). Results: Ten studies met the inclusion criteria. The results describe interventions for the detection and management of respiratory tract infections, otitis media, and ear disease in primary health care settings. No studies met the inclusion criteria for tonsillitis. Community-based programs and allied health workers were found to be effective in the detection and management of ENT conditions in rural and remote regions. Only 2 of the studies met the screening criteria for quality appraisal. Conclusions: The study findings may inform future programs and policy development to address the detection and management of ENT conditions in rural and remote primary care settings, and support the need for further research on innovative models of care targeting potentially preventable hospitalizations through primary and secondary prevention.
Affiliation(s)
- Susan O'Neill
- Department of Community and Allied Health, La Trobe Rural Health School, La Trobe University, Bendigo, VIC, Australia
- Stephen Begg
- Department of Community and Allied Health, La Trobe Rural Health School, La Trobe University, Bendigo, VIC, Australia
- Nerida Hyett
- Murray Primary Health Network, Bendigo, VIC, Australia
- Evelien Spelten
- Department of Community and Allied Health, La Trobe Rural Health School, La Trobe University, Bendigo, VIC, Australia
5
Cheong RCT, Jawad S, Adams A, Campion T, Lim ZH, Papachristou N, Unadkat S, Randhawa P, Joseph J, Andrews P, Taylor P, Kunz H. Enhancing paranasal sinus disease detection with AutoML: efficient AI development and evaluation via magnetic resonance imaging. Eur Arch Otorhinolaryngol 2024; 281:2153-2158. [PMID: 38197934] [PMCID: PMC10942883] [DOI: 10.1007/s00405-023-08424-9]
Abstract
PURPOSE Artificial intelligence (AI) in the form of automated machine learning (AutoML) offers a potential breakthrough in overcoming the barrier to entry for non-technically trained physicians. A clinical decision support system (CDSS) for screening purposes using AutoML could ease the clinical burden in the radiological workflow for paranasal sinus diseases. METHODS The main aim of this work was to assess, using automated evaluation of model performance, the feasibility of training a Vertex AI image classification model on the Google Cloud AutoML platform to automatically classify the presence or absence of sinonasal disease. The dataset, drawn from the Open Access Series of Imaging Studies (OASIS-3) MRI head repository, was consensus-labelled by three specialised head and neck consultant radiologists; a total of 1313 unique non-TSE T2w MRI head sessions were used. RESULTS The best-performing image classification model achieved a precision of 0.928, demonstrating the feasibility and high performance of the Vertex AI image classification model for automatically detecting the presence or absence of sinonasal disease on MRI. CONCLUSION AutoML allows for potential deployment to optimise diagnostic radiology workflows and lays the foundation for further AI research in radiology and otolaryngology. The use of AutoML could also serve as a formal requirement for a feasibility study.
Affiliation(s)
- Ryan Chin Taw Cheong
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals NHS, London, UK
- Susan Jawad
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals NHS, London, UK
- Nikolaos Papachristou
- Medical Physics and Digital Innovation Laboratory, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Samit Unadkat
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals NHS, London, UK
- Premjit Randhawa
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals NHS, London, UK
- Jonathan Joseph
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals NHS, London, UK
- Peter Andrews
- Royal National ENT and Eastman Dental Hospitals, University College London Hospitals NHS, London, UK
- Holger Kunz
- University College London, London, UK
- School of Public Health, Imperial College London, London, UK
6
Afify HM, Mohammed KK, Hassanien AE. Insight into Automatic Image Diagnosis of Ear Conditions Based on Optimized Deep Learning Approach. Ann Biomed Eng 2024; 52:865-876. [PMID: 38097895] [PMCID: PMC10940396] [DOI: 10.1007/s10439-023-03422-8]
Abstract
Examining otoscopic images for ear diseases is necessary when clinical diagnosis based on otolaryngologists' expertise is limited, and improved diagnostic approaches based on otoscopic image processing are urgently needed. Recently, convolutional neural networks (CNNs) have been applied to medical diagnosis, achieving higher accuracy than standard machine learning algorithms and specialists' expertise. The proposed approach therefore combines Bayesian hyperparameter optimization with a CNN architecture for automatic diagnosis on an ear imagery database comprising four classes: normal, myringosclerosis, earwax plug, and chronic otitis media (COM). The approach was trained using 616 otoscopic images, and its performance was assessed using 264 testing images. Performance was compared in terms of accuracy, sensitivity, specificity, and positive predictive value (PPV), yielding a classification accuracy of 98.10%, a sensitivity of 98.11%, a specificity of 99.36%, and a PPV of 98.10%. Finally, the suggested approach demonstrates how to locate optimal CNN hyperparameters for accurate diagnosis of ear diseases while taking training time into account. The usefulness and dependability of the approach will support an automated tool for better categorization and prediction of different ear diseases.
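The four reported metrics all derive from the test-set confusion matrix, with sensitivity, specificity, and PPV computed per class one-vs-rest in the multi-class setting. A small sketch (the class names echo the paper's four categories, but the label and prediction lists are invented):

```python
def class_metrics(y_true, y_pred, cls):
    """One-vs-rest sensitivity, specificity, and PPV for one class."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),  # recall for this class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # precision for this class
    }

# Toy 4-class example: normal, myringosclerosis, earwax plug, COM.
labels = ["normal", "normal", "earwax", "com", "myringo", "com"]
preds  = ["normal", "earwax", "earwax", "com", "myringo", "normal"]
accuracy = sum(t == p for t, p in zip(labels, preds)) / len(labels)
print(accuracy, class_metrics(labels, preds, "normal"))
```

Averaging the per-class values over all four classes yields the single summary figures the abstract reports.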
Affiliation(s)
- Heba M Afify
- Systems and Biomedical Engineering Department, Higher Institute of Engineering in Shorouk Academy, Al Shorouk City, Cairo, Egypt
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Kamel K Mohammed
- Center for Virus Research and Studies, Al Azhar University, Cairo, Egypt
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Aboul Ella Hassanien
- College of Business Administration, Kuwait University, Kuwait, Kuwait
- Scientific Research Group in Egypt (SRGE), Cairo, Egypt
- Faculty of Computers and Information, Cairo University, Giza, Egypt
7
Zhou Z, Pandey R, Valdez TA. Label-Free Optical Technologies for Middle-Ear Diseases. Bioengineering (Basel) 2024; 11:104. [PMID: 38391590] [PMCID: PMC10885954] [DOI: 10.3390/bioengineering11020104]
Abstract
Medical applications of optical technology have increased tremendously in recent decades. Label-free techniques have the unique advantage of investigating biological samples in vivo without introducing exogenous agents. This is especially beneficial for rapid clinical translation, as it reduces the need for toxicity studies and regulatory approval for exogenous labels. Emerging applications have utilized label-free optical technology for screening, diagnosis, and surgical guidance. Advancements in detection technology and rapid improvements in artificial intelligence have expedited the clinical implementation of some optical technologies. Among numerous biomedical application areas, middle-ear disease is a unique space where label-free technology has great potential. The middle ear has a unique anatomical location that can be accessed through a dark channel, the external auditory canal; it can be sampled through a tympanic membrane of approximately 100 microns in thickness. The tympanic membrane is the only membrane in the body that is surrounded by air on both sides, under normal conditions. Despite these favorable characteristics, current examination modalities for the middle-ear space rely on century-old technology such as white-light otoscopy. This paper reviews existing label-free imaging technologies and their current progress in visualizing middle-ear diseases. We discuss potential opportunities, barriers, and practical considerations when transitioning label-free technology to clinical applications.
Affiliation(s)
- Zeyi Zhou
- School of Medicine, Stanford University, Palo Alto, CA 94305, USA
- Rishikesh Pandey
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Tulio A Valdez
- Department of Otolaryngology, Stanford University, Palo Alto, CA 94304, USA
8
Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023; 12:6973. [PMID: 38002588] [PMCID: PMC10672270] [DOI: 10.3390/jcm12226973]
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging in its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative in augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, utilizing the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', and manually searched the reference sections of the included articles. Our search culminated in the retrieval of 121 related articles, which were subsequently subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored for both experienced specialists and aspiring residents in the domain of deep learning algorithms in imaging techniques in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
9
Wu Q, Wang X, Liang G, Luo X, Zhou M, Deng H, Zhang Y, Huang X, Yang Q. Advances in Image-Based Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery: A Systematic Review. Otolaryngol Head Neck Surg 2023; 169:1132-1142. [PMID: 37288505] [DOI: 10.1002/ohn.391]
Abstract
OBJECTIVE To update the literature and provide a systematic review of image-based artificial intelligence (AI) applications in otolaryngology, highlight its advances, and propose future challenges. DATA SOURCES Web of Science, Embase, PubMed, and Cochrane Library. REVIEW METHODS Studies written in English, published between January 2020 and December 2022. Two independent authors screened the search results, extracted data, and assessed studies. RESULTS Overall, 686 studies were identified. After screening titles and abstracts, 325 full-text studies were assessed for eligibility, and 78 studies were included in this systematic review. The studies originated from 16 countries; the most represented were China (n = 29), Korea (n = 8), and the United States and Japan (n = 7 each). The most common area was otology (n = 35), followed by rhinology (n = 20), pharyngology (n = 18), and head and neck surgery (n = 5). The most studied applications of AI in otology, rhinology, pharyngology, and head and neck surgery were chronic otitis media (n = 9), nasal polyps (n = 4), laryngeal cancer (n = 12), and head and neck squamous cell carcinoma (n = 3), respectively. The overall performance of AI in terms of accuracy, area under the curve, sensitivity, and specificity was 88.39 ± 9.78%, 91.91 ± 6.70%, 86.93 ± 11.59%, and 88.62 ± 14.03%, respectively. CONCLUSION This state-of-the-art review aimed to highlight the increasing applications of image-based AI in otorhinolaryngology-head and neck surgery. The next steps will entail multicentre collaboration to ensure data reliability, ongoing optimization of AI algorithms, and integration into real-world clinical practice. Future studies should consider 3-dimensional (3D)-based AI, such as 3D surgical AI.
Affiliation(s)
- Qingwu Wu
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xinyue Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Guixian Liang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xin Luo
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Min Zhou
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Huiyi Deng
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yana Zhang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xuekun Huang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Qintai Yang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
10
Song D, Kim T, Lee Y, Kim J. Image-Based Artificial Intelligence Technology for Diagnosing Middle Ear Diseases: A Systematic Review. J Clin Med 2023; 12:5831. [PMID: 37762772] [PMCID: PMC10531728] [DOI: 10.3390/jcm12185831]
Abstract
Otolaryngological diagnoses, such as otitis media, are traditionally performed using endoscopy, wherein diagnostic accuracy can be subjective and vary among clinicians. The integration of objective tools, like artificial intelligence (AI), could potentially improve the diagnostic process by minimizing the influence of subjective biases and variability. We systematically reviewed the AI techniques using medical imaging in otolaryngology. Relevant studies related to AI-assisted otitis media diagnosis were extracted from five databases: Google Scholar, PubMed, Medline, Embase, and IEEE Xplore, without date restrictions. Publications that did not relate to AI and otitis media diagnosis or did not utilize medical imaging were excluded. Of the 32 identified studies, 26 used tympanic membrane images for classification, achieving an average diagnostic accuracy of 86% (range: 48.7-99.16%). Another three studies employed both segmentation and classification techniques, reporting an average diagnostic accuracy of 90.8% (range: 88.06-93.9%). These findings suggest that AI technologies hold promise for improving otitis media diagnosis, offering benefits for telemedicine and primary care settings due to their high diagnostic accuracy. However, to ensure patient safety and optimal outcomes, further improvements in diagnostic performance are necessary.
Affiliation(s)
- Dahye Song
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Taewan Kim
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Yeonjoon Lee
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University, Ansan 15588, Republic of Korea
- Jaeyoung Kim
- Department of Dermatology and Skin Sciences, University of British Columbia, Vancouver, BC V6T 1Z1, Canada
- Core Research & Development Center, Korea University Ansan Hospital, Ansan 15355, Republic of Korea
11
Ma T, Wu Q, Jiang L, Zeng X, Wang Y, Yuan Y, Wang B, Zhang T. Artificial Intelligence and Machine (Deep) Learning in Otorhinolaryngology: A Bibliometric Analysis Based on VOSviewer and CiteSpace. Ear Nose Throat J 2023:1455613231185074. [PMID: 37515527] [DOI: 10.1177/01455613231185074]
Abstract
BACKGROUND Otorhinolaryngology diseases are well suited to artificial intelligence (AI)-based interpretation, and the use of AI, particularly AI based on deep learning (DL), in the treatment of human diseases is becoming increasingly popular. However, few bibliometric analyses have systematically studied this field. OBJECTIVE The objective of this study was to visualize the research hot spots and trends of AI and DL in ENT diseases through bibliometric analysis, to help researchers understand the future development of basic and clinical research. METHODS In all, 232 articles and reviews were retrieved from the Web of Science Core Collection. Using CiteSpace and VOSviewer software, countries, institutions, authors, references, and keywords in the field were visualized and examined. RESULTS The majority of these papers came from 44 nations and 498 institutions, with China and the United States leading the way. Common ENT diseases addressed with AI include otosclerosis, otitis media, nasal polyps, and sinusitis. In the early years, research focused on the analysis of hearing and articulation disorders; in recent years, it has focused mainly on the diagnosis, localization, and grading of diseases. CONCLUSIONS The analysis shows the periodic hot spots and development direction of AI and DL applications in ENT diseases over time. The diagnosis and prognosis of otolaryngology diseases and the analysis of otolaryngology endoscopic images have been the focus of current research and the likely direction of future development.
Collapse
Affiliation(s)
- Tianyu Ma: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Qilong Wu: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Li Jiang: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaoyun Zeng: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yuyao Wang: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yi Yuan: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Bingxuan Wang: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Tianhong Zhang: Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
12
Ding X, Huang Y, Tian X, Zhao Y, Feng G, Gao Z. Diagnosis, Treatment, and Management of Otitis Media with Artificial Intelligence. Diagnostics (Basel) 2023; 13:2309. [PMID: 37443702 DOI: 10.3390/diagnostics13132309] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 06/04/2023] [Accepted: 06/14/2023] [Indexed: 07/15/2023] Open
Abstract
Otitis media (OM), a common infectious disease, has a low rate of early diagnosis, which significantly increases the difficulty of treatment and the likelihood of serious complications developing, including hearing loss, speech impairment, and even intracranial infection. Several areas of healthcare have shown great promise in the application of artificial intelligence (AI) systems, such as the accurate detection of diseases, the automated interpretation of images, and the prediction of patient outcomes. Several articles have reported that machine learning (ML) algorithms such as ResNet, InceptionV3, and U-Net have been successfully applied to the diagnosis of OM. The use of these techniques for OM is still in its infancy, but their potential is enormous. We present in this review important concepts related to ML and AI, describe how these technologies are currently being applied to diagnosing, treating, and managing OM, and discuss the challenges associated with developing AI-assisted OM technologies in the future.
Affiliation(s)
- Xin Ding: Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Yu Huang: Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Xu Tian: Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Yang Zhao: Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Guodong Feng: Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
- Zhiqiang Gao: Department of Otorhinolaryngology Head and Neck Surgery, Peking Union Medical College Hospital, No. 1, Shuaifuyuan, Dongcheng District, Beijing 100010, China
13
Lee P, Tahmasebi A, Dave JK, Parekh MR, Kumaran M, Wang S, Eisenbrey JR, Donuru A. Comparison of Gray-scale Inversion to Improve Detection of Pulmonary Nodules on Chest X-rays Between Radiologists and a Deep Convolutional Neural Network. Curr Probl Diagn Radiol 2023; 52:180-186. [PMID: 36470698 DOI: 10.1067/j.cpradiol.2022.11.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2022] [Revised: 10/08/2022] [Accepted: 11/14/2022] [Indexed: 11/19/2022]
Abstract
Detection of pulmonary nodules on chest x-rays is an important task for radiologists. Previous studies have shown improved detection rates using gray-scale inversion. The purpose of our study was to compare the efficacy of gray-scale inversion in improving the detection of pulmonary nodules on chest x-rays for radiologists and machine learning (ML) models. We created a mixed dataset consisting of 60 two-view (posteroanterior [PA] and lateral) chest x-rays with computed tomography-confirmed nodule(s) and 62 normal chest x-rays. Twenty percent of the cases were separated into a testing dataset (24 total images). Data augmentation through mirroring and transfer learning was used for the remaining cases (784 total images) for supervised training of 4 ML models (grayscale PA, grayscale lateral, gray-scale inversion PA, and gray-scale inversion lateral) on Google's cloud-based AutoML platform. Three cardiothoracic radiologists analyzed the complete 2-view dataset (n=120) and, for comparison to the ML models, the single-view testing subsets (12 images each). Gray-scale inversion (area under the curve [AUC] 0.80, 95% confidence interval [CI] 0.75-0.85) did not improve diagnostic performance for radiologists compared to grayscale (AUC 0.84, 95% CI 0.79-0.88), nor did it improve diagnostic performance for the ML models. In the limited testing dataset, the ML models did demonstrate higher sensitivity and negative predictive value for the grayscale PA (72.7% and 75.0%), grayscale lateral (63.6% and 66.6%), and gray-scale inversion lateral views (72.7% and 76.9%), comparing favorably to the radiologists (63.9% and 72.3%, 27.8% and 58.3%, and 19.5% and 50.5%, respectively). Further investigation of other post-processing algorithms to improve the diagnostic performance of ML models is warranted.
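The two image-level operations this study leans on, gray-scale inversion and mirroring-based augmentation, are simple pixel transforms. A minimal NumPy sketch (function names are ours, not from the study):

```python
import numpy as np

def grayscale_invert(img):
    """Invert an 8-bit grayscale image: bright structures become dark
    and vice versa (values stay within 0-255 for a uint8 array)."""
    return 255 - img

def mirror_augment(images):
    """Double a training set by appending horizontally mirrored copies
    of each image (the mirroring augmentation described above)."""
    return images + [np.fliplr(im) for im in images]
```

Inversion is its own inverse, so applying `grayscale_invert` twice recovers the original image; mirroring likewise preserves the label, which is what makes both safe augmentations for nodule detection.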
Affiliation(s)
- Patrick Lee: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Aylin Tahmasebi: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Jaydev K Dave: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Maansi R Parekh: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Maruti Kumaran: Department of Radiology, Temple University Hospital, Philadelphia, PA
- Shuo Wang: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- John R Eisenbrey: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
- Achala Donuru: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, PA
14
Cao Z, Chen F, Grais EM, Yue F, Cai Y, Swanepoel DW, Zhao F. Machine Learning in Diagnosing Middle Ear Disorders Using Tympanic Membrane Images: A Meta-Analysis. Laryngoscope 2023; 133:732-741. [PMID: 35848851 DOI: 10.1002/lary.30291] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2022] [Revised: 06/18/2022] [Accepted: 06/21/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE To systematically evaluate the development of Machine Learning (ML) models and compare their diagnostic accuracy for the classification of Middle Ear Disorders (MED) using Tympanic Membrane (TM) images. METHODS PubMed, EMBASE, CINAHL, and CENTRAL were searched up until November 30, 2021. Studies on the development of ML approaches for diagnosing MED using TM images were selected according to the inclusion criteria. PRISMA guidelines were followed, with study design, analysis method, and outcomes extracted. Sensitivity, specificity, and area under the curve (AUC) were used to summarize the performance metrics of the meta-analysis. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool in combination with the Prediction Model Risk of Bias Assessment Tool. RESULTS Sixteen studies were included, encompassing 20254 TM images (7025 normal TM and 13229 MED). The sample size ranged from 45 to 6066 per study. The accuracy of the 25 included ML approaches ranged from 76.00% to 98.26%. Eleven studies (68.8%) were rated as having a low risk of bias, with the reference standard the most frequent domain at high risk of bias (37.5%). Pooled sensitivity and specificity were 93% (95% CI, 90%-95%) and 85% (95% CI, 82%-88%), respectively. The AUC across all TM images was 94% (95% CI, 91%-96%). A greater AUC was found using otoendoscopic images than otoscopic images. CONCLUSIONS ML approaches perform robustly in distinguishing between normal ears and MED; however, a standardized TM image acquisition and annotation protocol should be developed. LEVEL OF EVIDENCE NA Laryngoscope, 133:732-741, 2023.
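The pooled sensitivity and specificity above come from a formal diagnostic meta-analysis (typically a bivariate random-effects model). As a much-simplified illustration of the underlying idea only, per-study proportions can be combined by inverse-variance weighting on the logit scale; this fixed-effect sketch is not the authors' model, and the function name is ours:

```python
import math

def pool_logit(props, ns, z=1.96):
    """Fixed-effect inverse-variance pooling of proportions (e.g., per-study
    sensitivities) on the logit scale. Returns (pooled, ci_low, ci_high).
    A simplified stand-in for the bivariate random-effects model usually
    used in diagnostic meta-analysis."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        # small continuity correction so p = 0 or 1 does not blow up the logit
        p_adj = (p * n + 0.5) / (n + 1.0)
        logits.append(math.log(p_adj / (1 - p_adj)))
        # delta-method variance of the logit of a proportion
        weights.append(n * p_adj * (1 - p_adj))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    inv = lambda t: 1.0 / (1.0 + math.exp(-t))
    return inv(pooled), inv(pooled - z * se), inv(pooled + z * se)
```

The logit transform keeps the pooled estimate and its interval inside (0, 1), which a naive weighted average of proportions does not guarantee for the interval.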
Affiliation(s)
- Zuwei Cao: Center for Rehabilitative Auditory Research, Guizhou Provincial People's Hospital, Guiyang City, China
- Feifan Chen: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Emad M Grais: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
- Fengjuan Yue: Medical Examination Center, Guizhou Provincial People's Hospital, Guiyang City, China
- Yuexin Cai: Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, China
- De Wet Swanepoel: Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Fei Zhao: Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
15
Tahmasebi A, Wang S, Wessner CE, Vu T, Liu JB, Forsberg F, Civan J, Guglielmo FF, Eisenbrey JR. Ultrasound-Based Machine Learning Approach for Detection of Nonalcoholic Fatty Liver Disease. J Ultrasound Med 2023. [PMID: 36807314 DOI: 10.1002/jum.16194] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 12/05/2022] [Accepted: 01/25/2023] [Indexed: 06/18/2023]
Abstract
OBJECTIVES Current diagnosis of nonalcoholic fatty liver disease (NAFLD) relies on biopsy or MR-based fat quantification. This prospective study explored the use of ultrasound with artificial intelligence for the detection of NAFLD. METHODS One hundred twenty subjects with clinical suspicion of NAFLD and 10 healthy volunteers consented to participate in this institutional review board-approved study. Subjects were categorized as NAFLD or non-NAFLD according to MR proton density fat fraction (PDFF) findings. Ultrasound images from 10 different locations in the right and left hepatic lobes were collected following a standard protocol. MRI-based liver fat quantification was used as the reference standard, with >6.4% indicative of NAFLD. A supervised machine learning model was developed for assessment of NAFLD. To validate model performance, a balanced testing dataset of 24 subjects was used. Sensitivity, specificity, positive predictive value, negative predictive value, and overall accuracy with 95% confidence intervals were calculated. RESULTS A total of 1119 images from 106 participants were used for model development. The internal evaluation achieved an average precision of 0.941, recall of 88.2%, and precision of 89.0%. In the testing set, AutoML achieved a sensitivity of 72.2% (63.1%-80.1%), specificity of 94.6% (88.7%-98.0%), positive predictive value (PPV) of 93.1% (86.0%-96.7%), negative predictive value of 77.3% (71.6%-82.1%), and accuracy of 83.4% (77.9%-88.0%). The average agreement for an individual subject was 92%. CONCLUSIONS An ultrasound-based machine learning model for identification of NAFLD showed high specificity and PPV in this prospective trial. This approach may in the future serve as an inexpensive and noninvasive screening tool for identifying NAFLD in high-risk patients.
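The sensitivity, specificity, PPV, NPV, and accuracy figures with 95% confidence intervals reported in studies like this all derive from a 2x2 confusion matrix. A sketch using the normal-approximation (Wald) interval, which may differ slightly from whichever interval method a given study's authors actually used (function name is ours):

```python
import math

def diagnostic_metrics(tp, fp, fn, tn, z=1.96):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2
    confusion matrix, each returned as (estimate, ci_low, ci_high)
    using the Wald normal-approximation interval."""
    def prop_ci(x, n):
        p = x / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, max(0.0, p - half), min(1.0, p + half)
    return {
        "sensitivity": prop_ci(tp, tp + fn),  # true positive rate
        "specificity": prop_ci(tn, tn + fp),  # true negative rate
        "ppv": prop_ci(tp, tp + fp),          # positive predictive value
        "npv": prop_ci(tn, tn + fn),          # negative predictive value
        "accuracy": prop_ci(tp + tn, tp + fp + fn + tn),
    }
```

Note that sensitivity and specificity depend only on the model, while PPV and NPV also depend on disease prevalence in the tested sample, which is why balanced testing sets (as used here) matter when quoting predictive values.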
Affiliation(s)
- Aylin Tahmasebi: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Shuo Wang: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Corinne E Wessner: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Trang Vu: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Ji-Bin Liu: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Flemming Forsberg: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Jesse Civan: Department of Medicine, Division of Gastroenterology and Hepatology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Flavius F Guglielmo: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- John R Eisenbrey: Department of Radiology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
16
Artificial intelligence model for analyzing colonic endoscopy images to detect changes associated with irritable bowel syndrome. PLOS Digit Health 2023; 2:e0000058. [PMID: 36812592 PMCID: PMC9937744 DOI: 10.1371/journal.pdig.0000058] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/14/2022] [Accepted: 01/12/2023] [Indexed: 02/19/2023]
Abstract
IBS is not considered to be an organic disease and usually shows no abnormality on lower gastrointestinal endoscopy, although biofilm formation, dysbiosis, and histological microinflammation have recently been reported in patients with IBS. In this study, we investigated whether an artificial intelligence (AI) colorectal image model can identify minute endoscopic changes associated with IBS that cannot typically be detected by human investigators. Study subjects were identified based on electronic medical records and categorized as IBS (Group I; n = 11), IBS with predominant constipation (IBS-C; Group C; n = 12), and IBS with predominant diarrhea (IBS-D; Group D; n = 12). The study subjects had no other diseases. Colonoscopy images from IBS patients and from asymptomatic healthy subjects (Group N; n = 88) were obtained. Google Cloud Platform AutoML Vision (single-label classification) was used to construct AI image models and to calculate sensitivity, specificity, predictive value, and AUC. A total of 2479, 382, 538, and 484 images were randomly selected for Groups N, I, C, and D, respectively. The AUC of the model discriminating between Groups N and I was 0.95. Sensitivity, specificity, positive predictive value, and negative predictive value for Group I detection were 30.8%, 97.6%, 66.7%, and 90.2%, respectively. The overall AUC of the model discriminating between Groups N, C, and D was 0.83; sensitivity, specificity, and positive predictive value for Group N were 87.5%, 46.2%, and 79.9%, respectively. Using the image AI model, colonoscopy images of IBS patients could be discriminated from those of healthy subjects with an AUC of 0.95. Prospective studies are needed to validate whether this model has similar diagnostic capabilities at other facilities and whether it can be used to determine treatment efficacy.
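The AUC values quoted above have a direct probabilistic reading: the chance that a randomly chosen positive image scores higher than a randomly chosen negative one. A small sketch computing AUC from labels and scores via the Mann-Whitney U statistic (function name is ours):

```python
def auc_from_scores(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the fraction of (positive, negative) pairs in which the positive
    case scores higher, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise view also explains why an AUC of 0.95 can coexist with a sensitivity of 30.8%: AUC is threshold-free, while sensitivity and specificity depend on where the operating threshold is set.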
17
Tseng CC, Lim V, Jyung RW. Use of artificial intelligence for the diagnosis of cholesteatoma. Laryngoscope Investig Otolaryngol 2023; 8:201-211. [PMID: 36846416 PMCID: PMC9948563 DOI: 10.1002/lio2.1008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 12/07/2022] [Accepted: 12/30/2022] [Indexed: 01/19/2023] Open
Abstract
Objectives Accurate diagnosis of cholesteatomas is crucial. However, cholesteatomas can easily be missed in routine otoscopic exams. Convolutional neural networks (CNNs) have performed well in medical image classification, so we evaluated their use for detecting cholesteatomas in otoscopic images. Study Design Design and evaluation of an artificial intelligence-driven workflow for cholesteatoma diagnosis. Methods Otoscopic images collected from the faculty practice of the senior author were deidentified and labeled by the senior author as cholesteatoma, abnormal non-cholesteatoma, or normal. An image classification workflow was developed to automatically differentiate cholesteatomas from other possible tympanic membrane appearances. Eight pretrained CNNs were trained on our otoscopic images, then tested on a withheld subset of images to evaluate their final performance. CNN intermediate activations were also extracted to visualize important image features. Results A total of 834 otoscopic images were collected, categorized into 197 cholesteatoma, 457 abnormal non-cholesteatoma, and 180 normal. Final trained CNNs demonstrated strong performance, achieving accuracies of 83.8%-98.5% for differentiating cholesteatoma from normal, 75.6%-90.1% for differentiating cholesteatoma from abnormal non-cholesteatoma, and 87.0%-90.4% for differentiating cholesteatoma from non-cholesteatoma (abnormal non-cholesteatoma + normal). DenseNet201 (100% sensitivity, 97.1% specificity), NASNetLarge (100% sensitivity, 88.2% specificity), and MobileNetV2 (94.1% sensitivity, 100% specificity) were among the best-performing CNNs in distinguishing cholesteatoma versus normal. Visualization of intermediate activations showed robust detection of relevant image features by the CNNs.
Conclusion While further refinement and more training images are needed to improve performance, artificial intelligence-driven analysis of otoscopic images shows great promise as a diagnostic tool for detecting cholesteatomas. Level of Evidence 3.
Affiliation(s)
- Christopher C. Tseng: Department of Otolaryngology – Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
- Valerie Lim: Department of Otolaryngology – Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
- Robert W. Jyung: Department of Otolaryngology – Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, New Jersey, USA
18
Machine learning in general practice: scoping review of administrative task support and automation. BMC Prim Care 2023; 24:14. [PMID: 36641467 PMCID: PMC9840326 DOI: 10.1186/s12875-023-01969-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 01/04/2023] [Indexed: 01/15/2023]
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly used to support general practice in the early detection of disease and treatment recommendations. However, AI systems aimed at alleviating time-consuming administrative tasks currently appear limited. This scoping review thus aims to summarize research on machine learning methods applied to the support and automation of administrative tasks in general practice. METHODS Databases covering the fields of health care and engineering sciences (PubMed, Embase, CINAHL with full text, Cochrane Library, Scopus, and IEEE Xplore) were searched. Screening for eligible studies was completed using Covidence, and data were extracted along nine research-based attributes concerning general practice, administrative tasks, and machine learning. The search and screening processes were completed from April to June 2022. RESULTS 1439 records were identified and 1158 were screened against the eligibility criteria. A total of 12 studies were included. The extracted attributes indicate that most studies concern various scheduling tasks using supervised machine learning methods with relatively low general practitioner (GP) involvement. Importantly, four studies employed the latest available machine learning methods, and the data used varied in terms of setting, type, and availability. CONCLUSION The limited body of research on applying machine learning to administrative tasks in general practice indicates a great need and high potential for such methods. The current lack of research is likely due to the unavailability of open-source data and a prioritization of diagnostic tasks. Future research would benefit from open-source data, cutting-edge machine learning methods, and clearly stated GP involvement, so that improved and replicable scientific research can be done.
19
Ilicki J. Challenges in evaluating the accuracy of AI-containing digital triage systems: A systematic review. PLoS One 2022; 17:e0279636. [PMID: 36574438 PMCID: PMC9794085 DOI: 10.1371/journal.pone.0279636] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Accepted: 12/12/2022] [Indexed: 12/28/2022] Open
Abstract
INTRODUCTION Patient-operated digital triage systems with AI components are becoming increasingly common. However, previous reviews have found a limited amount of research on such systems' accuracy. This systematic review of the literature aimed to identify the main challenges in determining the accuracy of patient-operated digital AI-based triage systems. METHODS A systematic review was designed and conducted in accordance with PRISMA guidelines in October 2021 using PubMed, Scopus and Web of Science. Articles were included if they assessed the accuracy of a patient-operated digital triage system that had an AI-component and could triage a general primary care population. Limitations and other pertinent data were extracted, synthesized and analysed. Risk of bias was not analysed as this review studied the included articles' limitations (rather than results). Results were synthesized qualitatively using a thematic analysis. RESULTS The search generated 76 articles and following exclusion 8 articles (6 primary articles and 2 reviews) were included in the analysis. Articles' limitations were synthesized into three groups: epistemological, ontological and methodological limitations. Limitations varied with regards to intractability and the level to which they can be addressed through methodological choices. Certain methodological limitations related to testing triage systems using vignettes can be addressed through methodological adjustments, whereas epistemological and ontological limitations require that readers of such studies appraise the studies with limitations in mind. DISCUSSION The reviewed literature highlights recurring limitations and challenges in studying the accuracy of patient-operated digital triage systems with AI components. Some of these challenges can be addressed through methodology whereas others are intrinsic to the area of inquiry and involve unavoidable trade-offs. 
Future studies should take these limitations into consideration in order to better address the current knowledge gaps in the literature.
20
Ezzibdeh R, Munjal T, Ahmad I, Valdez TA. Artificial intelligence and tele-otoscopy: A window into the future of pediatric otology. Int J Pediatr Otorhinolaryngol 2022; 160:111229. [PMID: 35816971 DOI: 10.1016/j.ijporl.2022.111229] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Revised: 06/30/2022] [Accepted: 07/01/2022] [Indexed: 10/17/2022]
Abstract
Telehealth in otolaryngology is gaining popularity as a potential tool for increased access for rural populations, decreased specialist wait times, and overall savings to the healthcare system. The adoption of telehealth has been dramatically increased by the COVID-19 pandemic limiting patients' physical access to hospitals and clinics. One of the key challenges to telehealth in general otolaryngology and otology specifically is the limited physical examination possible on the ear canal and middle ear. This is compounded in pediatric populations who commonly present with middle ear pathologies which can be challenging to diagnose even in the clinic. To address this need, various otoscopes have been designed to allow patients, their parents, or primary care providers to image the tympanic membrane and middle ear, and send data to otolaryngologists for review. Furthermore, the ability of these devices to capture images in digital format has opened the possibility of using artificial intelligence for quick and reliable diagnostic workup. In this manuscript, we provide a concise review of the literature regarding the efficacy of remote otoscopy, as well as recent efforts on the use of artificial intelligence in aiding otologic diagnoses.
Affiliation(s)
- Rami Ezzibdeh: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
- Tina Munjal: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
- Iram Ahmad: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
- Tulio A Valdez: Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, United States
21
Chen YC, Chu YC, Huang CY, Lee YT, Lee WY, Hsu CY, Yang AC, Liao WH, Cheng YF. Smartphone-based artificial intelligence using a transfer learning algorithm for the detection and diagnosis of middle ear diseases: A retrospective deep learning study. EClinicalMedicine 2022; 51:101543. [PMID: 35856040 PMCID: PMC9287624 DOI: 10.1016/j.eclinm.2022.101543] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 06/09/2022] [Accepted: 06/13/2022] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Middle ear diseases such as otitis media and middle ear effusion, which are often diagnosed late or misdiagnosed, are among the most common issues faced by clinicians providing primary care for children and adolescents. Artificial intelligence (AI) has the potential to assist clinicians in the detection and diagnosis of middle ear diseases through imaging. METHODS Otoendoscopic images obtained by otolaryngologists from Taipei Veterans General Hospital in Taiwan between Jan 1, 2011, and Dec 31, 2019, were collected retrospectively and de-identified. The images were entered into convolutional neural network (CNN) training models after data pre-processing, augmentation, and splitting. Nine CNN-based models were constructed to recognize sophisticated middle ear diseases. The best-performing models were chosen and ensembled into a small CNN for mobile device use. The pretrained model was converted into a smartphone-based program, and its utility was evaluated in terms of detecting and classifying ten middle ear diseases based on otoendoscopic images. A class activation map (CAM) was also used to identify key features for CNN classification. The performance of each classifier was determined by its accuracy, precision, recall, and F1-score. FINDINGS A total of 2820 clinical eardrum images were collected for model training. The programme achieved high detection accuracy for binary outcomes (pass/refer) of otoendoscopic images and ten different disease categories, with an accuracy reaching 98.0% after model optimisation. Furthermore, the application presented a smooth recognition process and a user-friendly interface and demonstrated excellent performance, with an accuracy of 97.6%. A fifty-question questionnaire related to middle ear diseases was designed for practitioners with different levels of clinical experience.
The AI-empowered mobile algorithm's detection accuracy was generally superior to that of general physicians, resident doctors, and otolaryngology specialists (36.0%, 80.0%, and 90.0%, respectively). Our results show that the proposed method provides treatment recommendations comparable to those of specialists. INTERPRETATION We developed a deep learning model that can detect and classify middle ear diseases. The use of smartphone-based point-of-care diagnostic devices with AI-empowered automated classification can provide real-world smart medical solutions for the diagnosis of middle ear diseases and telemedicine. FUNDING This study was supported by grants from the Ministry of Science and Technology (MOST110-2622-8-075-001, MOST110-2320-B-075-004-MY3, MOST-110-2634-F-A49-005, MOST110-2745-B-075A-001A and MOST110-2221-E-075-005), Veterans General Hospitals and University System of Taiwan Joint Research Program (VGHUST111-G6-11-2, VGHUST111c-140), and Taipei Veterans General Hospital (V111E-002-3).
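The accuracy, precision, recall, and F1-score used to evaluate each classifier above can be computed per class from paired label lists. A plain-Python sketch for a multi-class setting like the ten disease categories (illustrative only; names are ours):

```python
def per_class_prf(y_true, y_pred):
    """Per-class (precision, recall, F1) plus overall accuracy for a
    multi-class classifier, computed one-vs-rest from paired labels."""
    labels = sorted(set(y_true) | set(y_pred))
    out = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = (prec, rec, f1)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return out, acc
```

F1, the harmonic mean of precision and recall, penalizes classifiers that trade one heavily for the other, which matters when disease categories are imbalanced as eardrum datasets usually are.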
Affiliation(s)
- Yen-Chi Chen
- Department of Otolaryngology-Head and Neck Surgery, Taipei Veterans General Hospital, NO. 201, Sec. 2, Shipai Rd., Beitou District, Taipei 112, Taiwan
- Institute of Brain Science, National Yang Ming Chiao Tung University, 3F Shouren Building, No.155, Sec.2, Linong Street, Beitou District, Taipei 112, Taiwan
- Department of Otolaryngology-Head and Neck Surgery, Kaohsiung Municipal Gangshan Hospital (Outsourced by Show-Chwan Memorial Hospital), Kaohsiung 820, Taiwan
| | - Yuan-Chia Chu
- Information Management Office, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Big Data Canter, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Department of Information Management, National Taipei University of Nursing and Health Sciences, 365 Ming-De Road, Taipei 112, Taiwan
| | - Chii-Yuan Huang
- Department of Otolaryngology-Head and Neck Surgery, Taipei Veterans General Hospital, NO. 201, Sec. 2, Shipai Rd., Beitou District, Taipei 112, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
| | - Yen-Ting Lee
- Department of Otolaryngology-Head and Neck Surgery, Taipei Veterans General Hospital, NO. 201, Sec. 2, Shipai Rd., Beitou District, Taipei 112, Taiwan
| | - Wen-Ya Lee
- Department of Otolaryngology-Head and Neck Surgery, Taipei Veterans General Hospital, NO. 201, Sec. 2, Shipai Rd., Beitou District, Taipei 112, Taiwan
| | - Chien-Yeh Hsu
- Department of Information Management, National Taipei University of Nursing and Health Sciences, 365 Ming-De Road, Taipei 112, Taiwan
- Master Program in Global Health and Development, College of Public Health, Taipei Medical University, 250 Wu-Hsing Street, Taipei 110, Taiwan
| | - Albert C. Yang
- Institute of Brain Science, National Yang Ming Chiao Tung University, 3F Shouren Building, No.155, Sec.2, Linong Street, Beitou District, Taipei 112, Taiwan
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Corresponding authors.
| | - Wen-Huei Liao
- Department of Otolaryngology-Head and Neck Surgery, Taipei Veterans General Hospital, NO. 201, Sec. 2, Shipai Rd., Beitou District, Taipei 112, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Corresponding authors.
| | - Yen-Fu Cheng
- Department of Otolaryngology-Head and Neck Surgery, Taipei Veterans General Hospital, NO. 201, Sec. 2, Shipai Rd., Beitou District, Taipei 112, Taiwan
- Institute of Brain Science, National Yang Ming Chiao Tung University, 3F Shouren Building, No.155, Sec.2, Linong Street, Beitou District, Taipei 112, Taiwan
- Faculty of Medicine, National Yang Ming Chiao Tung University, Taipei 112, Taiwan
- Department of Medical Research, Taipei Veterans General Hospital, Taipei 112, Taiwan
- Corresponding authors.
22
Robler SK, Coco L, Krumm M. Telehealth solutions for assessing auditory outcomes related to noise and ototoxic exposures in clinic and research. J Acoust Soc Am 2022; 152:1737. [PMID: 36182272 DOI: 10.1121/10.0013706] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 08/04/2022] [Indexed: 06/16/2023]
Abstract
Nearly 1.5 billion people globally have some decline in hearing ability throughout their lifetime. Many causes for hearing loss are preventable, such as that from exposure to noise and chemicals. According to the World Health Organization, nearly 50% of individuals 12-25 years old are at risk of hearing loss due to recreational noise exposure. In the occupational setting, an estimated 16% of disabling hearing loss is related to occupational noise exposure, highest in developing countries. Ototoxicity is another cause of acquired hearing loss. Audiologic assessment is essential for monitoring hearing health and for the diagnosis and management of hearing loss and related disorders (e.g., tinnitus). However, 44% of the world's population is considered rural and, consequently, lacks access to quality hearing healthcare. Therefore, serving individuals living in rural and under-resourced areas requires creative solutions. Conducting hearing assessments via telehealth is one such solution. Telehealth can be used in a variety of contexts, including noise and ototoxic exposure monitoring, field testing in rural and low-resource settings, and evaluating auditory outcomes in large-scale clinical trials. This overview summarizes current telehealth applications and practices for the audiometric assessment, identification, and monitoring of hearing loss.
Affiliation(s)
- Samantha Kleindienst Robler
- Department of Otolaryngology-Head and Neck Surgery, University of Arkansas for Medical Sciences, Little Rock, Arkansas 72205, USA
| | - Laura Coco
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, California 92182, USA
| | - Mark Krumm
- Department of Hearing Sciences, Kent State University, Kent, Ohio 44240, USA
23
Crowson MG, Bates DW, Suresh K, Cohen MS, Hartnick CJ. "Human vs Machine" Validation of a Deep Learning Algorithm for Pediatric Middle Ear Infection Diagnosis. Otolaryngol Head Neck Surg 2022:1945998221119156. [PMID: 35972815 PMCID: PMC9931938 DOI: 10.1177/01945998221119156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
OBJECTIVE We compared the diagnostic performance of human clinicians with that of a neural network algorithm developed using a library of tympanic membrane images derived from children taken to the operating room with the intent of performing myringotomy and possible tube placement for recurrent acute otitis media (AOM) or otitis media with effusion (OME). STUDY DESIGN Retrospective cohort study. SETTING Tertiary academic medical center from 2018 to 2021. METHODS A training set of 639 images of tympanic membranes representing normal, OME, and AOM was used to train a neural network as well as a proprietary commercial image classifier from Google. Model diagnostic prediction performance in differentiating normal vs nonpurulent vs purulent effusion was scored based on classification accuracy. A web-based survey was developed to test human clinicians' diagnostic accuracy on a novel image set, and this was compared head to head against our model. RESULTS Our model achieved a mean prediction accuracy of 80.8% (95% CI, 77.0%-84.6%). The Google model achieved a prediction accuracy of 85.4%. In a validation survey of 39 clinicians analyzing a sample of 22 endoscopic ear images, the average diagnostic accuracy was 65.0%. On the same data set, our model achieved an accuracy of 95.5%. CONCLUSION Our model outperformed certain groups of human clinicians in assessing images of tympanic membranes for effusions in children. Reduced diagnostic error rates using machine learning models may have implications in reducing rates of misdiagnosis, potentially leading to fewer missed diagnoses, unnecessary antibiotic prescriptions, and surgical procedures.
Affiliation(s)
- Matthew G. Crowson
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts. Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
| | - David W. Bates
- Division of General Internal Medicine and Primary Care, Brigham and Women’s Hospital, Boston, MA. Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, MA
| | - Krish Suresh
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts. Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
| | - Michael S. Cohen
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts. Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
| | - Christopher J. Hartnick
- Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts. Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts
24
"Development of a Novel Scar Screening System with Machine Learning". Plast Reconstr Surg 2022; 150:465e-472e. [PMID: 35687417 DOI: 10.1097/prs.0000000000009312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
BACKGROUND Hypertrophic scars and keloids tend to cause serious functional and cosmetic impediments to patients. However, as these scars are not life-threatening, many patients do not seek proper treatment. Thus, educating physicians and patients regarding these scars is important. The authors aimed to develop a scar screening algorithm and compare the accuracy of the system with that of physicians. This algorithm is designed to involve healthcare providers and patients. METHODS Digital images were obtained from Google Images, open access repositories, and patients in our hospital. After preprocessing, 3,768 images were uploaded to the Google Cloud AutoML Vision platform and labeled with one of four diagnoses: immature scar, mature scar, hypertrophic scar, or keloid. A consensus label for each image was compared with the label provided by physicians. RESULTS For all diagnoses, the average precision (positive predictive value) of the algorithm was 80.7%, the average recall (sensitivity) was 71%, and the area under the curve (AUC) was 0.846. The algorithm afforded 77 correct diagnoses with an accuracy of 77%. Conversely, the average physician accuracy was 68.7%. The Cohen's kappa coefficient of the algorithm was 0.69, whereas that of the physicians was 0.59. CONCLUSIONS We developed a computer vision algorithm that can diagnose four scar types using automated machine learning. Future iterations of this algorithm, with more comprehensive accuracy, can be embedded in telehealth and digital imaging platforms used by patients and primary doctors. The scar screening system with machine learning may be a valuable support tool for physicians and patients.
25
A Machine Learning Approach to Screen for Otitis Media Using Digital Otoscope Images Labelled by an Expert Panel. Diagnostics (Basel) 2022; 12:diagnostics12061318. [PMID: 35741128 PMCID: PMC9222011 DOI: 10.3390/diagnostics12061318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 05/23/2022] [Accepted: 05/24/2022] [Indexed: 12/04/2022] Open
Abstract
Background: Otitis media includes several common inflammatory conditions of the middle ear that can have severe complications if left untreated. Correctly identifying otitis media can be difficult, and a screening system supported by machine learning would be valuable for this prevalent disease. This study investigated the performance of a convolutional neural network in screening for otitis media using digital otoscopic images labelled by an expert panel. Methods: Five experienced otologists diagnosed 347 tympanic membrane images captured with a digital otoscope. Images with a majority expert diagnosis (n = 273) were categorized into three screening groups: Normal, Pathological, and Wax; the same images were used for training and testing of the convolutional neural network. Expert panel diagnoses were compared to the convolutional neural network classification. Different approaches to the convolutional neural network were tested to identify the best performing model. Results: Overall accuracy of the convolutional neural network was above 0.9 in all except one approach. Sensitivity to finding ears with wax or pathology was above 93% in all cases, and specificity was 100%. Adding more images to train the convolutional neural network had no positive impact on the results. Modifications such as normalization of datasets and image augmentation enhanced the performance in some instances. Conclusions: A machine learning approach could be used on digital otoscopic images to accurately screen for otitis media.
26
Unadkat V, Pangal DJ, Kugener G, Roshannai A, Chan J, Zhu Y, Markarian N, Zada G, Donoho DA. Code-free machine learning for object detection in surgical video: a benchmarking, feasibility, and cost study. Neurosurg Focus 2022; 52:E11. [PMID: 35364576 DOI: 10.3171/2022.1.focus21652] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 01/25/2022] [Indexed: 11/06/2022]
Abstract
OBJECTIVE While the utilization of machine learning (ML) for data analysis typically requires significant technical expertise, novel platforms can deploy ML methods without requiring the user to have any coding experience (termed AutoML). The potential for these methods to be applied to neurosurgical video and surgical data science is unknown. METHODS AutoML, a code-free ML (CFML) system, was used to identify surgical instruments contained within each frame of endoscopic, endonasal intraoperative video obtained from a previously validated internal carotid injury training exercise performed on a high-fidelity cadaver model. Instrument-detection performances using CFML were compared with two state-of-the-art ML models built using the Python coding language on the same intraoperative video data set. RESULTS The CFML system successfully ingested surgical video without the use of any code. A total of 31,443 images were used to develop this model; 27,223 images were uploaded for training, 2292 images for validation, and 1928 images for testing. The mean average precision on the test set across all instruments was 0.708. The CFML model outperformed two standard object detection networks, RetinaNet and YOLOv3, which had mean average precisions of 0.669 and 0.527, respectively, in analyzing the same data set. Significant advantages to the CFML system included ease of use, relatively low cost, displays of true/false positives and negatives in a user-friendly interface, and the ability to deploy models for further analysis with ease. Significant drawbacks of the CFML model included an inability to view the structure of the trained model, an inability to update the ML model once trained with new examples, and the inability for robust downstream analysis of model performance and error modes. CONCLUSIONS This first report describes the baseline performance of CFML in an object detection task using a publicly available surgical video data set as a test bed. 
Compared with standard, code-based object detection networks, CFML exceeded performance standards. This finding is encouraging for surgeon-scientists seeking to perform object detection tasks to answer clinical questions, perform quality improvement, and develop novel research ideas. The limited interpretability and customization of CFML models remain ongoing challenges. With the further development of code-free platforms, CFML will become increasingly important across biomedical research. Using CFML, surgeons without significant coding experience can perform exploratory ML analyses rapidly and efficiently.
Affiliation(s)
- Vyom Unadkat
- Department of Computer Science, USC Viterbi School of Engineering, Los Angeles, California. Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Dhiraj J Pangal
- Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Guillaume Kugener
- Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Arman Roshannai
- Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Justin Chan
- Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Yichao Zhu
- Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Nicholas Markarian
- Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Gabriel Zada
- Department of Neurosurgery, Keck School of Medicine of USC, Los Angeles, California
| | - Daniel A Donoho
- Division of Neurosurgery, Center for Neurosciences, Children's National Hospital, Washington, DC
27
Habib AR, Crossland G, Patel H, Wong E, Kong K, Gunasekera H, Richards B, Caffery L, Perry C, Sacks R, Kumar A, Singh N. An Artificial Intelligence Computer-vision Algorithm to Triage Otoscopic Images From Australian Aboriginal and Torres Strait Islander Children. Otol Neurotol 2022; 43:481-488. [PMID: 35239622 DOI: 10.1097/mao.0000000000003484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE To develop an artificial intelligence image classification algorithm to triage otoscopic images from rural and remote Australian Aboriginal and Torres Strait Islander children. STUDY DESIGN Retrospective observational study. SETTING Tertiary referral center. PATIENTS Rural and remote Aboriginal and Torres Strait Islander children who underwent tele-otology ear health screening in the Northern Territory, Australia between 2010 and 2018. INTERVENTIONS Otoscopic images were labeled by otolaryngologists to classify the ground truth. Deep and transfer learning methods were used to develop an image classification algorithm. MAIN OUTCOME MEASURES Accuracy, sensitivity, specificity, positive predictive value, negative predictive value, area under the curve (AUC) of the resultant algorithm compared with the ground truth. RESULTS Six thousand five hundred twenty seven images were used (5927 images for training and 600 for testing). The algorithm achieved an accuracy of 99.3% for acute otitis media, 96.3% for chronic otitis media, 77.8% for otitis media with effusion (OME), and 98.2% to classify wax/obstructed canal. To differentiate between multiple diagnoses, the algorithm achieved 74.4 to 92.8% accuracy and an AUC of 0.963 to 0.997. The most common incorrect classification pattern was OME misclassified as normal tympanic membranes. CONCLUSIONS The paucity of access to tertiary otolaryngology care for rural and remote Aboriginal and Torres Strait Islander communities may contribute to an under-identification of ear disease. Computer vision image classification algorithms can accurately classify ear disease from otoscopic images of Indigenous Australian children. In the future, a validated algorithm may integrate with existing telemedicine initiatives to support effective triage and facilitate early treatment and referral.
Affiliation(s)
- Al-Rahim Habib
- Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Department of Otolaryngology-Head and Neck Surgery, Princess Alexandra Hospital, Brisbane, Queensland, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Sydney, New South Wales, Australia
| | - Graeme Crossland
- Department of Otolaryngology - Head and Neck Surgery, Royal Darwin Hospital, Darwin, Northern Territory, Australia
| | - Hemi Patel
- Department of Otolaryngology - Head and Neck Surgery, Royal Darwin Hospital, Darwin, Northern Territory, Australia
| | - Eugene Wong
- Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Sydney, New South Wales, Australia
| | - Kelvin Kong
- School of Medicine and Public Health, University of Newcastle, Newcastle, New South Wales, Australia
- Department of Linguistics, Faculty of Medicine, Macquarie University, Sydney, New South Wales, Australia
- School of Population Health, Faculty of Medicine, University of New South Wales, Sydney, Australia
| | - Hasantha Gunasekera
- Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- The Children's Hospital at Westmead, Sydney, New South Wales, Australia
| | - Brent Richards
- Division of Medical Services, Gold Coast University Hospital, Gold Coast, Queensland, Australia
- Griffith Health, Griffith University Queensland, Australia
| | - Liam Caffery
- Centre for Online Health, University of Queensland, Australia
| | - Chris Perry
- Centre for Online Health, University of Queensland, Australia
| | - Raymond Sacks
- Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
| | - Ashnil Kumar
- School of Biomedical Engineering, Faculty of Engineering, University of Sydney, Camperdown, New South Wales, Australia
| | - Narinder Singh
- Sydney Medical School, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, Sydney, New South Wales, Australia
28
Habib AR, Kajbafzadeh M, Hasan Z, Wong E, Gunasekera H, Perry C, Sacks R, Kumar A, Singh N. Artificial intelligence to classify ear disease from otoscopy: A systematic review and meta-analysis. Clin Otolaryngol 2022; 47:401-413. [PMID: 35253378 PMCID: PMC9310803 DOI: 10.1111/coa.13925] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 01/08/2022] [Accepted: 02/27/2022] [Indexed: 11/29/2022]
Abstract
Objectives To summarise the accuracy of artificial intelligence (AI) computer vision algorithms to classify ear disease from otoscopy. Design Systematic review and meta-analysis. Methods Using the PRISMA guidelines, nine online databases were searched for articles that used AI computer vision algorithms developed from various methods (convolutional neural networks, artificial neural networks, support vector machines, decision trees and k-nearest neighbours) to classify otoscopic images. Diagnostic classes of interest: normal tympanic membrane, acute otitis media (AOM), otitis media with effusion (OME), chronic otitis media (COM) with or without perforation, cholesteatoma and canal obstruction. Main outcome measures Accuracy to correctly classify otoscopic images compared to otolaryngologists (ground truth). The Quality Assessment of Diagnostic Accuracy Studies Version 2 tool was used to assess the quality of methodology and risk of bias. Results Thirty-nine articles were included. Algorithms achieved 90.7% (95% CI: 90.1-91.3%) accuracy to differentiate between normal and abnormal otoscopy images in 14 studies. The most common multiclassification algorithm (3 or more diagnostic classes) achieved 97.6% (95% CI: 97.3-97.9%) accuracy to differentiate between normal, AOM and OME in three studies. AI algorithms outperformed human assessors to classify otoscopy images, achieving 93.4% (95% CI: 90.5-96.4%) versus 73.2% (95% CI: 67.9-78.5%) accuracy in three studies. Convolutional neural networks achieved the highest accuracy compared to other classification methods. Conclusion AI can classify ear disease from otoscopy. A concerted effort is required to establish a comprehensive and reliable otoscopy database for algorithm training. An AI-supported otoscopy system may assist health care workers, trainees and primary care practitioners with less otology experience to identify ear disease.
Affiliation(s)
- Al-Rahim Habib
- Faculty of Medicine and Health, University of Sydney, New South Wales, Australia. Department of Otolaryngology - Head and Neck Surgery, Princess Alexandra Hospital, Queensland, Australia. Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
| | - Majid Kajbafzadeh
- Faculty of Medicine and Health, University of Sydney, New South Wales, Australia
| | - Zubair Hasan
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
| | - Eugene Wong
- Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
| | - Hasantha Gunasekera
- Faculty of Medicine and Health, University of Sydney, New South Wales, Australia. The Children's Hospital at Westmead, New South Wales, Australia
| | - Chris Perry
- Department of Otolaryngology - Head and Neck Surgery, Princess Alexandra Hospital, Queensland, Australia. University of Queensland Medical School, Queensland, Australia
| | - Raymond Sacks
- Faculty of Medicine and Health, University of Sydney, New South Wales, Australia
| | - Ashnil Kumar
- School of Biomedical Engineering, Faculty of Engineering, University of Sydney, New South Wales, Australia
| | - Narinder Singh
- Faculty of Medicine and Health, University of Sydney, New South Wales, Australia. Department of Otolaryngology - Head and Neck Surgery, Westmead Hospital, New South Wales, Australia
29
Esposito S, Bianchini S, Argentiero A, Gobbi R, Vicini C, Principi N. New Approaches and Technologies to Improve Accuracy of Acute Otitis Media Diagnosis. Diagnostics (Basel) 2021; 11:2392. [PMID: 34943628 PMCID: PMC8700495 DOI: 10.3390/diagnostics11122392] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Revised: 12/10/2021] [Accepted: 12/17/2021] [Indexed: 12/18/2022] Open
Abstract
Several studies have shown that in recent years incidence of acute otitis media (AOM) has declined worldwide. However, related medical, social, and economic problems for patients, their families, and society remain very high. Better knowledge of potential risk factors for AOM development and more effective preventive interventions, particularly in AOM-prone children, can further reduce disease incidence. However, a more accurate AOM diagnosis seems essential to achieve this goal. Diagnostic uncertainty is common, and to avoid risks related to a disease caused mainly by bacteria, several children without AOM are treated with antibiotics and followed as true AOM cases. The main objective of this manuscript is to discuss the most common difficulties that presently limit accurate AOM diagnosis and the new approaches and technologies that have been proposed to improve disease detection. We showed that misdiagnosis can be dangerous or lead to relevant therapeutic mistakes. The need to improve AOM diagnosis has allowed the identification of a long list of technologies to visualize and evaluate the tympanic membrane and to assess middle-ear effusion. Most of the new instruments, including light field otoscopy, optical coherence tomography, low-coherence interferometry, and Raman spectroscopy, are far from being introduced in clinical practice. Video-otoscopy can be effective, especially when it is used in association with telemedicine, parents' cooperation, and artificial intelligence. Introduction of otologic telemedicine and use of artificial intelligence among pediatricians and ENT specialists must be strongly promoted in order to reduce mistakes in AOM diagnosis.
Affiliation(s)
- Susanna Esposito
- Pediatric Clinic, Pietro Barilla Children’s Hospital, Department of Medicine and Surgery, University of Parma, Via Gramsci 14, 43126 Parma, Italy
| | - Sonia Bianchini
- Pediatric Clinic, Pietro Barilla Children’s Hospital, Department of Medicine and Surgery, University of Parma, Via Gramsci 14, 43126 Parma, Italy
| | - Alberto Argentiero
- Pediatric Clinic, Pietro Barilla Children’s Hospital, Department of Medicine and Surgery, University of Parma, Via Gramsci 14, 43126 Parma, Italy
| | - Riccardo Gobbi
- Head-Neck and Oral Surgery Unit, Department of Head-Neck Surgery, Otolaryngology, Morgagni-Pierantoni Hospital, 47121 Forlì, Italy
| | - Claudio Vicini
- Head-Neck and Oral Surgery Unit, Department of Head-Neck Surgery, Otolaryngology, Morgagni-Pierantoni Hospital, 47121 Forlì, Italy
30
Chawdhary G, Shoman N. Emerging artificial intelligence applications in otological imaging. Curr Opin Otolaryngol Head Neck Surg 2021; 29:357-364. [PMID: 34459798 DOI: 10.1097/moo.0000000000000754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW To highlight the recent literature on artificial intelligence (AI) pertaining to otological imaging and to discuss future directions, obstacles and opportunities. RECENT FINDINGS The main themes in the recent literature centre around automated otoscopic image diagnosis and automated image segmentation for application in virtual reality surgical simulation and planning. Other applications that have been studied include identification of tinnitus MRI biomarkers, facial palsy analysis, intraoperative augmented reality systems, vertigo diagnosis and endolymphatic hydrops ratio calculation in Meniere's disease. Studies are presently at a preclinical, proof-of-concept stage. SUMMARY The recent literature on AI in otological imaging is promising and demonstrates the future potential of this technology in automating certain imaging tasks in a healthcare environment of ever-increasing demand and workload. Some studies have shown equivalence or superiority of the algorithm over physicians, albeit in narrowly defined realms. Future challenges in developing this technology include the compilation of large high quality annotated datasets, fostering strong collaborations between the health and technology sectors, testing the technology within real-world clinical pathways and bolstering trust among patients and physicians in this new method of delivering healthcare.
Affiliation(s)
- Gaurav Chawdhary
- ENT Department, Royal Hallamshire Hospital, Broomhall, Sheffield, UK
| | - Nael Shoman
- ENT Department, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia, Canada
31
Canares TL, Wang W, Unberath M, Clark JH. Artificial intelligence to diagnose ear disease using otoscopic image analysis: a review. J Investig Med 2021; 70:354-362. [PMID: 34521730 DOI: 10.1136/jim-2021-001870] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/27/2021] [Indexed: 12/22/2022]
Abstract
AI relates broadly to the science of developing computer systems to imitate human intelligence, thus allowing for the automation of tasks that would otherwise necessitate human cognition. Such technology has increasingly demonstrated capacity to outperform humans for functions relating to image recognition. Given the current lack of cost-effective confirmatory testing, accurate diagnosis and subsequent management depend on visual detection of characteristic findings during otoscope examination. The aim of this manuscript is to perform a comprehensive literature review and evaluate the potential application of artificial intelligence for the diagnosis of ear disease from otoscopic image analysis.
Affiliation(s)
- Therese L Canares
- Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
| | - Weiyao Wang
- Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland, USA
| | - Mathias Unberath
- Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland, USA
| | - James H Clark
- Otolaryngology-HNS, Johns Hopkins Medicine School of Medicine, Baltimore, Maryland, USA
32
O'Byrne C, Abbas A, Korot E, Keane PA. Automated deep learning in ophthalmology: AI that can build AI. Curr Opin Ophthalmol 2021; 32:406-412. [PMID: 34231529 DOI: 10.1097/icu.0000000000000779] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
PURPOSE OF REVIEW The purpose of this review is to describe the current status of automated deep learning in healthcare and to explore and detail the development of these models using commercially available platforms. We highlight key studies demonstrating the effectiveness of this technique and discuss current challenges and future directions of automated deep learning. RECENT FINDINGS There are several commercially available automated deep learning platforms. Although specific features differ between platforms, they utilise the common approach of supervised learning. Ophthalmology is an exemplar speciality in the area, with a number of recent proof-of-concept studies exploring classification of retinal fundus photographs, optical coherence tomography images and indocyanine green angiography images. Automated deep learning has also demonstrated impressive results in other specialities such as dermatology, radiology and histopathology. SUMMARY Automated deep learning allows users without coding expertise to develop deep learning algorithms. It is rapidly establishing itself as a valuable tool for those with limited technical experience. Despite residual challenges, it offers considerable potential in the future of patient management, clinical research and medical education. VIDEO ABSTRACT http://links.lww.com/COOP/A44.
Affiliation(s)
- Ciara O'Byrne
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Trinity College School of Medicine, Dublin, Ireland
- Abdallah Abbas
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- University College London Medical School, London, UK
- Edward Korot
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Byers Eye Institute, Stanford University, Stanford, California, USA
- Pearse A Keane
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, London, UK
33
[Artificial intelligence in otorhinolaryngology]. HNO 2021; 70:87-93. [PMID: 34374811 PMCID: PMC8353610 DOI: 10.1007/s00106-021-01095-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/26/2021] [Indexed: 11/24/2022]
Abstract
Background Advancing digitalization is increasingly enabling the use of artificial intelligence (AI), which will substantially influence society and medicine in the coming years. Objective To present the current range of applications of AI in otorhinolaryngology and to outline future developments in the use of this technology. Materials and methods Scientific studies and expert analyses were evaluated and discussed. Results The use of AI can increase the utility of conventional diagnostic tools in otorhinolaryngology. In addition, this technology can further improve surgical precision in head and neck surgery. Conclusions AI holds great potential for further improving diagnostic and therapeutic procedures in otorhinolaryngology. However, the application of this technology also poses challenges, for example in the area of data protection.
34
Kashani RG, Młyńczak MC, Zarabanda D, Solis-Pazmino P, Huland DM, Ahmad IN, Singh SP, Valdez TA. Shortwave infrared otoscopy for diagnosis of middle ear effusions: a machine-learning-based approach. Sci Rep 2021; 11:12509. [PMID: 34131163 PMCID: PMC8206083 DOI: 10.1038/s41598-021-91736-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 05/04/2021] [Indexed: 02/05/2023] Open
Abstract
Otitis media, a common disease marked by the presence of fluid within the middle ear space, imparts a significant global health and economic burden. Identifying an effusion through the tympanic membrane is critical to diagnostic success but remains challenging due to the inherent limitations of visible light otoscopy and user interpretation. Here we describe a powerful diagnostic approach to otitis media utilizing advancements in otoscopy and machine learning. We developed an otoscope that visualizes middle ear structures and fluid in the shortwave infrared region, holding several advantages over traditional approaches. Images were captured in vivo and then processed by a novel machine learning based algorithm. The model predicts the presence of effusions with greater accuracy than current techniques, offering specificity and sensitivity over 90%. This platform has the potential to reduce costs and resources associated with otitis media, especially as improvements are made in shortwave imaging and machine learning.
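The sensitivity and specificity figures quoted above are simple functions of confusion-matrix counts; as a minimal illustrative sketch (not the authors' code), they can be computed from binary labels like so:

```python
def confusion_counts(y_true, y_pred):
    """Tally TP, FP, TN, FN for binary labels (1 = effusion present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn), tn / (tn + fp)
```

"Specificity and sensitivity over 90%" in the abstract means both of these ratios exceeded 0.9 on the evaluation set.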
Affiliation(s)
- Rustin G. Kashani
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, 801 Welch Road, Palo Alto, CA 94304, USA
- Marcel C. Młyńczak
- Institute of Metrology and Biomedical Engineering, Faculty of Mechatronics, Warsaw University of Technology, Warsaw, Poland
- David Zarabanda
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, 801 Welch Road, Palo Alto, CA 94304, USA
- Paola Solis-Pazmino
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, 801 Welch Road, Palo Alto, CA 94304, USA
- David M. Huland
- Department of Radiology, Stanford University School of Medicine, Palo Alto, CA, USA
- Iram N. Ahmad
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, 801 Welch Road, Palo Alto, CA 94304, USA; Lucile Packard Children's Hospital, Palo Alto, CA, USA
- Surya P. Singh
- Department of Biosciences and Bioengineering, Indian Institute of Technology Dharwad, Dharwad, Karnataka, India
- Tulio A. Valdez
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, 801 Welch Road, Palo Alto, CA 94304, USA; Lucile Packard Children's Hospital, Palo Alto, CA, USA
35
Ito Y, Unagami M, Yamabe F, Mitsui Y, Nakajima K, Nagao K, Kobayashi H. A method for utilizing automated machine learning for histopathological classification of testis based on Johnsen scores. Sci Rep 2021; 11:9962. [PMID: 33967273 PMCID: PMC8107178 DOI: 10.1038/s41598-021-89369-z] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 04/26/2021] [Indexed: 12/17/2022] Open
Abstract
We examined whether a tool for determining Johnsen scores automatically using artificial intelligence (AI) could be used in place of traditional Johnsen scoring to support pathologists' evaluations. Average precision, precision, and recall were assessed with the Google Cloud AutoML Vision platform. We obtained testicular tissues from 275 patients and were able to use haematoxylin and eosin (H&E)-stained glass microscope slides from 264 patients. In addition, we cut out parts of the histopathology images (5.0 × 5.0 cm) to expand Johnsen's characteristic areas containing seminiferous tubules. We defined four labels (Johnsen scores 1–3, 4–5, 6–7, and 8–10) to distinguish Johnsen scores as in clinical practice. All images were uploaded to the Google Cloud AutoML Vision platform. We obtained a dataset of 7155 images at 400× magnification and a dataset of 9822 expansion images from the 5.0 × 5.0 cm cutouts. For the 400× magnification image dataset, the average precision (positive predictive value) of the algorithm was 82.6%, precision was 80.31%, and recall was 60.96%. For the expansion image dataset (5.0 × 5.0 cm), the average precision was 99.5%, precision was 96.29%, and recall was 96.23%. This is the first report of an AI-based algorithm for predicting Johnsen scores.
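Average precision, the headline metric above, is the area under the precision-recall curve: the precision observed at each rank where a true positive is retrieved, averaged over the positives. A minimal stdlib sketch of that definition (illustrative only, not the AutoML Vision implementation):

```python
def average_precision(y_true, scores):
    """Area under the precision-recall curve for binary labels:
    average the precision at each rank where a positive appears."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_pos = sum(y_true)
    tp, ap = 0, 0.0
    for rank, i in enumerate(ranked, start=1):
        if y_true[i] == 1:
            tp += 1
            ap += (tp / rank) / n_pos  # precision@rank, weighted by the recall step
    return ap
```

A perfect ranking (all positives scored above all negatives) yields an average precision of 1.0, which is why the near-99.5% result on the expansion images indicates an almost perfectly ranked test set.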
Affiliation(s)
- Yurika Ito
- Department of Urology, Toho University School of Medicine, 6-11-1, Omori-Nishi, Ota-ku, Tokyo, 143-8541, Japan
- Mami Unagami
- Department of Urology, Toho University School of Medicine, 6-11-1, Omori-Nishi, Ota-ku, Tokyo, 143-8541, Japan
- Fumito Yamabe
- Department of Urology, Toho University School of Medicine, 6-11-1, Omori-Nishi, Ota-ku, Tokyo, 143-8541, Japan
- Yozo Mitsui
- Department of Urology, Toho University School of Medicine, 6-11-1, Omori-Nishi, Ota-ku, Tokyo, 143-8541, Japan
- Koichi Nakajima
- Department of Urology, Toho University School of Medicine, 6-11-1, Omori-Nishi, Ota-ku, Tokyo, 143-8541, Japan
- Koichi Nagao
- Department of Urology, Toho University School of Medicine, 6-11-1, Omori-Nishi, Ota-ku, Tokyo, 143-8541, Japan
- Hideyuki Kobayashi
- Department of Urology, Toho University School of Medicine, 6-11-1, Omori-Nishi, Ota-ku, Tokyo, 143-8541, Japan
36
Won J, Monroy GL, Dsouza RI, Spillman DR, McJunkin J, Porter RG, Shi J, Aksamitiene E, Sherwood M, Stiger L, Boppart SA. Handheld Briefcase Optical Coherence Tomography with Real-Time Machine Learning Classifier for Middle Ear Infections. BIOSENSORS-BASEL 2021; 11:bios11050143. [PMID: 34063695 PMCID: PMC8147830 DOI: 10.3390/bios11050143] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 04/29/2021] [Accepted: 04/30/2021] [Indexed: 12/13/2022]
Abstract
A middle ear infection is a prevalent inflammatory disease most common in the pediatric population, and its financial burden remains substantial. Current diagnostic methods are highly subjective, relying on visual cues gathered by an otoscope. To address this shortcoming, optical coherence tomography (OCT) has been integrated into a handheld imaging probe. This system can non-invasively and quantitatively assess middle ear effusions and identify the presence of bacterial biofilms in the middle ear cavity during ear infections. Furthermore, the complete OCT system is housed in a standard briefcase to maximize its portability as a diagnostic device. Nonetheless, interpreting OCT images of the middle ear often requires expertise in both OCT and middle ear infections, making it difficult for an untrained user to operate the system as an accurate stand-alone diagnostic tool in clinical settings. Here, we present a briefcase OCT system implemented with a real-time machine learning platform for middle ear infections. A random forest-based classifier can categorize images based on the presence of middle ear effusions and biofilms. This study demonstrates that our briefcase OCT system coupled with machine learning can provide user-invariant classification results of middle ear conditions, which may greatly improve the utility of this technology for the diagnosis and management of middle ear infections.
Affiliation(s)
- Jungeun Won
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Guillermo L. Monroy
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Roshan I. Dsouza
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Darold R. Spillman
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Jonathan McJunkin
- Department of Otolaryngology, Carle Foundation Hospital, Champaign, IL 61822, USA
- Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
- Ryan G. Porter
- Department of Otolaryngology, Carle Foundation Hospital, Champaign, IL 61822, USA
- Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
- Jindou Shi
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Edita Aksamitiene
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- MaryEllen Sherwood
- Stephens Family Clinical Research Institute, Carle Foundation Hospital, Urbana, IL 61801, USA
- Lindsay Stiger
- Stephens Family Clinical Research Institute, Carle Foundation Hospital, Urbana, IL 61801, USA
- Stephen A. Boppart
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
37
Wan KW, Wong CH, Ip HF, Fan D, Yuen PL, Fong HY, Ying M. Evaluation of the performance of traditional machine learning algorithms, convolutional neural network and AutoML Vision in ultrasound breast lesions classification: a comparative study. Quant Imaging Med Surg 2021; 11:1381-1393. [PMID: 33816176 DOI: 10.21037/qims-20-922] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Background In recent years, artificial intelligence has become increasingly popular in the medical field, from computer-aided diagnosis (CAD) to patient prognosis prediction. Because not all healthcare professionals have the expertise required to develop a CAD system, the aim of this study was to investigate the feasibility of using AutoML Vision, a highly automated machine learning platform, for future clinical applications by comparing it with commonly used CAD algorithms in the differentiation of benign and malignant breast lesions on ultrasound. Methods A total of 895 breast ultrasound images were obtained from two online open-access breast ultrasound image datasets. Traditional machine learning models (seven commonly used CAD algorithms) with three content-based radiomic features (Hu moments, color histogram, Haralick texture), and a convolutional neural network (CNN) model, were built in Python. AutoML Vision was trained on the Google Cloud Platform. Sensitivity, specificity, F1 score and average precision (AUCPR) were used to evaluate the diagnostic performance of the models. Cochran's Q test was used to evaluate the statistical significance among all studied models, and the McNemar test was used as the post-hoc test for pairwise comparisons. The proposed AutoML model was also compared with related studies involving similar medical imaging modalities in characterizing benign or malignant breast lesions. Results There was a significant difference in diagnostic performance among the studied traditional machine learning classifiers (P<0.05).
Random Forest achieved the best performance in the differentiation of benign and malignant breast lesions (accuracy: 90%; sensitivity: 71%; specificity: 100%; F1 score: 0.83; AUCPR: 0.90), which was statistically comparable to the performance of the CNN (accuracy: 91%; sensitivity: 82%; specificity: 96%; F1 score: 0.87; AUCPR: 0.88) and AutoML Vision (accuracy: 86%; sensitivity: 84%; specificity: 88%; F1 score: 0.83; AUCPR: 0.95) based on Cochran's Q test (P>0.05). Conclusions In this study, the performance of AutoML Vision was not significantly different from that of Random Forest (the best classifier among the traditional machine learning models) or the CNN. AutoML Vision showed relatively high accuracy, comparable to commonly used classifiers, which may support its future application in clinical practice.
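The McNemar post-hoc comparisons used above operate only on the discordant pairs (cases one classifier gets right and the other wrong). A stdlib sketch of the exact two-sided test (illustrative only; the study presumably used a statistics package):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test. b and c are the discordant counts:
    b = classifier A right / B wrong, c = A wrong / B right.
    Under H0 the discordant pairs split 50/50 (binomial with p = 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no disagreements: no evidence either way
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)  # double the smaller tail, capped at 1
```

For example, a 1-vs-9 split of disagreements gives P ≈ 0.021, while a 5-vs-5 split gives P = 1.0, matching the intuition that only lopsided disagreement counts indicate a real performance difference.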
Affiliation(s)
- Ka Wing Wan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Chun Hoi Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Ho Fung Ip
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Dejian Fan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Pak Leung Yuen
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Hoi Ying Fong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Michael Ying
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
38
Crowson MG, Hartnick CJ, Diercks GR, Gallagher TQ, Fracchia MS, Setlur J, Cohen MS. Machine Learning for Accurate Intraoperative Pediatric Middle Ear Effusion Diagnosis. Pediatrics 2021; 147:peds.2020-034546. [PMID: 33731369 DOI: 10.1542/peds.2020-034546] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 12/16/2020] [Indexed: 11/24/2022] Open
Abstract
OBJECTIVES Misdiagnosis of acute and chronic otitis media in children can result in significant consequences from either undertreatment or overtreatment. Our objective was to develop and train an artificial intelligence algorithm to accurately predict the presence of middle ear effusion in pediatric patients presenting to the operating room for myringotomy and tube placement. METHODS We trained a neural network to classify images as "normal" (no effusion) or "abnormal" (effusion present) using tympanic membrane images from children taken to the operating room with the intent of performing myringotomy and possible tube placement for recurrent acute otitis media or otitis media with effusion. Model performance was tested on held-out cases and with fivefold cross-validation. RESULTS The mean training time for the neural network model was 76.0 (SD ± 0.01) seconds. Our model approach achieved a mean image classification accuracy of 83.8% (95% confidence interval [CI]: 82.7-84.8). In support of this classification accuracy, the model produced an area under the receiver operating characteristic curve performance of 0.93 (95% CI: 0.91-0.94) and F1-score of 0.80 (95% CI: 0.77-0.82). CONCLUSIONS Artificial intelligence-assisted diagnosis of acute or chronic otitis media in children may generate value for patients, families, and the health care system by improving point-of-care diagnostic accuracy. With a small training data set composed of intraoperative images obtained at time of tympanostomy tube insertion, our neural network was accurate in predicting the presence of a middle ear effusion in pediatric ear cases. This diagnostic accuracy performance is considerably higher than human-expert otoscopy-based diagnostic performance reported in previous studies.
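The AUC and F1 figures reported above can be computed from first principles: AUC is the probability that a randomly chosen positive case scores above a randomly chosen negative one, and F1 is the harmonic mean of precision and recall. A minimal sketch (illustrative, not the study's pipeline):

```python
def roc_auc(y_true, scores):
    """AUC as the rank statistic: fraction of (positive, negative)
    pairs where the positive scores higher (ties count half)."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1_score(tp, fp, fn):
    """Harmonic mean of precision TP/(TP+FP) and recall TP/(TP+FN)."""
    return 2 * tp / (2 * tp + fp + fn)
```

This pairwise formulation makes clear why an AUC of 0.93 can coexist with a lower accuracy: AUC depends only on the ranking of scores, not on any particular decision threshold.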
Affiliation(s)
- Matthew G Crowson
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Christopher J Hartnick
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Gillian R Diercks
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Thomas Q Gallagher
- Department of Otolaryngology-Head and Neck Surgery, Eastern Virginia Medical School, Norfolk, Virginia
- Mary S Fracchia
- Department of Pediatrics, Massachusetts General Hospital for Children, Boston, Massachusetts
- Department of Pediatrics, Harvard Medical School, Harvard University, Boston, Massachusetts
- Jennifer Setlur
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Michael S Cohen
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Boston, Massachusetts
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
39
McMahon CM, Nieman CL, Thorne PR, Emmett SD, Bhutta MF. The inaugural World Report on Hearing: From barriers to a platform for change. Clin Otolaryngol 2021; 46:459-463. [PMID: 33733605 DOI: 10.1111/coa.13756] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 03/07/2021] [Indexed: 12/20/2022]
Abstract
The inaugural World Report on Hearing was recently published by the World Health Organisation, and outlines the burden of hearing loss, and strategies to overcome this through preventative and public health approaches. Here, we identify barriers to wide-scale adoption, including historic low prioritisation of hearing loss against other public health needs, a lack of a health workforce with relevant training, poor access to assistive technology, and individual and community-level stigma and misunderstanding. Overcoming these barriers will require multi-sector stakeholder collaboration, involving ear and hearing care professionals, patients, communities, industry and policymakers.
Affiliation(s)
- Catherine M McMahon
- HEAR Centre, Macquarie University, Sydney, NSW, Australia
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, NSW, Australia
- Carrie L Nieman
- Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Cochlear Center for Hearing & Public Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Peter R Thorne
- Section of Audiology, University of Auckland, Auckland, New Zealand
- Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
- Susan D Emmett
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, NC, USA
- Duke Global Health Institute, Durham, NC, USA
- Mahmood F Bhutta
- University Hospitals Sussex, Brighton, UK
- Brighton & Sussex Medical School, Brighton, UK
40
Wu Z, Lin Z, Li L, Pan H, Chen G, Fu Y, Qiu Q. Deep Learning for Classification of Pediatric Otitis Media. Laryngoscope 2020; 131:E2344-E2351. [PMID: 33369754 DOI: 10.1002/lary.29302] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 11/15/2020] [Accepted: 11/23/2020] [Indexed: 12/20/2022]
Abstract
OBJECTIVES/HYPOTHESIS To create a new strategy for monitoring pediatric otitis media (OM), we developed a brief, reliable, and objective method for automated classification of otoscopic images using convolutional neural networks (CNNs). STUDY DESIGN Prospective study. METHODS An otoscopic image classifier for pediatric OM was built using deep learning and transfer learning with two widely used CNN architectures, Xception and MobileNet-V2. Otoscopic images of acute otitis media (AOM), otitis media with effusion (OME), and normal ears were obtained from our institution. Among qualified otoendoscopic images, 10,703 were used for training and 1,500 for testing. In addition, 102 images captured by smartphone with a Wi-Fi-connected otoscope were used as a prospective test set to evaluate the model for home screening and monitoring. RESULTS For all diagnoses combined in the test set, the Xception and MobileNet-V2 models had similar overall accuracies of 97.45% (95% CI 96.81%-97.94%) and 95.72% (95% CI 95.12%-96.16%). The overall accuracies of the two models on smartphone images were 90.66% (95% CI 90.21%-90.98%) and 88.56% (95% CI 87.86%-90.05%). The class activation map results showed that the features extracted from smartphone images were the same as those from otoendoscopic images. CONCLUSIONS We have developed deep learning algorithms for successful automated classification of pediatric AOM and OME from otoscopic images. With a smartphone-enabled wireless otoscope, artificial intelligence may assist parents in early detection and continuous monitoring at home to decrease visit frequency. LEVEL OF EVIDENCE NA Laryngoscope, 131:E2344-E2351, 2021.
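The accuracies above are quoted with 95% confidence intervals. One common way to interval a proportion such as classification accuracy is the Wilson score method; the sketch below shows that formula (hypothetical here, since the abstract does not state which interval method was used):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion at ~95% confidence
    (z = 1.96 is the two-sided normal quantile)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] even for accuracies near 100%, which matters for results like the 97.45% reported here.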
Affiliation(s)
- Zebin Wu
- Department of Otolaryngology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Zheqi Lin
- Department of R&D, Shenzhen Accurate Technology Co., Ltd, Shenzhen, China
- Lan Li
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Hongguang Pan
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Guowei Chen
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Yuqing Fu
- Department of Otolaryngology, Shenzhen Children's Hospital, Shenzhen, China
- Qianhui Qiu
- Department of Otolaryngology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
41
Kirubarajan A, Taher A, Khan S, Masood S. Artificial intelligence in emergency medicine: A scoping review. J Am Coll Emerg Physicians Open 2020; 1:1691-1702. [PMID: 33392578 PMCID: PMC7771825 DOI: 10.1002/emp2.12277] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2020] [Revised: 09/04/2020] [Accepted: 09/22/2020] [Indexed: 01/08/2023] Open
Abstract
INTRODUCTION Despite the growing investment in and adoption of artificial intelligence (AI) in medicine, the applications of AI in an emergency setting remain unclear. This scoping review seeks to identify available literature regarding the applications of AI in emergency medicine. METHODS The scoping review was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for scoping reviews using Medline-OVID, EMBASE, CINAHL, and IEEE, with a double screening and extraction process. The search included articles published until February 28, 2020. Articles were excluded if they did not self-classify as studying an AI intervention, were not relevant to the emergency department (ED), or did not report outcomes or evaluation. RESULTS Of the 1483 original database citations, 395 were eligible for full-text evaluation. Of these articles, a total of 150 were included in the scoping review. The majority of included studies were retrospective in nature (n = 124, 82.7%), with only 3 (2.0%) prospective controlled trials. We found 37 (24.7%) interventions aimed at improving diagnosis within the ED. Among the 150 studies, 19 (12.7%) focused on diagnostic imaging within the ED. A total of 16 (10.7%) studies were conducted in the out-of-hospital environment (eg, emergency medical services, paramedics) with the remainder occurring either in the ED or the trauma bay. Of the 24 (16%) studies that had human comparators, there were 12 (8%) studies in which AI interventions outperformed clinicians in at least 1 measured outcome. CONCLUSION AI-related research is rapidly increasing in emergency medicine. There are several promising AI interventions that can improve emergency care, particularly for acute radiographic imaging and prediction-based diagnoses. Higher quality evidence is needed to further assess both short- and long-term clinical outcomes.
Affiliation(s)
- Abirami Kirubarajan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- Ahmed Taher
- Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Shawn Khan
- Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Sameer Masood
- Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Toronto General Hospital Research Institute, University Health Network, Toronto, Ontario, Canada
42
Tama BA, Kim DH, Kim G, Kim SW, Lee S. Recent Advances in the Application of Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery. Clin Exp Otorhinolaryngol 2020; 13:326-339. [PMID: 32631041 PMCID: PMC7669308 DOI: 10.21053/ceo.2020.00654] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 05/24/2020] [Accepted: 06/09/2020] [Indexed: 12/12/2022] Open
Abstract
This study presents an up-to-date survey of the use of artificial intelligence (AI) in the field of otorhinolaryngology, considering opportunities, research challenges, and research directions. We searched PubMed, the Cochrane Central Register of Controlled Trials, Embase, and the Web of Science. We initially retrieved 458 articles. The exclusion of non-English publications and duplicates yielded a total of 90 remaining studies. These 90 studies were divided into those analyzing medical images, voice, medical devices, and clinical diagnoses and treatments. Most studies (42.2%, 38/90) used AI for image-based analysis, followed by clinical diagnoses and treatments (24 studies). Each of the remaining two subcategories included 14 studies. Machine learning and deep learning have been extensively applied in the field of otorhinolaryngology. However, the performance of AI models varies and research challenges remain.
Affiliation(s)
- Bayu Adhi Tama
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Do Hyun Kim
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Gyuwon Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Soo Whan Kim
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Seungchul Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, Korea
43
Kim IK, Lee K, Park JH, Baek J, Lee WK. Classification of pachychoroid disease on ultrawide-field indocyanine green angiography using auto-machine learning platform. Br J Ophthalmol 2020; 105:856-861. [PMID: 32620684 DOI: 10.1136/bjophthalmol-2020-316108] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 05/21/2020] [Accepted: 06/08/2020] [Indexed: 01/08/2023]
Abstract
AIMS Automatic identification of pachychoroid may be used as an adjunctive method to confirm the condition and to help guide treatment of macular diseases. This study investigated the feasibility of classifying pachychoroid disease on ultra-widefield indocyanine green angiography (UWF ICGA) images using an automated machine-learning platform. METHODS Two models were trained on a set of 783 UWF ICGA images from patients with pachychoroid (n=376) and non-pachychoroid (n=349) diseases using AutoML Vision (Google). Pachychoroid was confirmed using quantitative and qualitative choroidal morphology on multimodal imaging by two retina specialists. Model 1 used the original images; Model 2 used left-eye images horizontally flipped to the orientation of the right eye, to increase accuracy by equalising the mirror symmetry of the right and left eyes. The performances were compared with those of human experts. RESULTS In total, 284, 279 and 220 images of central serous chorioretinopathy, polypoidal choroidal vasculopathy and neovascular age-related maculopathy were included. The precision and recall were 87.84% and 87.84% for Model 1 and 89.19% and 89.19% for Model 2, which were comparable to the results of the retinal specialists (90.91% and 95.24%) and superior to those of ophthalmic residents (68.18% and 92.50%). CONCLUSIONS An automated machine-learning platform can be used to classify pachychoroid on UWF ICGA images, after careful consideration of the pachychoroid definition and of the platform's limitations, including unstable performance on medical images.
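The left-right normalisation used for Model 2 is just a horizontal mirror applied to left-eye images, so every training image shares the right-eye orientation. A toy sketch on a row-major pixel grid (the study worked on full ICGA images, so the pixel values here are purely illustrative):

```python
def hflip(image):
    """Mirror a row-major 2D image (list of pixel rows) left-to-right."""
    return [row[::-1] for row in image]

def normalize_laterality(image, eye):
    """Flip left-eye ('L') images so all images share right-eye orientation;
    right-eye ('R') images pass through unchanged."""
    return hflip(image) if eye == "L" else image
```

Collapsing the two lateralities into one effectively doubles the per-orientation sample size, which is the mechanism behind the accuracy gain the abstract reports for Model 2.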
Affiliation(s)
- In Ki Kim
- Department of Ophthalmology, Bucheon St Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
- Kook Lee
- Department of Ophthalmology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Jae Hyun Park
- Department of Ophthalmology, Bucheon St Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
- Jiwon Baek
- Department of Ophthalmology, Bucheon St Mary's Hospital, College of Medicine, The Catholic University of Korea, Gyeonggi-do, Republic of Korea
- Won Ki Lee
- Nune Eye Center, Seoul, Republic of Korea