1
Bourdillon AT. Computer Vision-Radiomics & Pathognomics. Otolaryngol Clin North Am 2024:S0030-6665(24)00072-0. PMID: 38910065; DOI: 10.1016/j.otc.2024.05.003.
Abstract
The role of computer vision in extracting radiographic (radiomics) and histopathologic (pathognomics) features is an extension of the molecular biomarkers that have been foundational to our understanding across the spectrum of head and neck disorders. Especially within head and neck cancers, machine learning and deep learning applications have yielded advances in the characterization of tumor features, nodal features, and various outcomes. This review surveys the landscape of radiomic and pathognomic applications to inform future work addressing remaining gaps. Novel methodologies will be needed to integrate multidimensional data inputs so that disease features can comprehensively guide prognosis and, ultimately, clinical management.
Affiliation(s)
- Alexandra T Bourdillon
- Department of Otolaryngology-Head & Neck Surgery, University of California-San Francisco, San Francisco, CA 94115, USA
2
Ganeshan V, Bidwell J, Gyawali D, Nguyen TS, Morse J, Smith MP, Barton BM, McCoul ED. Enhancing nasal endoscopy: Classification, detection, and segmentation of anatomic landmarks using a convolutional neural network. Int Forum Allergy Rhinol 2024. PMID: 38853655; DOI: 10.1002/alr.23384.
Abstract
KEY POINTS A convolutional neural network (CNN)-based model can accurately localize and segment turbinates in images obtained during nasal endoscopy (NE). This model represents a starting point for algorithms that comprehensively interpret NE findings.
Affiliation(s)
- Vinayak Ganeshan
- Department of Otorhinolaryngology, Ochsner Health, New Orleans, Louisiana, USA
- Jonathan Bidwell
- Department of Otorhinolaryngology, Ochsner Health, New Orleans, Louisiana, USA
- Dipesh Gyawali
- Department of Otorhinolaryngology, Ochsner Health, New Orleans, Louisiana, USA
- Thinh S Nguyen
- Department of Otorhinolaryngology, Ochsner Health, New Orleans, Louisiana, USA
- Jonathan Morse
- Department of Otorhinolaryngology, Ochsner Health, New Orleans, Louisiana, USA
- Madeline P Smith
- Ochsner Clinical School, University of Queensland, New Orleans, Louisiana, USA
- Department of Otolaryngology, Tulane University School of Medicine, New Orleans, Louisiana, USA
- Blair M Barton
- Department of Otorhinolaryngology, Ochsner Health, New Orleans, Louisiana, USA
- Ochsner Clinical School, University of Queensland, New Orleans, Louisiana, USA
- Department of Otolaryngology, Tulane University School of Medicine, New Orleans, Louisiana, USA
- Edward D McCoul
- Department of Otorhinolaryngology, Ochsner Health, New Orleans, Louisiana, USA
- Ochsner Clinical School, University of Queensland, New Orleans, Louisiana, USA
- Department of Otolaryngology, Tulane University School of Medicine, New Orleans, Louisiana, USA
3
Bhattacharya D, Behrendt F, Becker BT, Maack L, Beyersdorff D, Petersen E, Petersen M, Cheng B, Eggert D, Betz C, Hoffmann AS, Schlaefer A. Self-supervised learning for classifying paranasal anomalies in the maxillary sinus. Int J Comput Assist Radiol Surg 2024. PMID: 38850438; DOI: 10.1007/s11548-024-03172-5.
Abstract
PURPOSE Paranasal anomalies, frequently identified in routine radiological screenings, exhibit diverse morphological characteristics. Because of this diversity, supervised learning methods require a large labelled dataset covering the full range of anomaly morphology. Self-supervised learning (SSL) can instead learn representations from unlabelled data. However, no SSL method has been designed for the downstream task of classifying paranasal anomalies in the maxillary sinus (MS). METHODS Our approach uses a 3D convolutional autoencoder (CAE) trained in an unsupervised anomaly detection (UAD) framework. Initially, we train the 3D CAE to reduce reconstruction errors when reconstructing normal MS images. This CAE is then applied to an unlabelled dataset to generate coarse anomaly locations in the form of residual MS images. Following this, a 3D convolutional neural network (CNN) reconstructs these residual images, which forms our SSL task. Lastly, we fine-tune the encoder part of the 3D CNN on a labelled dataset of normal and anomalous MS images. RESULTS The proposed SSL technique exhibits superior performance compared with existing generic self-supervised methods, especially in scenarios with limited annotated data. When trained on just 10% of the annotated dataset, our method achieves an area under the precision-recall curve (AUPRC) of 0.79 for the downstream classification task. This surpasses other methods: BYOL attains an AUPRC of 0.75, SimSiam 0.74, SimCLR 0.73, and masked autoencoding using SparK 0.75. CONCLUSION A self-supervised learning approach that inherently focuses on localizing paranasal anomalies proves advantageous, particularly when the subsequent task involves differentiating normal from anomalous maxillary sinuses. Access our code at https://github.com/mtec-tuhh/self-supervised-paranasal-anomaly
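For readers unfamiliar with UAD-style pipelines, the coarse anomaly localization step described in this abstract can be illustrated with a minimal sketch: a model trained only on normal anatomy reconstructs normal tissue well, so the voxel-wise residual is largest where anomalies appear. The toy "ideal" reconstructor, array sizes, and threshold rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def residual_anomaly_map(volume, reconstruction, threshold=None):
    """Coarse anomaly localization: the reconstruction error of a model
    trained only on normal anatomy highlights anomalous regions."""
    residual = np.abs(volume - reconstruction)
    if threshold is None:
        # simple data-driven cutoff: mean + 2 std of the residual
        threshold = residual.mean() + 2 * residual.std()
    return residual, residual > threshold

# toy 3D "scan": low-amplitude normal background plus one bright anomalous blob
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.05, size=(8, 8, 8))
vol[2:4, 2:4, 2:4] += 1.0                 # the synthetic anomaly
recon = np.zeros_like(vol)                # stand-in for a perfect normal-tissue reconstruction
residual, mask = residual_anomaly_map(vol, recon)
print(int(mask.sum()))                    # number of voxels flagged as anomalous
```

In the actual pipeline the residual images produced this way become the reconstruction target for the SSL pretext task.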
Affiliation(s)
- Debayan Bhattacharya
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Department of Otorhinolaryngology, Head and Neck Surgery and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Finn Behrendt
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Benjamin Tobias Becker
- Department of Otorhinolaryngology, Head and Neck Surgery and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Lennart Maack
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
- Dirk Beyersdorff
- Clinic and Polyclinic for Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Elina Petersen
- Population Health Research Department, University Heart and Vascular Center, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Marvin Petersen
- Clinic and Polyclinic for Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Bastian Cheng
- Clinic and Polyclinic for Neurology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Dennis Eggert
- Department of Otorhinolaryngology, Head and Neck Surgery and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Christian Betz
- Department of Otorhinolaryngology, Head and Neck Surgery and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Anna Sophie Hoffmann
- Department of Otorhinolaryngology, Head and Neck Surgery and Oncology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Technische Universitaet Hamburg, Hamburg, Germany
4
Ayoub NF, Glicksman JT. Artificial Intelligence in Rhinology. Otolaryngol Clin North Am 2024:S0030-6665(24)00068-9. PMID: 38821734; DOI: 10.1016/j.otc.2024.04.010.
Abstract
Rhinology, allergy, and skull base surgery are fields primed for the integration and implementation of artificial intelligence (AI). The heterogeneity of the disease processes within these fields highlights the opportunity for AI to augment clinical care and promote personalized medicine. Numerous research studies have demonstrated the development and clinical potential of AI models within the field, though most describe in silico evaluation without direct clinical implementation. The major themes of existing studies include diagnostic or clinical decision support, clustering patients into specific phenotypes or endotypes, predicting post-treatment outcomes, and surgical planning.
Affiliation(s)
- Noel F Ayoub
- Department of Otolaryngology-Head & Neck Surgery, Mass Eye and Ear/Harvard Medical School, Boston, MA, USA
- Jordan T Glicksman
- Department of Otolaryngology-Head & Neck Surgery, Mass Eye and Ear/Harvard Medical School, Boston, MA, USA
5
Hsu YC, Lin KT, Lee MS, Shen LS, Yeh TH, Lin YT. Multiple instance learning for eosinophil quantification of sinonasal histopathology images: A hierarchical determination on whole slide images. Int Forum Allergy Rhinol 2024. PMID: 38767581; DOI: 10.1002/alr.23365.
Abstract
KEY POINTS We proposed a hierarchical framework including an unsupervised candidate image selection and a weakly supervised patch image detection based on multiple instance learning (MIL) to effectively estimate eosinophil quantities in tissue samples from whole slide images. MIL is an innovative approach that can help deal with the variability in cell distribution detection and enable automated eosinophil quantification from sinonasal histopathological images with a high degree of accuracy. The study lays the foundation for further research and development in the field of automated histopathological image analysis, and validation on more extensive and diverse datasets will contribute to real-world application.
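The MIL aggregation idea underlying such a framework can be sketched simply: per-patch instance scores from a weak detector are pooled into a single slide-level estimate, so only slide-level labels are needed for supervision. The top-k mean pooling rule and the example scores below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def mil_bag_score(instance_scores, top_k=3):
    """Aggregate per-patch scores into a slide-level (bag-level) score.
    Top-k mean pooling is a common MIL aggregator: a slide is considered
    positive if its most suspicious patches score highly."""
    scores = np.sort(np.asarray(instance_scores, dtype=float))[::-1]
    return float(scores[:top_k].mean())

# hypothetical per-patch probabilities from a patch-level classifier
slide_a = [0.05, 0.10, 0.92, 0.88, 0.07, 0.95]  # a few strongly positive patches
slide_b = [0.05, 0.10, 0.12, 0.08, 0.07, 0.11]  # uniformly negative patches
print(mil_bag_score(slide_a), mil_bag_score(slide_b))
```

Max pooling (top_k=1) is the classical MIL assumption; averaging the top k patches is a common smoother variant.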
Affiliation(s)
- Yen-Chi Hsu
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Kao-Tsung Lin
- Department of Otolaryngology, National Taiwan University Hospital, Taipei, Taiwan
- Ming-Sui Lee
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Li-Sung Shen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Te-Huei Yeh
- Department of Otolaryngology, National Taiwan University Hospital, Taipei, Taiwan
- Yi-Tsen Lin
- Department of Otolaryngology, National Taiwan University Hospital, Taipei, Taiwan
6
Hildenbrand T, Weber RK. [Inverted papilloma of the nose and paranasal sinuses: Diagnosis, treatment, and malignant transformation]. HNO 2024;72:257-264. PMID: 38214715; DOI: 10.1007/s00106-023-01406-7.
Abstract
Inverted papillomas (IP) are benign tumors that show locally aggressive behavior, a high rate of recurrence, and a potential for malignant transformation. Specific radiological signs such as hyperostosis at the origin of the IP and convoluted cerebriform patterns, as well as the typical endoscopic appearance, can lead to diagnosis and enable preoperative planning of the surgical approach and extent of surgery. Endonasal endoscopic techniques are considered the gold standard, and the introduction of extended surgical techniques such as the prelacrimal approach, frontal drillout, or orbital transposition facilitates complete subperiosteal resection with preservation of important physiological structures. There is a risk of synchronous and metachronous squamous cell carcinomas (IP-SCC). Research focuses on radiological criteria to differentiate benign IP from IP-SCC, genetic and epigenetic factors in the process of malignant transformation, and estimation of the risk of IP progressing to IP-SCC.
Affiliation(s)
- Tanja Hildenbrand
- Klinik für Hals-, Nasen- und Ohrenheilkunde, Universitätsklinikum Freiburg, Killianstr. 5, 79106, Freiburg, Germany
- Rainer K Weber
- Sektion Nasennebenhöhlen- und Schädelbasischirurgie, Traumatologie, Klinik für Hals-, Nasen- und Ohrenheilkunde, Städtisches Klinikum Karlsruhe, Karlsruhe, Germany
7
Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023;12:6973. PMID: 38002588; PMCID: PMC10672270; DOI: 10.3390/jcm12226973.
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging across its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative for augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we searched MEDLINE for papers published until June 2023, using the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', and manually searched the reference sections of the included articles. Our search retrieved 121 related articles, which were subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored both to experienced specialists and to aspiring residents, covering deep learning algorithms in imaging techniques in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
8
Fujima N, Kamagata K, Ueda D, Fujita S, Fushimi Y, Yanagawa M, Ito R, Tsuboyama T, Kawamura M, Nakaura T, Yamada A, Nozaki T, Fujioka T, Matsui Y, Hirata K, Tatsugami F, Naganawa S. Current State of Artificial Intelligence in Clinical Applications for Head and Neck MR Imaging. Magn Reson Med Sci 2023;22:401-414. PMID: 37532584; PMCID: PMC10552661; DOI: 10.2463/mrms.rev.2023-0047.
Abstract
Owing primarily to its excellent soft-tissue contrast, head and neck MRI is widely used in clinical practice to assess a range of diseases. Artificial intelligence (AI)-based methodologies, particularly deep learning analyses using convolutional neural networks, have recently gained global recognition and have been extensively investigated in clinical research for their applicability across a range of categories within medical imaging, including head and neck MRI. Analytical approaches using AI have shown potential for addressing the clinical limitations associated with head and neck MRI. In this review, we focus primarily on the technical advancements in deep-learning-based methodologies and their clinical utility within the field of head and neck MRI, encompassing image acquisition and reconstruction, lesion segmentation, disease classification and diagnosis, and prognostic prediction for patients presenting with head and neck diseases. We then discuss the limitations of current deep-learning-based approaches and offer insights regarding future challenges in this field.
Affiliation(s)
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Tokyo, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Kumamoto, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Tokyo, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Hiroshima, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
9
Liu GS, Hodges JM, Yu J, Sung CK, Erickson-DiRenzo E, Doyle PC. End-to-end deep learning classification of vocal pathology using stacked vowels. Laryngoscope Investig Otolaryngol 2023;8:1312-1318. PMID: 37899847; PMCID: PMC10601590; DOI: 10.1002/lio2.1144.
Abstract
Objectives Advances in artificial intelligence (AI) technology have increased the feasibility of classifying voice disorders using voice recordings as a screening tool. This work builds upon previous models that take in single vowel recordings by analyzing multiple vowel recordings simultaneously to enhance prediction of vocal pathology. Methods Voice samples from the Saarbruecken Voice Database, including three sustained vowels (/a/, /i/, /u/) from 687 healthy human participants and 334 dysphonic patients, were used to train 1-dimensional convolutional neural network models for multiclass classification of healthy, hyperfunctional dysphonia, and laryngitis voice recordings. Three models were trained: (1) a baseline model that analyzed individual vowels in isolation, (2) a stacked vowel model that analyzed three vowels (/a/, /i/, /u/) at neutral pitch simultaneously, and (3) a stacked pitch model that analyzed the /a/ vowel at three pitches (low, neutral, and high) simultaneously. Results For multiclass classification of healthy, hyperfunctional dysphonia, and laryngitis voice recordings, the stacked vowel model demonstrated higher performance than the baseline and stacked pitch models (F1 score 0.81 vs. 0.77 and 0.78, respectively). Specifically, the stacked vowel model achieved higher performance for class-specific classification of hyperfunctional dysphonia voice samples than the baseline and stacked pitch models (F1 score 0.56 vs. 0.49 and 0.50, respectively). Conclusions This study demonstrates the feasibility and potential of analyzing multiple sustained vowel recordings simultaneously to improve AI-driven screening and classification of vocal pathology. The stacked vowel model architecture in particular offers promise to enhance such an approach.
Lay Summary AI analysis of multiple vowel recordings can improve classification of voice pathologies compared with models using a single sustained vowel and offer a strategy to enhance AI-driven screening of voice disorders. Level of Evidence 3.
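The stacked-input idea can be illustrated with a short sketch: each sustained vowel recording is cropped or zero-padded to a fixed length and stacked as an input channel, so a single 1D CNN forward pass sees all vowels at once. The 16 kHz rate, fixed length, and synthetic signals below are illustrative assumptions, not the study's preprocessing code.

```python
import numpy as np

def stack_vowels(recordings, length=16000):
    """Crop or zero-pad each vowel recording to a fixed number of samples,
    then stack the recordings as channels of one multi-channel input."""
    channels = []
    for rec in recordings:
        rec = np.asarray(rec, dtype=np.float32)[:length]   # crop if too long
        rec = np.pad(rec, (0, length - rec.shape[0]))      # zero-pad if too short
        channels.append(rec)
    return np.stack(channels)   # shape: (n_vowels, length)

# hypothetical ~1-second recordings of /a/, /i/, /u/ at an assumed 16 kHz rate
rng = np.random.default_rng(1)
a, i, u = (rng.normal(size=15000) for _ in range(3))
x = stack_vowels([a, i, u])
print(x.shape)  # (3, 16000)
```

A conventional 1D CNN then treats the vowel axis exactly like the color-channel axis of an image network.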
Affiliation(s)
- George S. Liu
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, California, USA
- Division of Laryngology, Stanford University School of Medicine, Stanford, California, USA
- Jordan M. Hodges
- Computer Science Department, School of Engineering, Stanford University, Stanford, California, USA
- Jingzhi Yu
- Biomedical Informatics, Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, California, USA
- C. Kwang Sung
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, California, USA
- Division of Laryngology, Stanford University School of Medicine, Stanford, California, USA
- Elizabeth Erickson-DiRenzo
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, California, USA
- Division of Laryngology, Stanford University School of Medicine, Stanford, California, USA
- Philip C. Doyle
- Department of Otolaryngology-Head and Neck Surgery, Stanford University School of Medicine, Stanford, California, USA
- Division of Laryngology, Stanford University School of Medicine, Stanford, California, USA
10
Park MJ, Cho W, Kim JH, Chung YS, Jang YJ, Yu MS. Preoperative Prediction of Sinonasal Inverted Papilloma-associated Squamous Cell Carcinoma (IP-SCC). Laryngoscope 2023;133:2502-2510. PMID: 36683553; DOI: 10.1002/lary.30583.
Abstract
INTRODUCTION Sinonasal inverted papillomas (IP) can undergo transformation into IP-associated squamous cell carcinomas (IP-SCC). A more aggressive treatment plan should be established when IP-SCC is suspected. However, the limited accuracy of preoperative punch biopsy in detecting IP-SCC raises the need for an additional diagnostic strategy. The present study aimed to identify clinicoradiological features that distinguish IP-SCC from IP. MATERIALS AND METHODS Postoperative surgical specimens from patients with confirmed IP or IP-SCC at a single tertiary medical center from 1997 to 2018 were retrospectively evaluated. Patients' demographic and clinical characteristics, preoperative in-office punch biopsy results, and preoperative computed tomography (CT) or magnetic resonance images were reviewed. Univariate and multivariate analyses were performed to assess the odds ratio (OR) associated with IP-SCC. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was calculated for the prediction model discriminating IP-SCC from IP. RESULTS The study included 44 patients with IP-SCC and 301 patients with IP. The diagnostic sensitivity of in-office punch biopsy for detecting IP-SCC was 70.7%. Multivariate analysis showed that factors significantly associated with IP-SCC included tobacco smoking >10 pack-years (adjusted OR [aOR]: 4.1), epistaxis (aOR: 3.4), facial pain (aOR: 4.2), bony destruction (aOR: 37.6), bony remodeling (aOR: 36.3), and invasion of adjacent structures (aOR: 31.6) (all p < 0.05). Combining all significantly related clinicoradiological features, the model discriminated IP-SCC from IP with an AUC of 0.974. CONCLUSION IP patients with a history of tobacco smoking, facial pain, epistaxis, and bony destruction, remodeling, or invasion of an adjacent structure on preoperative images may be at higher risk for IP-SCC. LEVEL OF EVIDENCE 3 Laryngoscope, 133:2502-2510, 2023.
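For readers less familiar with the AUC metric reported above, it can be computed directly from its probabilistic definition: the probability that a randomly chosen positive case (here, IP-SCC) receives a higher model score than a randomly chosen negative case (IP), with ties counting one half. The labels and scores below are illustrative, not study data.

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank (Mann-Whitney) formulation:
    fraction of positive/negative pairs ranked correctly,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical risk scores from a clinicoradiological prediction model
labels = [1, 1, 1, 0, 0, 0, 0]   # 1 = IP-SCC, 0 = IP
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))
```

An AUC of 0.974, as reported, means a randomly chosen IP-SCC case outranks a randomly chosen IP case about 97% of the time.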
Affiliation(s)
- Marn Joon Park
- Department of Otorhinolaryngology-Head & Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Inha University Medical Center, Inha University School of Medicine, Incheon, South Korea
- Wonki Cho
- Department of Otorhinolaryngology-Head & Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Ji Heui Kim
- Department of Otorhinolaryngology-Head & Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Yoo-Sam Chung
- Department of Otorhinolaryngology-Head & Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Yong Ju Jang
- Department of Otorhinolaryngology-Head & Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Myeong Sang Yu
- Department of Otorhinolaryngology-Head & Neck Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
11
Yui R, Takahashi M, Noda K, Yoshida K, Sakurai R, Ohira S, Omura K, Otori N, Wada K, Kojima H. Preoperative prediction of sinonasal papilloma by artificial intelligence using nasal video endoscopy: a retrospective study. Sci Rep 2023;13:12439. PMID: 37532726; PMCID: PMC10397257; DOI: 10.1038/s41598-023-38913-0.
Abstract
Sinonasal inverted papilloma (IP) carries a risk of recurrence and malignancy, and early diagnosis using nasal endoscopy is essential. We therefore developed a diagnostic system using artificial intelligence (AI) to identify sinonasal papilloma. Endoscopic surgery videos of 53 patients undergoing endoscopic sinus surgery were edited to train and evaluate deep neural network models, from which a diagnostic system was developed. The rate of correct diagnosis based on visual examination by otolaryngologists was evaluated using the same videos and compared with that of the AI diagnostic system. Main outcomes included the otolaryngologists' correct diagnosis rate, stratified by years of practice experience, relative to the AI diagnosis. The diagnostic system had an area under the curve of 0.874, accuracy of 0.843, false positive rate of 0.124, and false negative rate of 0.191. The average correct diagnosis rate among otolaryngologists was 69.4%, indicating that the AI was comparatively accurate. Although the number of cases was small, a highly accurate diagnostic system was created. Future studies with larger samples are warranted to improve the accuracy of the system and expand the range of detectable diseases for broader clinical application.
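The accuracy, false positive rate, and false negative rate reported above all follow directly from a binary confusion matrix. A minimal sketch, with made-up counts rather than the study's data:

```python
def endoscopy_metrics(tp, fp, tn, fn):
    """Accuracy, false positive rate, and false negative rate
    from the four cells of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)   # non-papilloma frames wrongly called papilloma
    fnr = fn / (fn + tp)   # papilloma frames missed by the classifier
    return accuracy, fpr, fnr

# hypothetical frame-level counts for an endoscopy video classifier
acc, fpr, fnr = endoscopy_metrics(tp=80, fp=10, tn=90, fn=20)
print(round(acc, 3), round(fpr, 3), round(fnr, 3))
```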
Affiliation(s)
- Ryosuke Yui
- Department of Otorhinolaryngology, Jikei University School of Medicine, Nishi-Shimbashi, Minato-ku, Tokyo, Japan
- Department of Otolaryngology, Head and Neck Surgery, Toho University Faculty of Medicine, Tokyo, Japan
- Masahiro Takahashi
- Department of Otorhinolaryngology, Jikei University School of Medicine, Nishi-Shimbashi, Minato-ku, Tokyo, Japan
- Katsuhiko Noda
- SIOS Technology Inc., Minami-Azabu, Minato-ku, Tokyo, Japan
- Kaname Yoshida
- SIOS Technology Inc., Minami-Azabu, Minato-ku, Tokyo, Japan
- Rinko Sakurai
- Department of Otorhinolaryngology, Jikei University School of Medicine, Nishi-Shimbashi, Minato-ku, Tokyo, Japan
- Shinya Ohira
- Department of Otolaryngology, Head and Neck Surgery, Toho University Faculty of Medicine, Tokyo, Japan
- Kazuhiro Omura
- Department of Otorhinolaryngology, Jikei University School of Medicine, Nishi-Shimbashi, Minato-ku, Tokyo, Japan
- Nobuyoshi Otori
- Department of Otorhinolaryngology, Jikei University School of Medicine, Nishi-Shimbashi, Minato-ku, Tokyo, Japan
- Kota Wada
- Department of Otolaryngology, Head and Neck Surgery, Toho University Faculty of Medicine, Tokyo, Japan
- Hiromi Kojima
- Department of Otorhinolaryngology, Jikei University School of Medicine, Nishi-Shimbashi, Minato-ku, Tokyo, Japan
12
Amanian A, Heffernan A, Ishii M, Creighton FX, Thamboo A. The Evolution and Application of Artificial Intelligence in Rhinology: A State of the Art Review. Otolaryngol Head Neck Surg 2023;169:21-30. PMID: 35787221; PMCID: PMC11110957; DOI: 10.1177/01945998221110076.
Abstract
OBJECTIVE To provide a comprehensive overview of the applications of artificial intelligence (AI) in rhinology, highlight its limitations, and propose strategies for its integration into surgical practice. DATA SOURCES Medline, Embase, CENTRAL, Ei Compendex, IEEE, and Web of Science. REVIEW METHODS English-language studies from inception until January 2022 focusing on any application of AI in rhinology were included. Study selection was independently performed by 2 authors; discrepancies were resolved by the senior author. Studies were categorized by rhinology theme, and data collection comprised type of AI utilized, sample size, and outcomes, including accuracy and precision among others. CONCLUSIONS A total of 5435 articles were identified. Following abstract and title screening, 130 articles underwent full-text review, and 59 were selected for analysis. Eleven studies were from the gray literature. Articles were stratified into image processing, segmentation, and diagnostics (n = 27); rhinosinusitis classification (n = 14); treatment and disease outcome prediction (n = 8); optimizing surgical navigation and phase assessment (n = 3); robotic surgery (n = 2); olfactory dysfunction (n = 2); and diagnosis of allergic rhinitis (n = 3). Most AI studies were published from 2016 onward (n = 45). IMPLICATIONS FOR PRACTICE This state-of-the-art review highlights the increasing applications of AI in rhinology. Next steps will entail multidisciplinary collaboration to ensure data integrity, ongoing validation of AI algorithms, and integration into clinical practice. Future research should be targeted at the interplay of AI with robotics and surgical education.
Affiliation(s)
- Ameen Amanian
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Austin Heffernan
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Masaru Ishii
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Andrew Thamboo
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada