1. Baldini C, Azam MA, Sampieri C, Ioppi A, Ruiz-Sevilla L, Vilaseca I, Alegre B, Tirrito A, Pennacchi A, Peretti G, Moccia S, Mattos LS. An automated approach for real-time informative frames classification in laryngeal endoscopy using deep learning. Eur Arch Otorhinolaryngol 2024; 281:4255-4264. [PMID: 38698163] [PMCID: PMC11266252] [DOI: 10.1007/s00405-024-08676-z]
Abstract
PURPOSE Informative image selection in laryngoscopy has the potential to improve automatic data extraction on its own, to enable selective data storage and a faster review process, and to work in combination with other artificial intelligence (AI) detection or diagnosis models. This paper aims to demonstrate the feasibility of AI-based automatic selection of informative laryngoscopy frames, capable of working in real time and providing visual feedback to guide the otolaryngologist during the examination. METHODS Several deep learning models were trained and tested on an internal dataset (n = 5147 images) and then tested on an external test set (n = 646 images) composed of both white light and narrow band images. Four videos were used to assess the real-time performance of the best-performing model. RESULTS ResNet-50, pre-trained with the pretext strategy, reached a precision of 95% vs. 97%, a recall of 97% vs. 89%, and an F1-score of 96% vs. 93% on the internal and external test sets, respectively (p = 0.062). The four testing videos are provided in the supplemental materials. CONCLUSION The deep learning model demonstrated excellent performance in identifying diagnostically relevant frames within laryngoscopic videos. With its solid accuracy and real-time capabilities, the system is a promising candidate for deployment in a clinical setting, either autonomously for objective quality control or in conjunction with other algorithms within a comprehensive AI toolset aimed at enhancing tumor detection and diagnosis.
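The precision, recall, and F1-score reported in this abstract follow the standard binary-classification definitions. As a minimal illustrative sketch (not the authors' code; labels below are hypothetical), they can be computed from true/false positive and negative counts:

```python
# Minimal sketch: precision, recall, and F1-score for a binary
# informative-frame classifier. 1 = informative, 0 = uninformative.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy frame labels, purely for illustration
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
p, r, f1 = binary_metrics(y_true, y_pred)
```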
Affiliation(s)
- Chiara Baldini
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Muhammad Adeel Azam
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Claudio Sampieri
- Department of Experimental Medicine (DIMES), University of Genoa, Genoa, Italy
- Department of Otolaryngology, Hospital Clínic, C. de Villarroel, 170, 08029, Barcelona, Spain
- Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Laura Ruiz-Sevilla
- Otorhinolaryngology Head-Neck Surgery Department, Hospital Universitari Joan XXIII de Tarragona, Tarragona, Spain
- Isabel Vilaseca
- Department of Otolaryngology, Hospital Clínic, C. de Villarroel, 170, 08029, Barcelona, Spain
- Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Translational Genomics and Target Therapies in Solid Tumors Group, Institut d'Investigacions Biomèdiques August Pi i Sunyer, IDIBAPS, Barcelona, Spain
- Faculty of Medicine, University of Barcelona, Barcelona, Spain
- Berta Alegre
- Department of Otolaryngology, Hospital Clínic, C. de Villarroel, 170, 08029, Barcelona, Spain
- Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Alessandro Tirrito
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alessia Pennacchi
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Giorgio Peretti
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Sara Moccia
- The BioRobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
2. Mamidi IS, Dunham ME, Adkins LK, McWhorter AJ, Fang Z, Banh BT. Laryngeal Cancer Screening During Flexible Video Laryngoscopy Using Large Computer Vision Models. Ann Otol Rhinol Laryngol 2024; 133:720-728. [PMID: 38755974] [DOI: 10.1177/00034894241253376]
Abstract
OBJECTIVE To develop an artificial intelligence-assisted computer vision model to screen for laryngeal cancer during flexible laryngoscopy. METHODS Using laryngeal images and flexible laryngoscopy video recordings, we developed computer vision models to classify video frames for usability and cancer screening. A separate model segments any identified lesions on the frames. We used these computer vision models to construct a video stream annotation system. This system classifies findings from flexible laryngoscopy as "potentially malignant" or "probably benign" and segments any detected lesions. Additionally, the model provides a confidence level for each classification. RESULTS The overall accuracy of the flexible laryngoscopy cancer screening model was 92%. For cancer screening, it achieved a sensitivity of 97.7% and a specificity of 76.9%. The segmentation model attained an average precision of 0.595 at a 0.50 intersection-over-union threshold. The confidence level for positive screening results can assist clinicians in counseling patients regarding the findings. CONCLUSION Our model is highly sensitive and adequately specific for laryngeal cancer screening. Segmentation helps endoscopists identify and describe potential lesions. Further optimization is required to enable the model's deployment in clinical settings for real-time annotation during flexible laryngoscopy.
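The segmentation result above is reported as average precision at a 0.50 intersection-over-union (IoU) threshold. The sketch below illustrates the IoU overlap measure itself, using hypothetical flat binary masks rather than the study's implementation:

```python
# Minimal sketch: intersection-over-union (IoU) between two binary masks,
# the overlap measure behind "average precision at 0.50 IoU".
# Masks here are hypothetical flat lists of 0/1 pixel values.

def iou(mask_a, mask_b):
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 or b == 1)
    return inter / union if union else 0.0

# A predicted lesion mask counts as a true positive at AP@0.50 only if
# its IoU with a ground-truth mask is at least 0.5.
pred = [1, 1, 1, 0, 0, 0]
gt   = [0, 1, 1, 1, 0, 0]
score = iou(pred, gt)  # 2 overlapping pixels / 4 pixels in the union = 0.5
```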
Affiliation(s)
- Ishwarya S Mamidi
- Department of Otolaryngology-Head and Neck Surgery, Louisiana State University Health Sciences Center, New Orleans, LA, USA
- Michael E Dunham
- Department of Otolaryngology-Head and Neck Surgery, Louisiana State University Health Sciences Center, New Orleans, LA, USA
- Lacey K Adkins
- Department of Otolaryngology-Head and Neck Surgery, Louisiana State University Health Sciences Center, New Orleans, LA, USA
- Andrew J McWhorter
- Department of Otolaryngology-Head and Neck Surgery, Louisiana State University Health Sciences Center, New Orleans, LA, USA
- Zhide Fang
- Biostatistics Program, School of Public Health, Louisiana State University Health Sciences Center, New Orleans, LA, USA
- Britney T Banh
- Our Lady of the Lake Voice Center, Our Lady of the Lake Regional Medical Center, Baton Rouge, LA, USA
3. Paderno A, Bedi N, Rau A, Holsinger CF. Computer Vision and Videomics in Otolaryngology-Head and Neck Surgery: Bridging the Gap Between Clinical Needs and the Promise of Artificial Intelligence. Otolaryngol Clin North Am 2024:S0030-6665(24)00074-4. [PMID: 38981809] [DOI: 10.1016/j.otc.2024.05.005]
Abstract
This article discusses the role of computer vision in otolaryngology, particularly in endoscopy and surgery. It covers recent applications of artificial intelligence (AI) in nonradiologic imaging within otolaryngology, noting benefits such as improved diagnostic accuracy and optimized therapeutic outcomes, as well as challenges such as the need for enhanced data curation and standardized research methodologies to advance clinical applications. Technical aspects are also covered, providing a detailed view of the progression from manual feature extraction to more complex AI models, including convolutional neural networks and vision transformers, and their potential application in clinical settings.
Affiliation(s)
- Alberto Paderno
- IRCCS Humanitas Research Hospital, via Manzoni 56, Rozzano, Milan 20089, Italy; Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, Milan 20072, Italy
- Nikita Bedi
- Division of Head and Neck Surgery, Department of Otolaryngology, Stanford University, Palo Alto, CA, USA
- Anita Rau
- Department of Biomedical Data Science, Stanford University, Palo Alto, CA, USA
4. Barlow J, Sragi Z, Rivera-Rivera G, Al-Awady A, Daşdöğen Ü, Courey MS, Kirke DN. The Use of Deep Learning Software in the Detection of Voice Disorders: A Systematic Review. Otolaryngol Head Neck Surg 2024; 170:1531-1543. [PMID: 38168017] [DOI: 10.1002/ohn.636]
Abstract
OBJECTIVE To summarize the use of deep learning in the detection of voice disorders using acoustic and laryngoscopic input, compare specific neural networks in terms of accuracy, and assess their effectiveness compared to expert clinical visual examination. DATA SOURCES Embase, MEDLINE, and Cochrane Central. REVIEW METHODS Databases were screened through November 11, 2023 for relevant studies. The inclusion criteria required studies to utilize a specified deep learning method, use laryngoscopy or acoustic input, and measure accuracy of binary classification between healthy patients and those with voice disorders. RESULTS Thirty-four studies met the inclusion criteria, with 18 focusing on voice analysis, 15 on imaging analysis, and 1 on both. Across the 18 acoustic studies, 21 programs were used for identification of organic and functional voice disorders. These technologies included 10 convolutional neural networks (CNNs), 6 multilayer perceptrons (MLPs), and 5 other neural networks. The binary classification systems yielded a mean accuracy of 89.0% overall, including 93.7% for MLP programs and 84.5% for CNNs. Among the 15 imaging analysis studies, a total of 23 programs were utilized, resulting in a mean accuracy of 91.3%. Specifically, the 20 CNNs achieved a mean accuracy of 92.6% compared to 83.0% for the 3 MLPs. CONCLUSION Deep learning models were shown to be highly accurate in the detection of voice pathology, with CNNs most effective for assessing laryngoscopy images and MLPs most effective for assessing acoustic input. While deep learning methods outperformed expert clinical exam in limited comparisons, further studies integrating external validation are necessary.
Affiliation(s)
- Joshua Barlow
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Zara Sragi
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Gabriel Rivera-Rivera
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Abdurrahman Al-Awady
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Ümit Daşdöğen
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Mark S Courey
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Diana N Kirke
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
5. Tie CW, Li DY, Zhu JQ, Wang ML, Wang JH, Chen BH, Li Y, Zhang S, Liu L, Guo L, Yang L, Yang LQ, Wei J, Jiang F, Zhao ZQ, Wang GQ, Zhang W, Zhang QM, Ni XG. Multi-Instance Learning for Vocal Fold Leukoplakia Diagnosis Using White Light and Narrow-Band Imaging: A Multicenter Study. Laryngoscope 2024. [PMID: 38801129] [DOI: 10.1002/lary.31537]
Abstract
OBJECTIVES Vocal fold leukoplakia (VFL) is a precancerous lesion of laryngeal cancer, and its endoscopic diagnosis poses challenges. We aim to develop an artificial intelligence (AI) model using white light imaging (WLI) and narrow-band imaging (NBI) to distinguish benign from malignant VFL. METHODS A total of 7057 images from 426 patients were used for model development and internal validation. Additionally, 1617 images from two other hospitals were used for external validation. Models based on the WLI and NBI modalities were trained using deep learning combined with a multi-instance learning (MIL) approach. Furthermore, 50 prospectively collected videos were used to evaluate real-time model performance. A human-machine comparison involving 100 patients and 12 laryngologists assessed the real-world effectiveness of the model. RESULTS The model achieved the highest area under the receiver operating characteristic curve (AUC) values of 0.868 and 0.884 in the internal and external validation sets, respectively. The AUC in the video validation set was 0.825 (95% CI: 0.704-0.946). In the human-machine comparison, AI significantly improved AUC and accuracy for all laryngologists (p < 0.05). With the assistance of AI, the diagnostic abilities and consistency of all laryngologists improved. CONCLUSIONS Our multicenter study developed an effective AI model using MIL and fusion of WLI and NBI images for VFL diagnosis, particularly aiding junior laryngologists. However, further optimization and validation are necessary to fully assess its potential impact in clinical settings. LEVEL OF EVIDENCE 3 Laryngoscope, 2024.
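In a multi-instance learning setup like the one described, each patient's set of endoscopic images forms a "bag" carrying a single diagnostic label. The sketch below assumes max-pooling over frame-level malignancy probabilities, a common MIL aggregation chosen here purely for illustration; the abstract does not specify the authors' exact pooling strategy:

```python
# Minimal sketch of multi-instance learning (MIL) aggregation: each patient
# is a "bag" of frame-level malignancy probabilities, and the bag-level
# score is the maximum over instances (one common MIL choice, assumed here).

def bag_score(instance_probs):
    """Aggregate per-image probabilities into one patient-level score."""
    return max(instance_probs)

def classify_bag(instance_probs, threshold=0.5):
    """Label the whole bag malignant if any instance is suspicious enough."""
    return 1 if bag_score(instance_probs) >= threshold else 0

# Hypothetical patient: mixed WLI/NBI frame scores from a frame-level model
wli_nbi_scores = [0.12, 0.08, 0.71, 0.33]
label = classify_bag(wli_nbi_scores)  # one suspicious frame flags the bag
```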
Affiliation(s)
- Cheng-Wei Tie
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- De-Yang Li
- The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Ji-Qing Zhu
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mei-Ling Wang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Jian-Hui Wang
- Department of Endoscopy, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, China
- Bing-Hong Chen
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Ying Li
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Sen Zhang
- Department of Otolaryngology Head and Neck Surgery, The First Hospital, Shanxi Medical University, Taiyuan, China
- Lin Liu
- Department of Otolaryngology Head and Neck Surgery, Dalian Friendship Hospital, Dalian, China
- Li Guo
- Department of Otolaryngology Head and Neck Surgery, The First Affiliated Hospital, College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Long Yang
- Department of Otolaryngology, The Second People's Hospital of Baoshan City, Baoshan, China
- Li-Qun Yang
- Department of Otolaryngology, The Second People's Hospital of Baoshan City, Baoshan, China
- Jiao Wei
- Department of Otolaryngology, Qujing Second People's Hospital of Yunnan Province, Qujing, China
- Feng Jiang
- Department of Otolaryngology, Kunming First People's Hospital, Kunming, China
- Zhi-Qiang Zhao
- Department of Otolaryngology, Baoshan People's Hospital, Baoshan, China
- Gui-Qi Wang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wei Zhang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Quan-Mao Zhang
- Department of Endoscopy, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, China
- Xiao-Guang Ni
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
6. Marchi F, Bellini E, Iandelli A, Sampieri C, Peretti G. Exploring the landscape of AI-assisted decision-making in head and neck cancer treatment: a comparative analysis of NCCN guidelines and ChatGPT responses. Eur Arch Otorhinolaryngol 2024; 281:2123-2136. [PMID: 38421392] [DOI: 10.1007/s00405-024-08525-z]
Abstract
PURPOSE Recent breakthroughs in natural language processing and machine learning, exemplified by ChatGPT, have spurred a paradigm shift in healthcare. Released by OpenAI in November 2022, ChatGPT rapidly gained global attention. Trained on massive text datasets, this large language model holds immense potential to revolutionize healthcare. However, existing literature often overlooks the need for rigorous validation and real-world applicability. METHODS This head-to-head comparative study assesses ChatGPT's capabilities in providing therapeutic recommendations for head and neck cancers. Simulating every scenario in the NCCN Guidelines, ChatGPT was queried on primary treatment, adjuvant treatment, and follow-up, with responses compared to the NCCN Guidelines. Performance metrics, including sensitivity, specificity, and F1 score, were employed for assessment. RESULTS The study includes 68 hypothetical cases and 204 clinical scenarios. ChatGPT exhibits promising capabilities in addressing NCCN-related queries, achieving high sensitivity and overall accuracy across primary treatment, adjuvant treatment, and follow-up. The study's metrics showcase robustness in providing relevant suggestions. However, a few inaccuracies are noted, especially in primary treatment scenarios. CONCLUSION Our study highlights the proficiency of ChatGPT in providing treatment suggestions. The model's alignment with the NCCN Guidelines sets the stage for a nuanced exploration of AI's evolving role in oncological decision support. However, challenges related to the interpretability of AI in clinical decision-making and the importance of clinicians understanding the underlying principles of AI models remain unexplored. As AI continues to advance, collaborative efforts between models and medical experts are deemed essential for unlocking new frontiers in personalized cancer care.
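Sensitivity and specificity in this setting score the model's suggestions against the guideline as ground truth. A minimal hedged sketch, with hypothetical concordance labels (1 = option recommended by the NCCN Guidelines, 0 = not recommended), not the study's data:

```python
# Minimal sketch: sensitivity and specificity for guideline concordance,
# scoring model suggestions against guideline labels treated as ground truth.

def sensitivity_specificity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical per-option labels across a handful of scenarios
guideline = [1, 1, 0, 1, 0, 0, 1, 1]
chatgpt   = [1, 1, 0, 0, 0, 1, 1, 1]
sens, spec = sensitivity_specificity(guideline, chatgpt)
```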
Affiliation(s)
- Filippo Marchi
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, 16132, Genoa, Italy
- Elisa Bellini
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, 16132, Genoa, Italy
- Andrea Iandelli
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Claudio Sampieri
- Department of Experimental Medicine (DIMES), University of Genoa, Genoa, Italy
- Department of Otolaryngology, Hospital Clínic, Barcelona, Spain
- Functional Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Giorgio Peretti
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Largo Rosanna Benzi, 10, 16132, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, 16132, Genoa, Italy
7. Dijkhuis TH, Bijlstra OD, Warmerdam MI, Faber RA, Linders DGJ, Galema HA, Broersen A, Dijkstra J, Kuppen PJK, Vahrmeijer AL, Mieog JSD. Semi-automatic standardized analysis method to objectively evaluate near-infrared fluorescent dyes in image-guided surgery. J Biomed Opt 2024; 29:026001. [PMID: 38312853] [PMCID: PMC10833575] [DOI: 10.1117/1.jbo.29.2.026001]
Abstract
Significance Near-infrared fluorescence imaging still lacks a standardized, objective method to evaluate fluorescent dye efficacy in oncological surgical applications. This causes difficulties in translating between preclinical and clinical studies with fluorescent dyes and in reproducing results between studies, which in turn hampers further clinical translation of novel fluorescent dyes. Aim Our aim is to develop and evaluate a semi-automatic standardized method to objectively assess fluorescent signals in resected tissue. Approach A standardized imaging procedure was designed and quantitative analysis methods were developed to evaluate non-targeted and tumor-targeted fluorescent dyes. The developed analysis methods included manual selection of a region of interest (ROI) on white light images, automated fluorescence signal ROI selection, and automatic quantitative image analysis. The proposed analysis method was then compared with a conventional analysis method, in which fluorescence signal ROIs were manually selected on fluorescence images. Dice similarity coefficients and intraclass correlation coefficients were calculated to determine the inter- and intraobserver variabilities of the ROI selections and of the determined signal- and tumor-to-background ratios. Results The proposed analysis method for non-targeted fluorescent dyes showed statistically significantly reduced variability when applied to indocyanine green specimens. For specimens with the targeted dye SGM-101, the variability of the background ROI selection was statistically significantly improved by implementing the proposed method. Conclusion Semi-automatic methods for standardized quantitative analysis of fluorescence images were successfully developed and showed promising results to further improve the reproducibility and standardization of clinical studies evaluating fluorescent dyes.
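The Dice similarity coefficient used here to quantify inter- and intraobserver ROI agreement can be sketched as follows; the pixel coordinate sets are hypothetical, not from the study:

```python
# Minimal sketch: Dice similarity coefficient between two ROI selections,
# the overlap measure used for inter-/intraobserver variability.
# ROIs are hypothetical sets of (row, col) pixel coordinates.

def dice(roi_a, roi_b):
    if not roi_a and not roi_b:
        return 1.0  # two empty selections agree perfectly by convention
    return 2 * len(roi_a & roi_b) / (len(roi_a) + len(roi_b))

# Two observers' ROI selections on a toy image grid
observer_1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
observer_2 = {(0, 1), (1, 0), (1, 1), (2, 1)}
agreement = dice(observer_1, observer_2)  # 2 * 3 / (4 + 4) = 0.75
```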
Affiliation(s)
- Tom H. Dijkhuis
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Okker D. Bijlstra
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Amsterdam University Medical Center, Cancer Center Amsterdam, Department of Surgery, Amsterdam, The Netherlands
- Mats I. Warmerdam
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Centre of Human Drug Research, Leiden, The Netherlands
- Robin A. Faber
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Daan G. J. Linders
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Hidde A. Galema
- Erasmus MC Cancer Institute, Department of Surgical Oncology and Gastrointestinal Surgery, Rotterdam, The Netherlands
- Alexander Broersen
- Leiden University Medical Center, Department of Radiology, Leiden, The Netherlands
- Jouke Dijkstra
- Leiden University Medical Center, Department of Radiology, Leiden, The Netherlands
- Peter J. K. Kuppen
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
- Jan Sven David Mieog
- Leiden University Medical Center, Department of Surgery, Leiden, The Netherlands
8. Wang SX, Li Y, Zhu JQ, Wang ML, Zhang W, Tie CW, Wang GQ, Ni XG. The Detection of Nasopharyngeal Carcinomas Using a Neural Network Based on Nasopharyngoscopic Images. Laryngoscope 2024; 134:127-135. [PMID: 37254946] [DOI: 10.1002/lary.30781]
Abstract
OBJECTIVE To construct and validate a deep convolutional neural network (DCNN)-based artificial intelligence (AI) system for the detection of nasopharyngeal carcinoma (NPC) using archived nasopharyngoscopic images. METHODS We retrospectively collected 14107 nasopharyngoscopic images (7108 NPCs and 6999 noncancers) to construct a DCNN model and prepared a validation dataset containing 3501 images (1744 NPCs and 1757 noncancers) from a single center between January 2009 and December 2020. The DCNN model was established using the You Only Look Once (YOLOv5) architecture. Four otolaryngologists were asked to review the images of the validation set to benchmark the DCNN model performance. RESULTS The DCNN model analyzed the 3501 images in 69.35 s. For the validation dataset, the precision, recall, accuracy, and F1 score of the DCNN model in the detection of NPCs were 0.845 ± 0.038, 0.942 ± 0.021, 0.920 ± 0.024, and 0.890 ± 0.045 on white light imaging (WLI), and 0.895 ± 0.045, 0.941 ± 0.018, 0.975 ± 0.013, and 0.918 ± 0.036 on narrow band imaging (NBI), respectively. The diagnostic performance of the DCNN model on WLI and NBI images was significantly higher than that of two junior otolaryngologists (p < 0.05). CONCLUSION The DCNN model showed better diagnostic outcomes for NPCs than those of junior otolaryngologists. Therefore, it could assist them in improving their diagnostic level and reducing missed diagnoses. LEVEL OF EVIDENCE 3 Laryngoscope, 134:127-135, 2024.
Affiliation(s)
- Shi-Xu Wang
- Department of Head and Neck Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ying Li
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Ji-Qing Zhu
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Mei-Ling Wang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Wei Zhang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Cheng-Wei Tie
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Gui-Qi Wang
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiao-Guang Ni
- Department of Endoscopy, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
9. Tsilivigkos C, Athanasopoulos M, Micco RD, Giotakis A, Mastronikolis NS, Mulita F, Verras GI, Maroulis I, Giotakis E. Deep Learning Techniques and Imaging in Otorhinolaryngology-A State-of-the-Art Review. J Clin Med 2023; 12:6973. [PMID: 38002588] [PMCID: PMC10672270] [DOI: 10.3390/jcm12226973]
Abstract
Over the last decades, the field of medicine has witnessed significant progress in artificial intelligence (AI), the Internet of Medical Things (IoMT), and deep learning (DL) systems. Otorhinolaryngology, and imaging in its various subspecialties, has not remained untouched by this transformative trend. As the medical landscape evolves, the integration of these technologies becomes imperative in augmenting patient care, fostering innovation, and actively participating in the ever-evolving synergy between computer vision techniques in otorhinolaryngology and AI. To that end, we conducted a thorough search on MEDLINE for papers published until June 2023, utilizing the keywords 'otorhinolaryngology', 'imaging', 'computer vision', 'artificial intelligence', and 'deep learning', and at the same time conducted manual searching in the references section of the articles included in our manuscript. Our search culminated in the retrieval of 121 related articles, which were subsequently subdivided into the following categories: imaging in head and neck, otology, and rhinology. Our objective is to provide a comprehensive introduction to this burgeoning field, tailored for both experienced specialists and aspiring residents in the domain of deep learning algorithms in imaging techniques in otorhinolaryngology.
Affiliation(s)
- Christos Tsilivigkos
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Michail Athanasopoulos
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Riccardo di Micco
- Department of Otolaryngology and Head and Neck Surgery, Medical School of Hannover, 30625 Hannover, Germany
- Aris Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
- Nicholas S. Mastronikolis
- Department of Otolaryngology, University Hospital of Patras, 265 04 Patras, Greece
- Francesk Mulita
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Georgios-Ioannis Verras
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Ioannis Maroulis
- Department of Surgery, University Hospital of Patras, 265 04 Patras, Greece
- Evangelos Giotakis
- 1st Department of Otolaryngology, National and Kapodistrian University of Athens, Hippocrateion Hospital, 115 27 Athens, Greece
Collapse
10
Sampieri C, Baldini C, Azam MA, Moccia S, Mattos LS, Vilaseca I, Peretti G, Ioppi A. Artificial Intelligence for Upper Aerodigestive Tract Endoscopy and Laryngoscopy: A Guide for Physicians and State-of-the-Art Review. Otolaryngol Head Neck Surg 2023; 169:811-829. [PMID: 37051892 DOI: 10.1002/ohn.343] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 03/03/2023] [Accepted: 03/23/2023] [Indexed: 04/14/2023]
Abstract
OBJECTIVE Endoscopic and laryngoscopic examination is paramount for the evaluation of benign lesions and cancer of the larynx, oropharynx, nasopharynx, nasal cavity, and oral cavity. Nevertheless, upper aerodigestive tract (UADT) endoscopy is intrinsically operator-dependent and lacks objective quality standards. At present, there is increased interest in artificial intelligence (AI) applications in this area to support physicians during the examination, thus enhancing diagnostic performance. The relative novelty of this research field poses a challenge for both reviewers and readers, as clinicians often lack a specific technical background. DATA SOURCES Four bibliographic databases were searched: PubMed, EMBASE, Cochrane, and Google Scholar. REVIEW METHODS A structured review of the current literature (up to September 2022) was performed. Search terms related to AI, machine learning (ML), and deep learning (DL) in UADT endoscopy and laryngoscopy were identified and queried by 3 independent reviewers. Citations of selected studies were also evaluated to ensure comprehensiveness. CONCLUSIONS Forty-one studies were included in the review. AI and computer vision techniques were used to achieve 3 fundamental tasks in this field: classification, detection, and segmentation. All papers were summarized and reviewed. IMPLICATIONS FOR PRACTICE This article comprehensively reviews the latest developments in the application of ML and DL in UADT endoscopy and laryngoscopy, as well as their future clinical implications. The technical basis of AI is also explained, providing guidance for nonexpert readers to allow critical appraisal of the evaluation metrics and the most relevant quality requirements.
Affiliation(s)
- Claudio Sampieri
- Department of Experimental Medicine (DIMES), University of Genoa, Genoa, Italy
- Functional Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Otorhinolaryngology Department, Hospital Clínic, Barcelona, Spain
- Chiara Baldini
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi (DIBRIS), University of Genoa, Genoa, Italy
- Muhammad Adeel Azam
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi (DIBRIS), University of Genoa, Genoa, Italy
- Sara Moccia
- Department of Excellence in Robotics and AI, The BioRobotics Institute, Pisa, Italy
- Leonardo S Mattos
- Department of Advanced Robotics, Istituto Italiano di Tecnologia, Genoa, Italy
- Isabel Vilaseca
- Functional Unit of Head and Neck Tumors, Hospital Clínic, Barcelona, Spain
- Otorhinolaryngology Department, Hospital Clínic, Barcelona, Spain
- Head Neck Clínic, Agència de Gestió d'Ajuts Universitaris i de Recerca, Barcelona, Catalunya, Spain
- Surgery and Medical-Surgical Specialties Department, Faculty of Medicine and Health Sciences, Universitat de Barcelona, Barcelona, Spain
- Translational Genomics and Target Therapies in Solid Tumors Group, Faculty of Medicine, Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- University of Barcelona, Barcelona, Spain
- Giorgio Peretti
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Alessandro Ioppi
- Unit of Otorhinolaryngology-Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
11
Paderno A, Villani FP, Fior M, Berretti G, Gennarini F, Zigliani G, Ulaj E, Montenegro C, Sordi A, Sampieri C, Peretti G, Moccia S, Piazza C. Instance segmentation of upper aerodigestive tract cancer: site-specific outcomes. Acta Otorhinolaryngol Ital 2023; 43:283-290. [PMID: 37488992 PMCID: PMC10366566 DOI: 10.14639/0392-100x-n2336] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 01/08/2023] [Indexed: 07/26/2023]
Abstract
Objective To achieve instance segmentation of upper aerodigestive tract (UADT) neoplasms using a deep learning (DL) algorithm, and to identify differences in its diagnostic performance in three different sites: larynx/hypopharynx, oral cavity and oropharynx. Methods A total of 1034 endoscopic images from 323 patients were examined under narrow band imaging (NBI). The Mask R-CNN algorithm was used for the analysis. The dataset split was: 935 training, 48 validation and 51 testing images. Dice Similarity Coefficient (Dsc) was the main outcome measure. Results Instance segmentation was effective in 76.5% of images. The mean Dsc was 0.90 ± 0.05. The algorithm correctly predicted 77.8%, 86.7% and 55.5% of lesions in the larynx/hypopharynx, oral cavity, and oropharynx, respectively. The mean Dsc was 0.90 ± 0.05 for the larynx/hypopharynx, 0.60 ± 0.26 for the oral cavity, and 0.81 ± 0.30 for the oropharynx. The analysis showed inferior diagnostic results in the oral cavity compared with the larynx/hypopharynx (p < 0.001). Conclusions The study confirms the feasibility of instance segmentation of UADT using DL algorithms and shows inferior diagnostic results in the oral cavity compared with other anatomic areas.
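The Dice Similarity Coefficient (Dsc) reported above measures the overlap between a predicted segmentation mask and its ground truth. For readers unfamiliar with the metric, a minimal NumPy sketch of the computation follows; the masks are hypothetical toy data, not taken from the study:

```python
import numpy as np

# Hypothetical binary segmentation masks (1 = lesion pixel), purely illustrative.
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])
truth = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [0, 1, 0]])

def dice(a, b):
    """Dice Similarity Coefficient: 2*|A ∩ B| / (|A| + |B|)."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

score = dice(pred, truth)  # overlap of 3 pixels: 2*3 / (3 + 4) ≈ 0.857
```

A Dsc of 1.0 means perfect overlap and 0.0 means no overlap, which is why mean per-site Dsc values such as 0.90 vs. 0.60 indicate a large difference in segmentation quality.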
Affiliation(s)
- Alberto Paderno
- Unit of Otorhinolaryngology, Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Milena Fior
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Giulia Berretti
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Francesca Gennarini
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Gabriele Zigliani
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Emanuela Ulaj
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Claudia Montenegro
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Alessandra Sordi
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
- Claudio Sampieri
- Unit of Otorhinolaryngology, Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Giorgio Peretti
- Unit of Otorhinolaryngology, Head and Neck Surgery, IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Sara Moccia
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Cesare Piazza
- Unit of Otorhinolaryngology, Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, School of Medicine, Brescia, Italy
12
Gong Y, Bao L, Xu T, Yi X, Chen J, Wang S, Pan Z, Huang P, Ge M. The tumor ecosystem in head and neck squamous cell carcinoma and advances in ecotherapy. Mol Cancer 2023; 22:68. [PMID: 37024932 PMCID: PMC10077663 DOI: 10.1186/s12943-023-01769-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 03/27/2023] [Indexed: 04/08/2023] Open
Abstract
The development of head and neck squamous cell carcinoma (HNSCC) is a multi-step process, and its survival depends on a complex tumor ecosystem, which not only promotes tumor growth but also helps protect tumor cells from immune surveillance. With advances in existing technologies and emerging models for ecosystem research, the evidence for cell-cell interplay is increasing. Herein, we discuss recent advances in understanding the interactions between tumor cells and the major components of the HNSCC tumor ecosystem, and summarize the mechanisms by which biotic and abiotic factors affect the tumor ecosystem. In addition, we review the emerging ecological treatment strategies for HNSCC based on existing studies.
Affiliation(s)
- Yingying Gong
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Lisha Bao
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Tong Xu
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Xiaofen Yi
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Jinming Chen
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Shanshan Wang
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Zongfu Pan
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Key Laboratory of Endocrine Gland Diseases of Zhejiang Province, Zhejiang Provincial People's Hospital, Hangzhou, China
- Clinical Research Center for Cancer of Zhejiang Province, Hangzhou, China
- Ping Huang
- Center for Clinical Pharmacy, Cancer Center, Department of Pharmacy, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Key Laboratory of Endocrine Gland Diseases of Zhejiang Province, Zhejiang Provincial People's Hospital, Hangzhou, China
- Clinical Research Center for Cancer of Zhejiang Province, Hangzhou, China
- Minghua Ge
- Otolaryngology & Head and Neck Center, Cancer Center, Department of Head and Neck Surgery, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, China
- Key Laboratory of Endocrine Gland Diseases of Zhejiang Province, Zhejiang Provincial People's Hospital, Hangzhou, China
- Clinical Research Center for Cancer of Zhejiang Province, Hangzhou, China
13
The Role of Peritumoral Depapillation and Its Impact on Narrow-Band Imaging in Oral Tongue Squamous Cell Carcinoma. Cancers (Basel) 2023; 15:cancers15041196. [PMID: 36831538 PMCID: PMC9954546 DOI: 10.3390/cancers15041196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Accepted: 02/08/2023] [Indexed: 02/16/2023] Open
Abstract
A recent study reported that the occurrence of depapillated mucosa surrounding oral tongue squamous cell carcinoma (OTSCC) is associated with perineural invasion (PNI). The present study evaluates the reliability of depapillation as a PNI predictor and how it may affect narrow-band imaging (NBI) performance. This retrospective study included patients with OTSCC submitted to radical surgery. Preoperative endoscopies were evaluated to identify the presence of depapillation, and differences in distribution between depapillation and clinicopathological variables were analyzed. NBI vascular patterns were recorded, and the impact of depapillation on these patterns was studied. Seventy-six patients were enrolled. On evaluation of the preoperative endoscopies, 40 (53%) patients had peritumoral depapillation, while 59 (78%) had a positive NBI pattern. Depapillation was strongly associated with PNI: 54% vs. 28% (p = 0.022). There was no particular association between NBI pattern and depapillation-associated tumors. The presence of depapillation did not affect the intralesional pattern detected by NBI, although no NBI-positive pattern was found within the depapillated area. Finally, NBI-guided resection margins were not affected by depapillation. In conclusion, peritumoral depapillation is a reliable predictor of PNI in OTSCC, and NBI margin detection is not impaired by depapillation.
14
Intraoperative Imaging Techniques to Improve Surgical Resection Margins of Oropharyngeal Squamous Cell Cancer: A Comprehensive Review of Current Literature. Cancers (Basel) 2023; 15:cancers15030896. [PMID: 36765858 PMCID: PMC9913756 DOI: 10.3390/cancers15030896] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 01/24/2023] [Accepted: 01/26/2023] [Indexed: 02/04/2023] Open
Abstract
Inadequate resection margins in head and neck squamous cell carcinoma surgery necessitate adjuvant therapies such as re-resection and radiotherapy, with or without chemotherapy, and imply increased morbidity and worse prognosis. On the other hand, taking larger margins by extending the resection also leads to avoidable morbidity. Oropharyngeal squamous cell carcinomas (OPSCCs) are often difficult to access; resections are limited by anatomy and functionality and thus carry an increased risk of close or positive margins. There is therefore a need to improve intraoperative assessment of resection margins. Several intraoperative techniques are available, but these often prolong operative time and are suitable only for a subgroup of patients. In recent years, new diagnostic tools have been the subject of investigation. This study reviews the available literature on intraoperative techniques to improve resection margins for OPSCC. A literature search was performed in Embase, PubMed, and Cochrane. Narrow band imaging (NBI), high-resolution microendoscopic imaging, confocal laser endomicroscopy, frozen section analysis (FSA), ultrasound (US), computed tomography (CT), (auto)fluorescence imaging (FI), and augmented reality (AR) have all been used for OPSCC. NBI, FSA, and US are the most commonly used and increase the rate of negative margins. Other techniques will become available in the future, of which fluorescence imaging has high potential for use in OPSCC.
15
Zhang Y, Wu Y, Pan D, Zhang Z, Jiang L, Feng X, Jiang Y, Luo X, Chen Q. Accuracy of narrow band imaging for detecting the malignant transformation of oral potentially malignant disorders: A systematic review and meta-analysis. Front Surg 2023; 9:1068256. [PMID: 36684262 PMCID: PMC9857777 DOI: 10.3389/fsurg.2022.1068256] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 11/28/2022] [Indexed: 01/09/2023] Open
Abstract
Objective Oral potentially malignant disorders (OPMDs) are a spectrum of diseases that harbor the potential for malignant transformation into oral squamous cell carcinoma (OSCC). Narrow band imaging (NBI) has been used clinically for the adjuvant diagnosis of OPMD and OSCC. This study aimed to comprehensively evaluate the diagnostic accuracy of NBI for malignant transformation of OPMDs by applying the intraepithelial papillary capillary loop (IPCL) classification. Methods Studies reporting the diagnostic validity of NBI in the detection of OPMD/OSCC were selected. Four databases were searched and 11 articles were included in the meta-analysis. We performed four subgroup analyses by defining IPCL I/II as a negative diagnostic result and no/mild dysplasia as a negative pathological outcome. Pooled data were analyzed using random-effects models, and meta-regression analysis was performed to explore heterogeneity. Results After pooled analysis of the four subgroups, we found that subgroup 1, which defined IPCL II and above as a clinically positive result, demonstrated the best overall diagnostic accuracy for the malignant transformation of OPMDs, with a sensitivity of 0.87 (95% CI 0.67-0.96, p < 0.001) and a specificity of 0.83 (95% CI 0.56-0.95, p < 0.001); the other 3 subgroups displayed relatively low sensitivity or specificity. Conclusions NBI is a promising and non-invasive adjunctive tool for identifying malignant transformation of OPMDs, and IPCL grading is currently a sound criterion for its clinical application. After excluding potentially false positive results, oral lesions classified as IPCL II or above should undergo biopsy for early and accurate diagnosis and management.
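The sensitivity and specificity figures pooled above derive from 2x2 diagnostic tables (test result vs. histopathological outcome). A minimal Python sketch of the per-study computation follows; the counts are hypothetical, chosen only so the point estimates match the 0.87/0.83 figures reported, and are not the review's pooled data:

```python
# Hypothetical 2x2 table: NBI result vs. histopathological outcome.
tp = 87  # NBI-positive (IPCL II+) lesions with confirmed dysplasia/malignancy
fn = 13  # NBI-negative lesions that were in fact dysplastic/malignant
tn = 83  # NBI-negative lesions with no/mild dysplasia
fp = 17  # NBI-positive lesions with no/mild dysplasia

sensitivity = tp / (tp + fn)  # true-positive rate: 87/100 = 0.87
specificity = tn / (tn + fp)  # true-negative rate: 83/100 = 0.83
```

A meta-analysis like this one does not simply average such per-study values; it pools them with random-effects models, which is why the confidence intervals above are wide.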
Affiliation(s)
- Xiaobo Luo
- Correspondence: Qianming Chen; Xiaobo Luo
16
Cobianchi L, Dal Mas F, Ansaloni L. Editorial: New Frontiers for Artificial Intelligence in Surgical Decision Making and its Organizational Impacts. Front Surg 2022; 9:933673. [PMID: 35800112 PMCID: PMC9253456 DOI: 10.3389/fsurg.2022.933673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Accepted: 06/06/2022] [Indexed: 11/16/2022] Open
Affiliation(s)
- Lorenzo Cobianchi
- Department of Clinical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Department of General Surgery, IRCCS Policlinico San Matteo Foundation, Pavia, Italy
- Francesca Dal Mas
- Department of Management, Ca’ Foscari University of Venice, Venice, Italy
- Luca Ansaloni
- Department of Clinical, Diagnostic and Pediatric Sciences, University of Pavia, Pavia, Italy
- Department of General Surgery, IRCCS Policlinico San Matteo Foundation, Pavia, Italy