1. Ross T, Tanna R, Lilaonitkul W, Mehta N. Deep Learning for Automated Image Segmentation of the Middle Ear: A Scoping Review. Otolaryngol Head Neck Surg 2024; 170:1544-1554. PMID: 38667630. DOI: 10.1002/ohn.758.
Abstract
OBJECTIVE Convolutional neural networks (CNNs) have revolutionized medical image segmentation in recent years. This scoping review aimed to provide a comprehensive review of the literature describing automated image segmentation of the middle ear from computed tomography (CT) scans using CNNs. DATA SOURCES A comprehensive literature search, developed jointly with a medical librarian, was performed on Medline, Embase, Scopus, Web of Science, and Cochrane using Medical Subject Heading terms and keywords. Databases were searched from inception to July 2023. Reference lists of included papers were also screened. REVIEW METHODS Ten studies were included for analysis, comprising a total of 866 scans used in model training and testing. Thirteen different architectures were described to perform automated segmentation. The best Dice similarity coefficient (DSC) for the entire ossicular chain was 0.87, using ResNet. The highest DSC for any single structure was 0.93 for the incus, using 3D-V-Net. The most difficult structure to segment was the stapes, with a highest DSC of 0.84 using 3D-V-Net. CONCLUSIONS Numerous CNN architectures have demonstrated good performance in segmenting the middle ear. To overcome some of the difficulties in segmenting the stapes, we recommend developing an architecture trained on cone beam CTs, whose improved spatial resolution should assist in delineating this smallest ossicle. IMPLICATIONS FOR PRACTICE This has clinical applications for preoperative planning, diagnosis, and simulation.
Affiliation(s)
- Talisa Ross
- Department of Ear, Nose and Throat Surgery, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
- evidENT Team, Ear Institute, University College London, London, UK
- Ravina Tanna
- Department of Ear, Nose and Throat Surgery, Great Ormond Street Hospital, London, UK
- Nishchay Mehta
- evidENT Team, Ear Institute, University College London, London, UK
- Department of Ear, Nose and Throat Surgery, Royal National Ear Nose and Throat Hospital, London, UK

2. Gangil T, Rao D. Examining Diagnostic Errors in the Field of Otorhinolaryngology within the Challenging Landscape of Limited-Resource Healthcare. Indian J Otolaryngol Head Neck Surg 2024; 76:2714-2721. PMID: 38883455. PMCID: PMC11169281. DOI: 10.1007/s12070-024-04490-5.
Abstract
Diagnostic accuracy is vital in otorhinolaryngology for effective patient care, yet diagnostic mismatches between non-otorhinolaryngology clinicians and ENT specialists can occur. However, studies investigating such mismatches in low-resource healthcare environments are limited. This study aims to analyze diagnostic mismatches in otorhinolaryngology within a low-resource healthcare environment. A publicly available dataset recording diagnostic outcomes from non-otorhinolaryngology clinicians and ENT specialists was analyzed. The dataset included demographic characteristics, referral diagnoses, and final ENT specialist diagnoses. Descriptive statistics and appropriate statistical tests were employed to assess the prevalence of diagnostic mismatches and associated factors. The analysis comprised 1544 cases. The prevalence of diagnostic mismatches between non-otorhinolaryngology clinicians and ENT specialists was 67.4%. Certain specific ENT diseases demonstrated higher frequencies of diagnostic mismatches. Factors such as the specific diagnosis involved and patient compliance were found to influence the occurrence of mismatches. This study highlights the presence of diagnostic mismatches in otorhinolaryngology within a low-resource healthcare environment, and their prevalence underscores the need for improved diagnostic practices in such settings. Contributing factors should be explored further to develop strategies for enhancing diagnostic accuracy and reducing diagnostic errors in otorhinolaryngology.
Affiliation(s)
- Tarun Gangil
- Department of Radiotherapy and Oncology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104 India
- Divya Rao
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104 India

3. Lastrucci A, Wandael Y, Ricci R, Maccioni G, Giansanti D. The Integration of Deep Learning in Radiotherapy: Exploring Challenges, Opportunities, and Future Directions through an Umbrella Review. Diagnostics (Basel) 2024; 14:939. PMID: 38732351. PMCID: PMC11083654. DOI: 10.3390/diagnostics14090939.
Abstract
This study investigates, through an umbrella review, the transformative impact of deep learning (DL) in the field of radiotherapy, particularly in light of the accelerated developments prompted by the COVID-19 pandemic. The review followed a standard narrative checklist and a qualification process; the selection process identified 19 systematic review studies. Through an analysis of current research, the study highlights the revolutionary potential of DL algorithms in optimizing treatment planning, image analysis, and patient outcome prediction in radiotherapy. It underscores the necessity of further exploration into specific research areas to unlock the full capabilities of DL technology. Moreover, the study emphasizes the intricate interplay between digital radiology and radiotherapy, revealing how advancements in one field can significantly influence the other. This interdependence is crucial for addressing complex challenges and advancing the integration of cutting-edge technologies into clinical practice. Collaborative efforts among researchers, clinicians, and regulatory bodies are deemed essential to effectively navigate the evolving landscape of DL in radiotherapy. By fostering interdisciplinary collaborations and conducting thorough investigations, stakeholders can fully leverage the transformative power of DL to enhance patient care and refine therapeutic strategies. Ultimately, this promises to usher in a new era of personalized and optimized radiotherapy treatment for improved patient outcomes.
Affiliation(s)
- Andrea Lastrucci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Yannick Wandael
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Renzo Ricci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy

4. Alkojak Almansi A, Sugarova S, Alsanosi A, Almuhawas F, Hofmeyr L, Wagner F, Kedves E, Sriperumbudur K, Dhanasingh A, Kedves A. A novel radiological software prototype for automatically detecting the inner ear and classifying normal from malformed anatomy. Comput Biol Med 2024; 171:108168. PMID: 38432006. DOI: 10.1016/j.compbiomed.2024.108168.
Abstract
BACKGROUND To develop an effective radiological software prototype that could read Digital Imaging and Communications in Medicine (DICOM) files, crop the inner ear automatically from head computed tomography (CT) scans, and classify normal anatomy versus inner ear malformation (IEM). METHODS A retrospective analysis was conducted on 2053 patients from 3 hospitals. We extracted 1200 inner ear CTs for import and cropping, and for training, testing, and validating an artificial intelligence (AI) model. Automated cropping algorithms were developed to precisely isolate the inner ear volume, and a simple graphical user interface (GUI) was implemented for user interaction. Using the cropped CTs as input, a deep learning convolutional neural network (DL CNN) with 5-fold cross-validation was used to classify inner ear anatomy as normal or abnormal. Five specific IEM types (cochlear hypoplasia, ossification, incomplete partition types I and III, and common cavity) were included, with data distributed equally between classes. Both the cropping tool and the AI model were extensively validated. RESULTS The newly developed DICOM viewer/software achieved its objectives: reading CT files, automatically cropping inner ear volumes, and classifying them as normal or malformed. The cropping tool demonstrated an average accuracy of 92.25%. The DL CNN model achieved an area under the curve (AUC) of 0.86 (95% confidence interval: 0.81-0.91). Performance metrics for the AI model were: accuracy 0.812, precision 0.791, recall 0.8, and F1-score 0.766. CONCLUSION This study developed and validated a fully automated workflow for classifying normal versus abnormal inner ear anatomy using a combination of advanced image processing and deep learning techniques. The tool exhibited good diagnostic accuracy, suggesting potential application in risk stratification. However, supervision by qualified medical professionals remains essential when using this tool for clinical decision-making.
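The accuracy, precision, recall, and F1-score reported for the AI model are standard binary-classification metrics derived from the confusion matrix. A minimal sketch with invented labels (1 = malformed), not the study's data:

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))  # true positives
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))  # false positives
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))  # false negatives
    accuracy = float(np.mean(y_pred == y_true))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# Invented predictions over 8 scans.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75)
```

Note how precision and recall can diverge on imbalanced data, which is why the study reports all four values rather than accuracy alone.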
Affiliation(s)
- Abdulrahman Alkojak Almansi
- University of Pecs, Faculty of Engineering and Information Technology, Institute of Information and Electrical Technology, Pecs, Hungary
- Sima Sugarova
- St. Petersburg ENT and Speech Research Institute, St. Petersburg, Russia
- Abdulrahman Alsanosi
- King Saud University, King Abdullah Ear Specialist Center (KAESC), Department of Otolaryngology, Riyadh, Saudi Arabia
- Fida Almuhawas
- King Saud University, King Abdullah Ear Specialist Center (KAESC), Department of Otolaryngology, Riyadh, Saudi Arabia
- Louis Hofmeyr
- Division of Otorhinolaryngology, Stellenbosch University, Stellenbosch, South Africa
- Franca Wagner
- University Hospital Bern, University Institute for Diagnostic and Interventional Neuroradiology, Switzerland
- Emerencia Kedves
- University of Sopron, Doctoral School of Wood Sciences and Technologies, Sopron, Hungary
- Kiran Sriperumbudur
- MED-EL Medical Electronics GmbH, Department of Research and Development, Innsbruck, Austria
- Anandhan Dhanasingh
- MED-EL Medical Electronics GmbH, Department of Research and Development, Innsbruck, Austria
- Andras Kedves
- MED-EL Medical Electronics GmbH, Department of Research and Development, Innsbruck, Austria; University of Pecs, Faculty of Engineering and Information Technology, Institute of Information and Electrical Technology, Pecs, Hungary

5. Wu Q, Wang X, Liang G, Luo X, Zhou M, Deng H, Zhang Y, Huang X, Yang Q. Advances in Image-Based Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery: A Systematic Review. Otolaryngol Head Neck Surg 2023; 169:1132-1142. PMID: 37288505. DOI: 10.1002/ohn.391.
Abstract
OBJECTIVE To update the literature, provide a systematic review of image-based artificial intelligence (AI) applications in otolaryngology, highlight its advances, and propose future challenges. DATA SOURCES Web of Science, Embase, PubMed, and Cochrane Library. REVIEW METHODS Studies written in English and published between January 2020 and December 2022. Two independent authors screened the search results, extracted data, and assessed studies. RESULTS Overall, 686 studies were identified. After screening titles and abstracts, 325 full-text studies were assessed for eligibility, and 78 were included in this systematic review. The studies originated from 16 countries; the most represented were China (n = 29), Korea (n = 8), and the United States and Japan (n = 7 each). The most common area was otology (n = 35), followed by rhinology (n = 20), pharyngology (n = 18), and head and neck surgery (n = 5). The most frequent applications of AI in otology, rhinology, pharyngology, and head and neck surgery were chronic otitis media (n = 9), nasal polyps (n = 4), laryngeal cancer (n = 12), and head and neck squamous cell carcinoma (n = 3), respectively. The overall AI performance in accuracy, area under the curve, sensitivity, and specificity was 88.39 ± 9.78%, 91.91 ± 6.70%, 86.93 ± 11.59%, and 88.62 ± 14.03%, respectively. CONCLUSION This state-of-the-art review highlights the increasing applications of image-based AI in otorhinolaryngology-head and neck surgery. Next steps entail multicentre collaboration to ensure data reliability, ongoing optimization of AI algorithms, and integration into real-world clinical practice. Future studies should consider 3-dimensional (3D)-based AI, such as 3D surgical AI.
Affiliation(s)
- Qingwu Wu
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xinyue Wang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Guixian Liang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xin Luo
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Min Zhou
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Huiyi Deng
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yana Zhang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xuekun Huang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Qintai Yang
- Department of Otorhinolaryngology-Head and Neck Surgery, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Allergy, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China

6. Azeem M, Javaid S, Khalil RA, Fahim H, Althobaiti T, Alsharif N, Saeed N. Neural Networks for the Detection of COVID-19 and Other Diseases: Prospects and Challenges. Bioengineering (Basel) 2023; 10:850. PMID: 37508877. PMCID: PMC10416184. DOI: 10.3390/bioengineering10070850.
Abstract
The ability of artificial neural networks (ANNs) to learn, correct errors, and transform large amounts of raw data into beneficial medical decisions for treatment and care has made them increasingly popular tools for enhancing patient safety and quality of care. This paper therefore reviews the critical role of ANNs in providing valuable insights for patients' healthcare decisions and efficient disease diagnosis. We survey the different types of ANNs in the existing literature that advance their adaptation to complex applications. Specifically, we investigate advances in ANNs for predicting viral, cancer, skin, and COVID-19 diseases. Furthermore, we propose a deep convolutional neural network (CNN) model called ConXNet, based on chest radiography images, to improve the detection accuracy of COVID-19. ConXNet was trained and tested on a chest radiography image dataset obtained from Kaggle, achieving more than 97% accuracy and 98% precision, which is better than existing state-of-the-art models such as DeTraC, U-Net, COVID MTNet, and COVID-Net, which achieve 93.1%, 94.10%, 84.76%, and 90% accuracy and 94%, 95%, 85%, and 92% precision, respectively. The results show that the ConXNet model performed significantly well on a relatively large dataset compared with the aforementioned models. Moreover, ConXNet reduces time complexity by using dropout layers and batch normalization. Finally, we highlight future research directions and challenges, such as algorithmic complexity, insufficient available data, privacy and security, and the integration of biosensing with ANNs. These directions require considerable attention to broaden the scope of ANNs for medical diagnostic and treatment applications.
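The dropout and batch-normalization layers the authors credit with reducing time complexity are standard CNN components. A minimal NumPy forward-pass sketch of both (illustrative only, not the ConXNet implementation; the learnable scale/shift parameters and running statistics of a real batch-norm layer are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def batchnorm_dropout_forward(x, p_drop=0.5, eps=1e-5, training=True):
    """Batch-normalize each feature over the batch, then apply inverted dropout."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)    # zero-mean, unit-variance per feature
    if training:
        mask = rng.random(x_hat.shape) >= p_drop  # drop each unit with probability p_drop
        x_hat = x_hat * mask / (1.0 - p_drop)     # rescale to preserve expected activation
    return x_hat

batch = rng.normal(size=(8, 4))  # 8 samples, 4 features
out = batchnorm_dropout_forward(batch)
print(out.shape)  # (8, 4)
```

At inference time (`training=False`) dropout is disabled, which is why the inverted-dropout rescaling is applied during training rather than at test time.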
Affiliation(s)
- Muhammad Azeem
- School of Science, Engineering & Environment, University of Salford, Manchester M5 4WT, UK
- Shumaila Javaid
- Department of Control Science and Engineering, College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Ruhul Amin Khalil
- Department of Electrical Engineering, University of Engineering and Technology, Peshawar 25120, Pakistan
- Department of Electrical and Communication Engineering, United Arab Emirates University (UAEU), Al-Ain 15551, United Arab Emirates
- Hamza Fahim
- Department of Control Science and Engineering, College of Electronics and Information Engineering, Tongji University, Shanghai 201804, China
- Turke Althobaiti
- Department of Computer Science, Faculty of Science, Northern Border University, Arar 73222, Saudi Arabia
- Nasser Alsharif
- Department of Administrative and Financial Sciences, Ranyah University College, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Nasir Saeed
- Department of Electrical and Communication Engineering, United Arab Emirates University (UAEU), Al-Ain 15551, United Arab Emirates

7. Muacevic A, Adler JR, Jones RH, Collins HR, Kabakus IM, McBee MP. COVID-19 Diagnosis on Chest Radiograph Using Artificial Intelligence. Cureus 2022; 14:e31897. PMID: 36579217. PMCID: PMC9792347. DOI: 10.7759/cureus.31897.
Abstract
BACKGROUND The coronavirus disease 2019 (COVID-19) pandemic has disrupted the world since 2019, causing significant morbidity and mortality in developed and developing countries alike. Although substantial resources have been diverted to developing diagnostic, preventative, and treatment measures, the availability and efficacy of these tools vary across countries. We sought to assess the ability of commercial artificial intelligence (AI) technology to diagnose COVID-19 by analyzing chest radiographs. MATERIALS AND METHODS Chest radiographs taken from symptomatic patients within two days of polymerase chain reaction (PCR) testing were assessed for COVID-19 infection by board-certified radiologists and by commercially available AI software. Sixty patients with negative and 60 with positive COVID-19 reverse transcription-polymerase chain reaction (RT-PCR) tests were chosen. Results were compared against the PCR results for accuracy and analyzed statistically using receiver operating characteristic (ROC) curves and area under the curve (AUC) values. RESULTS A total of 120 chest radiographs (60 from RT-PCR-positive and 60 from RT-PCR-negative patients) were analyzed. The AI software performed significantly better than chance (p = 0.001), and its ROC curve did not differ significantly from the radiologists' (p = 0.78). CONCLUSION Commercially available AI software was not inferior to trained radiologists in identifying COVID-19 cases from radiographs. While RT-PCR testing remains the standard, current AI advances help analyze chest radiographs correctly to diagnose COVID-19 infection.
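The AUC values used to compare the AI software against radiologists can be computed directly from case-level scores via the rank (Mann-Whitney) formulation: the AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counted as half. A small sketch with invented scores, not the study's data:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via pairwise comparisons: (wins + 0.5 * ties) / (n_pos * n_neg)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Each positive/negative pair contributes 1 for a win, 0.5 for a tie.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Invented COVID-19 probability scores: 4 RT-PCR-positive, 4 negative cases.
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.3, 0.6, 0.4, 0.2, 0.1])
print(roc_auc(y, s))  # 0.875
```

An AUC of 0.5 corresponds to chance, which is what the "significantly better than chance" comparison in the study tests against.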