1
Viscaino M, Talamilla M, Maass JC, Henríquez P, Délano PH, Auat Cheein C, Auat Cheein F. Color Dependence Analysis in a CNN-Based Computer-Aided Diagnosis System for Middle and External Ear Diseases. Diagnostics (Basel) 2022;12(4):917. [PMID: 35453965] [PMCID: PMC9031192] [DOI: 10.3390/diagnostics12040917]
Abstract
Artificial intelligence-assisted otologic diagnosis has attracted growing interest in the scientific community, since middle and external ear disorders are the most frequent diseases in daily ENT practice. Some efforts have focused on reducing medical errors and enhancing physician capabilities using conventional artificial vision systems; however, approaches based on multispectral analysis have not yet been addressed. Tissues of the tympanic membrane possess optical properties that define their characteristics in specific light spectra. This work explores color wavelength dependence in a model that classifies four middle and external ear conditions: normal, chronic otitis media, otitis media with effusion, and earwax plug. The model is built as a computer-aided diagnosis system based on a convolutional neural network architecture. We trained several models on single-channel images, taking each color wavelength separately. The results showed that a model using only the green channel achieves the best overall performance in terms of accuracy (92%), sensitivity (85%), specificity (95%), precision (86%), and F1-score (85%). Our findings suggest a suitable alternative for artificial intelligence diagnosis systems, compared with the roughly 50% overall misdiagnosis rate of non-specialist physicians.
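The per-channel training setup described in this abstract (one model per color wavelength) amounts to feeding the network single-channel slices of the RGB otoscopic image. A minimal numpy sketch of that preprocessing step, assuming H x W x 3 input arrays (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def single_channel_input(rgb_image: np.ndarray, channel: str) -> np.ndarray:
    """Keep one color channel of an H x W x 3 image as an H x W x 1 array,
    the shape a CNN expects for single-channel (grayscale-like) input."""
    idx = {"red": 0, "green": 1, "blue": 2}[channel]
    return rgb_image[:, :, idx:idx + 1]

# Toy 2x2 RGB image; each pixel is (R, G, B).
img = np.array([[[10, 20, 30], [40, 50, 60]],
                [[70, 80, 90], [100, 110, 120]]], dtype=np.uint8)

green = single_channel_input(img, "green")
print(green.shape)  # (2, 2, 1)
```

In the paper's setup, one such single-channel dataset per wavelength would be used to train a separate copy of the CNN; the green-channel model reportedly performed best.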
Affiliation(s)
- Michelle Viscaino
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390382, Chile
- Advanced Center of Electrical and Electronic Engineering, Valparaíso 2390136, Chile
- Matias Talamilla
- Interdisciplinary Program of Physiology and Biophysics, Institute of Biomedical Sciences (ICBM), Faculty of Medicine, University of Chile, Santiago 8320328, Chile
- Juan Cristóbal Maass
- Interdisciplinary Program of Physiology and Biophysics, Institute of Biomedical Sciences (ICBM), Faculty of Medicine, University of Chile, Santiago 8320328, Chile
- Department of Otolaryngology, Hospital Clínico Universidad de Chile, Faculty of Medicine, University of Chile, Santiago 8320328, Chile
- Unit of Otolaryngology, Department of Surgery, Clínica Alemana de Santiago, Facultad de Medicina Clínica Alemana-Universidad del Desarrollo, Santiago 0323142, Chile
- Pablo Henríquez
- Department of Otolaryngology, Hospital Clínico Universidad de Chile, Faculty of Medicine, University of Chile, Santiago 8320328, Chile
- Medical Sciences Doctorate Program, Postgraduate School, Faculty of Medicine, University of Chile, Santiago 8320328, Chile
- Paul H. Délano
- Advanced Center of Electrical and Electronic Engineering, Valparaíso 2390136, Chile
- Department of Otolaryngology, Hospital Clínico Universidad de Chile, Faculty of Medicine, University of Chile, Santiago 8320328, Chile
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago 8320328, Chile
- Cecilia Auat Cheein
- Facultad de Ciencias Médicas, Universidad Nacional de Santiago del Estero, Santiago del Estero 4200, Argentina
- Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390382, Chile
- Advanced Center of Electrical and Electronic Engineering, Valparaíso 2390136, Chile
- Correspondence:
2
Binol H, Niazi MKK, Elmaraghy C, Moberly AC, Gurcan MN. OtoXNet—automated identification of eardrum diseases from otoscope videos: a deep learning study for video-representing images. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07107-6]
3
Chawdhary G, Shoman N. Emerging artificial intelligence applications in otological imaging. Curr Opin Otolaryngol Head Neck Surg 2021;29:357-364. [PMID: 34459798] [DOI: 10.1097/moo.0000000000000754]
Abstract
PURPOSE OF REVIEW: To highlight the recent literature on artificial intelligence (AI) pertaining to otological imaging and to discuss future directions, obstacles and opportunities.
RECENT FINDINGS: The main themes in the recent literature centre around automated otoscopic image diagnosis and automated image segmentation for application in virtual reality surgical simulation and planning. Other applications that have been studied include identification of tinnitus MRI biomarkers, facial palsy analysis, intraoperative augmented reality systems, vertigo diagnosis and endolymphatic hydrops ratio calculation in Meniere's disease. Studies are presently at a preclinical, proof-of-concept stage.
SUMMARY: The recent literature on AI in otological imaging is promising and demonstrates the future potential of this technology in automating certain imaging tasks in a healthcare environment of ever-increasing demand and workload. Some studies have shown equivalence or superiority of the algorithm over physicians, albeit in narrowly defined realms. Future challenges in developing this technology include the compilation of large, high-quality annotated datasets, fostering strong collaborations between the health and technology sectors, testing the technology within real-world clinical pathways and bolstering trust among patients and physicians in this new method of delivering healthcare.
Affiliation(s)
- Gaurav Chawdhary
- ENT Department, Royal Hallamshire Hospital, Broomhall, Sheffield, UK
- Nael Shoman
- ENT Department, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia, Canada
4
Camalan S, Mahmood H, Binol H, Araújo ALD, Santos-Silva AR, Vargas PA, Lopes MA, Khurram SA, Gurcan MN. Convolutional Neural Network-Based Clinical Predictors of Oral Dysplasia: Class Activation Map Analysis of Deep Learning Results. Cancers (Basel) 2021;13(6):1291. [PMID: 33799466] [PMCID: PMC8001078] [DOI: 10.3390/cancers13061291]
Abstract
Oral cancer/oral squamous cell carcinoma is among the top ten most common cancers globally, with over 500,000 new cases and 350,000 associated deaths every year worldwide. There is a critical need for objective, novel technologies that facilitate early, accurate diagnosis. For this purpose, we have developed a method to classify images as "suspicious" and "normal" by performing transfer learning on Inception-ResNet-V2, and generated automated heat maps to highlight the regions of the images most likely to be involved in decision making. We have tested the developed method's feasibility on two independent datasets of clinical photographic images of 30 and 24 patients from the UK and Brazil, respectively. Both 10-fold cross-validation and leave-one-patient-out validation methods were performed to test the system, achieving accuracies of 73.6% (±19%) and 90.9% (±12%), F1-scores of 97.9% and 87.2%, and precision values of 95.4% and 99.3% at recall values of 100.0% and 81.1% on these two respective cohorts. This study presents several novel findings and approaches: the development and validation of our methods on two datasets collected in different countries, the demonstration that using patches instead of the whole lesion image leads to better performance, and the analysis, via class activation maps, of which regions of the images are predictive of each class.
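The class activation map (CAM) analysis named in this paper's title has a standard form: weight the last convolutional layer's feature maps by the dense-layer weights of the chosen class, then normalize the result into a heat map. A numpy-only sketch of that computation (an illustrative reconstruction under assumed array shapes, not the authors' code):

```python
import numpy as np

def class_activation_map(feature_maps: np.ndarray,
                         fc_weights: np.ndarray,
                         class_idx: int) -> np.ndarray:
    """feature_maps: H x W x C output of the last conv layer.
    fc_weights: C x num_classes weights of the dense layer that follows
    global average pooling. Returns an H x W heat map scaled to [0, 1]."""
    cam = feature_maps @ fc_weights[:, class_idx]  # weighted sum over channels
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 2x2 spatial grid, 3 channels, 2 classes.
fm = np.arange(12, dtype=float).reshape(2, 2, 3)
w = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
cam = class_activation_map(fm, w, class_idx=0)
print(cam.shape)  # (2, 2)
```

Upsampled to the input resolution and overlaid on the photograph, such a map highlights the regions driving the "suspicious" vs. "normal" decision, which is how the paper relates model output to clinically meaningful areas.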
Affiliation(s)
- Seda Camalan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA
- Correspondence: ; Tel.: +1-(336)-713-7675
- Hanya Mahmood
- School of Clinical Dentistry, The University of Sheffield, Sheffield S10 2TA, UK
- Hamidullah Binol
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA
- Anna Luiza Damaceno Araújo
- Oral Diagnosis Department, Semiology and Oral Pathology Areas, Piracicaba Dental School, University of Campinas (UNICAMP), Bairro Areão, Piracicaba 13414-903, São Paulo, Brazil
- Alan Roger Santos-Silva
- Oral Diagnosis Department, Semiology and Oral Pathology Areas, Piracicaba Dental School, University of Campinas (UNICAMP), Bairro Areão, Piracicaba 13414-903, São Paulo, Brazil
- Pablo Agustin Vargas
- Oral Diagnosis Department, Semiology and Oral Pathology Areas, Piracicaba Dental School, University of Campinas (UNICAMP), Bairro Areão, Piracicaba 13414-903, São Paulo, Brazil
- Marcio Ajudarte Lopes
- Oral Diagnosis Department, Semiology and Oral Pathology Areas, Piracicaba Dental School, University of Campinas (UNICAMP), Bairro Areão, Piracicaba 13414-903, São Paulo, Brazil
- Syed Ali Khurram
- School of Clinical Dentistry, The University of Sheffield, Sheffield S10 2TA, UK
- Metin N. Gurcan
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA