1. Duran-Sierra E, Cheng S, Cuenca R, Ahmed B, Ji J, Yakovlev VV, Martinez M, Al-Khalil M, Al-Enazi H, Jo JA. Clinical label-free endoscopic imaging of biochemical and metabolic autofluorescence biomarkers of benign, precancerous, and cancerous oral lesions. Biomed Opt Express 2022; 13:3685-3698. PMID: 35991912; PMCID: PMC9352301; DOI: 10.1364/boe.460081.
Abstract
Early detection is critical for improving the survival rate and quality of life of oral cancer patients; unfortunately, dysplastic and early-stage cancerous oral lesions are often difficult to distinguish from benign oral lesions during standard clinical oral examination. There is therefore a critical need for clinical technologies that enable reliable oral cancer screening. The autofluorescence properties of oral epithelial tissue provide quantitative information about the morphological, biochemical, and metabolic tissue and cellular alterations that accompany carcinogenesis. This study aimed to identify biochemical and metabolic autofluorescence biomarkers of oral dysplasia and cancer that can be imaged clinically using multispectral autofluorescence lifetime imaging (maFLIM) endoscopy. In vivo maFLIM clinical endoscopic images of benign, precancerous, and cancerous lesions from 67 patients were acquired with a novel maFLIM endoscope. Widefield maFLIM feature maps were generated, and statistical analyses were applied to identify maFLIM features providing contrast between dysplastic/cancerous and benign oral lesions. A total of 14 spectral and time-resolved maFLIM features provided such contrast, representing novel biochemical and metabolic autofluorescence biomarkers of oral epithelial dysplasia and cancer. To the best of our knowledge, this is the first demonstration of clinical widefield maFLIM endoscopic imaging of biochemical and metabolic autofluorescence biomarkers of oral dysplasia and cancer, supporting the potential of maFLIM endoscopy for early detection of oral cancer.
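The feature-screening step described in this abstract (testing whether a maFLIM feature separates dysplastic/cancerous from benign lesions) is typically done with a nonparametric two-sample test. The sketch below is a minimal illustration of such a screen, not the authors' pipeline; it uses a Mann-Whitney U test with a normal approximation and no tie correction, and all names are assumptions.

```python
import math

def mann_whitney_u(benign, suspicious):
    """Two-sided Mann-Whitney U test (normal approximation, no tie correction).

    Returns (U, p); a small p suggests the feature provides contrast
    between the two lesion groups.
    """
    n1, n2 = len(benign), len(suspicious)
    # U counts how often a benign value exceeds a suspicious one (ties count 0.5)
    u = sum(1.0 if b > s else 0.5 if b == s else 0.0
            for b in benign for s in suspicious)
    mean = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return u, p
```

In a screen like the one described, each candidate spectral or time-resolved feature would be tested this way across lesions, with a multiple-comparison correction applied afterward.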
Affiliation(s)
- Elvis Duran-Sierra, Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Shuna Cheng, Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Rodrigo Cuenca, School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Beena Ahmed, School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney 2052, Australia
- Jim Ji, Department of Electrical and Computer Engineering, Texas A&M University at Qatar, Doha 23874, Qatar
- Vladislav V. Yakovlev, Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Mathias Martinez, Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Moustafa Al-Khalil, Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Hussain Al-Enazi, Department of Otorhinolaryngology Head and Neck Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Javier A. Jo, School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
2. Caughlin K, Duran-Sierra E, Cheng S, Cuenca R, Ahmed B, Ji J, Martinez M, Al-Khalil M, Al-Enazi H, Cheng YSL, Wright J, Jo JA, Busso C. Aligning Small Datasets using Domain Adversarial Learning: Applications in Automated In Vivo Oral Cancer Diagnosis. IEEE J Biomed Health Inform 2022; 27:457-468. PMID: 36279347; PMCID: PMC10079633; DOI: 10.1109/jbhi.2022.3217015.
Abstract
Deep learning approaches for medical image analysis are limited by small dataset sizes due to factors such as patient privacy and the difficulty of obtaining expert labeling for each image. In medical imaging system development pipelines, system development and classification algorithm design often overlap with data collection, creating small disjoint datasets collected at numerous locations with differing protocols. In this setting, merging data from different collection centers increases the amount of training data; however, directly combining the datasets will likely fail due to domain shifts between imaging centers. In contrast to previous approaches that focus on a single dataset, we add a domain adaptation module to a neural network and train using multiple datasets. Our approach encourages domain invariance between two multispectral autofluorescence lifetime imaging (maFLIM) datasets of in vivo oral lesions collected with an imaging system currently in development. The two datasets differ in the sub-populations imaged and in the calibration procedures used during data collection. We mitigate these differences using a gradient reversal layer and a domain classifier. Our final model trained with two datasets substantially increases performance, including a significant increase in specificity. We also achieve a significant increase in average performance over the best baseline model trained with two domains (p = 0.0341). Our approach lays the foundation for faster development of computer-aided diagnostic systems and presents a feasible way to build a robust classifier that aligns images from multiple data centers in the presence of domain shifts.
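The gradient reversal layer mentioned in this abstract is the core mechanism of domain adversarial training: it acts as the identity in the forward pass but negates (and scales) gradients in the backward pass, so minimizing the domain classifier's loss pushes the feature extractor toward domain-invariant features. A minimal framework-free sketch follows; the class name and the lam parameter are illustrative, not taken from the paper.

```python
class GradientReversal:
    """Identity on the forward pass; scales gradients by -lam on the backward pass."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between the task loss and domain confusion

    def forward(self, features):
        # The domain classifier sees the shared features unchanged
        return list(features)

    def backward(self, grad_from_domain_classifier):
        # The feature extractor receives reversed (negated, scaled) gradients,
        # driving it to produce features the domain classifier cannot separate
        return [-self.lam * g for g in grad_from_domain_classifier]
```

In a full model of this kind, the shared features would feed the lesion classifier directly and the domain classifier through this layer, so one backward pass trains both objectives adversarially.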
Affiliation(s)
- Kayla Caughlin, Department of Electrical and Computer Engineering, The University of Texas at Dallas, Richardson, TX, USA
- Elvis Duran-Sierra, Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Shuna Cheng, Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Rodrigo Cuenca, School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA
- Beena Ahmed, School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia
- Jim Ji, Department of Electrical and Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
- Mathias Martinez, Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha, Qatar
- Moustafa Al-Khalil, Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha, Qatar
- Hussain Al-Enazi, Department of Otorhinolaryngology Head and Neck Surgery, Hamad Medical Corporation, Doha, Qatar
- John Wright, College of Dentistry, Texas A&M University, Dallas, TX, USA
- Javier A. Jo, School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA
- Carlos Busso, Department of Electrical and Computer Engineering, The University of Texas at Dallas, Richardson, TX, USA
3. Caughlin K, Duran-Sierra E, Cheng S, Cuenca R, Ahmed B, Ji J, Yakovlev VV, Martinez M, Al-Khalil M, Al-Enazi H, Jo JA, Busso C. End-to-End Neural Network for Feature Extraction and Cancer Diagnosis of In Vivo Fluorescence Lifetime Images of Oral Lesions. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3894-3897. PMID: 34892083; DOI: 10.1109/embc46164.2021.9629739.
Abstract
In contrast to previous studies that focused on classical machine learning algorithms and hand-crafted features, we present an end-to-end neural network classification method able to accommodate lesion heterogeneity for improved oral cancer diagnosis using multispectral autofluorescence lifetime imaging (maFLIM) endoscopy. Our method uses an autoencoder framework jointly trained with a classifier and designed to mitigate the overfitting that arises with small databases, as is often the case in healthcare applications. The autoencoder guides the feature extraction process through the reconstruction loss and enables the potential use of unsupervised data for domain adaptation and improved generalization, while the classifier ensures the extracted features are task-specific, providing discriminative information for the classification task. This data-driven feature extraction method automatically generates task-specific features directly from fluorescence decays, eliminating the need for iterative signal reconstruction. We validate our proposed neural network method against support vector machine (SVM) baselines, with our method showing a 6.5%-8.3% increase in sensitivity. Our results show that neural networks implementing data-driven feature extraction provide superior results and offer the capacity needed to target specific issues, such as inter-patient variability and the heterogeneity of oral lesions. Clinical relevance: We improve standard classification algorithms for in vivo diagnosis of oral cancer lesions from maFLIM for clinical use in cancer screening, reducing unnecessary biopsies and facilitating early detection of oral cancer.
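The joint training described in this abstract combines a reconstruction objective (the autoencoder) with a classification objective (the classifier) in a single loss. The sketch below illustrates such a combined loss in its simplest form; the function name, the MSE/cross-entropy choice, and the alpha weighting are assumptions for illustration, not details from the paper.

```python
import math

def joint_loss(decay, decay_recon, label, prob_cancer, alpha=1.0):
    """Autoencoder + classifier objective: mean-squared reconstruction error
    on the fluorescence decay plus alpha-weighted binary cross-entropy on
    the cancer/benign prediction."""
    recon = sum((a - b) ** 2 for a, b in zip(decay, decay_recon)) / len(decay)
    bce = -(label * math.log(prob_cancer)
            + (1 - label) * math.log(1 - prob_cancer))
    return recon + alpha * bce
```

Minimizing this sum encourages latent features that are both reconstructive (regularizing the network against overfitting on a small database) and discriminative for the diagnosis task.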
4. Duran-Sierra E, Cheng S, Cuenca-Martinez R, Malik B, Maitland KC, Lisa Cheng YS, Wright J, Ahmed B, Ji J, Martinez M, Al-Khalil M, Al-Enazi H, Jo JA. Clinical label-free biochemical and metabolic fluorescence lifetime endoscopic imaging of precancerous and cancerous oral lesions. Oral Oncol 2020; 105:104635. PMID: 32247986; DOI: 10.1016/j.oraloncology.2020.104635.
Abstract
INTRODUCTION: Incomplete head and neck cancer resection occurs in up to 85% of cases, increasing the odds of local recurrence and regional metastases; image-guided surgical tools for accurate, fast, in situ detection of positive margins during head and neck cancer resection surgery are therefore urgently needed. Oral epithelial dysplasia and cancer development are accompanied by morphological, biochemical, and metabolic tissue and cellular alterations that can modulate the autofluorescence properties of the oral epithelial tissue.
OBJECTIVE: This study aimed to test the hypothesis that autofluorescence biomarkers of oral precancer and cancer can be clinically imaged and quantified by means of multispectral fluorescence lifetime imaging (FLIM) endoscopy.
METHODS: Precancerous and cancerous lesions from 39 patients were imaged in vivo using a novel multispectral FLIM endoscope, and the resulting multispectral autofluorescence lifetime images were processed to generate widefield maps of biochemical and metabolic autofluorescence biomarkers of oral precancer and cancer.
RESULTS: Statistical analyses applied to the quantified multispectral FLIM endoscopy-based autofluorescence biomarkers indicated their potential to provide contrast between precancerous/cancerous and healthy oral epithelial tissue.
CONCLUSION: To the best of our knowledge, this study represents the first demonstration of label-free biochemical and metabolic clinical imaging of precancerous and cancerous oral lesions by means of widefield multispectral autofluorescence lifetime endoscopy. Future studies will focus on demonstrating the capabilities of endogenous multispectral FLIM endoscopy as an image-guided surgical tool for positive-margin detection during head and neck cancer resection surgery.
Affiliation(s)
- Elvis Duran-Sierra, Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Shuna Cheng, Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- Rodrigo Cuenca-Martinez, Department of Electrical and Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
- Bilal Malik, QT Ultrasound Labs, 3 Hamilton Landing, Suite 160, Novato, CA, USA
- Kristen C. Maitland, Department of Biomedical Engineering, Texas A&M University, College Station, TX, USA
- John Wright, Texas A&M College of Dentistry, Dallas, TX, USA
- Beena Ahmed, Department of Electrical and Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
- Jim Ji, Department of Electrical and Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
- Mathias Martinez, Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha, Qatar
- Moustafa Al-Khalil, Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha, Qatar
- Hussain Al-Enazi, Department of Otorhinolaryngology Head and Neck Surgery, Hamad Medical Corporation, Doha, Qatar
- Javier A. Jo, School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, USA