1. Li J, Kot WY, McGrath CP, Chan BWA, Ho JWK, Zheng LW. Diagnostic accuracy of artificial intelligence assisted clinical imaging in the detection of oral potentially malignant disorders and oral cancer: a systematic review and meta-analysis. Int J Surg 2024; 110:5034-5046. PMID: 38652301; PMCID: PMC11325952; DOI: 10.1097/js9.0000000000001469.
Abstract
BACKGROUND The objective of this study is to examine the application of artificial intelligence (AI) algorithms in detecting oral potentially malignant disorders (OPMD) and oral cancerous lesions, and to evaluate the accuracy variations among the different imaging tools employed in these diagnostic processes. MATERIALS AND METHODS A systematic search was conducted in four databases: Embase, Web of Science, PubMed, and Scopus. The inclusion criteria were: studies using machine learning algorithms to provide diagnostic information on specific oral lesions, prospective or retrospective design, and inclusion of OPMD. Sensitivity and specificity analyses were also required. Forest plots were generated to display the overall diagnostic odds ratio (DOR), sensitivity, specificity, negative predictive values, and summary receiver operating characteristic (SROC) curves. Meta-regression analysis was conducted to examine potential differences among the imaging tools. RESULTS The overall DOR for AI-based screening of OPMD and oral mucosal cancerous lesions versus normal mucosa was 68.438 (95% CI: 39.484-118.623; I² = 86%). The area under the SROC curve was 0.938, indicating excellent diagnostic performance. AI-assisted screening showed a sensitivity of 89.9% (95% CI: 86.6%-92.5%; I² = 81%), a specificity of 89.2% (95% CI: 85.1%-92.2%; I² = 79%), and a high negative predictive value of 89.5% (95% CI: 85.1%-92.7%; I² = 96%). Meta-regression analysis revealed no significant difference among the three imaging tools. After generating a GOSH plot, the DOR was calculated to be 49.30 and the area under the SROC curve was 0.877. Sensitivity, specificity, and negative predictive value were 90.5% (95% CI: 87.3%-92.9%; I² = 4%), 87.0% (95% CI: 81.3%-91.2%; I² = 49%), and 90.1% (95% CI: 86.0%-93.1%; I² = 57%), respectively. Subgroup analysis showed that clinical photography had the highest diagnostic accuracy.
CONCLUSIONS AI-based detection using clinical photography shows a high DOR and is readily accessible in an era with billions of phone subscribers worldwide. This suggests significant potential for AI to raise the diagnostic capability of general practitioners to the level of specialists using clinical photographs, without the need for expensive specialized imaging equipment.
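As a quick orientation for readers less familiar with the diagnostic odds ratio, the sketch below (ours, not part of the paper; the function name is hypothetical) shows how a DOR follows from a sensitivity/specificity pair:

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = positive likelihood ratio / negative likelihood ratio."""
    positive_lr = sensitivity / (1 - specificity)   # true-positive rate / false-positive rate
    negative_lr = (1 - sensitivity) / specificity   # false-negative rate / true-negative rate
    return positive_lr / negative_lr

# Pooled estimates reported above: sensitivity 89.9%, specificity 89.2%.
# This gives ~73.5, close to but not identical with the reported pooled DOR
# of 68.438, because a meta-analysis pools the DOR across studies rather
# than deriving it from the pooled sensitivity and specificity.
dor = diagnostic_odds_ratio(0.899, 0.892)
```

The gap between ~73.5 and 68.438 is expected: study-level DORs are pooled directly, so the pooled DOR need not equal the DOR implied by the pooled operating point.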
Affiliation(s)
- JingWen Li
  - Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong
- Wai Ying Kot
  - Faculty of Dentistry, The University of Hong Kong
- Colman Patrick McGrath
  - Division of Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong
- Bik Wan Amy Chan
  - Department of Anatomical and Cellular Pathology, Prince of Wales Hospital, The Chinese University of Hong Kong
- Joshua Wing Kei Ho
  - School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong
  - Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, People's Republic of China
- Li Wu Zheng
  - Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong
2. Alajaji SA, Khoury ZH, Jessri M, Sciubba JJ, Sultan AS. An Update on the Use of Artificial Intelligence in Digital Pathology for Oral Epithelial Dysplasia Research. Head Neck Pathol 2024; 18:38. PMID: 38727841; PMCID: PMC11087425; DOI: 10.1007/s12105-024-01643-4.
Abstract
INTRODUCTION Oral epithelial dysplasia (OED) is a precancerous histopathological finding which is considered the most important prognostic indicator for determining the risk of malignant transformation into oral squamous cell carcinoma (OSCC). The gold standard for diagnosis and grading of OED is through histopathological examination, which is subject to inter- and intra-observer variability, impacting accurate diagnosis and prognosis. The aim of this review article is to examine the current advances in digital pathology for artificial intelligence (AI) applications used for OED diagnosis. MATERIALS AND METHODS We included studies that used AI for diagnosis, grading, or prognosis of OED on histopathology images or intraoral clinical images. Studies utilizing imaging modalities other than routine light microscopy (e.g., scanning electron microscopy), or immunohistochemistry-stained histology slides, or immunofluorescence were excluded from the study. Studies not focusing on oral dysplasia grading and diagnosis, e.g., to discriminate OSCC from normal epithelial tissue were also excluded. RESULTS A total of 24 studies were included in this review. Nineteen studies utilized deep learning (DL) convolutional neural networks for histopathological OED analysis, and 4 used machine learning (ML) models. Studies were summarized by AI method, main study outcomes, predictive value for malignant transformation, strengths, and limitations. CONCLUSION ML/DL studies for OED grading and prediction of malignant transformation are emerging as promising adjunctive tools in the field of digital pathology. These adjunctive objective tools can ultimately aid the pathologist in more accurate diagnosis and prognosis prediction. However, further supportive studies that focus on generalization, explainable decisions, and prognosis prediction are needed.
Affiliation(s)
- Shahd A Alajaji
  - Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W. Baltimore Street, 7th Floor, Baltimore, MD, 21201, USA
  - Department of Oral Medicine and Diagnostic Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
  - Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA
- Zaid H Khoury
  - Department of Oral Diagnostic Sciences and Research, Meharry Medical College School of Dentistry, Nashville, TN, USA
- Maryam Jessri
  - Oral Medicine and Pathology Department, School of Dentistry, University of Queensland, Herston, QLD, Australia
  - Oral Medicine Department, Metro North Hospital and Health Services, Queensland Health, Brisbane, QLD, Australia
- James J Sciubba
  - Department of Otolaryngology, Head & Neck Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Ahmed S Sultan
  - Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W. Baltimore Street, 7th Floor, Baltimore, MD, 21201, USA
  - Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA
  - University of Maryland Marlene and Stewart Greenebaum Comprehensive Cancer Center, Baltimore, MD, USA
3. Gouzou D, Taimori A, Haloubi T, Finlayson N, Wang Q, Hopgood JR, Vallejo M. Applications of machine learning in time-domain fluorescence lifetime imaging: a review. Methods Appl Fluoresc 2024; 12:022001. PMID: 38055998; PMCID: PMC10851337; DOI: 10.1088/2050-6120/ad12f7.
Abstract
Many medical imaging modalities have benefited from recent advances in Machine Learning (ML), specifically in deep learning, such as neural networks. Computers can be trained to investigate and enhance medical imaging methods without using valuable human resources. In recent years, Fluorescence Lifetime Imaging (FLIm) has received increasing attention from the ML community. FLIm goes beyond conventional spectral imaging, providing additional lifetime information, and could lead to optical histopathology supporting real-time diagnostics. However, most current studies do not use the full potential of machine/deep learning models. As a developing imaging modality, FLIm data are not easily obtainable, which, coupled with an absence of standardisation, is holding back research on models that could advance automated diagnosis and help promote FLIm. In this paper, we describe recent developments that improve FLIm image quality, specifically time-domain systems, and we summarise sensing, signal-to-noise analysis and the advances in registration and low-level tracking. We review the two main applications of ML for FLIm: lifetime estimation and image analysis through classification and segmentation. We suggest a course of action to improve the quality of ML studies applied to FLIm. Our final goal is to promote FLIm and attract more ML practitioners to explore the potential of lifetime imaging.
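To make "lifetime estimation" concrete for readers new to FLIm: the classical baseline that ML estimators are benchmarked against is curve fitting of the fluorescence decay. A minimal, illustrative sketch (ours, not from the review) fits a mono-exponential decay by a log-linear least-squares line fit:

```python
import math

def fit_lifetime(times_ns, counts):
    """Estimate a mono-exponential lifetime tau from decay data via a
    least-squares line fit to log(counts) = log(A) - t / tau."""
    ys = [math.log(c) for c in counts]
    n = len(times_ns)
    mean_t = sum(times_ns) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times_ns, ys))
             / sum((t - mean_t) ** 2 for t in times_ns))
    return -1.0 / slope  # slope of the log-decay is -1/tau

# Noise-free decay with tau = 2.0 ns; the fit recovers the lifetime.
times = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
counts = [1000.0 * math.exp(-t / 2.0) for t in times]
tau = fit_lifetime(times, counts)
```

Real time-domain FLIm data additionally require deconvolving the instrument response function and coping with Poisson photon noise, which is precisely where the ML estimators surveyed in the review aim to improve on such fits.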
Affiliation(s)
- Dorian Gouzou
  - Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom
- Ali Taimori
  - Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
- Tarek Haloubi
  - Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
- Neil Finlayson
  - Institute for Integrated Micro and Nano Systems, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FF, United Kingdom
- Qiang Wang
  - Centre for Inflammation Research, University of Edinburgh, Edinburgh, EH16 4TJ, United Kingdom
- James R Hopgood
  - Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
- Marta Vallejo
  - Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom
4. Vasanthakumari P, Zhu Y, Brettin T, Partin A, Shukla M, Xia F, Narykov O, Weil MR, Stevens RL. A Comprehensive Investigation of Active Learning Strategies for Conducting Anti-Cancer Drug Screening. Cancers (Basel) 2024; 16:530. PMID: 38339281; PMCID: PMC10854925; DOI: 10.3390/cancers16030530. Open access.
Abstract
It is well-known that cancers of the same histology type can respond differently to a treatment. Thus, computational drug response prediction is of paramount importance for both preclinical drug screening studies and clinical treatment design. To build drug response prediction models, treatment response data need to be generated through screening experiments and used as input to train the prediction models. In this study, we investigate various active learning strategies for selecting experiments to generate response data, with the purposes of (1) improving the performance of drug response prediction models built on the data and (2) identifying effective treatments. Here, we focus on constructing drug-specific response prediction models for cancer cell lines. Various approaches have been designed and applied to select cell lines for screening, including random, greedy, uncertainty, diversity, combined greedy and uncertainty, sampling-based hybrid, and iteration-based hybrid approaches. All of these approaches are evaluated and compared using two criteria: (1) the number of identified hits, i.e., selected experiments validated to be responsive, and (2) the performance of the response prediction model trained on the data of selected experiments. The analysis was conducted for 57 drugs, and the results show a significant improvement in identifying hits using active learning approaches compared with the random and greedy sampling methods. Active learning approaches also show an improvement in response prediction performance for some of the drugs and analysis runs compared with the greedy sampling method.
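The uncertainty strategy above admits a very small sketch (ours; all names hypothetical): score each unscreened cell line by how close the model's predicted response probability is to 0.5, and screen the most ambiguous ones first.

```python
def uncertainty_select(candidates, predict_proba, batch_size):
    """Rank unlabeled cell lines by how uncertain the drug-response model
    is about them (probability closest to 0.5) and pick a batch to screen."""
    ranked = sorted(candidates, key=lambda c: abs(predict_proba(c) - 0.5))
    return ranked[:batch_size]

# Toy model: predicted probability that each cell line responds to a drug.
probs = {"line_a": 0.97, "line_b": 0.52, "line_c": 0.10, "line_d": 0.45}
batch = uncertainty_select(list(probs), probs.get, 2)
# line_b (|0.52 - 0.5| = 0.02) and line_d (0.05) are the most uncertain picks.
```

The greedy strategy in the study differs only in the ranking key (highest predicted response rather than highest uncertainty), which is why the two can be hybridized as the paper describes.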
Affiliation(s)
- Priyanka Vasanthakumari
  - Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL 60439, USA
- Yitan Zhu
  - Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL 60439, USA
- Thomas Brettin
  - Computing, Environment and Life Sciences, Argonne National Laboratory, Lemont, IL 60439, USA
- Alexander Partin
  - Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL 60439, USA
- Maulik Shukla
  - Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL 60439, USA
- Fangfang Xia
  - Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL 60439, USA
- Oleksandr Narykov
  - Division of Data Science and Learning, Argonne National Laboratory, Lemont, IL 60439, USA
- Michael Ryan Weil
  - Cancer Research Technology Program, Cancer Data Science Initiatives, Frederick National Laboratory for Cancer Research, Frederick, MD 21701, USA
- Rick L. Stevens
  - Computing, Environment and Life Sciences, Argonne National Laboratory, Lemont, IL 60439, USA
  - Department of Computer Science, The University of Chicago, Chicago, IL 60637, USA
5. Huang J, Tang X, Chen Z, Li X, Zhang Y, Huang X, Zhang D, An G, Lee HJ. Rapid azoospermia classification by stimulated Raman scattering and second harmonic generation microscopy. Biomed Opt Express 2023; 14:5569-5582. PMID: 38021145; PMCID: PMC10659792; DOI: 10.1364/boe.501623.
Abstract
Disease diagnosis and classification pose significant challenges due to the limited capabilities of traditional methods to obtain molecular information with spatial distribution. Optical imaging techniques, utilizing (auto)fluorescence and nonlinear optical signals, introduce new dimensions for biomarker exploration that can improve diagnosis and classification. Nevertheless, these signals often cover only a limited number of species, impeding a comprehensive assessment of the tissue microenvironment, which is crucial for effective disease diagnosis and therapy. To address this challenge, we developed a multimodal platform, termed stimulated Raman scattering and second harmonic generation microscopy (SRASH), capable of simultaneously providing both chemical bond and structural information of tissues. Applying SRASH imaging to azoospermia patient samples, we successfully identified lipid, protein, and collagen contrasts, unveiling molecular and structural signatures for non-obstructive azoospermia. This achievement is facilitated by LiteBlendNet-Dx (LBNet-Dx), our diagnostic algorithm, which achieved an outstanding 100% sample-level accuracy in classifying azoospermia, surpassing conventional imaging modalities. As a label-free technique, SRASH imaging eliminates the requirement for sample pre-treatment, demonstrating great potential for clinical translation and enabling molecular imaging-based diagnosis and therapy.
Affiliation(s)
- Jie Huang
  - Zhejiang Polytechnic Institute, Polytechnic Institute, Zhejiang University, Hangzhou 310058, China
  - College of Biomedical Engineering & Instrument Science; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310058, China
- Xiaobin Tang
  - Interdisciplinary Centre for Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, and Department of Physics, Zhejiang University, Hangzhou 310027, China
- Zhicong Chen
  - Department of Obstetrics and Gynecology, Center for Reproductive Medicine; Guangdong Provincial Key Laboratory of Major Obstetric Diseases; Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology; Guangdong-Hong Kong-Macao Greater Bay Area Higher Education Joint Laboratory of Maternal-Fetal Medicine; The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Xiaomin Li
  - Department of Obstetrics and Gynecology, Center for Reproductive Medicine; Guangdong Provincial Key Laboratory of Major Obstetric Diseases; Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology; Guangdong-Hong Kong-Macao Greater Bay Area Higher Education Joint Laboratory of Maternal-Fetal Medicine; The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Yongqing Zhang
  - Interdisciplinary Centre for Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, and Department of Physics, Zhejiang University, Hangzhou 310027, China
- Xiangjie Huang
  - College of Biomedical Engineering & Instrument Science; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310058, China
- Delong Zhang
  - Interdisciplinary Centre for Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, and Department of Physics, Zhejiang University, Hangzhou 310027, China
- Geng An
  - Department of Obstetrics and Gynecology, Center for Reproductive Medicine; Guangdong Provincial Key Laboratory of Major Obstetric Diseases; Guangdong Provincial Clinical Research Center for Obstetrics and Gynecology; Guangdong-Hong Kong-Macao Greater Bay Area Higher Education Joint Laboratory of Maternal-Fetal Medicine; The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510150, China
- Hyeon Jeong Lee
  - College of Biomedical Engineering & Instrument Science; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou 310058, China
6. Hassan MA, Weyers BW, Bec J, Fereidouni F, Qi J, Gui D, Bewley AF, Abouyared M, Farwell DG, Birkeland AC, Marcu L. Anatomy-Specific Classification Model Using Label-Free FLIm to Aid Intraoperative Surgical Guidance of Head and Neck Cancer. IEEE Trans Biomed Eng 2023; 70:2863-2873. PMID: 37043314; PMCID: PMC10833893; DOI: 10.1109/tbme.2023.3266678.
Abstract
Intraoperative identification of head and neck cancer tissue is essential to achieve complete tumor resection and mitigate tumor recurrence. Mesoscopic fluorescence lifetime imaging (FLIm) of intrinsic tissue fluorophore emission has demonstrated the potential to demarcate the extent of the tumor in patients undergoing surgical procedures of the oral cavity and the oropharynx. Here, we report FLIm-based classification methods using standard machine learning models that account for the diverse anatomical and biochemical composition across the head and neck anatomy to improve tumor region identification. Three anatomy-specific binary classification models were developed (i.e., "base of tongue," "palatine tonsil," and "oral tongue"). FLIm data from patients (N = 85) undergoing upper aerodigestive oncologic surgery were used to train and validate the classification models using a leave-one-patient-out cross-validation method. These models were evaluated on two classification tasks: (1) discriminating between healthy and cancer tissue, and (2) applying the binary classification model trained on healthy and cancer tissue to discriminate dysplasia through transfer learning. This approach achieved superior classification performance compared to anatomy-agnostic models: a ROC-AUC of 0.94 for the first task and 0.92 for the second. Furthermore, the model demonstrated detection of dysplasia, highlighting the generalization of the FLIm-based classifier. Current findings demonstrate that a classifier that accounts for tumor location can improve the ability to accurately identify surgical margins, and underscore FLIm's potential as a tool for surgical guidance in head and neck cancer patients, including those undergoing robotic surgery.
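The leave-one-patient-out scheme used above can be sketched in a few lines (our illustration, not the authors' code): all measurements from one patient form the test fold, so patient-specific signal never leaks between training and test sets.

```python
def leave_one_patient_out(patient_ids):
    """Yield (train_indices, test_indices) pairs, holding out one patient
    at a time so measurements from the same patient never span both sets."""
    for held_out in sorted(set(patient_ids)):
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        yield train, test

# Five measurements from two patients -> two folds.
folds = list(leave_one_patient_out(["A", "A", "B", "A", "B"]))
# First fold holds out patient "A": test indices [0, 1, 3].
```

With N = 85 patients this yields 85 folds, each model being evaluated on a patient it has never seen, which is the appropriate unit of independence for intraoperative data.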
7. Tiwari A, Gupta N, Singla D, Ranjan Swain J, Gupta R, Mehta D, Kumar S. Artificial Intelligence's Use in the Diagnosis of Mouth Ulcers: A Systematic Review. Cureus 2023; 15:e45187. PMID: 37842407; PMCID: PMC10576017; DOI: 10.7759/cureus.45187. Open access.
Abstract
Artificial intelligence (AI) has been cited as being helpful in the diagnosis of diseases, the prediction of prognoses, and the development of patient-specific therapeutic strategies. AI can help dentists in particular when they need to make important judgments quickly: it can eliminate human error in decision-making, resulting in better and more consistent medical treatment while lowering dentists' workload. This work reviews the existing studies relevant to the application of AI in the diagnosis of various forms of mouth ulcers. The review was prepared following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards; the only notable deviation was the use of an enhanced search strategy, which led to more accurate findings. A comprehensive search was carried out in PubMed, Scopus, Embase, Web of Science, Ovid, Global Health, and PsycINFO, using search terms such as AI, oral health, oral ulcers, oral herpes simplex, oral lichen planus, pemphigus vulgaris, recurrent aphthous ulcer (RAU), oral cancer, and premalignant and malignant disorders. Exhaustive searches, applying inclusion and exclusion criteria, covered the period between June 28, 2018, and June 28, 2023. In one included study, the authors developed an AI framework for the automatic categorization of oral ulcers from oral clinical photographs, and it performed satisfactorily: the newly designed AI model outperformed current convolutional neural network image-classification techniques and showed a fair level of precision in classifying oral ulcers. However, despite being useful for identifying oral ulcers, the proposed technique needs a broader dataset for training and validation before being used in clinical settings.
Automated OCSCC identification using a deep learning-based technique is a quick, harmless, affordable, and practical approach to evaluating the effectiveness of cancer treatment. The classification and identification of RAU lesions from non-intrusive oral images using the previously developed ResNet50 and YOLOV algorithms demonstrated better accuracy and adequate potential for the future, which could be helpful in clinical practice. Moreover, the most reliable predictions of the presence or absence of RAU were made by the optimized neural network. The authors also identified variables associated with RAU that might be used as input to artificial neural networks that anticipate RAU.
Affiliation(s)
- Anushree Tiwari
  - Clinical Quality and Value, American Academy of Orthopaedic Surgeons, Rosemont, USA
- Neha Gupta
  - Department of Oral Pathology, Microbiology & Forensic Odontology, Dental College, Rajendra Institute of Medical Sciences, Ranchi, IND
- Deepika Singla
  - Department of Conservative Dentistry & Endodontics, Desh Bhagat Dental College & Hospital, Malout, IND
- Jnana Ranjan Swain
  - Department of Pedodontics and Preventive Dentistry, Institute of Dental Sciences, Siksha 'O' Anusandhan, Bhubaneswar, IND
- Ruchi Gupta
  - Department of Prosthodontics, Rungta College of Dental Sciences and Research, Bhilai, IND
- Dhaval Mehta
  - Department of Oral Medicine and Radiology, Narsinbhai Patel Dental College and Hospital, Sankalchand Patel University, Visnagar, IND
- Santosh Kumar
  - Department of Periodontology and Implantology, Karnavati School of Dentistry, Karnavati University, Gandhinagar, IND
8. Dixit S, Kumar A, Srinivasan K. A Current Review of Machine Learning and Deep Learning Models in Oral Cancer Diagnosis: Recent Technologies, Open Challenges, and Future Research Directions. Diagnostics (Basel) 2023; 13:1353. PMID: 37046571; PMCID: PMC10093759; DOI: 10.3390/diagnostics13071353. Open access.
Abstract
Cancer is a problematic global health issue with an extremely high fatality rate throughout the world. The application of various machine learning techniques that have appeared in the field of cancer diagnosis in recent years has provided meaningful insights into efficient and precise treatment decision-making. Due to rapid advancements in sequencing technologies, the detection of cancer based on gene expression data has improved over the years. Different types of cancer affect different parts of the body in different ways. Cancer that affects the mouth, lip, and upper throat is known as oral cancer, which is the sixth most prevalent form of cancer worldwide. India, Bangladesh, China, the United States, and Pakistan are the top five countries with the highest rates of oral cavity disease and lip cancer. The major causes of oral cancer are excessive use of tobacco and cigarette smoking. Many lives could be saved if oral cancer (OC) were detected early: early identification and diagnosis could assist doctors in providing better patient care and effective treatment. OC screening may advance with the implementation of artificial intelligence (AI) techniques, which can assist the oncology sector by accurately analyzing large datasets from several imaging modalities. This review deals with the implementation of AI during the early stages of cancer for the proper detection and treatment of OC. Furthermore, performance evaluations of several deep learning (DL) and machine learning (ML) models have been carried out to show that DL models can overcome the difficult challenges associated with early cancerous lesions in the mouth. For this review, we have followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). Examining the reference lists of the chosen articles helped us gather more details on the subject. Additionally, we discuss AI's drawbacks and its potential use in research on oral cancer.
There are methods for reducing risk factors, such as reducing the use of tobacco and alcohol, as well as immunization against HPV infection, to avoid oral cancer or lessen the burden of the disease. Additionally, efficacious methods for preventing oral diseases include training programs for doctors and patients, as well as facilitating early diagnosis by screening high-risk populations for the disease.
Affiliation(s)
- Shriniket Dixit
  - School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Anant Kumar
  - School of Bioscience and Technology, Vellore Institute of Technology, Vellore 632014, India
- Kathiravan Srinivasan
  - School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
9. de Souza LL, Fonseca FP, Araújo ALD, Lopes MA, Vargas PA, Khurram SA, Kowalski LP, Dos Santos HT, Warnakulasuriya S, Dolezal J, Pearson AT, Santos-Silva AR. Machine learning for detection and classification of oral potentially malignant disorders: A conceptual review. J Oral Pathol Med 2023; 52:197-205. PMID: 36792771; DOI: 10.1111/jop.13414.
Abstract
Oral potentially malignant disorders represent precursor lesions that may undergo malignant transformation to oral cancer. Many known risk factors are associated with the development of oral potentially malignant disorders and contribute to the risk of malignant transformation. Although many advances have been reported in understanding the biological behavior of oral potentially malignant disorders, the clinical features that indicate malignant transformation are not well established. Early diagnosis of malignancy is the most important factor in improving patients' prognosis. The integration of machine learning into routine diagnosis has recently emerged as an adjunct to clinical examination. The increased performance of artificial intelligence (AI)-assisted medical devices is claimed to exceed human capability in the clinical detection of early cancer. Therefore, the aim of this narrative review is to introduce the artificial intelligence terminology, concepts, and models currently used in oncology, to familiarize oral medicine scientists with the language, best research practices, and knowledge needed to develop machine learning models for the clinical detection of oral potentially malignant disorders.
Affiliation(s)
- Lucas Lacerda de Souza
  - Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Felipe Paiva Fonseca
  - Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
  - Department of Oral Surgery and Pathology, School of Dentistry, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Marcio Ajudarte Lopes
  - Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Pablo Agustin Vargas
  - Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
- Syed Ali Khurram
  - Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Luiz Paulo Kowalski
  - Department of Head and Neck Surgery, University of Sao Paulo Medical School and Department of Head and Neck Surgery and Otorhinolaryngology, AC Camargo Cancer Center, Sao Paulo, Brazil
- Harim Tavares Dos Santos
  - Department of Otolaryngology-Head and Neck Surgery, University of Missouri, Columbia, Missouri, USA
  - Department of Bond Life Sciences Center, University of Missouri, Columbia, Missouri, USA
- Saman Warnakulasuriya
  - King's College London, London, UK
  - WHO Collaborating Centre for Oral Cancer, London, UK
- James Dolezal
  - Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, Illinois, USA
- Alexander T Pearson
  - Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, Illinois, USA
- Alan Roger Santos-Silva
  - Oral Diagnosis, Piracicaba Dental School, University of Campinas (UNICAMP), São Paulo, Brazil
10. Interpretable and Reliable Oral Cancer Classifier with Attention Mechanism and Expert Knowledge Embedding via Attention Map. Cancers (Basel) 2023; 15:1421. [PMID: 36900210; PMCID: PMC10001266; DOI: 10.3390/cancers15051421]
Abstract
Convolutional neural networks (CNNs) have demonstrated excellent performance in oral cancer detection and classification. However, the end-to-end learning strategy makes CNNs hard to interpret, and it can be challenging to fully understand their decision-making procedure. Reliability is also a significant challenge for CNN-based approaches. In this study, we proposed a neural network called the attention branch network (ABN), which combines visual explanation and attention mechanisms to improve recognition performance and interpret the decision-making simultaneously. We also embedded expert knowledge into the network by having human experts manually edit the attention maps used by the attention mechanism. Our experiments showed that the ABN performs better than the original baseline network, and that introducing Squeeze-and-Excitation (SE) blocks increased cross-validation accuracy further. Furthermore, some previously misclassified cases were correctly recognized after the attention maps were manually edited. Cross-validation accuracy increased from 0.846 to 0.875 with the ABN (ResNet18 as baseline), 0.877 with SE-ABN, and 0.903 after embedding expert knowledge. The proposed method provides an accurate, interpretable, and reliable oral cancer computer-aided diagnosis system through visual explanation, attention mechanisms, and expert knowledge embedding.
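The Squeeze-and-Excitation blocks credited above with the accuracy gain are a standard channel-attention building block. The following is a minimal NumPy sketch of SE-style channel gating, not the authors' ABN code; the bottleneck weights `w1`/`w2` and the input tensor are randomly initialised purely for illustration:

```python
import numpy as np

def se_block(feature_maps, w1, w2):
    """Reweight the channels of a (C, H, W) activation tensor.

    Squeeze: global average pool per channel.
    Excitation: a small bottleneck MLP (ReLU, then sigmoid) that
    produces a gate in (0, 1) for each channel.
    """
    squeeze = feature_maps.mean(axis=(1, 2))        # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck -> (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1) -> (C,)
    return feature_maps * gate[:, None, None]       # channel-wise reweighting

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))      # toy activations: C=8 channels, 4x4 maps
w1 = 0.1 * rng.standard_normal((2, 8))  # reduction (ratio r=4): 8 -> 2
w2 = 0.1 * rng.standard_normal((8, 2))  # expansion: 2 -> 8
y = se_block(x, w1, w2)
```

Because the gate is a sigmoid output, every channel is scaled by a factor strictly between 0 and 1; in a trained network these factors emphasise informative channels and suppress the rest.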
11. Machine learning in point-of-care automated classification of oral potentially malignant and malignant disorders: a systematic review and meta-analysis. Sci Rep 2022; 12:13797. [PMID: 35963880; PMCID: PMC9376104; DOI: 10.1038/s41598-022-17489-1]
Abstract
Machine learning (ML) algorithms are becoming increasingly pervasive in the domains of medical diagnostics and prognostication, afforded by complex deep learning architectures that overcome the limitations of manual feature extraction. In this systematic review and meta-analysis, we provide an update on current progress of ML algorithms in point-of-care (POC) automated diagnostic classification systems for lesions of the oral cavity. Studies reporting performance metrics on ML algorithms used in automatic classification of oral regions of interest were identified and screened by 2 independent reviewers from 4 databases. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. 35 studies were suitable for qualitative synthesis, and 31 for quantitative analysis. Outcomes were assessed using a bivariate random-effects model following an assessment of bias and heterogeneity. 4 distinct methodologies were identified for POC diagnosis: (1) clinical photography; (2) optical imaging; (3) thermal imaging; (4) analysis of volatile organic compounds. Estimated AUROC across all studies was 0.935, and no difference in performance was identified between methodologies. We discuss the various classical and modern approaches to ML employed within identified studies, and highlight issues that will need to be addressed for implementation of automated classification systems in screening and early detection.
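The pooled AUROC of 0.935 reported above is an area under a summary ROC curve. As a reminder of the underlying arithmetic only, here is a minimal sketch that computes AUC from a handful of (FPR, TPR) operating points by the trapezoidal rule; the points are toy values, not data from any study in this list:

```python
def auroc(points):
    """Area under an ROC curve given (fpr, tpr) points, by the trapezoidal rule."""
    pts = sorted(points)  # order by false-positive rate
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # trapezoid between adjacent points
    return area

# Toy ROC points (FPR, TPR), anchored at (0, 0) and (1, 1).
roc = [(0.0, 0.0), (0.1, 0.7), (0.3, 0.9), (1.0, 1.0)]
auc = auroc(roc)  # 0.86 for these toy points
```

A chance-level classifier traces the diagonal and scores 0.5; the 0.935 pooled estimate above sits well into the "excellent discrimination" range.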
12. Kim JS, Kim BG, Hwang SH. Efficacy of Artificial Intelligence-Assisted Discrimination of Oral Cancerous Lesions from Normal Mucosa Based on the Oral Mucosal Image: A Systematic Review and Meta-Analysis. Cancers (Basel) 2022; 14:3499. [PMID: 35884560; PMCID: PMC9320189; DOI: 10.3390/cancers14143499]
Abstract
Simple Summary: Early detection of oral cancer is important to increase the survival rate and reduce morbidity. For the past few years, early detection of oral cancer using artificial intelligence (AI) technology based on autofluorescence imaging, photographic imaging, and optical coherence tomography imaging has been an important research area. In this study, diagnostic values including sensitivity and specificity were comprehensively confirmed across studies that performed AI analysis of images. The diagnostic sensitivity of AI-assisted screening was 0.92. In subgroup analysis, there was no statistically significant difference in the diagnostic rate among the imaging tools. AI shows good diagnostic performance with high sensitivity for oral cancer. Image analysis using AI is expected to serve as a clinical tool for early detection and evaluation of treatment efficacy for oral cancer.
Abstract: The accuracy of artificial intelligence (AI)-assisted discrimination of oral cancerous lesions from normal mucosa based on mucosal images was evaluated. Two authors independently reviewed the databases until June 2022. Oral mucosal disorders, as recorded by photographic images, autofluorescence, and optical coherence tomography (OCT), were compared with reference results from histology. True-positive, true-negative, false-positive, and false-negative data were extracted. Seven studies were included for discriminating oral cancerous lesions from normal mucosa; the diagnostic odds ratio (DOR) of AI-assisted screening was 121.66 (95% confidence interval [CI], 29.60-500.05). Twelve studies were included for discriminating all oral precancerous lesions from normal mucosa; the DOR of screening was 63.02 (95% CI, 40.32-98.49). Subgroup analysis showed that OCT was more diagnostically accurate (324.33 vs. 66.81 and 27.63) and more negatively predictive (0.94 vs. 0.93 and 0.84) than photographic images and autofluorescence in screening for all oral precancerous lesions against normal mucosa. Automated detection of oral cancerous lesions by AI would be a rapid, non-invasive diagnostic tool that could provide immediate results during the diagnostic work-up of oral cancer. This method has the potential to be used as a clinical tool for the early diagnosis of pathological lesions.
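The DOR values with confidence intervals quoted in these meta-analyses are derived from 2x2 diagnostic tables. As a hedged illustration of the standard calculation (the counts below are invented, not data from any study here), a Wald interval on the log scale is the usual approach:

```python
import math

def diagnostic_metrics(tp, fp, fn, tn, z=1.96):
    """Sensitivity, specificity, NPV, and DOR with a Wald 95% CI on the log scale."""
    sens = tp / (tp + fn)                 # true-positive rate
    spec = tn / (tn + fp)                 # true-negative rate
    npv = tn / (tn + fn)                  # negative predictive value
    dor = (tp * tn) / (fp * fn)           # diagnostic odds ratio
    se_log_dor = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)  # SE of ln(DOR)
    ci = (math.exp(math.log(dor) - z * se_log_dor),
          math.exp(math.log(dor) + z * se_log_dor))
    return sens, spec, npv, dor, ci

# Illustrative 2x2 table only.
sens, spec, npv, dor, ci = diagnostic_metrics(tp=90, fp=12, fn=10, tn=88)
```

The log transform is used because the DOR's sampling distribution is heavily right-skewed, which is also why the pooled CIs reported above are asymmetric around the point estimate.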
Affiliation(s)
- Ji-Sun Kim
  - Department of Otolaryngology-Head and Neck Surgery, Eunpyeong St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Seoul 03312, Korea
- Byung Guk Kim
  - Department of Otolaryngology-Head and Neck Surgery, Eunpyeong St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Seoul 03312, Korea
- Se Hwan Hwang
  - Department of Otolaryngology-Head and Neck Surgery, Bucheon St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Bucheon 14647, Korea
  - Correspondence: Tel.: +82-32-340-7044
13. Duran-Sierra E, Cheng S, Cuenca R, Ahmed B, Ji J, Yakovlev VV, Martinez M, Al-Khalil M, Al-Enazi H, Jo JA. Clinical label-free endoscopic imaging of biochemical and metabolic autofluorescence biomarkers of benign, precancerous, and cancerous oral lesions. Biomed Opt Express 2022; 13:3685-3698. [PMID: 35991912; PMCID: PMC9352301; DOI: 10.1364/boe.460081]
Abstract
Early detection is critical for improving the survival rate and quality of life of oral cancer patients; unfortunately, dysplastic and early-stage cancerous oral lesions are often difficult to distinguish from oral benign lesions during standard clinical oral examination. Therefore, there is a critical need for novel clinical technologies that would enable reliable oral cancer screening. The autofluorescence properties of the oral epithelial tissue provide quantitative information about morphological, biochemical, and metabolic tissue and cellular alterations accompanying carcinogenesis. This study aimed to identify novel biochemical and metabolic autofluorescence biomarkers of oral dysplasia and cancer that could be clinically imaged using novel multispectral autofluorescence lifetime imaging (maFLIM) endoscopy technologies. In vivo maFLIM clinical endoscopic images of benign, precancerous, and cancerous lesions from 67 patients were acquired using a novel maFLIM endoscope. Widefield maFLIM feature maps were generated, and statistical analyses were applied to identify maFLIM features providing contrast between dysplastic/cancerous vs. benign oral lesions. A total of 14 spectral and time-resolved maFLIM features were found to provide contrast between dysplastic/cancerous vs. benign oral lesions, representing novel biochemical and metabolic autofluorescence biomarkers of oral epithelial dysplasia and cancer. To the best of our knowledge, this is the first demonstration of clinical widefield maFLIM endoscopic imaging of novel biochemical and metabolic autofluorescence biomarkers of oral dysplasia and cancer, supporting the potential of maFLIM endoscopy for early detection of oral cancer.
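The frequency-domain phasor features mentioned among the maFLIM features are standard FLIM quantities: the cosine and sine Fourier projections of the decay, normalised by total intensity. Below is a minimal NumPy sketch with a synthetic mono-exponential decay and illustrative parameters (2 ns lifetime, 80 MHz modulation), not the authors' pipeline:

```python
import numpy as np

def phasor(intensity, t, omega):
    """Frequency-domain phasor (g, s) coordinates of a fluorescence decay trace."""
    g = np.sum(intensity * np.cos(omega * t)) / np.sum(intensity)
    s = np.sum(intensity * np.sin(omega * t)) / np.sum(intensity)
    return g, s

# Synthetic mono-exponential decay, lifetime tau = 2 ns, sampled over 50 ns.
tau = 2.0
t = np.linspace(0.0, 50.0, 5001)   # ns
decay = np.exp(-t / tau)
omega = 2 * np.pi * 0.08           # 80 MHz modulation frequency, in rad/ns
g, s = phasor(decay, t, omega)
# For a mono-exponential decay, analytically:
#   g = 1 / (1 + (omega*tau)^2),  s = omega*tau / (1 + (omega*tau)^2)
```

Mono-exponential decays land on the "universal semicircle" in the (g, s) plane; multi-exponential mixtures fall inside it, which is what makes phasor coordinates useful contrast features for dysplastic versus benign tissue.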
Affiliation(s)
- Elvis Duran-Sierra
  - Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Shuna Cheng
  - Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Rodrigo Cuenca
  - School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Beena Ahmed
  - School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney 2052, Australia
- Jim Ji
  - Department of Electrical and Computer Engineering, Texas A&M University at Qatar, Doha 23874, Qatar
- Vladislav V. Yakovlev
  - Department of Biomedical Engineering, Texas A&M University, College Station, TX 77843, USA
- Mathias Martinez
  - Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Moustafa Al-Khalil
  - Department of Cranio-Maxillofacial Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Hussain Al-Enazi
  - Department of Otorhinolaryngology Head and Neck Surgery, Hamad Medical Corporation, Doha 3050, Qatar
- Javier A. Jo
  - School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
14. Vasanthakumari P, Romano RA, Rosa RGT, Salvio AG, Yakovlev V, Kurachi C, Hirshburg JM, Jo JA. Discrimination of cancerous from benign pigmented skin lesions based on multispectral autofluorescence lifetime imaging dermoscopy and machine learning. J Biomed Opt 2022; 27:066002. [PMID: 35701871; PMCID: PMC9196925; DOI: 10.1117/1.jbo.27.6.066002]
Abstract
SIGNIFICANCE Accurate early diagnosis of malignant skin lesions is critical in providing adequate and timely treatment; unfortunately, initial clinical evaluation of similar-looking benign and malignant skin lesions can result in missed diagnosis of malignant lesions and unnecessary biopsy of benign ones. AIM To develop and validate a label-free and objective image-guided strategy for the clinical evaluation of suspicious pigmented skin lesions based on multispectral autofluorescence lifetime imaging (maFLIM) dermoscopy. APPROACH We tested the hypothesis that maFLIM-derived autofluorescence global features can be used in machine-learning (ML) models to discriminate malignant from benign pigmented skin lesions. Clinical widefield maFLIM dermoscopy images of 41 benign and 19 malignant pigmented skin lesions from 30 patients were acquired prior to tissue biopsy sampling. Three different pools of global image-level maFLIM features were extracted: multispectral intensity, time-domain biexponential, and frequency-domain phasor features. The classification potential of each feature pool to discriminate benign versus malignant pigmented skin lesions was evaluated by training quadratic discriminant analysis (QDA) classification models and applying a leave-one-patient-out cross-validation strategy. RESULTS Classification performance estimates obtained after unbiased feature selection were as follows: 68% sensitivity and 80% specificity with the phasor feature pool, 84% sensitivity and 71% specificity with the biexponential feature pool, and 84% sensitivity and 32% specificity with the intensity feature pool. Ensemble combinations of QDA models trained with phasor and biexponential features yielded a sensitivity of 84% and specificity of 90%, outperforming all other models considered. CONCLUSIONS Simple ML classification models based on time-resolved (biexponential and phasor) autofluorescence global features extracted from maFLIM dermoscopy images have the potential to provide objective discrimination of malignant from benign pigmented lesions. ML-assisted maFLIM dermoscopy could potentially assist with the clinical evaluation of suspicious lesions and the identification of those patients benefiting the most from biopsy examination.
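The leave-one-patient-out strategy described above keeps every lesion from a given patient in the same fold, so no patient contributes to both training and testing. A minimal sketch of such a splitter (patient IDs below are illustrative, not the study's data):

```python
import numpy as np

def leave_one_patient_out(patient_ids):
    """Yield (train_idx, test_idx) pairs, holding out one patient at a time.

    All samples from the held-out patient go to the test fold together,
    preventing within-patient leakage between training and testing.
    """
    patient_ids = np.asarray(patient_ids)
    for pid in np.unique(patient_ids):
        test = np.where(patient_ids == pid)[0]
        train = np.where(patient_ids != pid)[0]
        yield train, test

# Toy example: 6 lesions from 3 patients (patients may contribute several lesions).
ids = ["p1", "p1", "p2", "p3", "p3", "p3"]
folds = list(leave_one_patient_out(ids))  # one fold per unique patient
```

Grouping folds by patient rather than by lesion matters because multiple lesions from one patient are correlated; splitting them across train and test would inflate the performance estimate.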
Affiliation(s)
- Priyanka Vasanthakumari
  - Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
- Renan A. Romano
  - University of São Paulo, São Carlos Institute of Physics, São Paulo, Brazil
- Ramon G. T. Rosa
  - University of São Paulo, São Carlos Institute of Physics, São Paulo, Brazil
- Ana G. Salvio
  - Skin Department of Amaral Carvalho Hospital, São Paulo, Brazil
- Vladislav Yakovlev
  - Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
- Cristina Kurachi
  - University of São Paulo, São Carlos Institute of Physics, São Paulo, Brazil
- Jason M. Hirshburg
  - University of Oklahoma Health Science Center, Department of Dermatology, Oklahoma City, Oklahoma, United States
- Javier A. Jo
  - University of Oklahoma, School of Electrical and Computer Engineering, Norman, Oklahoma, United States
15. Al-Rawi N, Sultan A, Rajai B, Shuaeeb H, Alnajjar M, Alketbi M, Mohammad Y, Shetty SR, Mashrah MA. The Effectiveness of Artificial Intelligence in Detection of Oral Cancer. Int Dent J 2022; 72:436-447. [PMID: 35581039; PMCID: PMC9381387; DOI: 10.1016/j.identj.2022.03.001]
Abstract
Aim: The early detection of oral cancer (OC) significantly increases survival rates. Recently, there has been increasing interest in the use of artificial intelligence (AI) technologies in diagnostic medicine. This study aimed to critically analyse the available evidence concerning the utility of AI in the diagnosis of OC, with special consideration given to the diagnostic accuracy of AI and its ability to identify the early stages of OC. Materials and methods: Four databases (PubMed, Scopus, EBSCO, and OVID) were searched from inception to December 2021. Three independent authors selected studies on the basis of strict inclusion criteria. The risk of bias and applicability were assessed using the prediction model risk of bias assessment tool. Of the 606 initial records, 17 studies with a total of 7245 patients and 69,425 images were included. Ten statistical methods were used to assess AI performance in the included studies; six studies used supervised machine learning, whilst 11 used deep learning. Results: Deep learning achieved an accuracy of 81% to 99.7%, sensitivity of 79% to 98.75%, specificity of 82% to 100%, and area under the curve (AUC) of 79% to 99.5%. Supervised machine learning demonstrated an accuracy ranging from 43.5% to 100%, sensitivity of 94% to 100%, specificity of 16% to 100%, and an AUC of 93%. Conclusions: There is no clear consensus regarding the best AI method for OC detection. AI is a valuable diagnostic tool that represents a large evolutionary leap in the detection of OC in its early stages. Based on the evidence, deep learning, such as deep convolutional neural networks, is more accurate in the early detection of OC than supervised machine learning.
Affiliation(s)
- Natheer Al-Rawi
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Afrah Sultan
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Batool Rajai
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Haneen Shuaeeb
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Mariam Alnajjar
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Maryam Alketbi
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Yara Mohammad
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Shishir Ram Shetty
  - Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates