1
Chilet-Martos E, Vila-Francés J, Bagan JV, Vives-Gilabert Y. Automated classification of oral cancer lesions: Vision transformers vs radiomics. Comput Biol Med 2025; 189:109913. [PMID: 40020550] [DOI: 10.1016/j.compbiomed.2025.109913]
Abstract
BACKGROUND AND OBJECTIVE Early diagnosis is paramount in the effective management of oral cancer, offering numerous benefits including improved treatment outcomes, reduced morbidity and mortality, preservation of function and appearance, cost-effectiveness, and enhanced quality of life for patients. Transformer-based models, increasingly used in medical image analysis, are the focus of our study. We aim to compare a vision transformer (ViT) classification method with a fully automated radiomics approach, using object detection and segmentation algorithms to classify oral lesions in both cancer and control cases. A combined approach is also presented. METHODS The analysis included 322 patients with oral lesions, comprising 120 cancer cases and 202 controls, with standard JPG images. Pretrained transformer-based algorithms, namely DEtection TRansformer (DETR), Segment Anything (SAM), and Vision Transformers (ViT), were used to explore different pipelines for lesion classification. For the ViT approach, images were supplied in three configurations: the entire image, a bounding box around the lesion, and a lesion delineation. The radiomics approach involved pipelines with bounding boxes and lesion segmentations. Additionally, a combined ViT-radiomics approach was proposed, using ViT attention maps as radiomics masks. To validate the models, five-fold cross-validation was used. RESULTS The combined ViT-radiomics model demonstrated superior performance for small training sets, achieving specificity = 0.97 ± 0.04, sensitivity = 0.96 ± 0.05, and accuracy = 0.97 ± 0.02 for 100 % of the training set. When analyzed independently, the ViT approach using the entire image achieved the best results, with specificity = 0.99 ± 0.02, sensitivity = 0.96 ± 0.05, and accuracy = 0.96 ± 0.02. Following closely was the pipeline using automatically obtained segmentations, while the one with bounding boxes had the least favourable outcomes.
In the radiomics approach, the most effective classifier used the attention masks from the ViT classifier (derived from the entire image), achieving specificity = 0.97 ± 0.05, sensitivity = 0.95 ± 0.08, and accuracy = 0.94 ± 0.03. Manual segmentations yielded the best results for both approaches, indicating potential for performance enhancement through improved lesion segmentation. CONCLUSIONS The ViT classification surpassed the radiomics-based approach, yet combining ViT with radiomics yielded similar results. However, the attention maps generated by ViT tend to focus on the lesions in cancer patients but on regions distant from the lesions in control patients. For tasks requiring the examination and comparison of features within cancer and control oral lesions, utilizing the radiomics approach with an automatic lesion segmentation algorithm is recommended.
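The specificity, sensitivity, and accuracy figures reported in this abstract are standard confusion-matrix quantities. A minimal hedged sketch (illustrative only, not the authors' code) of how they are computed from binary predictions:

```python
# Minimal sketch (not the authors' code): specificity, sensitivity, and
# accuracy from binary predictions (1 = cancer, 0 = control).

def metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)        # true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Toy fold: 4 cancer cases, 4 controls, one false negative.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
print(metrics(y_true, y_pred))  # (0.75, 1.0, 0.875)
```

Under the five-fold cross-validation described in the abstract, these three numbers would be computed on each held-out fold and reported as mean ± standard deviation.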
Affiliation(s)
- Eva Chilet-Martos
- IDAL, Electronic Engineering Department, ETSE-UV, University of Valencia, Avgda. Universitat s/n, Burjassot, València, 46100, Spain
- Joan Vila-Francés
- IDAL, Electronic Engineering Department, ETSE-UV, University of Valencia, Avgda. Universitat s/n, Burjassot, València, 46100, Spain
- José V Bagan
- Head Service Stomatology and Maxillofacial Surgery, University General Hospital, University of Valencia, Valencia, Spain
- Yolanda Vives-Gilabert
- IDAL, Electronic Engineering Department, ETSE-UV, University of Valencia, Avgda. Universitat s/n, Burjassot, València, 46100, Spain
2
Tun HM, Rahman HA, Naing L, Malik OA. Artificial intelligence utilization in cancer screening program across ASEAN: a scoping review. BMC Cancer 2025; 25:703. [PMID: 40234807] [PMCID: PMC12001681] [DOI: 10.1186/s12885-025-14026-x]
Abstract
BACKGROUND Cancer remains a significant health challenge in the ASEAN region, highlighting the need for effective screening programs. However, approaches, target demographics, and intervals vary across ASEAN member states, necessitating a comprehensive understanding of these variations to assess program effectiveness. Additionally, while artificial intelligence (AI) holds promise as a tool for cancer screening, its utilization in the ASEAN region is unexplored. PURPOSE This study aims to identify and evaluate different cancer screening programs across ASEAN, with a focus on assessing the integration and impact of AI in these programs. METHODS A scoping review was conducted using PRISMA-ScR guidelines to provide a comprehensive overview of cancer screening programs and AI usage across ASEAN. Data were collected from government health ministries, official guidelines, literature databases, and relevant documents. The review of AI use in cancer screening involved searches of PubMed, Scopus, and Google Scholar, with inclusion limited to studies that utilized data from the ASEAN region between January 2019 and May 2024. RESULTS The findings reveal diverse cancer screening approaches in ASEAN. Countries like Myanmar, Laos, Cambodia, Vietnam, Brunei, the Philippines, Indonesia, and Timor-Leste primarily adopt opportunistic screening, while Singapore, Malaysia, and Thailand focus on organized programs. Cervical cancer screening is widespread, using both opportunistic and organized methods. Fourteen studies were included in the scoping review, covering breast (5 studies), cervical (2 studies), colon (4 studies), hepatic (1 study), lung (1 study), and oral (1 study) cancers. The studies revealed different stages of AI integration for cancer screening: prospective clinical evaluation (50%), silent trial (36%), and exploratory model development (14%), with promising results in enhancing cancer screening accuracy and efficiency.
CONCLUSION Cancer screening programs in the ASEAN region require more organized approaches targeting appropriate age groups at regular intervals to meet the WHO's 2030 screening targets. Efforts to integrate AI in Singapore, Malaysia, Vietnam, Thailand, and Indonesia show promise in optimizing screening processes, reducing costs, and improving early detection. AI technology integration enhances cancer identification accuracy during screening, improving early detection and cancer management across the ASEAN region.
Affiliation(s)
- Hein Minn Tun
- PAPRSB Institute of Health Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei.
- School of Digital Science, Universiti Brunei Darussalam, Lebuhraya Tungku, Bandar Seri Begawan, Brunei.
- Hanif Abdul Rahman
- PAPRSB Institute of Health Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei
- School of Digital Science, Universiti Brunei Darussalam, Lebuhraya Tungku, Bandar Seri Begawan, Brunei
- Lin Naing
- PAPRSB Institute of Health Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei
- Owais Ahmed Malik
- School of Digital Science, Universiti Brunei Darussalam, Lebuhraya Tungku, Bandar Seri Begawan, Brunei
3
Mirfendereski P, Li GY, Pearson AT, Kerr AR. Artificial intelligence and the diagnosis of oral cavity cancer and oral potentially malignant disorders from clinical photographs: a narrative review. Front Oral Health 2025; 6:1569567. [PMID: 40130020] [PMCID: PMC11931071] [DOI: 10.3389/froh.2025.1569567]
Abstract
Oral cavity cancer is associated with high morbidity and mortality, particularly with advanced stage diagnosis. Oral cavity cancer, typically squamous cell carcinoma (OSCC), is often preceded by oral potentially malignant disorders (OPMDs), which comprise eleven disorders with variable risks for malignant transformation. While OPMDs are clinical diagnoses, conventional oral exam followed by biopsy and histopathological analysis is the gold standard for diagnosis of OSCC. There is vast heterogeneity in the clinical presentation of OPMDs, with possible visual similarities to early-stage OSCC or even to various benign oral mucosal abnormalities. The diagnostic challenge of OSCC/OPMDs is compounded in the non-specialist or primary care setting. There has been significant research interest in technology to assist in the diagnosis of OSCC/OPMDs. Artificial intelligence (AI), which enables machine performance of human tasks, has already shown promise in several domains of medical diagnostics. Computer vision, the field of AI dedicated to the analysis of visual data, has over the past decade been applied to clinical photographs for the diagnosis of OSCC/OPMDs. Various methodological concerns and limitations may be encountered in the literature on OSCC/OPMD image analysis. This narrative review delineates the current landscape of AI clinical photograph analysis in the diagnosis of OSCC/OPMDs and navigates the limitations, methodological issues, and clinical workflow implications of this field, providing context for future research considerations.
Affiliation(s)
- Payam Mirfendereski
- Department of Oral and Maxillofacial Pathology, Radiology, and Medicine, New York University College of Dentistry, New York, NY, United States
- Grace Y. Li
- Department of Medicine, Section of Hematology/Oncology, University of Chicago Medical Center, Chicago, IL, United States
- Alexander T. Pearson
- Department of Medicine, Section of Hematology/Oncology, University of Chicago Medical Center, Chicago, IL, United States
- Alexander Ross Kerr
- Department of Oral and Maxillofacial Pathology, Radiology, and Medicine, New York University College of Dentistry, New York, NY, United States
4
Chantapakul W, Vetchaporn S, Auephanwiriyakul S, Theera-Umpon N, Wongkhuenkaew R, Yeesarapat U, Chamusri N, Wongsapai M. Detection of Architectural Dysplastic Features from Histopathological Imagery of Oral Mucosa Using Neural Networks. Bioengineering (Basel) 2025; 12:216. [PMID: 40150681] [PMCID: PMC11939466] [DOI: 10.3390/bioengineering12030216]
Abstract
Oral cancer is a serious illness, but it is potentially curable if early detection is achieved. Oral epithelial dysplasia (OED), a precursor to oral squamous cell carcinoma (OSCC), exhibits abnormal characteristics that can be used to assess the risk of developing oral cancer. This paper proposes a neural network architecture for detecting dysplastic features of epithelial architecture, including irregular epithelial stratification and bulbous rete ridges. The effects of different combinations of atrous convolution, batch normalization, global pooling, and dropout are discussed, along with an ablation study. A signature library containing image patches was constructed and utilized to train the models. The best-performing model on the validation set attained an average accuracy of 97.52%. The receiver operating characteristic (ROC) curves from the blind test show that the best model reached a probability of detection of 0.8571 for irregular epithelial stratification and 0.8462 for bulbous rete ridges.
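The "probability of detection" values above are true-positive rates read off ROC curves. A hedged sketch (scores and labels are made up, not the study's data or code) of how ROC points are derived from classifier scores:

```python
# Illustrative sketch: ROC points from classifier scores; the
# "probability of detection" is the true-positive rate at a threshold.
# Scores and labels below are invented for illustration.

def roc_points(scores, labels):
    """Return (FPR, TPR) pairs, one per distinct score threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for th in sorted(set(scores), reverse=True):
        tp = sum(1 for s, l in zip(scores, labels) if s >= th and l == 1)
        fp = sum(1 for s, l in zip(scores, labels) if s >= th and l == 0)
        points.append((fp / neg, tp / pos))  # (false-positive rate, detection rate)
    return points

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
labels = [1, 1, 0, 1, 1, 0]
print(roc_points(scores, labels))
```

Sweeping the threshold from high to low traces the curve from (0, 0) toward (1, 1); the reported detection probabilities correspond to the TPR at the operating threshold chosen on such a curve.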
Affiliation(s)
- Watchanan Chantapakul
- Biomedical Engineering Institute, and the Biomedical Engineering and Innovation Research Center, Chiang Mai University, Chiang Mai 50200, Thailand
- Sirikanlaya Vetchaporn
- Intercountry Centre for Oral Health, Department of Health, Ministry of Public Health, Chiang Mai 50000, Thailand
- Sansanee Auephanwiriyakul
- Biomedical Engineering Institute, and the Biomedical Engineering and Innovation Research Center, Chiang Mai University, Chiang Mai 50200, Thailand
- Department of Computer Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai 50200, Thailand
- Nipon Theera-Umpon
- Biomedical Engineering Institute, and the Biomedical Engineering and Innovation Research Center, Chiang Mai University, Chiang Mai 50200, Thailand
- Department of Electrical Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai 50200, Thailand
- Ritipong Wongkhuenkaew
- Biomedical Engineering Institute, and the Biomedical Engineering and Innovation Research Center, Chiang Mai University, Chiang Mai 50200, Thailand
- Uklid Yeesarapat
- Department of Computer Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai 50200, Thailand
- Nutchapon Chamusri
- Department of Oral Biology and Diagnostic Sciences, Faculty of Dentistry, Chiang Mai University, Chiang Mai 50200, Thailand
- Mansuang Wongsapai
- Intercountry Centre for Oral Health, Department of Health, Ministry of Public Health, Chiang Mai 50000, Thailand
5
Danjo A, Kuwada C, Aijima R, Kamohara A, Fukuda M, Ariji Y, Ariji E, Yamashita Y. Limitations of panoramic radiographs in predicting mandibular wisdom tooth extraction and the potential of deep learning models to overcome them. Sci Rep 2024; 14:30806. [PMID: 39730557] [DOI: 10.1038/s41598-024-81153-z]
Abstract
Surgeons routinely interpret preoperative radiographic images to estimate the shape and position of a tooth prior to extraction. In this study, we aimed to predict the difficulty of lower wisdom tooth extraction using only panoramic radiographs. Difficulty was evaluated using the modified Parant score. Two oral surgeons (a specialist and a clinical resident) predicted the difficulty level of the test data. This study also aimed to evaluate the performance of a deep learning model in predicting the necessity for tooth separation or bone removal during wisdom tooth extraction. Two convolutional neural networks (AlexNet and VGG-16) were created and trained using panoramic X-ray images. Both surgeons interpreted the same images and classified them into three groups. The accuracies were 54.4% for both surgeons, 57.7% for AlexNet, and 54.4% for VGG-16. These results indicate that accurately predicting the difficulty of wisdom tooth extraction using panoramic radiographs alone is challenging. However, AlexNet and VGG-16 had sensitivities of more than 90% for crown and root separation. The predictive ability of our proposed model is equivalent to that of an oral surgery specialist, and a recall value > 90% makes it suitable for screening in clinical settings.
Affiliation(s)
- Atsushi Danjo
- Department of Oral and Maxillofacial Surgery, Faculty of Medicine, Saga University, 5-1-1 Nabeshima, Saga, 849-8501, Japan.
- Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Reona Aijima
- Department of Oral and Maxillofacial Surgery, Faculty of Medicine, Saga University, 5-1-1 Nabeshima, Saga, 849-8501, Japan
- Asana Kamohara
- Department of Oral and Maxillofacial Surgery, Faculty of Medicine, Saga University, 5-1-1 Nabeshima, Saga, 849-8501, Japan
- Motoki Fukuda
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, Osaka, Japan
- Yoshiko Ariji
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, Osaka, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi Gakuin University School of Dentistry, Nagoya, Japan
- Yoshio Yamashita
- Department of Oral and Maxillofacial Surgery, Faculty of Medicine, Saga University, 5-1-1 Nabeshima, Saga, 849-8501, Japan
6
Li L, Pu C, Tao J, Zhu L, Hu S, Qiao B, Xing L, Wei B, Shi C, Chen P, Zhang H. Development of an oral cancer detection system through deep learning. BMC Oral Health 2024; 24:1468. [PMID: 39633342] [PMCID: PMC11619268] [DOI: 10.1186/s12903-024-05195-5]
Abstract
OBJECTIVE We aimed to develop an AI-based model that uses a portable electronic oral endoscope to capture intraoral images of patients for the detection of oral cancer. SUBJECTS AND METHODS From September 2019 to October 2023, 205 high-quality annotated images of oral cancer were collected using a portable oral electronic endoscope at the Chinese PLA General Hospital for this study. The U-Net and ResNet-34 deep learning models were employed for oral cancer detection. The performance of these models was evaluated using several metrics: Dice coefficient, Intersection over Union (IoU), Loss, Precision, Recall, and F1 Score. RESULTS During the algorithm model training phase, the Dice values were approximately 0.8, the Loss values were close to 0, and the IoU values were around 0.7. In the validation phase, the highest Dice values ranged between 0.4 and 0.5, while the Loss values increased, and the training loss began to decrease gradually. In the test phase, the model achieved a maximum Precision of 0.96 with a confidence threshold of 0.990. Additionally, with a confidence threshold of 0.010, the highest F1 score reached was 0.58. CONCLUSION This study provides an initial demonstration of the potential of deep learning models in identifying oral cancer.
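The Dice coefficient and IoU reported above are overlap metrics between a predicted segmentation mask and the ground truth. A hedged sketch on toy flat binary masks (not the study's U-Net outputs; a real pipeline would use 2-D image masks):

```python
# Hedged sketch (not the study's code): Dice and IoU overlap metrics on
# flat binary masks.

def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))   # overlapping pixels
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))
    return inter / (sum(pred) + sum(truth) - inter)   # intersection / union

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(dice(pred, truth))  # 2/3: two overlapping pixels out of 3 + 3
print(iou(pred, truth))   # 0.5: two overlapping pixels over 4 in the union
```

The two metrics are monotonically related (IoU = D / (2 - D)), which is why the training-phase Dice of about 0.8 and IoU of about 0.7 in the abstract track each other.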
Affiliation(s)
- Liangbo Li
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Cheng Pu
- Key Laboratory of Animal Disease and Human Health of Sichuan Province, Beijing, China
- College of Veterinary Medicine, Sichuan Agricultural University, Sichuan, China
- Jingqiao Tao
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, Southern Medical Branch of PLA General Hospital, Beijing, 100842, China
- Liang Zhu
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Suixin Hu
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Bo Qiao
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Lejun Xing
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Bo Wei
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Chuyan Shi
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Peng Chen
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
- Haizhong Zhang
- Department of Stomatology, Chinese PLA General Hospital, 28 Fuxing road, Haidian District, Beijing, 100853, China
7
Thakuria T, Rahman T, Mahanta DR, Khataniar SK, Goswami RD, Rahman T, Mahanta LB. Deep learning for early diagnosis of oral cancer via smartphone and DSLR image analysis: a systematic review. Expert Rev Med Devices 2024; 21:1189-1204. [PMID: 39587051] [DOI: 10.1080/17434440.2024.2434732]
Abstract
INTRODUCTION Diagnosing oral cancer is crucial in healthcare, with technological advancements enhancing early detection and outcomes. This review examines the impact of handheld AI-based tools, focusing on Convolutional Neural Networks (CNNs) and their advanced architectures in oral cancer diagnosis. METHODS A comprehensive search across PubMed, Scopus, Google Scholar, and Web of Science identified papers on deep learning (DL) in oral cancer diagnosis using digital images. The review, registered with PROSPERO, employed PRISMA and QUADAS-2 for search and risk assessment, with data analyzed through bubble and bar charts. RESULTS Twenty-five papers were reviewed, highlighting classification, segmentation, and object detection as key areas. Despite challenges like limited annotated datasets and data imbalance, models such as DenseNet121, VGG19, and EfficientNet-B0 excelled in binary classification, while EfficientNet-B4, Inception-V4, and Faster R-CNN were effective for multiclass classification and object detection. Models achieved up to 100% precision, 99% specificity, and 97.5% accuracy, showcasing AI's potential to improve diagnostic accuracy. Combining datasets and leveraging transfer learning enhances detection, particularly in resource-limited settings. CONCLUSION Handheld AI tools are transforming oral cancer diagnosis, with ethical considerations guiding their integration into healthcare systems. DL offers explainability, builds trust in AI-driven diagnoses, and facilitates telemedicine integration.
Affiliation(s)
- Tapabrat Thakuria
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Taibur Rahman
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Deva Raj Mahanta
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Tashnin Rahman
- Department of Head & Neck Oncology, Dr. B Borooah Cancer Institute, Guwahati, India
- Lipi B Mahanta
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology, Guwahati, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
8
Chen Y, Du P, Zhang Y, Guo X, Song Y, Wang J, Yang LL, He W. Image-based multi-omics analysis for oral science: Recent progress and perspectives. J Dent 2024; 151:105425. [PMID: 39427959] [DOI: 10.1016/j.jdent.2024.105425]
Abstract
OBJECTIVES The diagnosis and treatment of oral and dental diseases rely heavily on various types of medical imaging. Deep learning-mediated multi-omics analysis can extract more representative features than those identified through traditional diagnostic methods. This review aims to discuss the applications and recent advances in image-based multi-omics analysis in oral science and to highlight its potential to enhance traditional diagnostic approaches for oral diseases. STUDY SELECTION, DATA, AND SOURCES A systematic search was conducted in the PubMed, Web of Science, and Google Scholar databases, covering all available records. This search thoroughly examined and summarized advances in image-based multi-omics analysis in oral and maxillofacial medicine. CONCLUSIONS This review comprehensively summarizes recent advancements in image-based multi-omics analysis for oral science, including radiomics, pathomics, and photographic-based omics analysis. It also discusses the ongoing challenges and future perspectives that could provide new insights into exploiting the potential of image-based omics analysis in the field of oral science. CLINICAL SIGNIFICANCE This review article presents the state of image-based multi-omics analysis in stomatology, aiming to help oral clinicians recognize the utility of combining omics analyses with imaging during diagnosis and treatment, which can improve diagnostic accuracy, shorten times to diagnosis, save medical resources, and reduce disparity in professional knowledge among clinicians.
Affiliation(s)
- Yizhuo Chen
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Pengxi Du
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Yinyin Zhang
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Xin Guo
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Yujing Song
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Jianhua Wang
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Lei-Lei Yang
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
- Wei He
- Department of Stomatology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, China
9
Kulkarni P, Sarwe N, Pingale A, Sarolkar Y, Patil RR, Shinde G, Kaur G. Exploring the efficacy of various CNN architectures in diagnosing oral cancer from squamous cell carcinoma. MethodsX 2024; 13:103034. [PMID: 39610794] [PMCID: PMC11603122] [DOI: 10.1016/j.mex.2024.103034]
Abstract
Oral cancer can result from mutations in cells located in the lips or mouth. Diagnosing oral cavity squamous cell carcinoma (OCSCC) is particularly challenging, with diagnoses often occurring at advanced stages. To address this, computer-aided diagnosis methods are increasingly being used. In this work, a deep learning-based approach utilizing models such as VGG16, ResNet50, LeNet-5, MobileNetV2, and Inception V3 is presented. The NEOR and OCSCC datasets were used for feature extraction, with virtual slide images divided into tiles and classified as normal or squamous cell cancer. Performance metrics like accuracy, F1-score, AUC, precision, and recall were analyzed to determine the prerequisites for optimal CNN performance. The proposed CNN approaches were effective for classifying OCSCC and oral dysplasia, with the highest accuracy of 95.41 % achieved using MobileNetV2. Key findings: Deep learning models, particularly MobileNetV2, achieved high classification accuracy (95.41 %) for OCSCC. CNN-based methods show promise for early-stage OCSCC and oral dysplasia diagnosis. Performance parameters like precision, recall, and F1-score help optimize CNN model selection for this task.
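The precision, recall, and F1-score analyzed in this study follow directly from confusion-matrix counts. A minimal illustrative sketch (toy counts, not the paper's results):

```python
# Minimal sketch with toy counts (not the paper's data): precision,
# recall, and F1-score from true-positive, false-positive, and
# false-negative counts.

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)  # of all predicted positives, how many are right
    recall = tp / (tp + fn)     # of all actual positives, how many are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

print(precision_recall_f1(8, 2, 2))  # -> approximately (0.8, 0.8, 0.8)
```

Because F1 is the harmonic mean of precision and recall, it penalizes models that trade one for the other, which is why it is useful for comparing CNN architectures on imbalanced tile sets.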
Affiliation(s)
- Prerna Kulkarni
- Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk) Pune, Maharashtra 411048, India
- Nidhi Sarwe
- Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk) Pune, Maharashtra 411048, India
- Abhishek Pingale
- Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk) Pune, Maharashtra 411048, India
- Yash Sarolkar
- Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk) Pune, Maharashtra 411048, India
- Rutuja Rajendra Patil
- Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk) Pune, Maharashtra 411048, India
- Gitanjali Shinde
- Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk) Pune, Maharashtra 411048, India
- Gagandeep Kaur
- CSE Department, Symbiosis Institute of Technology, Nagpur Campus, Symbiosis International (Deemed University), Pune, India
10
Chen W, Dhawan M, Liu J, Ing D, Mehta K, Tran D, Lawrence D, Ganhewa M, Cirillo N. Mapping the Use of Artificial Intelligence-Based Image Analysis for Clinical Decision-Making in Dentistry: A Scoping Review. Clin Exp Dent Res 2024; 10:e70035. [PMID: 39600121] [PMCID: PMC11599430] [DOI: 10.1002/cre2.70035]
Abstract
OBJECTIVES Artificial intelligence (AI) is an emerging field in dentistry. AI is gradually being integrated into dentistry to improve clinical dental practice. The aims of this scoping review were to investigate the application of AI in image analysis for decision-making in clinical dentistry and to identify trends and research gaps in the current literature. MATERIAL AND METHODS This review followed the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). An electronic literature search was performed through PubMed and Scopus. After removing duplicates, a preliminary screening based on titles and abstracts was performed. A full-text review and analysis were performed according to predefined inclusion criteria, and data were extracted from eligible articles. RESULTS Of the 1334 articles returned, 276 met the inclusion criteria (consisting of 601,122 images in total) and were included in the qualitative synthesis. Most of the included studies utilized convolutional neural networks (CNNs) on dental radiographs such as orthopantomograms (OPGs) and intraoral radiographs (bitewings and periapicals). AI was applied across all fields of dentistry, particularly oral medicine, oral surgery, and orthodontics, for direct clinical inference and segmentation. AI-based image analysis was used in several components of the clinical decision-making process, including diagnosis, detection or classification, prediction, and management. CONCLUSIONS A variety of machine learning and deep learning techniques are being used for dental image analysis to assist clinicians in making accurate diagnoses and choosing appropriate interventions in a timely manner.
Affiliation(s)
- Wei Chen
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Monisha Dhawan
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Jonathan Liu
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Damie Ing
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Kruti Mehta
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Daniel Tran
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- Max Ganhewa
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
- Nicola Cirillo
- Melbourne Dental School, The University of Melbourne, Carlton, Victoria, Australia
- CoTreatAI, CoTreat Pty Ltd., Melbourne, Victoria, Australia
| |
Collapse
|
11
Sahoo RK, Sahoo KC, Dash GC, Kumar G, Baliarsingh SK, Panda B, Pati S. Diagnostic performance of artificial intelligence in detecting oral potentially malignant disorders and oral cancer using medical diagnostic imaging: a systematic review and meta-analysis. Front Oral Health 2024; 5:1494867. PMID: 39568787; PMCID: PMC11576460; DOI: 10.3389/froh.2024.1494867.
Abstract
Objective Oral cancer is a widespread global health problem characterised by high mortality rates, wherein early detection is critical for better survival outcomes and quality of life. While visual examination is the primary method for detecting oral cancer, it may not be practical in remote areas. AI algorithms have shown some promise in detecting cancer from medical images, but their effectiveness in oral cancer detection remains under-explored. This systematic review aims to provide an extensive assessment of the existing evidence about the diagnostic accuracy of AI-driven approaches for detecting oral potentially malignant disorders (OPMDs) and oral cancer using medical diagnostic imaging. Methods Adhering to PRISMA guidelines, the review scrutinised literature from PubMed, Scopus, and IEEE databases, with a specific focus on evaluating the performance of AI architectures across diverse imaging modalities for the detection of these conditions. Results The performance of AI models, measured by sensitivity and specificity, was assessed using a hierarchical summary receiver operating characteristic (SROC) curve, with heterogeneity quantified through the I² statistic. To account for inter-study variability, a random effects model was utilized. We screened 296 articles, included 55 studies for qualitative synthesis, and selected 18 studies for meta-analysis. Studies evaluating the diagnostic efficacy of AI-based methods reveal a high sensitivity of 0.87 and specificity of 0.81. The diagnostic odds ratio (DOR) of 131.63 indicates a high likelihood of accurate diagnosis of oral cancer and OPMDs. The SROC curve (AUC) of 0.9758 indicates the exceptional diagnostic performance of such models. The research showed that deep learning (DL) architectures, especially convolutional neural networks (CNNs), performed best at detecting OPMDs and oral cancer. Histopathological images exhibited the greatest sensitivity and specificity in these detections.
Conclusion These findings suggest that AI algorithms have the potential to function as reliable tools for the early diagnosis of OPMDs and oral cancer, offering significant advantages, particularly in resource-constrained settings. Systematic Review Registration https://www.crd.york.ac.uk/, PROSPERO (CRD42023476706).
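The pooled sensitivity, specificity, and diagnostic odds ratio reported in this abstract are standard functions of a 2x2 confusion table; a minimal illustrative sketch (the counts below are invented for demonstration and are not data from the review):

```python
# Illustrative computation of sensitivity, specificity, and the diagnostic
# odds ratio (DOR) from a single hypothetical 2x2 confusion table.

def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    # DOR: odds of a positive test in diseased vs non-diseased subjects
    dor = (tp * tn) / (fp * fn)
    return sensitivity, specificity, dor

# Hypothetical counts chosen to mirror the pooled 0.87 / 0.81 figures above.
sens, spec, dor = diagnostic_metrics(tp=87, fp=19, fn=13, tn=81)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} DOR={dor:.1f}")
```

In a meta-analysis these per-study values would then be pooled under a random effects model, as the review describes.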
Affiliation(s)
- Rakesh Kumar Sahoo: School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India; Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
- Krushna Chandra Sahoo: Health Technology Assessment in India (HTAIn), Department of Health Research, Ministry of Health & Family Welfare, Govt. of India, New Delhi, India
- Gunjan Kumar: Kalinga Institute of Dental Sciences, KIIT Deemed to be University, Bhubaneswar, India
- Bhuputra Panda: School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India
- Sanghamitra Pati: Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
12
Zhang R, Lu M, Zhang J, Chen X, Zhu F, Tian X, Chen Y, Cao Y. Research and Application of Deep Learning Models with Multi-Scale Feature Fusion for Lesion Segmentation in Oral Mucosal Diseases. Bioengineering (Basel) 2024; 11:1107. PMID: 39593767; PMCID: PMC11591966; DOI: 10.3390/bioengineering11111107.
Abstract
Given the complexity of oral mucosal disease diagnosis and the limitations in the precision of traditional object detection methods, this study aims to develop a high-accuracy artificial intelligence-assisted diagnostic approach based on the SegFormer semantic segmentation model. This method is designed to automatically segment lesion areas in white-light images of oral mucosal diseases, providing objective and quantifiable evidence for clinical diagnosis. This study utilized a dataset of oral mucosal diseases provided by the Affiliated Stomatological Hospital of Zhejiang University School of Medicine, comprising 838 high-resolution images of three diseases: oral lichen planus, oral leukoplakia, and oral submucous fibrosis. These images were annotated at the pixel level by oral specialists using Labelme software (v5.5.0) to construct a semantic segmentation dataset. This study designed a SegFormer model based on the Transformer architecture, employed cross-validation to divide training and testing sets, and compared SegFormer models of different capacities with classical segmentation models such as UNet and DeepLabV3. Quantitative metrics including the Dice coefficient and mIoU were evaluated, and a qualitative visual analysis of the segmentation results was performed to comprehensively assess model performance. The SegFormer-B2 model achieved optimal performance on the test set, with a Dice coefficient of 0.710 and mIoU of 0.786, significantly outperforming other comparative algorithms. The visual results demonstrate that this model could accurately segment the lesion areas of three common oral mucosal diseases. The SegFormer model proposed in this study effectively achieves the precise automatic segmentation of three common oral mucosal diseases, providing a reliable auxiliary tool for clinical diagnosis. It shows promising prospects in improving the efficiency and accuracy of oral mucosal disease diagnosis and has potential clinical application value.
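The Dice coefficient and mIoU used to score the segmentation models above are standard overlap measures between a predicted mask and a ground-truth mask (mIoU averages IoU over classes). A minimal sketch with toy masks, illustrative only and not the authors' evaluation code:

```python
# Dice and IoU on binary masks represented as sets of lesion-pixel
# coordinates. Toy masks only; real evaluation runs over full images.

def dice(pred: set, gt: set) -> float:
    # Dice = 2|P ∩ G| / (|P| + |G|)
    return 2 * len(pred & gt) / (len(pred) + len(gt))

def iou(pred: set, gt: set) -> float:
    # IoU = |P ∩ G| / |P ∪ G|; mIoU is this averaged across classes
    return len(pred & gt) / len(pred | gt)

gt = {(1, 1), (1, 2), (2, 1), (2, 2)}   # 4-pixel ground-truth lesion
pred = gt | {(1, 3), (2, 3)}            # 6-pixel over-segmented prediction
print(dice(pred, gt), iou(pred, gt))    # 0.8 and 0.666...
```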
Affiliation(s)
- Rui Zhang: Zhejiang Provincial Key Laboratory of Internet Multimedia Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, China; Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Engineering Research Center of Oral Biomaterials and Devices of Zhejiang Province, Hangzhou 310053, China; Life Health Innovation and Entrepreneurship Center, Institute of Wenzhou, Zhejiang University, Wenzhou 325000, China
- Miao Lu, Jiayuan Zhang, Yuqi Cao: State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Xiaoyan Chen, Fudong Zhu: Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Engineering Research Center of Oral Biomaterials and Devices of Zhejiang Province, Hangzhou 310053, China
- Xiang Tian, Yaowu Chen: Zhejiang Provincial Key Laboratory of Internet Multimedia Technology, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou 310027, China
13
Alharbi SS, Alhasson HF. Exploring the Applications of Artificial Intelligence in Dental Image Detection: A Systematic Review. Diagnostics (Basel) 2024; 14:2442. PMID: 39518408; PMCID: PMC11545562; DOI: 10.3390/diagnostics14212442.
Abstract
BACKGROUND Dental care has been transformed by neural networks, introducing advanced methods for improving patient outcomes. By leveraging technological innovation, dental informatics aims to enhance treatment and diagnostic processes. Early diagnosis of dental problems is crucial, as it can substantially reduce dental disease incidence by ensuring timely and appropriate treatment. Artificial intelligence (AI) is a pivotal tool within dental informatics, with applications across all dental specialties. This systematic literature review aims to comprehensively summarize existing research on AI implementation in dentistry. It explores various techniques used for detecting oral features such as teeth, fillings, caries, prostheses, crowns, implants, and endodontic treatments. AI plays a vital role in the diagnosis of dental diseases by enabling precise and quick identification of issues that may be difficult to detect through traditional methods. Its ability to analyze large volumes of data enhances diagnostic accuracy and efficiency, leading to better patient outcomes. METHODS An extensive search was conducted across a number of databases, including Science Direct, PubMed (MEDLINE), arXiv.org, MDPI, Nature, Web of Science, Google Scholar, Scopus, and Wiley Online Library. RESULTS The studies included in this review employed a wide range of neural networks, showcasing their versatility in detecting the dental categories mentioned above. Additionally, the use of diverse datasets underscores the adaptability of these AI models to different clinical scenarios. This study highlights the compatibility, robustness, and heterogeneity among the reviewed studies, indicating that AI technologies can be effectively integrated into current dental practices. The review also discusses potential challenges and future directions for AI in dentistry, emphasizing the need for further research to optimize these technologies for broader clinical applications.
CONCLUSIONS By providing a detailed overview of AI's role in dentistry, this review aims to inform practitioners and researchers about the current capabilities and future potential of AI-driven dental care, ultimately contributing to improved patient outcomes and more efficient dental practices.
Affiliation(s)
- Shuaa S. Alharbi: Department of Information Technology, College of Computer, Qassim University, Buraydah 52571, Saudi Arabia
14
Keser G, Pekiner FN, Bayrakdar İŞ, Çelik Ö, Orhan K. A deep learning approach to detection of oral cancer lesions from intra oral patient images: A preliminary retrospective study. J Stomatol Oral Maxillofac Surg 2024; 125:101975. PMID: 39043293; DOI: 10.1016/j.jormas.2024.101975.
Abstract
INTRODUCTION Oral squamous cell carcinomas (OSCC) seen in the oral cavity are a category of diseases that dentists may diagnose and even cure. This study evaluated the performance of diagnostic computer software developed to detect oral cancer lesions in retrospective intraoral patient images. MATERIALS AND METHODS Oral cancer lesions were labeled with the CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) using the polygonal labeling method on a total of 65 anonymized retrospective intraoral images of oral mucosa from individuals in our clinic whose oral cancer had been diagnosed histopathologically by incisional biopsy. All images were rechecked and verified by experienced experts. This dataset was divided into training (n = 53), validation (n = 6), and test (n = 6) sets. An artificial intelligence model was developed using the YOLOv5 architecture, a deep learning approach. Model success was evaluated with a confusion matrix. RESULTS When performance was evaluated on the test images withheld from training, the F1 score, sensitivity, and precision of the artificial intelligence model built on the YOLOv5 architecture were found to be 0.667, 0.667, and 0.667, respectively. CONCLUSIONS Our study reveals that OSCC lesions carry discriminative visual appearances that can be identified by a deep learning algorithm. Artificial intelligence shows promise in the prediagnosis of oral cancer lesions. Success rates are expected to increase as models are trained on datasets containing more images.
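The F1, sensitivity, and precision values quoted in this abstract follow from detection counts via the usual confusion-matrix formulas. A brief sketch with hypothetical counts chosen to reproduce 0.667 (these are not the study's raw numbers):

```python
# Precision, recall (sensitivity), and F1 from detection counts:
# tp = correct lesion detections, fp = false alarms, fn = missed lesions.

def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 4 correct detections, 2 false positives, 2 misses.
precision, recall, f1 = detection_metrics(tp=4, fp=2, fn=2)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.667 0.667 0.667
```

Note that when precision and recall are equal, F1 equals both, which is why all three metrics coincide here.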
Affiliation(s)
- Gaye Keser, Filiz Namdar Pekiner: Department of Dentomaxillofacial Radiology (Oral Diagnosis), Faculty of Dentistry, Marmara University, Başıbüyük Sağlık Yerleşkesi, Başıbüyük Yolu 9/3, 34854 Maltepe, İstanbul, Turkey
- İbrahim Şevki Bayrakdar: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Eskişehir Osmangazi University, Eskişehir, Turkey
- Özer Çelik: Department of Mathematics and Computer, Faculty of Science and Letters, Eskişehir Osmangazi University, Eskişehir, Turkey
- Kaan Orhan: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey; Ankara University Medical Design Application and Research Center (MEDITAM), Ankara, Turkey
15
Kocakaya DNC, Özel MB, Kartbak SBA, Çakmak M, Sinanoğlu EA. Profile Photograph Classification Performance of Deep Learning Algorithms Trained Using Cephalometric Measurements: A Preliminary Study. Diagnostics (Basel) 2024; 14:1916. PMID: 39272701; PMCID: PMC11394270; DOI: 10.3390/diagnostics14171916.
Abstract
Extraoral profile photographs are crucial for orthodontic diagnosis, documentation, and treatment planning. The purpose of this study was to evaluate classifications made on extraoral patient photographs by deep learning algorithms trained using grouped patient pictures based on cephalometric measurements. Cephalometric radiographs and profile photographs of 990 patients from the archives of Kocaeli University Faculty of Dentistry Department of Orthodontics were used for the study. FH-NA, FH-NPog, FMA, and N-A-Pog measurements on patient cephalometric radiographs were carried out utilizing Webceph. Three groups were formed for every parameter according to cephalometric values. Deep learning algorithms were trained using extraoral photographs of the patients, grouped according to the respective cephalometric measurements. Fourteen deep learning models were trained and tested for accuracy of prediction in classifying patient images. Accuracy rates of up to 96.67% for FH-NA groups, 97.33% for FH-NPog groups, 97.67% for FMA groups, and 97.00% for N-A-Pog groups were obtained. This is a pioneering study in which clinical photographs were classified using artificial intelligence architectures trained on actual cephalometric values, thus eliminating or reducing the need for cephalometric X-rays in future orthodontic diagnosis applications.
Affiliation(s)
- Mehmet Birol Özel, Sultan Büşra Ay Kartbak: Department of Orthodontics, Faculty of Dentistry, Kocaeli University, Kocaeli 41190, Türkiye
- Muhammet Çakmak: Department of Computer Engineering, Faculty of Engineering and Architecture, Sinop University, Sinop 57000, Türkiye
- Enver Alper Sinanoğlu: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli University, Kocaeli 41190, Türkiye
16
Li J, Kot WY, McGrath CP, Chan BWA, Ho JWK, Zheng LW. Diagnostic accuracy of artificial intelligence assisted clinical imaging in the detection of oral potentially malignant disorders and oral cancer: a systematic review and meta-analysis. Int J Surg 2024; 110:5034-5046. PMID: 38652301; PMCID: PMC11325952; DOI: 10.1097/js9.0000000000001469.
Abstract
BACKGROUND The objective of this study is to examine the application of artificial intelligence (AI) algorithms in detecting oral potentially malignant disorders (OPMD) and oral cancerous lesions, and to evaluate the accuracy variations among different imaging tools employed in these diagnostic processes. MATERIALS AND METHODS A systematic search was conducted in four databases: Embase, Web of Science, PubMed, and Scopus. The inclusion criteria included studies using machine learning algorithms to provide diagnostic information on specific oral lesions, prospective or retrospective design, and inclusion of OPMD. Sensitivity and specificity analyses were also required. Forest plots were generated to display overall diagnostic odds ratio (DOR), sensitivity, specificity, negative predictive values, and summary receiver operating characteristic (SROC) curves. Meta-regression analysis was conducted to examine potential differences among different imaging tools. RESULTS The overall DOR for AI-based screening of OPMD and oral mucosal cancerous lesions from normal mucosa was 68.438 (95% CI [39.484-118.623], I² = 86%). The area under the SROC curve was 0.938, indicating excellent diagnostic performance. AI-assisted screening showed a sensitivity of 89.9% (95% CI [0.866-0.925], I² = 81%), specificity of 89.2% (95% CI [0.851-0.922], I² = 79%), and a high negative predictive value of 89.5% (95% CI [0.851-0.927], I² = 96%). Meta-regression analysis revealed no significant difference among the three imaging tools. After generating a GOSH plot, the DOR was calculated to be 49.30, and the area under the SROC curve was 0.877. Additionally, sensitivity, specificity, and negative predictive value were 90.5% (95% CI [0.873-0.929], I² = 4%), 87.0% (95% CI [0.813-0.912], I² = 49%), and 90.1% (95% CI [0.860-0.931], I² = 57%), respectively. Subgroup analysis showed that clinical photography had the highest diagnostic accuracy.
CONCLUSIONS AI-based detection using clinical photography shows a high DOR and is easily accessible in the current era with billions of phone subscribers globally. This indicates that there is significant potential for AI to enhance the diagnostic capabilities of general practitioners to the level of specialists by utilizing clinical photographs, without the need for expensive specialized imaging equipment.
Affiliation(s)
- JingWen Li, Li Wu Zheng: Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, The University of Hong Kong
- Wai Ying Kot: Faculty of Dentistry, The University of Hong Kong
- Colman Patrick McGrath: Division of Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong
- Bik Wan Amy Chan: Department of Anatomical and Cellular Pathology, Prince of Wales Hospital, The Chinese University of Hong Kong
- Joshua Wing Kei Ho: School of Biomedical Sciences, Li Ka Shing Faculty of Medicine, The University of Hong Kong; Laboratory of Data Discovery for Health Limited (D24H), Hong Kong Science Park, Hong Kong SAR, People's Republic of China
17
Ye YJ, Han Y, Liu Y, Guo ZL, Huang MW. Utilizing deep learning for automated detection of oral lesions: A multicenter study. Oral Oncol 2024; 155:106873. PMID: 38833826; DOI: 10.1016/j.oraloncology.2024.106873.
Abstract
OBJECTIVES We aim to develop a YOLOX-based convolutional neural network model for the precise detection of multiple oral lesions, including OLP, OLK, and OSCC, in patient photos. MATERIALS AND METHODS We collected 1419 photos for model development and evaluation, conducting both a comparative analysis to gauge the model's capabilities and a multicenter evaluation to assess its diagnostic aid, for which 24 participants from 14 centers across the nation were invited. We further integrated this model into a mobile application for rapid and accurate diagnostics. RESULTS In the comparative analysis, our model outperformed the senior group (comprising the three most experienced experts, each with more than 10 years of experience) in macro-average recall (85% vs 77.5%), precision (87.02% vs 80.29%), and specificity (95% vs 92.5%). In the multicenter model-assisted diagnosis evaluation, the dental, general, and community hospital groups showed significant improvement when aided by the model, reaching a level comparable to the senior group, with all macro-average metrics closely aligning with or even surpassing those of the latter (recall of 78.67%, 74.72%, 83.54% vs 77.5%; precision of 80.56%, 76.42%, 85.15% vs 80.29%; specificity of 92.89%, 91.57%, 94.51% vs 92.5%). CONCLUSION Our model exhibited high proficiency in the detection of oral lesions, surpassing the performance of highly experienced specialists. The model can also help specialists and general dentists from dental and community hospitals in diagnosing oral lesions, reaching the level of highly experienced specialists. Moreover, our model's integration into a mobile application facilitated swift and precise diagnostic procedures.
Affiliation(s)
- Yong-Jin Ye, Zhen-Lin Guo: Division of Mechanics, Beijing Computational Science Research Center, Building 9, East Zone, No.10 East Xibeiwang Road, Haidian District, Beijing 100193, China
- Ying Han, Yang Liu: Department of Oral Medicine, Peking University School and Hospital of Stomatology, National Center for Stomatology, National Clinical Research Center for Oral Diseases, National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, No.22, Zhongguancun South Avenue, Haidian District, Beijing 100081, China
- Ming-Wei Huang: Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, National Center for Stomatology, National Clinical Research Center for Oral Diseases, National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, No.22, Zhongguancun South Avenue, Haidian District, Beijing 100081, China
18
Hsu Y, Chou CY, Huang YC, Liu YC, Lin YL, Zhong ZP, Liao JK, Lee JC, Chen HY, Lee JJ, Chen SJ. Oral mucosal lesions triage via YOLOv7 models. J Formos Med Assoc 2024:S0929-6646(24)00313-9. PMID: 39003230; DOI: 10.1016/j.jfma.2024.07.010.
Abstract
BACKGROUND/PURPOSE The global incidence of lip and oral cavity cancer continues to rise, necessitating improved early detection methods. This study leverages the capabilities of computer vision and deep learning to enhance the early detection and classification of oral mucosal lesions. METHODS A dataset initially consisting of 6903 white-light macroscopic images collected from 2006 to 2013 was expanded to over 50,000 images to train the YOLOv7 deep learning model. Lesions were categorized into three referral grades: benign (green), potentially malignant (yellow), and malignant (red), facilitating efficient triage. RESULTS The YOLOv7 models, particularly the YOLOv7-E6, demonstrated high precision and recall across all lesion categories. The YOLOv7-D6 model excelled at identifying malignant lesions with notable precision, recall, and F1 scores. Enhancements, including the integration of coordinate attention in the YOLOv7-D6-CA model, significantly improved the accuracy of lesion classification. CONCLUSION The study provides a robust comparison of YOLOv7 model configurations for classifying oral lesions into triage categories. The overall results highlight the potential of deep learning models to contribute to the early detection of oral cancers, offering valuable tools for both clinical settings and remote screening applications.
Affiliation(s)
- Yu Hsu: Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan; Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Cheng-Ying Chou, Yu-Chieh Liu, Yong-Long Lin, Zi-Ping Zhong, Jun-Kai Liao: Department of Biomechatronics Engineering, National Taiwan University, Taipei, Taiwan
- Yu-Cheng Huang: Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan
- Jun-Ching Lee, Hsin-Yu Chen: Department of Dentistry, National Taiwan University Hospital, Taipei, Taiwan
- Jang-Jaer Lee: Department of Dentistry, National Taiwan University Hospital, Taipei, Taiwan; Department of Dentistry, College of Medicine, National Taiwan University, Taipei, Taiwan
- Shyh-Jye Chen: Department of Medical Imaging, National Taiwan University Hospital, Taipei, Taiwan; Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan; Department of Radiology, College of Medicine, National Taiwan University, Taipei, Taiwan
19
Zhang L, Shi R, Youssefi N. Oral cancer diagnosis based on gated recurrent unit networks optimized by an improved version of Northern Goshawk optimization algorithm. Heliyon 2024; 10:e32077. PMID: 38912510; PMCID: PMC11190545; DOI: 10.1016/j.heliyon.2024.e32077.
Abstract
Early diagnosis of oral cancer is a critical task in the field of medical science, and developing sound and effective strategies for early detection is one of the most necessary steps. The current research investigates a new strategy for diagnosing oral cancer that combines effective learning with medical imaging: Gated Recurrent Unit (GRU) networks optimized by an improved version of the Northern Goshawk Optimization (NGO) algorithm. The proposed approach has several advantages over existing methods, including its ability to analyze large and complex datasets, its high accuracy, and its capacity to detect oral cancer at a very early stage. The improved NGO algorithm is utilized to tune the GRU network, which improves the performance of the network and increases the accuracy of the diagnosis. The paper describes the proposed approach and evaluates its performance using a dataset of oral cancer patients. The findings of the study demonstrate the efficiency of the suggested approach in accurately diagnosing oral cancer.
Affiliation(s)
- Lei Zhang, Rongji Shi: Department of Stomatology, The Second Hospital, Cheeloo College of Medicine, Shandong University, Jinan, 250033, Shandong, China
- Naser Youssefi: Islamic Azad University, Science and Research Branch, Tehran, Iran; College of Technical Engineering, The Islamic University, Najaf, Iraq
20
Mali SB. Screening of head neck cancer. Oral Oncol Rep 2024; 9:100142. DOI: 10.1016/j.oor.2023.100142.
21
Gomes RFT, Schmith J, de Figueiredo RM, Freitas SA, Machado GN, Romanini J, Almeida JD, Pereira CT, Rodrigues JDA, Carrard VC. Convolutional neural network misclassification analysis in oral lesions: an error evaluation criterion by image characteristics. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:243-252. PMID: 38161085; DOI: 10.1016/j.oooo.2023.10.003.
Abstract
OBJECTIVE This retrospective study analyzed the errors generated by a convolutional neural network (CNN) performing automated classification of oral lesions according to their clinical characteristics, seeking to identify patterns of systematic error in the intermediate layers of the CNN. STUDY DESIGN A cross-sectional analysis was nested in a previous trial in which a CNN model performed automated classification of elementary lesions from clinical images of oral lesions. The resulting CNN classification errors formed the dataset for this study. A total of 116 real outputs diverged from the estimated outputs, representing 7.6% of the total images analyzed by the CNN. RESULTS The discrepancies between real and estimated outputs were associated with problems of image sharpness, resolution, and focus; human errors; and the impact of data augmentation. CONCLUSIONS Qualitative analysis of errors in the automated classification of clinical images confirmed the impact of image quality and identified the strong impact of the data augmentation process. Knowledge of the factors that models evaluate to make decisions can increase confidence in the high classification potential of CNNs.
Affiliation(s)
- Rita Fabiane Teixeira Gomes: Department of Oral Pathology, Faculdade de Odontologia, Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Brazil
- Jean Schmith: Polytechnic School, University of Vale do Rio dos Sinos (UNISINOS), São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory (TECAE Lab), UNISINOS, São Leopoldo, Brazil
- Rodrigo Marques de Figueiredo: Polytechnic School, UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory (TECAE Lab), UNISINOS, São Leopoldo, Brazil
- Samuel Armbrust Freitas: Department of Applied Computing, UNISINOS, São Leopoldo, Brazil
- Juliana Romanini: Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
- Janete Dias Almeida: Department of Biosciences and Oral Diagnostics, São Paulo State University, Campus São José dos Campos, São Paulo, Brazil
- Jonas de Almeida Rodrigues: Department of Surgery and Orthopaedics, Faculdade de Odontologia, UFRGS, Porto Alegre, Brazil
- Vinicius Coelho Carrard: Department of Oral Pathology, Faculdade de Odontologia, UFRGS, Porto Alegre, Brazil; TelessaudeRS-UFRGS, Federal University of Rio Grande do Sul, Porto Alegre, Brazil; Oral Medicine, Otorhinolaryngology Service, HCPA, Porto Alegre, Brazil

22
Warin K, Suebnukarn S. Deep learning in oral cancer - a systematic review. BMC Oral Health 2024; 24:212. [PMID: 38341571] [PMCID: PMC10859022] [DOI: 10.1186/s12903-024-03993-5]
Abstract
BACKGROUND Oral cancer is a life-threatening malignancy that affects the survival rate and quality of life of patients. The aim of this systematic review was to review deep learning (DL) studies on the diagnosis and prognostic prediction of oral cancer. METHODS This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies published from January 2000 to June 2023. RESULTS Fifty-four studies qualified for inclusion: diagnostic (n = 51) and prognostic prediction (n = 3). Thirteen studies showed a low risk of bias in all domains, and 40 studies showed low risk for concerns regarding applicability. DL models were reported to achieve accuracies of 85.0-100%, F1-scores of 79.31-89.0%, Dice coefficient indices of 76.0-96.3%, and concordance indices of 0.78-0.95 for classification, object detection, segmentation, and prognostic prediction, respectively. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for classification studies. CONCLUSIONS The number of DL studies in oral cancer is increasing, and they employ diverse architectures. The reported accuracy shows promising DL performance in oral cancer studies, with potential utility for improving informed clinical decision-making in oral cancer.
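The pooled diagnostic odds ratio quoted in this abstract is a standard meta-analytic summary. As a hedged illustration of the underlying arithmetic (the counts below are invented, not drawn from the reviewed studies, and the study-weighting step used in actual pooling is omitted), the DOR and its 95% confidence interval can be computed from a single 2x2 confusion matrix:

```python
import math

# Illustrative sketch only: diagnostic odds ratio (DOR) from one 2x2 confusion
# matrix, with a Wald-type 95% CI computed on the log scale.

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP * TN) / (FP * FN)."""
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, (lo, hi)

# Hypothetical counts: 90 TP, 10 FP, 5 FN, 95 TN
dor, ci = diagnostic_odds_ratio(90, 10, 5, 95)  # dor = 171.0
```

A DOR far above 1, as reported in this review, indicates strong discrimination between diseased and non-diseased cases; the width of the CI reflects how small the cell counts are.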
Affiliation(s)
- Kritsasith Warin: Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand

23
Rokhshad R, Mohammad-Rahimi H, Price JB, Shoorgashti R, Abbasiparashkouh Z, Esmaeili M, Sarfaraz B, Rokhshad A, Motamedian SR, Soltani P, Schwendicke F. Artificial intelligence for classification and detection of oral mucosa lesions on photographs: a systematic review and meta-analysis. Clin Oral Investig 2024; 28:88. [PMID: 38217733] [DOI: 10.1007/s00784-023-05475-4]
Abstract
OBJECTIVE This study aimed to review and synthesize studies using artificial intelligence (AI) for classifying, detecting, or segmenting oral mucosal lesions on photographs. MATERIALS AND METHODS Inclusion criteria were (1) studies employing AI to (2) classify, detect, or segment oral mucosal lesions, (3) on oral photographs of human subjects. Included studies were assessed for risk of bias using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. A search of PubMed, Scopus, Embase, Web of Science, IEEE, arXiv, medRxiv, and grey literature (Google Scholar) was conducted until June 2023, without language limitation. RESULTS After initial searching, 36 eligible studies (from 8734 identified records) were included. Based on QUADAS-2, only 7% of studies were at low risk of bias for all domains. Studies employed different AI models and reported a wide range of outcomes and metrics. The accuracy of AI for detecting oral mucosal lesions ranged from 74 to 100%, while that of clinicians unaided by AI ranged from 61 to 98%. The pooled diagnostic odds ratio for studies evaluating AI for diagnosing or discriminating potentially malignant lesions was 155 (95% confidence interval 23-1019), while that for cancerous lesions was 114 (59-221). CONCLUSIONS AI may assist in oral mucosal lesion screening, although the expected accuracy gains and further health benefits remain unclear. CLINICAL RELEVANCE AI can assist oral mucosal lesion screening and may foster more targeted testing and referral in the hands of non-specialist providers, for example. So far, it remains unclear whether accuracy gains over specialized providers can be realized.
Affiliation(s)
- Rata Rokhshad: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- Hossein Mohammad-Rahimi: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany; School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Jeffery B Price: Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W Baltimore St, Baltimore, MD, 21201, USA
- Reyhaneh Shoorgashti: Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Mahdieh Esmaeili: Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Bita Sarfaraz: School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Arad Rokhshad: Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Saeed Reza Motamedian: Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Parisa Soltani: Department of Oral and Maxillofacial Radiology, Dental Implants Research Center, Dental Research Institute, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran; Department of Neurosciences, Reproductive and Odontostomatological Sciences, University of Naples Federico II, Naples, Italy
- Falk Schwendicke: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany; Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Berlin, Germany

24
Rahim A, Khatoon R, Khan TA, Syed K, Khan I, Khalid T, Khalid B. Artificial intelligence-powered dentistry: Probing the potential, challenges, and ethicality of artificial intelligence in dentistry. Digit Health 2024; 10:20552076241291345. [PMID: 39539720] [PMCID: PMC11558748] [DOI: 10.1177/20552076241291345]
Abstract
Introduction Improvement in healthcare closely tracks technological advancement. In the recent era of automation, the consolidation of artificial intelligence (AI) in dentistry has transformed oral healthcare from a hardware-centric to a software-centric approach, leading to enhanced efficiency and improved educational and clinical outcomes. Objectives The aim of this narrative overview is to summarize the major events and innovations that led to modern-day AI and dentistry, and the applicability of the former in dentistry. This article also prompts oral healthcare workers to pursue a responsible and optimal approach to the effective incorporation of AI technology into their practice, by exploring the potential, constraints, and ethical considerations of AI in dentistry. Methods A comprehensive search of the white and grey literature was carried out to collect and assess data on AI, its use in dentistry, and the associated challenges and ethical concerns. Results AI in dentistry is still in an evolving phase, with important applications in risk prediction, diagnosis, decision-making, prognosis, tailored treatment plans, patient management, and academia, alongside challenges and ethical concerns in its implementation. Conclusion The upsurge of advancements in AI has resulted in transformations and promising outcomes across all domains of dentistry. In the future, AI may be capable of executing a multitude of tasks in oral healthcare at, or surpassing, the level of human ability. However, AI can significantly benefit oral health only if it is utilized responsibly, ethically, and universally.
Affiliation(s)
- Abid Rahim: Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Rabia Khatoon: Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Tahir Ali Khan: Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Kawish Syed: Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Ibrahim Khan: Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Tamsal Khalid: Sardar Begum Dental College, Gandhara University, Peshawar, Pakistan
- Balaj Khalid: Syed Babar Ali School of Science and Engineering, Lahore University of Management Sciences, Lahore, Pakistan

25
Rochefort J, Radoi L, Campana F, Fricain JC, Lescaille G. [Oral cavity cancer: A distinct entity]. Med Sci (Paris) 2024; 40:57-63. [PMID: 38299904] [DOI: 10.1051/medsci/2023196]
Abstract
Oral squamous cell carcinoma represents the 17th most frequent cancer in the world. The main risk factors are alcohol and tobacco consumption, but dietary, familial, genetic, and oral diseases may be involved in oral carcinogenesis. Diagnosis is made on biopsy, but detection remains late, leading to a poor prognosis. New technologies could reduce these delays, notably artificial intelligence and the quantitative evaluation of salivary biological markers. Currently, the management of oral cancer consists of surgery, which can be mutilating despite possible reconstructions. In the future, immunotherapies could become a therapeutic alternative, and the immune microenvironment could constitute a source of prognostic markers.
Affiliation(s)
- Juliette Rochefort: Assistance Publique-Hôpitaux de Paris (AP-HP), Groupe hospitalier Pitié-Salpêtrière, Service de médecine bucco-dentaire, Paris, France; Faculté d'odontologie, université Paris Cité, Paris, France; Sorbonne université, Inserm U.1135, Centre d'immunologie et des maladies infectieuses, CIMI-Paris, Paris, France
- Lorédana Radoi: Faculté d'odontologie, université Paris Cité, Paris, France; Centre de recherche en épidémiologie et santé des populations, Inserm U1018, université Paris Saclay
- Fabrice Campana: Aix Marseille Univ, Assistance Publique-Hôpitaux de Marseille (AP-HM), Timone Hospital, Oral Surgery Department, Marseille, France
- Jean-Christophe Fricain: CHU Bordeaux, Dentistry and Oral Health Department, F-33404 Bordeaux, France; Inserm U1026, université de Bordeaux, Tissue Bioengineering (BioTis), F-33076 Bordeaux, France
- Géraldine Lescaille: Assistance Publique-Hôpitaux de Paris (AP-HP), Groupe hospitalier Pitié-Salpêtrière, Service de médecine bucco-dentaire, Paris, France; Faculté d'odontologie, université Paris Cité, Paris, France; Sorbonne université, Inserm U.1135, Centre d'immunologie et des maladies infectieuses, CIMI-Paris, Paris, France

26
Zhou M, Jie W, Tang F, Zhang S, Mao Q, Liu C, Hao Y. Deep learning algorithms for classification and detection of recurrent aphthous ulcerations using oral clinical photographic images. J Dent Sci 2024; 19:254-260. [PMID: 38303872] [PMCID: PMC10829559] [DOI: 10.1016/j.jds.2023.04.022]
Abstract
Background/purpose Artificial intelligence diagnosis based on deep learning has been widely accepted in the medical field. We aimed to evaluate convolutional neural networks (CNNs) for the automated classification and detection of recurrent aphthous ulcerations (RAU), normal oral mucosa, and other common oral mucosal diseases in clinical oral photographs. Materials and methods The study included 785 clinical oral photographs, divided into 251 images of RAU, 271 images of normal oral mucosa, and 263 images of other common oral mucosal diseases. Four and three CNN models were used for the classification and detection tasks, respectively. A total of 628 images were randomly selected as training data, and 78 and 79 images were assigned as validation and test data, respectively. Main outcome measures included precision, recall, F1-score, specificity, sensitivity, and area under the receiver operating characteristic curve (AUC). Results In the classification task, the pretrained ResNet50 model had the best performance, with a precision of 92.86%, a recall of 91.84%, an F1-score of 92.24%, a specificity of 96.41%, a sensitivity of 91.84%, and an AUC of 98.95%. In the detection task, the pretrained YOLOv5 model had the best performance, with a precision of 98.70%, a recall of 79.51%, an F1-score of 88.07%, and an area under the precision-recall curve of 90.89%. Conclusion The pretrained ResNet50 and YOLOv5 algorithms showed superior performance and acceptable potential in the classification and detection of RAU lesions from non-invasive oral images, which may prove useful in clinical practice.
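The metrics reported throughout these studies (precision, recall/sensitivity, specificity, F1-score) all derive from the same confusion-matrix counts. As a minimal reference sketch (the counts below are invented, not taken from the study above):

```python
# Illustrative arithmetic: binary classification metrics from raw
# confusion-matrix counts (tp/fp/fn/tn are hypothetical).

def classification_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "accuracy": accuracy}

# e.g. 45 TP, 5 FP, 4 FN, 46 TN
metrics = classification_metrics(45, 5, 4, 46)  # precision 0.9, accuracy 0.91
```

Note that F1 is the harmonic mean of precision and recall, which is why papers that report all three values can be sanity-checked for internal consistency.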
Affiliation(s)
- Mimi Zhou, Weiping Jie, Fan Tang, Shangjun Zhang, Qinghua Mao, Chuanxia Liu, Yilong Hao: Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China

27
Nagarajan B, Chakravarthy S, Venkatesan VK, Ramakrishna MT, Khan SB, Basheer S, Albalawi E. A Deep Learning Framework with an Intermediate Layer Using the Swarm Intelligence Optimizer for Diagnosing Oral Squamous Cell Carcinoma. Diagnostics (Basel) 2023; 13:3461. [PMID: 37998597] [PMCID: PMC10670914] [DOI: 10.3390/diagnostics13223461]
Abstract
One of the most prevalent cancers is oral squamous cell carcinoma, and preventing mortality from this disease primarily depends on early detection. Clinicians will greatly benefit from automated diagnostic techniques that analyze a patient's histopathology images to identify abnormal oral lesions. A deep learning framework was designed with an intermediate layer between feature extraction layers and classification layers for classifying the histopathological images into two categories, namely, normal and oral squamous cell carcinoma. The intermediate layer is constructed using the proposed swarm intelligence technique called the Modified Gorilla Troops Optimizer. While there are many optimization algorithms used in the literature for feature selection, weight updating, and optimal parameter identification in deep learning models, this work focuses on using optimization algorithms as an intermediate layer to convert extracted features into features that are better suited for classification. Three datasets comprising 2784 normal and 3632 oral squamous cell carcinoma subjects are considered in this work. Three popular CNN architectures, namely, InceptionV2, MobileNetV3, and EfficientNetB3, are investigated as feature extraction layers. Two fully connected Neural Network layers, batch normalization, and dropout are used as classification layers. With the best accuracy of 0.89 among the examined feature extraction models, MobileNetV3 exhibits good performance. This accuracy is increased to 0.95 when the suggested Modified Gorilla Troops Optimizer is used as an intermediary layer.
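The paper's Modified Gorilla Troops Optimizer is not detailed in this abstract, so the sketch below uses a generic population-based search as a stand-in. It only illustrates the "intermediate layer" idea the abstract describes: an optimizer re-weights extracted features before they reach the classification layers, using a fitness surrogate for downstream accuracy (here, a toy class-separation score; all names and data are hypothetical).

```python
import random

def fitness(weights, feats, labels):
    """Toy surrogate for downstream accuracy: the distance between class-mean
    vectors after re-weighting (larger = better-separated features)."""
    def mean_vec(cls):
        rows = [f for f, y in zip(feats, labels) if y == cls]
        return [sum(col) / len(rows) for col in zip(*rows)]
    m0, m1 = mean_vec(0), mean_vec(1)
    return sum(((a - b) * w) ** 2 for a, b, w in zip(m0, m1, weights)) ** 0.5

def optimize_weights(feats, labels, n_agents=10, iters=30, seed=1):
    """Generic population-based search (stand-in for the paper's optimizer)."""
    rng = random.Random(seed)
    dim = len(feats[0])
    agents = [[rng.uniform(0.0, 2.0) for _ in range(dim)] for _ in range(n_agents)]
    best = max(agents, key=lambda w: fitness(w, feats, labels))
    for _ in range(iters):
        # each agent drifts toward the current best with a small perturbation
        agents = [[wi + 0.3 * (bi - wi) + rng.gauss(0.0, 0.05)
                   for wi, bi in zip(w, best)] for w in agents]
        cand = max(agents, key=lambda w: fitness(w, feats, labels))
        if fitness(cand, feats, labels) > fitness(best, feats, labels):
            best = list(cand)
    return best
```

In the paper, such re-weighted features would feed the fully connected classification layers; here the design choice illustrated is simply that the optimizer searches feature-transform parameters rather than network weights.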
Affiliation(s)
- Bharanidharan Nagarajan: School of Computer Science Engineering and Information Systems (SCORE), Vellore Institute of Technology, Vellore 632014, India
- Sannasi Chakravarthy: Department of ECE, Bannari Amman Institute of Technology, Sathyamangalam 638401, India
- Vinoth Kumar Venkatesan: School of Computer Science Engineering and Information Systems (SCORE), Vellore Institute of Technology, Vellore 632014, India
- Mahesh Thyluru Ramakrishna: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-Be University), Bangalore 562112, India
- Surbhi Bhatia Khan: Department of Data Science, School of Science Engineering and Environment, University of Salford, Manchester M5 4WT, UK; Department of Engineering and Environment, University of Religions and Denominations, Qom 13357, Iran; Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
- Shakila Basheer: Department of Information Systems, College of Computer and Information Science, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Eid Albalawi: Department of Computer Science, School of Computer Science and Information Technology, King Faisal University, Al-Ahsa 31982, Saudi Arabia

28
Islam MM, Alam KMR, Uddin J, Ashraf I, Samad MA. Benign and Malignant Oral Lesion Image Classification Using Fine-Tuned Transfer Learning Techniques. Diagnostics (Basel) 2023; 13:3360. [PMID: 37958257] [PMCID: PMC10650377] [DOI: 10.3390/diagnostics13213360]
Abstract
Oral lesions are a prevalent manifestation of oral disease, and their timely identification is imperative for effective intervention. Fortunately, deep learning algorithms have shown great potential for automated lesion detection. The primary aim of this study was to employ deep learning-based image classification algorithms to identify oral lesions. We used three deep learning models, namely VGG19, DeIT, and MobileNet, to assess the efficacy of various categorization methods. To evaluate the accuracy and reliability of the models, we employed a dataset of oral images encompassing two distinct categories: benign and malignant lesions. The experimental findings indicate that VGG19 and MobileNet attained an accuracy of 100%, while DeIT achieved a slightly lower accuracy of 98.73%. These results demonstrate that deep learning image classification algorithms are highly effective in detecting oral lesions, with the VGG19 and MobileNet models being notably well suited to the task.
Affiliation(s)
- Md. Monirul Islam: Department of Software Engineering, Daffodil International University, Daffodil Smart City (DSC), Birulia, Savar, Dhaka 1216, Bangladesh
- K. M. Rafiqul Alam: Department of Statistics, Jahangirnagar University, Dhaka 1342, Bangladesh
- Jia Uddin: AI and Big Data Department, Endicott College, Woosong University, Daejeon 34606, Republic of Korea
- Imran Ashraf: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea
- Md Abdus Samad: Department of Information and Communication Engineering, Yeungnam University, Gyeongsan-si 38541, Republic of Korea

29
Achararit P, Manaspon C, Jongwannasiri C, Phattarataratip E, Osathanon T, Sappayatosok K. Artificial Intelligence-Based Diagnosis of Oral Lichen Planus Using Deep Convolutional Neural Networks. Eur J Dent 2023; 17:1275-1282. [PMID: 36669652] [PMCID: PMC10756816] [DOI: 10.1055/s-0042-1760300]
Abstract
OBJECTIVE The aim of this study was to employ artificial intelligence (AI) via a convolutional neural network (CNN) to separate oral lichen planus (OLP) from non-OLP in biopsy-proven clinical cases. MATERIALS AND METHODS Data comprised clinical photographs of 609 OLP and 480 non-OLP lesions whose diagnoses had been confirmed histopathologically. Fifty-five photographs from the OLP and non-OLP groups were randomly selected for use as the test dataset, while the remainder were used as training and validation datasets. Data augmentation was performed on the training dataset to increase the number and variation of photographs. Performance metrics for the CNN models included accuracy, positive predictive value, negative predictive value, sensitivity, specificity, and F1-score. Gradient-weighted class activation mapping was also used to visualize the important regions associated with the discriminative clinical features on which the model relies. RESULTS All the selected CNN models were able to diagnose OLP and non-OLP lesions from photographs. The performance of the Xception model was significantly higher than that of the other models in terms of overall accuracy and F1-score. CONCLUSIONS Our demonstration shows that CNN models can achieve an accuracy of 82 to 88%, with the Xception model performing best in terms of both accuracy and F1-score.
Affiliation(s)
- Paniti Achararit: Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Chawan Manaspon: Biomedical Engineering Institute, Chiang Mai University, Chiang Mai, Thailand
- Chavin Jongwannasiri: Princess Srisavangavadhana College of Medicine, Chulabhorn Royal Academy, Bangkok, Thailand
- Ekarat Phattarataratip: Department of Oral Pathology, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
- Thanaphum Osathanon: Dental Stem Cell Biology Research Unit, Department of Anatomy, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand

30
Talwar V, Singh P, Mukhia N, Shetty A, Birur P, Desai KM, Sunkavalli C, Varma KS, Sethuraman R, Jawahar CV, Vinod PK. AI-Assisted Screening of Oral Potentially Malignant Disorders Using Smartphone-Based Photographic Images. Cancers (Basel) 2023; 15:4120. [PMID: 37627148] [PMCID: PMC10452422] [DOI: 10.3390/cancers15164120]
Abstract
The prevalence of oral potentially malignant disorders (OPMDs) and oral cancer is surging in low- and middle-income countries. A lack of resources for population screening in remote locations delays the detection of these lesions in the early stages and contributes to higher mortality and a poor quality of life. Digital imaging and artificial intelligence (AI) are promising tools for cancer screening. This study aimed to evaluate the utility of AI-based techniques for detecting OPMDs in the Indian population using photographic images of oral cavities captured using a smartphone. A dataset comprising 1120 suspicious and 1058 non-suspicious oral cavity photographic images taken by trained front-line healthcare workers (FHWs) was used for evaluating the performance of different deep learning models based on convolution (DenseNets) and Transformer (Swin) architectures. The best-performing model was also tested on an additional independent test set comprising 440 photographic images taken by untrained FHWs (set I). DenseNet201 and Swin Transformer (base) models show high classification performance with an F1-score of 0.84 (CI 0.79-0.89) and 0.83 (CI 0.78-0.88) on the internal test set, respectively. However, the performance of models decreases on test set I, which has considerable variation in the image quality, with the best F1-score of 0.73 (CI 0.67-0.78) obtained using DenseNet201. The proposed AI model has the potential to identify suspicious and non-suspicious oral lesions using photographic images. This simplified image-based AI solution can assist in screening, early detection, and prompt referral for OPMDs.
Affiliation(s)
- Vivek Talwar: CVIT, International Institute of Information Technology, Hyderabad 500032, India
- Pragya Singh: INAI, International Institute of Information Technology, Hyderabad 500032, India
- Nirza Mukhia: Department of Oral Medicine and Radiology, KLE Society's Institute of Dental Sciences, Bengaluru 560022, India
- Praveen Birur: Department of Oral Medicine and Radiology, KLE Society's Institute of Dental Sciences, Bengaluru 560022, India
- Karishma M. Desai: iHUB-Data, International Institute of Information Technology, Hyderabad 500032, India
- Konala S. Varma: INAI, International Institute of Information Technology, Hyderabad 500032, India; Intel Technology India Private Limited, Bengaluru, India
- C. V. Jawahar: CVIT, International Institute of Information Technology, Hyderabad 500032, India
- P. K. Vinod: CCNSB, International Institute of Information Technology, Hyderabad 500032, India

31
Song HJ, Park YJ, Jeong HY, Kim BG, Kim JH, Im YG. Detection of Abnormal Changes on the Dorsal Tongue Surface Using Deep Learning. Medicina (Kaunas) 2023; 59:1293. [PMID: 37512104] [PMCID: PMC10385577] [DOI: 10.3390/medicina59071293]
Abstract
Background and Objectives: The tongue mucosa often changes due to various local and systemic diseases or conditions. This study aimed to investigate whether deep learning can help detect abnormal regions on the dorsal tongue surface in patients and healthy adults. Materials and Methods: The study collected 175 clinical photographic images of the dorsal tongue surface, which were divided into 7782 cropped images classified into normal, abnormal, and non-tongue regions and used to train the VGG16 deep learning model. Eighty photographic images of the entire dorsal tongue surface were used for the segmentation of abnormal regions using point mapping segmentation. Results: For the VGG16 model's predictions, the F1-scores of the abnormal and normal classes were 0.960 (precision: 0.935, recall: 0.986) and 0.968 (precision: 0.987, recall: 0.950), respectively. In the evaluation using point mapping segmentation, the average F1-scores were 0.727 (precision: 0.717, recall: 0.737) and 0.645 (precision: 0.650, recall: 0.641), the average intersection over union was 0.695 and 0.590, and the average precision was 0.940 and 0.890 for the abnormal and normal classes, respectively. Conclusions: The deep learning algorithm used in this study can accurately identify abnormal areas on the dorsal tongue surface, which can assist in diagnosing specific diseases or conditions of the tongue mucosa.
Affiliation(s)
- Ho-Jun Song
- Department of Dental Materials, Dental Science Research Institute, School of Dentistry, Chonnam National University, Gwangju 61186, Republic of Korea
- Yeong-Joon Park
- Department of Dental Materials, Dental Science Research Institute, School of Dentistry, Chonnam National University, Gwangju 61186, Republic of Korea
- Hie-Yong Jeong
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Republic of Korea
- Byung-Gook Kim
- Department of Oral Medicine, Dental Science Research Institute, School of Dentistry, Chonnam National University, Gwangju 61186, Republic of Korea
- Jae-Hyung Kim
- Department of Oral Medicine, Dental Science Research Institute, School of Dentistry, Chonnam National University, Gwangju 61186, Republic of Korea
- Yeong-Gwan Im
- Department of Oral Medicine, Dental Science Research Institute, School of Dentistry, Chonnam National University, Gwangju 61186, Republic of Korea
32
Gomes RFT, Schuch LF, Martins MD, Honório EF, de Figueiredo RM, Schmith J, Machado GN, Carrard VC. Use of Deep Neural Networks in the Detection and Automated Classification of Lesions Using Clinical Images in Ophthalmology, Dermatology, and Oral Medicine: A Systematic Review. J Digit Imaging 2023; 36:1060-1070. [PMID: 36650299] [PMCID: PMC10287602] [DOI: 10.1007/s10278-023-00775-3]
Abstract
Artificial neural networks (ANN) are artificial intelligence (AI) techniques used in the automated recognition and classification of pathological changes from clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining notoriety for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANNs and deep learning in the automated recognition and classification of lesions from clinical images, compared with human performance. The PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) was used, searching four databases for studies that used AI to define the diagnosis of lesions in the ophthalmology, dermatology, and oral medicine areas. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded 60 included studies. Interest in the topic was found to have increased, especially in the last 3 years. The performance of AI models is promising, with high accuracy, sensitivity, and specificity, and most had outcomes equivalent to those of human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have been progressively improved. AI resources have the potential to contribute to several areas of health. In the coming years, AI is likely to be incorporated into everyday life, contributing to precision and reducing the time required by the diagnostic process.
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil.
- Lauren Frenzel Schuch
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Manoela Domingues Martins
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Rodrigo Marques de Figueiredo
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Jean Schmith
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Giovanna Nunes Machado
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Vinicius Coelho Carrard
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Epidemiology, School of Medicine, TelessaúdeRS-UFRGS, Federal University of Rio Grande Do Sul, Porto Alegre, RS, Brazil
- Department of Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, RS, Brazil
33
Adeoye J, Hui L, Su YX. Data-centric artificial intelligence in oncology: a systematic review assessing data quality in machine learning models for head and neck cancer. J Big Data 2023; 10:28. [DOI: 10.1186/s40537-023-00703-w]
Abstract
Machine learning models have been increasingly considered to model head and neck cancer outcomes for improved screening, diagnosis, treatment, and prognostication of the disease. As the concept of data-centric artificial intelligence is still incipient in healthcare systems, little is known about the data quality of the models proposed for clinical utility. This is important as it supports the generalizability of the models and data standardization. Therefore, this study overviews the quality of structured and unstructured data used for machine learning model construction in head and neck cancer. Relevant studies reporting on the use of machine learning models based on structured and unstructured custom datasets between January 2016 and June 2022 were sourced from the PubMed, EMBASE, Scopus, and Web of Science electronic databases. The Prediction model Risk of Bias Assessment (PROBAST) tool was used to assess the quality of individual studies before comprehensive data quality parameters were assessed according to the type of dataset used for model construction. A total of 159 studies were included in the review; 106 utilized structured datasets while 53 utilized unstructured datasets. Data quality assessments were deliberately performed for 14.2% of structured datasets and 11.3% of unstructured datasets before model construction. Class imbalance and data fairness were the most common limitations in data quality for both types of datasets, while outlier detection and lack of representative outcome classes were common in structured and unstructured datasets, respectively. Furthermore, this review found that class imbalance reduced the discriminatory performance of models based on structured datasets, while higher image resolution and good class overlap resulted in better model performance using unstructured datasets during internal validation.
Overall, data quality was infrequently assessed before the construction of ML models in head and neck cancer irrespective of the use of structured or unstructured datasets. To improve model generalizability, the assessments discussed in this study should be introduced during model construction to achieve data-centric intelligent systems for head and neck cancer management.
34
Interpretable and Reliable Oral Cancer Classifier with Attention Mechanism and Expert Knowledge Embedding via Attention Map. Cancers (Basel) 2023; 15:1421. [PMID: 36900210] [PMCID: PMC10001266] [DOI: 10.3390/cancers15051421]
Abstract
Convolutional neural networks (CNNs) have demonstrated excellent performance in oral cancer detection and classification. However, the end-to-end learning strategy makes CNNs hard to interpret, and it can be challenging to fully understand their decision-making procedure. Additionally, reliability is a significant challenge for CNN-based approaches. In this study, we proposed a neural network called the attention branch network (ABN), which combines visual explanation and attention mechanisms to improve recognition performance and interpret the decision-making simultaneously. We also embedded expert knowledge into the network by having human experts manually edit the attention maps for the attention mechanism. Our experiments showed that the ABN performs better than the original baseline network. Introducing Squeeze-and-Excitation (SE) blocks into the network increased the cross-validation accuracy further. Furthermore, we observed that some previously misclassified cases were correctly recognized after the attention maps were manually edited. The cross-validation accuracy increased from 0.846 to 0.875 with the ABN (ResNet18 as baseline), 0.877 with SE-ABN, and 0.903 after embedding expert knowledge. The proposed method provides an accurate, interpretable, and reliable oral cancer computer-aided diagnosis system through visual explanation, attention mechanisms, and expert knowledge embedding.
35
Thanathornwong B, Suebnukarn S, Ouivirach K. Clinical Decision Support System for Geriatric Dental Treatment Using a Bayesian Network and a Convolutional Neural Network. Healthc Inform Res 2023; 29:23-30. [PMID: 36792098] [PMCID: PMC9932303] [DOI: 10.4258/hir.2023.29.1.23]
Abstract
OBJECTIVES The aim of this study was to evaluate the performance of a clinical decision support system (CDSS) for therapeutic plans in geriatric dentistry. The information that needs to be considered in a therapeutic plan includes not only the patient's oral health status obtained from an oral examination, but also related factors such as underlying diseases, socioeconomic characteristics, and functional dependency. METHODS A Bayesian network (BN) was used as a framework to model contributing factors and their causal relationships based on clinical knowledge and data. The Faster R-CNN (region-based convolutional neural network) algorithm was used to detect oral health status, which formed part of the BN structure. The study used retrospective data from 400 patients receiving geriatric dental care at a university hospital between January 2020 and June 2021. RESULTS The model showed an F1-score of 89.31%, precision of 86.69%, and recall of 82.14% for the detection of periodontally compromised teeth. A receiver operating characteristic curve analysis showed that the BN model was highly accurate in recommending therapeutic plans (area under the curve = 0.902). Model performance was compared with that of experts in geriatric dentistry, and the experts and the system strongly agreed on the recommended therapeutic plans (kappa value = 0.905). CONCLUSIONS This research was the first phase of the development of a CDSS to recommend geriatric dental treatment. The proposed system, when integrated into the clinical workflow, is expected to provide general practitioners with expert-level decision support in geriatric dental care.
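The expert-system agreement reported here (kappa = 0.905) is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with a hypothetical 2x2 expert-vs-system agreement table (the counts and the two plan labels are illustrative inventions, not data from the study):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: rater A, cols: rater B)."""
    total = sum(sum(row) for row in table)
    # Observed agreement: proportion of cases on the diagonal
    p_o = sum(table[i][i] for i in range(len(table))) / total
    # Expected chance agreement from the marginal distributions
    p_e = sum(
        (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
        for i in range(len(table))
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical: expert and CDSS each label 50 plans as "conservative" or "invasive"
table = [[20, 2],
         [3, 25]]
print(round(cohens_kappa(table), 3))  # 0.798
```

A kappa near 0.9, as reported in the abstract, indicates almost perfect agreement on conventional interpretation scales.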
36
Bansal K, Bathla RK, Kumar Y. Deep transfer learning techniques with hybrid optimization in early prediction and diagnosis of different types of oral cancer. Soft Comput 2022. [DOI: 10.1007/s00500-022-07246-x]
37
Warin K, Limprasert W, Suebnukarn S, Jinaporntham S, Jantana P, Vicharueang S. AI-based analysis of oral lesions using novel deep convolutional neural networks for early detection of oral cancer. PLoS One 2022; 17:e0273508. [PMID: 36001628] [PMCID: PMC9401150] [DOI: 10.1371/journal.pone.0273508]
Abstract
Artificial intelligence (AI) applications in oncology have developed rapidly, with reported successes in recent years. This work aims to evaluate the performance of deep convolutional neural network (CNN) algorithms for the classification and detection of oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) in oral photographic images. A dataset comprising 980 oral photographic images was divided into 365 images of OSCC, 315 images of OPMDs and 300 non-pathological images. Multiclass image classification models were created using DenseNet-169, ResNet-101, SqueezeNet and Swin-S. Multiclass object detection models were built using Faster R-CNN, YOLOv5, RetinaNet and CenterNet2. The AUC of the best multiclass image classification model, DenseNet-169, was 1.00 and 0.98 on OSCC and OPMDs, respectively. The AUC of the best multiclass CNN-based object detection model, Faster R-CNN, was 0.88 and 0.64 on OSCC and OPMDs, respectively. Overall, DenseNet-169 yielded the best multiclass image classification performance, with AUCs of 1.00 and 0.98 on OSCC and OPMDs, respectively. These values were in line with the performance of experts and superior to those of general practitioners (GPs). In conclusion, CNN-based models have potential for the identification of OSCC and OPMDs in oral photographic images and are expected to serve as a diagnostic tool to assist GPs in the early detection of oral cancer.
Affiliation(s)
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Khlong Luang, Pathum Thani, Thailand
- Wasit Limprasert
- College of Interdisciplinary Studies, Thammasat University, Khlong Luang, Pathum Thani, Thailand
- Siriwan Suebnukarn
- Faculty of Dentistry, Thammasat University, Khlong Luang, Pathum Thani, Thailand
38
Machine learning in point-of-care automated classification of oral potentially malignant and malignant disorders: a systematic review and meta-analysis. Sci Rep 2022; 12:13797. [PMID: 35963880] [PMCID: PMC9376104] [DOI: 10.1038/s41598-022-17489-1]
Abstract
Machine learning (ML) algorithms are becoming increasingly pervasive in the domains of medical diagnostics and prognostication, afforded by complex deep learning architectures that overcome the limitations of manual feature extraction. In this systematic review and meta-analysis, we provide an update on current progress of ML algorithms in point-of-care (POC) automated diagnostic classification systems for lesions of the oral cavity. Studies reporting performance metrics on ML algorithms used in automatic classification of oral regions of interest were identified and screened by 2 independent reviewers from 4 databases. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. 35 studies were suitable for qualitative synthesis, and 31 for quantitative analysis. Outcomes were assessed using a bivariate random-effects model following an assessment of bias and heterogeneity. 4 distinct methodologies were identified for POC diagnosis: (1) clinical photography; (2) optical imaging; (3) thermal imaging; (4) analysis of volatile organic compounds. Estimated AUROC across all studies was 0.935, and no difference in performance was identified between methodologies. We discuss the various classical and modern approaches to ML employed within identified studies, and highlight issues that will need to be addressed for implementation of automated classification systems in screening and early detection.
39
Elmakaty I, Elmarasi M, Amarah A, Abdo R, Malki MI. Accuracy of artificial intelligence-assisted detection of Oral Squamous Cell Carcinoma: A systematic review and meta-analysis. Crit Rev Oncol Hematol 2022; 178:103777. [PMID: 35931404] [DOI: 10.1016/j.critrevonc.2022.103777]
Abstract
Oral Squamous Cell Carcinoma (OSCC) is an aggressive tumor with a poor prognosis. Accurate and timely diagnosis is therefore essential for reducing the burden of advanced disease and improving outcomes. In this meta-analysis, we evaluated the accuracy of artificial intelligence (AI)-assisted technologies in detecting OSCC. We included studies that validated any diagnostic modality that used AI to detect OSCC. A search was performed in six databases: PubMed, Embase, Scopus, Cochrane Library, ProQuest, and Web of Science up to 15 Mar 2022. The Quality Assessment Tool for Diagnostic Accuracy Studies was used to evaluate the included studies' quality, while the Split Component Synthesis method was utilized to quantitatively synthesize the pooled diagnostic efficacy estimates. We included 16 of the 566 retrieved studies, covering twelve different AI models and a total of 6606 samples. The summary sensitivity, summary specificity, positive and negative likelihood ratios, and pooled diagnostic odds ratio were 92.0% (95% confidence interval [CI] 86.7-95.4%), 91.9% (95% CI 86.5-95.3%), 11.4 (95% CI 6.74-19.2), 0.087 (95% CI 0.051-0.146) and 132 (95% CI 62.6-277), respectively. Our findings support the capability of AI-assisted systems to detect OSCC with high accuracy, potentially aiding histopathological examination in early diagnosis, yet more prospective studies are needed to justify their use in real-world populations.
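The likelihood ratios and diagnostic odds ratio (DOR) quoted in this abstract are simple functions of sensitivity and specificity; a minimal sketch using the summary estimates above (note the pooled DOR of 132 reported in the abstract differs slightly from this point recomputation, since it was synthesized from per-study 2x2 tables rather than from the summary operating point):

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Positive/negative likelihood ratios and diagnostic odds ratio."""
    lr_pos = sensitivity / (1 - specificity)   # how much a positive test raises disease odds
    lr_neg = (1 - sensitivity) / specificity   # how much a negative test lowers disease odds
    dor = lr_pos / lr_neg                      # single summary of discriminative power
    return lr_pos, lr_neg, dor

# Summary estimates from the abstract: sensitivity 92.0%, specificity 91.9%
lr_pos, lr_neg, dor = likelihood_ratios(0.920, 0.919)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}, DOR = {dor:.0f}")
# LR+ and LR- reproduce the reported 11.4 and 0.087; DOR lands near 130
```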
Affiliation(s)
- Ahmed Amarah
- College of Medicine, QU Health, Qatar University, Doha, Qatar.
- Ruba Abdo
- College of Medicine, QU Health, Qatar University, Doha, Qatar.
- Mohammed Imad Malki
- Pathology Unit, Department of Basic Medical Sciences, College of Medicine, QU Health, Qatar University, Doha, Qatar.
40
Hegde S, Ajila V, Zhu W, Zeng C. Review of the Use of Artificial Intelligence in Early Diagnosis and Prevention of Oral Cancer. Asia Pac J Oncol Nurs 2022; 9:100133. [PMID: 36389623] [PMCID: PMC9664349] [DOI: 10.1016/j.apjon.2022.100133]
Abstract
The global occurrence of oral cancer (OC) has increased in recent years. OC diagnosed at advanced stages results in high morbidity and mortality. The use of technology may be beneficial for early detection and diagnosis, thus helping the clinician with better patient management. The advent of artificial intelligence (AI) has the potential to improve OC screening. AI can precisely analyze enormous datasets from various imaging modalities and provide assistance in the field of oncology. This review focused on the applications of AI in the early diagnosis and prevention of OC. A literature search was conducted in the PubMed and Scopus databases using the search terms “oral cancer” and “artificial intelligence.” Further information on the topic was collected by scrutinizing the reference lists of selected articles. Based on the information obtained, this article reviews and discusses the applications and advantages of AI in OC screening, early diagnosis, disease prediction, treatment planning, and prognosis. Limitations and the future scope of AI in OC research are also highlighted.
41
Kim JS, Kim BG, Hwang SH. Efficacy of Artificial Intelligence-Assisted Discrimination of Oral Cancerous Lesions from Normal Mucosa Based on the Oral Mucosal Image: A Systematic Review and Meta-Analysis. Cancers (Basel) 2022; 14:3499. [PMID: 35884560] [PMCID: PMC9320189] [DOI: 10.3390/cancers14143499]
Abstract
Simple Summary: Early detection of oral cancer is important to increase the survival rate and reduce morbidity. For the past few years, the early detection of oral cancer using artificial intelligence (AI) technology based on autofluorescence imaging, photographic imaging, and optical coherence tomography imaging has been an important research area. In this study, diagnostic values including sensitivity and specificity data were comprehensively confirmed across studies that performed AI analysis of images. The diagnostic sensitivity of AI-assisted screening was 0.92. In subgroup analysis, there was no statistically significant difference in the diagnostic rate according to each imaging tool. AI shows good diagnostic performance with high sensitivity for oral cancer. Image analysis using AI is expected to be used as a clinical tool for early detection and evaluation of treatment efficacy for oral cancer. Abstract: The accuracy of artificial intelligence (AI)-assisted discrimination of oral cancerous lesions from normal mucosa based on mucosal images was evaluated. Two authors independently reviewed the database until June 2022. Oral mucosal disorders, as recorded by photographic images, autofluorescence, and optical coherence tomography (OCT), were compared with reference results from histology findings. True-positive, true-negative, false-positive, and false-negative data were extracted. Seven studies were included for discriminating oral cancerous lesions from normal mucosa. The diagnostic odds ratio (DOR) of AI-assisted screening was 121.66 (95% confidence interval [CI], 29.60-500.05). Twelve studies were included for discriminating all oral precancerous lesions from normal mucosa. The DOR of screening was 63.02 (95% CI, 40.32-98.49). Subgroup analysis showed that OCT was more diagnostically accurate (324.33 vs. 66.81 and 27.63) and more negatively predictive (0.94 vs. 0.93 and 0.84) than photographic images and autofluorescence in screening for all oral precancerous lesions from normal mucosa. Automated detection of oral cancerous lesions by AI would be a rapid, non-invasive diagnostic tool that could provide immediate results in the diagnostic work-up of oral cancer. This method has the potential to be used as a clinical tool for the early diagnosis of pathological lesions.
Collapse
Affiliation(s)
- Ji-Sun Kim
- Department of Otolaryngology-Head and Neck Surgery, Eunpyeong St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Seoul 03312, Korea
- Byung Guk Kim
- Department of Otolaryngology-Head and Neck Surgery, Eunpyeong St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Seoul 03312, Korea
- Se Hwan Hwang
- Department of Otolaryngology-Head and Neck Surgery, Bucheon St. Mary’s Hospital, College of Medicine, Catholic University of Korea, Bucheon 14647, Korea
- Correspondence: Tel.: +82-32-340-7044
42
Al-Rawi N, Sultan A, Rajai B, Shuaeeb H, Alnajjar M, Alketbi M, Mohammad Y, Shetty SR, Mashrah MA. The Effectiveness of Artificial Intelligence in Detection of Oral Cancer. Int Dent J 2022; 72:436-447. [PMID: 35581039] [PMCID: PMC9381387] [DOI: 10.1016/j.identj.2022.03.001]
Abstract
Aim: Detection of oral cancer (OC) at the earliest stage significantly increases survival rates. Recently, there has been an increasing interest in the use of artificial intelligence (AI) technologies in diagnostic medicine. This study aimed to critically analyse the available evidence concerning the utility of AI in the diagnosis of OC. Special consideration was given to the diagnostic accuracy of AI and its ability to identify the early stages of OC. Materials and methods: From the date of inception to December 2021, 4 databases (PubMed, Scopus, EBSCO, and OVID) were searched. Three independent authors selected studies on the basis of strict inclusion criteria. The risk of bias and applicability were assessed using the prediction model risk of bias assessment tool. Of the 606 initial records, 17 studies with a total of 7245 patients and 69,425 images were included. Ten statistical methods were used to assess AI performance in the included studies. Six studies used supervised machine learning, whilst 11 used deep learning. Results: Deep learning achieved an accuracy of 81% to 99.7%, sensitivity of 79% to 98.75%, specificity of 82% to 100%, and area under the curve (AUC) of 79% to 99.5%. Supervised machine learning demonstrated an accuracy ranging from 43.5% to 100%, sensitivity of 94% to 100%, specificity of 16% to 100%, and AUC of 93%. Conclusions: There is no clear consensus regarding the best AI method for OC detection. AI is a valuable diagnostic tool that represents a large evolutionary leap in the detection of OC in its early stages. Based on the evidence, deep learning, such as a deep convolutional neural network, is more accurate in the early detection of OC compared to supervised machine learning.
Affiliation(s)
- Natheer Al-Rawi
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Afrah Sultan
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Batool Rajai
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Haneen Shuaeeb
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Mariam Alnajjar
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Maryam Alketbi
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Yara Mohammad
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates
- Shishir Ram Shetty
- Department of Oral and Craniofacial Health Sciences, College of Dental Medicine, University of Sharjah, United Arab Emirates.
43
Sengupta N, Sarode SC, Sarode GS, Ghone U. Scarcity of publicly available oral cancer image datasets for machine learning research. Oral Oncol 2022; 126:105737. [PMID: 35114612] [DOI: 10.1016/j.oraloncology.2022.105737]
Abstract
Publicly available image datasets of pathologies are easily accessible and thus are increasingly being used in the fields of machine learning and medical diagnosis. As oral cancer is the most common head and neck cancer, with an increasing incidence rate, it is of paramount importance to know the status of publicly available datasets. We designed a systematic search (PubMed, Google Scholar, Google Dataset Search, and Google) to identify publicly available oral cancer image datasets. After carefully screening 332 articles/datasets, only one met the selection criteria and was available publicly. However, it contained images of cancerous lesions of only the lips and tongue. This first-of-its-kind analysis revealed a dire need for publicly available datasets in oral cancer. Filling this gap will help researchers develop effective machine learning algorithms for oral cancer.
Affiliation(s)
- Namrata Sengupta
- Department of Oral Pathology and Microbiology, Dr. D.Y. Patil Dental College and Hospital, Dr. D.Y. Patil Vidyapeeth, Sant-Tukaram Nagar, Pimpri, Pune: 411018, MH, India
- Sachin C Sarode
- Department of Oral Pathology and Microbiology, Dr. D.Y. Patil Dental College and Hospital, Dr. D.Y. Patil Vidyapeeth, Sant-Tukaram Nagar, Pimpri, Pune: 411018, MH, India.
- Gargi S Sarode
- Department of Oral Pathology and Microbiology, Dr. D.Y. Patil Dental College and Hospital, Dr. D.Y. Patil Vidyapeeth, Sant-Tukaram Nagar, Pimpri, Pune: 411018, MH, India
- Urmi Ghone
- Department of Oral Pathology and Microbiology, Dr. D.Y. Patil Dental College and Hospital, Dr. D.Y. Patil Vidyapeeth, Sant-Tukaram Nagar, Pimpri, Pune: 411018, MH, India
44
Nath S, Raveendran R, Perumbure S. Artificial Intelligence and Its Application in the Early Detection of Oral Cancers. Clin Cancer Investig J 2022. [DOI: 10.51847/h7wa0uhoif]
45
Yan KX, Liu L, Li H. Application of machine learning in oral and maxillofacial surgery. Artif Intell Med Imaging 2021; 2:104-114. [DOI: 10.35711/aimi.v2.i6.104]
Abstract
Oral and maxillofacial anatomy is extremely complex, and medical imaging is critical in the diagnosis and treatment of soft and bone tissue lesions. Hence, imaging data have accumulated over the last decades without being properly utilized. As a result, problems are emerging regarding how to integrate and interpret a large amount of medical data and alleviate clinicians’ workload. Recently, artificial intelligence has been developing rapidly to analyze complex medical data, and machine learning, which is based on a set of algorithms and previous results, is one specific method of achieving this goal. Machine learning has been considered useful in assisting early diagnosis, treatment planning, and prognostic estimation by extracting key features and building mathematical models by computers. Over the past decade, machine learning techniques have been applied to the field of oral and maxillofacial surgery and have increasingly achieved expert-level performance. Thus, we hold a positive attitude towards developing machine learning for reducing the number of medical errors, improving the quality of patient care, and optimizing clinical decision-making in oral and maxillofacial surgery. In this review, we explore the clinical application of machine learning in maxillofacial cysts and tumors, maxillofacial defect reconstruction, orthognathic surgery, and dental implants, and discuss current problems and solutions.
Affiliation(s)
- Kai-Xin Yan
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, Sichuan Province, China
- Lei Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, Sichuan Province, China
- Hui Li
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, Sichuan Province, China