1
Silva AB, Martins AS, Tosta TAA, Loyola AM, Cardoso SV, Neves LA, de Faria PR, do Nascimento MZ. OralEpitheliumDB: A Dataset for Oral Epithelial Dysplasia Image Segmentation and Classification. Journal of Imaging Informatics in Medicine 2024; 37:1691-1710. [PMID: 38409608] [DOI: 10.1007/s10278-024-01041-w]
Abstract
Early diagnosis of potentially malignant disorders, such as oral epithelial dysplasia, is the most reliable way to prevent oral cancer. Computational algorithms have been used as auxiliary tools to aid specialists in this process. However, experiments are usually performed on private data, making the results difficult to reproduce. Several public datasets of histological images exist, but studies focused on oral dysplasia images use inaccessible datasets, which hinders the improvement of algorithms aimed at this lesion. This study introduces an annotated public dataset of oral epithelial dysplasia tissue images. The dataset includes 456 images acquired from 30 mouse tongues. The images were categorized by lesion grade, with nuclear structures manually marked by a trained specialist and validated by a pathologist. Experiments were also carried out to illustrate the potential of the proposed dataset in the classification and segmentation processes commonly explored in the literature. Convolutional neural network (CNN) models for semantic and instance segmentation were applied to the images, which were pre-processed with stain normalization methods. The segmented and non-segmented images were then classified with CNN architectures and machine learning algorithms. The data obtained through these processes are available in the dataset. The segmentation stage achieved an F1-score of 0.83, obtained with the U-Net model using ResNet-50 as the backbone. At the classification stage, the best result was achieved with the Random Forest method, with an accuracy of 94.22%. The results show that segmentation contributed to the classification results, but further studies are needed to improve these stages of automated diagnosis. The original, gold-standard, normalized, and segmented images are publicly available and may be used to improve clinical applications of CAD methods on oral epithelial dysplasia tissue images.
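The pixel-wise F1-score reported for the segmentation stage can be computed from a predicted mask and a ground-truth mask; a minimal numpy sketch (the toy masks and counts below are illustrative, not from the dataset):

```python
import numpy as np

def segmentation_f1(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixel-wise F1-score (equivalent to the Dice coefficient) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # pixels marked nucleus in both masks
    fp = np.logical_and(pred, ~truth).sum()  # predicted nucleus, actually background
    fn = np.logical_and(~pred, truth).sum()  # nucleus pixels the model missed
    return 2 * tp / (2 * tp + fp + fn)

# Toy 4x4 masks: prediction overlaps ground truth on 3 of 4 nucleus pixels
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
f1 = segmentation_f1(pred, truth)  # 2*3 / (2*3 + 0 + 1) = 6/7
```

For binary masks this F1 is identical to the Dice index, which is why segmentation papers often report the two interchangeably.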
Affiliation(s)
- Adriano Barbosa Silva
- Faculty of Computer Science (FACOM) - Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil.
- Alessandro Santana Martins
- Federal Institute of Triângulo Mineiro (IFTM), R. Belarmino Vilela Junqueira, S/N, 38305-200, Ituiutaba, MG, Brazil
- Thaína Aparecida Azevedo Tosta
- Science and Technology Institute, Federal University of São Paulo (UNIFESP), Av. Cesare Mansueto Giulio Lattes, 1201, 12247-014, São José dos Campos, SP, Brazil
- Adriano Mota Loyola
- School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará - 1720, 38405-320, Uberlândia, MG, Brazil
- Sérgio Vitorino Cardoso
- School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará - 1720, 38405-320, Uberlândia, MG, Brazil
- Leandro Alves Neves
- Department of Computer Science and Statistics (DCCE), São Paulo State University (UNESP), R. Cristóvão Colombo, 2265, 38305-200, São José do Rio Preto, SP, Brazil
- Paulo Rogério de Faria
- Department of Histology and Morphology, Institute of Biomedical Science, Federal University of Uberlândia (UFU), Av. Amazonas, S/N, 38405-320, Uberlândia, MG, Brazil
- Marcelo Zanchetta do Nascimento
- Faculty of Computer Science (FACOM) - Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil
2
Alajaji SA, Khoury ZH, Jessri M, Sciubba JJ, Sultan AS. An Update on the Use of Artificial Intelligence in Digital Pathology for Oral Epithelial Dysplasia Research. Head Neck Pathol 2024; 18:38. [PMID: 38727841] [PMCID: PMC11087425] [DOI: 10.1007/s12105-024-01643-4]
Abstract
INTRODUCTION: Oral epithelial dysplasia (OED) is a precancerous histopathological finding and the most important prognostic indicator for determining the risk of malignant transformation into oral squamous cell carcinoma (OSCC). The gold standard for diagnosis and grading of OED is histopathological examination, which is subject to inter- and intra-observer variability, impacting accurate diagnosis and prognosis. The aim of this review article is to examine current advances in digital pathology for artificial intelligence (AI) applications used in OED diagnosis. MATERIALS AND METHODS: We included studies that used AI for diagnosis, grading, or prognosis of OED on histopathology images or intraoral clinical images. Studies utilizing imaging modalities other than routine light microscopy (e.g., scanning electron microscopy), immunohistochemistry-stained histology slides, or immunofluorescence were excluded. Studies not focusing on oral dysplasia grading and diagnosis (e.g., those discriminating OSCC from normal epithelial tissue) were also excluded. RESULTS: A total of 24 studies were included in this review. Nineteen studies utilized deep learning (DL) convolutional neural networks for histopathological OED analysis, and four used machine learning (ML) models. Studies were summarized by AI method, main study outcomes, predictive value for malignant transformation, strengths, and limitations. CONCLUSION: ML/DL studies for OED grading and prediction of malignant transformation are emerging as promising adjunctive tools in the field of digital pathology. These objective adjunctive tools can ultimately aid the pathologist in more accurate diagnosis and prognosis prediction. However, further supportive studies that focus on generalization, explainable decisions, and prognosis prediction are needed.
Affiliation(s)
- Shahd A Alajaji
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W. Baltimore Street, 7th Floor, Baltimore, MD, 21201, USA
- Department of Oral Medicine and Diagnostic Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
- Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA
- Zaid H Khoury
- Department of Oral Diagnostic Sciences and Research, Meharry Medical College School of Dentistry, Nashville, TN, USA
- Maryam Jessri
- Oral Medicine and Pathology Department, School of Dentistry, University of Queensland, Herston, QLD, Australia
- Oral Medicine Department, Metro North Hospital and Health Services, Queensland Health, Brisbane, QLD, Australia
- James J Sciubba
- Department of Otolaryngology, Head & Neck Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Ahmed S Sultan
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W. Baltimore Street, 7th Floor, Baltimore, MD, 21201, USA
- Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA
- University of Maryland Marlene and Stewart Greenebaum Comprehensive Cancer Center, Baltimore, MD, USA
3
Chudobiński C, Świderski B, Antoniuk I, Kurek J. Enhancements in Radiological Detection of Metastatic Lymph Nodes Utilizing AI-Assisted Ultrasound Imaging Data and the Lymph Node Reporting and Data System Scale. Cancers (Basel) 2024; 16:1564. [PMID: 38672646] [PMCID: PMC11048706] [DOI: 10.3390/cancers16081564]
Abstract
The paper presents a novel approach for the automatic detection of neoplastic lesions in lymph nodes (LNs). It leverages recent advances in machine learning (ML) together with the LN Reporting and Data System (LN-RADS) scale. By integrating diverse datasets and network structures, the research investigates the effectiveness of ML algorithms in improving diagnostic accuracy and automation potential. Both Multinomial Logistic Regression (MLR)-integrated and fully connected neural layers are included in the analysis. The methods were trained using three combinations of histopathological data and LN-RADS scale labels to assess their utility. The findings demonstrate that the LN-RADS scale improves prediction accuracy. MLR integration achieves higher accuracy, while the fully connected approach excels in AUC performance. Together, these results suggest the possibility of significant improvement in the early detection and prognosis of cancer using AI techniques. The study underlines the importance of further exploration of combined datasets and network architectures, which could lead to even greater improvements in the diagnostic process.
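At prediction time, the multinomial logistic regression component reduces to a softmax over per-class logits. A numpy sketch, where the three input features and the weight matrix are purely hypothetical stand-ins (not the paper's trained parameters):

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def mlr_predict(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Class probabilities for one lymph node's feature vector.
    weights: (n_classes, n_features), bias: (n_classes,)."""
    return softmax(weights @ features + bias)

# Hypothetical 3-feature input (e.g. size, hilum score, vascularity score)
x = np.array([0.8, 0.2, 0.5])
W = np.array([[ 1.0, -0.5, 0.2],   # class 0: benign
              [-0.3,  0.8, 0.1],   # class 1: indeterminate
              [ 0.4,  0.1, 1.2]])  # class 2: suspicious/malignant
b = np.zeros(3)
probs = mlr_predict(x, W, b)       # a valid probability distribution over 3 classes
```

The "MLR-integrated" variant in the paper wires such a layer into a larger network; the sketch only shows the probabilistic output step.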
Affiliation(s)
- Cezary Chudobiński
- Copernicus Regional Multi-Specialty Oncology and Trauma Centre, 93-513 Łódź, Poland
- Bartosz Świderski
- Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, 02-776 Warsaw, Poland
- Izabella Antoniuk
- Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, 02-776 Warsaw, Poland
- Jarosław Kurek
- Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, 02-776 Warsaw, Poland
4
Mhaske S, Ramalingam K, Nair P, Patel S, Menon P A, Malik N, Mhaske S. Automated Analysis of Nuclear Parameters in Oral Exfoliative Cytology Using Machine Learning. Cureus 2024; 16:e58744. [PMID: 38779230] [PMCID: PMC11110917] [DOI: 10.7759/cureus.58744]
Abstract
BACKGROUND: As oral cancer remains a major worldwide health concern, sophisticated diagnostic tools are needed to aid early diagnosis. Non-invasive methods such as exfoliative cytology, particularly when assisted by artificial intelligence (AI), have drawn additional interest. AIM: The study aimed to harness machine learning algorithms for the automated analysis of nuclear parameters in oral exfoliative cytology, and to compare the accuracy of two AI approaches, convolutional neural networks (CNN) and support vector machines (SVM). METHODS: A comparative diagnostic study was performed in two groups of patients (n=60): a control group without evidence of lesions (n=30) and a group with clinically suspicious oral malignancy (n=30). All patients underwent cytological smears using an exfoliative cytology brush, followed by routine hematoxylin and eosin staining. Image preprocessing, data splitting, machine learning model development, feature extraction, and model evaluation were performed. An independent t-test was run on each nuclear characteristic, and Pearson's correlation coefficient test was performed with SPSS software (IBM SPSS Statistics for Windows, Version 28.0, IBM Corp, Armonk, NY, USA). RESULTS: The study found significant differences between the study and control groups in nuclear size (p<0.05), nuclear shape (p<0.01), and chromatin distribution (p<0.001). The Pearson correlation coefficient was 0.6472 for the SVM and 0.7790 for the CNN. CONCLUSION: The availability of multidimensional datasets, combined with breakthroughs in high-performance computing and new deep-learning architectures, has resulted in an explosion of AI use in many areas of oncology research. The diagnostic accuracy exhibited by the SVM and CNN models suggests prospective improvements in early detection rates, potentially improving patient outcomes and healthcare practices.
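Pearson's correlation coefficient, the agreement measure used above, is directly available in numpy; a sketch with made-up paired nuclear-area measurements (not the study's data):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation between paired measurements, e.g. automated
    vs. manually measured nuclear areas for the same cells."""
    return float(np.corrcoef(x, y)[0, 1])

# Illustrative paired measurements (manual vs. automated), in square microns
manual = np.array([52.1, 48.3, 61.7, 55.0, 70.2, 44.9])
auto = np.array([50.8, 49.0, 60.1, 56.3, 68.5, 46.2])
r = pearson_r(manual, auto)  # approaches 1 as the two methods agree
```

A coefficient near 1 (as for the CNN's 0.7790 vs. the SVM's 0.6472) indicates closer agreement with the reference measurements.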
Affiliation(s)
- Shubhangi Mhaske
- Oral Pathology and Microbiology, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, IND
- Oral and Maxillofacial Pathology, People's College of Dental Science and Research Center, Bhopal, IND
- Karthikeyan Ramalingam
- Oral Pathology and Microbiology, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, IND
- Preeti Nair
- Oral Medicine and Radiology, People's College of Dental Science and Research Center, Bhopal, IND
- Shubham Patel
- Oral and Maxillofacial Pathology, People's College of Dental Science and Research Center, Bhopal, IND
- Arathi Menon P
- Dentistry, Indian Council of Medical Research, Bhopal, IND
- Nida Malik
- Periodontics, Kamala Nehru Hospital, Bhopal, IND
- Sumedh Mhaske
- Medicine, Government Medical College & Hospital, Aurangabad, IND
5
Warin K, Suebnukarn S. Deep learning in oral cancer - a systematic review. BMC Oral Health 2024; 24:212. [PMID: 38341571] [PMCID: PMC10859022] [DOI: 10.1186/s12903-024-03993-5]
Abstract
BACKGROUND: Oral cancer is a life-threatening malignancy that affects the survival rate and quality of life of patients. The aim of this systematic review was to review deep learning (DL) studies on the diagnosis and prognostic prediction of oral cancer. METHODS: This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies published from January 2000 to June 2023. RESULTS: Fifty-four studies qualified for inclusion, covering diagnosis (n = 51) and prognostic prediction (n = 3). Thirteen studies showed a low risk of bias in all domains, and 40 studies showed low risk of concerns regarding applicability. Reported DL performance was 85.0-100% accuracy for classification, 79.31-89.0% F1-score for object detection, 76.0-96.3% Dice coefficient for segmentation, and 0.78-0.95 concordance index for prognostic prediction. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for classification studies. CONCLUSIONS: The number of DL studies in oral cancer is increasing, with diverse architectures. The reported accuracy showed promising DL performance in studies of oral cancer, with potential utility in improving informed clinical decision-making.
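The diagnostic odds ratio pooled above is computed per study from a 2x2 confusion table; a sketch with made-up counts (not any included study's figures):

```python
def diagnostic_odds_ratio(tp: int, fp: int, fn: int, tn: int) -> float:
    """DOR = (TP/FN) / (FP/TN): the odds of testing positive with disease
    versus without. Adds the common 0.5 (Haldane-Anscombe) correction
    when any cell is zero, to keep the ratio finite."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (v + 0.5 for v in (tp, fp, fn, tn))
    return (tp * tn) / (fp * fn)

# Illustrative study: 90 TP, 5 FP, 10 FN, 95 TN
dor = diagnostic_odds_ratio(90, 5, 10, 95)  # (90*95)/(5*10) = 171.0
```

Meta-analyses then pool such per-study values (typically on the log scale) to obtain a summary estimate like the 2549.08 reported here.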
Affiliation(s)
- Kritsasith Warin
- Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand.
6
Albalawi E, Thakur A, Ramakrishna MT, Bhatia Khan S, SankaraNarayanan S, Almarri B, Hadi TH. Oral squamous cell carcinoma detection using EfficientNet on histopathological images. Front Med (Lausanne) 2024; 10:1349336. [PMID: 38348235] [PMCID: PMC10859441] [DOI: 10.3389/fmed.2023.1349336]
Abstract
Introduction: Oral squamous cell carcinoma (OSCC) poses a significant challenge in oncology due to the absence of precise diagnostic tools, leading to delays in identifying the condition. Current diagnostic methods for OSCC have limitations in accuracy and efficiency, highlighting the need for more reliable approaches. This study explores the discriminative potential of histopathological images of oral epithelium and OSCC. Methods: The research utilized a publicly available histopathological imaging database for oral cancer analysis, comprising 1224 images from 230 patients captured at varying magnifications. These images formed the basis for training a customized deep learning model built upon the EfficientNetB3 architecture. The model was trained to distinguish between normal epithelium and OSCC tissues, employing data augmentation, regularization techniques, and optimization strategies. Results: The customized model achieved 99% accuracy when tested on the dataset, underscoring its efficacy in discerning between normal epithelium and OSCC tissues. The model also exhibited strong precision, recall, and F1-score metrics, reinforcing its potential as a robust diagnostic tool for OSCC. Discussion: This research demonstrates the promising potential of deep learning models to address the diagnostic challenges associated with OSCC. The 99% test accuracy signifies a considerable step toward earlier and more accurate detection, and leveraging techniques such as data augmentation and optimization shows promise for improving patient outcomes through timely and precise identification of OSCC.
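Data augmentation of the kind described, flips and rotations of histology tiles, can be sketched in plain numpy; this is a minimal stand-in for an augmentation pipeline, not the authors' code:

```python
import numpy as np

def augment(tile: np.ndarray) -> list:
    """Return the 8 dihedral variants (4 rotations x optional horizontal flip)
    of an H x W x C image tile -- a standard augmentation for histopathology,
    where tiles have no preferred orientation."""
    variants = []
    for k in range(4):                         # 0, 90, 180, 270 degree rotations
        rot = np.rot90(tile, k)
        variants.append(rot)
        variants.append(np.flip(rot, axis=1))  # horizontal mirror of each rotation
    return variants

tile = np.arange(2 * 2 * 3).reshape(2, 2, 3)   # toy 2x2 RGB tile
aug = augment(tile)                            # 8 label-preserving variants
```

In a training loop these variants (or random draws from them) multiply the effective dataset size without changing any label.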
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Mahesh Thyluru Ramakrishna
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Surbhi Bhatia Khan
- Department of Data Science, School of Science, Engineering and Environment, University of Salford, Salford, United Kingdom
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Suresh SankaraNarayanan
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Badar Almarri
- Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Theyazn Hassn Hadi
- Applied College in Abqaiq, King Faisal University, Al-Ahsa, Saudi Arabia
7
Shamsan A, Senan EM, Ahmad Shatnawi HS. Predicting of diabetic retinopathy development stages of fundus images using deep learning based on combined features. PLoS One 2023; 18:e0289555. [PMID: 37862328] [PMCID: PMC10588832] [DOI: 10.1371/journal.pone.0289555]
Abstract
The number of diabetic retinopathy (DR) patients is increasing every year, creating a public health problem. Regular examination of diabetes patients is therefore necessary to avoid the progression of DR to advanced stages that lead to blindness. Manual diagnosis requires effort and expertise, and is prone to errors and differing expert opinions; artificial intelligence techniques can help doctors reach a proper diagnosis and resolve differing opinions. This study developed three approaches, each with two systems, for early diagnosis of DR disease progression. All colour fundus images were subjected to image enhancement, and the contrast of the region of interest (ROI) was increased through filters. All features extracted by the DenseNet-121 and AlexNet models (Dense-121 and Alex) were fed to Principal Component Analysis (PCA) to select important features and reduce their dimensionality. The first approach analyses DR images for early prediction of disease progression using an Artificial Neural Network (ANN) with the selected, low-dimensional features of the Dense-121 and Alex models. The second approach integrates the important, low-dimensional features of the Dense-121 and Alex models before and after PCA. The third approach uses an ANN with radiomic features: combinations of the CNN features (Dense-121 and Alex, separately) with handcrafted features extracted by Discrete Wavelet Transform (DWT), Local Binary Pattern (LBP), Fuzzy Colour Histogram (FCH), and Gray-Level Co-occurrence Matrix (GLCM) methods. With the radiomic features of the Alex model and the handcrafted features, the ANN reached a sensitivity of 97.92%, an AUC of 99.56%, an accuracy of 99.1%, a specificity of 99.4%, and a precision of 99.06%.
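The PCA step described above, compressing high-dimensional CNN feature vectors before classification, can be sketched with a plain SVD; the random matrix stands in for DenseNet-121/AlexNet outputs and the dimensions are illustrative:

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project a feature matrix X (n_samples x n_features) onto its top
    principal components via SVD of the mean-centred data."""
    Xc = X - X.mean(axis=0)                     # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # scores in the reduced space

rng = np.random.default_rng(0)
features = rng.normal(size=(50, 256))           # stand-in for CNN feature vectors
reduced = pca_reduce(features, n_components=10) # 256 -> 10 dimensions per image
```

The reduced vectors are what would then be fed to the ANN classifier; component variances are ordered, so the first columns carry the most information.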
Affiliation(s)
- Ahlam Shamsan
- Computer Department, Applied College, Najran University, Najran, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
8
Pošta P, Kolk A, Pivovarčíková K, Liška J, Genčur J, Moztarzadeh O, Micopulos C, Pěnkava A, Frolo M, Bissinger O, Hauer L. Clinical Experience with Autofluorescence Guided Oral Squamous Cell Carcinoma Surgery. Diagnostics (Basel) 2023; 13:3161. [PMID: 37891982] [PMCID: PMC10605623] [DOI: 10.3390/diagnostics13203161]
Abstract
In our study, the effect of autofluorescence (Visually Enhanced Lesion Scope, VELscope) on increasing the success rate of surgical treatment of oral squamous cell carcinoma (OSCC) was investigated. The hypothesis was tested on a group of 122 patients with OSCC who met the inclusion criteria, randomized into a study group and a control group. In the study group, a preoperative check via VELscope, accompanied by marking of the extent of the loss of fluorescence, was performed before surgery; we developed a unique mucosal tattoo marking technique for this purpose. The histopathological results after surgical treatment, i.e., the margin status, were then compared. In the study group, we achieved a pathological free margin (pFM) in 55 patients and a pathological close margin (pCM) in 6 cases, with no cases of pathological positive margin (pPM) in the mucosal layer. In comparison, the control group results revealed pPM in 7 cases, pCM in 14 cases, and pFM in 40 cases in the mucosal layer. This study demonstrated that preoperative autofluorescence assessment of the mucosal surroundings of OSCC increased the odds of achieving a pFM resection 4.8-fold in terms of lateral margins.
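The 4.8-fold figure follows from the reported margin counts as an odds ratio: the odds of a pathological free margin were 55 to 6 in the study group versus 40 to 21 in the control group. A quick check:

```python
def odds_ratio(events_a: int, non_events_a: int,
               events_b: int, non_events_b: int) -> float:
    """Odds ratio of achieving the outcome in group A versus group B."""
    return (events_a / non_events_a) / (events_b / non_events_b)

# Study group: 55 pFM vs 6 non-pFM (6 pCM, 0 pPM)
# Control group: 40 pFM vs 21 non-pFM (14 pCM, 7 pPM)
ratio = odds_ratio(55, 6, 40, 21)  # (55/6) / (40/21) = 4.8125
```

Both groups total 61 patients, consistent with the 122 enrolled; the ratio rounds to the 4.8 reported in the abstract.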
Affiliation(s)
- Petr Pošta
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Andreas Kolk
- Department of Oral and Maxillofacial Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Kristýna Pivovarčíková
- Sikl’s Department of Pathology, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Bioptic Laboratory Ltd., 32600 Pilsen, Czech Republic
- Jan Liška
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Jiří Genčur
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Omid Moztarzadeh
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Department of Anatomy, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Christos Micopulos
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Adam Pěnkava
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Maria Frolo
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Oliver Bissinger
- Department of Oral and Maxillofacial Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Lukáš Hauer
- Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
9
Song S, Ren X, He J, Gao M, Wang J, Wang B. An Optimal Hierarchical Approach for Oral Cancer Diagnosis Using Rough Set Theory and an Amended Version of the Competitive Search Algorithm. Diagnostics (Basel) 2023; 13:2454. [PMID: 37510198] [PMCID: PMC10377835] [DOI: 10.3390/diagnostics13142454]
Abstract
Oral cancer is the uncontrolled growth of cells that destroys and damages nearby tissues. It occurs when a sore or lump grows in the mouth and does not disappear. Cancers of the cheeks, lips, floor of the mouth, tongue, sinuses, hard and soft palate, and pharynx (throat) are types of this cancer that can be deadly if not detected and treated in the early stages. The present study proposes a new pipeline for an efficient diagnosis system for oral cancer images. In this procedure, after preprocessing and segmenting the area of interest in the input images, useful characteristics are extracted. A subset of useful features is then selected, and the rest are discarded to reduce the method's complexity. Finally, the selected features are passed to a support vector machine (SVM) to classify the images. The feature selection and classification steps are optimized by an amended version of the competitive search optimizer. The technique is implemented on the Oral Cancer (Lips and Tongue) images (OCI) dataset, and its achievements are confirmed by comparison with several recent techniques: weight balancing, a support vector machine, a gray-level co-occurrence matrix (GLCM), a deep method, transfer learning, mobile microscopy, and quadratic discriminant analysis. The simulation results were validated by four indicators and demonstrated the suggested method's efficiency relative to the others in diagnosing oral cancer cases.
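The select-then-classify pattern can be sketched with scikit-learn, substituting a simple univariate filter for the paper's amended competitive search optimizer; the data, feature counts, and parameters below are entirely synthetic:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Toy two-class data: only the first 2 of 10 features are informative
rng = np.random.default_rng(42)
n = 100
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 10))
X[:, 0] += 3 * y          # informative feature: shifted by class
X[:, 1] -= 3 * y          # second informative feature

# Keep the k best features (stand-in for the optimizer), then classify with an SVM
model = make_pipeline(SelectKBest(f_classif, k=2), SVC(kernel="rbf"))
model.fit(X, y)
acc = model.score(X, y)   # training accuracy on clearly separable toy data
```

Discarding uninformative features before the SVM reduces both the classifier's complexity and its tendency to fit noise, which is the motivation the abstract gives for its selection step.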
Affiliation(s)
- Simin Song
- The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Xiaojing Ren
- The First Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Jing He
- The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Meng Gao
- The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Jia'nan Wang
- The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Bin Wang
- The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
10
Hamdi M, Senan EM, Jadhav ME, Olayah F, Awaji B, Alalayah KM. Hybrid Models Based on Fusion Features of a CNN and Handcrafted Features for Accurate Histopathological Image Analysis for Diagnosing Malignant Lymphomas. Diagnostics (Basel) 2023; 13:2258. [PMID: 37443652] [DOI: 10.3390/diagnostics13132258]
Abstract
Malignant lymphoma is one of the most severe diseases, leading to death as a result of the malignant transformation of lymphocytes. The transformation of cells from indolent B-cell lymphoma to diffuse large B-cell lymphoma (DLBCL) is life-threatening. Biopsies taken from the patient are the gold standard for lymphoma analysis. Glass slides under a microscope are converted into whole slide images (WSI) to be analyzed by AI techniques through biomedical image processing. Because of the multiplicity of types of malignant lymphomas, manual diagnosis by pathologists is difficult, tedious, and subject to disagreement among physicians. The use of AI in the early diagnosis of malignant lymphoma offers numerous benefits, including improved accuracy, faster diagnosis, and risk stratification. This study developed several strategies based on hybrid systems to analyze histopathological images of malignant lymphomas. For all proposed models, the images and the extraction of malignant lymphocytes were optimized by the gradient vector flow (GVF) algorithm. The first strategy relied on a hybrid system of three types of deep learning (DL) networks with XGBoost and decision tree (DT) algorithms, based on the GVF algorithm. The second strategy fused the features of the MobileNet-VGG16, VGG16-AlexNet, and MobileNet-AlexNet models and classified them with XGBoost and DT algorithms based on the ant colony optimization (ACO) algorithm. The colour, shape, and texture features, known as handcrafted features, were extracted by four traditional feature-extraction algorithms. Because of the similarity in the biological characteristics of early-stage malignant lymphomas, the features of the fused MobileNet-VGG16, VGG16-AlexNet, and MobileNet-AlexNet models were also combined with the handcrafted features and classified by the XGBoost and DT algorithms based on the ACO algorithm. The fusion of DL features with handcrafted features achieved the best performance: the XGBoost network based on the fused MobileNet-VGG16 and handcrafted features resulted in an AUC of 99.43%, accuracy of 99.8%, precision of 99.77%, sensitivity of 99.7%, and specificity of 99.8%. These results highlight the significant role of AI, applied to WSI converted from biopsies, in the early diagnosis of malignant lymphoma, offering improved accuracy, expedited diagnosis, and enhanced risk stratification.
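The feature-fusion step, combining features from several CNN backbones with handcrafted colour/shape/texture features before classification, is at its core a column-wise concatenation; the shapes below are illustrative stand-ins, not the paper's actual feature dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_images = 32
cnn_a = rng.normal(size=(n_images, 1280))        # stand-in for MobileNet features
cnn_b = rng.normal(size=(n_images, 4096))        # stand-in for VGG16 features
handcrafted = rng.normal(size=(n_images, 128))   # e.g. GLCM/LBP/colour statistics

# Fused representation: one row per image, all feature families side by side,
# ready to feed a downstream classifier (XGBoost/DT in the paper)
fused = np.concatenate([cnn_a, cnn_b, handcrafted], axis=1)
```

Because the fused vector is wide, pipelines like this one typically follow it with a feature-selection or optimization stage (ACO in the paper) before the final classifier.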
Affiliation(s)
- Mohammed Hamdi
- Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
- Mukti E Jadhav
- Shri Shivaji Science & Arts College, Chikhli Dist., Buldana 443201, India
- Fekry Olayah
- Department of Information System, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
- Bakri Awaji
- Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
- Khaled M Alalayah
- Department of Computer Science, Faculty of Science and Arts, Sharurah, Najran University, Najran 66462, Saudi Arabia
|
11
|
Khanagar SB, Alkadi L, Alghilan MA, Kalagi S, Awawdeh M, Bijai LK, Vishwanathaiah S, Aldhebaib A, Singh OG. Application and Performance of Artificial Intelligence (AI) in Oral Cancer Diagnosis and Prediction Using Histopathological Images: A Systematic Review. Biomedicines 2023; 11:1612. [PMID: 37371706 DOI: 10.3390/biomedicines11061612] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 05/27/2023] [Accepted: 05/31/2023] [Indexed: 06/29/2023] Open
Abstract
Oral cancer (OC) is one of the most common forms of head and neck cancer and continues to have the lowest survival rates worldwide, even with advancements in research and therapy. The prognosis of OC has not significantly improved in recent years, presenting a persistent challenge in the biomedical field. In oncology, artificial intelligence (AI) has developed rapidly, with notable recent successes. This systematic review aimed to critically appraise the available evidence on the use of AI in the diagnosis, classification, and prediction of OC using histopathological images. An electronic search of several databases, including PubMed, Scopus, Embase, the Cochrane Library, Web of Science, Google Scholar, and the Saudi Digital Library, was conducted for articles published between January 2000 and January 2023. Nineteen articles that met the inclusion criteria were critically appraised with QUADAS-2, and the certainty of the evidence was assessed using the GRADE approach. AI models have been widely applied to diagnosing oral cancer, differentiating normal from malignant regions, predicting the survival of OC patients, and grading OC. The AI models in these studies reported accuracies ranging from 89.47% to 100%, sensitivities from 97.76% to 99.26%, and specificities from 92% to 99.42%. Their ability to diagnose, classify, and predict the occurrence of OC outperforms existing clinical approaches, demonstrating the potential for AI to deliver superior precision and accuracy, helping pathologists significantly improve their diagnostic outcomes and reduce the probability of errors. Considering these advantages, regulatory bodies and policymakers should expedite the approval and marketing of these products for application in clinical scenarios.
Affiliation(s)
- Sanjeev B Khanagar
- Preventive Dental Science Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Lubna Alkadi
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Maryam A Alghilan
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Sara Kalagi
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Mohammed Awawdeh
- Preventive Dental Science Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Lalitytha Kumar Bijai
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Maxillofacial Surgery and Diagnostic Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Satish Vishwanathaiah
- Department of Preventive Dental Sciences, Division of Pediatric Dentistry, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
- Ali Aldhebaib
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Radiological Sciences Program, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Oinam Gokulchandra Singh
- King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Radiological Sciences Program, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
|
12
|
Ghaleb Al-Mekhlafi Z, Mohammed Senan E, Sulaiman Alshudukhi J, Abdulkarem Mohammed B. Hybrid Techniques for Diagnosing Endoscopy Images for Early Detection of Gastrointestinal Disease Based on Fusion Features. INT J INTELL SYST 2023. [DOI: 10.1155/2023/8616939] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/05/2023]
Abstract
Gastrointestinal (GI) diseases, particularly tumours, are among the most widespread and dangerous diseases, and timely health care for early detection is needed to reduce deaths. Endoscopy is an effective technique for diagnosing GI diseases, but it produces a video containing thousands of frames; it is difficult for a gastroenterologist to analyse every image, and keeping track of all the frames takes a long time. Artificial intelligence systems address this challenge by analysing thousands of images with high speed and accuracy. Hence, systems with different methodologies are developed in this work. The first methodology diagnoses endoscopy images of GI diseases using VGG-16 + SVM and DenseNet-121 + SVM. The second methodology uses an artificial neural network (ANN) on features fused from VGG-16 and DenseNet-121, before and after dimensionality reduction by principal component analysis (PCA). The third methodology uses an ANN on features fused between VGG-16 and handcrafted features, and between DenseNet-121 and the handcrafted features. Here, the handcrafted features combine the gray level co-occurrence matrix (GLCM), discrete wavelet transform (DWT), fuzzy colour histogram (FCH), and local binary pattern (LBP) methods. All systems achieved promising results for diagnosing endoscopy images of the gastroenterology dataset. Based on the fused VGG-16 and handcrafted features, the ANN reached an accuracy, sensitivity, precision, specificity, and AUC of 98.9%, 98.70%, 98.94%, 99.69%, and 99.51%, respectively.
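The fuse-then-reduce pipeline described in the second methodology can be sketched as follows; the sample count and feature dimensions are hypothetical stand-ins for the VGG-16 and DenseNet-121 feature sizes.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Hypothetical stand-ins for the two backbones' per-image feature vectors.
vgg16_feats = rng.normal(size=(150, 4096))     # e.g. a VGG-16 fc-layer output
densenet_feats = rng.normal(size=(150, 1024))  # e.g. DenseNet-121 pooled output

# Fusion by concatenation gives a 5120-D vector per image ...
fused = np.concatenate([vgg16_feats, densenet_feats], axis=1)
# ... which PCA reduces before the vectors are fed to the ANN classifier.
reduced = PCA(n_components=100).fit_transform(fused)
print(fused.shape, reduced.shape)  # (150, 5120) (150, 100)
```

Applying PCA after fusion lets the projection exploit correlations between the two backbones' features, rather than reducing each stream in isolation.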
Affiliation(s)
- Zeyad Ghaleb Al-Mekhlafi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
- Jalawi Sulaiman Alshudukhi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Badiea Abdulkarem Mohammed
- Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
|
13
|
Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Histopathological Analysis for Detecting Lung and Colon Cancer Malignancies Using Hybrid Systems with Fused Features. Bioengineering (Basel) 2023; 10:bioengineering10030383. [PMID: 36978774 PMCID: PMC10045080 DOI: 10.3390/bioengineering10030383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/05/2023] [Accepted: 03/16/2023] [Indexed: 03/30/2023] Open
Abstract
Lung and colon cancer are among humanity's most common and deadly cancers. In 2020, 4.19 million people were diagnosed with lung or colon cancer, and more than 2.7 million died worldwide. Some people develop lung and colon cancer simultaneously: smoking causes lung cancer, and an associated abnormal diet contributes to colon cancer. There are many techniques for diagnosing lung and colon cancer, most notably biopsy and its analysis in laboratories, which is constrained by the scarcity of health centers and medical staff, especially in developing countries. Moreover, manual diagnosis takes a long time and is subject to differing opinions among doctors. Artificial intelligence techniques address these challenges. In this study, three strategies were developed, each with two systems, for early diagnosis of histological images from the LC25000 dataset. The histological images were enhanced and the contrast of affected areas was increased. Because the GoogLeNet and VGG-19 models in all systems produced high-dimensional features, redundant and unnecessary features were removed by the PCA method to reduce dimensionality while retaining essential features. The first strategy diagnoses the histological images of the LC25000 dataset by ANN using the crucial features of the GoogLeNet and VGG-19 models separately. The second strategy uses an ANN with the combined features of GoogLeNet and VGG-19: one system reduced the dimensionality of each model's features before combining them, while the other combined the high-dimensional features first and then reduced them. The third strategy uses an ANN with features fused from the CNN models (GoogLeNet and VGG-19) and handcrafted features. With the fused VGG-19 and handcrafted features, the ANN reached a sensitivity of 99.85%, a precision of 100%, an accuracy of 99.64%, a specificity of 100%, and an AUC of 99.86%.
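The two fusion orders in the second strategy (reduce each model's features then combine, versus combine then reduce) can be sketched as follows; the sample count and dimensions are hypothetical stand-ins for the GoogLeNet and VGG-19 feature sizes.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
g = rng.normal(size=(120, 1024))   # hypothetical GoogLeNet feature vectors
v = rng.normal(size=(120, 4096))   # hypothetical VGG-19 feature vectors

# Order A: reduce each stream with PCA, then fuse by concatenation.
a = np.concatenate([PCA(n_components=50).fit_transform(g),
                    PCA(n_components=50).fit_transform(v)], axis=1)

# Order B: fuse the full vectors first, then reduce the combined vector.
b = PCA(n_components=100).fit_transform(np.concatenate([g, v], axis=1))

print(a.shape, b.shape)  # both (120, 100)
```

Both orders yield vectors of the same final size, but order B can discard redundancy shared across the two backbones, while order A guarantees each backbone an equal share of the retained components.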
Affiliation(s)
- Mohammed Al-Jabbar
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Alshahrani
- Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
|
14
|
Ananthakrishnan B, Shaik A, Kumar S, Narendran SO, Mattu K, Kavitha MS. Automated Detection and Classification of Oral Squamous Cell Carcinoma Using Deep Neural Networks. Diagnostics (Basel) 2023; 13:diagnostics13050918. [PMID: 36900062 PMCID: PMC10001077 DOI: 10.3390/diagnostics13050918] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/02/2023] [Accepted: 02/14/2023] [Indexed: 03/05/2023] Open
Abstract
This work aims to classify normal and carcinogenic cells in the oral cavity using two different approaches, with an eye towards achieving high accuracy. The first approach extracts local binary patterns and histogram-derived metrics from the dataset and feeds them to several machine-learning models. The second approach uses a combination of neural networks as a backbone feature extractor and a random forest for classification. The results show that these approaches can learn effectively from limited training images. Some existing approaches use deep learning algorithms to generate a bounding box that locates the suspected lesion; others extract handcrafted textural features and feed the resulting feature vectors to a classification model. The proposed method extracts image features using pre-trained convolutional neural networks (CNN) and trains a classification model on the resulting feature vectors. By using features extracted from a pre-trained CNN model to train a random forest, the need for a large amount of data to train deep learning models is bypassed. The study used a dataset of 1224 images, divided into two sets with varying resolutions. Model performance is measured by accuracy, specificity, sensitivity, and the area under the curve (AUC). The proposed work achieves a highest test accuracy of 96.94% and an AUC of 0.976 using 696 images at 400× magnification, and a highest test accuracy of 99.65% and an AUC of 0.9983 using only 528 images at 100× magnification.
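The frozen-extractor-plus-random-forest idea described here can be sketched as follows. Synthetic data stands in for the CNN embeddings: in practice the feature matrix would come from a pre-trained network's penultimate layer, not from make_classification.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for embeddings produced by a frozen, pre-trained CNN
# (one 512-D vector per histology image, with a binary normal/carcinoma label).
X, y = make_classification(n_samples=300, n_features=512, n_informative=20,
                           random_state=0)

# Only the lightweight forest is trained; the deep network would stay frozen,
# which is why this setup needs far less labelled data than end-to-end training.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(forest, X, y, cv=5)
print("mean CV accuracy: %.2f" % scores.mean())
```

The forest trains in seconds on a few hundred vectors, which is the data-efficiency argument the abstract makes.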
Affiliation(s)
- Balasundaram Ananthakrishnan
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Correspondence: (B.A.); (A.S.)
- Ayesha Shaik
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Correspondence: (B.A.); (A.S.)
- Soham Kumar
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- S. O. Narendran
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Khushi Mattu
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Muthu Subash Kavitha
- School of Information and Data Sciences, Nagasaki University, Nagasaki 852-8521, Japan
|