1
Priya J, Raja SKS, Kiruthika SU. State-of-art technologies, challenges, and emerging trends of computer vision in dental images. Comput Biol Med 2024;178:108800. PMID: 38917534. DOI: 10.1016/j.compbiomed.2024.108800.
Abstract
Computer vision falls under the broad umbrella of artificial intelligence that mimics human vision and plays a vital role in dental imaging. Dental practitioners visualize and interpret the teeth and the structures surrounding them, and detect abnormalities by manually examining various dental imaging modalities. Owing to the complexity and cognitive difficulty of interpreting medical data, human error makes accurate diagnosis difficult. Automated diagnosis may help alleviate delays, hasten practitioners' interpretation of positive cases, and lighten their workload. This survey briefly describes the medical imaging modalities employed in dentistry, such as X-rays, CT scans, and color images. Dentists employ dental imaging as a diagnostic tool in several specialties, including orthodontics, endodontics, and periodontics. In dentistry, computer vision has progressed from classic image processing to machine learning with mathematical approaches and robust deep learning techniques. The survey covers conventional image processing techniques used alone and in conjunction with machine learning algorithms, as well as sophisticated deep learning architectures for dental radiograph analysis. It provides a detailed summary of several tasks, including anatomical segmentation and the identification and categorization of different dental anomalies, together with their shortfalls and future perspectives in this field.
Affiliation(s)
- J Priya
- ECE Department, Easwari Engineering College, Ramapuram, Chennai, Tamilnadu, India.
- S Kanaga Suba Raja
- CSE Department, SRM Institute of Science and Technology, Tiruchirappalli, Tamilnadu, India.
- S Usha Kiruthika
- CSE Department, National Institute of Technology, Tiruchirappalli, Tamilnadu, India.
2
Xiong Y, Zhang H, Zhou S, Lu M, Huang J, Huang Q, Huang B, Ding J. Simultaneous detection of dental caries and fissure sealant in intraoral photos by deep learning: a pilot study. BMC Oral Health 2024;24:553. PMID: 38735954. PMCID: PMC11089789. DOI: 10.1186/s12903-024-04254-1.
Abstract
BACKGROUND Deep learning, an artificial intelligence method, has proved powerful in analyzing images. The purpose of this study was to construct a deep learning-based model (ToothNet) for the simultaneous detection of dental caries and fissure sealants in intraoral photos. METHODS A total of 1020 intraoral photos were collected from 762 volunteers. Teeth, caries, and sealants were annotated by two endodontists using the LabelMe tool. ToothNet was developed by modifying the YOLOX framework for simultaneous detection of caries and fissure sealants. The area under the curve (AUC) of the receiver operating characteristic (ROC) and free-response ROC (FROC) curves was used to evaluate model performance in two respects: (i) classification accuracy in detecting dental caries and fissure sealants in a photograph (image level); and (ii) localization accuracy of the predicted dental caries and fissure sealants (tooth level). The performance of ToothNet and a dentist with 1 year of experience (1-year dentist) was compared at the tooth level and image level using the Wilcoxon test and DeLong test. RESULTS At the image level, ToothNet achieved an AUC of 0.925 (95% CI, 0.880-0.958) for caries detection and 0.902 (95% CI, 0.853-0.940) for sealant detection. At the tooth level, with a confidence threshold of 0.5, the sensitivity, precision, and F1-score for caries detection were 0.807, 0.814, and 0.810, respectively. For fissure sealant detection, the values were 0.714, 0.750, and 0.731. Compared with ToothNet, the 1-year dentist had a lower F1 value (0.599, p < 0.0001) and AUC (0.749, p < 0.0001) in caries detection, and a lower F1 value (0.727, p = 0.023) and similar AUC (0.829, p = 0.154) in sealant detection. CONCLUSIONS The proposed deep learning model achieved multi-task simultaneous detection in intraoral photos and showed good performance in detecting dental caries and fissure sealants. Compared with the 1-year dentist, the model has advantages in caries detection and is equivalent in fissure sealant detection.
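The tooth-level sensitivity, precision, and F1-score quoted above all derive from true-positive, false-positive, and false-negative counts. A minimal sketch of that arithmetic (the counts below are invented for illustration, not taken from the study):

```python
def detection_metrics(tp, fp, fn):
    """Tooth-level detection metrics from raw counts.

    tp: correctly detected lesions, fp: spurious detections,
    fn: missed lesions."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for illustration only
precision, recall, f1 = detection_metrics(tp=80, fp=20, fn=20)
```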
Affiliation(s)
- Yanshan Xiong
- Department of Endodontics, Shenzhen Stomatology Hospital, Shenzhen, Guangdong, China
- Hongyuan Zhang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, Guangdong, China
- Shiyong Zhou
- Department of Endodontics, Shenzhen Stomatology Hospital, Shenzhen, Guangdong, China
- Minhua Lu
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, Guangdong, China
- Jiahui Huang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, Guangdong, China
- Qiangtai Huang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, Guangdong, China
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, Guangdong, China.
- Jiangfeng Ding
- Department of Endodontics, Shenzhen Stomatology Hospital, Shenzhen, Guangdong, China.
- Department of Pediatric Stomatology, Shenzhen Stomatology Hospital, Shenzhen, Guangdong, China.
3
Pérez de Frutos J, Holden Helland R, Desai S, Nymoen LC, Langø T, Remman T, Sen A. AI-Dentify: deep learning for proximal caries detection on bitewing x-ray - HUNT4 Oral Health Study. BMC Oral Health 2024;24:344. PMID: 38494481. PMCID: PMC10946166. DOI: 10.1186/s12903-024-04120-0.
Abstract
BACKGROUND Dental caries diagnosis requires manual inspection of diagnostic bitewing images of the patient, followed by visual inspection and probing of the identified teeth with potential lesions. Yet the use of artificial intelligence, and in particular deep learning, has the potential to aid the diagnosis by providing a quick and informative analysis of the bitewing images. METHODS A dataset of 13,887 bitewings from the HUNT4 Oral Health Study was annotated individually by six different experts and used to train three different object-detection deep learning architectures: RetinaNet (ResNet50), YOLOv5 (M size), and EfficientDet (D0 and D1 sizes). A consensus dataset of 197 images, annotated jointly by the same six dental clinicians, was used for evaluation. A five-fold cross-validation scheme was used to evaluate the performance of the AI models. RESULTS The trained models show an increase in average precision and F1-score, and a decrease in false negative rate, with respect to the dental clinicians. When compared against the dental clinicians, the YOLOv5 model shows the largest improvement, reporting a 0.647 mean average precision, 0.548 mean F1-score, and 0.149 mean false negative rate, whereas the best annotators on each of these metrics reported 0.299, 0.495, and 0.164, respectively. CONCLUSION Deep learning models have shown the potential to assist dental professionals in the diagnosis of caries. Yet the task remains challenging due to the artifacts inherent to bitewing images.
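The five-fold cross-validation scheme mentioned above can be sketched at the index level as follows (a generic partitioning, not the authors' code): every image serves exactly once for validation and four times for training.

```python
def five_fold_splits(n_items, k=5):
    """Yield (train_idx, val_idx) pairs; each index is validated exactly once."""
    folds = [list(range(n_items))[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, val
```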
Affiliation(s)
- Javier Pérez de Frutos
- Department of Health Research, SINTEF Digital, Professor Brochs gate 2, Trondheim, 7030, Norway.
- Ragnhild Holden Helland
- Department of Health Research, SINTEF Digital, Professor Brochs gate 2, Trondheim, 7030, Norway
- Line Cathrine Nymoen
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
- Kompetansesenteret Tannhelse Midt (TkMidt), Trondheim, Norway
- Thomas Langø
- Department of Health Research, SINTEF Digital, Professor Brochs gate 2, Trondheim, 7030, Norway
- Abhijit Sen
- Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
- Kompetansesenteret Tannhelse Midt (TkMidt), Trondheim, Norway
4
Rashid F, Farook TH, Dudley J. Digital Shade Matching in Dentistry: A Systematic Review. Dent J (Basel) 2023;11:250. PMID: 37999014. PMCID: PMC10670912. DOI: 10.3390/dj11110250.
Abstract
The pursuit of aesthetic excellence in dentistry, shaped by societal trends and digital advancements, highlights the critical role of precise shade matching in restorative procedures. Although conventional methods are prevalent, challenges such as shade guide variability and subjective interpretation necessitate a re-evaluation in the face of emerging non-proximity digital instruments. This systematic review employed PRISMA protocols and keyword-based search strategies spanning the Scopus, PubMed, and Web of Science databases, with the last updated search carried out in October 2023. The study aimed to synthesise literature that identified digital non-proximity recording instruments and associated colour spaces in dentistry, and to compare the clinical outcomes of digital systems with spectrophotometers and conventional visual methods. Utilising predefined criteria and resolving disagreements between two reviewers through Cohen's kappa calculator, the review assessed 85 articles, with 33 included in a PICO model for clinical comparisons. The results reveal that 42% of studies employed the CIELAB colour space. Despite the challenges in study quality, non-proximity digital instruments demonstrated more consistent clinical outcomes than visual methods, akin to spectrophotometers, emphasising their efficacy in controlled conditions. The review underscores the evolving landscape of dental shade matching, recognising technological advancements and advocating for methodological rigour in dental research.
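Since most of the instruments reviewed report colour in CIELAB, shade differences are commonly summarised with the CIE76 ΔE*ab metric, the Euclidean distance between two (L*, a*, b*) triplets. A minimal sketch (the shade values below are hypothetical, not from any reviewed study):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Two hypothetical tooth-shade measurements in (L*, a*, b*)
d = delta_e_76((72.0, 1.5, 18.0), (72.0, 4.5, 22.0))
```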
Affiliation(s)
- Farah Rashid
- Adelaide Dental School, The University of Adelaide, Adelaide, SA 5000, Australia
5
Altukroni A, Alsaeedi A, Gonzalez-Losada C, Lee JH, Alabudh M, Mirah M, El-Amri S, Ezz El-Deen O. Detection of the pathological exposure of pulp using an artificial intelligence tool: a multicentric study over periapical radiographs. BMC Oral Health 2023;23:553. PMID: 37563659. PMCID: PMC10416487. DOI: 10.1186/s12903-023-03251-0.
Abstract
BACKGROUND Introducing artificial intelligence (AI) into the medical field has proved beneficial in automating tasks and streamlining practitioners' work. Hence, this study was conducted to design and evaluate an AI tool called the Make Sure Caries Detector and Classifier (MSc) for detecting pathological exposure of pulp on digital periapical radiographs and to compare its performance with dentists. METHODS This was a diagnostic, multi-centric study with 3461 digital periapical radiographs from three countries and seven centers. MSc was built using the YOLOv5-x model and used for exposed- and unexposed-pulp detection. The dataset was split into train, validation, and test sets in an 8-1-1 ratio to prevent overfitting. A total of 345 images with 752 labels were randomly allocated to test MSc. The performance metrics used to test MSc included mean average precision (mAP), precision, F1 score, recall, and area under the receiver operating characteristic curve (AUC). The metrics used to compare the performance with that of 10 certified dentists were: right diagnosis exposed (RDE), right diagnosis not exposed (RDNE), false diagnosis exposed (FDE), false diagnosis not exposed (FDNE), missed diagnosis (MD), and overdiagnosis (OD). RESULTS MSc achieved a performance of more than 90% in all metrics examined: an average precision of 0.928, recall of 0.918, F1-score of 0.922, and AUC of 0.956 (P<.05). The results showed a higher mean of 1.94 for all right (correct) diagnosis parameters in the MSc group, and a higher mean of 0.64 for all wrong diagnosis parameters in the dentists' group (P<.05). CONCLUSIONS The designed MSc tool proved reliable in detecting and differentiating between exposed and unexposed pulp in the internally validated model. It also showed better performance in the detection of exposed and unexposed pulp when compared with the 10 dentists' consensus.
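The 8-1-1 train/validation/test split described above amounts to a shuffle followed by proportional slicing. A generic sketch (the seed and helper name are illustrative, not from the study):

```python
import random

def split_8_1_1(items, seed=42):
    """Shuffle and split into ~80% train, ~10% validation, ~10% test."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for a fixed seed
    n_train = int(0.8 * len(items))
    n_val = int(0.1 * len(items))
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```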
Affiliation(s)
- A Alsaeedi
- Department of Computer Science, College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia
- C Gonzalez-Losada
- School of Dentistry, Complutense University of Madrid, Madrid, Spain
- J H Lee
- Department of Periodontology, College of Dentistry and Institute of Oral Bioscience, Jeonbuk National University, Jeonju, Korea
- M Alabudh
- Ministry of Health, Medina, Saudi Arabia
- M Mirah
- Department of Dental Materials, Taibah University, Medina, Saudi Arabia
6
Shafi I, Sajad M, Fatima A, Aray DG, Lipari V, Diez IDLT, Ashraf I. Teeth Lesion Detection Using Deep Learning and the Internet of Things Post-COVID-19. Sensors (Basel) 2023;23:6837. PMID: 37571620. PMCID: PMC10422255. DOI: 10.3390/s23156837.
Abstract
With a view to the post-COVID-19 world and probable future pandemics, this paper presents an Internet of Things (IoT)-based automated healthcare diagnosis model that employs a mixed approach using data augmentation, transfer learning, and deep learning techniques and does not require physical interaction between the patient and physician. Through a user-friendly graphical user interface and the availability of suitable computing power on smart devices, the embedded artificial intelligence allows the proposed model to be used effectively by a layperson without the need for a dental expert, indicating any issues with the teeth and subsequent treatment options. The proposed method involves multiple processes, including data acquisition using IoT devices, data preprocessing, deep learning-based feature extraction, and classification through an unsupervised neural network. The dataset contains multiple periapical X-rays of five different types of lesions obtained through an IoT device mounted within a mouth guard. A pretrained AlexNet, a fast GPU implementation of a convolutional neural network (CNN), is fine-tuned using data augmentation and transfer learning and employed to extract a suitable feature set. Data augmentation avoids overtraining, whereas transfer learning improves accuracy. Support vector machine (SVM) and K-nearest neighbors (KNN) classifiers are then trained for lesion classification. The proposed automated model based on the AlexNet extraction mechanism followed by the SVM classifier achieved an accuracy of 98%, showing the effectiveness of the presented approach.
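The classification stage pairs extracted feature vectors with a conventional classifier such as KNN. A toy majority-vote KNN over plain feature tuples (illustrative only; the study classifies AlexNet features, not 2-D points, and the labels here are invented):

```python
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Label a query vector by majority vote among its k nearest
    training vectors (squared Euclidean distance)."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), label)
        for feat, label in zip(train_feats, train_labels)
    )
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```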
Affiliation(s)
- Imran Shafi
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Muhammad Sajad
- Abasyn University Islamabad Campus, Islamabad 44000, Pakistan
- Anum Fatima
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Daniel Gavilanes Aray
- Higher Polytechnic School, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Fundación Universitaria Internacional de Colombia Bogotá, Bogotá 11131, Colombia
- Vivían Lipari
- Higher Polytechnic School, Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Universidad Internacional Iberoamericana Arecibo, Puerto Rico, PR 00613, USA
- Universidade Internacional do Cuanza, Cuito EN250, Bié, Angola
- Isabel de la Torre Diez
- Department of Signal Theory, Communications and Telematics Engineering, University of Valladolid, Paseo de Belén, 15, 47011 Valladolid, Spain
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
7
Tareq A, Faisal MI, Islam MS, Rafa NS, Chowdhury T, Ahmed S, Farook TH, Mohammed N, Dudley J. Visual Diagnostics of Dental Caries through Deep Learning of Non-Standardised Photographs Using a Hybrid YOLO Ensemble and Transfer Learning Model. Int J Environ Res Public Health 2023;20:5351. PMID: 37047966. PMCID: PMC10094335. DOI: 10.3390/ijerph20075351.
Abstract
BACKGROUND Access to oral healthcare is not uniform globally, particularly in rural areas with limited resources, which limits the potential of automated diagnostics and advanced tele-dentistry applications. The use of digital caries detection and progression monitoring through photographic communication is influenced by multiple variables that are difficult to standardise in such settings. The objective of this study was to develop a novel and cost-effective virtual computer vision AI system to predict dental cavitations from non-standardised photographs with reasonable clinical accuracy. METHODS A set of 1703 augmented images was obtained from 233 de-identified teeth specimens. Images were acquired using a consumer smartphone, without any standardised apparatus. The study utilised state-of-the-art ensemble modelling, test-time augmentation, and transfer learning processes. The "you only look once" (YOLO) derivatives v5s, v5m, v5l, and v5x were independently evaluated, and an ensemble of the best results was augmented and transfer-learned with ResNet50, ResNet101, VGG16, AlexNet, and DenseNet. The outcomes were evaluated using precision, recall, and mean average precision (mAP). RESULTS The YOLO model ensemble achieved a mean average precision (mAP) of 0.732, an accuracy of 0.789, and a recall of 0.701. When transferred to VGG16, the final model demonstrated a diagnostic accuracy of 86.96%, precision of 0.89, and recall of 0.88. This surpassed all other base methods of object detection from free-hand non-standardised smartphone photographs. CONCLUSION A virtual computer vision AI system, blending a model ensemble, test-time augmentation, and transferred deep learning processes, was developed to predict dental cavitations from non-standardised photographs with reasonable clinical accuracy. This model can improve access to oral healthcare in rural areas with limited resources and has the potential to aid automated diagnostics and advanced tele-dentistry applications.
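One simple way to fuse an ensemble of the kind described above is to average the per-class confidences of the individual models before thresholding; the study's detection-level ensembling is more involved, so the sketch below is only a schematic (class names and scores are invented):

```python
def ensemble_average(per_model_scores):
    """Average class-confidence dictionaries from several models."""
    classes = per_model_scores[0].keys()
    n = len(per_model_scores)
    return {c: sum(scores[c] for scores in per_model_scores) / n
            for c in classes}

# Hypothetical confidences from three detector variants
fused = ensemble_average([
    {"cavity": 0.9, "sound": 0.1},
    {"cavity": 0.7, "sound": 0.3},
    {"cavity": 0.8, "sound": 0.2},
])
```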
Affiliation(s)
- Abu Tareq
- Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Mohammad Imtiaz Faisal
- Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Md. Shahidul Islam
- Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Nafisa Shamim Rafa
- Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Tashin Chowdhury
- Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Saif Ahmed
- Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- Taseef Hasan Farook
- Adelaide Dental School, The University of Adelaide, Adelaide, SA 5005, Australia
- Nabeel Mohammed
- Department of Electrical and Computer Engineering, North South University, Dhaka 1229, Bangladesh
- James Dudley
- Adelaide Dental School, The University of Adelaide, Adelaide, SA 5005, Australia
8
Evaluation of the Diagnostic and Prognostic Accuracy of Artificial Intelligence in Endodontic Dentistry: A Comprehensive Review of Literature. Comput Math Methods Med 2023;2023:7049360. PMID: 36761829. PMCID: PMC9904932. DOI: 10.1155/2023/7049360.
Abstract
Aim This comprehensive review is aimed at evaluating the diagnostic and prognostic accuracy of artificial intelligence in endodontic dentistry. Introduction Artificial intelligence (AI) is a relatively new technology that has widespread use in dentistry. AI technologies have primarily been used in dentistry to diagnose dental diseases, plan treatment, make clinical decisions, and predict prognosis. AI models like convolutional neural networks (CNN) and artificial neural networks (ANN) have been used in endodontics to study root canal system anatomy, determine working length measurements, detect periapical lesions and root fractures, predict the success of retreatment procedures, and predict the viability of dental pulp stem cells. Methodology The literature was searched in electronic databases such as Google Scholar, Medline, PubMed, Embase, Web of Science, and Scopus, published over the last four decades (January 1980 to September 15, 2021), using keywords such as artificial intelligence, machine learning, deep learning, application, endodontics, and dentistry. Results The preliminary search yielded 2560 articles relevant to the paper's purpose. A total of 88 articles met the eligibility criteria. The majority of research on AI application in endodontics has concentrated on tracing the apical foramen, verifying the working length, projection of periapical pathologies, root morphologies, retreatment predictions, and detecting vertical root fractures. Conclusion In endodontics, AI displayed accuracy in terms of diagnostic and prognostic evaluations. The use of AI can help enhance the treatment plan, which in turn can increase the success rate of endodontic treatment outcomes. AI is used extensively in endodontics and could help in clinical applications such as detecting root fractures and periapical pathologies, determining working length, tracing the apical foramen, assessing root morphology, and predicting disease.
9
Automatic Segmentation of Periapical Radiograph Using Color Histogram and Machine Learning for Osteoporosis Detection. Int J Dent 2023;2023:6662911. PMID: 36896411. PMCID: PMC9991474. DOI: 10.1155/2023/6662911.
Abstract
Osteoporosis leads to loss of cortical thickness, a decrease in bone mineral density (BMD), deterioration in the size of trabeculae, and an increased risk of fractures. Changes in trabecular bone due to osteoporosis can be observed on periapical radiographs, which are widely used in dental practice. This study proposes an automatic trabecular bone segmentation method for detecting osteoporosis using a color histogram and machine learning (ML), based on 120 regions of interest (ROI) on periapical radiographs, divided into 60 training and 42 testing datasets. The diagnosis of osteoporosis is based on BMD as evaluated by dual X-ray absorptiometry. The proposed method comprises five stages: obtaining ROI images, conversion to grayscale, color histogram segmentation, extraction of the pixel distribution, and performance evaluation of the ML classifier. For trabecular bone segmentation, we compare K-means and Fuzzy C-means. The pixel distributions obtained from K-means and Fuzzy C-means segmentation were used to detect osteoporosis with three ML methods: decision tree, naive Bayes, and multilayer perceptron. The testing dataset was used to obtain the results of this study. Based on the performance evaluation of the K-means and Fuzzy C-means segmentation methods combined with the three ML methods, the osteoporosis detection method with the best diagnostic performance was K-means segmentation combined with a multilayer perceptron classifier, with accuracy, specificity, and sensitivity of 90.48%, 90.90%, and 90.00%, respectively. The high accuracy of this study indicates that the proposed method provides a significant contribution to the detection of osteoporosis in medical and dental image analysis.
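In the histogram-based segmentation step, K-means on grey-level intensities reduces to iteratively re-estimating cluster centres in one dimension. A self-contained sketch on raw grey values (not the authors' implementation; the pixel values below are invented):

```python
def kmeans_1d(values, k=2, iters=25):
    """Cluster scalar intensities into k groups; returns sorted centres."""
    lo, hi = min(values), max(values)
    # spread initial centres evenly across the intensity range
    centres = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centres[c]))
            clusters[nearest].append(v)
        # recompute each centre as its cluster mean (keep old centre if empty)
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Toy grey values: a dark (trabecular) and a bright (dense bone) group
dark, bright = kmeans_1d([10, 12, 11, 13, 200, 210, 205, 207])
```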
10
Revilla-León M, Gómez-Polo M, Vyas S, Barmak AB, Özcan M, Att W, Krishnamurthy VR. Artificial intelligence applications in restorative dentistry: A systematic review. J Prosthet Dent 2022;128:867-875. PMID: 33840515. DOI: 10.1016/j.prosdent.2021.02.010.
Abstract
STATEMENT OF PROBLEM Artificial intelligence (AI) applications are increasing in restorative procedures. However, the current development and performance of AI in restorative dentistry applications has not yet been systematically documented and analyzed. PURPOSE The purpose of this systematic review was to identify and evaluate the ability of AI models in restorative dentistry to diagnose dental caries and vertical tooth fracture, detect tooth preparation margins, and predict restoration failure. MATERIAL AND METHODS An electronic systematic review was performed in 5 databases: MEDLINE/PubMed, EMBASE, Web of Science, Cochrane, and Scopus. A manual search was also conducted. Studies with AI models were selected based on 4 criteria: diagnosis of dental caries, diagnosis of vertical tooth fracture, detection of the tooth preparation finishing line, and prediction of restoration failure. Two investigators independently evaluated the quality of the studies by applying the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies (nonrandomized experimental studies). A third investigator was consulted to resolve lack of consensus. RESULTS A total of 34 articles were included in the review: 29 studies included AI techniques for the diagnosis of dental caries or the elaboration of caries and postsensitivity prediction models, 2 for the diagnosis of vertical tooth fracture, 1 for the tooth preparation finishing line location, and 2 for the prediction of restoration failure. Among the studies reviewed, the AI models tested obtained a caries diagnosis accuracy ranging from 76% to 88.3%, sensitivity ranging from 73% to 90%, and specificity ranging from 61.5% to 93%. The caries prediction accuracy among the studies ranged from 83.6% to 97.1%. The studies reported an accuracy for vertical tooth fracture diagnosis ranging from 88.3% to 95.7%. The article using AI models to locate the finishing line reported an accuracy ranging from 90.6% to 97.4%. CONCLUSIONS AI models have the potential to provide a powerful tool for assisting in the diagnosis of caries and vertical tooth fracture, detecting the tooth preparation margin, and predicting restoration failure. However, dental applications of AI models are still in development. Further studies are required to assess the clinical performance of AI models in restorative dentistry.
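The accuracy, sensitivity, and specificity ranges reported above all derive from the same four confusion-matrix counts. A minimal sketch of that relationship (the counts are invented for illustration):

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity
```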
Affiliation(s)
- Marta Revilla-León
- Assistant Professor and Assistant Program Director AEGD Residency, Department of Comprehensive Dentistry, College of Dentistry, Texas A&M University, Dallas, Texas; Affiliate Faculty Graduate Prosthodontics, Department of Restorative Dentistry, School of Dentistry, University of Washington, Seattle, Wash; Researcher at Revilla Research Center, Madrid, Spain
- Miguel Gómez-Polo
- Associate Professor, Department of Conservative Dentistry and Prosthodontics, School of Dentistry, Complutense University of Madrid, Madrid, Spain.
- Shantanu Vyas
- Graduate Research Assistant, J. Mike Walker '66 Department of Mechanical Engineering, Texas A&M University, Dallas, Texas
- Abdul Basir Barmak
- Assistant Professor Clinical Research and Biostatistics, Eastman Institute of Oral Health, University of Rochester Medical Center, Rochester, NY
- Mutlu Özcan
- Professor and Head, Division of Dental Biomaterials, Clinic for Reconstructive Dentistry, Center for Dental and Oral Medicine, University of Zürich, Zürich, Switzerland
- Wael Att
- Professor and Chair, Department of Prosthodontics, Tufts University School of Dental Medicine, Boston, Mass
- Vinayak R Krishnamurthy
- Assistant Professor, J. Mike Walker '66 Department of Mechanical Engineering, Texas A&M University, College Station, Texas
11
Fatima A, Shafi I, Afzal H, Díez IDLT, Lourdes DRSM, Breñosa J, Espinosa JCM, Ashraf I. Advancements in Dentistry with Artificial Intelligence: Current Clinical Applications and Future Perspectives. Healthcare (Basel) 2022;10:2188. PMID: 36360529. PMCID: PMC9690084. DOI: 10.3390/healthcare10112188.
Abstract
Artificial intelligence has been widely used in the field of dentistry in recent years. The present study highlights current advances and limitations in integrating artificial intelligence, machine learning, and deep learning in subfields of dentistry including periodontology, endodontics, orthodontics, restorative dentistry, and oral pathology. This article aims to provide a systematic review of current clinical applications of artificial intelligence within different fields of dentistry. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was used as a formal guideline for data collection. Data were obtained from research studies published between 2009 and 2022. The analysis included a total of 55 papers from the Google Scholar, IEEE, PubMed, and Scopus databases. Results show that artificial intelligence has the potential to improve dental care, disease diagnosis and prognosis, treatment planning, and risk assessment. Finally, this study highlights the limitations of the analyzed studies and provides future directions to improve dental care.
Affiliation(s)
- Anum Fatima
- National Centre for Robotics, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Imran Shafi
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Hammad Afzal
- Military College of Signals (MCS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan
- Isabel De La Torre Díez
- Department of Signal Theory and Communications and Telematic Engineering, University of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Del Rio-Solá M. Lourdes
- Department of Vascular Surgery, University Hospital of Valladolid, Paseo de Belén 15, 47011 Valladolid, Spain
- Jose Breñosa
- Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
- Universidade Internacional do Cuanza, Estrada Nacional 250, Bairro Kaluapanda Cuito-Bié, Angola
- Julio César Martínez Espinosa
- Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain
- Universidad Internacional Iberoamericana, Campeche 24560, Mexico
- Fundación Universitaria Internacional de Colombia, Calle 39A #19-18 Bogotá D.C, Colombia
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Korea
12
Rashid U, Javid A, Khan AR, Liu L, Ahmed A, Khalid O, Saleem K, Meraj S, Iqbal U, Nawaz R. A hybrid mask RCNN-based tool to localize dental cavities from real-time mixed photographic images. PeerJ Comput Sci 2022; 8:e888. [PMID: 35494840 PMCID: PMC9044255 DOI: 10.7717/peerj-cs.888]
Abstract
Nearly 3.5 billion people have oral health issues, including dental caries, whose diagnosis requires dentist-patient contact during oral examinations. Existing automated approaches identify and locate carious regions in dental images by processing either colored photographs or X-ray images taken with specialized dental photography cameras. The dentists' interpretation of the detected carious regions is difficult because the regions are masked with solid coloring and limited to a particular dental image type. Software-based automated tools that localize caries in dental images taken with ordinary cameras require further investigation. This research provided a mixed dataset of dental photographic (colored or X-ray) images, instantiated a deep learning approach to enhance the existing localization procedure for carious regions, and implemented a full-fledged tool that automatically presents carious regions on simple dental images. The approach exploits the mixed dataset of dental images (colored photographs or X-rays) collected from multiple sources and a pre-trained hybrid Mask RCNN to localize dental carious regions. Evaluations performed by dentists showed that the correctness of the annotated datasets is up to 96% and the accuracy of the proposed system is between 78% and 92%. Moreover, the system achieved an overall dentist satisfaction level above 80%.
Affiliation(s)
- Umer Rashid
- Department of Computer Science, Quaid-e-Azam University, Islamabad, Pakistan
- Aiman Javid
- Department of Computer Science, Quaid-e-Azam University, Islamabad, Pakistan
- Abdur Rehman Khan
- Department of Computer Science, Quaid-e-Azam University, Islamabad, Pakistan
- Leo Liu
- School of Business and Law, The Manchester Metropolitan University, Manchester, United Kingdom
- Adeel Ahmed
- Department of Computer Science, Quaid-e-Azam University, Islamabad, Pakistan
- Osman Khalid
- Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Khalid Saleem
- Department of Computer Science, Quaid-e-Azam University, Islamabad, Pakistan
- Shaista Meraj
- Department of Radiology, Bolton NHS Foundation Trust, Bolton, United Kingdom
- Uzair Iqbal
- Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad Chiniot-Faisalabad, Pakistan
- Raheel Nawaz
- School of Business and Law, The Manchester Metropolitan University, Manchester, United Kingdom
13
Carrillo-Perez F, Pecho OE, Morales JC, Paravina RD, Della Bona A, Ghinea R, Pulgar R, Pérez MDM, Herrera LJ. Applications of artificial intelligence in dentistry: A comprehensive review. J Esthet Restor Dent 2021; 34:259-280. [PMID: 34842324 DOI: 10.1111/jerd.12844]
Abstract
OBJECTIVE To perform a comprehensive review of the use of artificial intelligence (AI) and machine learning (ML) in dentistry, providing the community with a broad insight into the different advances that these technologies and tools have produced, paying special attention to the area of esthetic dentistry and color research. MATERIALS AND METHODS The comprehensive review was conducted in the MEDLINE/PubMed, Web of Science, and Scopus databases, for papers published in English in the last 20 years. RESULTS Out of 3871 eligible papers, 120 were included for final appraisal. Study methodologies included deep learning (DL; n = 76), fuzzy logic (FL; n = 12), and other ML techniques (n = 32), which were mainly applied to disease identification, image segmentation, image correction, and biomimetic color analysis and modeling. CONCLUSIONS The reviewed works reported outstanding results in the design of high-performance decision support systems for the aforementioned areas. The future of digital dentistry lies in the design of integrated approaches providing personalized treatments to patients. In addition, esthetic dentistry can benefit from those advances by developing models allowing a complete characterization of tooth color, enhancing the accuracy of dental restorations. CLINICAL SIGNIFICANCE The use of AI and ML has an increasing impact on the dental profession and is complementing the development of digital technologies and tools, with a wide application in treatment planning and esthetic dentistry procedures.
Affiliation(s)
- Francisco Carrillo-Perez
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
- Oscar E Pecho
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Juan Carlos Morales
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
- Rade D Paravina
- Department of Restorative Dentistry and Prosthodontics, School of Dentistry, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Alvaro Della Bona
- Post-Graduate Program in Dentistry, Dental School, University of Passo Fundo, Passo Fundo, Brazil
- Razvan Ghinea
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Rosa Pulgar
- Department of Stomatology, Campus Cartuja, University of Granada, Granada, Spain
- María Del Mar Pérez
- Department of Optics, Faculty of Science, University of Granada, Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, E.T.S.I.I.T.-C.I.T.I.C. University of Granada, Granada, Spain
14
Automated caries detection in vivo using a 3D intraoral scanner. Sci Rep 2021; 11:21276. [PMID: 34711853 PMCID: PMC8553860 DOI: 10.1038/s41598-021-00259-w]
Abstract
The use of 3D intraoral scanners (IOS) and software that can support automated detection and objective monitoring of oral diseases such as caries, tooth wear or periodontal diseases, is increasingly receiving attention from researchers and industry. This study clinically validates an automated caries scoring system for occlusal caries detection and classification, previously defined for an IOS system featuring fluorescence (TRIOS 4, 3Shape TRIOS A/S, Denmark). Four algorithms (ALG1, ALG2, ALG3, ALG4) are assessed for the IOS; the first three are based only on fluorescence information, while ALG4 also takes into account the tooth color information. The diagnostic performance of these automated algorithms is compared with the diagnostic performance of the clinical visual examination, while histological assessment is used as reference. Additionally, possible differences between in vitro and in vivo diagnostic performance of the IOS system are investigated. The algorithms show comparable in vivo diagnostic performance to the visual examination, with no significant difference in the area under the ROC curves (p > 0.05). Only minor differences between their in vitro and in vivo diagnostic performance are noted, again with no significant differences in the area under the ROC curves (p > 0.05). This novel IOS system exhibits encouraging performance for clinical application on occlusal caries detection and classification. Different approaches can be investigated for possible optimization of the system.
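The comparisons above rest on areas under ROC curves. As a minimal sketch (not the study's code, and with hypothetical fluorescence scores), the AUC of a binary caries score can be computed directly from rank statistics: it equals the probability that a randomly chosen carious surface scores higher than a sound one.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical fluorescence scores for carious vs. sound surfaces
carious = [0.90, 0.80, 0.75, 0.60]
sound = [0.40, 0.50, 0.35, 0.60]
print(roc_auc(carious, sound))  # 0.96875
```

Testing two such AUCs for a significant difference (the p > 0.05 statements above) is typically done with a DeLong or bootstrap test, which this sketch omits.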
15
Kumar A, Bhadauria HS, Singh A. Descriptive analysis of dental X-ray images using various practical methods: A review. PeerJ Comput Sci 2021; 7:e620. [PMID: 34616881 PMCID: PMC8459782 DOI: 10.7717/peerj-cs.620]
Abstract
In dentistry, practitioners interpret various dental X-ray imaging modalities to identify tooth-related problems, abnormalities, or changes in teeth structure. Another aspect of dental imaging is that it can be helpful in the field of biometrics. Human dental image analysis is a challenging and time-consuming process due to the unspecified and uneven structures of various teeth, which makes the manual investigation of dental abnormalities demanding and error-prone. Automation of dental image segmentation and examination is therefore essential to ensure error-free diagnosis and better treatment planning. In this article, we have provided a comprehensive survey of dental image segmentation and analysis by investigating more than 130 research works conducted on various dental imaging modalities, such as various modes of X-ray, CT (Computed Tomography), CBCT (Cone Beam Computed Tomography), etc. State-of-the-art research works have been classified into three major categories, i.e., image processing, machine learning, and deep learning approaches, and their respective advantages and limitations are identified and discussed. The survey presents extensive details of the state-of-the-art methods, including image modalities, pre-processing applied for image enhancement, performance measures, and datasets utilized.
16
Duong DL, Kabir MH, Kuo RF. Automated caries detection with smartphone color photography using machine learning. Health Informatics J 2021; 27:14604582211007530. [PMID: 33863251 DOI: 10.1177/14604582211007530]
Abstract
Untreated caries is a significant problem that affects billions of people worldwide. Appropriate and accurate methods of caries detection are therefore urgently required for clinical decision-making in dental practice as well as in oral epidemiology and caries research. The aim of this study was to introduce a computational algorithm that can automatically recognize carious lesions on tooth occlusal surfaces in smartphone images according to the International Caries Detection and Assessment System (ICDAS). From a group of extracted teeth, 620 unrestored molars/premolars were photographed using a smartphone. The obtained images were evaluated for caries diagnosis with the ICDAS II codes and labeled into three classes: "No Surface Change" (NSC), "Visually Non-Cavitated" (VNC), and "Cavitated" (C). A two-step detection scheme using a Support Vector Machine (SVM) was then proposed: "C versus (VNC + NSC)" classification, followed by "VNC versus NSC" classification. The accuracy, sensitivity, and specificity of the best model were 92.37%, 88.1%, and 96.6% for "C versus (VNC + NSC)," and 83.33%, 82.2%, and 66.7% for "VNC versus NSC." Although the proposed SVM system requires further improvement and verification, using only images taken with a smartphone it showed promising potential for clinical diagnostics with reasonable accuracy and minimal cost.
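The two-step scheme can be sketched as a cascade. This is an illustrative outline only: the feature names and cutoffs below are hypothetical stand-ins for the paper's trained SVM decision functions.

```python
# Two-step cascade: first separate "Cavitated" (C) from the rest,
# then split the remainder into "Visually Non-Cavitated" (VNC)
# and "No Surface Change" (NSC).

def is_cavitated(features):
    # Stand-in for the first SVM: "C versus (VNC + NSC)"
    return features["cavity_depth"] > 0.5      # hypothetical cutoff

def is_visually_noncavitated(features):
    # Stand-in for the second SVM: "VNC versus NSC"
    return features["discoloration"] > 0.3     # hypothetical cutoff

def classify_surface(features):
    if is_cavitated(features):
        return "C"
    return "VNC" if is_visually_noncavitated(features) else "NSC"

print(classify_surface({"cavity_depth": 0.7, "discoloration": 0.9}))  # C
print(classify_surface({"cavity_depth": 0.1, "discoloration": 0.6}))  # VNC
print(classify_surface({"cavity_depth": 0.0, "discoloration": 0.1}))  # NSC
```

Cascading lets each binary classifier learn a simpler boundary than a single three-class model would, at the cost that an error in the first step cannot be recovered in the second.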
Affiliation(s)
- Rong Fu Kuo
- Department of Biomedical Engineering, National Cheng Kung University
- Medical Device Innovation Center, National Cheng Kung University
17
Duong DL, Nguyen QDN, Tong MS, Vu MT, Lim JD, Kuo RF. Proof-of-Concept Study on an Automatic Computational System in Detecting and Classifying Occlusal Caries Lesions from Smartphone Color Images of Unrestored Extracted Teeth. Diagnostics (Basel) 2021; 11:1136. [PMID: 34206549 PMCID: PMC8307588 DOI: 10.3390/diagnostics11071136]
Abstract
Dental caries has been considered the heaviest worldwide oral health burden, affecting a significant proportion of the population. To prevent dental caries, an appropriate and accurate early detection method is demanded. This proof-of-concept study aims to develop a two-stage computational system that can detect early occlusal caries from smartphone color images of unrestored extracted teeth according to modified International Caries Detection and Assessment System (ICDAS) criteria (3 classes: Code 0; Code 1-2; Code 3-6). In the first stage, carious lesion areas were identified and extracted from sound tooth regions. Then, five characteristic features of these areas were deliberately selected and calculated to be input into the classification stage, where five classifiers (Support Vector Machine, Random Forests, K-Nearest Neighbors, Gradient Boosted Tree, Logistic Regression) were evaluated to determine the best among them. On a set of 587 smartphone images of extracted teeth, our system achieved accuracy, sensitivity, and specificity of 87.39%, 89.88%, and 68.86% in the detection stage when compared to modified visual and image-based ICDAS criteria. For the classification stage, the Support Vector Machine model performed best, with accuracy, sensitivity, and specificity of 88.76%, 92.31%, and 85.21%. As the first step in developing the technology, our present findings confirm the feasibility of using smartphone color images to employ Artificial Intelligence algorithms in caries detection. To improve the performance of the proposed system, further development of both in vitro and in vivo modeling is needed. In addition, an applicable system for accurately taking intra-oral images that can capture entire dental arches, including the occlusal surfaces of premolars and molars, also needs to be developed.
Affiliation(s)
- Duc Long Duong
- Department of Biomedical Engineering, National Cheng Kung University, Dasyue Rd, Tainan 701, Taiwan
- School of Odonto-Stomatology, Hanoi Medical University, Ton That Tung St, Hanoi City 10000, Vietnam
- Correspondence: ; Tel.: +886-968-685-225 or +84-935-759-669
- Quoc Duy Nam Nguyen
- Department of Biomedical Engineering, National Cheng Kung University, Dasyue Rd, Tainan 701, Taiwan
- Minh Son Tong
- School of Odonto-Stomatology, Hanoi Medical University, Ton That Tung St, Hanoi City 10000, Vietnam
- Manh Tuan Vu
- School of Odonto-Stomatology, Hanoi Medical University, Ton That Tung St, Hanoi City 10000, Vietnam
- Joseph Dy Lim
- Center of Dentistry, COAHS, University of Makati, J.P. Rizal Ext, Makati, Metro Manila 1215, Philippines
- Rong Fu Kuo
- Department of Biomedical Engineering, National Cheng Kung University, Dasyue Rd, Tainan 701, Taiwan
- Medical Device Innovation Center, National Cheng Kung University, Shengli Rd, Tainan 704, Taiwan
18
Machine Learning and Intelligent Diagnostics in Dental and Orofacial Pain Management: A Systematic Review. Pain Res Manag 2021; 2021:6659133. [PMID: 33986900 PMCID: PMC8093041 DOI: 10.1155/2021/6659133]
Abstract
Purpose The study explored the clinical influence, effectiveness, limitations, and human comparison outcomes of machine learning in diagnosing (1) dental diseases, (2) periodontal diseases, (3) trauma and neuralgias, (4) cysts and tumors, (5) glandular disorders, and (6) bone and temporomandibular joint disorders as possible causes of dental and orofacial pain. Method Scopus, PubMed, and Web of Science (all databases) were searched by 2 reviewers until 29th October 2020. Articles were screened and narratively synthesized according to PRISMA-DTA guidelines based on predefined eligibility criteria. Articles that made direct reference test comparisons to human clinicians were evaluated using the MI-CLAIM checklist. The risk of bias was assessed by JBI-DTA critical appraisal, and certainty of the evidence was evaluated using the GRADE approach. Information was extracted regarding the quantification method of dental pain and disease, the characteristics of both training and test data cohorts in the machine learning, diagnostic outcomes, and diagnostic test comparisons with clinicians, where applicable. Results 34 eligible articles were found for data synthesis, of which 8 made direct reference comparisons to human clinicians. 7 papers scored over 13 (out of the evaluated 15 points) in the MI-CLAIM approach, with all papers scoring 5+ (out of 7) in JBI-DTA appraisals. The GRADE approach revealed serious risks of bias and inconsistencies, with most studies containing more positive cases than their true prevalence in order to facilitate machine learning. Patient-perceived symptoms and clinical history were generally found to be less reliable than radiographs or histology for training accurate machine learning models. A low agreement level between clinicians training the models was suggested to have a negative impact on prediction accuracy. Reference comparisons found nonspecialized clinicians with less than 3 years of experience to be disadvantaged against trained models. Conclusion Machine learning in dental and orofacial healthcare has shown respectable results in diagnosing diseases with symptomatic pain and, with improved future iterations, can be used as a diagnostic aid in the clinics. The current review did not internally analyze the machine learning models and their respective algorithms, nor consider the confounding variables and factors responsible for shaping the orofacial disorders responsible for eliciting pain.
19
Prados-Privado M, García Villalón J, Martínez-Martínez CH, Ivorra C, Prados-Frutos JC. Dental Caries Diagnosis and Detection Using Neural Networks: A Systematic Review. J Clin Med 2020; 9:E3579. [PMID: 33172056 PMCID: PMC7694692 DOI: 10.3390/jcm9113579]
Abstract
Dental caries is the most prevalent dental disease worldwide, and neural networks and artificial intelligence are increasingly being used in the field of dentistry. This systematic review aims to identify the state of the art of neural networks in caries detection and diagnosis. A search was conducted in PubMed, Institute of Electrical and Electronics Engineers (IEEE) Xplore, and ScienceDirect. Data extraction was performed independently by two reviewers. The quality of the selected studies was assessed using the Cochrane Handbook tool. Thirteen studies were included. Most of the included studies employed periapical, near-infrared light transillumination, and bitewing radiography. The image databases ranged from 87 to 3000 images, with a mean of 669 images. In seven of the included studies, the dental caries in each image were labeled by experienced dentists. Not all of the studies detailed how caries was defined, nor the type of carious lesion detected. Each study included in this review used a different neural network and different outcome metrics. All this variability complicates any conclusion about the reliability of neural networks to detect and diagnose caries. A comparison between neural network and dentist results is also necessary.
Affiliation(s)
- María Prados-Privado
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain
- Department of Signal Theory and Communications, Higher Polytechnic School, Universidad de Alcala de Henares, Ctra. Madrid-Barcelona, Km. 33,600, 28805 Alcala de Henares, Spain
- IDIBO GROUP (Group of High-Performance Research, Development and Innovation in Dental Biomaterials of Rey Juan Carlos University), Avenida de Atenas s/n, 28922 Alcorcon, Spain
- Javier García Villalón
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain
- Carlos Ivorra
- Asisa Dental, Research Department, C/José Abascal, 32, 28003 Madrid, Spain
- Juan Carlos Prados-Frutos
- IDIBO GROUP (Group of High-Performance Research, Development and Innovation in Dental Biomaterials of Rey Juan Carlos University), Avenida de Atenas s/n, 28922 Alcorcon, Spain
- Department of Medical Specialties and Public Health, Faculty of Health Sciences, Universidad Rey Juan Carlos, Avenida de Atenas, 28922 Alcorcon, Spain
20
Setzer FC, Shi KJ, Zhang Z, Yan H, Yoon H, Mupparapu M, Li J. Artificial Intelligence for the Computer-aided Detection of Periapical Lesions in Cone-beam Computed Tomographic Images. J Endod 2020; 46:987-993. [PMID: 32402466 DOI: 10.1016/j.joen.2020.03.025]
Abstract
INTRODUCTION The aim of this study was to use a Deep Learning (DL) algorithm for the automated segmentation of cone-beam computed tomographic (CBCT) images and the detection of periapical lesions. METHODS Limited field of view CBCT volumes (n = 20) containing 61 roots with and without lesions were segmented both manually by clinicians and using the DL approach, based on a U-Net architecture. Segmentation labeled each voxel as 1 of 5 categories: "lesion" (periapical lesion), "tooth structure," "bone," "restorative materials," and "background." Repeated splits of all images into a training set and a validation set based on 5-fold cross validation were performed using Deep Learning segmentation (DLS), and the results were averaged. DLS was assessed against clinician-dependent segmentation by dichotomized lesion detection accuracy, evaluating sensitivity, specificity, positive predictive value, and negative predictive value, and by voxel-matching accuracy using the DICE index for each of the 5 labels. RESULTS DLS lesion detection accuracy was 0.93, with specificity of 0.88, positive predictive value of 0.87, and negative predictive value of 0.93. The overall cumulative DICE indexes for the individual labels were lesion = 0.52, tooth structure = 0.74, bone = 0.78, restorative materials = 0.58, and background = 0.95. The cumulative DICE index for all actual true lesions was 0.67. CONCLUSIONS This DL algorithm, trained in a limited CBCT environment, showed excellent lesion detection accuracy. Overall voxel-matching accuracy may benefit from enhanced versions of the artificial intelligence.
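The per-label DICE index reported above is a voxel-overlap measure. A minimal sketch over flattened label arrays (the voxel data here are hypothetical, not from the study):

```python
def dice_index(pred, truth, label):
    """DICE index for one label: 2*|A ∩ B| / (|A| + |B|), where A and B
    are the voxel sets assigned that label by prediction and ground truth."""
    a = {i for i, v in enumerate(pred) if v == label}
    b = {i for i, v in enumerate(truth) if v == label}
    if not a and not b:
        return 1.0  # label absent from both: vacuous perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

# Five voxels with hypothetical labels
pred = ["lesion", "bone", "bone", "tooth structure", "background"]
truth = ["lesion", "lesion", "bone", "tooth structure", "background"]
print(round(dice_index(pred, truth, "lesion"), 2))  # 0.67
```

Because DICE weights the intersection against both set sizes, a small class such as "lesion" can score low (0.52 above) even when large classes such as "background" score near 1.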
Affiliation(s)
- Frank C Setzer
- Department of Endodontics, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Katherine J Shi
- Private Practice, University of Pennsylvania, Philadelphia, Pennsylvania
- Zhiyang Zhang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
- Hao Yan
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
- Hyunsoo Yoon
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
- Mel Mupparapu
- Department of Oral Medicine, School of Dental Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Jing Li
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, Arizona
21
Moutselos K, Berdouses E, Oulis C, Maglogiannis I. Recognizing Occlusal Caries in Dental Intraoral Images Using Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2019:1617-1620. [PMID: 31946206 DOI: 10.1109/embc.2019.8856553]
Abstract
Based on an image dataset of 88 in-vivo dental images taken with an intra-oral camera, we show that a Deep Learning model (Mask R-CNN) can detect and classify dental caries on occlusal surfaces across the whole 7-class ICDAS (International Caries Detection and Assessment System) scale. This is accomplished without any image pre-processing and by utilizing superpixel segmentation for the experts' annotations and the evaluation of the classifier. In the proposed methodology, transfer learning and data augmentation are employed during the training of the model. The paper discusses technical details, provides initial results, and notes points for further improvement by fine-tuning the classifier along with an extended dataset.
22
Histological validation of the automated caries detection system (ACDS) in classifying occlusal caries with the ICDAS II system in vitro. Eur Arch Paediatr Dent 2018; 20:249-255. [DOI: 10.1007/s40368-018-0389-x]
23
Is it feasible to use smartphone images to perform telediagnosis of different stages of occlusal caries lesions? PLoS One 2018; 13:e0202116. [PMID: 30188900 PMCID: PMC6126822 DOI: 10.1371/journal.pone.0202116]
Abstract
The purpose of this study was to compare the performance of two different models of smartphone and a conventional camera with that of direct clinical examination in detecting caries lesions at different stages of progression in deciduous molars. The photographic equipment consisted of two smartphones (iPhone and Nexus 4) and a conventional macro camera setup. First, in the laboratory phase of the study, we compared the images of 20 exfoliated primary teeth having caries lesions at different stages. Then, in the clinical phase of the study, the images of 119 primary molars from fifteen children (3 to 6 years old) were used. All of the photographic images were taken using the previously described devices. In both groups, two examiners, blinded to the photographic equipment used, assessed the images independently on a computer screen, and classified them according to the International Caries Detection and Assessment System (ICDAS). The teeth were then examined directly by two other experienced examiners, and the consensus reached was considered the reference standard. Parameters of validity, such as percentage of correct answers, agreement with the reference standard, sensitivity, specificity and inter-examiner agreement (using the weighted kappa test) were calculated. The examiners performed similarly in both in vitro and in vivo studies. Inter-examiner reliability was approximately 0.7 for all the devices in the laboratory setting, and for the macro camera photography system in the clinical setting, but it was approximately 0.9 for the iPhone and Nexus images taken in vivo. With regard to the percentage of correct answers, the highest values were observed for sound and extensive caries lesions in both laboratory and clinical settings. The percentage of correct answers for initial and moderate lesions was particularly low in the clinical evaluation, irrespective of the camera devices used. Therefore, we concluded that photographic diagnosis using smartphone images is feasible and accurate for distinguishing sound tooth surfaces from extensive caries lesions; however, photographic images are not a good method for accurately detecting initial and moderate caries lesions.
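The weighted kappa used above for inter-examiner agreement needs no external libraries. The sketch below uses quadratic weights, a common choice for ordinal scales such as ICDAS codes; the study does not state its weighting scheme, so treat both the weights and the scores as assumptions.

```python
def weighted_kappa(rater_a, rater_b, n_cat):
    """Cohen's kappa with quadratic disagreement weights:
    kappa = 1 - (observed weighted disagreement) / (expected weighted disagreement),
    where the expected value assumes independent raters with the same marginals."""
    n = len(rater_a)
    pa = [sum(1 for x in rater_a if x == i) / n for i in range(n_cat)]
    pb = [sum(1 for x in rater_b if x == i) / n for i in range(n_cat)]
    observed = sum(((a - b) ** 2) / (n_cat - 1) ** 2
                   for a, b in zip(rater_a, rater_b)) / n
    expected = sum(((i - j) ** 2) / (n_cat - 1) ** 2 * pa[i] * pb[j]
                   for i in range(n_cat) for j in range(n_cat))
    return 1.0 - observed / expected

# Two examiners scoring six surfaces on a 3-level scale (hypothetical)
print(weighted_kappa([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 2], 3))  # ≈ 0.857
```

Quadratic weights penalize a sound-vs-extensive disagreement far more than a one-level disagreement, which matches the clinical reading of the ICDAS scale.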
24
Al-Naji A, Chahl J. Noncontact Heart Activity Measurement System Based on Video Imaging Analysis. Int J Pattern Recogn 2017. [DOI: 10.1142/s0218001417570014]
Abstract
Vital parameter monitoring systems based on video camera imagery are a growing field of interest in clinical and biomedical applications. Heart rate (HR) is one of the most important vital parameters of interest in a clinical diagnostic and monitoring system. This study proposed a noncontact HR and beat length measurement system based on both motion magnification and motion detection at four different regions of interest (ROIs): wrist, arm, neck, and leg. Motion magnification based on a Chebyshev filter was utilized to magnify heart pulses in the different ROIs that are difficult to see with the naked eye. A new measuring system based on motion detection was used to measure HR and beat length by detecting rapid motion areas in the video frame sequences that represent the heart pulses and converting video frames into a corresponding logical matrix. Video quality metrics were also used to compare our magnification system with standard Eulerian video magnification, to determine which has better magnification results for the heart pulse. The 99.3% limits of agreement between the proposed system and the reference measurement fall within [Formula: see text] beats/min based on the Bland-Altman test. The proposed system is expected to produce new options for further noncontact information extraction.
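The limits of agreement quoted above come from a Bland-Altman analysis. A minimal sketch with hypothetical HR readings follows; it uses the conventional 1.96·SD multiplier for ~95% limits, whereas the study quotes 99.3% limits (a wider multiplier), so the coverage constant is an assumption here.

```python
import statistics

def bland_altman(method, reference, k=1.96):
    """Bland-Altman agreement: mean bias of the pairwise differences and
    limits of agreement bias ± k*SD (k = 1.96 gives ~95% limits)."""
    diffs = [m - r for m, r in zip(method, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - k * sd, bias + k * sd

# Hypothetical HR readings (beats/min): video system vs. contact reference
video = [72, 80, 65, 90, 75]
ecg = [70, 82, 66, 88, 75]
bias, lower, upper = bland_altman(video, ecg)
print(round(bias, 2), round(lower, 2), round(upper, 2))
```

Unlike a correlation coefficient, the limits of agreement answer the clinically relevant question of how far a single video-based reading may deviate from the reference device.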
Affiliation(s)
- Ali Al-Naji
- School of Engineering, University of South Australia, Mawson Lakes Campus, SA 5095, Australia
- Javaan Chahl
- School of Engineering, University of South Australia, Mawson Lakes Campus, SA 5095, Australia
|
25
|
Jiang R, You R, Pei XQ, Zou X, Zhang MX, Wang TM, Sun R, Luo DH, Huang PY, Chen QY, Hua YJ, Tang LQ, Guo L, Mo HY, Qian CN, Mai HQ, Hong MH, Cai HM, Chen MY. Development of a ten-signature classifier using a support vector machine integrated approach to subdivide the M1 stage into M1a and M1b stages of nasopharyngeal carcinoma with synchronous metastases to better predict patients' survival. Oncotarget 2016; 7:3645-57. [PMID: 26636646 PMCID: PMC4823134 DOI: 10.18632/oncotarget.6436] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2015] [Accepted: 11/16/2015] [Indexed: 12/12/2022] Open
Abstract
The aim of this study was to develop a prognostic classifier and to subdivide the M1 stage for nasopharyngeal carcinoma patients with synchronous metastases (mNPC). A retrospective cohort of 347 mNPC patients was recruited between January 2000 and December 2010. Thirty hematological markers and 11 clinical characteristics were collected, and the association of these factors with overall survival (OS) was evaluated. A support vector machine (SVM) was used to select a subset of highly informative factors and to construct a prognostic model (mNPC-SVM). The mNPC-SVM classifier identified ten informative variables: three clinical indexes and seven hematological markers. The median survival time for low-risk patients (M1a), as identified by the mNPC-SVM classifier, was 38.0 months, whereas it dropped to 13.8 months for high-risk patients (M1b) (P < 0.001). After multivariate adjustment for prognostic factors, the mNPC-SVM classifier remained a powerful predictor of OS (M1a vs. M1b: hazard ratio, 3.45; 95% CI, 2.59 to 4.60; P < 0.001). Moreover, combined systemic chemotherapy and loco-regional radiotherapy was associated with significantly better survival than chemotherapy alone in the M1a subgroup (5-year OS, 47.0% vs. 10.0%, P < 0.001) but not in the M1b subgroup (12.0% vs. 3.0%, P = 0.101). These findings were validated in a separate cohort. In conclusion, the newly developed mNPC-SVM classifier yields more precise risk definitions, offering a promising subdivision of the M1 stage and individualized selection of future therapeutic regimens for mNPC patients.
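As a sketch of the general approach only (not the authors' actual mNPC-SVM pipeline, feature set, or data), an SVM two-class risk stratifier over synthetic stand-ins for ten selected markers can be set up with scikit-learn as follows:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins for ten selected markers: low-risk patients cluster
# around one mean, high-risk patients around another (hypothetical data).
n, n_features = 200, 10
low  = rng.normal(0.0, 1.0, size=(n // 2, n_features))
high = rng.normal(2.0, 1.0, size=(n // 2, n_features))
X = np.vstack([low, high])
y = np.array([0] * (n // 2) + [1] * (n // 2))   # 0 = "M1a", 1 = "M1b"

# Standardize features, then fit an RBF-kernel SVM risk classifier
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```

In practice the study combined such a classifier with feature selection over 41 candidate variables and validated the resulting risk groups against survival outcomes in a separate cohort.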
Affiliation(s)
- Rou Jiang
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Department of Cancer Prevention, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Rui You
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Xiao-Qing Pei
- Department of Ultrasonography, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Xiong Zou
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Meng-Xia Zhang
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Tong-Min Wang
- Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Rui Sun
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Dong-Hua Luo
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Pei-Yu Huang
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Qiu-Yan Chen
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Yi-Jun Hua
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Lin-Quan Tang
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Ling Guo
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Hao-Yuan Mo
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Chao-Nan Qian
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Hai-Qiang Mai
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Ming-Huang Hong
- Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Department of Clinical Trials Center, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
- Hong-Min Cai
- School of Computer and Engineering, South China University of Technology, Guangzhou, P. R. China
- Ming-Yuan Chen
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China; Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, P. R. China
|
26
|
Donovan TE, Marzola R, Murphy KR, Cagna DR, Eichmiller F, McKee JR, Metz JE, Albouy JP. Annual review of selected scientific literature: Report of the committee on scientific investigation of the American Academy of Restorative Dentistry. J Prosthet Dent 2016; 116:663-740. [PMID: 28236412 DOI: 10.1016/j.prosdent.2016.09.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2016] [Revised: 09/08/2016] [Accepted: 09/08/2016] [Indexed: 02/07/2023]
Abstract
STATEMENT OF PROBLEM It is clear that the contemporary dentist is confronted with a blizzard of information regarding materials and techniques from journal articles, advertisements, newsletters, the internet, and continuing education events. While some of that information is sound and helpful, much of it is misleading at best. PURPOSE This review identifies and discusses the most important scientific findings regarding the outcomes of dental treatment to assist the practitioner in making evidence-based choices, and to help the busy dentist keep abreast of the latest scientific information on the clinical practice of dentistry. MATERIAL AND METHODS Each of the authors, who are considered experts in their disciplines, was asked to peruse the scientific literature published in 2015 in their discipline and review the articles for important information that may have an impact on treatment decisions. Comments on experimental methodology, statistical evaluation, and the overall validity of the conclusions are included in many of the reviews. RESULTS The reviews are not meant to stand alone but are intended to inform the interested reader about what has been discovered in the past year; readers are invited to go to the source if they wish more detail. CONCLUSIONS The analysis of the scientific literature published in 2015 is divided into 7 sections: dental materials, periodontics, prosthodontics, occlusion and temporomandibular disorders, sleep-disordered breathing, cariology, and implant dentistry.
Affiliation(s)
- Terence E Donovan
- Professor, Biomaterials, University of North Carolina School of Dentistry, Chapel Hill, N.C.
- David R Cagna
- Professor, Advanced Prosthodontics University of Tennessee Health Sciences Center, Memphis, Tenn
|