1
Revilla-León M, Gómez-Polo M, Barmak AB, Inam W, Kan JYK, Kois JC, Akal O. Artificial intelligence models for diagnosing gingivitis and periodontal disease: A systematic review. J Prosthet Dent 2023;130:816-824. PMID: 35300850. DOI: 10.1016/j.prosdent.2022.01.026.
Abstract
STATEMENT OF PROBLEM: Artificial intelligence (AI) models have been developed for periodontal applications, including diagnosing gingivitis and periodontal disease, but their accuracy and the maturity of the technology remain unclear. PURPOSE: The purpose of this systematic review was to evaluate the performance of AI models for detecting dental plaque and diagnosing gingivitis and periodontal disease. MATERIAL AND METHODS: A review was performed in 4 databases: MEDLINE/PubMed, Web of Science, Cochrane, and Scopus. A manual search was also conducted. Studies were classified into 4 groups: detection of dental plaque, diagnosis of gingivitis, diagnosis of periodontal disease from intraoral images, and diagnosis of alveolar bone loss from periapical, bitewing, and panoramic radiographs. Two investigators evaluated the studies independently by applying the Joanna Briggs Institute critical appraisal tool. A third examiner was consulted to resolve any lack of consensus. RESULTS: Twenty-four articles were included: 2 studies developed AI models for detecting plaque, with accuracy ranging from 73.6% to 99%; 7 studies assessed the ability to diagnose gingivitis from intraoral photographs, reporting an accuracy between 74% and 78.20%; 1 study used fluorescent intraoral images to diagnose gingivitis, reporting 67.7% to 73.72% accuracy; 3 studies assessed the ability to diagnose periodontal disease from intraoral photographs, with an accuracy between 47% and 81%; and 11 studies evaluated the performance of AI models for detecting alveolar bone loss from radiographic images, reporting an accuracy between 73.4% and 99%. CONCLUSIONS: AI models for periodontology applications are still in development but might provide a powerful diagnostic tool.
Affiliation(s)
- Marta Revilla-León
- Affiliate Assistant Professor Graduate Prosthodontics, Department of Restorative Dentistry, School of Dentistry, University of Washington, Seattle, Wash; Director of Research and Digital Dentistry, Kois Center, Seattle, Wash; Adjunct Professor Graduate Prosthodontics, Department of Prosthodontics, School of Dental Medicine, Tufts University, Boston, Mass
- Miguel Gómez-Polo
- Associate Professor, Department of Conservative Dentistry and Prosthodontics, School of Dentistry, Complutense University of Madrid, Madrid, Spain.
- Abdul B Barmak
- Assistant Professor Clinical Research and Biostatistics, Eastman Institute of Oral Health, University of Rochester Medical Center, Rochester, NY
- Joseph Y K Kan
- Professor, Advanced Education in Implant Dentistry, Loma Linda University School of Dentistry, Loma Linda, Calif
- John C Kois
- Founder and Director Kois Center, Seattle, Wash; Affiliate Professor, Graduate Prosthodontics, Department of Restorative Dentistry, University of Washington, Seattle, Wash; Private practice, Seattle, Wash
- Orhan Akal
- Machine Learning Scientist, Boston, Mass
2
Del Rey YC, Rikvold PD, Johnsen KK, Schlafer S. A fast and reliable method for semi-automated planimetric quantification of dental plaque in clinical trials. J Clin Periodontol 2023;50:331-338. PMID: 36345833. DOI: 10.1111/jcpe.13745.
Abstract
AIM: To develop a simple and reproducible method for semi-automated planimetric quantification of dental plaque. MATERIALS AND METHODS: Plaque from 20 healthy volunteers was disclosed using erythrosine, and fluorescence images of the first incisors, first premolars, and first molars were recorded after 1, 7, and 14 days of de novo plaque formation. The planimetric plaque index (PPI) was determined using a semi-automated, threshold-based image segmentation algorithm and compared with the manually determined PPI and the Turesky modification of the Quigley-Hein plaque index (TM-QHPI). The decrease in tooth autofluorescence in plaque-covered areas was quantified as an index of plaque thickness (TI). Data were analysed by analysis of variance (ANOVA) and Pearson correlations. RESULTS: The high contrast between teeth, disclosed plaque, and soft tissues in fluorescence images allowed for fast threshold-based image segmentation. The semi-automated PPI was strongly correlated with manual planimetry (r = 0.92; p < .001) and with TM-QHPI recordings (r = 0.88; p < .001), and may exhibit higher discriminatory power than the TM-QHPI owing to its continuous scale. TI values corresponded to optically perceived plaque thickness, and no differences were observed over time (p > .05, ANOVA). CONCLUSIONS: The proposed semi-automated planimetric analysis based on fluorescence images is a simple and efficient method for quantifying dental plaque in multiple images with reduced human input.
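The core of the semi-automated pipeline (threshold-based segmentation, then planimetry) can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' software; the function names and the simple global threshold are assumptions, and the real method operates on disclosed-plaque fluorescence channels:

```python
import numpy as np

def segment_by_threshold(channel, lo, hi):
    """Global threshold segmentation of one image channel into a binary mask."""
    return (channel >= lo) & (channel <= hi)

def planimetric_plaque_index(tooth_mask, plaque_mask):
    """Planimetric plaque index: fraction of the tooth area covered by
    disclosed plaque (plaque pixels are only counted inside the tooth)."""
    tooth_px = np.count_nonzero(tooth_mask)
    if tooth_px == 0:
        raise ValueError("empty tooth mask")
    return np.count_nonzero(plaque_mask & tooth_mask) / tooth_px
```

Given masks from any segmentation, the PPI is simply the plaque-covered fraction of the tooth area, which is what gives it a continuous scale compared with the ordinal TM-QHPI.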
Affiliation(s)
- Yumi Chokyu Del Rey
- Department of Dentistry and Oral Health, Section for Oral Ecology and Caries Control, Aarhus University, Aarhus, Denmark
- Pernille Dukanovic Rikvold
- Department of Dentistry and Oral Health, Section for Oral Ecology and Caries Control, Aarhus University, Aarhus, Denmark
- Karina Kambourakis Johnsen
- Department of Dentistry and Oral Health, Section for Oral Ecology and Caries Control, Aarhus University, Aarhus, Denmark
- Sebastian Schlafer
- Department of Dentistry and Oral Health, Section for Oral Ecology and Caries Control, Aarhus University, Aarhus, Denmark
3
Arsiwala-Scheppach LT, Chaurasia A, Müller A, Krois J, Schwendicke F. Machine Learning in Dentistry: A Scoping Review. J Clin Med 2023;12:937. PMID: 36769585. PMCID: PMC9918184. DOI: 10.3390/jcm12030937.
Abstract
Machine learning (ML) is being increasingly employed in dental research and application. We aimed to systematically compile studies using ML in dentistry and assess their methodological quality, including the risk of bias and reporting standards. We evaluated studies employing ML in dentistry published from 1 January 2015 to 31 May 2021 on MEDLINE, IEEE Xplore, and arXiv. We assessed publication trends and the distribution of ML tasks (classification, object detection, semantic segmentation, instance segmentation, and generation) across clinical fields. We appraised the risk of bias and adherence to reporting standards using the QUADAS-2 and TRIPOD checklists, respectively. Out of 183 identified studies, 168 were included, focusing on various ML tasks and employing a broad range of ML models, input data, data sources, strategies to generate reference tests, and performance metrics. Classification tasks were most common. Forty-two different metrics were used to evaluate model performance, with accuracy, sensitivity, precision, and intersection-over-union being the most common. We observed a considerable risk of bias and moderate adherence to reporting standards, which hampers replication of results. A minimum (core) set of outcomes and outcome metrics is necessary to facilitate comparisons across studies.
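The metrics the review found most common are plain confusion-matrix ratios. As a generic reminder (not code from any reviewed study; the dictionary-returning helper is an assumption), they can be computed as:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), precision, and intersection-over-union
    computed from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0  # Jaccard index
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "precision": precision, "iou": iou}
```

Note that accuracy depends on the true negatives while IoU does not, which is one reason segmentation studies prefer overlap measures: background pixels vastly outnumber object pixels.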
Affiliation(s)
- Lubaina T. Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Akhilanand Chaurasia
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Department of Oral Medicine and Radiology, King George’s Medical University, Lucknow 226003, India
- Anne Müller
- Pharmacovigilance Institute (Pharmakovigilanz- und Beratungszentrum, PVZ) for Embryotoxicology, Institute of Clinical Pharmacology and Toxicology, Charité—Universitätsmedizin Berlin, 13353 Berlin, Germany
- Joachim Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
- Falk Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 14197 Berlin, Germany
- ITU/WHO Focus Group AI on Health, Topic Group Dental Diagnostics and Digital Dentistry, CH-1211 Geneva 20, Switzerland
4
Scherl DS, Coffman L, Mansoor A, Rajwa B, Patsekin V, Robinson JP. A Semi-Automated Method for Measuring Biofilm Accumulation on the Teeth Using Quantitative Light-Induced Fluorescence in Dogs and Cats. J Vet Dent 2022;39:122-132. PMID: 35257605. DOI: 10.1177/08987564221081991.
Abstract
Oral health conditions (e.g., plaque, calculus, gingivitis) cause morbidity and pain in companion animals. Thus, developing technologies that can ameliorate the accumulation of oral biofilm, a critical factor in the progression of these conditions, is vital. Quantitative light-induced fluorescence (QLF) is a method for quantifying oral substrate accumulation, and it can therefore assess the biofilm attenuation achieved by different products. New software has recently been developed that automates aspects of the procedure. However, few QLF studies in companion animals have been performed. QLF was used to collect digital images of oral substrate accumulation on the teeth of dogs and cats and to demonstrate its ability to discriminate between foods known to differentially inhibit oral substrate accumulation. Images were taken as a function of time and diet. Software developed by the Cytometry Laboratory at Purdue University quantified biofilm coverage. Intra- and intergrader reproducibility was also assessed, as was the agreement between the QLF software and an experienced grader using undisclosed coverage-only metrics similar to those of the Logan and Boyce index. Quantification of oral substrate accumulation from QLF-derived images distinguished between dental diets known to differentially inhibit oral biofilm accumulation. Little variance in intra- and intergrader reproducibility was observed, and the comparison between the experienced Logan and Boyce grader and the QLF software yielded a concordance correlation coefficient of 0.89 (95% CI = 0.84, 0.92). These results show that QLF is a useful tool for the semi-automated quantification of oral biofilm accumulation in companion animals.
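The agreement statistic reported above is a concordance correlation coefficient, which penalizes both poor correlation and systematic shifts between two graders. A generic sketch of Lin's CCC (not the study's code; `lin_ccc` is a hypothetical helper name):

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters' scores:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)
```

Unlike Pearson's r, the CCC drops below 1 when one grader systematically scores higher than the other, even if the two score sets are perfectly correlated.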
Affiliation(s)
- Awais Mansoor
- Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD, USA
5
A Sensitive Thresholding Method for Confocal Laser Scanning Microscope Image Stacks of Microbial Biofilms. Sci Rep 2018;8:13013. PMID: 30158655. PMCID: PMC6115396. DOI: 10.1038/s41598-018-31012-5.
Abstract
Biofilms are surface-attached microbial communities whose architecture can be captured with confocal microscopy. Manual or automatic thresholding of the acquired images is often needed to distinguish biofilm biomass from background noise. However, manual thresholding is subjective, and current automatic thresholding methods can lead to loss of meaningful data. Here, we describe an automatic thresholding method designed for confocal fluorescent signal, termed the biovolume elasticity method (BEM). We evaluated BEM using confocal image stacks of oral biofilms grown in pooled human saliva. Image stacks were thresholded manually and automatically with three different methods: Otsu, iterative selection (IS), and BEM. Effects on biovolume, surface area, and the number of objects detected indicated that BEM was the least aggressive at removing signal and provided the greatest visual and quantitative acuity of single cells. Thus, thresholding with BEM offers a sensitive, automatic, and tunable method to maintain biofilm architectural properties for subsequent analysis.
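BEM itself is described in the paper rather than as public pseudocode, but Otsu's method, one of the two comparison baselines, is easy to sketch together with a simple biovolume readout. This is a hypothetical illustration; the 256-bin histogram and unit voxel volume are assumptions:

```python
import numpy as np

def otsu_threshold(stack, bins=256):
    """Otsu's method on the intensity histogram of a confocal image stack:
    pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(stack, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # cumulative class probability
    m = np.cumsum(p * centers)   # cumulative class mean
    mg = m[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mg * w0 - m) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # guard empty classes
    return centers[np.argmax(sigma_b)]

def biovolume(stack, threshold, voxel_volume=1.0):
    """Biovolume: number of voxels above threshold times the voxel volume."""
    return np.count_nonzero(stack > threshold) * voxel_volume
```

The paper's point is precisely that a global criterion like this can be too aggressive on dim single cells, which motivates a tunable method such as BEM.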
6
Tatano R, Berkels B, Deserno TM. Mesh-to-raster region-of-interest-based nonrigid registration of multimodal images. J Med Imaging (Bellingham) 2017;4:044002. PMID: 29098167. DOI: 10.1117/1.jmi.4.4.044002.
Abstract
Region of interest (RoI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh, while the patient data are provided by computed axial tomography scanners as pixel or voxel data. Previously, we presented a 2-D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster framework to register RoIs in multimodal images; (ii) a 3-D surface-to-voxel application; and (iii) a comprehensive quantitative evaluation in 2-D using ground truth (GT) provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem, where the objective consists of a data term, which involves the signed distance function of the RoI from the reference image, and a higher-order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluorescence (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The RoI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual-information-based registration algorithm implemented with the Insight Segmentation and Registration Toolkit and Elastix.
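The structure of the objective, a signed-distance data term plus a smoothness regularizer on the deformation, can be illustrated on a toy 2-D problem. This sketch is an assumption-laden simplification (brute-force distance transform, nearest-pixel sampling, and a first-difference regularizer instead of the paper's higher-order elastic one):

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance to the boundary of a small binary RoI mask
    (negative inside the RoI, positive outside); fine for toy-sized images."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pad = np.pad(mask, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1]
                & pad[1:-1, :-2] & pad[1:-1, 2:]) & mask
    boundary = mask & ~interior
    by, bx = np.nonzero(boundary)
    d = np.sqrt((ys[..., None] - by) ** 2
                + (xs[..., None] - bx) ** 2).min(axis=-1)
    return np.where(mask, -d, d)

def registration_energy(sdf, points, disp, alpha=1.0):
    """Objective of the paper's general form: squared signed distance of the
    displaced curve points (data term) plus a smoothness penalty on the
    displacements of neighbouring points (regularizer)."""
    moved = points + disp
    iy = np.clip(np.rint(moved[:, 0]).astype(int), 0, sdf.shape[0] - 1)
    ix = np.clip(np.rint(moved[:, 1]).astype(int), 0, sdf.shape[1] - 1)
    data = float(np.sum(sdf[iy, ix] ** 2))
    reg = alpha * float(np.sum((disp[1:] - disp[:-1]) ** 2))
    return data + reg
```

Minimizing this energy over the displacement field pulls the model curve onto the zero level set of the RoI's signed distance function while keeping the deformation smooth.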
Affiliation(s)
- Rosalia Tatano
- RWTH Aachen University, Aachen Institute for Advanced Study in Computational Engineering Science (AICES), Aachen, Germany
- Benjamin Berkels
- RWTH Aachen University, Aachen Institute for Advanced Study in Computational Engineering Science (AICES), Aachen, Germany
- Thomas M Deserno
- University of Braunschweig, Peter L. Reichertz Institute for Medical Informatics, Institute of Technology and Hannover Medical School, Braunschweig, Germany
7
Wu CH, Tsai WH, Chen YH, Liu JK, Sun YN. Model-Based Orthodontic Assessments for Dental Panoramic Radiographs. IEEE J Biomed Health Inform 2017;22:545-551. PMID: 28141539. DOI: 10.1109/jbhi.2017.2660527.
Abstract
For better treatment outcomes, dentists usually use a set of parameters for orthodontic evaluation. In this study, a new method is proposed to assist dentists in obtaining reliable assessments of these parameters. The proposed method is based on dental panoramic radiographs and can be divided into four stages: image preprocessing, model training, tooth segmentation, and assessment of orthodontic parameters. The image is first normalized and enhanced. The model training stage then consists of shape and image model training, energy function training, and weight training. Next, the tooth contours are automatically segmented in an energy-minimized manner. Finally, the automatic assessment of orthodontic parameters is carried out. The experimental results show that the average absolute distance, the Dice similarity coefficient, and the average qualitative score ranged from 4.17 to 6.03, from 0.87 to 0.90, and from 2.58 to 3.12, respectively. The automatic orthodontic assessment is also close to the evaluation of orthodontists. The proposed method thus provides accurate and consistent measurements, helping dentists obtain an objective treatment evaluation.
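The Dice similarity coefficient used to score the tooth segmentations is a standard overlap measure between two binary masks; a generic sketch (not the authors' code):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A Dice value of 0.87 to 0.90, as reported above, means the automatic tooth contours overlap the reference contours almost everywhere except near the boundaries.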
8
Kashif M, Jonas SM, Deserno TM. Deterioration of R-Wave Detection in Pathology and Noise: A Comprehensive Analysis Using Simultaneous Truth and Performance Level Estimation. IEEE Trans Biomed Eng 2016;64:2163-2175. PMID: 27913321. DOI: 10.1109/tbme.2016.2633277.
Abstract
OBJECTIVE: For long-term electrocardiography (ECG) recordings, accurate R-wave detection is essential. Several algorithms have been proposed but not yet compared on large, noisy, or pathological data, since manual ground-truth establishment is impossible on such large data. METHODS: We apply the simultaneous truth and performance level estimation (STAPLE) method to ECG signals, comparing nine R-wave detectors: Pan and Tompkins (1985), Chernenko (2007), Arzeno et al. (2008), Manikandan et al. (2012), Lentini et al. (2013), Sartor et al. (2014), Liu et al. (2014), Arteaga-Falconi et al. (2015), and Khamis et al. (2016). Experiments are performed on the MIT-BIH database, the TELE database, the PTB database, and 24/7 Holter recordings of 60 multimorbid subjects. RESULTS: Existing approaches to R-wave detection perform excellently on healthy subjects (F-measure above 99% for most methods), but performance drops to a range of F = 90.10% (Khamis et al.) to F = 30.10% (Chernenko) when analyzing the 37 million R-waves of the multimorbid subjects. STAPLE improves on existing approaches (ΔF = 0.04 for the MIT-BIH database and ΔF = 0.95 for the TELE database) and yields a relative (not absolute) scale for comparing the algorithms' performance. CONCLUSION: More robust R-wave detection methods, or flexible combinations of them, are required to analyze continuous data captured from pathological subjects or recorded with dropouts and noise. SIGNIFICANCE: The STAPLE algorithm has been adapted from image to signal analysis to compare algorithms on large, incomplete, and noisy data without manual ground truth. Existing approaches to R-wave detection perform weakly on such data.
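An R-wave detector's F-measure is typically computed by matching detections to reference annotations within a tolerance window. A minimal greedy-matching sketch (the one-to-one greedy rule and the tolerance handling are assumptions; the paper's exact matching procedure may differ):

```python
def r_wave_f_measure(reference, detected, tol):
    """F-measure for R-wave detection: a detection counts as a true positive
    when it lies within `tol` samples of a not-yet-matched reference
    annotation (greedy left-to-right one-to-one matching)."""
    reference = sorted(reference)
    detected = sorted(detected)
    tp = 0
    i = 0
    for d in detected:
        # skip reference beats too far to the left to ever match
        while i < len(reference) and reference[i] < d - tol:
            i += 1
        if i < len(reference) and abs(reference[i] - d) <= tol:
            tp += 1
            i += 1  # each reference beat may be matched only once
    fp = len(detected) - tp
    fn = len(reference) - tp
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

The tolerance matters: the wide QRS complexes of pathological beats shift detected fiducial points, which is one mechanism behind the performance drop reported above.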
9
Mansoor A, Bagci U, Foster B, Xu Z, Douglas D, Solomon JM, Udupa JK, Mollura DJ. CIDI-lung-seg: a single-click annotation tool for automatic delineation of lungs from CT scans. Annu Int Conf IEEE Eng Med Biol Soc 2015;2014:1087-90. PMID: 25570151. DOI: 10.1109/embc.2014.6943783.
Abstract
Accurate and fast extraction of lung volumes from computed tomography (CT) scans remains in great demand in the clinical environment because the available methods fail to provide a generic solution, owing to the wide anatomical variation of lungs and the existence of pathologies. Manual annotation, the current gold standard, is time consuming and often subject to human bias. On the other hand, current state-of-the-art fully automated lung segmentation methods have failed to make their way into clinical practice due to their inability to efficiently incorporate human input for handling misclassifications and praxis. This paper presents a lung annotation tool for CT images that is interactive, efficient, and robust. The proposed tool produces an "as accurate as possible" initial annotation based on fuzzy-connectedness image segmentation, followed by efficient manual correction of the initial extraction if deemed necessary by the practitioner. To provide maximum flexibility to users, the tool is supported on three major operating systems (Windows, Linux, and Mac OS X). The quantitative results comparing our free software with commercially available lung segmentation tools show a higher degree of consistency and precision, with considerable potential to enhance the performance of routine clinical tasks.
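Fuzzy connectedness assigns each pixel the strength of its best path to a seed, where a path is only as strong as its weakest affinity link. The following is a heavily simplified single-seed sketch (2-D, 4-neighbour connectivity, a Gaussian intensity-difference affinity, and Dijkstra-style propagation), not the tool's actual implementation:

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=0.1):
    """Map of max-min path strength ("fuzzy connectedness") from a seed pixel.
    The affinity between adjacent pixels decays with their intensity
    difference, and a path is as strong as its weakest link."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg, (y, x) = heapq.heappop(heap)
        strength = -neg
        if strength < conn[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                aff = np.exp(-((img[y, x] - img[ny, nx]) ** 2)
                             / (2 * sigma ** 2))
                cand = min(strength, aff)
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn
```

Thresholding the resulting connectedness map yields the initial segmentation, which the practitioner can then correct interactively, matching the single-click workflow described above.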