1
Hendrickx J, Gracea RS, Vanheers M, Winderickx N, Preda F, Shujaat S, Jacobs R. Can artificial intelligence-driven cephalometric analysis replace manual tracing? A systematic review and meta-analysis. Eur J Orthod 2024; 46:cjae029. [PMID: 38895901 PMCID: PMC11185929 DOI: 10.1093/ejo/cjae029]
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to investigate the accuracy and efficiency of artificial intelligence (AI)-driven automated landmark detection for cephalometric analysis on two-dimensional (2D) lateral cephalograms and three-dimensional (3D) cone-beam computed tomographic (CBCT) images. SEARCH METHODS An electronic search was conducted in the following databases: PubMed, Web of Science, Embase, and grey literature, with the search timeline extending up to January 2024. SELECTION CRITERIA Studies that employed AI for 2D or 3D cephalometric landmark detection were included. DATA COLLECTION AND ANALYSIS The selection of studies, data extraction, and quality assessment of the included studies were performed independently by two reviewers. The risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. A meta-analysis was conducted to evaluate the accuracy of 2D landmark identification based on both mean radial error and standard error. RESULTS Following the removal of duplicates, title and abstract screening, and full-text reading, 34 publications were selected. Amongst these, 27 studies evaluated the accuracy of AI-driven automated landmarking on 2D lateral cephalograms, while 7 studies involved 3D-CBCT images. A meta-analysis based on the success detection rate of landmark placement on 2D images revealed that the error was below the clinically acceptable threshold of 2 mm (1.39 mm; 95% confidence interval: 0.85-1.92 mm). For 3D images, a meta-analysis could not be conducted due to significant heterogeneity amongst the study designs. However, qualitative synthesis indicated that the mean error of landmark detection on 3D images ranged from 1.0 to 5.8 mm. Both automated 2D and 3D landmarking proved to be time-efficient, taking less than 1 min. Most studies exhibited a high risk of bias in data selection (n = 27) and reference standard (n = 29).
CONCLUSION The performance of AI-driven cephalometric landmark detection on both 2D cephalograms and 3D-CBCT images showed potential in terms of accuracy and time efficiency. However, the generalizability and robustness of these AI systems could benefit from further improvement. REGISTRATION PROSPERO: CRD42022328800.
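The two accuracy measures pooled in this review, mean radial error (MRE) and the success detection rate (SDR) against the 2 mm clinical threshold, are straightforward to compute from paired landmark coordinates. A minimal sketch in Python; the coordinates below are illustrative, not data from the review:

```python
import math

def mean_radial_error(pred, truth):
    """Mean Euclidean distance (mm) between predicted and ground-truth landmarks."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / len(dists)

def success_detection_rate(pred, truth, threshold_mm=2.0):
    """Fraction of landmarks placed within the clinical threshold (2 mm by default)."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(d <= threshold_mm for d in dists) / len(dists)

# Illustrative 2D coordinates (mm) for three landmarks
pred = [(10.0, 12.0), (30.5, 40.0), (55.0, 61.0)]
truth = [(10.5, 12.5), (30.0, 40.0), (50.0, 60.0)]
print(round(mean_radial_error(pred, truth), 2))
print(round(success_detection_rate(pred, truth), 2))
```

The same definitions extend to 3D landmarks unchanged, since `math.dist` accepts coordinates of any dimension.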
Affiliation(s)
- Julie Hendrickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Rellyca Sola Gracea
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta 55281, Indonesia
- Michiel Vanheers
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Nicolas Winderickx
- Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Flavia Preda
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Sohaib Shujaat
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh 14611, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, 3000 Leuven, Belgium
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Department of Dental Medicine, Karolinska Institutet, 141 04 Stockholm, Sweden
2
Takeshita WM, Silva TP, de Souza LLT, Tenorio JM. State of the art and prospects for artificial intelligence in orthognathic surgery: A systematic review with meta-analysis. J Stomatol Oral Maxillofac Surg 2024; 125:101787. [PMID: 38302057 DOI: 10.1016/j.jormas.2024.101787]
Abstract
OBJECTIVE To present a systematic review of the state of the art regarding clinical applications, main features, and outcomes of artificial intelligence (AI) in orthognathic surgery. METHODS A systematic review (SR) following the PICOS strategy was performed to answer the following question: "What are the state of the art, characteristics, and outcomes of applications of artificial intelligence in orthognathic surgery?" After registration in PROSPERO (CRD42021270789), a systematic search was performed in the following databases: PubMed (including MEDLINE), Scopus, Embase, LILACS, MEDLINE EBSCOHOST, and Cochrane Library. After screening titles and abstracts, 195 studies were selected, of which thirteen manuscripts were included in the qualitative analysis and six in the quantitative analysis. The treatment effects were plotted in a forest plot. The JBI questionnaire for observational studies was used to assess the risk of bias, and the quality of the SR evidence was assessed using the GRADE tool. RESULTS For AI studies on 2D cephalometry for orthognathic surgery, Tau² = 0.00, Chi² = 3.78, p = 1.00, and I² = 0%, indicating low heterogeneity; AI did not differ statistically from the control (p = 0.79). AI studies on the diagnostic decision of whether or not to perform orthognathic surgery showed heterogeneity, and therefore meta-analysis was not performed. CONCLUSION The outcome of AI is similar to that of the control group, with a low degree of bias, highlighting its potential for use in various applications.
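The I² statistic reported above follows directly from Cochran's Q (the Chi² value) and its degrees of freedom. A small sketch using the standard Higgins-Thompson definition; the df of 5 below (i.e. six studies) is illustrative:

```python
def i_squared(q, df):
    """Higgins-Thompson I^2 (%): share of total variation across studies
    attributable to heterogeneity rather than chance; floored at 0."""
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# With Q = 3.78 and, e.g., df = 5, Q < df, so I^2 floors at 0%,
# consistent with the low heterogeneity reported above.
print(i_squared(3.78, 5))
print(i_squared(20.0, 5))  # a heterogeneous example
```

Values of roughly 25%, 50%, and 75% are conventionally read as low, moderate, and high heterogeneity.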
Affiliation(s)
- Wilton Mitsunari Takeshita
- Department of Diagnosis and Surgery, São Paulo State University (Unesp), School of Dentistry, Araçatuba, 16015-050 Araçatuba, São Paulo, Brazil
- Thaísa Pinheiro Silva
- Department of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas (UNICAMP), 13414-903 Piracicaba, Sao Paulo, Brazil.
- Josceli Maria Tenorio
- Department of Information technology and health, Federal Institute of São Paulo, 01109-010 São Paulo, São Paulo, Brazil
3
Takahashi K, Shimamura Y, Tachiki C, Nishii Y, Hagiwara M. Cephalometric landmark detection without X-rays combining coordinate regression and heatmap regression. Sci Rep 2023; 13:20011. [PMID: 37974018 PMCID: PMC10654665 DOI: 10.1038/s41598-023-46919-x]
Abstract
Fully automated techniques using convolutional neural networks for cephalometric landmark detection have recently advanced. However, all existing studies have adopted X-rays. The problem of direct exposure of patients to X-ray radiation remains unsolved. We propose a model for detecting cephalometric landmarks using only facial profile images without X-rays. First, the model estimates the landmark coordinates using the features of facial profile images through high-resolution representation learning. Second, considering the spatial relationship of the landmarks, the model refines the estimated coordinates. The estimated coordinates are input into fully connected networks to improve the accuracy. During the experiment, a total of 2000 facial profile images collected from 2000 female patients were used. Experimental results suggested that the proposed method may perform at a level equal to or potentially better than existing methods using cephalograms. We obtained an MRE of 0.61 mm for the test data and a mean detection rate of 98.20% within 2 mm. Our proposed two-stage learning method enables a highly accurate estimation of the landmark positions using only facial profile images. The results indicate that X-rays may not be required when detecting cephalometric landmarks.
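In heatmap regression, the network outputs one response map per landmark and the coordinate is read off the map's peak. A toy decoder assuming a plain argmax readout; real models typically add sub-pixel refinement or, as in this paper, a separate coordinate-regression stage:

```python
def heatmap_to_coord(heatmap):
    """Return the (row, col) of the strongest response in a 2D heatmap."""
    return max(
        ((r, c) for r, row in enumerate(heatmap) for c, _ in enumerate(row)),
        key=lambda rc: heatmap[rc[0]][rc[1]],
    )

# Toy 4x4 heatmap whose peak marks the predicted landmark position
hm = [
    [0.0, 0.1, 0.0, 0.0],
    [0.1, 0.3, 0.2, 0.0],
    [0.2, 0.9, 0.4, 0.1],
    [0.0, 0.2, 0.1, 0.0],
]
print(heatmap_to_coord(hm))
```

One decoded (row, col) pair per landmark map then feeds any downstream refinement network.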
Affiliation(s)
- Kaisei Takahashi
- Department of Information and Computer Science, Faculty of Science and Technology, Keio University, Kanagawa, 223-8522, Japan.
- Yui Shimamura
- Department of Orthodontics, Tokyo Dental College, Tokyo, 101-0061, Japan
- Chie Tachiki
- Department of Orthodontics, Tokyo Dental College, Tokyo, 101-0061, Japan
- Yasushi Nishii
- Department of Orthodontics, Tokyo Dental College, Tokyo, 101-0061, Japan
- Masafumi Hagiwara
- Department of Information and Computer Science, Faculty of Science and Technology, Keio University, Kanagawa, 223-8522, Japan
4
Hwang IK, Kang SR, Yang S, Kim JM, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ, Kim TI. SinusC-Net for automatic classification of surgical plans for maxillary sinus augmentation using a 3D distance-guided network. Sci Rep 2023; 13:11653. [PMID: 37468515 DOI: 10.1038/s41598-023-38273-9]
Abstract
The objective of this study was to automatically classify surgical plans for maxillary sinus floor augmentation in implant placement at the maxillary posterior edentulous region using a 3D distance-guided network on CBCT images. We applied a modified ABC classification method consisting of five surgical approaches for the deep learning model. The proposed deep learning model (SinusC-Net) consisted of two stages of detection and classification according to the modified classification method. In detection, five landmarks on CBCT images were automatically detected using a volumetric regression network; in classification, the CBCT images were automatically classified into the five surgical approaches using a 3D distance-guided network. The mean radial error (MRE) for landmark detection was 0.87 mm, and the success detection rate (SDR) within 2 mm was 95.47%. The mean accuracy, sensitivity, specificity, and AUC for classification by the SinusC-Net were 0.97, 0.92, 0.98, and 0.95, respectively. The deep learning model using 3D distance-guidance demonstrated accurate detection of 3D anatomical landmarks, and automatic and accurate classification of surgical approaches for sinus floor augmentation in implant placement at the maxillary posterior edentulous region.
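The accuracy, sensitivity, and specificity reported for SinusC-Net all derive from confusion-matrix counts. A minimal sketch for per-class (one-vs-rest) counts; the counts below are illustrative, not the study's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Illustrative one-vs-rest counts for a single surgical-approach class
acc, sens, spec = classification_metrics(tp=46, fp=2, tn=98, fn=4)
print(round(acc, 2), round(sens, 2), round(spec, 2))
```

For a five-class problem like this one, the per-class metrics are typically macro-averaged across classes.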
Affiliation(s)
- In-Kyung Hwang
- Department of Periodontology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, 03080, Republic of Korea
- Se-Ryong Kang
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, Republic of Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, Republic of Korea
- Jun-Min Kim
- Department of Electronics and Information Engineering, Hansung University, Seoul, 02876, Republic of Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Won-Jin Yi
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, 08826, Republic of Korea.
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea.
- Tae-Il Kim
- Department of Periodontology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, 03080, Republic of Korea.
5
Londono J, Ghasemi S, Hussain Shah A, Fahimipour A, Ghadimi N, Hashemi S, Khurshid Z, Dashti M. Evaluation of deep learning and convolutional neural network algorithms accuracy for detecting and predicting anatomical landmarks on 2D lateral cephalometric images: A systematic review and meta-analysis. Saudi Dent J 2023; 35:487-497. [PMID: 37520606 PMCID: PMC10373073 DOI: 10.1016/j.sdentj.2023.05.014]
Abstract
Introduction Cephalometry is the study of skull measurements for clinical evaluation, diagnosis, and surgical planning. Machine learning (ML) algorithms have been used to accurately identify cephalometric landmarks and detect irregularities related to orthodontics and dentistry. ML-based cephalometric imaging reduces errors, improves accuracy, and saves time. Method In this study, we conducted a meta-analysis and systematic review to evaluate the accuracy of ML software for detecting and predicting anatomical landmarks on two-dimensional (2D) lateral cephalometric images. The meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines for selecting and screening research articles. The eligibility criteria were established based on the diagnostic accuracy and prediction of ML combined with 2D lateral cephalometric imagery. The search was conducted among English articles in five databases, and data were managed using Review Manager software (v. 5.0). Quality assessment was performed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. Result Summary measurements included the mean departure from the 1-4-mm threshold or the percentage of landmarks identified within this threshold, with a 95% confidence interval (CI). This meta-analysis included 21 of 577 articles initially collected on the accuracy of ML algorithms for detecting and predicting anatomical landmarks. The studies were conducted in various regions of the world, and 20 of them employed convolutional neural networks (CNNs) for detecting cephalometric landmarks. The pooled successful detection rates for the 1-mm, 2-mm, 2.5-mm, 3-mm, and 4-mm ranges were 65%, 81%, 86%, 91%, and 96%, respectively. Heterogeneity was determined using the random-effects model. Conclusion ML has shown promise for landmark detection in 2D cephalometric imagery, although the accuracy has varied among studies and clinicians. Consequently, more research is required to determine its effectiveness and reliability in clinical settings.
Affiliation(s)
- Jimmy Londono
- FACP, Professor and Director of the Prosthodontics Residency Program and the Ronald Goldstein Center for Esthetics and Implant Dentistry, Dental College of Georgia at Augusta University, Augusta, GA, United States
- Shohreh Ghasemi
- Department of Oral and Maxillofacial Surgery, The Dental College of Georgia at Augusta University, Augusta, GA, United States
- Altaf Hussain Shah
- Special Care Dentistry Clinics, University Dental Hospital, King Saud University Medical City, Riyadh, Saudi Arabia
- Amir Fahimipour
- School of Dentistry, Faculty of Medicine and Health, Westmead Centre for Oral Health, The University of Sydney, NSW 2145, Australia
- Niloofar Ghadimi
- Department of Oral and Maxillofacial Radiology, Dental School, Islamic Azad University of Medical Sciences, Tehran, Iran
- Sara Hashemi
- School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Zohaib Khurshid
- Department of Prosthodontics and Dental Implantology, King Faisal University, Al-Ahsa 31982, Saudi Arabia
- Center of Excellence for Regenerative Dentistry, Department of Anatomy, Faculty of Dentistry, Chulalongkorn University, Bangkok 10330, Thailand
- Mahmood Dashti
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
6
Nishimoto S, Saito T, Ishise H, Fujiwara T, Kawai K, Kakibuchi M. Three-Dimensional Craniofacial Landmark Detection in Series of CT Slices Using Multi-Phased Regression Networks. Diagnostics (Basel) 2023; 13:1930. [PMID: 37296782 DOI: 10.3390/diagnostics13111930]
Abstract
Geometrical assessments of human skulls have been conducted based on anatomical landmarks. If developed, the automatic detection of these landmarks will yield both medical and anthropological benefits. In this study, an automated system with multi-phased deep learning networks was developed to predict the three-dimensional coordinate values of craniofacial landmarks. Computed tomography images of the craniofacial area were obtained from a publicly available database. They were digitally reconstructed into three-dimensional objects. Sixteen anatomical landmarks were plotted on each of the objects, and their coordinate values were recorded. Three-phased regression deep learning networks were trained using ninety training datasets. For the evaluation, 30 testing datasets were employed. The 3D error for the first phase, evaluated on the 30 test datasets, was 11.60 px on average (1 px = 500/512 mm). For the second phase, it was significantly improved to 4.66 px, and for the third phase, it was further significantly reduced to 2.88 px. This was comparable to the gaps between the landmarks as plotted by two experienced practitioners. Our proposed method of multi-phased prediction, which conducts coarse detection first and narrows down the detection area, may be a possible solution to prediction problems, taking into account the physical limitations of memory and computation.
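The multi-phased idea, predicting coarsely over the whole volume and then searching only a window around that estimate, can be illustrated in 2D. A sketch in which a simple argmax stands in for each phase's regression network; the response map, stride, and window size are all illustrative:

```python
def argmax_over(image, candidates):
    """Pick the (row, col) among `candidates` with the strongest response."""
    return max(candidates, key=lambda rc: image[rc[0]][rc[1]])

def coarse_to_fine(image, stride=2, half=1):
    """Phase 1: scan a strided grid over the whole image (cheap, coarse pass).
    Phase 2: exhaustively search a small window around the coarse estimate."""
    coarse = argmax_over(image, [
        (r, c)
        for r in range(0, len(image), stride)
        for c in range(0, len(image[0]), stride)
    ])
    r0, c0 = coarse
    fine = argmax_over(image, [
        (r, c)
        for r in range(max(0, r0 - half), min(len(image), r0 + half + 1))
        for c in range(max(0, c0 - half), min(len(image[0]), c0 + half + 1))
    ])
    return coarse, fine

# Toy 5x5 response map: the strided pass lands near the peak,
# and the windowed pass recovers the true maximum it skipped.
img = [
    [0.0, 0.0, 0.1, 0.0, 0.0],
    [0.0, 0.2, 0.3, 0.1, 0.0],
    [0.1, 0.4, 0.8, 0.3, 0.0],
    [0.0, 0.2, 0.5, 0.9, 0.1],
    [0.0, 0.0, 0.1, 0.2, 0.0],
]
coarse, fine = coarse_to_fine(img)
print(coarse, fine)
```

The memory saving the authors describe comes from the same principle: only the cropped neighborhood, not the full volume, needs to be processed at high resolution in the later phases.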
Affiliation(s)
- Soh Nishimoto
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Takuya Saito
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Hisako Ishise
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Toshihiro Fujiwara
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Kenichiro Kawai
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
- Masao Kakibuchi
- Department of Plastic Surgery, Hyogo Medical University, Nishinomiya 663-8501, Japan
7
de Queiroz Tavares Borges Mesquita G, Vieira WA, Vidigal MTC, Travençolo BAN, Beaini TL, Spin-Neto R, Paranhos LR, de Brito Júnior RB. Artificial Intelligence for Detecting Cephalometric Landmarks: A Systematic Review and Meta-analysis. J Digit Imaging 2023; 36:1158-1179. [PMID: 36604364 PMCID: PMC10287619 DOI: 10.1007/s10278-022-00766-w]
Abstract
Using computer vision through artificial intelligence (AI) is one of the main technological advances in dentistry. However, the existing literature on the practical application of AI for detecting cephalometric landmarks of orthodontic interest in digital images is heterogeneous, and there is no consensus regarding accuracy and precision. Thus, this review evaluated the use of artificial intelligence for detecting cephalometric landmarks in digital imaging examinations and compared it to manual annotation of landmarks. An electronic search was performed in nine databases to find studies that analyzed the detection of cephalometric landmarks in digital imaging examinations with AI and manual landmarking. Two reviewers selected the studies, extracted the data, and assessed the risk of bias using QUADAS-2. Random-effects meta-analyses determined the agreement and precision of AI compared to manual detection at a 95% confidence interval. The electronic search located 7410 studies, of which 40 were included. Only three studies presented a low risk of bias for all domains evaluated. The meta-analysis showed AI agreement rates of 79% (95% CI: 76-82%, I² = 99%) and 90% (95% CI: 87-92%, I² = 99%) for the thresholds of 2 and 3 mm, respectively, with a mean divergence of 2.05 (95% CI: 1.41-2.69, I² = 10%) compared to manual landmarking. The menton cephalometric landmark showed the lowest divergence between both methods (SMD, 1.17; 95% CI, 0.82-1.53; I² = 0%). Based on very low certainty of evidence, the application of AI was promising for automatically detecting cephalometric landmarks, but further studies should focus on testing its strength and validity in different samples.
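The random-effects pooling used in meta-analyses like this one is commonly the DerSimonian-Laird estimator, which inflates each study's variance by an estimated between-study variance tau² before inverse-variance weighting. A sketch assuming per-study effect estimates with known variances; the numbers below are illustrative, not the review's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with a 95% CI (DerSimonian-Laird tau^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)     # floored at 0
    w_star = [1.0 / (v + tau2) for v in variances]    # re-weight with tau^2
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative per-study mean divergences (mm) and variances
effects = [1.8, 2.1, 2.4]
variances = [0.10, 0.20, 0.15]
pooled, (lo, hi) = dersimonian_laird(effects, variances)
print(round(pooled, 2))
```

When Q falls below its degrees of freedom, tau² floors at zero and the estimate collapses to the fixed-effect result, which is what happens with the toy numbers above.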
Affiliation(s)
- Walbert A Vieira
- Department of Restorative Dentistry, Endodontics Division, School of Dentistry of Piracicaba, State University of Campinas, Piracicaba, São Paulo, Brazil
- Thiago Leite Beaini
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil
- Rubens Spin-Neto
- Department of Dentistry and Oral Health, Section for Oral Radiology, Aarhus University, Aarhus C, Denmark
- Luiz Renato Paranhos
- Department of Preventive and Community Dentistry, School of Dentistry, Federal University of Uberlândia, Campus Umuarama Av. Pará, 1720, Bloco 2G, sala 1, 38405-320, Uberlândia, Minas Gerais, Brazil.
8
Artificial intelligence models for clinical usage in dentistry with a focus on dentomaxillofacial CBCT: a systematic review. Oral Radiol 2023; 39:18-40. [PMID: 36269515 DOI: 10.1007/s11282-022-00660-9]
Abstract
This study aimed at performing a systematic review of the literature on the application of artificial intelligence (AI) in dental and maxillofacial cone beam computed tomography (CBCT) and providing comprehensive descriptions of current technical innovations to assist future researchers and dental professionals. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA) Statement was followed, and the study's protocol was prospectively registered. The following databases were searched using MeSH and Emtree terms: PubMed/MEDLINE, Embase, and Web of Science. The search strategy retrieved 1473 articles, of which 59 publications assessing the use of AI on CBCT images in dentistry were included. According to the PROBAST guidelines for study design, seven papers reported only external validation and 11 reported both model building and validation on an external dataset; 40 studies focused exclusively on model development. Most of the AI models employed deep learning (42 studies), while the other 17 papers used conventional approaches, such as statistical-shape and active shape models, and traditional machine learning methods, such as thresholding-based methods, support vector machines, k-nearest neighbors, decision trees, and random forests. Supervised or semi-supervised learning was utilized in the majority of studies (96.62%), and unsupervised learning in two (3.38%). Fifty-two of the included publications had a high risk of bias (ROB), two had a low ROB, and four had an unclear rating. Applications based on AI have the potential to improve oral healthcare quality; promote personalized, predictive, preventative, and participatory dentistry; and expedite dental procedures.
9
Ahn J, Nguyen TP, Kim YJ, Kim T, Yoon J. Automated analysis of three-dimensional CBCT images taken in natural head position that combines facial profile processing and multiple deep-learning models. Comput Methods Programs Biomed 2022; 226:107123. [PMID: 36156440 DOI: 10.1016/j.cmpb.2022.107123]
Abstract
BACKGROUND AND OBJECTIVES Analyzing three-dimensional cone beam computed tomography (CBCT) images has become an indispensable procedure for the diagnosis and treatment planning of orthodontic patients. Artificial intelligence, especially deep-learning techniques for analyzing image data, shows great potential for medical and dental image analysis and diagnosis. To explore the feasibility of automating the measurement of 13 geometric parameters from three-dimensional CBCT images taken in natural head position (NHP), this study proposed a smart system that combines a facial profile analysis algorithm with deep-learning models. MATERIALS AND METHODS Using multiple views extracted from the CBCT data of 170 cases as a dataset, our proposed method automatically calculated 13 dental parameters by partitioning, detecting regions of interest, and extracting the facial profile. Subsequently, Mask-RCNN, a trained decentralized convolutional neural network, was applied to detect 23 landmarks. All the techniques were integrated into a software application with a graphical user interface designed for user convenience. To demonstrate the system's ability to replace human experts, 30 CBCT datasets were selected for validation. Two orthodontists and one advanced general dentist located the required landmarks using a commercial dental program. The differences between the manual and developed methods were calculated and reported as errors. RESULTS After measuring the 13 parameters twice at a two-week interval, the intraclass correlation coefficients (ICCs) and 95% confidence intervals (95% CI) for intra-observer reliability were 0.98 (0.97-0.99) for observer 1, 0.95 (0.93-0.97) for observer 2, and 0.98 (0.97-0.99) for observer 3. The combined ICC for intra-observer reliability was 0.97. The ICC and 95% CI for inter-observer reliability were 0.94 (0.91-0.97). The mean absolute deviation was around 1 mm for the length parameters and smaller than 2° for the angle parameters. Furthermore, an ANOVA test demonstrated the statistical consistency between the measurements of the proposed method and those of human experts (Fdis = 2.68, α = 0.05). CONCLUSIONS The proposed system demonstrated high consistency with the manual measurements of human experts, confirming its applicability. This method aims to help human experts save time and effort when analyzing three-dimensional CBCT images of orthodontic patients.
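The intraclass correlation coefficients above are computed from a subjects-by-raters matrix of measurements. A sketch of the one-way random-effects, single-measure form, ICC(1,1); the study may have used a different ICC variant, and the ratings below are illustrative:

```python
def icc_oneway(ratings):
    """ICC(1,1): one-way random-effects, single-measure intraclass correlation.
    `ratings` is a list of subjects, each a list of k raters' measurements."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    # Between-subjects and within-subject mean squares from one-way ANOVA
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Three subjects measured by two raters, near-perfect agreement
ratings = [[10.0, 10.1], [12.0, 11.9], [14.0, 14.1]]
print(round(icc_oneway(ratings), 3))
```

Two-way forms such as ICC(2,1) additionally separate a rater effect, which matters when the same fixed set of observers rates every subject.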
Affiliation(s)
- Janghoon Ahn
- Department of Orthodontics, Kangnam Sacred Heart Hospital, Hallym University, Singil-ro 1 gil, Yeongdeungpo-gu, Seoul 07441, Republic of Korea
- Thong Phi Nguyen
- Department of Mechanical Design Engineering/ Major in Materials, Devices, and Equipment, Hanyang University, 222, Wangsimni-ro, Seongdongsu, Seoul 04763, Republic of Korea; BK21 FOUR ERICA-ACE Centre, Hanyang University, Ansan-si, Gyeonggi-do 15588, Republic of Korea
- Yoon-Ji Kim
- Department of Orthodontics, Asan Medical Centre, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea
- Taeyong Kim
- Department of Advanced General Dentistry, Kangnam Sacred Heart Hospital, Hallym University, Singil-ro 1-gil, Yeongdeungpo-gu, Seoul 07441, Republic of Korea
- Jonghun Yoon
- Department of Mechanical Engineering, Hanyang University, 55, Hanyangdaehak-ro, Sangnok-gu, Ansan-si, Gyeonggi-do 15588, Republic of Korea; BK21 FOUR ERICA-ACE Centre, Hanyang University, Ansan-si, Gyeonggi-do 15588, Republic of Korea.
10
Deep Learning for Caries Detection: A Systematic Review. J Dent 2022; 122:104115. [DOI: 10.1016/j.jdent.2022.104115]
11
Thurzo A, Kosnáčová HS, Kurilová V, Kosmeľ S, Beňuš R, Moravanský N, Kováč P, Kuracinová KM, Palkovič M, Varga I. Use of Advanced Artificial Intelligence in Forensic Medicine, Forensic Anthropology and Clinical Anatomy. Healthcare (Basel) 2021; 9:1545. [PMID: 34828590 PMCID: PMC8619074 DOI: 10.3390/healthcare9111545]
Abstract
Three-dimensional convolutional neural networks (3D CNNs) of artificial intelligence (AI) are potent in image processing and recognition, using deep learning to perform generative and descriptive tasks. Compared to their predecessors, the advantage of CNNs is that they automatically detect the important features without any human supervision. A 3D CNN is used to extract features in three dimensions, where the input is a 3D volume or a sequence of 2D pictures, e.g., slices in a cone-beam computed tomography (CBCT) scan. The main aim was to bridge interdisciplinary cooperation between forensic medical experts and deep learning engineers, with emphasis on activating clinical forensic experts who may have only basic knowledge of advanced artificial intelligence techniques but an interest in implementing them to advance forensic research further. This paper introduces a novel workflow for 3D CNN analysis of full-head CBCT scans. The authors explore current methods and design customized 3D CNN applications for particular forensic research from five perspectives: (1) sex determination, (2) biological age estimation, (3) 3D cephalometric landmark annotation, (4) growth vector prediction, and (5) facial soft-tissue estimation from the skull and vice versa. In conclusion, 3D CNN applications could be a watershed moment in forensic medicine, leading to unprecedented improvement of forensic analysis workflows based on 3D neural networks.
Affiliation(s)
- Andrej Thurzo
- Department of Stomatology and Maxillofacial Surgery, Faculty of Medicine, Comenius University in Bratislava, 81250 Bratislava, Slovakia
- Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Helena Svobodová Kosnáčová
- Department of Simulation and Virtual Medical Education, Faculty of Medicine, Comenius University, Sasinkova 4, 81272 Bratislava, Slovakia
- Department of Genetics, Cancer Research Institute, Biomedical Research Center, Slovak Academy Sciences, Dúbravská Cesta 9, 84505 Bratislava, Slovakia
- Veronika Kurilová
- Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 81219 Bratislava, Slovakia
- Silvester Kosmeľ
- Deep Learning Engineering Department at Cognexa, Faculty of Informatics and Information Technologies, Slovak University of Technology, Ilkovičova 2, 84216 Bratislava, Slovakia
- Radoslav Beňuš
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Department of Anthropology, Faculty of Natural Sciences, Comenius University in Bratislava, Mlynská dolina Ilkovičova 6, 84215 Bratislava, Slovakia
- Norbert Moravanský
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Institute of Forensic Medicine, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Peter Kováč
- forensic.sk Institute of Forensic Medical Analyses Ltd., Boženy Němcovej 8, 81104 Bratislava, Slovakia
- Department of Criminal Law and Criminology, Faculty of Law Trnava University, Kollárova 10, 91701 Trnava, Slovakia
- Kristína Mikuš Kuracinová
- Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Michal Palkovič
- Institute of Pathological Anatomy, Faculty of Medicine, Comenius University in Bratislava, Sasinkova 4, 81108 Bratislava, Slovakia
- Forensic Medicine and Pathological Anatomy Department, Health Care Surveillance Authority (HCSA), Sasinkova 4, 81108 Bratislava, Slovakia
- Ivan Varga
- Institute of Histology and Embryology, Faculty of Medicine, Comenius University in Bratislava, 81372 Bratislava, Slovakia