1
Larkin A, Kim JS, Kim N, Baek SH, Yamada S, Park K, Tai K, Yanagi Y, Park JH. Accuracy of artificial intelligence-assisted growth prediction in skeletal Class I preadolescent patients using serial lateral cephalograms for a 2-year growth interval. Orthod Craniofac Res 2024. [PMID: 38321788] [DOI: 10.1111/ocr.12764]
Abstract
OBJECTIVE To investigate the accuracy of artificial intelligence-assisted growth prediction using a convolutional neural network (CNN) algorithm and longitudinal lateral cephalograms (Lat-cephs). MATERIALS AND METHODS A total of 198 Japanese preadolescent children, who had skeletal Class I malocclusion and whose Lat-cephs were available at age 8 years (T0) and 10 years (T1), were allocated to the training, validation, and test phases (n = 161, n = 17, and n = 20, respectively). Orthodontists and the CNN model identified 28 hard-tissue landmarks (HTLs) and 19 soft-tissue landmarks (STLs). The mean prediction error (PE) values were defined as 'excellent,' 'very good,' 'good,' 'acceptable,' and 'unsatisfactory' (criteria: 0.5 mm, 1.0 mm, 1.5 mm, and 2.0 mm, respectively). The degree of accurate prediction percentage (APP) was defined as 'very high,' 'high,' 'medium,' and 'low' (criteria: 90%, 70%, and 50%, respectively) according to the percentage of subjects whose error fell within 1.5 mm. RESULTS All HTLs showed acceptable-to-excellent mean PE values, while the STLs Pog', Gn', and Me' showed unsatisfactory values and the rest showed good-to-acceptable values. Regarding the degree of APP, the HTLs Ba, ramus posterior, Pm, Pog, B-point, Me, and mandibular first molar root apex exhibited low APPs. The STLs labrale superius, lower embrasure, lower lip, point of lower profile, B', Pog', Gn', and Me' also exhibited low APPs. The remaining HTLs and STLs showed medium-to-very high APPs. CONCLUSION Despite the possibility of using the CNN model to predict growth, further studies are needed to improve the prediction accuracy for HTLs and STLs of the chin area.
Affiliation(s)
- A Larkin
- Postgraduate Orthodontic Program, Arizona School of Dentistry & Oral Health, A.T. Still University, Mesa, Arizona, USA
- J-S Kim
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- N Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- S-H Baek
- Department of Orthodontics, School of Dentistry, Dental Research Institute, Seoul National University, Seoul, Republic of Korea
- S Yamada
- Department of Dental Informatics, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- K Park
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- K Tai
- Postgraduate Orthodontic Program, Arizona School of Dentistry & Oral Health, A.T. Still University, Mesa, Arizona, USA
- Private Practice of Orthodontics, Okayama, Japan
- Y Yanagi
- Department of Dental Informatics, Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama, Japan
- J H Park
- Postgraduate Orthodontic Program, Arizona School of Dentistry & Oral Health, A.T. Still University, Mesa, Arizona, USA
- Graduate School of Dentistry, Kyung Hee University, Seoul, Republic of Korea
2
Tanikawa C, Oka A, Lim J, Lee C, Yamashiro T. Clinical applicability of automated cephalometric landmark identification: Part II - Number of images needed to re-learn various quality of images. Orthod Craniofac Res 2021; 24 Suppl 2:53-58. [PMID: 34145974] [DOI: 10.1111/ocr.12511]
Abstract
AIM To estimate the number of cephalograms needed for re-learning with images of different quality when artificial intelligence (AI) systems are introduced in a clinic. SETTINGS AND SAMPLE POPULATION A total of 2385 digital lateral cephalograms (university data [1785]; Clinic F [300]; Clinic N [300]) were used. From the university data, the data of clinics F and N, and the combined data of clinics F and N, 50 cephalograms each were randomly selected to test the system's performance (test data O, F, N, and FN). MATERIALS AND METHODS To examine the ability of the AI system developed in Part I (the original system) to recognize landmark positions in other clinical data, test data F, N, and FN were applied to the original system, and success rates were calculated. Then, to determine the approximate number of cephalograms needed for re-learning with images of different quality, 85 and 170 cephalograms were randomly selected from each group and used for re-learning of the original system (systems F85, F170, N85, N170, FN85, and FN170). To estimate the number of cephalograms needed for re-learning, we examined the changes in the success rates of the re-trained systems and compared them with the original system. Re-trained systems F85 and F170 were evaluated with test data F; N85 and N170 with test data N; and FN85 and FN170 with test data FN. RESULTS For the systems re-trained with F, N, and FN data, it was determined that 85, 170, and 85 cephalograms, respectively, were required for re-learning. CONCLUSIONS The number of cephalograms needed for re-learning with images of different quality was estimated.
Affiliation(s)
- Chihiro Tanikawa
- Graduate School of Dentistry, Osaka University, Suita, Japan
- Center for Advanced Medical Engineering and Informatics, Osaka University, Suita, Japan
- Institute for Datability Science, Osaka University, Suita, Japan
- Ayaka Oka
- Graduate School of Dentistry, Osaka University, Suita, Japan
- Jaeyoen Lim
- Graduate School of Dentistry, Osaka University, Suita, Japan
- Chonho Lee
- Cybermedia Center, Osaka University, Suita, Japan
3
Tanikawa C, Lee C, Lim J, Oka A, Yamashiro T. Clinical applicability of automated cephalometric landmark identification: Part I - Patient-related identification errors. Orthod Craniofac Res 2021; 24 Suppl 2:43-52. [PMID: 34021976] [DOI: 10.1111/ocr.12501]
Abstract
OBJECTIVES To determine whether AI systems that recognize cephalometric landmarks can be applied to various patient groups and to examine the patient-related factors associated with identification errors. SETTING AND SAMPLE POPULATION This retrospective cohort study analysed digital lateral cephalograms obtained from 1785 Japanese orthodontic patients. Patients were categorized into eight subgroups according to dental age, cleft lip and/or palate, orthodontic appliance use, and overjet. MATERIALS AND METHODS An AI system that automatically recognizes anatomic landmarks on lateral cephalograms was used. Thirty cephalograms in each subgroup were randomly selected and used to test the system's performance; the remaining cephalograms were used for system learning. The success rates in landmark recognition were evaluated using confidence ellipses with α = 0.99 for each landmark. The selection of test samples, learning of the system, and evaluation of the system were repeated five times for each subgroup. The mean success rate and identification error were calculated, and factors associated with identification errors were examined using a multiple linear regression model. RESULTS The success rate and error varied among subgroups, ranging from 85% to 91% and from 1.32 mm to 1.50 mm, respectively. Cleft lip and/or palate was significantly associated with greater identification errors (P < .05), whereas dental age, orthodontic appliances, and overjet were not significant factors. CONCLUSION Artificial intelligence systems that recognize cephalometric landmarks could be applied to various patient groups. Patient-related errors were found in patients with cleft lip and/or palate.
Affiliation(s)
- Chihiro Tanikawa
- Graduate School of Dentistry, Osaka University, Suita, Japan
- Center for Advanced Medical Engineering and Informatics, Osaka University, Suita, Japan
- Institute for Datability Science, Osaka University, Suita, Japan
- Chonho Lee
- Cybermedia Center, Osaka University, Suita, Japan
- Jaeyoen Lim
- Graduate School of Dentistry, Osaka University, Suita, Japan
- Ayaka Oka
- Graduate School of Dentistry, Osaka University, Suita, Japan