1
Chang JS, Ma CY, Ko EWC. Prediction of surgery-first approach orthognathic surgery using deep learning models. Int J Oral Maxillofac Surg 2024; 53:942-949. [PMID: 38821731] [DOI: 10.1016/j.ijom.2024.05.003]
Abstract
The surgery-first approach (SFA) to orthognathic surgery can be beneficial due to reduced overall treatment time and earlier profile improvement. The objective of this study was to utilize deep learning to predict the treatment modality of SFA or the orthodontics-first approach (OFA) in orthognathic surgery patients and assess its clinical accuracy. A supervised deep learning model using three convolutional neural networks (CNNs) was trained on lateral cephalograms and occlusal views of 3D dental model scans from 228 patients with skeletal Class III malocclusion (114 treated by SFA and 114 by OFA). An ablation study of five groups (lateral cephalogram only, mandible image only, maxilla image only, maxilla and mandible images, and all data combined) was conducted to assess the influence of each input type. The average validation accuracy, precision, recall, F1 score, and AUROC across the five folds were 0.978, 0.980, 0.980, 0.980, and 0.998; the corresponding average testing results were 0.906, 0.986, 0.828, 0.892, and 0.952. The lateral cephalogram-only group had the lowest accuracy, while the maxilla image-only group had the highest. Deep learning provides a novel method for an accelerated workflow, automated assisted decision-making, and personalized treatment planning.
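For orientation, the sketch below (illustrative only, not the authors' code) shows how the reported fold-level metrics, accuracy, precision, recall, F1 score, and AUROC, are typically computed for a binary SFA/OFA classifier; the labels and probabilities are made-up placeholders.

```python
# Illustrative only (not the authors' code): fold-level evaluation metrics for a
# binary SFA (1) vs. OFA (0) classifier, using made-up labels and probabilities.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                            # ground-truth treatment modality
y_prob = np.array([0.92, 0.10, 0.85, 0.70, 0.30, 0.05, 0.60, 0.45])    # CNN output probabilities
y_pred = (y_prob >= 0.5).astype(int)                                   # thresholded class predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUROC    :", roc_auc_score(y_true, y_prob))
```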
Affiliation(s)
- J-S Chang
- Graduate Institute of Dental and Craniofacial Science, Chang Gung University, Taoyuan, Taiwan; Department of Craniofacial Orthodontics, Chang Gung Memorial Hospital, Taipei, Taiwan
- C-Y Ma
- Department of Artificial Intelligence, Chang Gung University, Taoyuan, Taiwan; Artificial Intelligence Research Center, Chang Gung University, Taoyuan, Taiwan; Division of Rheumatology, Allergy and Immunology, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- E W-C Ko
- Graduate Institute of Dental and Craniofacial Science, Chang Gung University, Taoyuan, Taiwan; Department of Craniofacial Orthodontics, Chang Gung Memorial Hospital, Taipei, Taiwan; Craniofacial Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan.
2
Lin Q, Xiongbo G, Zhang W, Cai L, Yang R, Chen H, Cai K. A Novel Approach of Surface Texture Mapping for Cone-Beam Computed Tomography in Image-Guided Surgical Navigation. IEEE J Biomed Health Inform 2024; 28:4400-4409. [PMID: 37490371] [DOI: 10.1109/jbhi.2023.3298708]
Abstract
The demand for cone-beam computed tomography (CBCT) imaging in clinics, particularly in dentistry, is rapidly increasing. Preoperative surgical planning is crucial to achieving desired treatment outcomes for imaging-guided surgical navigation. However, the lack of surface texture hinders effective communication between clinicians and patients, and the accuracy of superimposing a textured surface onto CBCT volume is limited by dissimilarity and registration based on facial features. To address these issues, this study presents a CBCT imaging system integrated with a monocular camera for reconstructing the texture surface by mapping it onto a 3D surface model created from CBCT images. The proposed method utilizes a geometric calibration tool for accurate mapping of the camera-visible surface with the mosaic texture. Additionally, a novel approach using 3D-2D feature mapping and surface parameterization technology is proposed for texture surface reconstruction. Experimental results, obtained from both real and simulation data, validate the effectiveness of the proposed approach with an error reduction to 0.32 mm and automated generation of integrated images. These findings demonstrate the robustness and high accuracy of our approach, improving the performance of texture mapping in CBCT imaging.
3
Fang X, Deng HH, Kuang T, Xu X, Lee J, Gateno J, Yan P. Patient-specific reference model estimation for orthognathic surgical planning. Int J Comput Assist Radiol Surg 2024; 19:1439-1447. [PMID: 38869779] [DOI: 10.1007/s11548-024-03123-0]
Abstract
PURPOSE Accurate estimation of reference bony shape models is fundamental for orthognathic surgical planning. Existing methods to derive this model are of two types: one determines the reference model by estimating a deformation field to correct the patient's deformed jaw, often introducing distortions in the predicted reference model; the other derives the reference model as a linear combination of normal subjects' landmarks/vertices but overlooks the intricate nonlinear relationship between subjects, compromising the model's precision and quality. METHODS We have created a self-supervised learning framework to estimate the reference model. The core of this framework is a deep query network, which estimates similarity scores between the patient's midface and those of the normal subjects in a high-dimensional space. It then aggregates high-dimensional features of these subjects and projects the features back to 3D structures, ultimately yielding a patient-specific reference model. RESULTS Our approach was trained using a dataset of 51 normal subjects and tested on 30 patient subjects to estimate their reference models. Performance assessment against the actual post-operative bone revealed a mean Chamfer distance error of 2.25 mm and an average surface distance error of 2.30 mm across the patient subjects. CONCLUSION Our proposed method emphasizes the correlation between the patients and the normal subjects in a high-dimensional space, facilitating the generation of a patient-specific reference model. Both qualitative and quantitative results demonstrate its superiority over current state-of-the-art methods in reference model estimation.
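To make the reported point-cloud metrics concrete, here is a minimal sketch (not from the paper) of a symmetric Chamfer distance and an average surface distance between a predicted reference model and the post-operative bone, each represented as an N x 3 point array; the exact averaging convention used by the authors may differ.

```python
# Illustrative sketch (not from the paper): symmetric Chamfer distance and average
# surface distance between two point clouds given as N x 3 arrays of coordinates in mm.
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_errors(pred, target):
    d_pt, _ = cKDTree(target).query(pred)    # each predicted point -> nearest target point
    d_tp, _ = cKDTree(pred).query(target)    # each target point -> nearest predicted point
    chamfer = d_pt.mean() + d_tp.mean()      # one common Chamfer convention (sum of means)
    avg_surface = 0.5 * (d_pt.mean() + d_tp.mean())
    return chamfer, avg_surface

pred = np.random.rand(2000, 3) * 100.0                        # placeholder predicted reference model
target = pred + np.random.normal(scale=1.0, size=pred.shape)  # placeholder post-operative bone
print(point_cloud_errors(pred, target))
```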
Affiliation(s)
- Xi Fang
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA
- Xuanang Xu
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Jungwook Lee
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, 77030, USA.
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, NY, 10021, USA.
- Pingkun Yan
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA.
4
Bao J, Zhang X, Xiang S, Liu H, Cheng M, Yang Y, Huang X, Xiang W, Cui W, Lai HC, Huang S, Wang Y, Qian D, Yu H. Deep Learning-Based Facial and Skeletal Transformations for Surgical Planning. J Dent Res 2024; 103:809-819. [PMID: 38808566] [DOI: 10.1177/00220345241253186]
Abstract
The increasing application of virtual surgical planning (VSP) in orthognathic surgery implies a critical need for accurate prediction of facial and skeletal shapes. The craniofacial relationship in patients with dentofacial deformities is still not understood, and transformations between facial and skeletal shapes remain a challenging task due to intricate anatomical structures and nonlinear relationships between the facial soft tissue and bones. In this study, a novel bidirectional 3-dimensional (3D) deep learning framework, named P2P-ConvGC, was developed and validated based on a large-scale data set for accurate subject-specific transformations between facial and skeletal shapes. Specifically, the 2-stage point-sampling strategy was used to generate multiple nonoverlapping point subsets to represent high-resolution facial and skeletal shapes. Facial and skeletal point subsets were separately input into the prediction system to predict the corresponding skeletal and facial point subsets via the skeletal prediction subnetwork and facial prediction subnetwork. For quantitative evaluation, the accuracy was calculated with shape errors and landmark errors between the predicted skeleton or face with corresponding ground truths. The shape error was calculated by comparing the predicted point sets with the ground truths, with P2P-ConvGC outperforming existing state-of-the-art algorithms including P2P-Net, P2P-ASNL, and P2P-Conv. The total landmark errors (Euclidean distances of craniomaxillofacial landmarks) of P2P-ConvGC in the upper skull, mandible, and facial soft tissues were 1.964 ± 0.904 mm, 2.398 ± 1.174 mm, and 2.226 ± 0.774 mm, respectively. Furthermore, the clinical feasibility of the bidirectional model was validated using a clinical cohort. The result demonstrated its prediction ability with average surface deviation errors of 0.895 ± 0.175 mm for facial prediction and 0.906 ± 0.082 mm for skeletal prediction. To conclude, our proposed model achieved good performance on the subject-specific prediction of facial and skeletal shapes and showed clinical application potential in postoperative facial prediction and VSP for orthognathic surgery.
Affiliation(s)
- J Bao
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shanghai Research Institute of Stomatology, Shanghai, China
- X Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- S Xiang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- H Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- M Cheng
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shanghai Research Institute of Stomatology, Shanghai, China
- Y Yang
- Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, China
- X Huang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shanghai Research Institute of Stomatology, Shanghai, China
- W Xiang
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shanghai Research Institute of Stomatology, Shanghai, China
- W Cui
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shanghai Research Institute of Stomatology, Shanghai, China
- H C Lai
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shanghai Research Institute of Stomatology, Shanghai, China
- S Huang
- Department of Oral and Maxillofacial Surgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Y Wang
- Qingdao Stomatological Hospital Affiliated to Qingdao University, Qingdao, Shandong, China
- D Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- H Yu
- Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Stomatology, Shanghai Jiao Tong University, Shanghai, China
- National Center for Stomatology, Shanghai, China
- National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China
- Shanghai Research Institute of Stomatology, Shanghai, China
5
Ghamsarian N, El-Shabrawi Y, Nasirihaghighi S, Putzgruber-Adamitsch D, Zinkernagel M, Wolf S, Schoeffmann K, Sznitman R. Cataract-1K Dataset for Deep-Learning-Assisted Analysis of Cataract Surgery Videos. Sci Data 2024; 11:373. [PMID: 38609405] [PMCID: PMC11014927] [DOI: 10.1038/s41597-024-03193-4]
Abstract
In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of the annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available on Synapse.
Affiliation(s)
- Negin Ghamsarian
- Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
- Yosuf El-Shabrawi
- Department of Ophthalmology, Klinikum Klagenfurt, Klagenfurt, Austria
- Sahar Nasirihaghighi
- Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria
- Sebastian Wolf
- Department of Ophthalmology, Inselspital, Bern, Switzerland
- Klaus Schoeffmann
- Department of Information Technology, University of Klagenfurt, Klagenfurt, Austria.
- Raphael Sznitman
- Center for Artificial Intelligence in Medicine (CAIM), Department of Medicine, University of Bern, Bern, Switzerland
6
Du H, Wu G, Hu Y, He Y, Zhang P. Experimental research based on robot-assisted surgery: Lower limb fracture reduction surgery planning navigation system. Health Sci Rep 2024; 7:e2033. [PMID: 38655421] [PMCID: PMC11035755] [DOI: 10.1002/hsr2.2033]
Abstract
Background and Aims: Lower extremity fracture reduction surgery is a key step in the treatment of lower extremity fractures. Ensuring high precision of fracture reduction while minimizing secondary trauma during reduction is a difficult problem in current surgery. Methods: First, segmentation and three-dimensional reconstruction are performed based on fracture computed tomography images. A cross-sectional point cloud extraction algorithm based on normal filtering along the long axis of the bone is designed to obtain the cross-sectional point clouds of the distal and proximal bone fragments, and the optimal reduction target pose of the broken bone is obtained using the iterative closest point (ICP) algorithm. Then, the optimal sequence of reduction parameters is determined and, combined with a broken-bone collision detection algorithm, a surgical planning algorithm for lower limb fracture reduction is proposed, which can effectively reduce the reduction force while ensuring accuracy in a collision-free reduction process. Results: The average error of the reduction of the model bone was within 1.0 mm. Reduction performed with the planning and navigation system for lower extremity fracture reduction surgery can effectively reduce the reduction force while ensuring a smooth change in that force. Conclusion: The planning and navigation system for lower extremity fracture reduction surgery is feasible and effective.
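As an illustration of the reduction-pose estimation step, the following is a bare-bones point-to-point ICP loop (nearest-neighbour matching plus an SVD-based rigid fit); it is a generic sketch, not the paper's implementation, and the example point clouds are synthetic.

```python
# Bare-bones point-to-point ICP (generic sketch, not the paper's implementation):
# nearest-neighbour matching followed by an SVD-based rigid fit, iterated.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, n_iters=50):
    src, tree = source.copy(), cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iters):
        _, idx = tree.query(src)                   # closest target point for each source point
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)      # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                         # reflection-safe rotation (Kabsch)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: recover a known 5-degree rotation plus a small translation.
target = np.random.rand(500, 3) * 50.0
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
source = target @ R_true.T + np.array([2.0, -1.0, 0.5])
R_est, t_est = icp(source, target)
print(np.abs(source @ R_est.T + t_est - target).mean())   # residual alignment error after ICP
```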
Affiliation(s)
- Hanwen Du
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Geyang Wu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Harbin Institute of Technology, Shenzhen, Shenzhen, China
- Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yucheng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guangzhou Medical University, Guangzhou, China
- Peng Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
7
Fang X, Kim D, Xu X, Kuang T, Lampen N, Lee J, Deng HH, Liebschner MAK, Xia JJ, Gateno J, Yan P. Correspondence attention for facial appearance simulation. Med Image Anal 2024; 93:103094. [PMID: 38306802] [PMCID: PMC11265218] [DOI: 10.1016/j.media.2024.103094]
Abstract
In orthognathic surgical planning for patients with jaw deformities, it is crucial to accurately simulate the changes in facial appearance that follow the bony movement. Compared with the traditional biomechanics-based methods like the finite-element method (FEM), which are both labor-intensive and computationally inefficient, deep learning-based methods offer an efficient and robust modeling alternative. However, current methods do not account for the physical relationship between facial soft tissue and bony structure, causing them to fall short in accuracy compared to FEM. In this work, we propose an Attentive Correspondence assisted Movement Transformation network (ACMT-Net) to predict facial changes by correlating facial soft tissue changes with bony movement through a point-to-point attentive correspondence matrix. To ensure efficient training, we also introduce a contrastive loss for self-supervised pre-training of the ACMT-Net with a k-Nearest Neighbors (k-NN) based clustering. Experimental results on patients with jaw deformities show that our proposed solution can achieve significantly improved computational efficiency over the state-of-the-art FEM-based method with comparable facial change prediction accuracy.
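A rough sketch of the core idea, transferring bony movement to facial points through a point-to-point attentive correspondence matrix, is given below; the feature extractors, dimensions, and scaling are assumptions, not details taken from the paper.

```python
# Hedged sketch of an attentive point-to-point correspondence: facial point features
# attend to bony point features, and bony movement vectors are transferred to the face
# as an attention-weighted sum. Feature extractors and dimensions are assumptions.
import torch
import torch.nn.functional as F

def attentive_movement_transfer(face_feat, bone_feat, bone_motion):
    # face_feat: (Nf, C), bone_feat: (Nb, C), bone_motion: (Nb, 3)
    scores = face_feat @ bone_feat.T / face_feat.shape[1] ** 0.5   # (Nf, Nb) similarity scores
    corr = F.softmax(scores, dim=1)                                # correspondence matrix
    return corr @ bone_motion                                      # (Nf, 3) facial displacements

face_feat, bone_feat = torch.randn(2048, 64), torch.randn(1024, 64)
bone_motion = torch.randn(1024, 3)
print(attentive_movement_transfer(face_feat, bone_feat, bone_motion).shape)
```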
Affiliation(s)
- Xi Fang
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Xuanang Xu
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Nathan Lampen
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Jungwook Lee
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA; Weill Medical College, Cornell University, New York, NY, 10021, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA; Weill Medical College, Cornell University, New York, NY, 10021, USA.
- Pingkun Yan
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
8
Du W, Bi W, Liu Y, Zhu Z, Tai Y, Luo E. Machine learning-based decision support system for orthognathic diagnosis and treatment planning. BMC Oral Health 2024; 24:286. [PMID: 38419015] [PMCID: PMC10902963] [DOI: 10.1186/s12903-024-04063-6]
Abstract
BACKGROUND Dento-maxillofacial deformities are common problems. Orthodontic-orthognathic surgery is the primary treatment, but accurate diagnosis and careful surgical planning are essential for optimum outcomes. This study aimed to establish and verify a machine learning-based decision support system for the treatment of dento-maxillofacial malformations. METHODS Patients (n = 574) with dento-maxillofacial deformities undergoing spiral CT from January 2015 to August 2020 were enrolled to train diagnostic models based on five different machine learning algorithms; the diagnostic performances were compared with expert diagnoses. Accuracy, sensitivity, specificity, and area under the curve (AUC) were calculated. The adaptive artificial bee colony algorithm was employed to formulate the orthognathic surgical plan, which was subsequently evaluated by maxillofacial surgeons in a cohort of 50 patients. The objective evaluation included the difference in bone position between the artificial intelligence (AI)-generated and actual surgical plans for each patient, along with discrepancies in postoperative cephalometric analysis outcomes. RESULTS The binary relevance extreme gradient boosting model performed best, with diagnostic success rates > 90% for six different kinds of dento-maxillofacial deformities; the exception was maxillary overdevelopment (89.27%). AUC was > 0.88 for all diagnostic types. The median score for the surgical plans was 9 and improved after human-computer interaction. There was no statistically significant difference between the actual and AI-generated plan groups. CONCLUSIONS Machine learning algorithms are effective for diagnosis and surgical planning of dento-maxillofacial deformities and help improve diagnostic efficiency, especially in lower-level medical centers.
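The binary-relevance strategy mentioned above (one boosted binary classifier per deformity label) can be illustrated with the following hedged sketch; it uses scikit-learn's gradient boosting as a stand-in for the study's extreme gradient boosting model, and the features and labels are synthetic.

```python
# Hedged sketch of binary relevance: one boosted binary classifier per deformity label.
# scikit-learn's GradientBoostingClassifier stands in for the study's extreme gradient
# boosting model; features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(574, 20))                       # CT-derived features (placeholder)
Y = (rng.random(size=(574, 6)) > 0.5).astype(int)    # six binary deformity labels (placeholder)

clf = MultiOutputClassifier(GradientBoostingClassifier()).fit(X[:500], Y[:500])
probs = np.stack([p[:, 1] for p in clf.predict_proba(X[500:])], axis=1)
print([round(roc_auc_score(Y[500:, k], probs[:, k]), 3) for k in range(Y.shape[1])])
```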
Affiliation(s)
- Wen Du
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, Sichuan, China
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Wenjun Bi
- School of Electric Power Engineering, Nanjing Institute of Technology, Nanjing, China
- Yao Liu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, Sichuan, China
- Zhaokun Zhu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, Sichuan, China
- Yue Tai
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, Sichuan, China
- En Luo
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, 610041, Sichuan, China.
9
Gu Z, Wu Z, Dai N. Image generation technology for functional occlusal pits and fissures based on a conditional generative adversarial network. PLoS One 2023; 18:e0291728. [PMID: 37725620] [PMCID: PMC10508633] [DOI: 10.1371/journal.pone.0291728]
Abstract
The occlusal surfaces of natural teeth have complex features of functional pits and fissures. These morphological features directly affect the occlusal state of the upper and lower teeth. An image generation technology for functional occlusal pits and fissures is proposed to address the lack of local detailed crown surface features in existing dental restoration methods. First, tooth depth image datasets were constructed using an orthogonal projection method. Second, optimization of the model parameters was guided by introducing a jaw-position spatial constraint together with L1 and perceptual loss functions. Finally, two image quality evaluation metrics were applied to assess the quality of the generated images, and the dental crown was deformed using the generated occlusal pits and fissures as constraints for comparison with expert data. The results showed that the images generated by the proposed network had high quality, and the detailed pit and fissure features on the crown were effectively restored, with a standard deviation of 0.1802 mm compared with the expert-designed tooth crown models.
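The composite generator objective described above (adversarial term plus L1 and perceptual losses) might look roughly like the sketch below; the loss weights, feature extractor, and tensor shapes are illustrative assumptions rather than the paper's settings.

```python
# Illustrative composite generator objective for a conditional GAN on depth images:
# adversarial term + weighted L1 term + perceptual term. The small convolutional
# feature extractor stands in for a pretrained perceptual network; weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

perc_net = nn.Sequential(                      # stand-in for a pretrained perceptual network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
for p in perc_net.parameters():
    p.requires_grad_(False)

def generator_loss(fake, real, disc_fake_logits, l1_w=100.0, perc_w=10.0):
    adv = F.binary_cross_entropy_with_logits(disc_fake_logits,
                                             torch.ones_like(disc_fake_logits))
    l1 = F.l1_loss(fake, real)                               # pixel-wise fidelity
    perceptual = F.l1_loss(perc_net(fake), perc_net(real))   # feature-space fidelity
    return adv + l1_w * l1 + perc_w * perceptual

fake = torch.rand(2, 1, 128, 128)   # generated occlusal depth image (placeholder)
real = torch.rand(2, 1, 128, 128)   # ground-truth depth image (placeholder)
print(generator_loss(fake, real, torch.randn(2, 1)).item())
```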
Affiliation(s)
- Zhaodan Gu
- Jiangsu Automation Research Institute, Lianyungang, P.R. China
- Zhilei Wu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
- Ning Dai
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, P.R. China
10
Cheng M, Zhang X, Wang J, Yang Y, Li M, Zhao H, Huang J, Zhang C, Qian D, Yu H. Prediction of orthognathic surgery plan from 3D cephalometric analysis via deep learning. BMC Oral Health 2023; 23:161. [PMID: 36934241] [PMCID: PMC10024836] [DOI: 10.1186/s12903-023-02844-z]
Abstract
BACKGROUND Preoperative planning of orthognathic surgery is indispensable for achieving an ideal surgical outcome regarding the occlusion and position of the jaws. However, orthognathic surgery planning is sophisticated and highly experience-dependent, requiring comprehensive consideration of facial morphology and occlusal function. This study aimed to investigate a robust and automatic deep learning-based method to predict the reposition vectors of the jawbones in an orthognathic surgery plan. METHODS A regression neural network named the VSP transformer was developed based on the Transformer architecture. First, 3D cephalometric analysis was employed to quantify skeletal-facial morphology as input features. Next, the input features were weighted using pretrained results to minimize bias resulting from multicollinearity. Through encoder-decoder blocks, ten landmark-based reposition vectors of the jawbones were predicted. The permutation importance (PI) method was used to calculate the contribution of each feature to the final prediction, revealing the interpretability of the proposed model. RESULTS The VSP transformer model was developed with 383 samples and clinically tested with 49 prospectively collected samples. Our proposed model outperformed four classic regression models in prediction accuracy. Mean absolute errors (MAE) of prediction were 1.41 mm in the validation set and 1.34 mm in the clinical test set. The interpretability results of the model were highly consistent with clinical knowledge and experience. CONCLUSIONS The developed model can predict the reposition vectors of an orthognathic surgery plan with high accuracy and good practical clinical effectiveness. Moreover, the model proved reliable because of its good interpretability.
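Permutation importance, used above to interpret the model, is simple to reproduce in a generic setting: shuffle one input feature at a time and measure how much the mean absolute error degrades. The sketch below uses a placeholder ridge-regression model and synthetic features, not the VSP transformer.

```python
# Minimal permutation-importance sketch: shuffle one feature at a time and record the
# increase in mean absolute error. The ridge model and data are placeholders, not the
# VSP transformer or its cephalometric features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(383, 12))                  # input features (placeholder)
y = X @ rng.normal(size=(12, 10))               # ten reposition-vector components (placeholder)
model = Ridge().fit(X, y)

baseline = mean_absolute_error(y, model.predict(X))
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])        # break the feature-target association
    print(f"feature {j}: dMAE = {mean_absolute_error(y, model.predict(Xp)) - baseline:.3f}")
```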
Affiliation(s)
- Mengjia Cheng
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Xu Zhang
- Mechanical College, Shanghai Dianji University, Shanghai, 201306, China
- Jun Wang
- School of Computer & Computing Science, Hangzhou City University, Hangzhou, 310000, China
- Yang Yang
- Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, 200333, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Meng Li
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Hanjiang Zhao
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Jingyang Huang
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Chenglong Zhang
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China
- Dahong Qian
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China.
- Hongbo Yu
- Department of Oral and Cranio-Maxillofacial Surgery, Shanghai Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China.
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Shanghai, 200011, China.
- Shanghai Key Laboratory of Stomatology & Shanghai Research Institute of Stomatology, Shanghai, 200011, China.
11
Ma L, Lian C, Kim D, Xiao D, Wei D, Liu Q, Kuang T, Ghanbari M, Li G, Gateno J, Shen SGF, Wang L, Shen D, Xia JJ, Yap PT. Bidirectional prediction of facial and bony shapes for orthognathic surgical planning. Med Image Anal 2023; 83:102644. [PMID: 36272236] [PMCID: PMC10445637] [DOI: 10.1016/j.media.2022.102644]
Abstract
This paper proposes a deep learning framework to encode subject-specific transformations between facial and bony shapes for orthognathic surgical planning. Our framework involves a bidirectional point-to-point convolutional network (P2P-Conv) to predict the transformations between facial and bony shapes. P2P-Conv is an extension of the state-of-the-art P2P-Net and leverages dynamic point-wise convolution (i.e., PointConv) to capture local-to-global spatial information. Data augmentation is carried out in the training of P2P-Conv with multiple point subsets from the facial and bony shapes. During inference, network outputs generated for multiple point subsets are combined into a dense transformation. Finally, non-rigid registration using the coherent point drift (CPD) algorithm is applied to generate surface meshes based on the predicted point sets. Experimental results on real-subject data demonstrate that our method substantially improves the prediction of facial and bony shapes over state-of-the-art methods.
Affiliation(s)
- Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dongming Wei
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA
- Maryam Ghanbari
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Guoshi Li
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
- Steve G F Shen
- Shanghai Ninth Hospital, Shanghai Jiaotong University College of Medicine, Shanghai 200025, China
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA.
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
12
Guo N, Tian J, Wang L, Sun K, Mi L, Ming H, Zhe Z, Sun F. Discussion on the possibility of multi-layer intelligent technologies to achieve the best recover of musculoskeletal injuries: Smart materials, variable structures, and intelligent therapeutic planning. Front Bioeng Biotechnol 2022; 10:1016598. [PMID: 36246357] [PMCID: PMC9561816] [DOI: 10.3389/fbioe.2022.1016598]
Abstract
Although intelligent technologies have facilitated the development of precision orthopaedics, simple internal fixation, ligament reconstruction, or arthroplasty can only relieve patients' pain in the short term. To achieve the best recovery of musculoskeletal injuries, three bottlenecks must be overcome: scientific path planning, bioactive implants, and the building of personalized surgical channels. As scientific surgical paths can be planned and built through AI technology, 4D printing technology can enable the manufacture of more bioactive implants, and variable structures can establish personalized channels precisely, it is possible to achieve satisfactory and effective recovery from musculoskeletal injuries with the progress of multi-layer intelligent technologies (MLIT).
Affiliation(s)
- Na Guo
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Institute of Precision Medicine, Tsinghua University, Beijing, China
- Jiawen Tian
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Institute of Precision Medicine, Tsinghua University, Beijing, China
- Litao Wang
- College of Engineering, China Agricultural University, Beijing, China
- Kai Sun
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Lixin Mi
- Musculoskeletal Department, Beijing Rehabilitation Hospital, Beijing, China
- Hao Ming
- Orthopaedics, Chinese PLA General Hospital, Beijing, China
- Zhao Zhe
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Fuchun Sun
- Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Institute of Precision Medicine, Tsinghua University, Beijing, China
13
Jeong SH, Woo MW, Shin DS, Yeom HG, Lim HJ, Kim BC, Yun JP. Three-Dimensional Postoperative Results Prediction for Orthognathic Surgery through Deep Learning-Based Alignment Network. J Pers Med 2022; 12:998. [PMID: 35743782] [PMCID: PMC9225553] [DOI: 10.3390/jpm12060998]
Abstract
To date, the diagnosis of dentofacial dysmorphosis has relied almost entirely on reference points, planes, and angles. This is time consuming, and it is also greatly influenced by the skill level of the practitioner. To solve this problem, we investigated whether deep neural networks could predict the postoperative results of orthognathic surgery without relying on reference points, planes, and angles. We used three-dimensional point cloud data of the skulls of 269 patients. The proposed method has two main stages. In stage 1, the skull is divided into six parts by a segmentation network. In stage 2, three-dimensional transformation parameters are predicted by an alignment network. The ground-truth transformation parameters are calculated with the iterative closest point (ICP) algorithm, which aligns each preoperative part of the skull to the corresponding postoperative part. We compared PointNet, PointNet++, and PointConv as the feature extractor of the alignment network. Moreover, we designed a new loss function that considers the distance error of transformed points for better accuracy. The accuracy, mean intersection over union (mIoU), and Dice coefficient (DC) of the first segmentation network, which divides the upper and lower parts of the skull, were 0.9998, 0.9994, and 0.9998, respectively; for the second segmentation network, which divides the lower part of the skull into five parts, they were 0.9949, 0.9900, and 0.9949, respectively. The mean absolute errors of the transverse, anterior-posterior, and vertical distances of part 2 (maxilla) were 0.765 mm, 1.455 mm, and 1.392 mm, respectively. For part 3 (mandible), they were 1.069 mm, 1.831 mm, and 1.375 mm, and for part 4 (chin), they were 1.913 mm, 2.340 mm, and 1.257 mm. With this approach, postoperative results can be predicted simply by entering the point cloud data from computed tomography.
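The segmentation metrics quoted above (accuracy, mIoU, and Dice coefficient) can be computed per class as in the following minimal sketch; the per-point labels here are synthetic and only illustrate the formulas.

```python
# Illustrative per-class computation of the reported segmentation metrics
# (accuracy, mIoU, Dice coefficient) for per-point labels; the labels are synthetic.
import numpy as np

def segmentation_metrics(pred, truth, n_classes):
    accuracy = (pred == truth).mean()
    ious, dices = [], []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        ious.append(inter / union if union else 1.0)
        denom = (pred == c).sum() + (truth == c).sum()
        dices.append(2.0 * inter / denom if denom else 1.0)
    return accuracy, float(np.mean(ious)), float(np.mean(dices))

truth = np.random.randint(0, 5, size=20000)               # 5 lower-skull parts (placeholder)
pred = truth.copy()
flip = np.random.rand(truth.size) < 0.01                  # simulate 1% mislabeled points
pred[flip] = np.random.randint(0, 5, size=flip.sum())
print(segmentation_metrics(pred, truth, 5))
```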
Affiliation(s)
- Seung Hyun Jeong
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; (S.H.J.); (M.W.W.)
- Min Woo Woo
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; (S.H.J.); (M.W.W.)
- School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea
- Dong Sun Shin
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea; (D.S.S.); (H.J.L.)
- Han Gyeol Yeom
- Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea;
- Hun Jun Lim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea; (D.S.S.); (H.J.L.)
- Bong Chul Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, College of Dentistry, Wonkwang University, Daejeon 35233, Korea; (D.S.S.); (H.J.L.)
- Jong Pil Yun
- Advanced Mechatronics R&D Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; (S.H.J.); (M.W.W.)
- KITECH School, University of Science and Technology, Daejeon 34113, Korea
14
Performance of Artificial Intelligence Models Designed for Diagnosis, Treatment Planning and Predicting Prognosis of Orthognathic Surgery (OGS)—A Scoping Review. Appl Sci (Basel) 2022. [DOI: 10.3390/app12115581]
Abstract
The technological advancements in the field of medical science have led to an escalation in the development of artificial intelligence (AI) applications, which are being extensively used in the health sciences. This scoping review aims to outline the application and performance of artificial intelligence models used for diagnosing, treatment planning, and predicting the prognosis of orthognathic surgery (OGS). Data for this review were retrieved from electronic databases, including PubMed, Google Scholar, Scopus, Web of Science, Embase, and Cochrane, for articles related to the research topic published between January 2000 and February 2022. Eighteen articles that met the eligibility criteria were critically analyzed based on QUADAS-2 guidelines, and the certainty of evidence of the included studies was assessed using the GRADE approach. AI has been applied for predicting post-operative facial profiles and facial symmetry, deciding on the need for OGS, predicting perioperative blood loss, planning OGS, segmenting maxillofacial structures for OGS, and the differential diagnosis of OGS. AI models have proven to be efficient and have outperformed conventional methods. These models are reported to be reliable and reproducible, and hence can be very useful for less experienced practitioners in clinical decision-making and in achieving better clinical outcomes.
15
A Dual Discriminator Adversarial Learning Approach for Dental Occlusal Surface Reconstruction. J Healthc Eng 2022; 2022:1933617. [PMID: 35449834] [PMCID: PMC9018184] [DOI: 10.1155/2022/1933617]
Abstract
Objective. Restoring correct masticatory function in partially edentulous patients is a challenging task, primarily due to the complex tooth morphology that varies between individuals. Although some deep learning-based approaches have been proposed for dental restorations, most of them do not consider the influence of dental biological characteristics on occlusal surface reconstruction. Description. In this article, we propose a novel dual discriminator adversarial learning network to address these challenges. This network architecture integrates two models: a dilated convolution-based generative model and a dual global-local discriminative model. The generative model adopts dilated convolution layers to generate a feature representation that preserves clear tissue structure, while the dual discriminative model uses two discriminators to jointly distinguish whether the input is real or fake: the global discriminator focuses on the missing tooth and the adjacent teeth to assess whether the result is coherent as a whole, while the local discriminator attends only to the defective tooth to ensure the local consistency of the generated dental crown. Results. Experiments on 1000 real-world patient dental samples demonstrate the effectiveness of our method. For quantitative comparison, image quality metrics are used to measure the similarity of the generated occlusal surface, and the root mean square error between the generated result and the target crown obtained by our method is 0.114 mm. In qualitative analysis, the proposed approach generates more reasonable dental biological morphology. Conclusion. The results demonstrate that our method significantly outperforms state-of-the-art methods in occlusal surface reconstruction. Importantly, the designed occlusal surface has sufficient anatomical morphology of natural teeth and superior clinical application value.
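A hedged sketch of the dual global-local discriminator objective is shown below: the global discriminator scores the whole occlusal image while the local discriminator scores only a crop around the defective tooth, and the two binary cross-entropy losses are summed. The tiny discriminator networks, image sizes, and crop box are placeholders, not the paper's architecture.

```python
# Hedged sketch of a dual global-local discriminator objective: the global discriminator
# sees the whole occlusal image, the local one sees only a crop around the defective
# tooth, and the two binary cross-entropy losses are summed. All modules are placeholders.
import torch
import torch.nn.functional as F

def bce_real_fake(disc, real, fake):
    r, f = disc(real), disc(fake.detach())
    return (F.binary_cross_entropy_with_logits(r, torch.ones_like(r)) +
            F.binary_cross_entropy_with_logits(f, torch.zeros_like(f)))

def dual_discriminator_loss(d_global, d_local, real, fake, crop):
    t, l, h, w = crop                                       # box around the defective tooth
    return (bce_real_fake(d_global, real, fake) +
            bce_real_fake(d_local, real[..., t:t+h, l:l+w], fake[..., t:t+h, l:l+w]))

def make_d():                                               # tiny placeholder discriminator
    return torch.nn.Sequential(torch.nn.Conv2d(1, 8, 4, 2, 1), torch.nn.Flatten(),
                               torch.nn.LazyLinear(1))

real, fake = torch.rand(2, 1, 128, 128), torch.rand(2, 1, 128, 128)
print(dual_discriminator_loss(make_d(), make_d(), real, fake, crop=(32, 32, 64, 64)).item())
```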
16
Xiao D, Deng H, Kuang T, Ma L, Liu Q, Chen X, Lian C, Lang Y, Kim D, Gateno J, Shen SG, Shen D, Yap PT, Xia JJ. A Self-Supervised Deep Framework for Reference Bony Shape Estimation in Orthognathic Surgical Planning. Med Image Comput Comput Assist Interv 2021; 12904:469-477. [PMID: 34927176] [PMCID: PMC8674926] [DOI: 10.1007/978-3-030-87202-1_45]
Abstract
Virtual orthognathic surgical planning involves simulating surgical corrections of jaw deformities on 3D facial bony shape models. Due to the lack of necessary guidance, the planning procedure is highly experience-dependent and the planning results are often suboptimal. A reference facial bony shape model representing normal anatomies can provide an objective guidance to improve planning accuracy. Therefore, we propose a self-supervised deep framework to automatically estimate reference facial bony shape models. Our framework is an end-to-end trainable network, consisting of a simulator and a corrector. In the training stage, the simulator maps jaw deformities of a patient bone to a normal bone to generate a simulated deformed bone. The corrector then restores the simulated deformed bone back to normal. In the inference stage, the trained corrector is applied to generate a patient-specific normal-looking reference bone from a real deformed bone. The proposed framework was evaluated using a clinical dataset and compared with a state-of-the-art method that is based on a supervised point-cloud network. Experimental results show that the estimated shape models given by our approach are clinically acceptable and significantly more accurate than those of the competing method.
Affiliation(s)
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- Hannah Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
- Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- Xu Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- Yankun Lang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
- Steve Guofang Shen
- Oral and Craniomaxillofacial Surgery at Shanghai Ninth Hospital, Shanghai Jiaotong University College of Medicine, Shanghai 200011, China
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC 27599, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital, TX 77030, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, NY 10065, USA
17
Ma L, Kim D, Lian C, Xiao D, Kuang T, Liu Q, Lang Y, Deng HH, Gateno J, Wu Y, Yang E, Liebschner MAK, Xia JJ, Yap PT. Deep Simulation of Facial Appearance Changes Following Craniomaxillofacial Bony Movements in Orthognathic Surgical Planning. Med Image Comput Comput Assist Interv 2021; 12904:459-468. [PMID: 34966912] [PMCID: PMC8713535] [DOI: 10.1007/978-3-030-87202-1_44]
Abstract
Facial appearance changes with the movements of bony segments in orthognathic surgery of patients with craniomaxillofacial (CMF) deformities. Conventional biomechanical methods for simulating such changes, such as finite element modeling (FEM), are labor-intensive and computationally expensive, preventing them from being used in clinical settings. To overcome these limitations, we propose a deep learning framework to predict post-operative facial changes. Specifically, FC-Net, a facial appearance change simulation network, is developed to predict the point displacement vectors associated with a facial point cloud. FC-Net learns the point displacements of a pre-operative facial point cloud from the bony movement vectors between pre-operative and simulated post-operative bony models. FC-Net is a weakly-supervised point displacement network trained using paired data with strict point-to-point correspondence. To preserve the topology of the facial model during point transform, we employ a local-point-transform loss to constrain the local movements of points. Experimental results on real patient data reveal that the proposed framework can predict post-operative facial appearance changes remarkably faster than a state-of-the-art FEM method with comparable prediction accuracy.
Affiliation(s)
- Lei Ma
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Daeseung Kim
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Deqiang Xiao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Tianshu Kuang
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Qin Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Yankun Lang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Hannah H Deng
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Jaime Gateno
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, Ithaca, NY, USA
- Ye Wu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Erkun Yang
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX, USA
- Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, Ithaca, NY, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
18
Jeong SH, Yun JP, Yeom HG, Kim HK, Kim BC. Deep-Learning-Based Detection of Cranio-Spinal Differences between Skeletal Classification Using Cephalometric Radiography. Diagnostics (Basel) 2021; 11:591. [PMID: 33806132] [PMCID: PMC8064489] [DOI: 10.3390/diagnostics11040591]
Abstract
The aim of this study was to reveal cranio-spinal differences between skeletal classifications using convolutional neural networks (CNNs). Transverse and longitudinal cephalometric images of 832 patients (365 males and 467 females) were used for training and testing of the CNNs. Labeling was performed such that the jawbone was sufficiently masked, while the parts other than the jawbone were minimally masked. DenseNet was used as the feature extractor. Five random-sampling cross-validations were performed for two datasets. The average and maximum accuracies of the five cross-validations were 90.43% and 92.54% for test 1 (evaluation of the entire posterior-anterior (PA) and lateral cephalometric images) and 88.17% and 88.70% for test 2 (evaluation of the PA and lateral cephalometric images with the mandible obscured). In this study, we found that even when the jawbones of class I (normal mandible), class II (retrognathism), and class III (prognathism) are masked, their identification is possible through deep learning applied only to the cranio-spinal area. This suggests that cranio-spinal differences exist between the classes.
Affiliation(s)
- Seung Hyun Jeong
- Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; (S.H.J.); (J.P.Y.)
- Jong Pil Yun
- Safety System Research Group, Korea Institute of Industrial Technology (KITECH), Gyeongsan 38408, Korea; (S.H.J.); (J.P.Y.)
- Han-Gyeol Yeom
- Department of Oral and Maxillofacial Radiology, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea;
- Hwi Kang Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea;
- Bong Chul Kim
- Department of Oral and Maxillofacial Surgery, Daejeon Dental Hospital, Wonkwang University College of Dentistry, Daejeon 35233, Korea;