1. Ying S, Huang F, Liu W, He F. Deep learning in the overall process of implant prosthodontics: A state-of-the-art review. Clin Implant Dent Relat Res 2024; 26:835-846. [PMID: 38286659] [DOI: 10.1111/cid.13307]
Abstract
Artificial intelligence, represented by deep learning, has attracted attention in the field of dental implant restoration, where it is widely used in surgical image analysis, implant planning, prosthesis shape design, and prognosis assessment. This article describes the research progress of deep learning across the whole workflow of dental implant prosthodontics, analyzes the limitations of current research, and outlines future directions for development.
Affiliations
- Shunv Ying: Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Clinical Research Center for Oral Diseases of Zhejiang Province, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Feng Huang: School of Mechanical and Energy Engineering, Zhejiang University of Science and Technology, Hangzhou, China
- Wei Liu: Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Clinical Research Center for Oral Diseases of Zhejiang Province, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Fuming He: Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Clinical Research Center for Oral Diseases of Zhejiang Province, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
2. Süküt Y, Yurdakurban E, Duran GS. Accuracy of deep learning-based upper airway segmentation. J Stomatol Oral Maxillofac Surg 2024:102048. [PMID: 39244033] [DOI: 10.1016/j.jormas.2024.102048]
Abstract
INTRODUCTION In orthodontic treatments, accurately assessing the upper airway volume and morphology is essential for proper diagnosis and planning. Cone beam computed tomography (CBCT) is used for assessing upper airway volume through manual, semi-automatic, and automatic airway segmentation methods. This study evaluates upper airway segmentation accuracy by comparing the results of an automatic model and a semi-automatic method against the gold standard manual method. MATERIALS AND METHODS An automatic segmentation model was trained using the MONAI Label framework to segment the upper airway from CBCT images. An open-source program, ITK-SNAP, was used for semi-automatic segmentation. The accuracy of both methods was evaluated against manual segmentations. Evaluation metrics included Dice Similarity Coefficient (DSC), Precision, Recall, 95% Hausdorff Distance (HD), and volumetric differences. RESULTS The automatic segmentation group averaged a DSC score of 0.915±0.041, while the semi-automatic group scored 0.940±0.021, indicating clinically acceptable accuracy for both methods. Analysis of the 95% HD revealed that semi-automatic segmentation (0.997±0.585) was more accurate and closer to manual segmentation than automatic segmentation (1.447±0.674). Volumetric comparisons revealed no statistically significant differences between automatic and manual segmentation for total, oropharyngeal, and velopharyngeal airway volumes. Similarly, no significant differences were noted between the semi-automatic and manual methods across these regions. CONCLUSION It has been observed that both automatic and semi-automatic methods, which utilise open-source software, align effectively with manual segmentation. Implementing these methods can aid in decision-making by allowing faster and easier upper airway segmentation with comparable accuracy in orthodontic practice.
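As a concrete illustration of the overlap and surface metrics reported above, the sketch below computes a Dice similarity coefficient and a 95% Hausdorff distance between two binary airway masks with NumPy and SciPy. The array names, voxel spacing, and random masks are hypothetical placeholders, not data from the study.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def hausdorff_95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance (in mm) between mask surfaces."""
    sp, sg = surface_voxels(pred), surface_voxels(gt)
    d_to_sg = distance_transform_edt(~sg, sampling=spacing)  # distance to nearest GT surface voxel
    d_to_sp = distance_transform_edt(~sp, sampling=spacing)  # distance to nearest predicted surface voxel
    dists = np.hstack([d_to_sg[sp], d_to_sp[sg]])
    return float(np.percentile(dists, 95))

# Hypothetical usage with random masks standing in for airway segmentations.
rng = np.random.default_rng(0)
gt_mask = rng.random((64, 64, 64)) > 0.5
pred_mask = rng.random((64, 64, 64)) > 0.5
print(dice_coefficient(pred_mask, gt_mask), hausdorff_95(pred_mask, gt_mask, (0.3, 0.3, 0.3)))
```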
Affiliations
- Yağızalp Süküt: Department of Orthodontics, Gülhane Faculty of Dentistry, University of Health Sciences, Ankara 06010, Turkey
- Ebru Yurdakurban: Department of Orthodontics, Faculty of Dentistry, Muğla Sıtkı Koçman University, Muğla 48000, Turkey
- Gökhan Serhat Duran: Department of Orthodontics, Faculty of Dentistry, Çanakkale 18 March University, Çanakkale 17000, Turkey
3
|
Dot G, Chaurasia A, Dubois G, Savoldelli C, Haghighat S, Azimian S, Taramsari AR, Sivaramakrishnan G, Issa J, Dubey A, Schouman T, Gajny L. DentalSegmentator: Robust open source deep learning-based CT and CBCT image segmentation. J Dent 2024; 147:105130. [PMID: 38878813 DOI: 10.1016/j.jdent.2024.105130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2024] [Revised: 06/08/2024] [Accepted: 06/12/2024] [Indexed: 06/30/2024] Open
Abstract
OBJECTIVES Segmentation of anatomical structures on dento-maxillo-facial (DMF) computed tomography (CT) or cone beam computed tomography (CBCT) scans is increasingly needed in digital dentistry. The main aim of this research was to propose and evaluate a novel open source tool called DentalSegmentator for fully automatic segmentation of five anatomical structures on DMF CT and CBCT scans: maxilla/upper skull, mandible, upper teeth, lower teeth, and the mandibular canal. METHODS A retrospective sample of 470 CT and CBCT scans was used as a training/validation set. The performance and generalizability of the tool were evaluated by comparing segmentations provided by experts and automatic segmentations in two hold-out test datasets: an internal dataset of 133 CT and CBCT scans acquired before orthognathic surgery and an external dataset of 123 CBCT scans randomly sampled from routine examinations in 5 institutions. RESULTS The mean overall results in the internal test dataset (n = 133) were a Dice similarity coefficient (DSC) of 92.2 ± 6.3 % and a normalised surface distance (NSD) of 98.2 ± 2.2 %. The mean overall results on the external test dataset (n = 123) were a DSC of 94.2 ± 7.4 % and an NSD of 98.4 ± 3.6 %. CONCLUSIONS The results obtained from this highly diverse dataset demonstrate that this tool can provide fully automatic and robust multiclass segmentation for DMF CT and CBCT scans. To encourage the clinical deployment of DentalSegmentator, the pre-trained nnU-Net model has been made publicly available along with an extension for the 3D Slicer software. CLINICAL SIGNIFICANCE The DentalSegmentator open-source 3D Slicer extension provides a free, robust, and easy-to-use approach to obtaining patient-specific three-dimensional models from CT and CBCT scans. These models serve various purposes in a digital dentistry workflow, such as visualization, treatment planning, intervention, and follow-up.
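For readers unfamiliar with the normalised surface distance (NSD) reported here alongside Dice, the following sketch implements one common formulation (the fraction of surface points lying within a fixed tolerance of the other surface) with NumPy and SciPy. The tolerance value and mask names are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Boolean map of boundary voxels of a binary mask."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def normalised_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0), tolerance_mm=1.0):
    """Fraction of surface points of each mask lying within `tolerance_mm` of the other surface."""
    sp, sg = surface_voxels(pred), surface_voxels(gt)
    d_to_sg = distance_transform_edt(~sg, sampling=spacing)  # distance (mm) to the GT surface
    d_to_sp = distance_transform_edt(~sp, sampling=spacing)  # distance (mm) to the predicted surface
    within = (d_to_sg[sp] <= tolerance_mm).sum() + (d_to_sp[sg] <= tolerance_mm).sum()
    return within / (sp.sum() + sg.sum())

# Hypothetical usage on two cube-shaped masks, reported as a percentage.
gt = np.zeros((64, 64, 64), dtype=bool);   gt[20:40, 20:40, 20:40] = True
pred = np.zeros((64, 64, 64), dtype=bool); pred[21:41, 20:40, 20:40] = True
print(f"NSD: {100 * normalised_surface_distance(pred, gt, (0.4, 0.4, 0.4)):.1f}%")
```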
Affiliations
- Gauthier Dot: UFR Odontologie, Université Paris Cité, Paris, France; Service de Médecine Bucco-Dentaire, AP-HP, Hôpital Pitié-Salpêtrière, Paris, France; Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
- Akhilanand Chaurasia: Department of Oral Medicine and Radiology, Faculty of Dental Sciences, King George Medical University, Lucknow, Uttar Pradesh, India
- Guillaume Dubois: Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France; Materialise France, Malakoff, France
- Charles Savoldelli: Department of Oral and Maxillofacial Surgery, Head and Neck Institute, University Hospital of Nice, France
- Sara Haghighat: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- Sarina Azimian: Research Committee, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Julien Issa: Department of Diagnostics, Chair of Practical Clinical Dentistry, Poznan University of Medical Sciences, Poznan, Poland; Doctoral School, Poznan University of Medical Sciences, Poznan, Poland
- Abhishek Dubey: Department of Oral Medicine and Radiology, Maharana Pratap Dental College, Kanpur, India
- Thomas Schouman: Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France; AP-HP, Hôpital Pitié-Salpêtrière, Service de Chirurgie Maxillo-Faciale, Médecine Sorbonne Université, Paris, France
- Laurent Gajny: Institut de Biomécanique Humaine Georges Charpak, Arts et Métiers Institute of Technology, Paris, France
4. Shi J, Lin G, Bao R, Zhang Z, Tang J, Chen W, Chen H, Zuo X, Feng Q, Liu S. An automated method for assessing condyle head changes in patients with skeletal class II malocclusion based on Cone-beam CT images. Dentomaxillofac Radiol 2024; 53:325-335. [PMID: 38696751] [PMCID: PMC11211682] [DOI: 10.1093/dmfr/twae017]
Abstract
OBJECTIVES Currently, there is no reliable automated measurement method to study the changes in the condylar process after orthognathic surgery. Therefore, this study proposes an automated method to measure condylar changes in patients with skeletal class II malocclusion following surgical-orthodontic treatment. METHODS Cone-beam CT (CBCT) scans from 48 patients were segmented using the nnU-Net network for automated maxillary and mandibular delineation. Regions unaffected by orthognathic surgery were selectively cropped. Automated registration yielded condylar displacement and volume calculations, each repeated three times for precision. Logistic regression and linear regression were used to analyse the correlation between condylar position changes at different time points. RESULTS The Dice score for the automated segmentation of the condyle was 0.971. The intraclass correlation coefficients (ICCs) for all repeated measurements ranged from 0.93 to 1.00. The automated measurements showed that 83.33% of patients exhibited condylar resorption occurring six months or more after surgery. Logistic regression and linear regression indicated a positive correlation between counterclockwise rotation in the pitch plane and condylar resorption (P < .01), and a positive correlation between the rotational angles in all three planes and changes in condylar volume at six months after surgery (P ≤ .04). CONCLUSIONS This study's automated method for measuring condylar changes shows excellent repeatability. Patients with skeletal class II malocclusion may experience condylar resorption after bimaxillary orthognathic surgery, and this is correlated with counterclockwise rotation in the sagittal plane. ADVANCES IN KNOWLEDGE This study proposes an innovative multi-step registration method based on CBCT and establishes an automated approach for quantitatively measuring condylar changes after orthognathic surgery. This method opens up new possibilities for studying condylar morphology.
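Once the scans are segmented and registered, the displacement and volume quantities described above can be derived directly from the binary condyle masks. The sketch below shows a generic way to do this with NumPy; the voxel spacing, mask names, and the assumption of pre-registered scans are illustrative and not the authors' exact pipeline.

```python
import numpy as np

def mask_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary mask given the voxel spacing in millimetres."""
    return mask.astype(bool).sum() * float(np.prod(spacing_mm))

def centroid_mm(mask: np.ndarray, spacing_mm: tuple) -> np.ndarray:
    """Centroid of a binary mask in millimetres (image coordinate frame)."""
    coords = np.argwhere(mask.astype(bool))
    return coords.mean(axis=0) * np.asarray(spacing_mm)

# Hypothetical pre- and post-operative condyle masks, assumed already rigidly registered
# to a surgery-unaffected reference region so residual motion reflects condylar change only.
spacing = (0.3, 0.3, 0.3)
pre = np.zeros((100, 100, 100), dtype=bool);  pre[40:60, 40:60, 40:60] = True
post = np.zeros((100, 100, 100), dtype=bool); post[42:62, 40:60, 40:60] = True

volume_change = mask_volume_mm3(post, spacing) - mask_volume_mm3(pre, spacing)
displacement = np.linalg.norm(centroid_mm(post, spacing) - centroid_mm(pre, spacing))
print(f"Volume change: {volume_change:.1f} mm^3, centroid displacement: {displacement:.2f} mm")
```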
Affiliations
- Jiayu Shi: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
- Guoye Lin: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Rui Bao: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
- Zhen Zhang: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
- Jin Tang: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
- Wenyue Chen: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
- Hongjin Chen: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
- Xinwei Zuo: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
- Qianjin Feng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Shuguang Liu: Department of Oral and Maxillofacial Surgery, Stomatological Hospital, School of Stomatology, Southern Medical University, Guangzhou 510261, China
5. Elgarba BM, Fontenele RC, Tarce M, Jacobs R. Artificial intelligence serving pre-surgical digital implant planning: A scoping review. J Dent 2024; 143:104862. [PMID: 38336018] [DOI: 10.1016/j.jdent.2024.104862]
Abstract
OBJECTIVES To conduct a scoping review focusing on artificial intelligence (AI) applications in presurgical dental implant planning. Additionally, to assess the automation degree of clinically available pre-surgical implant planning software. DATA AND SOURCES A systematic electronic literature search was performed in five databases (PubMed, Embase, Web of Science, Cochrane Library, and Scopus), along with exploring gray literature web-based resources until November 2023. English-language studies on AI-driven tools for digital implant planning were included based on an independent evaluation by two reviewers. An assessment of automation steps in dental implant planning software available on the market up to November 2023 was also performed. STUDY SELECTION AND RESULTS From an initial 1,732 studies, 47 met eligibility criteria. Within this subset, 39 studies focused on AI networks for anatomical landmark-based segmentation, creating virtual patients. Eight studies were dedicated to AI networks for virtual implant placement. Additionally, a total of 12 commonly available implant planning software applications were identified and assessed for their level of automation in pre-surgical digital implant workflows. Notably, only six of these featured at least one fully automated step in the planning software, with none possessing a fully automated implant planning protocol. CONCLUSIONS AI plays a crucial role in achieving accurate, time-efficient, and consistent segmentation of anatomical landmarks, serving the process of virtual patient creation. Additionally, currently available systems for virtual implant placement demonstrate different degrees of automation. It is important to highlight that, as of now, full automation of this process has not been documented nor scientifically validated. CLINICAL SIGNIFICANCE Scientific and clinical validation of AI applications for presurgical dental implant planning is currently scarce. The present review allows the clinician to identify AI-based automation in presurgical dental implant planning and assess the potential underlying scientific validation.
Affiliations
- Bahaaeldeen M Elgarba: OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt
- Rocharles Cavalcante Fontenele: OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium
- Mihai Tarce: Division of Periodontology & Implant Dentistry, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China & Periodontology and Oral Microbiology, Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Reinhilde Jacobs: OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
6. Gurgel M, Alvarez MA, Aristizabal JF, Baquero B, Gillot M, Al Turkestani N, Miranda F, Castillo AAD, Bianchi J, de Oliveira Ruellas AC, Ioshida M, Yatabe M, Rey D, Prieto J, Cevidanes L. Automated artificial intelligence-based three-dimensional comparison of orthodontic treatment outcomes with and without piezocision surgery. Orthod Craniofac Res 2024; 27:321-331. [PMID: 38009409] [PMCID: PMC10949222] [DOI: 10.1111/ocr.12737]
Abstract
OBJECTIVE(S) This study aims to evaluate the influence of piezocision surgery on orthodontic biomechanics, as well as on the magnitude and direction of tooth movement in the mandibular arch, using novel artificial intelligence (AI)-automated tools. MATERIALS AND METHODS Nineteen patients, who had piezocision performed in the lower arch at the beginning of treatment with the goal of accelerating tooth movement, were compared to 19 patients who did not receive piezocision. Cone beam computed tomography (CBCT) and intraoral scans (IOS) were acquired before and after orthodontic treatment. AI-automated dental tools were used to segment and locate landmarks in dental crowns from IOS and root canals from CBCT scans to quantify 3D tooth movement. Differences in mesial-distal, buccolingual, intrusion, and extrusion linear movements, as well as tooth long axis angulation and rotation, were compared. RESULTS The treatment times for the control and experimental groups were 13.2 ± 5.06 and 13 ± 5.52 months, respectively (P = .176). Overall, anterior and posterior tooth movements presented similar 3D linear and angular changes in both groups. The piezocision group demonstrated greater (P = .01) mesial long axis angulation of the lower right first premolar (4.4 ± 6°) compared with the control group (0.02 ± 4.9°), while the mesial rotation of the same tooth was significantly smaller (P = .008) in the experimental group (0.5 ± 7.8°) than in the control group (8.5 ± 9.8°). CONCLUSION The open-source automated dental tools facilitated the clinicians' assessment of piezocision treatment outcomes. Piezocision surgery prior to orthodontic treatment did not decrease the treatment time and did not influence the orthodontic biomechanics, leading to tooth movements similar to those of conventional treatment.
Affiliations
- Marcela Gurgel: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Baptiste Baquero: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Maxime Gillot: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Najla Al Turkestani: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Felicia Miranda: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Aron Aliaga-Del Castillo: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Jonas Bianchi: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Marcos Ioshida: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Marilia Yatabe: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Diego Rey: Department of Orthodontics, CES University, Medellin, Colombia
- Juan Prieto: Department of Computer Sciences, University of North Carolina, Chapel Hill, North Carolina, USA
- Lucia Cevidanes: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
7. Barone S, Cevidanes L, Miranda F, Gurgel ML, Anchling L, Hutin N, Bianchi J, Goncalves JR, Giudice A. Enhancing skeletal stability and Class III correction through active orthodontist engagement in virtual surgical planning: A voxel-based 3-dimensional analysis. Am J Orthod Dentofacial Orthop 2024; 165:321-331. [PMID: 38010236] [PMCID: PMC10923113] [DOI: 10.1016/j.ajodo.2023.09.016]
Abstract
INTRODUCTION Skeletal stability after bimaxillary surgical correction of Class III malocclusion was investigated through a qualitative and quantitative analysis of the maxilla and the distal and proximal mandibular segments, using a 3-dimensional voxel-based superimposition of virtual surgical predictions, performed by the orthodontist in close communication with the maxillofacial surgeon, and 12-18 month postoperative outcomes. METHODS A comprehensive secondary data analysis was conducted on deidentified preoperative (1 month before surgery [T1]) and 12-18 months postoperative (midterm [T2]) cone-beam computed tomography scans, along with virtual surgical planning (VSP) data obtained by Dolphin Imaging software. The sample for the study consisted of 17 patients (mean age, 24.8 ± 3.5 years). Using 3D Slicer software, automated tools based on deep-learning approaches were used for cone-beam computed tomography orientation, registration, bone segmentation, and landmark identification. Colormaps were generated for qualitative analysis, whereas linear and angular differences between the planned (T1-VSP) and observed (T1-T2) outcomes were calculated for quantitative assessments. Statistical analysis was conducted with a significance level of α = 0.05. RESULTS The midterm surgical outcomes revealed slightly but significantly less maxillary advancement than planned (mean difference, 1.84 ± 1.50 mm; P = 0.004). The repositioning of the mandibular distal segment was stable, with nonsignificant differences in linear (T1-VSP, 1.01 ± 3.66 mm; T1-T2, 0.32 ± 4.17 mm) and angular (T1-VSP, 1.53° ± 1.60°; T1-T2, 1.54° ± 1.50°) displacements (P >0.05). The proximal segments exhibited lateral displacement within 1.5° for both the mandibular right and left ramus at T1-VSP and T1-T2 (P >0.05). CONCLUSIONS The analysis of fully digital planned and surgically repositioned maxilla and mandible revealed excellent precision. In the midterm surgical outcomes of maxillary advancement, a minor deviation from the planned anterior movement was observed.
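The planned-versus-observed comparison described above ultimately reduces to comparing rigid transforms: linear displacement comes from the translation component and angular displacement from the rotation component of the relative transform. The sketch below illustrates this with NumPy; the 4x4 matrices are placeholders, not study data.

```python
import numpy as np

def linear_angular_difference(t_planned: np.ndarray, t_observed: np.ndarray):
    """Return (translation magnitude in mm, rotation angle in degrees) of the relative transform."""
    rel = t_observed @ np.linalg.inv(t_planned)          # transform mapping planned onto observed
    translation_mm = float(np.linalg.norm(rel[:3, 3]))
    # Rotation angle recovered from the trace of the 3x3 rotation block.
    cos_theta = np.clip((np.trace(rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = float(np.degrees(np.arccos(cos_theta)))
    return translation_mm, angle_deg

# Hypothetical transforms: identity plan vs. a 2 mm / 3 degree deviation about the vertical axis.
planned = np.eye(4)
theta = np.radians(3.0)
observed = np.array([[np.cos(theta), -np.sin(theta), 0.0, 2.0],
                     [np.sin(theta),  np.cos(theta), 0.0, 0.0],
                     [0.0,            0.0,           1.0, 0.0],
                     [0.0,            0.0,           0.0, 1.0]])
print(linear_angular_difference(planned, observed))  # approximately (2.0, 3.0)
```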
Affiliations
- Selene Barone: Department of Health Sciences, School of Dentistry, Magna Graecia University of Catanzaro, Catanzaro, Italy
- Lucia Cevidanes: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Felicia Miranda: Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, São Paulo, Brazil
- Marcela Lima Gurgel: Department of Orthodontics and Pediatric Dentistry, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Luc Anchling: Chemistry and Chemical Engineering School - Digital Sciences School Lyon, Lyon, France
- Nathan Hutin: Chemistry and Chemical Engineering School - Digital Sciences School Lyon, Lyon, France
- Jonas Bianchi: Department of Orthodontics, Arthur A. Dugoni School of Dentistry, University of the Pacific, San Francisco, California, USA
- Joao Roberto Goncalves: Department of Pediatric Dentistry, School of Dentistry, São Paulo State University, Araraquara, São Paulo, Brazil
- Amerigo Giudice: Department of Health Sciences, School of Dentistry, Magna Graecia University of Catanzaro, Catanzaro, Italy
8. Requist MR, Mills MK, Carroll KL, Lenz AL. Quantitative Skeletal Imaging and Image-Based Modeling in Pediatric Orthopaedics. Curr Osteoporos Rep 2024; 22:44-55. [PMID: 38243151] [DOI: 10.1007/s11914-023-00845-z]
Abstract
PURPOSE OF REVIEW Musculoskeletal imaging serves a critical role in clinical care and orthopaedic research. Image-based modeling is also gaining traction as a useful tool in understanding skeletal morphology and mechanics. However, there are fewer studies on advanced imaging and modeling in pediatric populations. The purpose of this review is to provide an overview of recent literature on skeletal imaging modalities and modeling techniques with a special emphasis on current and future uses in pediatric research and clinical care. RECENT FINDINGS While many principles of imaging and 3D modeling are relevant across the lifespan, there are special considerations for pediatric musculoskeletal imaging and fewer studies of 3D skeletal modeling in pediatric populations. Improved understanding of bone morphology and growth during childhood in healthy and pathologic patients may provide new insight into the pathophysiology of pediatric-onset skeletal diseases and the biomechanics of bone development. Clinical translation of 3D modeling tools developed in orthopaedic research is limited by the requirement for manual image segmentation and the resources needed for segmentation, modeling, and analysis. This paper highlights the current and future uses of common musculoskeletal imaging modalities and 3D modeling techniques in pediatric orthopaedic clinical care and research.
Affiliations
- Melissa R Requist: Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT 84108, USA; Department of Biomedical Engineering, University of Utah, 36 S Wasatch Dr., Salt Lake City, UT 84112, USA
- Megan K Mills: Department of Radiology and Imaging Sciences, University of Utah, 30 N Mario Capecchi Dr. 2 South, Salt Lake City, UT 84112, USA
- Kristen L Carroll: Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT 84108, USA; Shriners Hospital for Children, 1275 E Fairfax Rd, Salt Lake City, UT 84103, USA
- Amy L Lenz: Department of Orthopaedics, University of Utah, 590 Wakara Way, Salt Lake City, UT 84108, USA; Department of Biomedical Engineering, University of Utah, 36 S Wasatch Dr., Salt Lake City, UT 84112, USA
9. Bencherqui S, Barone S, Cevidanes L, Perrin JP, Corre P, Bertin H. 3D analysis of condylar and mandibular remodeling one year after intra-oral ramus vertical lengthening osteotomy. Clin Oral Investig 2024; 28:114. [PMID: 38267793] [PMCID: PMC10904022] [DOI: 10.1007/s00784-024-05504-w]
Abstract
OBJECTIVES Among the existing techniques for the correction of mandibular posterior vertical insufficiency (PVI), the intra-oral ramus vertical lengthening osteotomy (IORVLO) can be proposed as it allows simultaneous correction of mandibular height and retrusion. This study assessed the 3D morpho-anatomical changes of the ramus-condyle unit and occlusal stability after IORVLO. MATERIALS AND METHODS This retrospective analysis compared immediate and 1-year post-operative 3D CBCT reconstructions. The analysis focused on the condylar height (primary endpoint) and on the changes in condylar (condylar diameter, condylar axis angle) and mandibular (ramus height, Frankfort-mandibular plane angle, gonion position, intergonial distance, angular remodeling) parameters. Additionally, this analysis investigated the maxillary markers and occlusal stability. RESULTS Across the 38 condyles studied in the 21 included patients (mean age 23.7 ± 3.9 years), a condylar height (CH) loss of 0.66 mm (p < 0.03) was observed, with no correlation with the degree of ramus lengthening (mean 13.3 ± 0.76 mm). Only one patient presented an occlusal relapse to Class II, with a loss of 1 mm in CH, a 3.4 mm (28%) loss in condylar diameter, and a 33% reduction in condylar volume. A mean 3.56 mm (p < 0.001) decrease in ramus height was noted, mainly due to bone resorption in the mandibular angles. CONCLUSION This study confirms the overall stability obtained with IORVLO for the correction of PVI. CLINICAL RELEVANCE This study helps refine the indications for IORVLO and supports the clinical and anatomical stability of its results.
Affiliations
- Samy Bencherqui: Nantes Université, CHU Nantes, Service de Chirurgie Maxillo-Faciale et Stomatologie, 44000 Nantes, France
- Selene Barone: School of Dentistry, Department of Health Sciences, Magna Graecia University of Catanzaro, Viale Europa, 88100 Catanzaro, Italy
- Lucia Cevidanes: Department of Orthodontics and Pediatric Dentistry, University of Michigan, Ann Arbor, MI, USA
- Jean-Philippe Perrin: Nantes Université, CHU Nantes, Service de Chirurgie Maxillo-Faciale et Stomatologie, 44000 Nantes, France
- Pierre Corre: Nantes Université, CHU Nantes, Service de Chirurgie Maxillo-Faciale et Stomatologie, 44000 Nantes, France; Nantes Université, Oniris, Univ Angers, CHU Nantes, INSERM, Regenerative Medicine and Skeleton, RMeS, UMR 1229, 44000 Nantes, France
- Hélios Bertin: Nantes Université, CHU Nantes, Service de Chirurgie Maxillo-Faciale et Stomatologie, 44000 Nantes, France; Nantes Université, Oniris, Univ Angers, CHU Nantes, INSERM, Regenerative Medicine and Skeleton, RMeS, UMR 1229, 44000 Nantes, France; Nantes Université, Univ Angers, CHU Nantes, INSERM, CNRS, CRCI2NA, 44000 Nantes, France
10. Wu Z, Liu M, Pang Y, Deng L, Yang Y, Wu Y. A Comparative Study of Deep Learning Dose Prediction Models for Cervical Cancer Volumetric Modulated Arc Therapy. Technol Cancer Res Treat 2024; 23:15330338241242654. [PMID: 38584413] [PMCID: PMC11005497] [DOI: 10.1177/15330338241242654]
Abstract
Purpose: Deep learning (DL) is widely used for dose prediction in radiation oncology, but comparisons of multiple DL techniques are often lacking in the literature. This study compares the performance of four state-of-the-art DL models in predicting the voxel-level dose distribution for cervical cancer volumetric modulated arc therapy (VMAT). Methods and Materials: A total of 261 cervical cancer treatment plans were retrieved in this retrospective study. A three-channel feature map, consisting of a planning target volume (PTV) mask, organs-at-risk (OARs) mask, and CT image, was fed into the three-dimensional (3D) U-Net and its 3 variants. The data set was randomly divided into 80% for training-validation and 20% for testing. Model performance was evaluated on the 52 test patients by comparing the generated dose distributions against the clinically approved ground truth (GT) using the mean absolute error (MAE), dose map difference (GT minus predicted), clinical dosimetric indices, and Dice similarity coefficients (DSC). Results: The 3D U-Net and its 3 variants exhibited promising performance, with a maximum MAE within the PTV of 0.83% ± 0.67% for the UNETR model. The maximum MAE among the OARs was for the left femoral head, which reached 6.95% ± 6.55%. For the body, the maximum MAE was observed for UNETR (1.19% ± 0.86%), and the minimum MAE was 0.94% ± 0.85% for 3D U-Net. The average error of the Dmean difference for the different OARs was within 2.5 Gy. The average error of the V40 difference for the bladder and rectum was about 5%. The mean DSC under different isodose volumes was above 90%. Conclusions: DL models can accurately predict the voxel-level dose distribution for cervical cancer VMAT treatment plans. All models demonstrated nearly analogous performance for voxel-wise dose prediction. Considering all voxels within the body, 3D U-Net showed the best performance. These state-of-the-art DL models are of great significance for further clinical applications of cervical cancer VMAT.
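A minimal sketch of how the region-wise MAE reported above is typically computed: the absolute dose difference is averaged inside a structure mask and expressed as a percentage of the prescription dose. The arrays, prescription value, and mask names below are hypothetical, not values from the study.

```python
import numpy as np

def masked_mae_percent(pred_dose: np.ndarray, gt_dose: np.ndarray,
                       mask: np.ndarray, prescription_gy: float) -> float:
    """Mean absolute dose error inside `mask`, as a percentage of the prescription dose."""
    diff = np.abs(pred_dose - gt_dose)[mask.astype(bool)]
    return 100.0 * float(diff.mean()) / prescription_gy

# Hypothetical dose grids (Gy) and a PTV mask standing in for a clinical plan.
rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 50.0, size=(64, 64, 64))
pred = gt + rng.normal(0.0, 0.4, size=gt.shape)
ptv = np.zeros_like(gt, dtype=bool); ptv[20:40, 20:40, 20:40] = True
print(f"PTV MAE: {masked_mae_percent(pred, gt, ptv, prescription_gy=45.0):.2f}%")
```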
Affiliations
- Zhe Wu: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Army Medical University (Third Military Medical University), Chongqing, China; Department of Radiation Oncology, Zigong Disease Prevention and Control Center Mental Health Center, Zigong First People's Hospital, Zigong, Sichuan, China
- Mujun Liu: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Army Medical University (Third Military Medical University), Chongqing, China
- Ya Pang: Department of Radiation Oncology, Zigong Disease Prevention and Control Center Mental Health Center, Zigong First People's Hospital, Zigong, Sichuan, China
- Lihua Deng: Department of Radiology, The First Affiliated Hospital of the Army Medical University, Chongqing, China
- Yi Yang: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Army Medical University (Third Military Medical University), Chongqing, China
- Yi Wu: Department of Digital Medicine, School of Biomedical Engineering and Medical Imaging, Army Medical University (Third Military Medical University), Chongqing, China
11. Anchling L, Hutin N, Huang Y, Barone S, Roberts S, Miranda F, Gurgel M, Al Turkestani N, Tinawi S, Bianchi J, Yatabe M, Ruellas A, Prieto JC, Cevidanes L. Automated Orientation and Registration of Cone-Beam Computed Tomography Scans. In: Clinical Image-Based Procedures, Fairness of AI in Medical Imaging, and Ethical and Philosophical Issues in Medical Imaging: 12th International Workshop, CLIP 2023, 1st International Workshop, FAIMI 2023, and 2nd International Workshop, ... 2023; 14242:43-58. [PMID: 38770027] [PMCID: PMC11104011] [DOI: 10.1007/978-3-031-45249-9_5]
Abstract
Automated clinical decision support systems rely on accurate analysis of three-dimensional (3D) medical and dental images to assist clinicians in diagnosis, treatment planning, intervention, and assessment of growth and treatment effects. However, analyzing longitudinal 3D images requires standardized orientation and registration, which can be laborious and error-prone tasks dependent on structures of reference for registration. This paper proposes two novel tools to automatically perform the orientation and registration of 3D Cone-Beam Computed Tomography (CBCT) scans with high accuracy (<3° and <2mm of angular and linear errors when compared to expert clinicians). These tools have undergone rigorous testing, and are currently being evaluated by clinicians who utilize the 3D Slicer open-source platform. Our work aims to reduce the sources of error in the 3D medical image analysis workflow by automating these operations. These methods combine conventional image processing approaches and Artificial Intelligence (AI) based models trained and tested on de-identified CBCT volumetric images. Our results showed robust performance for standardized and reproducible image orientation and registration that provide a more complete understanding of individual patient facial growth and response to orthopedic treatment in less than 5 min.
Affiliations
- Luc Anchling: University of Michigan, Ann Arbor, MI, USA; CPE Lyon, Lyon, France
- Nathan Hutin: University of Michigan, Ann Arbor, MI, USA; CPE Lyon, Lyon, France
- Selene Barone: University of Michigan, Ann Arbor, MI, USA; Magna Graecia University of Catanzaro, Catanzaro, Italy
- Sophie Roberts: Department of Orthodontics, University of Melbourne, Melbourne, Australia
- Felicia Miranda: University of Michigan, Ann Arbor, MI, USA; Bauru Dental School, University of Sao Paulo, Bauru, SP, Brazil
- Najla Al Turkestani: University of Michigan, Ann Arbor, MI, USA; King Abdulaziz University, Jeddah, Saudi Arabia
- Jonas Bianchi: University of Michigan, Ann Arbor, MI, USA; University of the Pacific, San Francisco, USA
- Antonio Ruellas: Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
12. Miranda F, Choudhari V, Barone S, Anchling L, Hutin N, Gurgel M, Al Turkestani N, Yatabe M, Bianchi J, Aliaga-Del Castillo A, Zupelari-Gonçalves P, Edwards S, Garib D, Cevidanes L, Prieto J. Interpretable artificial intelligence for classification of alveolar bone defect in patients with cleft lip and palate. Sci Rep 2023; 13:15861. [PMID: 37740091] [PMCID: PMC10516946] [DOI: 10.1038/s41598-023-43125-7]
Abstract
Cleft lip and/or palate (CLP) is the most common congenital craniofacial anomaly and requires bone grafting of the alveolar cleft. This study aimed to develop a novel classification algorithm to assess the severity of alveolar bone defects in patients with CLP using three-dimensional (3D) surface models and to demonstrate, through an interpretable artificial intelligence (AI)-based algorithm, the decisions provided by the classifier. Cone-beam computed tomography scans of 194 patients with CLP were used to train and test the performance of an automatic classification of the severity of alveolar bone defect. The shape, height, and width of the alveolar bone defect were assessed in automatically segmented maxillary 3D surface models to determine the ground truth classification index of its severity. The novel classifier algorithm renders the 3D surface models from different viewpoints and captures 2D image snapshots that are fed into a 2D convolutional neural network. An interpretable AI algorithm was developed that uses features from each view, aggregated via attention layers, to explain the classification. The precision, recall, and F1 score were 0.823, 0.816, and 0.817, respectively, with agreement ranging from 97.4% to 100% on the severity index within a one-group difference. The new classifier and interpretable AI algorithm showed satisfactory accuracy in classifying the severity of alveolar bone defect morphology using 3D surface models of patients with CLP, while graphically displaying the features that were considered in the deep learning model's classification decision.
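To make the multi-view idea concrete, here is a minimal PyTorch sketch of a classifier that embeds several 2D snapshots of a 3D surface with a shared CNN and aggregates them with a learned attention layer. The architecture sizes are arbitrary assumptions, and the per-view attention weights are simply what such a model can expose for interpretation; this is not the authors' exact network.

```python
import torch
import torch.nn as nn

class MultiViewAttentionClassifier(nn.Module):
    """Shared 2D encoder over V rendered views, attention-weighted pooling, then classification."""
    def __init__(self, num_classes: int = 4, embed_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(                      # tiny CNN stand-in for a 2D backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.attention = nn.Linear(embed_dim, 1)           # one score per view
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, views: torch.Tensor):
        # views: (batch, num_views, 3, H, W) rendered snapshots of the 3D surface model.
        b, v, c, h, w = views.shape
        feats = self.encoder(views.view(b * v, c, h, w)).view(b, v, -1)   # (B, V, D)
        weights = torch.softmax(self.attention(feats), dim=1)             # (B, V, 1)
        pooled = (weights * feats).sum(dim=1)                             # attention-weighted embedding
        return self.head(pooled), weights.squeeze(-1)  # logits plus per-view attention for interpretation

# Hypothetical batch: 2 cases, 8 rendered viewpoints each.
model = MultiViewAttentionClassifier()
logits, view_attention = model(torch.randn(2, 8, 3, 128, 128))
print(logits.shape, view_attention.shape)  # torch.Size([2, 4]) torch.Size([2, 8])
```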
Affiliations
- Felicia Miranda: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, SP, Brazil
- Vishakha Choudhari: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Selene Barone: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; Department of Health Science, School of Dentistry, Magna Graecia University of Catanzaro, Catanzaro, Italy
- Luc Anchling: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; CPE Lyon, Lyon, France
- Nathan Hutin: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; CPE Lyon, Lyon, France
- Marcela Gurgel: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Najla Al Turkestani: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA; Department of Restorative and Aesthetic Dentistry, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Marilia Yatabe: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Jonas Bianchi: Department of Orthodontics, University of the Pacific, Arthur A. Dugoni School of Dentistry, San Francisco, CA, USA
- Aron Aliaga-Del Castillo: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Paulo Zupelari-Gonçalves: Department of Oral and Maxillofacial Surgery, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Sean Edwards: Department of Oral and Maxillofacial Surgery, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Daniela Garib: Department of Orthodontics, Bauru Dental School, University of São Paulo, Bauru, SP, Brazil; Department of Orthodontics, Hospital for Rehabilitation of Craniofacial Anomalies, University of São Paulo, Bauru, SP, Brazil
- Lucia Cevidanes: Department of Orthodontics and Pediatric Dentistry, University of Michigan School of Dentistry, Ann Arbor, MI, USA
- Juan Prieto: Department of Psychiatry, University of North Carolina, Chapel Hill, NC, USA
13. Kang D. Evaluating the Accuracy and Reliability of Blowout Fracture Area Measurement Methods: A Review and the Potential Role of Artificial Intelligence. J Craniofac Surg 2023; 34:1834-1836. [PMID: 37322582] [DOI: 10.1097/scs.0000000000009486]
Abstract
Blowout fractures are a common type of facial injury that requires accurate measurement of the fracture area for proper treatment planning. This systematic review aimed to summarize and evaluate the current methods for measuring blowout fracture areas and explore the potential role of artificial intelligence (AI) in enhancing accuracy and reliability. A comprehensive search of the PubMed database was conducted, focusing on studies published since 2000 that investigated methods for measuring blowout fracture area using computed tomography scans. The review included 20 studies, and the results showed that automatic methods, such as computer-aided measurements and computed tomography-based volumetric analysis, provide higher accuracy and reliability compared with manual and semiautomatic techniques. Standardizing the method for measuring blowout fracture areas can improve clinical decision-making and facilitate outcome comparison across studies. Future research should focus on developing AI models that can account for multiple factors, including fracture area and herniated tissue volume, to enhance their accuracy and reliability. Integration of AI models has the potential to improve clinical decision-making and patient outcomes in the assessment and management of blowout fractures.
Affiliations
- Daihun Kang: Department of Plastic and Reconstructive Surgery, Catholic Kwandong University International Saint Mary's Hospital, Seo-gu, Incheon, Republic of Korea
14. Tao B, Yu X, Wang W, Wang H, Chen X, Wang F, Wu Y. A deep learning-based automatic segmentation of zygomatic bones from cone-beam computed tomography images: A proof of concept. J Dent 2023:104582. [PMID: 37321334] [DOI: 10.1016/j.jdent.2023.104582]
Abstract
OBJECTIVES To investigate the efficiency and accuracy of a deep learning-based automatic segmentation method for zygomatic bones from cone-beam computed tomography (CBCT) images. METHODS One hundred thirty CBCT scans were included and randomly divided into three subsets (training, validation, and test) in a 6:2:2 ratio. A deep learning-based model was developed, and it included a classification network and a segmentation network, where an edge supervision module was added to increase the attention of the edges of zygomatic bones. Attention maps were generated by the Grad-CAM and Guided Grad-CAM algorithms to improve the interpretability of the model. The performance of the model was then compared with that of four dentists on 10 CBCT scans from the test dataset. A p value <.05 was considered statistically significant. RESULTS The accuracy of the classification network was 99.64%. The Dice coefficient (Dice) of the deep learning-based model for the test dataset was 92.34 ± 2.04%, the average surface distance (ASD) was 0.1 ± 0.15 mm, and the 95% Hausdorff distance (HD) was 0.98 ± 0.42 mm. The model required 17.03 seconds on average to segment zygomatic bones, whereas this task took 49.3 minutes for dentists to complete. The Dice score of the model for the 10 CBCT scans was 93.2 ± 1.3%, while that of the dentists was 90.37 ± 3.32%. CONCLUSIONS The proposed deep learning-based model could segment zygomatic bones with high accuracy and efficiency compared with those of dentists. CLINICAL SIGNIFICANCE The proposed automatic segmentation model for zygomatic bone could generate an accurate 3D model for the preoperative digital planning of zygoma reconstruction, orbital surgery, zygomatic implant surgery, and orthodontics.
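Edge supervision, as mentioned above, is commonly implemented by deriving a boundary map from the ground-truth mask and adding an auxiliary loss on an edge-prediction branch. The PyTorch sketch below shows one generic way of doing this; it is not the authors' specific module, and the morphological-gradient edge extraction and loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def mask_edges(mask: torch.Tensor, kernel: int = 3) -> torch.Tensor:
    """Boundary map of a binary mask (B, 1, H, W) via a morphological gradient."""
    pad = kernel // 2
    dilated = F.max_pool2d(mask, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel, stride=1, padding=pad)
    return (dilated - eroded).clamp(0, 1)

def seg_loss_with_edge_supervision(seg_logits, edge_logits, gt_mask, edge_weight=0.5):
    """Dice-style region loss plus an auxiliary BCE loss on the mask boundary."""
    probs = torch.sigmoid(seg_logits)
    inter = (probs * gt_mask).sum()
    dice_loss = 1.0 - (2.0 * inter + 1.0) / (probs.sum() + gt_mask.sum() + 1.0)
    edge_target = mask_edges(gt_mask)
    edge_loss = F.binary_cross_entropy_with_logits(edge_logits, edge_target)
    return dice_loss + edge_weight * edge_loss

# Hypothetical outputs from a two-headed network (segmentation head + edge head).
seg_logits = torch.randn(2, 1, 64, 64)
edge_logits = torch.randn(2, 1, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(seg_loss_with_edge_supervision(seg_logits, edge_logits, gt).item())
```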
Affiliations
- Baoxin Tao: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xinbo Yu: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Wenying Wang: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Haowei Wang: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Room 805, Dongchuan Road 800, Minhang District, Shanghai 200240, China
- Feng Wang: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Yiqun Wu: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
15. A Semi-Automatic Approach for Holistic 3D Assessment of Temporomandibular Joint Changes. J Pers Med 2023; 13:343. [PMID: 36836577] [PMCID: PMC9959062] [DOI: 10.3390/jpm13020343]
Abstract
The literature lacks a reliable holistic approach for the three-dimensional (3D) assessment of the temporomandibular joint (TMJ) including all three adaptive processes, which are believed to contribute to the position of the mandible: (1) adaptive condylar changes, (2) glenoid fossa changes, and (3) condylar positional changes within the fossa. Hence, the purpose of the present study was to propose and assess the reliability of a semi-automatic approach for a 3D assessment of the TMJ from cone-beam computed tomography (CBCT) following orthognathic surgery. The TMJs were 3D reconstructed from a pair of superimposed pre- and postoperative (two years) CBCT scans, and spatially divided into sub-regions. The changes in the TMJ were calculated and quantified by morphovolumetrical measurements. To evaluate the reliability, intra-class correlation coefficients (ICC) were calculated at a 95% confidence interval on the measurements of two observers. The approach was deemed reliable if the ICC was good (>0.60). Pre- and postoperative CBCT scans of ten subjects (nine female; one male; mean age 25.6 years) with class II malocclusion and maxillomandibular retrognathia, who underwent bimaxillary surgery, were assessed. The inter-observer reliability of the measurements on the sample of twenty TMJs was good to excellent (ICC range, 0.71-1.00). The mean absolute differences of the repeated inter-observer condylar volume measurements, condylar distance measurements, glenoid fossa surface distance measurements, and changes in minimum joint space distance ranged from 1.68% (1.58) to 5.01% (3.85), 0.09 mm (0.12) to 0.25 mm (0.46), 0.05 mm (0.05) to 0.08 mm (0.06), and 0.12 mm (0.09) to 0.19 mm (0.18), respectively. The proposed semi-automatic approach demonstrated good to excellent reliability for the holistic 3D assessment of the TMJ including all three adaptive processes.
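The reliability analysis above rests on intra-class correlation coefficients. As a worked illustration, the sketch below computes the two-way random effects, absolute-agreement, single-measurement ICC (ICC(2,1), a variant commonly used for inter-observer agreement) from a subjects-by-raters matrix. The example data are made up, and whether ICC(2,1) matches the exact model used in the study is an assumption.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, k_raters) array with no missing values.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)       # per-subject means
    col_means = ratings.mean(axis=0)       # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_error = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical measurements of 20 TMJs by two observers (e.g., condylar volume change in %).
rng = np.random.default_rng(1)
truth = rng.normal(3.0, 1.5, size=20)
ratings = np.column_stack([truth + rng.normal(0, 0.2, 20), truth + rng.normal(0, 0.2, 20)])
print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
```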
16. Chen C, Qi S, Zhou K, Lu T, Ning H, Xiao R. Pairwise attention-enhanced adversarial model for automatic bone segmentation in CT images. Phys Med Biol 2023; 68. [PMID: 36634367] [DOI: 10.1088/1361-6560/acb2ab]
Abstract
Objective. Bone segmentation is a critical step in screw placement navigation. Although deep learning methods have driven rapid progress in bone segmentation, separating individual bones locally remains challenging because of irregular shapes and similar representational features. Approach. In this paper, we propose the pairwise attention-enhanced adversarial model (Pair-SegAM) for automatic bone segmentation in computed tomography images, which consists of two parts: a segmentation model and a discriminator. Considering that the distributions of the predictions from the segmentation model contain complicated semantics, we improve the discriminator to strengthen its awareness of the target region, improving the parsing of semantic features. Pair-SegAM has a pairwise structure that uses two calculation mechanisms to construct pairwise attention maps, and semantic fusion is then used to filter unstable regions. The improved discriminator therefore provides more refined information to capture the bone outline, effectively enhancing the segmentation model. Main results. To test Pair-SegAM, we selected two bone datasets for assessment. We evaluated our method against several bone segmentation models and the latest adversarial models on both datasets. The experimental results show that our method not only exhibits superior bone segmentation performance but also generalizes effectively. Significance. Our method provides more efficient segmentation of specific bones and has the potential to be extended to other semantic segmentation domains.
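For orientation, adversarial segmentation of this kind alternates updates between a segmentation network and a discriminator that judges whether an (image, mask) pair comes from the ground truth or from the model. The PyTorch sketch below shows only that generic training pattern; the pairwise attention maps and semantic fusion described in the paper are not reproduced, and all network definitions are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder networks: any segmentation backbone / patch discriminator could be substituted.
seg_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 1))
disc = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
opt_s = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(ct: torch.Tensor, gt_mask: torch.Tensor, adv_weight: float = 0.01):
    # 1) Discriminator: real (image, GT mask) pairs vs. fake (image, predicted mask) pairs.
    with torch.no_grad():
        fake_mask = torch.sigmoid(seg_net(ct))
    d_real = disc(torch.cat([ct, gt_mask], dim=1))
    d_fake = disc(torch.cat([ct, fake_mask], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Segmentation network: supervised loss plus an adversarial term that tries to fool the discriminator.
    logits = seg_net(ct)
    pred = torch.sigmoid(logits)
    seg_loss = bce(logits, gt_mask)
    adv_loss = bce(disc(torch.cat([ct, pred], dim=1)), torch.ones_like(d_real))
    s_loss = seg_loss + adv_weight * adv_loss
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()

# Hypothetical mini-batch of CT slices and bone masks.
print(train_step(torch.randn(2, 1, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float()))
```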
Affiliations
- Cheng Chen: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Siyu Qi: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Kangneng Zhou: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Tong Lu: Visual 3D Medical Science and Technology Development Co. Ltd, Beijing 100082, People's Republic of China
- Huansheng Ning: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China
- Ruoxiu Xiao: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, People's Republic of China; Shunde Innovation School, University of Science and Technology Beijing, Foshan 100024, People's Republic of China
17. Zhou H, Li H, Chen S, Yang S, Ruan G, Liu L, Chen H. BSMM-Net: Multi-modal neural network based on bilateral symmetry for nasopharyngeal carcinoma segmentation. Front Hum Neurosci 2023; 16:1068713. [PMID: 36704094] [PMCID: PMC9872196] [DOI: 10.3389/fnhum.2022.1068713]
Abstract
Introduction: Automatically and accurately delineating primary nasopharyngeal carcinoma (NPC) tumors in head magnetic resonance imaging (MRI) is crucial for patient staging and radiotherapy. Inspired by the bilateral symmetry of the head and the complementary information of different modalities, a multi-modal neural network named BSMM-Net is proposed for NPC segmentation. Methods: First, a bilaterally symmetrical patch block (BSP) is used to crop the image and its bilaterally flipped copy into patches. The BSP improves the precision of locating NPC lesions and mimics how radiologists use left-right differences of the head to locate tumors in clinical practice. Second, modality-specific and multi-modal fusion features (MSMFFs) are extracted by the proposed MSMFF encoder to fully utilize the complementary information of T1- and T2-weighted MRI. The MSMFFs are then fed into the base decoder to aggregate representative features and precisely delineate the NPC. The MSMFF is the output of the MSMFF encoder blocks, which consist of six modality-specific networks and one multi-modal fusion network; apart from T1 and T2, the other four modalities are generated from T1 and T2 by the BSP and the DT modal generation block. Third, an MSMFF decoder with a structure similar to the MSMFF encoder is deployed to supervise the encoder during training and ensure the validity of the MSMFF from the encoder. Finally, experiments are conducted on a dataset of 7633 samples collected from 745 patients. Results and discussion: The global DICE, precision, recall, and IoU on the testing set are 0.82, 0.82, 0.86, and 0.72, respectively. The results show that the proposed model outperforms other state-of-the-art methods for NPC segmentation. In clinical diagnosis, BSMM-Net can give a precise delineation of the NPC, which can be used to plan radiotherapy.
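The bilateral-symmetry idea above can be illustrated in a few lines of PyTorch: flip each MRI volume across the left-right axis and feed the flipped copy (and optionally the difference with the original) as extra channels, so the network sees left-right asymmetries explicitly. This is a simplified stand-in for the paper's BSP block, and the axis ordering and channel layout are assumptions.

```python
import torch

def add_bilateral_channels(volume: torch.Tensor, lr_dim: int = -1) -> torch.Tensor:
    """Stack the original volume, its left-right mirrored copy, and their difference as channels.

    volume: (B, 1, D, H, W) single-modality MRI; `lr_dim` is the left-right axis (assumed last here).
    Returns a (B, 3, D, H, W) tensor exposing bilateral asymmetry to the network.
    """
    mirrored = torch.flip(volume, dims=[lr_dim])
    return torch.cat([volume, mirrored, volume - mirrored], dim=1)

# Hypothetical T1 batch: asymmetries (e.g., a unilateral tumour) show up in the third channel.
t1 = torch.randn(2, 1, 16, 64, 64)
print(add_bilateral_channels(t1).shape)  # torch.Size([2, 3, 16, 64, 64])
```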
Affiliations
- Haoyang Zhou: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China; School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, Guangxi, China
- Haojiang Li: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
- Shuchao Chen: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
- Shixin Yang: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China
- Guangying Ruan: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
- Lizhi Liu: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou, Guangdong, China
- Hongbo Chen: School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, China