1
Dot G, Chaurasia A, Dubois G, Savoldelli C, Haghighat S, Azimian S, Taramsari AR, Sivaramakrishnan G, Issa J, Dubey A, Schouman T, Gajny L. DentalSegmentator: Robust open source deep learning-based CT and CBCT image segmentation. J Dent 2024; 147:105130. PMID: 38878813. DOI: 10.1016/j.jdent.2024.105130.
Abstract
OBJECTIVES Segmentation of anatomical structures on dento-maxillo-facial (DMF) computed tomography (CT) or cone beam computed tomography (CBCT) scans is increasingly needed in digital dentistry. The main aim of this research was to propose and evaluate a novel open source tool called DentalSegmentator for fully automatic segmentation of five anatomical structures on DMF CT and CBCT scans: maxilla/upper skull, mandible, upper teeth, lower teeth, and the mandibular canal. METHODS A retrospective sample of 470 CT and CBCT scans was used as a training/validation set. The performance and generalizability of the tool were evaluated by comparing segmentations provided by experts and automatic segmentations in two hold-out test datasets: an internal dataset of 133 CT and CBCT scans acquired before orthognathic surgery and an external dataset of 123 CBCT scans randomly sampled from routine examinations in 5 institutions. RESULTS The mean overall results in the internal test dataset (n = 133) were a Dice similarity coefficient (DSC) of 92.2 ± 6.3 % and a normalised surface distance (NSD) of 98.2 ± 2.2 %. The mean overall results on the external test dataset (n = 123) were a DSC of 94.2 ± 7.4 % and an NSD of 98.4 ± 3.6 %. CONCLUSIONS The results obtained from this highly diverse dataset demonstrate that this tool can provide fully automatic and robust multiclass segmentation of DMF CT and CBCT scans. To encourage the clinical deployment of DentalSegmentator, the pre-trained nnU-Net model has been made publicly available along with an extension for the 3D Slicer software. CLINICAL SIGNIFICANCE The DentalSegmentator open source 3D Slicer extension provides a free, robust, and easy-to-use approach to obtaining patient-specific three-dimensional models from CT and CBCT scans. These models serve various purposes in a digital dentistry workflow, such as visualization, treatment planning, intervention, and follow-up.
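The DSC figures reported in this and the following entries are overlap statistics on binary label masks. A minimal sketch of the computation (function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D example standing in for a 3D CBCT label volume
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # prints 0.8
```

The NSD reported alongside it is a surface-based rather than volume-based agreement measure and needs surface extraction and a distance tolerance, so it is not shown here.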
Affiliation(s)
- Gauthier Dot: UFR Odontologie, Universite Paris Cité, Paris, France; Service de Medecine Bucco-Dentaire, AP-HP, Hopital Pitie-Salpetriere, Paris, France; Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
- Akhilanand Chaurasia: Department of Oral Medicine and Radiology, Faculty of Dental Sciences, King George Medical University, Lucknow, Uttar Pradesh, India
- Guillaume Dubois: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; Materialise France, Malakoff, France
- Charles Savoldelli: Department of Oral and Maxillofacial Surgery, Head and Neck Institute, University Hospital of Nice, France
- Sara Haghighat: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- Sarina Azimian: Research Committee, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Julien Issa: Department of Diagnostics, Chair of Practical Clinical Dentistry, Poznan University of Medical Sciences, Poznan, Poland; Doctoral School, Poznan University of Medical Sciences, Poznan, Poland
- Abhishek Dubey: Department of Oral Medicine and Radiology, Maharana Pratap Dental College, Kanpur, India
- Thomas Schouman: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France; AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Medecine Sorbonne Universite, Paris, France
- Laurent Gajny: Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, Paris, France
2
Xi R, Ali M, Zhou Y, Tizzano M. A reliable deep-learning-based method for alveolar bone quantification using a murine model of periodontitis and micro-computed tomography imaging. J Dent 2024; 146:105057. PMID: 38729290. DOI: 10.1016/j.jdent.2024.105057.
Abstract
OBJECTIVES This study focuses on artificial intelligence (AI)-assisted analysis of alveolar bone for periodontitis in a mouse model, with the aim of creating an automatic deep-learning segmentation model that enables researchers to easily examine alveolar bone from micro-computed tomography (µCT) data without needing prior machine learning knowledge. METHODS Ligature-induced experimental periodontitis was produced by placing a small-diameter silk sling ligature around the left maxillary second molar. At 4, 7, 9, or 14 days, the maxillary bone was harvested and processed with a µCT scanner (µCT-45, Scanco). Using Dragonfly (v2021.3), we developed a 3D deep learning model based on the U-Net AI deep learning engine for segmenting materials in complex images to measure alveolar bone volume (BV) and bone mineral density (BMD) while excluding the teeth from the measurements. RESULTS This model generates 3D segmentation output for a selected region of interest with over 98 % accuracy on different formats of µCT data. BV on the ligature side gradually decreased from 0.87 mm3 to 0.50 mm3 on day 9 and then increased to 0.63 mm3 on day 14. The ligature side lost 4.6 % of BMD on day 4, 9.6 % on day 7, 17.7 % on day 9, and 21.1 % on day 14. CONCLUSIONS This study developed an AI model that can be downloaded and easily applied, allowing researchers to assess metrics including BV, BMD, and trabecular bone thickness, while excluding teeth from the measurements of mouse alveolar bone. CLINICAL SIGNIFICANCE This work offers an innovative, user-friendly automatic segmentation model that is fast, accurate, and reliable, demonstrating new potential uses of AI in dentistry, with great promise for the diagnosis, treatment, and prognosis of oral diseases.
Affiliation(s)
- Ranhui Xi: Department of Basic & Translational Sciences, School of Dental Medicine, University of Pennsylvania, 240 South 40th Street, Philadelphia, PA 19014, United States
- Mamoon Ali: Department of Basic & Translational Sciences, School of Dental Medicine, University of Pennsylvania, 240 South 40th Street, Philadelphia, PA 19014, United States
- Yilu Zhou: McKay Orthopaedic Research Laboratory, Department of Orthopaedic Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19014, United States
- Marco Tizzano: Department of Basic & Translational Sciences, School of Dental Medicine, University of Pennsylvania, 240 South 40th Street, Philadelphia, PA 19014, United States
3
Xiang B, Lu J, Yu J. Evaluating tooth segmentation accuracy and time efficiency in CBCT images using artificial intelligence: A systematic review and Meta-analysis. J Dent 2024; 146:105064. PMID: 38768854. DOI: 10.1016/j.jdent.2024.105064.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to assess the current performance of artificial intelligence (AI)-based methods for tooth segmentation in three-dimensional cone-beam computed tomography (CBCT) images, with a focus on their accuracy and efficiency compared to those of manual segmentation techniques. DATA The data analyzed in this review consisted of a wide range of research studies utilizing AI algorithms for tooth segmentation in CBCT images. Meta-analysis was performed, focusing on the evaluation of the segmentation results using the Dice similarity coefficient (DSC). SOURCES PubMed, Embase, Scopus, Web of Science, and IEEE Xplore were comprehensively searched to identify relevant studies. The initial search yielded 5642 entries, and subsequent screening and selection processes led to the inclusion of 35 studies in the systematic review. Among the various segmentation methods employed, convolutional neural networks, particularly the U-net model, are the most commonly utilized. The pooled effect of the DSC score for tooth segmentation was 0.95 (95 % CI, 0.94 to 0.96). Furthermore, seven papers provided insights into the time required for segmentation, which ranged from 1.5 s to 3.4 min when utilizing AI techniques. CONCLUSIONS AI models demonstrated favorable accuracy in automatically segmenting teeth from CBCT images while reducing the time required for the process. Nevertheless, correction methods for metal artifacts and tooth structure segmentation using different imaging modalities should be addressed in future studies. CLINICAL SIGNIFICANCE AI algorithms have great potential for precise tooth measurements, orthodontic treatment planning, dental implant placement, and other dental procedures that require accurate tooth delineation. These advances have contributed to improved clinical outcomes and patient care in dental practice.
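A pooled estimate with a 95 % confidence interval, like the 0.95 (0.94 to 0.96) above, is typically obtained by inverse-variance weighting of per-study results. A minimal fixed-effect sketch (the review may well have used a random-effects model; the study means and standard errors below are hypothetical):

```python
import math

def pool_fixed_effect(means, ses):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI."""
    weights = [1.0 / se ** 2 for se in ses]          # weight = 1 / variance
    pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-study DSC means and standard errors
means = [0.94, 0.96, 0.95]
ses = [0.01, 0.02, 0.015]
pooled, (lo, hi) = pool_fixed_effect(means, ses)
```

A random-effects model would additionally estimate between-study heterogeneity (e.g. via DerSimonian-Laird) and widen the interval accordingly.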
Affiliation(s)
- Bilu Xiang: School of Dentistry, Shenzhen University Medical School, Shenzhen University, Shenzhen 518000, China
- Jiayi Lu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
- Jiayi Yu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
4
Takeya A, Watanabe K, Haga A. Fine structural human phantom in dentistry and instance tooth segmentation. Sci Rep 2024; 14:12630. PMID: 38824210. PMCID: PMC11144222. DOI: 10.1038/s41598-024-63319-x.
Abstract
In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. This research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. A deep-learning (DL) based tooth segmentation model was trained using the generated datasets. The results demonstrate agreement with manual contouring when the model is applied to clinical CBCT data. Specifically, the Dice similarity coefficient exceeded 0.87, indicating the robust performance of the developed segmentation model even though it was trained on virtual imaging. The present results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision for enhancing precision and efficiency in dental imaging processes.
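Synthetic dataset generation of the kind described, varying phantom and scanner parameters per sample, amounts to drawing one randomized configuration per training example. A generic sketch; the parameter names and ranges are illustrative assumptions, not values from the paper:

```python
import random

def sample_virtual_scan_config(rng: random.Random) -> dict:
    """Draw one randomized virtual-CBCT configuration, mirroring the kinds
    of variations described (densities, tooth size, spectrum, noise,
    projection cutoff). All ranges here are made up for illustration."""
    return {
        "enamel_density_scale": rng.uniform(0.9, 1.1),
        "tooth_size_scale": rng.uniform(0.95, 1.05),
        "tube_voltage_kvp": rng.choice([80, 90, 100]),
        "noise_sigma": rng.uniform(0.0, 0.05),
        "projection_cutoff": rng.uniform(0.8, 1.0),
    }

rng = random.Random(0)  # seeded for reproducibility
configs = [sample_virtual_scan_config(rng) for _ in range(5)]
```

Each sampled configuration would drive one virtual CBCT rendering, yielding a labeled image-mask pair for training.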
Affiliation(s)
- Atsushi Takeya: Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Keiichiro Watanabe: Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Akihiro Haga: Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
5
Boubaris M, Cameron A, Manakil J, George R. Artificial intelligence vs. semi-automated segmentation for assessment of dental periapical lesion volume index score: A cone-beam CT study. Comput Biol Med 2024; 175:108527. PMID: 38714047. DOI: 10.1016/j.compbiomed.2024.108527.
Abstract
INTRODUCTION The cone beam computed tomography periapical volume index (CBCTPAVI) is a categorisation tool to assess periapical lesion size in three dimensions and predict treatment outcomes. This index has been determined using a time-consuming semi-automatic segmentation technique. This study compared artificial intelligence (AI) with semi-automated segmentation to determine AI's ability to accurately determine the CBCTPAVI score. METHODS CBCTPAVI scores for 500 tooth roots were determined using both the semi-automatic segmentation technique in three-dimensional imaging analysis software (Mimics Research™) and AI (Diagnocat™). A confusion matrix was created to compare the CBCTPAVI scores assigned by the AI with those from the semi-automatic segmentation technique. Evaluation metrics, precision, recall, F1-score (2 × precision × recall / (precision + recall)), and overall accuracy were determined. RESULTS In 84.4 % (n = 422) of cases the AI classified the CBCTPAVI score the same as the semi-automated technique. The AI was unable to classify any lesion as index 1 or 2, due to its limitation in small volume measurement. When lesions classified as index 1 and 2 by the semi-automatic segmentation technique were excluded, the AI demonstrated precision, recall, and F1-score all above 0.85 for indices 0 and 3-6, and accuracy over 90 %. CONCLUSIONS Diagnocat™, with its ability to determine the CBCTPAVI score in approximately 2 min following upload of the CBCT, could be an excellent and efficient tool to facilitate better monitoring and assessment of periapical lesions in everyday clinical practice and/or radiographic reporting. However, to assess three-dimensional healing of smaller lesions (with scores 1 and 2), further advancements in AI technologies are needed.
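The evaluation metrics named above follow directly from confusion-matrix counts for each index class. A minimal sketch (the counts are hypothetical, not from the study):

```python
def prf1(tp: int, fp: int, fn: int):
    """Per-class precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for one CBCTPAVI index class
p, r, f = prf1(tp=90, fp=10, fn=5)
```

Note that the F1 formula reduces to 2·TP / (2·TP + FP + FN), which is why F1 on a binary mask coincides with the Dice coefficient.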
Affiliation(s)
- Matthew Boubaris: School of Medicine and Dentistry, Griffith University, Gold Coast, Australia
- Andrew Cameron: School of Medicine and Dentistry, Griffith University, Gold Coast, Australia
- Jane Manakil: School of Medicine and Dentistry, Griffith University, Gold Coast, Australia
- Roy George: School of Medicine and Dentistry, Griffith University, Gold Coast, Australia
6
Bao J, Zhang X, Xiang S, Liu H, Cheng M, Yang Y, Huang X, Xiang W, Cui W, Lai HC, Huang S, Wang Y, Qian D, Yu H. Deep Learning-Based Facial and Skeletal Transformations for Surgical Planning. J Dent Res 2024. PMID: 38808566. DOI: 10.1177/00220345241253186.
Abstract
The increasing application of virtual surgical planning (VSP) in orthognathic surgery implies a critical need for accurate prediction of facial and skeletal shapes. The craniofacial relationship in patients with dentofacial deformities is still not understood, and transformations between facial and skeletal shapes remain a challenging task due to intricate anatomical structures and nonlinear relationships between the facial soft tissue and bones. In this study, a novel bidirectional 3-dimensional (3D) deep learning framework, named P2P-ConvGC, was developed and validated based on a large-scale data set for accurate subject-specific transformations between facial and skeletal shapes. Specifically, the 2-stage point-sampling strategy was used to generate multiple nonoverlapping point subsets to represent high-resolution facial and skeletal shapes. Facial and skeletal point subsets were separately input into the prediction system to predict the corresponding skeletal and facial point subsets via the skeletal prediction subnetwork and facial prediction subnetwork. For quantitative evaluation, the accuracy was calculated with shape errors and landmark errors between the predicted skeleton or face and the corresponding ground truths. The shape error was calculated by comparing the predicted point sets with the ground truths, with P2P-ConvGC outperforming existing state-of-the-art algorithms including P2P-Net, P2P-ASNL, and P2P-Conv. The total landmark errors (Euclidean distances of craniomaxillofacial landmarks) of P2P-ConvGC in the upper skull, mandible, and facial soft tissues were 1.964 ± 0.904 mm, 2.398 ± 1.174 mm, and 2.226 ± 0.774 mm, respectively. Furthermore, the clinical feasibility of the bidirectional model was validated using a clinical cohort. The result demonstrated its prediction ability with average surface deviation errors of 0.895 ± 0.175 mm for facial prediction and 0.906 ± 0.082 mm for skeletal prediction. To conclude, our proposed model achieved good performance on the subject-specific prediction of facial and skeletal shapes and showed clinical application potential in postoperative facial prediction and VSP for orthognathic surgery.
Affiliation(s)
- J Bao: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- X Zhang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- S Xiang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- H Liu: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- M Cheng: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- Y Yang: Shanghai Lanhui Medical Technology Co., Ltd, Shanghai, China
- X Huang: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- W Xiang: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- W Cui: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- H C Lai: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- S Huang: Department of Oral and Maxillofacial Surgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Y Wang: Qingdao Stomatological Hospital Affiliated to Qingdao University, Qingdao, Shandong, China
- D Qian: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- H Yu: Department of Oral and Craniomaxillofacial Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
7
Feng Y, Tao B, Fan J, Wang S, Mo J, Wu Y, Liang Q. Automatic planning of maxillary anterior dental implant based on prosthetically guided and pose evaluation indicator. Int J Comput Assist Radiol Surg 2024. PMID: 38735893. DOI: 10.1007/s11548-024-03142-x.
Abstract
PURPOSE Preoperative planning of maxillary anterior dental implant placement is a prerequisite to ensuring that the implant achieves the proper three-dimensional (3D) pose, which is essential for its long-term stability. However, the current planning process is labor-intensive and subjective, relying heavily on the surgeon's experience. Consequently, this paper proposes an automatic method for computing the optimal pose of the dental implant. METHODS The method adopts the principle of prosthetically guided dental implant placement. Initially, the prosthesis coordinate system is established to determine the implant candidate orientations. Subsequently, virtual slices of the maxilla in the buccal-palatal direction are generated according to the prosthesis position. By extracting feature points from the virtual slices, the implant candidate starting points are acquired. Then, a candidate pose set is obtained by combining these candidate starting points and orientations. Finally, a pose evaluation indicator is introduced to determine the optimal implant pose from this set. RESULTS Twenty-two cases were utilized to validate the method. The results show that the method could determine an ideal pose for the dental implant, with the average minimum distance between the implant and the left tooth root, the right tooth root, the palatal side, and the buccal side being 2.57 ± 0.53 mm, 2.59 ± 0.65 mm, 0.74 ± 0.19 mm, and 1.83 ± 0.16 mm, respectively. The planning time was less than 9 s. CONCLUSION Unlike manual planning, the proposed method can efficiently and accurately complete maxillary anterior dental implant planning, providing a theoretical analysis of the success rate of the implant. Thus, it has great potential for future clinical application.
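The candidate-pose strategy described, combining candidate starting points with candidate orientations and ranking the resulting set with an evaluation indicator, can be sketched as an enumeration over a Cartesian product. The scoring function below is a toy stand-in for the paper's indicator (which weighs clearances to roots and cortical plates), not its actual formulation:

```python
from itertools import product

def plan_pose(starting_points, orientations, score):
    """Enumerate the candidate pose set (start point x orientation) and
    return the pose maximising the evaluation indicator `score`."""
    candidates = product(starting_points, orientations)
    return max(candidates, key=lambda pose: score(*pose))

# Toy 1D stand-ins: the scoring function simply prefers larger positions
best = plan_pose([0.0, 1.0, 2.0], [-1, 1], lambda p, o: p + 0.1 * o)
```

In practice the candidate set is small enough (feature-point starts times a few orientations) that exhaustive scoring is cheap, consistent with the sub-9-second planning time reported.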
Affiliation(s)
- Yuan Feng: School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- BaoXin Tao: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, China
- JiaCheng Fan: School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- ShiGang Wang: School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- JinQiu Mo: School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- YiQun Wu: Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Shanghai, China; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, China
- QingHua Liang: School of Mechanical Engineering, Shanghai Jiao Tong University, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
8
Liu Y, Xie R, Wang L, Liu H, Liu C, Zhao Y, Bai S, Liu W. Fully automatic AI segmentation of oral surgery-related tissues based on cone beam computed tomography images. Int J Oral Sci 2024; 16:34. PMID: 38719817. PMCID: PMC11079075. DOI: 10.1038/s41368-024-00294-z.
Abstract
Accurate segmentation of oral surgery-related tissues from cone beam computed tomography (CBCT) images can significantly accelerate treatment planning and improve surgical accuracy. In this paper, we propose a fully automated tissue segmentation system for dental implant surgery. Specifically, we introduce an image preprocessing method based on data distribution histograms, which can adaptively process CBCT images acquired with different parameters. Building on this, we use a bone segmentation network to obtain segmentation results for the alveolar bone, teeth, and maxillary sinus. We then use the tooth and mandible regions as regions of interest for tooth segmentation and mandibular canal segmentation. The tooth segmentation results also yield the ordering of the dentition. Experimental results show that our method achieves higher segmentation accuracy and efficiency than existing methods. Its average Dice scores on the tooth, alveolar bone, maxillary sinus, and mandibular canal segmentation tasks were 96.5%, 95.4%, 93.6%, and 94.8%, respectively. These results demonstrate that it can accelerate the development of digital dentistry.
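Histogram-based adaptive preprocessing of the kind described is commonly implemented as percentile windowing on the volume's own intensity distribution, so that scans acquired with different devices and settings end up on a comparable scale. A generic sketch, not the paper's exact method; the percentile choices are illustrative assumptions:

```python
import numpy as np

def normalize_by_histogram(volume: np.ndarray,
                           lo_pct: float = 0.5,
                           hi_pct: float = 99.5) -> np.ndarray:
    """Clip intensities to percentiles of the volume's own histogram,
    then rescale to [0, 1]. Adapts automatically to each scan's range."""
    lo, hi = np.percentile(volume, [lo_pct, hi_pct])
    clipped = np.clip(volume, lo, hi)
    return (clipped - lo) / max(hi - lo, 1e-8)

# Synthetic stand-in for a CBCT volume with arbitrary intensity units
vol = np.random.default_rng(0).normal(1000.0, 300.0, size=(4, 8, 8))
norm = normalize_by_histogram(vol)
```

Because the window is derived per volume, the same network input range is obtained regardless of the scanner's raw intensity calibration.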
Affiliation(s)
- Yu Liu: Beijing Yakebot Technology Co., Ltd., Beijing, China; School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Rui Xie: State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Lifeng Wang: Beijing Yakebot Technology Co., Ltd., Beijing, China; School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Hongpeng Liu: Beijing Yakebot Technology Co., Ltd., Beijing, China; School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Chen Liu: State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Yimin Zhao: State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Shizhu Bai: State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, National Clinical Research Center for Oral Diseases, Shaanxi Key Laboratory of Stomatology, Digital Center, School of Stomatology, The Fourth Military Medical University, Xi'an, China
- Wenyong Liu: Key Laboratory of Biomechanics and Mechanobiology of the Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, China
9
Ni FD, Xu ZN, Liu MQ, Zhang MJ, Li S, Bai HL, Ding P, Fu KY. Towards clinically applicable automated mandibular canal segmentation on CBCT. J Dent 2024; 144:104931. PMID: 38458378. DOI: 10.1016/j.jdent.2024.104931.
Abstract
OBJECTIVES To develop a deep learning-based system for precise, robust, and fully automated segmentation of the mandibular canal on cone beam computed tomography (CBCT) images. METHODS The system was developed on 536 CBCT scans (training set: 376, validation set: 80, testing set: 80) from one center and validated on an external dataset of 89 CBCT scans from 3 centers. Each scan was annotated using a multi-stage annotation method and refined by oral and maxillofacial radiologists. We proposed a three-step strategy for the mandibular canal segmentation: extraction of the region of interest based on 2D U-Net, global segmentation of the mandibular canal, and segmentation refinement based on 3D U-Net. RESULTS The system consistently achieved accurate mandibular canal segmentation in the internal set (Dice similarity coefficient [DSC], 0.952; intersection over union [IoU], 0.912; average symmetric surface distance [ASSD], 0.046 mm; 95% Hausdorff distance [HD95], 0.325 mm) and the external set (DSC, 0.960; IoU, 0.924; ASSD, 0.040 mm; HD95, 0.288 mm). CONCLUSIONS These results demonstrated the potential clinical application of this AI system in facilitating clinical workflows related to mandibular canal localization. CLINICAL SIGNIFICANCE Accurate delineation of the mandibular canal on CBCT images is critical for implant placement, mandibular third molar extraction, and orthognathic surgery. This AI system enables accurate segmentation across different models, which could contribute to more efficient and precise dental automation systems.
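The ASSD and HD95 reported above are symmetric surface-distance statistics: the mean and 95th percentile of nearest-neighbour distances between the two segmentation surfaces. A minimal point-set sketch (a real implementation would first extract surface voxels from the masks, and would use a spatial index rather than brute force):

```python
import numpy as np

def surface_distances(a_pts: np.ndarray, b_pts: np.ndarray) -> np.ndarray:
    """Distance from each point in `a_pts` to its nearest point in `b_pts`
    (brute force; fine for small point sets)."""
    diffs = a_pts[:, None, :] - b_pts[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

def assd_hd95(a_pts: np.ndarray, b_pts: np.ndarray):
    """Average symmetric surface distance and 95% Hausdorff distance."""
    both = np.concatenate([surface_distances(a_pts, b_pts),
                           surface_distances(b_pts, a_pts)])
    return both.mean(), np.percentile(both, 95)

# Two toy 2D "surfaces" offset by 0.5 units
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.5], [1.0, 0.5]])
mean_d, hd95 = assd_hd95(a, b)
```

Using the 95th percentile instead of the maximum makes HD95 robust to a few stray voxels on an otherwise accurate canal segmentation.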
Affiliation(s)
- Fang-Duan Ni
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
- Mu-Qing Liu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China.
- Min-Juan Zhang
- Second Dental Center, Peking University Hospital of Stomatology, Beijing 100101, China
- Shu Li
- Department of Stomatology, Beijing Hospital, Beijing 100005, China
- Kai-Yuan Fu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China.
10
Tyndall DA, Price JB, Gaalaas L, Spin-Neto R. Surveying the landscape of diagnostic imaging in dentistry's future: Four emerging technologies with promise. J Am Dent Assoc 2024; 155:364-378. [PMID: 38520421 DOI: 10.1016/j.adaj.2024.01.005]
Abstract
BACKGROUND Advances in digital radiography for both intraoral and panoramic imaging and cone-beam computed tomography have led the way to an increase in diagnostic capabilities for the dental care profession. In this article, the authors provide information on 4 emerging technologies with promise. TYPES OF STUDIES REVIEWED The authors feature the following: artificial intelligence in the form of deep learning using convolutional neural networks, dental magnetic resonance imaging, stationary intraoral tomosynthesis, and second-generation cone-beam computed tomography sources based on carbon nanotube technology and multispectral imaging. The authors review and summarize articles featuring these technologies. RESULTS The history and background of these emerging technologies are previewed along with their development and potential impact on the practice of dental diagnostic imaging. The authors conclude that these emerging technologies have the potential to have a substantial influence on the practice of dentistry as these systems mature. The degree of influence most likely will vary, with artificial intelligence being the most influential of the 4. CONCLUSIONS AND PRACTICAL IMPLICATIONS The readers are informed about these emerging technologies and the potential effects on their practice going forward, giving them information on which to base decisions on adopting 1 or more of these technologies. The 4 technologies reviewed in this article have the potential to improve imaging diagnostics in dentistry thereby leading to better patient care and heightened professional satisfaction.
11
Alqutaibi AY, Algabri R, Ibrahim WI, Alhajj MN, Elawady D. Dental implant planning using artificial intelligence: A systematic review and meta-analysis. J Prosthet Dent 2024:S0022-3913(24)00227-0. [PMID: 38653687 DOI: 10.1016/j.prosdent.2024.03.032]
Abstract
STATEMENT OF PROBLEM Data on the role of artificial intelligence (AI) in dental implant planning is insufficient. PURPOSE The purpose of this systematic review with meta-analysis was to analyze and evaluate articles that assess the effectiveness of AI algorithms in dental implant planning, specifically in detecting edentulous areas and evaluating bone dimensions. MATERIAL AND METHODS A systematic review was conducted across the MEDLINE/PubMed, Web of Science, Cochrane, and Scopus databases. In addition, a manual search was performed. The inclusion criteria consisted of peer-reviewed studies that examined the accuracy of AI-based diagnostic tools on dental radiographs for dental implant planning. The most recent search was conducted in January 2024. The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was used to assess the quality of the included articles. RESULTS Twelve articles met the inclusion criteria for this review and focused on the application of AI in dental implant planning using cone beam computed tomography (CBCT) images. The pooled data indicated an overall accuracy of 96% (95% CI=94% to 98%) for the mandible and 83% (95% CI=82% to 84%) for the maxilla in identifying edentulous areas for implant planning. Eight studies had a low risk of bias, 2 studies had some concerns of bias, and 2 studies had a high risk of bias. CONCLUSIONS AI models have the potential to identify edentulous areas and provide measurements of bone as part of dental implant planning using CBCT images. However, additional well-conducted research is needed to enhance the accuracy, generalizability, and applicability of AI-based approaches.
Affiliation(s)
- Ahmed Yaseen Alqutaibi
- Associate Professor, Substitutive Dental Science Department, College of Dentistry, Taibah University, Al Madinah, Saudi Arabia; and Associate Professor, Department of Prosthodontics, Faculty of Dentistry, Ibb University, Ibb, Yemen.
- Radhwan Algabri
- Assistant Professor, Department of Prosthodontics, Faculty of Dentistry, Ibb University, Ibb, Yemen; and Assistant Professor, Department of Prosthodontics, Faculty of Dentistry, National University, Ibb, Yemen
- Wafaa Ibrahim Ibrahim
- Associate Professor, Department of Prosthodontics, Faculty of Oral and Dental Medicine, Delta University for Science and Technology, Mansoura, Egypt
- Mohammed Nasser Alhajj
- Assistant Professor, Department of Prosthodontics, Faculty of Dentistry, Thamar University, Dhamar, Yemen
- Dina Elawady
- Associate Professor, Department of Prosthodontics, Faculty of Dentistry, October University for Modern Sciences and Arts (MSA), 6th October City, Egypt
12
Zheng Q, Gao Y, Zhou M, Li H, Lin J, Zhang W, Chen X. Semi or fully automatic tooth segmentation in CBCT images: a review. PeerJ Comput Sci 2024; 10:e1994. [PMID: 38660190 PMCID: PMC11041986 DOI: 10.7717/peerj-cs.1994]
Abstract
Cone beam computed tomography (CBCT) is widely employed in modern dentistry, and tooth segmentation constitutes an integral part of the digital workflow based on these imaging data. Previous methodologies rely heavily on manual segmentation and are time-consuming and labor-intensive in clinical practice. Recently, with advancements in computer vision technology, scholars have conducted in-depth research, proposing various fast and accurate tooth segmentation methods. In this review, we examine 55 articles in this field and discuss the effectiveness, advantages, and disadvantages of each approach. Beyond simple classification and discussion, this review aims to reveal how tooth segmentation methods can be improved by the application and refinement of existing image segmentation algorithms to solve problems such as irregular morphology and fuzzy boundaries of teeth. It is anticipated that with the optimization of these methods, manual operation will be reduced, and greater accuracy and robustness in tooth segmentation will be achieved. Finally, we highlight the challenges that still exist in this field and provide prospects for future directions.
Affiliation(s)
- Qianhan Zheng
- Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yu Gao
- Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Mengqi Zhou
- Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Huimin Li
- Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jiaqi Lin
- Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weifang Zhang
- Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Social Medicine & Health Affairs Administration, Zhejiang University, Hangzhou, China
- Xuepeng Chen
- Stomatology Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Clinical Research Center for Oral Diseases of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
13
Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024; 18:422-434. [PMID: 38376649 DOI: 10.1007/s12072-023-10630-w]
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessments hold promise for addressing the current demands for precisely diagnosing and treating liver diseases, and artificial intelligence (AI), which excels at automatically making quantitative assessments of complex medical image characteristics, has made great strides in complementing the qualitative interpretation of medical imaging by clinicians. Here, we review the current state of medical-imaging-based AI methodologies and their applications concerning the management of liver diseases. We summarize the representative AI methodologies in liver imaging, focusing on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis, and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration, and multicenter studies. Taken together, AI methodologies, combined with the large volume of available medical image data, may well shape the future of liver disease care.
Affiliation(s)
- Peng Zhang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang
- Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China.
- Xiaolong Qi
- Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China.
14
Bonny T, Al-Ali A, Al-Ali M, Alsaadi R, Al Nassan W, Obaideen K, AlMallahi M. Dental bitewing radiographs segmentation using deep learning-based convolutional neural network algorithms. Oral Radiol 2024; 40:165-177. [PMID: 38047985 DOI: 10.1007/s11282-023-00717-3]
Abstract
OBJECTIVES Dental radiographs, particularly bitewing radiographs, are widely used in dental diagnosis and treatment. Dental image segmentation is difficult for various reasons, such as intricate structures, low contrast, noise, roughness, and unclear borders, which result in poor image quality. Recent developments in deep learning models have improved performance in analyzing dental images. In this research, our primary objective is to determine the most effective segmentation technique for bitewing radiographs based on different metrics: accuracy, training time, and the number of training parameters as a reflection of architectural cost. METHODS In this research, we employ several deep learning models, namely Resnet-18, Resnet-50, Xception, Inception Resnet v2, and Mobilenetv2, to segment bitewing radiographs. The process begins by importing the radiographs into MATLAB® (MathWorks Inc), where the images are first enhanced and then segmented using the region-based graph cut method to produce a binary mask that distinguishes the background from the original X-ray. RESULTS The deep learning models were trained and validated on 298 and 99 radiographs, respectively, and were evaluated using 99 images from the testing set. We also compare the segmentation models using several criteria, including accuracy, speed, and size, to determine which network is superior. Furthermore, we compare our findings with prior research to provide a comprehensive understanding of the advancements made in dental image segmentation. Segmentation accuracies of 93.67% and 94.42% were achieved by the Resnet-18 and Resnet-50 models, respectively. CONCLUSION This research advances dental image analysis and facilitates more accurate diagnoses and treatment planning by determining the best segmentation technique. The outcomes of this study can guide researchers and practitioners in selecting appropriate segmentation methods for practical dental image analysis.
Affiliation(s)
- Talal Bonny
- Department of Computer Engineering, University of Sharjah, Sharjah, United Arab Emirates.
- Abdelaziz Al-Ali
- Department of Computer Engineering, University of Sharjah, Sharjah, United Arab Emirates
- Mohammed Al-Ali
- Department of Computer Engineering, University of Sharjah, Sharjah, United Arab Emirates
- Rashid Alsaadi
- Electrical and Electronics Engineering, University of Sharjah, Sharjah, United Arab Emirates
- Wafaa Al Nassan
- Department of Computer Engineering, University of Sharjah, Sharjah, United Arab Emirates
- Khaled Obaideen
- Research Institute of Science and Technology, University of Sharjah, Sharjah, United Arab Emirates
- Maryam AlMallahi
- Industrial Engineering and Engineering Management Department, University of Sharjah, Sharjah, United Arab Emirates
15
Elgarba BM, Fontenele RC, Tarce M, Jacobs R. Artificial intelligence serving pre-surgical digital implant planning: A scoping review. J Dent 2024; 143:104862. [PMID: 38336018 DOI: 10.1016/j.jdent.2024.104862]
Abstract
OBJECTIVES To conduct a scoping review focusing on artificial intelligence (AI) applications in presurgical dental implant planning. Additionally, to assess the automation degree of clinically available pre-surgical implant planning software. DATA AND SOURCES A systematic electronic literature search was performed in five databases (PubMed, Embase, Web of Science, Cochrane Library, and Scopus), along with exploring gray literature web-based resources until November 2023. English-language studies on AI-driven tools for digital implant planning were included based on an independent evaluation by two reviewers. An assessment of automation steps in dental implant planning software available on the market up to November 2023 was also performed. STUDY SELECTION AND RESULTS From an initial 1,732 studies, 47 met eligibility criteria. Within this subset, 39 studies focused on AI networks for anatomical landmark-based segmentation, creating virtual patients. Eight studies were dedicated to AI networks for virtual implant placement. Additionally, a total of 12 commonly available implant planning software applications were identified and assessed for their level of automation in pre-surgical digital implant workflows. Notably, only six of these featured at least one fully automated step in the planning software, with none possessing a fully automated implant planning protocol. CONCLUSIONS AI plays a crucial role in achieving accurate, time-efficient, and consistent segmentation of anatomical landmarks, serving the process of virtual patient creation. Additionally, currently available systems for virtual implant placement demonstrate different degrees of automation. It is important to highlight that, as of now, full automation of this process has not been documented nor scientifically validated. CLINICAL SIGNIFICANCE Scientific and clinical validation of AI applications for presurgical dental implant planning is currently scarce. 
The present review allows the clinician to identify AI-based automation in presurgical dental implant planning and assess the potential underlying scientific validation.
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt.
- Rocharles Cavalcante Fontenele
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium
- Mihai Tarce
- Division of Periodontology & Implant Dentistry, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China & Periodontology and Oral Microbiology, Department of Oral Health Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- Reinhilde Jacobs
- OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals, Campus Sint-Rafael, 3000 Leuven, Belgium & Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
16
Hu F, Chen Z, Wu F. A novel difficult-to-segment samples focusing network for oral CBCT image segmentation. Sci Rep 2024; 14:5068. [PMID: 38429362 PMCID: PMC10907706 DOI: 10.1038/s41598-024-55522-7]
Abstract
Using deep learning technology to segment oral CBCT images for clinical diagnosis and treatment is one of the important research directions in the field of clinical dentistry. However, blurred contours and scale differences limit the accuracy of current methods at the crown edge and the root, making these regions difficult-to-segment samples in the oral CBCT segmentation task. To address these problems, this work proposed a Difficult-to-Segment Focus Network (DSFNet) for segmenting oral CBCT images. The network utilizes a Feature Capturing Module (FCM) to efficiently capture local and long-range features, enhancing the feature extraction performance. Additionally, a Multi-Scale Feature Fusion Module (MFFM) is employed to merge multiscale feature information. To further increase the loss contribution of difficult-to-segment samples, a hybrid loss function combining Focal Loss and Dice Loss is proposed. With this hybrid loss function, DSFNet achieves a 91.85% Dice Similarity Coefficient (DSC) and 0.216 mm Average Symmetric Surface Distance (ASSD) in oral CBCT segmentation tasks. Experimental results show that the proposed method is superior to current dental CBCT image segmentation techniques and has real-world applicability.
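The hybrid loss named in this abstract combines Focal Loss, which up-weights hard voxels, with Dice Loss, which directly optimizes overlap. A minimal sketch of one such combination follows; the weighting `alpha` and hyperparameter `gamma` are assumptions for illustration, not values from the paper:

```python
import numpy as np

def dice_loss(probs, gt, eps=1e-6):
    """Soft Dice loss on predicted foreground probabilities: 1 - DSC."""
    intersection = (probs * gt).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + gt.sum() + eps)

def focal_loss(probs, gt, gamma=2.0, eps=1e-6):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights easy voxels."""
    probs = np.clip(probs, eps, 1.0 - eps)
    pt = np.where(gt == 1, probs, 1.0 - probs)  # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(probs, gt, alpha=0.5):
    """Weighted sum of the focal and Dice terms (alpha is an assumed weight)."""
    return alpha * focal_loss(probs, gt) + (1.0 - alpha) * dice_loss(probs, gt)

# Four voxels: ground truth labels and predicted foreground probabilities.
gt = np.array([0.0, 1.0, 1.0, 0.0])
probs = np.array([0.1, 0.9, 0.8, 0.2])
loss = hybrid_loss(probs, gt)  # small, since the prediction is mostly right
```

In training frameworks the same two terms are usually expressed on tensors with autograd support; the numpy form above only illustrates the arithmetic.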
Affiliation(s)
- Fengjun Hu
- College of Information Science and Technology, Zhejiang Shuren University, Hangzhou, 310015, China
- Zhejiang-Netherlands Joint Laboratory for Digital Diagnosis and Treatment of Oral Diseases, Zhejiang Shuren University, Hangzhou, 310015, China
- Zeyu Chen
- Zhejiang-Netherlands Joint Laboratory for Digital Diagnosis and Treatment of Oral Diseases, Zhejiang Shuren University, Hangzhou, 310015, China
- Fan Wu
- College of Information Science and Technology, Zhejiang Shuren University, Hangzhou, 310015, China.
- Zhejiang-Netherlands Joint Laboratory for Digital Diagnosis and Treatment of Oral Diseases, Zhejiang Shuren University, Hangzhou, 310015, China.
17
Chen X, Ma N, Xu T, Xu C. Deep learning-based tooth segmentation methods in medical imaging: A review. Proc Inst Mech Eng H 2024; 238:115-131. [PMID: 38314788 DOI: 10.1177/09544119231217603]
Abstract
Deep learning approaches for tooth segmentation employ convolutional neural networks (CNNs) or Transformers to derive tooth feature maps from extensive training datasets. Tooth segmentation serves as a critical prerequisite for clinical dental analysis and surgical procedures, enabling dentists to comprehensively assess oral conditions and subsequently diagnose pathologies. Over the past decade, deep learning has experienced significant advancements, with researchers introducing efficient models such as U-Net, Mask R-CNN, and Segmentation Transformer (SETR). Building upon these frameworks, scholars have proposed numerous enhancement and optimization modules to attain superior tooth segmentation performance. This paper discusses deep learning methods for tooth segmentation on dental panoramic radiographs (DPRs), cone-beam computed tomography (CBCT) images, intraoral scan (IOS) models, and other modalities. Finally, we outline performance-enhancing techniques and suggest potential avenues for ongoing research. Numerous challenges remain, including limitations in data annotation and model generalization. This paper offers insights for future tooth segmentation studies, potentially facilitating broader clinical adoption.
Affiliation(s)
- Xiaokang Chen
- Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China
- Nan Ma
- Faculty of Information and Technology, Beijing University of Technology, Beijing, China
- Engineering Research Center of Intelligence Perception and Autonomous Control, Ministry of Education, Beijing University of Technology, Beijing, China
- Tongkai Xu
- Department of General Dentistry II, Peking University School and Hospital of Stomatology, Beijing, China
- Cheng Xu
- Beijing Key Laboratory of Information Service Engineering, Beijing Union University, Beijing, China
18
Huang J, Farpour N, Yang BJ, Mupparapu M, Lure F, Li J, Yan H, Setzer FC. Uncertainty-based Active Learning by Bayesian U-Net for Multi-label Cone-beam CT Segmentation. J Endod 2024; 50:220-228. [PMID: 37979653 PMCID: PMC10842728 DOI: 10.1016/j.joen.2023.11.002]
Abstract
INTRODUCTION Training of Artificial Intelligence (AI) for biomedical image analysis depends on large annotated datasets. This study assessed the efficacy of Active Learning (AL) strategies training AI models for accurate multilabel segmentation and detection of periapical lesions in cone-beam CTs (CBCTs) using a limited dataset. METHODS Limited field-of-view CBCT volumes (n = 20) were segmented by clinicians (clinician segmentation [CS]) and Bayesian U-Net-based AL strategies. Two AL functions, Bayesian Active Learning by Disagreement [BALD] and Max_Entropy [ME], were used for multilabel segmentation ("Lesion"-"Tooth Structure"-"Bone"-"Restorative Materials"-"Background"), and compared to a non-AL benchmark Bayesian U-Net function. The training-to-testing set ratio was 4:1. Comparisons between the AL and Bayesian U-Net functions versus CS were made by evaluating the segmentation accuracy with the Dice indices and lesion detection accuracy. The Kruskal-Wallis test was used to assess statistically significant differences. RESULTS The final training set contained 26 images. After 8 AL iterations, lesion detection sensitivity was 84.0% for BALD, 76.0% for ME, and 32.0% for Bayesian U-Net, which was significantly different (P < .0001; H = 16.989). The mean Dice index for all labels was 0.680 ± 0.155 for Bayesian U-Net and 0.703 ± 0.166 for ME after eight AL iterations, compared to 0.601 ± 0.267 for Bayesian U-Net over the mean of all iterations. The Dice index for "Lesion" was 0.504 for BALD and 0.501 for ME after 8 AL iterations, and at a maximum 0.288 for Bayesian U-Net. CONCLUSIONS Both AL strategies based on uncertainty quantification from Bayesian U-Net BALD, and ME, provided improved segmentation and lesion detection accuracy for CBCTs. AL may contribute to reducing extensive labeling needs for training AI algorithms for biomedical image analysis in dentistry.
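The two acquisition functions in this study, Max_Entropy and BALD, both score unlabeled cases by uncertainty estimated from repeated stochastic forward passes of a Bayesian network. A minimal sketch of the two scores for a single prediction, assuming Monte Carlo dropout samples as the uncertainty source (illustrative only, not the authors' implementation):

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Max_Entropy acquisition: entropy of the mean predictive distribution.

    mc_probs: (n_mc_samples, n_classes) softmax outputs for one voxel/case.
    """
    mean_p = mc_probs.mean(axis=0)
    return float(-(mean_p * np.log(mean_p + 1e-12)).sum())

def bald(mc_probs):
    """BALD acquisition: mutual information between the prediction and the
    model posterior; high when MC samples are individually confident but
    disagree with each other."""
    mean_p = mc_probs.mean(axis=0)
    entropy_of_mean = float(-(mean_p * np.log(mean_p + 1e-12)).sum())
    mean_entropy = float(-(mc_probs * np.log(mc_probs + 1e-12)).sum(axis=1).mean())
    return entropy_of_mean - mean_entropy

# Three stochastic forward passes (e.g. MC dropout) for one voxel,
# two classes ("lesion" vs "background"); the values are made up.
mc_probs = np.array([[0.95, 0.05],
                     [0.60, 0.40],
                     [0.85, 0.15]])
score_me = predictive_entropy(mc_probs)
score_bald = bald(mc_probs)  # positive, because the passes disagree
```

In an active-learning loop, the unlabeled cases with the highest scores are sent for expert annotation and added to the training set at each iteration.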
Affiliation(s)
- Jiayu Huang
- School of Computing and Augmented Intelligence Arizona State University, Tempe, Arizona
- Nazbanoo Farpour
- Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania
- Bingjian J Yang
- Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania
- Muralidhar Mupparapu
- Department of Oral Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Fleming Lure
- MS Technologies Corporation, Rockville, Maryland
- Jing Li
- School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia
- Hao Yan
- School of Computing and Augmented Intelligence Arizona State University, Tempe, Arizona
- Frank C Setzer
- Department of Endodontics, University of Pennsylvania, Philadelphia, Pennsylvania.
19
Tan M, Cui Z, Zhong T, Fang Y, Zhang Y, Shen D. A progressive framework for tooth and substructure segmentation from cone-beam CT images. Comput Biol Med 2024; 169:107839. [PMID: 38150887 DOI: 10.1016/j.compbiomed.2023.107839]
Abstract
BACKGROUND Accurate segmentation of individual teeth and their substructures, including enamel, pulp, and dentin, from cone-beam computed tomography (CBCT) images is essential for dental diagnosis and treatment planning in digital dentistry. Existing methods for tooth segmentation based on CBCT images have achieved substantial progress; however, techniques for further segmentation into substructures are yet to be developed. PURPOSE We aim to propose a novel three-stage progressive deep-learning-based framework for automatically segmenting 3D teeth from CBCT images, focusing on the finer substructures, i.e., enamel, pulp, and dentin. METHODS In this paper, we first detect each tooth using its centroid by a clustering scheme, which efficiently localizes each tooth by applying learned displacement vectors from the foreground tooth region. Next, guided by the detected centroid, each tooth proposal, combined with the corresponding tooth map, is processed through our tooth segmentation network. We also present an attention-based hybrid feature fusion mechanism, which provides intricate details of the tooth boundary while maintaining the global tooth shape, thereby enhancing the segmentation process. Additionally, we utilize the skeleton of the tooth as a guide for subsequent substructure segmentation. RESULTS Our algorithm is extensively evaluated on a collected dataset of 314 patients, and extensive comparison and ablation studies demonstrate the superior segmentation results of our approach. CONCLUSIONS Our proposed method can automatically segment teeth and their finer substructures from CBCT images, underlining its potential applicability for clinical diagnosis and surgical treatment.
Affiliation(s)
- Minhui Tan
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Zhiming Cui
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China.
- Tao Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China
- Yu Fang
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, China.
- Dinggang Shen
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 200230, China; Shanghai Clinical Research and Trial Center, Shanghai, 201210, China.
20
Jiang S, Zhang H, Mao Z, Li Y, Feng G. Accurate malocclusion tooth segmentation method based on a level set with adaptive edge feature enhancement. Heliyon 2024; 10:e23642. [PMID: 38259961 PMCID: PMC10801251 DOI: 10.1016/j.heliyon.2023.e23642]
Abstract
Objective This study aimed to accurately segment teeth under complex oral conditions, including complex structural interference among adjacent teeth or malocclusion conditions, such as tooth rotation and displacement caused by dental crowding. Study design Cone-beam computed tomography (CBCT) images were obtained from 19 patients with complex oral conditions, and a three-step solution was proposed. This study used a global convex level-set model to extract bony tissue and developed a flexible curve extraction method for separating neighbouring teeth under complex structural interference. In addition, a local level-set model with adaptive edge feature enhancement was proposed to segment individual teeth precisely. This model adaptively enhances edge features based on the structure of the root boundary and accurately distinguishes between the close-contact root and alveolar bone resulting from tooth rotation or displacement. Results The experimental results showed that the average Dice similarity coefficient values for incisors, canines, premolars, and molars were 93.30%, 93.47%, 93.24%, and 93.89%, respectively, and the average tooth centroid distances were 0.66, 0.61, 0.87, and 0.80 mm, respectively. Conclusion The proposed method can effectively segment teeth without relying on highly precise annotated datasets, yielding satisfactory results even under complex structural interference between adjacent teeth or tooth rotation and displacement caused by dental crowding. It is more robust than the other methods and provides valuable data for further research and clinical practice.
Affiliation(s)
- Shuyi Jiang, College of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130012, China
- Han Zhang, Department of Orthodontics, Jilin University Stomatology Hospital, Changchun, 130021, China
- Zhi Mao, Department of Orthodontics, Jilin University Stomatology Hospital, Changchun, 130021, China
- Yonghui Li, College of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130012, China
- Guanyuan Feng, College of Computer Science and Technology, Changchun University of Science and Technology, Changchun, 130012, China

21
Park JA, Kim D, Yang S, Kang JH, Kim JE, Huh KH, Lee SS, Yi WJ, Heo MS. Automatic detection of posterior superior alveolar artery in dental cone-beam CT images using a deeply supervised multi-scale 3D network. Dentomaxillofac Radiol 2024; 53:22-31. [PMID: 38214942 PMCID: PMC11003607 DOI: 10.1093/dmfr/twad002]
Abstract
OBJECTIVES This study aimed to develop a robust and accurate deep learning network for detecting the posterior superior alveolar artery (PSAA) in dental cone-beam CT (CBCT) images, focusing on precise localization of the centre pixel as a critical centreline pixel. METHODS PSAA locations were manually labelled on dental CBCT data from 150 subjects. The left maxillary sinus images were horizontally flipped, yielding 300 datasets in total. Six deep learning networks were trained: 3D U-Net, deeply supervised 3D U-Net (3D U-Net DS), multi-scale deeply supervised 3D U-Net (3D U-Net MSDS), 3D Attention U-Net, 3D V-Net, and 3D Dense U-Net. Performance was evaluated on prediction of the centre pixel of the PSAA, using mean absolute error (MAE), mean radial error (MRE), and successful detection rate (SDR). RESULTS The 3D U-Net MSDS achieved the best prediction performance among the tested networks, with an MAE of 0.696 ± 1.552 mm and an MRE of 1.101 ± 2.270 mm; the 3D U-Net showed the lowest performance. The 3D U-Net MSDS demonstrated an SDR of 95% within a 2 mm MAE, significantly higher than the other networks, which achieved detection rates of over 80%. CONCLUSIONS This study presents a robust deep learning network for accurate PSAA detection in dental CBCT images, emphasizing precise centre pixel localization. The method achieves high accuracy in locating small vessels, such as the PSAA, and has the potential to enhance detection accuracy and efficiency, thus impacting oral and maxillofacial surgery planning and decision-making.
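The evaluation metrics named here are simple aggregates over per-case localization errors. A minimal sketch (ours, with invented toy numbers, not the study's data):

```python
# Illustrative sketch: mean absolute error and successful detection rate
# (fraction of cases whose localization error is within a threshold),
# the kind of metrics used to compare the six networks.

def mean_absolute_error(errors):
    """Mean of absolute per-case errors."""
    return sum(abs(e) for e in errors) / len(errors)

def successful_detection_rate(errors, threshold_mm):
    """Fraction of cases with |error| <= threshold (e.g. 2 mm)."""
    hits = sum(1 for e in errors if abs(e) <= threshold_mm)
    return hits / len(errors)

# Toy per-case localization errors in millimetres.
errors = [0.3, 0.8, 1.5, 2.5, 0.1, 1.9, 0.4, 3.2, 0.6, 1.1]
print(round(mean_absolute_error(errors), 2))   # 1.24
print(successful_detection_rate(errors, 2.0))  # 0.8: 8 of 10 cases within 2 mm
```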
Affiliation(s)
- Jae-An Park, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- DaEl Kim, Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
- Su Yang, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
- Ju-Hee Kang, Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jo-Eun Kim, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Kyung-Hoe Huh, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Sam-Sun Lee, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Won-Jin Yi, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Min-Suk Heo, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea

22
Li J, Cheng B, Niu N, Gao G, Ying S, Shi J, Zeng T. A fine-grained orthodontics segmentation model for 3D intraoral scan data. Comput Biol Med 2024; 168:107821. [PMID: 38064844 DOI: 10.1016/j.compbiomed.2023.107821]
Abstract
With the widespread application of digital orthodontics in the diagnosis and treatment of oral diseases, accurate segmentation of teeth from intraoral scan data has attracted growing research attention. The accuracy of the segmentation results directly affects the dentist's follow-up diagnosis. Although current research on tooth segmentation has achieved promising results, the 3D intraoral scan datasets used are almost all indirect scans of plaster models and contain only limited samples of abnormal teeth, so they are difficult to apply to clinical scenarios under orthodontic treatment. A further issue is the lack of a unified and standardized dataset for analyzing and validating the effectiveness of tooth segmentation. In this work, we focus on deformed tooth segmentation and provide a fine-grained tooth segmentation dataset (3D-IOSSeg). The dataset consists of 3D intraoral scan data from more than 200 patients, with each sample labeled at the level of individual mesh units. 3D-IOSSeg also meticulously classifies every tooth in the upper and lower jaws. In addition, we propose a fast graph convolutional network for 3D tooth segmentation named Fast-TGCN. In the model, the relationship between adjacent mesh cells is established directly through the naive adjacency matrix to better extract the local geometric features of the tooth. Extensive experiments show that Fast-TGCN can quickly and accurately segment teeth from mouths with complex structures and outperforms other methods on various evaluation metrics. Moreover, we present the results of multiple classical tooth segmentation methods on this dataset, providing a comprehensive analysis of the field. All code and data will be available at https://github.com/MIVRC/Fast-TGCN.
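The core idea behind propagating features over a mesh via a naive adjacency matrix can be sketched in plain Python. This is our illustration of the general mechanism, not the authors' Fast-TGCN code; the function name and toy graph are invented:

```python
# Hedged sketch: one graph-convolution-style propagation step in which
# each mesh cell's feature is replaced by the mean over itself and its
# adjacent cells, aggregating local geometric context.

def graph_conv_step(features, adjacency):
    """features:  list of per-cell feature vectors
    adjacency: list of neighbour-index lists (cells sharing an edge)."""
    out = []
    for i, neigh in enumerate(adjacency):
        nodes = [i] + list(neigh)          # closed neighbourhood of cell i
        dim = len(features[i])
        out.append([sum(features[j][k] for j in nodes) / len(nodes)
                    for k in range(dim)])
    return out

# Four cells in a chain 0-1-2-3, one scalar feature each; cell 3's
# feature diffuses to its neighbours after one step.
adjacency = [[1], [0, 2], [1, 3], [2]]
features = [[0.0], [0.0], [0.0], [4.0]]
print(graph_conv_step(features, adjacency))
```

In a trained network this averaging would be interleaved with learned per-cell transformations; the adjacency structure itself is what lets the model respect the mesh's local geometry.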
Affiliation(s)
- Juncheng Li, School of Communication Information Engineering, Shanghai University, Shanghai, China
- Bodong Cheng, School of Computer Science and Technology, East China Normal University, Shanghai, China
- Najun Niu, School of Stomatology, Nanjing Medical University, Nanjing, China
- Guangwei Gao, Institute of Advanced Technology, Nanjing University of Posts and Telecommunications, Nanjing, China
- Shihui Ying, Department of Mathematics, School of Science, Shanghai University, Shanghai, China
- Jun Shi, School of Communication Information Engineering, Shanghai University, Shanghai, China
- Tieyong Zeng, Department of Mathematics, The Chinese University of Hong Kong, New Territories, Hong Kong

23
He W, Zhang C, Dai J, Liu L, Wang T, Liu X, Jiang Y, Li N, Xiong J, Wang L, Xie Y, Liang X. A statistical deformation model-based data augmentation method for volumetric medical image segmentation. Med Image Anal 2024; 91:102984. [PMID: 37837690 DOI: 10.1016/j.media.2023.102984]
Abstract
The accurate delineation of organs-at-risk (OARs) is a crucial step in treatment planning during radiotherapy, as it minimizes the potential adverse effects of radiation on surrounding healthy organs. However, manual contouring of OARs in computed tomography (CT) images is labor-intensive and susceptible to errors, particularly for low-contrast soft tissue. Deep learning-based artificial intelligence algorithms surpass traditional methods but require large datasets. Obtaining annotated medical images is both time-consuming and expensive, hindering the collection of extensive training sets. To enhance the performance of medical image segmentation, augmentation strategies such as rotation and Gaussian smoothing are employed during preprocessing. However, these conventional data augmentation techniques cannot generate more realistic deformations, limiting improvements in accuracy. To address this issue, this study introduces a statistical deformation model-based data augmentation method for volumetric medical image segmentation. By applying diverse and realistic data augmentation to CT images from a limited patient cohort, our method significantly improves the fully automated segmentation of OARs across various body parts. We evaluate our framework on three datasets containing tumor OARs from the head, neck, chest, and abdomen. Test results demonstrate that the proposed method achieves state-of-the-art performance in numerous OAR segmentation challenges. This innovative approach holds considerable potential as a powerful tool for various medical imaging-related sub-fields, effectively addressing the challenge of limited data access.
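The core of a statistical deformation model is a displacement field expressed as a mean field plus a weighted sum of principal deformation modes; sampling random weights yields new, plausible deformations for augmentation. A heavily simplified 1D sketch (ours; the "modes" here are hand-made, not learned from patient data as in the paper):

```python
# Hedged illustration of statistical-deformation-model augmentation:
# sample a displacement field as mean + sum_k w_k * mode_k with
# w_k ~ N(0, sigma_k), then warp a signal by that field.

import random

def sample_deformation(mean_field, modes, sigmas, rng):
    """Draw random mode weights and compose a displacement field."""
    weights = [rng.gauss(0.0, s) for s in sigmas]
    return [mean_field[i] + sum(w * m[i] for w, m in zip(weights, modes))
            for i in range(len(mean_field))]

def warp_nearest(signal, field):
    """Warp a 1D signal by a displacement field (nearest-neighbour)."""
    n = len(signal)
    out = []
    for i, d in enumerate(field):
        j = min(n - 1, max(0, round(i + d)))
        out.append(signal[j])
    return out

rng = random.Random(0)
signal = [0, 0, 1, 1, 0, 0]              # a toy binary "organ" mask
mean_field = [0.0] * 6
modes = [[0.0, 0.5, 1.0, 1.0, 0.5, 0.0]]  # one smooth "shift the middle" mode
field = sample_deformation(mean_field, modes, [1.0], rng)
print(warp_nearest(signal, field))
```

In 3D the same idea applies voxel-wise, with modes estimated from inter-patient registrations so that sampled deformations stay anatomically realistic.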
Affiliation(s)
- Wenfeng He, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Chulong Zhang, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jingjing Dai, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Lin Liu, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Tangsheng Wang, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xuan Liu, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yuming Jiang, Department of Radiation Oncology, Wake Forest University School of Medicine, Winston Salem, North Carolina 27157, USA
- Na Li, Department of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, China
- Jing Xiong, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Lei Wang, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yaoqin Xie, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xiaokun Liang, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China

24
Tao B, Xu J, Gao J, He S, Jiang S, Wang F, Chen X, Wu Y. Deep learning-based automatic segmentation of bone graft material after maxillary sinus augmentation. Clin Oral Implants Res 2023. [PMID: 38033189 DOI: 10.1111/clr.14221]
Abstract
OBJECTIVES To investigate the accuracy and reliability of deep learning for automatic graft material segmentation after maxillary sinus augmentation (SA) from cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS One hundred paired CBCT scans (a preoperative scan and a postoperative scan) were collected and randomly allocated to training (n = 82) and testing (n = 18) subsets. The ground truths of graft materials were labeled jointly by three observers (two experienced surgeons and a computer engineer). A deep learning model comprising a 3D V-Net and a 3D Attention V-Net was developed. The overall performance of the model was assessed on the testing dataset. Accuracy and inference time were compared between model-driven and manual segmentation (by two surgeons with 3 years of experience in dental implant surgery) on 10 CBCT scans from the test samples. RESULTS The deep learning model achieved a Dice coefficient (Dice) of 90.36 ± 2.53%, a 95% Hausdorff distance (HD) of 1.59 ± 0.82 mm, and an average surface distance (ASD) of 0.38 ± 0.11 mm. The proposed model needed only 7.2 s, whereas the surgeons took 19.15 min on average, to complete a segmentation task. The overall performance of the model was significantly superior to that of the surgeons. CONCLUSIONS The proposed deep learning model yielded more accurate and efficient automatic segmentation of graft material after SA than the two surgeons. The proposed model could underpin a powerful system for volumetric change evaluation, dental implant planning, and digital dentistry.
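The surface-distance metrics reported here (95% Hausdorff distance and average surface distance) measure how far two segmentation boundaries lie from each other. An illustrative sketch (ours, on toy 2D boundary point sets, not the study's code):

```python
# Illustrative surface-distance metrics between two boundary point sets.

def _dists(src, dst):
    """For each point in src, distance to the nearest point in dst."""
    return [min(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in dst)
            for ax, ay in src]

def average_surface_distance(a, b):
    """Mean symmetric nearest-boundary distance."""
    d = _dists(a, b) + _dists(b, a)
    return sum(d) / len(d)

def hd95(a, b):
    """95th-percentile (nearest-rank) symmetric Hausdorff distance,
    which discards the worst 5% of outlier boundary points."""
    d = sorted(_dists(a, b) + _dists(b, a))
    return d[min(len(d) - 1, int(0.95 * len(d)))]

square = [(x, y) for x in range(4) for y in range(4)
          if x in (0, 3) or y in (0, 3)]           # a 4x4 square outline
shifted = [(x + 1, y) for x, y in square]          # same outline, shifted by 1
print(average_surface_distance(square, square))    # 0.0 for identical boundaries
print(hd95(square, shifted) <= 1.0)                # worst mismatch about 1 voxel
```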
Affiliation(s)
- Baoxin Tao, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- Jiangchang Xu, Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jie Gao, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- Shamin He, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- Shuanglin Jiang, Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Feng Wang, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China
- Xiaojun Chen, Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiqun Wu, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Stomatology, Shanghai Jiao Tong University, Shanghai, China; National Center for Stomatology, Shanghai, China; National Clinical Research Center for Oral Diseases, Shanghai, China; Shanghai Key Laboratory of Stomatology, Shanghai, China; Shanghai Research Institute of Stomatology, Shanghai, China

25
Dumbryte I, Narbutis D, Androulidaki M, Vailionis A, Juodkazis S, Malinauskas M. Teeth Microcracks Research: Towards Multi-Modal Imaging. Bioengineering (Basel) 2023; 10:1354. [PMID: 38135945 PMCID: PMC10740647 DOI: 10.3390/bioengineering10121354]
Abstract
This perspective is an overview of recent advances in teeth microcrack (MC) research, where there is a clear tendency towards a shift from two-dimensional (2D) to three-dimensional (3D) examination techniques, enhanced with artificial intelligence models for data processing and image acquisition. X-ray micro-computed tomography combined with machine learning allows 3D characterization of all spatially resolved cracks, regardless of where within the tooth they begin and extend, as well as of the arrangement of MCs and their structural properties. With photoluminescence and micro-/nano-Raman spectroscopy, the optical properties and the chemical and elemental composition of the material can be evaluated, helping to assess the structural integrity of the tooth at the MC site. By approaching cracked tooth samples from different perspectives and using complementary laboratory techniques, there is a natural progression from 3D to multi-modal imaging, where the volumetric (passive: dimensions) information of the tooth sample can be supplemented by dynamic (active: composition, interaction) image data. The revelation of tooth cracks clearly shows the need to re-assess the role of these MCs and their effect on the structural integrity and longevity of the tooth. This provides insight into the nature of cracks in natural hard materials and contributes to a better understanding of how bio-inspired structures could be designed to anticipate crack propagation in biosolids.
Affiliation(s)
- Irma Dumbryte, Institute of Odontology, Vilnius University, LT-08217 Vilnius, Lithuania
- Donatas Narbutis, Institute of Theoretical Physics and Astronomy, Vilnius University, LT-10222 Vilnius, Lithuania
- Maria Androulidaki, Microelectronics Research Group, Institute of Electronic Structure & Laser, Foundation for Research and Technology FORTH-Hellas, 70013 Heraklion, Crete, Greece
- Arturas Vailionis, Stanford Nano Shared Facilities, Stanford University, Stanford, CA 94305, USA; Department of Physics, Kaunas University of Technology, LT-51368 Kaunas, Lithuania
- Saulius Juodkazis, Optical Sciences Centre and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Swinburne University of Technology, Hawthorn, VIC 3122, Australia; WRH Program International Research Frontiers Initiative (IRFI), Tokyo Institute of Technology, Nagatsuta-cho, Midori-ku, Yokohama 226-8503, Japan

26
Kim H, Jeon YD, Park KB, Cha H, Kim MS, You J, Lee SW, Shin SH, Chung YG, Kang SB, Jang WS, Yoon DK. Automatic segmentation of inconstant fractured fragments for tibia/fibula from CT images using deep learning. Sci Rep 2023; 13:20431. [PMID: 37993627 PMCID: PMC10665312 DOI: 10.1038/s41598-023-47706-4]
Abstract
Orthopaedic surgeons need to correctly identify bone fragments from 2D/3D CT images before trauma surgery. Advances in deep learning technology offer advantages over manual diagnosis in trauma surgery. This study demonstrates the application of a DeepLab v3+-based deep learning model for the automatic segmentation of fragments of the fractured tibia and fibula from CT images and evaluates the performance of the automatic segmentation. The deep learning model, trained on over 11 million images, showed good performance, with a global accuracy of 98.92%, a weighted intersection over union of 0.9841, and a mean boundary F1 score of 0.8921. Moreover, the model segmented 5-8 times faster than manual recognition by experts, which is comparatively inefficient, while producing essentially equivalent results. This study will play an important role in convenient and rapid preoperative surgical planning for trauma surgery.
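The weighted intersection-over-union reported here is per-class IoU averaged with weights proportional to each class's pixel frequency. A toy sketch (ours, invented 1D labels, not the study's data):

```python
# Illustrative sketch: per-class IoU and frequency-weighted IoU
# over flattened label arrays.

def class_iou(pred, truth, cls):
    """Intersection over union for one class label."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 1.0

def weighted_iou(pred, truth, classes):
    """IoU averaged over classes, weighted by ground-truth frequency."""
    n = len(truth)
    return sum((sum(1 for t in truth if t == c) / n) * class_iou(pred, truth, c)
               for c in classes)

# Toy 1D "scan": 0 = background, 1 = tibia fragment, 2 = fibula fragment.
truth = [0, 0, 1, 1, 1, 2, 2, 0]
pred  = [0, 0, 1, 1, 2, 2, 2, 0]
print(class_iou(pred, truth, 1))                     # 2/3: one tibia pixel missed
print(round(weighted_iou(pred, truth, [0, 1, 2]), 3))  # 0.792
```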
Affiliation(s)
- Hyeonjoo Kim, Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea; Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Young Dae Jeon, Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Ki Bong Park, Department of Orthopedic Surgery, University of Ulsan, College of Medicine, Ulsan University Hospital, Ulsan, Republic of Korea
- Hayeong Cha, Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Moo-Sub Kim, Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Juyeon You, Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Se-Won Lee, Department of Orthopedic Surgery, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Seung-Han Shin, Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yang-Guk Chung, Department of Orthopedic Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Bin Kang, Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea
- Won Seuk Jang, Department of Medical Device Engineering and Management, College of Medicine, Yonsei University, Seoul, Republic of Korea
- Do-Kun Yoon, Industrial R&D Center, KAVILAB Co. Ltd., Seoul, Republic of Korea

27
Lv J, Zhang L, Xu J, Li W, Li G, Zhou H. Automatic segmentation of mandibular canal using transformer based neural networks. Front Bioeng Biotechnol 2023; 11:1302524. [PMID: 38047288 PMCID: PMC10693337 DOI: 10.3389/fbioe.2023.1302524]
Abstract
Accurate 3D localization of the mandibular canal is crucial for the success of digitally assisted dental surgeries. Damage to the mandibular canal may result in severe consequences for the patient, including acute pain, numbness, or even facial paralysis. As such, the development of a fast, stable, and highly precise method for mandibular canal segmentation is paramount for enhancing the success rate of dental surgical procedures. Nonetheless, the task of mandibular canal segmentation is fraught with challenges, including a severe imbalance between positive and negative samples and indistinct boundaries, which often compromise the completeness of existing segmentation methods. To surmount these challenges, we propose an innovative, fully automated segmentation approach for the mandibular canal. Our methodology employs a Transformer architecture in conjunction with the cl-Dice loss to ensure that the model concentrates on the connectivity of the mandibular canal. Additionally, we introduce a pixel-level feature fusion technique to bolster the model's sensitivity to fine-grained details of the canal structure. To tackle the issue of sample imbalance and vague boundaries, we implement a strategy founded on mandibular foramen localization to isolate the maximally connected domain of the mandibular canal. Furthermore, a contrast enhancement technique is employed for pre-processing the raw data. We also adopt a Deep Label Fusion strategy for pre-training on synthetic datasets, which substantially elevates the model's performance. Empirical evaluations on a publicly accessible mandibular canal dataset reveal superior performance metrics: a Dice score of 0.844, a clDice score of 0.961, an IoU of 0.731, and an HD95 of 2.947 mm. These results not only validate the efficacy of our approach but also establish its state-of-the-art performance on the public mandibular canal dataset.
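The cl-Dice measure named here rewards preserving the canal's centreline rather than just its volume. A hedged sketch of the metric (ours; skeletons are hand-made voxel sets here, whereas in practice they are extracted by soft skeletonization):

```python
# Illustrative clDice: harmonic mean of topology precision (predicted
# skeleton inside the true mask) and topology sensitivity (true
# skeleton inside the predicted mask). Masks/skeletons are voxel sets.

def cl_dice(mask_pred, skel_pred, mask_true, skel_true):
    tprec = len(skel_pred & mask_true) / len(skel_pred)
    tsens = len(skel_true & mask_pred) / len(skel_true)
    if tprec + tsens == 0:
        return 0.0
    return 2 * tprec * tsens / (tprec + tsens)

# A straight "canal" along x; the prediction misses the last two voxels,
# breaking the centreline early, which clDice penalizes directly.
mask_true = {(x, y) for x in range(10) for y in (0, 1)}
skel_true = {(x, 0) for x in range(10)}
mask_pred = {(x, y) for x in range(8) for y in (0, 1)}
skel_pred = {(x, 0) for x in range(8)}
print(cl_dice(mask_pred, skel_pred, mask_true, skel_true))  # 8/9 ≈ 0.889
```

A volumetric Dice would also drop here, but clDice falls specifically because the centreline is truncated, which is why it suits tubular structures such as the mandibular canal.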
Affiliation(s)
- Wang Li, School of Pharmacy and Bioengineering, Chongqing University of Technology, Chongqing, China

28
Wu L, Wang H, Chen Y, Zhang X, Zhang T, Shen N, Tao G, Sun Z, Ding Y, Wang W, Bu J. Beyond radiologist-level liver lesion detection on multi-phase contrast-enhanced CT images by deep learning. iScience 2023; 26:108183. [PMID: 38026220 PMCID: PMC10654534 DOI: 10.1016/j.isci.2023.108183]
Abstract
Accurate detection of liver lesions from multi-phase contrast-enhanced CT (CECT) scans is a fundamental step for precise liver diagnosis and treatment. However, the analysis of multi-phase contexts is heavily challenged by the misalignment caused by respiration coupled with the movement of organs. Here, we proposed an AI system for multi-phase liver lesion segmentation (named MULLET) for precise and fully automatic segmentation of real-patient CECT images. MULLET enables effectively embedding the important ROIs of CECT images and exploring multi-phase contexts by introducing a transformer-based attention mechanism. Evaluated on 1,229 CECT scans from 1,197 patients, MULLET demonstrated significant performance gains in terms of Dice, Recall, and F2 score, which are 5.80%, 6.57%, and 5.87% higher than the state of the art, respectively. MULLET has been successfully deployed in real-world settings. The deployed AI web server provides a powerful system to boost clinical workflows of liver lesion diagnosis and could be straightforwardly extended to general CECT analyses.
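The F2 score reported here weights recall twice as heavily as precision, which suits lesion detection where a missed lesion is costlier than a false alarm. A quick sketch of the general F-beta formula (ours, with invented counts):

```python
# Illustrative F-beta score from detection counts.

def f_beta(tp, fp, fn, beta=2.0):
    """F_beta = (1 + b^2) * P * R / (b^2 * P + R); beta=2 favours recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy counts: 40 lesions found, 10 false detections, 5 lesions missed.
# Precision = 0.8, recall ≈ 0.889; F2 lands closer to the recall.
print(round(f_beta(40, 10, 5), 3))  # 0.870
```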
Affiliation(s)
- Lei Wu, Zhejiang Provincial Key Laboratory of Service Robot, College of Computer Science, Zhejiang University, Hangzhou, China; Pujian Technology, Hangzhou, Zhejiang, China
- Haishuai Wang, Zhejiang Provincial Key Laboratory of Service Robot, College of Computer Science, Zhejiang University, Hangzhou, China
- Yining Chen, Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xiang Zhang, Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Tianyun Zhang, Liangzhu Laboratory, Zhejiang University Medical Center, Hangzhou, Zhejiang, China
- Ning Shen, Liangzhu Laboratory, Zhejiang University Medical Center, Hangzhou, Zhejiang, China
- Guangyu Tao, Department of Radiology, Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhongquan Sun, Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yuan Ding, Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weilin Wang, Department of Hepatobiliary and Pancreatic Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Jiajun Bu, Zhejiang Provincial Key Laboratory of Service Robot, College of Computer Science, Zhejiang University, Hangzhou, China

29
Rauf AM, Mahmood TMA, Mohammed MH, Omer ZQ, Kareem FA. Orthodontic Implementation of Machine Learning Algorithms for Predicting Some Linear Dental Arch Measurements and Preventing Anterior Segment Malocclusion: A Prospective Study. Medicina (Kaunas) 2023; 59:1973. [PMID: 38004022 PMCID: PMC10673436 DOI: 10.3390/medicina59111973]
Abstract
Background and Objectives: Orthodontics is a field that has seen significant advancements in recent years, with technology playing a crucial role in improving diagnosis and treatment planning. The study aimed to implement machine learning to predict dental arch widths, both as a preventive measure against future crowding in growing patients and as a diagnostic tool for adult patients seeking orthodontic treatment. Materials and Methods: Four hundred and fifty intraoral scan (IOS) images were selected from orthodontic patients seeking treatment in private orthodontic centers. Real inter-canine, inter-premolar, and inter-molar widths were measured digitally. Two main machine learning algorithms, k-nearest neighbours (KNN) and linear regression (LR), were implemented on the data using the Python programming language. Results: After the dataset had been run through the two ML algorithms, the evaluation metrics showed that KNN gave better prediction accuracy than LR, at around 99%. Conclusions: It is possible to leverage machine learning to enhance orthodontic diagnosis and treatment planning by predicting linear dental arch measurements and preventing anterior segment malocclusion.
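A k-nearest-neighbour regressor of the kind compared here averages the targets of the most similar training cases. A toy sketch (ours, with invented width values, not the study's 450-scan dataset or its pipeline):

```python
# Hedged sketch: KNN regression predicting inter-molar width from
# (inter-canine, inter-premolar) widths, all in millimetres.

def knn_predict(train_x, train_y, query, k=3):
    """Average the targets of the k training points nearest the query."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, y)
        for x, y in zip(train_x, train_y))
    return sum(y for _, y in dists[:k]) / k

# Invented (inter-canine, inter-premolar) -> inter-molar training pairs.
train_x = [(33.0, 41.0), (34.5, 42.5), (36.0, 44.0), (31.0, 39.5), (35.0, 43.0)]
train_y = [51.0, 52.5, 54.0, 49.5, 53.0]
print(knn_predict(train_x, train_y, (34.0, 42.0), k=3))  # mean of 3 nearest widths
```

Unlike linear regression, KNN makes no linearity assumption between the arch widths, which may explain its edge on this kind of data.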
Affiliation(s)
- Aras Maruf Rauf, Department of Pedodontics, Orthodontics and Preventive Dentistry, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
- Trefa Mohammed Ali Mahmood, Department of Pedodontics, Orthodontics and Preventive Dentistry, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
- Miran Hikmat Mohammed, Department of Basic Sciences, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq
- Zana Qadir Omer, Department of Pedodontics, Orthodontics and Preventive Dentistry, College of Dentistry, Hawler Medical University, Erbil 44001, Iraq
- Fadil Abdullah Kareem, Department of Pedodontics, Orthodontics and Preventive Dentistry, College of Dentistry, University of Sulaimani, Sulaimaniyah 46001, Iraq

30
Cameron AB, Abdelhamid HMHAS, George R. CBCT Segmentation and Additive Manufacturing for the Management of Root Canals with Ledges: A Case Report and Technique. J Endod 2023; 49:1570-1575. [PMID: 37582414 DOI: 10.1016/j.joen.2023.08.002]
Abstract
Cone-beam computed tomography (CBCT) assessment of a ledge can be useful to a clinician; however, using this information effectively during a treatment procedure can be challenging. Advanced additive manufacturing technologies combined with semi-automated segmentation of root canals can help simulate the ledge and aid in the management of these iatrogenic complications. A patient presented after unsuccessful root canal treatment with a ledge on the left mandibular first molar. A CBCT scan was taken, and the images were imported into segmentation software (Mimics, Materialise). The canal was isolated and segmented along with the other structures of the tooth. A three-dimensional digital model of the internal structures of the canal was used to design a mock-up, which was additively manufactured. This served as a preclinical guide to simulate the procedure, precurve the file, and manage the canal. This novel technique, using virtual modeling from CBCT data after ledge formation, allowed for successful and rapid management of a tooth with ledges.
Affiliation(s)
- Andrew B Cameron
- School of Medicine and Dentistry, Griffith University, Gold Coast, Australia; Menzies Health Institute Queensland Disability & Rehabilitation Center, Gold Coast, Australia
- Roy George
- School of Medicine and Dentistry, Griffith University, Gold Coast, Australia.
31
Zhang L, Li W, Lv J, Xu J, Zhou H, Li G, Ai K. Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview. J Dent 2023; 138:104727. [PMID: 37769934 DOI: 10.1016/j.jdent.2023.104727]
Abstract
OBJECTIVES This article reviews recent advances in computer-aided segmentation methods for oral and maxillofacial surgery and describes their advantages and limitations, aiming to provide an invaluable resource for precise therapy and surgical planning. STUDY SELECTION, DATA AND SOURCES This review includes full-text articles and conference proceedings reporting the application of segmentation methods in oral and maxillofacial surgery. The research focuses on three aspects: tooth detection and segmentation, mandibular canal segmentation, and alveolar bone segmentation. The most commonly used imaging technique is CBCT, followed by conventional CT and orthopantomography. A systematic electronic database search was performed up to July 2023 (Medline via PubMed, IEEE Xplore, arXiv, and Google Scholar). RESULTS The reviewed segmentation methods fall into two main categories: traditional image processing and machine learning (including deep learning). Performance testing on datasets labeled by medical professionals shows that these methods perform similarly to dentists' annotations, confirming their effectiveness; however, no studies have yet evaluated their practical application value. CONCLUSION Segmentation methods (particularly deep learning methods) have demonstrated unprecedented performance, while inherent challenges remain, including the scarcity and inconsistency of datasets, visible artifacts in images, unbalanced data distribution, and the "black box" nature of the models. CLINICAL SIGNIFICANCE Accurate image segmentation is critical for precise treatment and surgical planning in oral and maxillofacial surgery. This review aims to facilitate more accurate and effective surgical treatment planning among dental researchers.
Affiliation(s)
- Lang Zhang
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Wang Li
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jinxun Lv
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Jiajie Xu
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Hengyu Zhou
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Gen Li
- School of Biomedical Engineering, Chongqing University of Technology, Chongqing 400054, China
- Keqi Ai
- Department of Radiology, Xinqiao Hospital, Army Medical University, Chongqing 400037, China
32
Tian Y, Zhang Z, Zhao B, Liu L, Liu X, Feng Y, Tian J, Kou D. Coarse-to-fine prior-guided attention network for multi-structure segmentation on dental panoramic radiographs. Phys Med Biol 2023; 68:215010. [PMID: 37816372 DOI: 10.1088/1361-6560/ad0218]
Abstract
Objective. Accurate segmentation of various anatomical structures from dental panoramic radiographs is essential for the diagnosis and treatment planning of various diseases in digital dentistry. In this paper, we propose a novel deep learning-based method for accurate and fully automatic segmentation of the maxillary sinus, mandibular condyle, mandibular nerve, alveolar bone and teeth on panoramic radiographs. Approach. A two-stage coarse-to-fine prior-guided segmentation framework is proposed to segment multiple structures on dental panoramic radiographs. In the coarse stage, a multi-label segmentation network is used to generate the coarse segmentation mask, and in the fine-tuning stage, a prior-guided attention network with an encoder-decoder architecture is proposed to precisely predict the mask of each anatomical structure. First, a prior-guided edge fusion module is incorporated into the network at the input of each convolution level of the encoder path to generate edge-enhanced image feature maps. Second, a prior-guided spatial attention module is proposed to guide the network to extract relevant spatial features from foreground regions based on the combination of the prior information and the spatial attention mechanism. Finally, a prior-guided hybrid attention module is integrated at the bottleneck of the network to explore global context from both spatial and category perspectives. Main results. We evaluated the segmentation performance of our method on a testing dataset that contains 150 panoramic radiographs collected from real-world clinical scenarios. The segmentation results indicate that our proposed method achieves more accurate segmentation performance compared with state-of-the-art methods. The average Jaccard scores are 87.91%, 85.25%, 63.94%, 93.46% and 88.96% for the maxillary sinus, mandibular condyle, mandibular nerve, alveolar bone and teeth, respectively. Significance. The proposed method was able to accurately segment multiple structures on panoramic radiographs, and has the potential to be part of the process of automatic pathology diagnosis from dental panoramic radiographs.
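As a side note on the evaluation metric reported above, the Jaccard score (intersection over union) and the closely related Dice coefficient can be computed directly from binary masks. The tiny flattened masks below are invented for illustration and are unrelated to the paper's data.

```python
# Overlap metrics for binary segmentation masks (flattened to 1-D lists).
def jaccard(pred, gt):
    """Intersection over union of two binary masks."""
    inter = sum(p & g for p, g in zip(pred, gt))
    union = sum(p | g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2*|A∩B| / (|A| + |B|)."""
    inter = sum(p & g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2 * inter / total if total else 1.0

pred = [0, 1, 1, 1, 0, 1, 1, 0]  # hypothetical predicted mask
gt   = [0, 1, 1, 0, 0, 1, 1, 1]  # hypothetical ground-truth mask

j = jaccard(pred, gt)  # 4 / 6
d = dice(pred, gt)     # 8 / 10
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), which is why papers often report one or the other interchangeably.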
Affiliation(s)
- Yuan Tian
- Angelalign Inc. No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Zhejia Zhang
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Bailiang Zhao
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Lichao Liu
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Xiaolin Liu
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Yang Feng
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Jie Tian
- Angelalign Inc., No. 500 Zhengli Road, Yangpu District, Shanghai, People's Republic of China
- Dazhi Kou
- Shanghai Supercomputer Center, No. 585 Guoshoujing Road, Pudong New District, Shanghai, People's Republic of China
33
Chou TH, Liao SW, Huang JX, Huang HY, Vu-Dinh H, Yau HT. Virtual Dental Articulation Using Computed Tomography Data and Motion Tracking. Bioengineering (Basel) 2023; 10:1248. [PMID: 38002372 PMCID: PMC10669225 DOI: 10.3390/bioengineering10111248]
Abstract
Dental articulation is of crucial and fundamental importance in the design of dental restorations and the analysis of prosthetic or orthodontic occlusions. However, with conventional and digital articulators it is difficult and cumbersome to transfer the dental cast model into the articulator workspace using traditional facebows. In this study, we developed a personalized virtual dental articulator that directly uses computed tomography (CT) data to mathematically model complex jaw movement, providing a more efficient and accurate way of analyzing and designing dental restorations. The Frankfurt horizontal plane was established from the CT data for the mathematical modeling of virtual articulation, eliminating tedious facebow transfers. After capturing the patients' CT images and tracking their jaw movements before dental treatment, the jaw-tracking information was incorporated into the articulation model. The personalized articulation approach was validated by comparing jaw movement between simulation data (virtual articulator) and real measurement data. As a result, the proposed virtual articulator achieves two important functions. First, it replaces the traditional facebow transfer process by transferring the digital dental model to the virtual articulator through the anatomical relationship derived from the cranial CT data. Second, the jaw movement trajectory provided by optical tracking was incorporated into the mathematical articulation model to create a personalized virtual articulation with a small Fréchet distance of 1.7 mm. This virtual articulator provides a valuable tool that enables dentists to obtain diagnostic information about the temporomandibular joint (TMJ) and configure personalized occlusal analysis settings for patients.
Affiliation(s)
- Ting-Han Chou
- Department of Stomatology, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600, Taiwan
- Shu-Wei Liao
- Department of Mechanical Engineering, Advanced Institute of Manufacturing with High-Innovation, National Chung Cheng University, Chiayi 621, Taiwan
- Jun-Xuan Huang
- Department of Mechanical Engineering, Advanced Institute of Manufacturing with High-Innovation, National Chung Cheng University, Chiayi 621, Taiwan
- Hsun-Yu Huang
- Department of Stomatology, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chiayi 600, Taiwan
- Hien Vu-Dinh
- Department of Mechanical Engineering, Advanced Institute of Manufacturing with High-Innovation, National Chung Cheng University, Chiayi 621, Taiwan
- Hong-Tzong Yau
- Department of Mechanical Engineering, Advanced Institute of Manufacturing with High-Innovation, National Chung Cheng University, Chiayi 621, Taiwan
- School of Dentistry, Kaohsiung Medical University, Kaohsiung 807, Taiwan
34
Elgarba BM, Van Aelst S, Swaity A, Morgan N, Shujaat S, Jacobs R. Deep learning-based segmentation of dental implants on cone-beam computed tomography images: A validation study. J Dent 2023; 137:104639. [PMID: 37517787 DOI: 10.1016/j.jdent.2023.104639]
Abstract
OBJECTIVES To train and validate a cloud-based convolutional neural network (CNN) model for automated segmentation (AS) of dental implants and attached prosthetic crowns on cone-beam computed tomography (CBCT) images. METHODS A dataset of 280 maxillomandibular jawbone CBCT scans was acquired from patients who underwent implant placement with or without coronal restoration. The dataset was randomly divided into three subsets: a training set (n = 225), a validation set (n = 25) and a testing set (n = 30). A CNN model was developed and trained using expert-based semi-automated segmentation (SS) of the implant and attached prosthetic crown as the ground truth. The performance of AS was assessed by comparison with SS and with manually corrected automated segmentation, referred to as refined-automated segmentation (R-AS). Evaluation metrics included timing, voxel-wise comparison based on a confusion matrix, and 3D surface differences. RESULTS AS was on average 60 times faster (<30 s) than the SS approach. The CNN model was highly effective in segmenting dental implants both with and without coronal restoration, achieving high Dice similarity coefficient scores of 0.92±0.02 and 0.91±0.03, respectively. Moreover, root mean square deviation values were low (implant only: 0.08±0.09 mm, implant+restoration: 0.11±0.07 mm) when compared with R-AS, implying high AI segmentation accuracy. CONCLUSIONS The proposed cloud-based deep learning tool demonstrated high-performance, time-efficient segmentation of implants on CBCT images. CLINICAL SIGNIFICANCE AI-based segmentation of implants and prosthetic crowns can minimize the negative impact of artifacts and enhance the generalizability of creating dental virtual models. Furthermore, incorporating the suggested tool into existing CNN models specialized for segmenting anatomical structures can improve pre-surgical implant planning and post-operative assessment of peri‑implant bone levels.
Affiliation(s)
- Bahaaeldeen M Elgarba
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Prosthodontics, Faculty of Dentistry, Tanta University, 31511 Tanta, Egypt
- Stijn Van Aelst
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Abdullah Swaity
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Prosthodontic Department, King Hussein Medical Center, Royal Medical Services, Amman, Jordan
- Nermin Morgan
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Oral Medicine, Faculty of Dentistry, Mansoura University, Mansoura, Egypt
- Sohaib Shujaat
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, Kingdom of Saudi Arabia
- Reinhilde Jacobs
- OMFS-IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven & Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium; Department of Dental Medicine, Karolinska Institute, Stockholm, Sweden
35
Almalki SA, Alsubai S, Alqahtani A, Alenazi AA. Denoised encoder-based residual U-net for precise teeth image segmentation and damage prediction on panoramic radiographs. J Dent 2023; 137:104651. [PMID: 37553029 DOI: 10.1016/j.jdent.2023.104651]
Abstract
OBJECTIVES This research performs teeth segmentation on panoramic radiograph images using a denoised encoder-based residual U-Net model, which enhances segmentation and adapts its predictions to new and different data, making the proposed model more robust and assisting in the accurate identification of damage in individual teeth. METHODS Effective segmentation starts with pre-processing the Tufts dataset, resizing images to avoid computational complexity. The prediction of tooth defects is then performed with the denoised encoder block in the residual U-Net model, in which a modified identity block in the encoder section provides finer segmentation of specific image regions and optimal feature identification. The denoised block helps handle noisy ground-truth images effectively. RESULTS The proposed model achieved high mean Dice and mean IoU values of 98.90% and 98.74%, respectively. CONCLUSIONS The proposed AI-enabled model segmented teeth precisely on the Tufts dental dataset despite dense dental fillings and varying tooth types. CLINICAL SIGNIFICANCE The proposed model is pivotal for improved dental diagnostics, offering precise identification of dental anomalies. It could improve clinical dental practice by facilitating more accurate treatments and safer examination processes with lower radiation exposure, thus enhancing overall patient care.
Affiliation(s)
- Sultan A Almalki
- Department of Preventive Dental Sciences, College of Dentistry, Prince Sattam Bin AbdulAziz University, Al-Kharj 11942, Saudi Arabia.
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Abdullah Alqahtani
- Department of Software Engineering, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Adel A Alenazi
- Department of Oral and Maxillofacial Surgery and Diagnostic Science, College of Dentistry, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
36
Shen X, Zhang C, Jia X, Li D, Liu T, Tian S, Wei W, Sun Y, Liao W. TranSDFNet: Transformer-Based Truncated Signed Distance Fields for the Shape Design of Removable Partial Denture Clasps. IEEE J Biomed Health Inform 2023; 27:4950-4960. [PMID: 37471183 DOI: 10.1109/jbhi.2023.3295387]
Abstract
The ever-growing aging population has led to an increasing need for removable partial dentures (RPDs), since they are typically the least expensive treatment option for partial edentulism. However, the digital design of RPDs remains challenging for dental technicians due to the variety of partially edentulous scenarios and complex combinations of denture components. To accelerate the design of RPDs, we propose a U-shape network incorporating Transformer blocks to automatically generate RPD clasps, one of the most frequently used RPD components. Unlike existing dental restoration design algorithms, we introduce the voxel-based truncated signed distance field (TSDF) as an intermediate representation, which reduces the sensitivity of the network to resolution and contributes to smoother reconstruction. Besides, a selective insertion scheme is proposed to solve the memory issue caused by Transformer blocks and enables the algorithm to work well in scenarios with insufficient data. We further design two weighted loss functions to filter out the noisy signals generated from the zero-gradient areas in the TSDF. Ablation and comparison studies demonstrate that our algorithm outperforms state-of-the-art reconstruction methods by a large margin and can serve as an intelligent auxiliary in denture design.
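The truncated signed distance field (TSDF) representation mentioned above can be illustrated with a minimal one-dimensional sketch: each sample stores its signed distance to the surface (negative inside), clamped to a truncation band so a network only has to learn values near the surface. The truncation band and sample distances below are arbitrary choices for demonstration, not values from the paper.

```python
# Minimal TSDF sketch: clamp signed distances into [-trunc, trunc].
def tsdf(signed_distance, trunc=1.0):
    """Truncate a signed distance to the band [-trunc, trunc]."""
    return max(-trunc, min(trunc, signed_distance))

# Hypothetical signed distances sampled along a line crossing a surface
# located at distance 0 (negative values lie inside the shape):
samples = [-3.0, -1.5, -0.4, 0.0, 0.7, 2.5]
field = [tsdf(d, trunc=1.0) for d in samples]
# field == [-1.0, -1.0, -0.4, 0.0, 0.7, 1.0]
```

In a voxel grid the same clamping is applied per voxel, so far-away voxels all share the saturated value ±trunc and carry no gradient, which is why the paper's weighted losses down-weight those zero-gradient areas.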
37
Zhang J, Cui Z, Shi Z, Jiang Y, Zhang Z, Dai X, Yang Z, Gu Y, Zhou L, Han C, Huang X, Ke C, Li S, Xu Z, Gao F, Zhou L, Wang R, Liu J, Zhang J, Ding Z, Sun K, Li Z, Liu Z, Shen D. A robust and efficient AI assistant for breast tumor segmentation from DCE-MRI via a spatial-temporal framework. Patterns (N Y) 2023; 4:100826. [PMID: 37720328 PMCID: PMC10499873 DOI: 10.1016/j.patter.2023.100826]
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) allows screening, follow-up, and diagnosis of breast tumors with high sensitivity. Accurate tumor segmentation from DCE-MRI can provide crucial information on tumor location and shape, which significantly influences downstream clinical decisions. In this paper, we aim to develop an artificial intelligence (AI) assistant to automatically segment breast tumors by capturing dynamic changes in multi-phase DCE-MRI with a spatial-temporal framework. The main advantages of our AI assistant include (1) robustness, i.e., our model can handle MR data with different phase numbers and imaging intervals, as demonstrated on a large-scale dataset from seven medical centers, and (2) efficiency, i.e., our AI assistant reduces the time required for manual annotation by a factor of 20, while maintaining accuracy comparable to that of physicians. More importantly, as the fundamental step in building an AI-assisted breast cancer diagnosis system, our AI assistant will promote the application of AI in more clinical diagnostic practices regarding breast cancer.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Zhiming Cui
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Zhenwei Shi
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Yingjia Jiang
- Department of Radiology, The Second Xiangya Hospital, Central South University, Hunan 410011, China
- Zhiliang Zhang
- School of Medical Imaging, Hangzhou Medical College, Zhejiang 310059, China
- Xiaoting Dai
- Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200080, China
- Zhenlu Yang
- Department of Radiology, Guizhou Provincial People’s Hospital, Guizhou 550002, China
- Yuning Gu
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Lei Zhou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Chu Han
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Xiaomei Huang
- Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Chenglu Ke
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Suyun Li
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Zeyan Xu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Fei Gao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Luping Zhou
- School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW 2006, Australia
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People’s Hospital, Guizhou 550002, China
- Jun Liu
- Department of Radiology, The Second Xiangya Hospital, Central South University, Hunan 410011, China
- Jiayin Zhang
- Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200080, China
- Zhongxiang Ding
- Department of Radiology, Key Laboratory of Clinical Cancer Pharmacology and Toxicology Research of Zhejiang Province, Hangzhou 310003, China
- Kun Sun
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Zhenhui Li
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Kunming 650118, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong 510080, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China
- Shanghai Clinical Research and Trial Center, Shanghai 200052, China
38
Liu J, Hao J, Lin H, Pan W, Yang J, Feng Y, Wang G, Li J, Jin Z, Zhao Z, Liu Z. Deep learning-enabled 3D multimodal fusion of cone-beam CT and intraoral mesh scans for clinically applicable tooth-bone reconstruction. Patterns (N Y) 2023; 4:100825. [PMID: 37720330 PMCID: PMC10499902 DOI: 10.1016/j.patter.2023.100825]
Abstract
High-fidelity three-dimensional (3D) models of tooth-bone structures are valuable for virtual dental treatment planning; however, they require integrating data from cone-beam computed tomography (CBCT) and intraoral scans (IOS) using methods that are either error-prone or time-consuming. Hence, this study presents Deep Dental Multimodal Fusion (DDMF), an automatic multimodal framework that reconstructs 3D tooth-bone structures using CBCT and IOS. Specifically, the DDMF framework comprises CBCT and IOS segmentation modules as well as a multimodal reconstruction module with novel pixel representation learning architectures, prior knowledge-guided losses, and geometry-based 3D fusion techniques. Experiments on real-world large-scale datasets revealed that DDMF achieved superior segmentation performance on CBCT and IOS, achieving a 0.17 mm average symmetric surface distance (ASSD) for 3D fusion with a substantial processing time reduction. Additionally, clinical applicability studies have demonstrated DDMF's potential for accurately simulating tooth-bone structures throughout the orthodontic treatment process.
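The average symmetric surface distance (ASSD) reported above can be sketched on small 2-D point sets standing in for surfaces; the coordinates below are invented for illustration, and real ASSD computations run on dense 3-D surface points extracted from the fused meshes.

```python
# Rough ASSD sketch: mean nearest-neighbour distance, taken symmetrically
# in both directions between two point sets. Points are hypothetical.
import math

def assd(a, b):
    """Average symmetric surface distance between point sets a and b."""
    d_ab = [min(math.dist(p, q) for q in b) for p in a]
    d_ba = [min(math.dist(p, q) for q in a) for p in b]
    return (sum(d_ab) + sum(d_ba)) / (len(d_ab) + len(d_ba))

surface_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
surface_b = [(0.0, 0.1), (1.0, 0.1), (2.0, 0.1)]
score = assd(surface_a, surface_b)  # ≈ 0.1 for these parallel rows
```

A brute-force nearest-neighbour search like this is O(n²); production metric libraries use distance transforms or k-d trees for dense surfaces.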
Affiliation(s)
- Jiaxiang Liu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Hangzhou 310000, China
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
- College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China
- Jin Hao
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Harvard School of Dental Medicine, Harvard University, Boston, MA 02115, USA
- Hangzheng Lin
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
- Wei Pan
- OPT Machine Vision Tech Co., Ltd., Tokyo 135-0064, Japan
- Jianfei Yang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Yang Feng
- Angelalign Inc., Shanghai 200433, China
- Gaoang Wang
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
- Jin Li
- Department of Stomatology, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People’s Hospital, Shenzhen 518025, China
- Zuolin Jin
- Department of Orthodontics, School of Stomatology, Air Force Medical University, Xi’an 710032, China
- Zhihe Zhao
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Zuozhu Liu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Hangzhou 310000, China
- Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining 314400, China
39
Chen Z, Chen S, Hu F. CTA-UNet: CNN-transformer architecture UNet for dental CBCT images segmentation. Phys Med Biol 2023; 68:175042. [PMID: 37579767 DOI: 10.1088/1361-6560/acf026]
Abstract
Current deep learning models are limited in segmenting dental cone-beam computed tomography (CBCT) images: they struggle with complex root morphology and the fuzzy boundaries between tooth roots and alveolar bone, and annotating dental CBCT images is costly. We collected dental CBCT data from 200 patients, annotated 45 of them for network training, and proposed a CNN-Transformer Architecture UNet, which combines the advantages of CNNs and Transformers. The CNN component effectively extracts local features, while the Transformer captures long-range dependencies. Multiple spatial attention modules were included to enhance the network's ability to extract and represent spatial information. Additionally, we introduced a novel masked image modeling method to pre-train the CNN and Transformer modules simultaneously, mitigating the limitations of a smaller amount of labeled training data. Experimental results demonstrate that the proposed method achieved superior performance (DSC of 87.12%, IoU of 78.90%, HD95 of 0.525 mm, ASSD of 0.199 mm), provides a more efficient and effective approach to automatically and accurately segment dental CBCT images, and has real-world applicability in orthodontics and dental implants.
Affiliation(s)
- Zeyu Chen
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, People's Republic of China
- Senyang Chen
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, People's Republic of China
- Fengjun Hu
- College of Information Science and Technology, Zhejiang Shuren University, Hangzhou 310015, People's Republic of China
40
Liang X, He J, He L, Lin Y, Li Y, Cai K, Wei J, Lu Y, Chen Z. An ultrasound-based deep learning radiomic model combined with clinical data to predict clinical pregnancy after frozen embryo transfer: a pilot cohort study. Reprod Biomed Online 2023; 47:103204. [PMID: 37248145 DOI: 10.1016/j.rbmo.2023.03.015]
Abstract
RESEARCH QUESTION Can a multi-modal fusion model based on ultrasound-based deep learning radiomics combined with clinical parameters provide personalized evaluation of endometrial receptivity and predict the occurrence of clinical pregnancy after frozen embryo transfer (FET)? DESIGN Prospective cohort study of women (n = 326) who underwent FET between August 2019 and December 2021. Input quantitative variables and input image data for radiomic feature extraction were collected to establish a multi-modal fusion prediction model. An additional independent dataset of 453 ultrasound endometrial images was used to establish the segmentation model to determine the endometrial region on ultrasound images for analysis. The performance of different algorithms and different input data for prediction of FET outcome was compared. RESULTS A total of 240 patients with complete data were included in the final cohort. The proposed multi-modal fusion model performed significantly better than the use of either image or quantitative variables alone to predict the occurrence of clinical pregnancy after FET (P ≤ 0.034). The area under the curve, accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the proposed model were 0.825, 72.5%, 96.2%, 58.3%, 72.3% and 89.5%, respectively. The Dice coefficient of the multi-task endometrial ultrasound segmentation model was 0.89. Use of endometrial segmentation features significantly improved the prediction performance of the model (P = 0.041). CONCLUSIONS The multi-modal fusion model based on ultrasound-based deep learning radiomics combined with clinical quantitative variables offers a favourable and rapid non-invasive approach for personalized prediction of FET outcome.
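The accuracy, sensitivity, specificity, PPV and NPV reported above all derive from a single 2x2 confusion matrix; a small sketch with hypothetical counts (not the study's data):

```python
def clinical_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # recall of the positive class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts for illustration only
m = clinical_metrics(tp=50, fp=20, tn=28, fn=2)
```

The trade-off visible in the paper's numbers (high sensitivity, lower specificity) corresponds to a decision threshold that favours catching positives at the cost of more false positives.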
Affiliation(s)
- Xiaowen Liang
- Institution of Medical Imaging, University of South China, Hengyang, China; The Seventh Affiliated Hospital, Hengyang Medical School, University of South China, Changsha, China; The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
- Jianchong He
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Lu He
- The First Affiliated Hospital, Department of Obstetrics and Gynecology, Hengyang Medical School, University of South China, Hengyang, China
- Yan Lin
- Department of Ultrasound Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Yuewei Li
- Department of Ultrasound Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Kuan Cai
- Department of Ultrasound Medicine, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Jun Wei
- Institution of Medical Imaging, University of South China, Hengyang, China
- Yao Lu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Zhiyi Chen
- Institution of Medical Imaging, University of South China, Hengyang, China; The Seventh Affiliated Hospital, Hengyang Medical School, University of South China, Changsha, China; The First Affiliated Hospital, Medical Imaging Centre, Hengyang Medical School, University of South China, Hengyang, China
41
Huang H, Zheng O, Wang D, Yin J, Wang Z, Ding S, Yin H, Xu C, Yang R, Zheng Q, Shi B. ChatGPT for shaping the future of dentistry: the potential of multi-modal large language model. Int J Oral Sci 2023; 15:29. [PMID: 37507396 PMCID: PMC10382494 DOI: 10.1038/s41368-023-00239-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2023] [Revised: 07/06/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
ChatGPT, a conversational variant of the Generative Pretrained Transformer 4 (GPT-4) developed by OpenAI, is one of the milestone Large Language Models (LLMs), with billions of parameters. LLMs have stirred up much interest among researchers and practitioners for their impressive skills in natural language processing tasks, which profoundly impact various fields. This paper mainly discusses the future applications of LLMs in dentistry. We introduce two primary LLM deployment methods in dentistry, automated dental diagnosis and cross-modal dental diagnosis, and examine their potential applications. In particular, equipped with a cross-modal encoder, a single LLM can manage multi-source data and conduct advanced natural language reasoning to perform complex clinical operations. We also present cases to demonstrate the potential of a fully automatic multi-modal LLM AI system for dental clinical application. While LLMs offer significant potential benefits, challenges such as data privacy, data quality, and model bias need further study. Overall, LLMs have the potential to revolutionize dental diagnosis and treatment, which indicates a promising avenue for clinical application and research in dentistry.
Affiliation(s)
- Hanyao Huang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China.
- Ou Zheng
- Department of Civil, Environmental & Construction Engineering, University of Central Florida, Orlando, USA.
- Dongdong Wang
- Department of Civil, Environmental & Construction Engineering, University of Central Florida, Orlando, USA
- Jiayi Yin
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Zijin Wang
- Department of Civil, Environmental & Construction Engineering, University of Central Florida, Orlando, USA
- Shengxuan Ding
- College of Transportation Engineering, University of Central Florida, Orlando, USA
- Heng Yin
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Chuan Xu
- School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, China
- C2SMART Center, Tandon School of Engineering, New York University, Brooklyn, USA
- Renjie Yang
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Eastern Clinic, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Qian Zheng
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Bing Shi
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China Hospital of Stomatology, Sichuan University, Chengdu, China
42
Ha EG, Jeon KJ, Lee C, Kim HS, Han SS. Development of deep learning model and evaluation in real clinical practice of lingual mandibular bone depression (Stafne cyst) on panoramic radiographs. Dentomaxillofac Radiol 2023; 52:20220413. [PMID: 37192044 PMCID: PMC10304844 DOI: 10.1259/dmfr.20220413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Revised: 03/31/2023] [Accepted: 04/03/2023] [Indexed: 05/18/2023] Open
Abstract
OBJECTIVES Lingual mandibular bone depression (LMBD) is a developmental bony defect in the lingual aspect of the mandible that does not require any surgical treatment. It is sometimes confused with a cyst or another radiolucent pathologic lesion on panoramic radiography. Thus, it is important to differentiate LMBD from true pathological radiolucent lesions requiring treatment. This study aimed to develop a deep learning model for the fully automatic differential diagnosis of LMBD from true pathological radiolucent cysts or tumors on panoramic radiographs without a manual process and evaluate the model's performance using a test dataset that reflected real clinical practice. METHODS A deep learning model using the EfficientDet algorithm was developed with training and validation data sets (443 images) consisting of 83 LMBD patients and 360 patients with true pathological radiolucent lesions. The test data set (1500 images) consisted of 8 LMBD patients, 53 patients with pathological radiolucent lesions, and 1439 healthy patients, reflecting the clinical prevalence of these conditions in order to simulate real-world practice, and the model was evaluated in terms of accuracy, sensitivity, and specificity using this test data set. RESULTS The model's accuracy, sensitivity, and specificity were more than 99.8%, and only 10 out of 1500 test images were erroneously predicted. CONCLUSION Excellent performance was found for the proposed model, in which the composition of each patient group reflected the prevalence in real-world clinical practice. The model can help dental clinicians make accurate diagnoses and avoid unnecessary examinations in real clinical settings.
Affiliation(s)
- Eun-Gyu Ha
- Department of Electrical and Electronic Engineering, Yonsei University College of Engineering, Seoul, Republic of Korea
- Kug Jin Jeon
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Chena Lee
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Hak-Sun Kim
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Sang-Sun Han
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea
43
Hou X, Guo P, Wang P, Liu P, Lin DDM, Fan H, Li Y, Wei Z, Lin Z, Jiang D, Jin J, Kelly C, Pillai JJ, Huang J, Pinho MC, Thomas BP, Welch BG, Park DC, Patel VM, Hillis AE, Lu H. Deep-learning-enabled brain hemodynamic mapping using resting-state fMRI. NPJ Digit Med 2023; 6:116. [PMID: 37344684 PMCID: PMC10284915 DOI: 10.1038/s41746-023-00859-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Accepted: 06/09/2023] [Indexed: 06/23/2023] Open
Abstract
Cerebrovascular disease is a leading cause of death globally. Prevention and early intervention are known to be the most effective forms of its management. Non-invasive imaging methods hold great promise for early stratification, but at present lack the sensitivity for personalized prognosis. Resting-state functional magnetic resonance imaging (rs-fMRI), a powerful tool previously used for mapping neural activity, is available in most hospitals. Here we show that rs-fMRI can be used to map cerebral hemodynamic function and delineate impairment. By exploiting time variations in breathing pattern during rs-fMRI, deep learning enables reproducible mapping of cerebrovascular reactivity (CVR) and bolus arrival time (BAT) of the human brain using resting-state CO2 fluctuations as a natural "contrast medium". The deep-learning network is trained with CVR and BAT maps obtained with a reference method of CO2-inhalation MRI, which includes data from young and older healthy subjects and patients with Moyamoya disease and brain tumors. We demonstrate the performance of deep-learning cerebrovascular mapping in the detection of vascular abnormalities, evaluation of revascularization effects, and vascular alterations in normal aging. In addition, cerebrovascular maps obtained with the proposed method exhibit excellent reproducibility in both healthy volunteers and stroke patients. Deep-learning resting-state vascular imaging has the potential to become a useful tool in clinical cerebrovascular imaging.
Affiliation(s)
- Xirui Hou
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Pengfei Guo
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Puyang Wang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Peiying Liu
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Doris D M Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hongli Fan
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Yang Li
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Zhiliang Wei
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA
- Zixuan Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Dengrong Jiang
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jin Jin
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Catherine Kelly
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jay J Pillai
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Judy Huang
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Marco C Pinho
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Binu P Thomas
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Babu G Welch
- Department of Neurologic Surgery, UT Southwestern Medical Center, Dallas, TX, USA
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Denise C Park
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Vishal M Patel
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hanzhang Lu
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA
44
Tao B, Yu X, Wang W, Wang H, Chen X, Wang F, Wu Y. A deep learning-based automatic segmentation of zygomatic bones from cone-beam computed tomography images: A proof of concept. J Dent 2023:104582. [PMID: 37321334 DOI: 10.1016/j.jdent.2023.104582] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 05/28/2023] [Accepted: 06/06/2023] [Indexed: 06/17/2023] Open
Abstract
OBJECTIVES To investigate the efficiency and accuracy of a deep learning-based automatic segmentation method for zygomatic bones from cone-beam computed tomography (CBCT) images. METHODS One hundred thirty CBCT scans were included and randomly divided into three subsets (training, validation, and test) in a 6:2:2 ratio. A deep learning-based model was developed, and it included a classification network and a segmentation network, where an edge supervision module was added to increase the attention of the edges of zygomatic bones. Attention maps were generated by the Grad-CAM and Guided Grad-CAM algorithms to improve the interpretability of the model. The performance of the model was then compared with that of four dentists on 10 CBCT scans from the test dataset. A p value <.05 was considered statistically significant. RESULTS The accuracy of the classification network was 99.64%. The Dice coefficient (Dice) of the deep learning-based model for the test dataset was 92.34 ± 2.04%, the average surface distance (ASD) was 0.1 ± 0.15 mm, and the 95% Hausdorff distance (HD) was 0.98 ± 0.42 mm. The model required 17.03 seconds on average to segment zygomatic bones, whereas this task took 49.3 minutes for dentists to complete. The Dice score of the model for the 10 CBCT scans was 93.2 ± 1.3%, while that of the dentists was 90.37 ± 3.32%. CONCLUSIONS The proposed deep learning-based model could segment zygomatic bones with high accuracy and efficiency compared with those of dentists. CLINICAL SIGNIFICANCE The proposed automatic segmentation model for zygomatic bone could generate an accurate 3D model for the preoperative digital planning of zygoma reconstruction, orbital surgery, zygomatic implant surgery, and orthodontics.
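The surface metrics reported here (average surface distance and 95% Hausdorff distance) are derived from distances between the surface voxels of the predicted and reference masks. A brute-force 2-D NumPy sketch, illustrative only: real pipelines work on 3-D masks with physical voxel spacing and faster distance transforms.

```python
import numpy as np

def surface_voxels(mask):
    """Surface voxels: foreground with at least one 4-connected background neighbor (2-D)."""
    m = np.pad(np.asarray(mask, dtype=bool), 1)
    core = m[1:-1, 1:-1]
    interior = m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]
    return np.argwhere(core & ~interior)

def asd_hd95(a, b, spacing=1.0):
    """Average symmetric surface distance and 95th-percentile Hausdorff distance."""
    sa = surface_voxels(a) * spacing
    sb = surface_voxels(b) * spacing
    # brute-force pairwise distances between the two surface point sets
    dmat = np.linalg.norm(sa[:, None, :] - sb[None, :, :], axis=-1)
    d = np.concatenate([dmat.min(axis=1), dmat.min(axis=0)])
    return float(d.mean()), float(np.percentile(d, 95))

# Identical masks -> both distances are zero
m = np.zeros((7, 7), dtype=bool)
m[1:6, 1:6] = True
asd, hd95 = asd_hd95(m, m)  # (0.0, 0.0)
```

Using the 95th percentile rather than the maximum makes the Hausdorff distance robust to a few outlier surface points, which is why HD95 is the variant usually reported.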
Affiliation(s)
- Baoxin Tao
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xinbo Yu
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Wenying Wang
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Haowei Wang
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Room 805, Dongchuan Road 800, Minhang District, Shanghai, 200240, China
- Feng Wang
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
- Yiqun Wu
- Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology, Shanghai, China
45
Kim DY, Woo S, Roh JY, Choi JY, Kim KA, Cha JY, Kim N, Kim SJ. Subregional pharyngeal changes after orthognathic surgery in skeletal Class III patients analyzed by convolutional neural networks-based segmentation. J Dent 2023:104565. [PMID: 37308053 DOI: 10.1016/j.jdent.2023.104565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 05/03/2023] [Accepted: 05/27/2023] [Indexed: 06/14/2023] Open
Abstract
OBJECTIVES To evaluate the accuracy of fully automatic segmentation of pharyngeal volumes of interest (VOIs) before and after orthognathic surgery in skeletal Class III patients using a convolutional neural network (CNN) model and to investigate the clinical applicability of artificial intelligence for quantitative evaluation of treatment changes in pharyngeal VOIs. METHODS 310 cone-beam computed tomography (CBCT) images were divided into a training set (n=150), validation set (n=40), and test set (n=120). The test datasets comprised matched pairs of pre- and posttreatment images of 60 skeletal Class III patients (mean age 23.1±5.0 years; ANB<-2⁰) who underwent bimaxillary orthognathic surgery with orthodontic treatment. A 3D U-Net CNN model was applied for fully automatic segmentation and measurement of subregional pharyngeal volumes of pretreatment (T0) and posttreatment (T1) scans. The model's accuracy was compared to semi-automatic segmentation outcomes by humans using the dice similarity coefficient (DSC) and volume similarity (VS). The correlation between surgical skeletal changes and model accuracy was obtained. RESULTS The proposed model achieved high performance of subregional pharyngeal segmentation on both T0 and T1 images, with a significant T1-T0 difference in DSC only in the nasopharynx. Region-specific differences among pharyngeal VOIs, which were observed at T0, disappeared on the T1 images. The decreased DSC of nasopharyngeal segmentation after treatment was weakly correlated with the amount of maxillary advancement. There was no correlation between the mandibular setback amount and model accuracy. CONCLUSIONS The proposed model offers fast and accurate subregional pharyngeal segmentation on both pretreatment and posttreatment CBCT images in skeletal Class III patients. CLINICAL SIGNIFICANCE We elucidated the clinical applicability of the CNN model to quantitatively evaluate subregional pharyngeal changes after surgical-orthodontic treatment, which offers a basis for developing a fully integrated multiclass CNN model to predict pharyngeal responses after dentoskeletal treatments.
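Volume similarity (VS), used here alongside DSC, is commonly defined as 1 - |Va - Vb| / (Va + Vb); a sketch under that assumed definition (the paper's exact formula is not given in the abstract):

```python
import numpy as np

def volume_similarity(a, b):
    """VS = 1 - |Va - Vb| / (Va + Vb); 1.0 means equal volumes.
    Unlike DSC, VS compares voxel counts only and ignores spatial overlap."""
    va = int(np.count_nonzero(a))
    vb = int(np.count_nonzero(b))
    return 1.0 - abs(va - vb) / (va + vb)

# Hypothetical masks with 60 vs 40 foreground voxels
a = np.zeros(100, dtype=bool); a[:60] = True
b = np.zeros(100, dtype=bool); b[:40] = True
vs = volume_similarity(a, b)  # 1 - 20/100 = 0.8
```

Reporting VS next to DSC is useful precisely because two masks can have identical volumes (VS = 1.0) while overlapping poorly, so the two metrics fail in different ways.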
Affiliation(s)
- Dong-Yul Kim
- Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Seoyeon Woo
- Department of Convergence Medicine, Asan Medical Institute of Convergence, Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil Songpa-Gu, Seoul, 05505, Republic of Korea
- Jae-Yon Roh
- Department of Dentistry, Graduate School, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jin-Young Choi
- Department of Orthodontics, Kyung Hee University Dental Hospital, 23, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Kyung-A Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
- Jung-Yul Cha
- Department of Orthodontics, The Institute of Craniofacial Deformity, College of Dentistry, Yonsei University, 50-1 Yonseiro, Seodaemun-gu, Seoul, 03722, Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Su-Jung Kim
- Department of Orthodontics, School of Dentistry, Kyung Hee University, 26, Kyungheedae-ro, Dongdaemun-gu, Seoul, 02447, Republic of Korea
46
Abesi F, Maleki M, Zamani M. Diagnostic performance of artificial intelligence using cone-beam computed tomography imaging of the oral and maxillofacial region: A scoping review and meta-analysis. Imaging Sci Dent 2023; 53:101-108. [PMID: 37405196 PMCID: PMC10315225 DOI: 10.5624/isd.20220224] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 02/13/2023] [Accepted: 02/22/2023] [Indexed: 04/12/2024] Open
Abstract
PURPOSE The aim of this study was to conduct a scoping review and meta-analysis to provide overall estimates of the recall and precision of artificial intelligence for detection and segmentation using oral and maxillofacial cone-beam computed tomography (CBCT) scans. MATERIALS AND METHODS A literature search was done in Embase, PubMed, and Scopus through October 31, 2022 to identify studies that reported the recall and precision values of artificial intelligence systems using oral and maxillofacial CBCT images for the automatic detection or segmentation of anatomical landmarks or pathological lesions. Recall (sensitivity) indicates the percentage of certain structures that are correctly detected. Precision (positive predictive value) indicates the percentage of accurately identified structures out of all detected structures. The performance values were extracted and pooled, and the estimates were presented with 95% confidence intervals (CIs). RESULTS In total, 12 eligible studies were finally included. The overall pooled recall for artificial intelligence was 0.91 (95% CI: 0.87-0.94). In a subgroup analysis, the pooled recall was 0.88 (95% CI: 0.77-0.94) for detection and 0.92 (95% CI: 0.87-0.96) for segmentation. The overall pooled precision for artificial intelligence was 0.93 (95% CI: 0.88-0.95). A subgroup analysis showed that the pooled precision value was 0.90 (95% CI: 0.77-0.96) for detection and 0.94 (95% CI: 0.89-0.97) for segmentation. CONCLUSION Excellent performance was found for artificial intelligence using oral and maxillofacial CBCT images.
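Pooled estimates like the recall and precision values above are typically obtained by inverse-variance weighting of study-level proportions. A generic fixed-effect sketch on the logit scale follows; the review's actual meta-analytic model (e.g. a random-effects model) is not stated in the abstract and may differ.

```python
import math

def pool_proportions(studies):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    studies: list of (events, total) pairs. Generic sketch, not the review's code."""
    wsum = wysum = 0.0
    for k, n in studies:
        p = k / n
        y = math.log(p / (1 - p))        # logit transform
        var = 1.0 / (n * p * (1 - p))    # delta-method variance of the logit
        w = 1.0 / var
        wsum += w
        wysum += w * y
    pooled = wysum / wsum
    se = math.sqrt(1.0 / wsum)
    expit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return expit(pooled), (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))

# Three hypothetical studies all reporting recall 0.9
r, ci = pool_proportions([(90, 100), (45, 50), (180, 200)])
```

Pooling on the logit scale keeps the back-transformed estimate and its confidence interval inside [0, 1], which matters for proportions near 1 such as the 0.91-0.94 values reported here.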
Affiliation(s)
- Farida Abesi
- Department of Oral and Maxillofacial Radiology, Dental Faculty, Babol University of Medical Sciences, Babol, Iran
- Mahla Maleki
- Student Research Committee, Babol University of Medical Sciences, Babol, Iran
- Mohammad Zamani
- Student Research Committee, Babol University of Medical Sciences, Babol, Iran
47
Polizzi A, Quinzi V, Ronsivalle V, Venezia P, Santonocito S, Lo Giudice A, Leonardi R, Isola G. Tooth automatic segmentation from CBCT images: a systematic review. Clin Oral Investig 2023:10.1007/s00784-023-05048-5. [PMID: 37148371 DOI: 10.1007/s00784-023-05048-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Accepted: 04/26/2023] [Indexed: 05/08/2023]
Abstract
OBJECTIVES To describe the current state of the art regarding technological advances in fully automatic tooth segmentation approaches from 3D cone-beam computed tomography (CBCT) images. MATERIALS AND METHODS In March 2023, a search strategy without a timeline setting was carried out through a combination of MeSH terms and free text words pooled through Boolean operators ('AND', 'OR') on the following databases: PubMed, Scopus, Web of Science and IEEE Explore. Randomized and non-randomized controlled trials, cohort, case-control, cross-sectional and retrospective studies in the English language only were included. RESULTS The search strategy identified 541 articles, of which 23 were selected. The most frequently employed segmentation methods were based on deep learning approaches. One article described an automatic approach for tooth segmentation based on a watershed algorithm and another article used an improved level set method. Four studies presented classical machine learning and thresholding approaches. The most frequently employed metric for evaluating segmentation performance was the Dice similarity index, which ranged from 90 ± 3% to 97.9 ± 1.5%. CONCLUSIONS Thresholding appeared unreliable for tooth segmentation from CBCT images, whereas convolutional neural networks (CNNs) have been shown to be the most promising approach. CNNs could help overcome the main limitations of tooth segmentation from CBCT images related to root anatomy, heavy scattering, immature teeth, metal artifacts and time consumption. New studies with uniform protocols and evaluation metrics, with random sampling and blinding for data analysis, are encouraged to objectively compare the reliability of the different deep learning architectures. CLINICAL RELEVANCE The best automatic tooth segmentation performance has been obtained with CNNs across the different domains of digital dentistry.
Affiliation(s)
- Alessandro Polizzi
- Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy.
- Department of Life, Health & Environmental Sciences, Postgraduate School of Orthodontics, University of L'Aquila, 67100, L'Aquila, Italy.
- Vincenzo Quinzi
- Department of Life, Health & Environmental Sciences, Postgraduate School of Orthodontics, University of L'Aquila, 67100, L'Aquila, Italy
- Vincenzo Ronsivalle
- Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy
- Pietro Venezia
- Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy
- Simona Santonocito
- Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy
- Antonino Lo Giudice
- Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy
- Rosalia Leonardi
- Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy
- Gaetano Isola
- Department of General Surgery and Surgical-Medical Specialties, School of Dentistry, University of Catania, AOU "Policlinico-San Marco", Via S. Sofia 78, 95124, Catania, Italy
48
Gardiyanoğlu E, Ünsal G, Akkaya N, Aksoy S, Orhan K. Automatic Segmentation of Teeth, Crown-Bridge Restorations, Dental Implants, Restorative Fillings, Dental Caries, Residual Roots, and Root Canal Fillings on Orthopantomographs: Convenience and Pitfalls. Diagnostics (Basel) 2023; 13:diagnostics13081487. [PMID: 37189586 DOI: 10.3390/diagnostics13081487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 02/26/2023] [Accepted: 03/01/2023] [Indexed: 05/17/2023] Open
Abstract
BACKGROUND The aim of our study is to provide successful automatic segmentation of various objects on orthopantomographs (OPGs). METHODS 8138 OPGs obtained from the archives of the Department of Dentomaxillofacial Radiology were included. OPGs were converted into PNGs and transferred to the segmentation tool's database. All teeth, crown-bridge restorations, dental implants, composite-amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by two experts with the manual drawing semantic segmentation technique. RESULTS The intra-class correlation coefficient (ICC) for both inter- and intra-observer manual segmentation was excellent (ICC > 0.75): the intra-observer ICC was 0.994 and the inter-observer ICC was 0.989, with no significant difference between observers (p = 0.947). The calculated DSC and accuracy values across all OPGs were 0.85 and 0.95 for tooth segmentation, 0.88 and 0.99 for dental caries, 0.87 and 0.99 for dental restorations, 0.93 and 0.99 for crown-bridge restorations, 0.94 and 0.99 for dental implants, 0.78 and 0.99 for root canal fillings, and 0.78 and 0.99 for residual roots, respectively. CONCLUSIONS With faster, automated diagnosis on 2D as well as 3D dental images, dentists can reach diagnoses at higher rates and in less time, even without excluding difficult cases.
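The DSC values reported above compare each automatic mask against the experts' manual segmentation. As an illustration of the metric only (not code from the study), a minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 4x4 masks: prediction covers 4 pixels, ground truth 6, overlap 4
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:4] = True
print(dice_coefficient(pred, truth))  # 2*4 / (4+6) = 0.8
```

A DSC of 0.85 for tooth segmentation therefore means the automatic and manual masks overlap on roughly 85% of their combined area.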
Affiliation(s)
- Emel Gardiyanoğlu
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- DESAM Institute, Near East University, 99138 Nicosia, Cyprus
- Nurullah Akkaya
- Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, 99138 Nicosia, Cyprus
- Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06560 Ankara, Turkey
49
Wang Y, Xia W, Yan Z, Zhao L, Bian X, Liu C, Qi Z, Zhang S, Tang Z. Root canal treatment planning by automatic tooth and root canal segmentation in dental CBCT with deep multi-task feature learning. Med Image Anal 2023; 85:102750. [PMID: 36682153 DOI: 10.1016/j.media.2023.102750] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Revised: 10/16/2022] [Accepted: 01/10/2023] [Indexed: 01/21/2023]
Abstract
Accurate and automatic segmentation of individual teeth and root canals from cone-beam computed tomography (CBCT) images is an essential but challenging step in dental surgical planning. In this paper, we propose a novel framework, which consists of two neural networks, DentalNet and PulpNet, for efficient, precise, and fully automatic tooth instance segmentation and root canal segmentation from CBCT images. We first use the proposed DentalNet to achieve tooth instance segmentation and identification. Then, the region of interest (ROI) of the affected tooth is extracted and fed into the PulpNet to obtain precise segmentation of the pulp chamber and the root canal space. These two networks are trained by multi-task feature learning, evaluated on two clinical datasets respectively, and achieve performance superior to several competing methods. In addition, we incorporate our method into an efficient clinical workflow to improve the surgical planning process. In two clinical case studies, our workflow took only 2 min instead of 6 h to obtain the 3D model of the tooth and root canal for surgical planning, resulting in satisfactory outcomes in difficult root canal treatments.
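The hand-off between the two networks, cropping an ROI around the detected tooth before the fine pulp/root-canal segmentation, can be sketched roughly as follows. This is a generic coarse-to-fine illustration, not the authors' code; `margin` is an assumed padding parameter:

```python
import numpy as np

def crop_roi(volume, instance_mask, margin=2):
    """Crop a 3D volume to the bounding box of one tooth's binary mask,
    padded by `margin` voxels, mimicking a coarse-to-fine hand-off."""
    coords = np.argwhere(instance_mask)          # voxel coordinates of the tooth
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    return volume[tuple(slice(l, h) for l, h in zip(lo, hi))]

# A 10^3 scan with a 2x2x2 "tooth": the padded crop is 6x6x6
scan = np.random.rand(10, 10, 10)
mask = np.zeros((10, 10, 10), dtype=bool)
mask[4:6, 4:6, 4:6] = True
print(crop_roi(scan, mask).shape)  # (6, 6, 6)
```

Working on a tight crop rather than the whole scan is what lets the second-stage network resolve fine structures such as the root canal space at high resolution.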
Affiliation(s)
- Yiwei Wang
- Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Wenjun Xia
- Shanghai Xuhui District Dental Center, Shanghai 200031, China
- Zhennan Yan
- SenseBrain Technology, Princeton, NJ 08540, USA
- Liang Zhao
- SenseTime Research, Shanghai 200233, China
- Xiaohe Bian
- Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Chang Liu
- SenseTime Research, Shanghai 200233, China
- Zhengnan Qi
- Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Shaoting Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China; Centre for Perceptual and Interactive Intelligence (CPII), Hong Kong Special Administrative Region of China
- Zisheng Tang
- Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
50
Al-Ekrish A, Hussain SA, ElGibreen H, Almurshed R, Alhusain L, Hörmann R, Widmann G. Prediction of the as Low as Diagnostically Acceptable CT Dose for Identification of the Inferior Alveolar Canal Using 3D Convolutional Neural Networks with Multi-Balancing Strategies. Diagnostics (Basel) 2023; 13:1220. [PMID: 37046438 PMCID: PMC10093627 DOI: 10.3390/diagnostics13071220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2023] [Revised: 03/14/2023] [Accepted: 03/22/2023] [Indexed: 04/14/2023] Open
Abstract
Ionizing radiation is necessary for diagnostic imaging, and choosing the right radiation dose is critical to obtaining an image of acceptable quality. However, increasing the dose to improve image quality carries risks due to the potential harm from ionizing radiation. Thus, finding the optimal as-low-as-diagnostically-acceptable (ALADA) dose is an open research problem that has yet to be tackled using artificial intelligence (AI) methods. This paper proposes a new multi-balancing 3D convolutional neural network methodology to build 3D multidetector computed tomography (MDCT) datasets and develop a 3D classifier model that works directly with 3D CT scan images and balances itself over heavily unbalanced multi-class data. The proposed models were exhaustively investigated through eighteen empirical experiments and three re-runs for clinical expert examination. As a result, it was possible to confirm that the proposed models improved accuracy by 5% to 10% compared to the baseline method. Furthermore, the resulting models were found to be consistent, and thus possibly applicable to different MDCT examinations and reconstruction techniques. The outcome of this paper can help radiologists predict the suitability of CT doses across different CT hardware devices and reconstruction algorithms. Moreover, the developed model is suitable for clinical application, where the right dose needs to be predicted from numerous MDCT examinations using a certain MDCT device and reconstruction technique.
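One common way to handle the heavy multi-class imbalance mentioned above is to weight the loss inversely to class frequency. The following is a generic illustration of that idea, not the paper's specific balancing strategies:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Per-class loss weights w_c = n_samples / (n_classes * n_c),
    so rarer classes contribute proportionally more to the loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * n_c) for c, n_c in counts.items()}

# 3 samples of class 0 vs 1 of class 1: the rare class gets 3x the weight
weights = balanced_class_weights([0, 0, 0, 1])
print(weights)  # class 1 weight is 2.0, class 0 is ~0.67
```

With these weights plugged into a weighted cross-entropy loss, a classifier is discouraged from simply predicting the majority dose class on every scan.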
Affiliation(s)
- Asma'a Al-Ekrish
- Department of Oral Medicine and Diagnostic Sciences, College of Dentistry, King Saud University, Riyadh 11545, Saudi Arabia
- Syed Azhar Hussain
- Department of Computer Science, Munster Technological University, Rossa Ave, Bishopstown, T12 P928 Cork, Ireland
- Hebah ElGibreen
- Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Artificial Intelligence Center of Advanced Studies (Thakaa), King Saud University, Riyadh 145111, Saudi Arabia
- Rana Almurshed
- Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Luluah Alhusain
- Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Romed Hörmann
- Division of Clinical and Functional Anatomy, Medical University of Innsbruck, Müllerstrasse 59, 6020 Innsbruck, Austria
- Gerlig Widmann
- Department of Radiology, Medical University of Innsbruck, Anichstr. 35, 6020 Innsbruck, Austria