1. Wang X, Alqahtani KA, Van den Bogaert T, Shujaat S, Jacobs R, Shaheen E. Convolutional neural network for automated tooth segmentation on intraoral scans. BMC Oral Health 2024;24:804. PMID: 39014389; PMCID: PMC11250967; DOI: 10.1186/s12903-024-04582-2.
Abstract
BACKGROUND: Tooth segmentation on intraoral scan (IOS) data is a prerequisite for clinical applications in digital workflows. Current state-of-the-art methods lack the robustness to handle variability in dental conditions. This study aims to propose and evaluate the performance of a convolutional neural network (CNN) model for automatic tooth segmentation on IOS images. METHODS: A dataset of 761 IOS images (380 upper jaws, 381 lower jaws) was acquired using an intraoral scanner. Inclusion criteria were a full set of permanent teeth, teeth with orthodontic brackets, and partially edentulous dentition. A multi-step 3D U-Net pipeline was designed for automated tooth segmentation on IOS images. The model's performance was assessed in terms of time and accuracy. Additionally, the model was deployed on an online cloud-based platform, where a separate subsample of 18 IOS images was used to test its clinical applicability by comparing three modes of segmentation: automated artificial intelligence-driven (A-AI), refined (R-AI), and semi-automatic (SA) segmentation. RESULTS: The average time for automated segmentation was 31.7 ± 8.1 s per jaw. The CNN model achieved an Intersection over Union (IoU) score of 91%, with the full set of teeth achieving the highest performance and the partially edentulous group scoring the lowest. In terms of clinical applicability, SA took an average of 860.4 s per case, whereas R-AI showed a 2.6-fold decrease in time (328.5 s). Furthermore, R-AI offered higher performance and reliability than SA, regardless of the dentition group. CONCLUSIONS: The 3D U-Net pipeline was accurate, efficient, and consistent for automatic tooth segmentation on IOS images. The online cloud-based platform could serve as a viable alternative for IOS segmentation.
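As a point of reference for the IoU figure reported above, the snippet below is a minimal sketch of how per-tooth Intersection over Union can be computed from predicted versus ground-truth per-face labels. The label convention (0 for gingiva, FDI-style codes for teeth) and the NumPy-based setup are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch: per-tooth IoU between predicted and ground-truth face labels.
# The label convention (0 = gingiva, FDI-style tooth codes otherwise) is an
# assumption for illustration only.
import numpy as np

def per_label_iou(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Return {label: IoU} over all labels present in the ground truth."""
    ious = {}
    for label in np.unique(gt):
        p, g = pred == label, gt == label
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # label absent from both prediction and ground truth
        ious[int(label)] = np.logical_and(p, g).sum() / union
    return ious

# Toy example with per-face labels (0 = gingiva, 11/21 = central incisors).
gt = np.array([0, 0, 11, 11, 21, 21, 21])
pred = np.array([0, 11, 11, 11, 21, 21, 0])
print(per_label_iou(pred, gt))  # {0: 0.33..., 11: 0.66..., 21: 0.66...}
```

A jaw-level score such as the 91% reported above could then be obtained by averaging these per-label values, although the exact aggregation used in the study may differ.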
Affiliation(s)
- Xiaotong Wang
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
  - Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital of Harbin Medical University, Youzheng Street 23, Nangang, Harbin, 150001, China
- Khalid Ayidh Alqahtani
  - Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
- Tom Van den Bogaert
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Sohaib Shujaat
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
  - King Abdullah International Medical Research Center, Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Ministry of National Guard Health Affairs, Riyadh, 14611, Saudi Arabia
- Reinhilde Jacobs
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
  - Department of Oral and Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
  - Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
- Eman Shaheen
  - OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Kapucijnenvoer 33, Leuven, 3000, Belgium
  - Department of Dental Medicine, Karolinska Institutet, Solnavägen 1, 171 77 Stockholm, Sweden
2. Leclercq M, Ruellas A, Gurgel M, Yatabe M, Bianchi J, Cevidanes L, Styner M, Paniagua B, Prieto JC. DentalModelSeg: fully automated segmentation of upper and lower 3D intra-oral surfaces. Proc IEEE Int Symp Biomed Imaging 2023. PMID: 38505097; PMCID: PMC10949221; DOI: 10.1109/isbi53787.2023.10230397.
Abstract
In this paper, we present a deep learning-based method for surface segmentation. The technique consists of acquiring 2D views of the surface and extracting features such as the normal vectors. The rendered images are analyzed with a 2D convolutional neural network, such as a U-Net. We test our method in a dental application for the segmentation of dental crowns. The neural network is trained for multi-class segmentation, using image labels as ground truth. A 5-fold cross-validation was performed, and the segmentation task achieved an average Dice of 0.97, sensitivity of 0.98, and precision of 0.98. Our method and algorithms are available as a 3D Slicer extension.
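The core of the method is rendering surface features into 2D images that a U-Net can consume. The sketch below illustrates that idea with a deliberately simplified orthographic vertex splat of per-vertex normals; the resolution, viewing axis, and splatting strategy are assumptions, and a real pipeline would rasterize triangles with a proper renderer.

```python
# Minimal sketch of the "render surface features to 2D" idea: an orthographic
# splat of per-vertex normals into an RGB-like image, with a z-buffer so only
# the vertex closest to the camera wins at each pixel. Resolution and viewing
# axis are illustrative assumptions.
import numpy as np

def splat_normal_map(verts, normals, res=256):
    """verts, normals: (N, 3) arrays. View along +Z; returns (res, res, 3) image."""
    xy = verts[:, :2]
    lo, hi = xy.min(0), xy.max(0)
    px = ((xy - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int)
    img = np.zeros((res, res, 3))
    zbuf = np.full((res, res), -np.inf)
    for (x, y), z, n in zip(px, verts[:, 2], normals):
        if z > zbuf[y, x]:             # keep the vertex closest to the camera
            zbuf[y, x] = z
            img[y, x] = 0.5 * (n + 1)  # map normals from [-1, 1] to [0, 1]
    return img

# Such feature images, paired with matching label renderings, would feed the 2D network.
```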
Affiliation(s)
- Martin Styner
  - University of North Carolina, Chapel Hill, United States
3. Liu Z, He X, Wang H, Xiong H, Zhang Y, Wang G, Hao J, Feng Y, Zhu F, Hu H. Hierarchical self-supervised learning for 3D tooth segmentation in intra-oral mesh scans. IEEE Trans Med Imaging 2023;42:467-480. PMID: 36378797; DOI: 10.1109/tmi.2022.3222388.
Abstract
Accurately delineating individual teeth and the gingiva in three-dimensional (3D) intraoral scan (IOS) mesh data plays a pivotal role in many digital dental applications, e.g., orthodontics. Recent research shows that deep learning-based methods can achieve promising results for 3D tooth segmentation; however, most of them rely on high-quality labeled datasets, which are usually small in scale because annotating IOS meshes requires intensive human effort. In this paper, we propose a novel self-supervised learning framework, named STSNet, to boost the performance of 3D tooth segmentation by leveraging large-scale unlabeled IOS data. The framework follows two-stage training, i.e., pre-training and fine-tuning. In pre-training, contrastive losses at three hierarchical levels, i.e., point level, region level, and cross level, are proposed for unsupervised representation learning on a set of predefined matched points from different augmented views. The pre-trained segmentation backbone is further fine-tuned in a supervised manner with a small number of labeled IOS meshes. With the same amount of annotated samples, our method achieves an mIoU of 89.88%, significantly outperforming the supervised counterparts. The performance gain becomes more remarkable when only a small number of labeled samples are available. Furthermore, STSNet achieves better performance with only 40% of the annotated samples compared to the fully supervised baselines. To the best of our knowledge, this is the first attempt at unsupervised pre-training for 3D tooth segmentation, demonstrating its strong potential for reducing the human effort required for annotation and verification.
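For readers unfamiliar with the point-level term of such contrastive pre-training, the following is a minimal InfoNCE-style sketch on matched point embeddings from two augmented views. The temperature value and the assumption that row i of each view corresponds to the same surface point are illustrative; the paper's region-level and cross-level losses are not reproduced here.

```python
# Minimal sketch of a point-level contrastive (InfoNCE-style) loss on matched
# point embeddings from two augmented views of the same mesh. Diagonal pairs
# are positives; all other pairs in the batch act as negatives.
import torch
import torch.nn.functional as F

def point_info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.07):
    """z1, z2: (N, D) embeddings of N matched points from two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)  # pull matched points together

loss = point_info_nce(torch.randn(128, 64), torch.randn(128, 64))
```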
4. Hao J, Liao W, Zhang YL, Peng J, Zhao Z, Chen Z, Zhou BW, Feng Y, Fang B, Liu ZZ, Zhao ZH. Toward clinically applicable 3-dimensional tooth segmentation via deep learning. J Dent Res 2021;101:304-311. PMID: 34719980; DOI: 10.1177/00220345211040459.
Abstract
Digital dentistry plays a pivotal role in dental health care. A critical step in many digital dental systems is to accurately delineate individual teeth and the gingiva in 3-dimensional intraoral scan mesh data. However, previous state-of-the-art methods are either time-consuming or error-prone, hindering their clinical applicability. This article presents an accurate, efficient, and fully automated deep learning model trained on a data set of 4,000 intraoral scans annotated by experienced human experts. On a holdout data set of 200 scans, our model achieves a per-face accuracy, average-area accuracy, and area under the receiver operating characteristic curve of 96.94%, 98.26%, and 0.9991, respectively, significantly outperforming the state-of-the-art baselines. In addition, our model takes only about 24 s to generate segmentation outputs, as opposed to >5 min for the baseline and 15 min for human experts. A clinical performance test on 500 patients with malocclusion and/or abnormal teeth shows that 96.9% of the segmentations are satisfactory for clinical applications, 2.9% automatically trigger alarms for human improvement, and only 0.2% need rework. Our research demonstrates the potential of deep learning to improve the efficacy and efficiency of dental treatment and digital dentistry.
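The abstract reports both a per-face accuracy and an average-area accuracy. One plausible reading of the latter is an area-weighted per-face accuracy; the sketch below computes both under that assumption, with triangle areas derived from the mesh geometry (the exact definition used in the paper may differ).

```python
# Minimal sketch of two mesh-level accuracy notions: plain per-face accuracy and
# an area-weighted variant. "Area-weighted" is an assumed interpretation of the
# paper's "average-area accuracy", not a definition taken from it.
import numpy as np

def face_accuracies(pred, gt, verts, faces):
    """pred, gt: (F,) face labels; verts: (V, 3) positions; faces: (F, 3) vertex indices."""
    correct = pred == gt
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)  # triangle areas
    per_face = correct.mean()
    area_weighted = (area * correct).sum() / area.sum()
    return per_face, area_weighted
```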
Affiliation(s)
- J Hao
  - State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China
  - Harvard School of Dental Medicine, Harvard University, Boston, MA, USA
- W Liao
  - State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Y L Zhang
  - State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China
- J Peng
  - DeepAlign Tech Inc., Ningbo, China
- Z Zhao
  - DeepAlign Tech Inc., Ningbo, China
- Z Chen
  - DeepAlign Tech Inc., Ningbo, China
- B W Zhou
  - Angelalign Research Institute, Angel Align Inc., Shanghai, China
- Y Feng
  - Angelalign Research Institute, Angel Align Inc., Shanghai, China
- B Fang
  - Ninth People's Hospital Affiliated to Shanghai Jiao Tong University, Shanghai Research Institute of Stomatology, National Clinical Research Center of Stomatology, Shanghai, China
- Z Z Liu
  - Zhejiang University-University of Illinois at Urbana-Champaign Institute, Zhejiang University, Haining, China
- Z H Zhao
  - State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases and West China Hospital of Stomatology, Sichuan University, Chengdu, China
5. Brosset S, Dumont M, Cevidanes L, Soroushmehr R, Bianchi J, Gurgel M, Deleat-Besson R, Le C, Ruellas A, Yatabe M, Junior CC, Gomes L, Goncalves J, Najarian K, Gryak J, Styner M, Paniagua B, Prieto JC. Web infrastructure for data management, storage and computation. Proc SPIE Int Soc Opt Eng 2021;11600:116001N. PMID: 33814672; PMCID: PMC8015809; DOI: 10.1117/12.2582283.
Abstract
The Data Storage for Computation and Integration (DSCI) platform introduces management innovations for secure web-based data storage, algorithm deployment, and task execution. Its architecture allows the inclusion of plugins for upload, browsing, sharing, and task execution in remote computing grids. Here, we demonstrate the DSCI implementation and the deployment of image processing tools (TMJSeg), machine learning algorithms (MandSeg, DentalModelSeg), and advanced statistical packages (Multivariate Functional Shape Data Analysis, MFSDA), with data transfer and task execution handled by the clusterpost plug-in. Owing to its comprehensive web-based design, local software installation is no longer required. The DSCI aims to enable and maintain a distributed computing and collaboration environment across multi-site clinical centers for the processing of multisource features such as clinical data, biological markers, volumetric images, and 3D surface models, with particular emphasis on analytics for temporomandibular joint osteoarthritis (TMJ OA).
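To make the "remote task execution" idea concrete, the sketch below shows a generic job submission to a web compute service over HTTP. The endpoint path, form fields, bearer token, and response shape are hypothetical placeholders; they do not describe the actual DSCI or clusterpost API.

```python
# Purely illustrative sketch of submitting a remote segmentation task to a
# web-based compute service. The /tasks endpoint, the "tool" form field, and the
# "task_id" response key are hypothetical placeholders, not the real DSCI API.
import requests

def submit_job(server: str, token: str, scan_path: str) -> str:
    with open(scan_path, "rb") as f:
        resp = requests.post(
            f"{server}/tasks",                      # hypothetical endpoint
            headers={"Authorization": f"Bearer {token}"},
            files={"input": f},                     # upload the surface scan
            data={"tool": "DentalModelSeg"},        # requested tool, as a form field
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()["task_id"]                   # assumed response shape
```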
6. Boumbolo L, Dumont M, Brosset S, Bianchi J, Ruellas A, Gurgel M, Massaro C, Del Castillo AA, Ioshida M, Yatabe M, Benavides E, Rios H, Soki F, Neiva G, Paniagua B, Cevidanes L, Styner M, Prieto JC. FlyBy CNN: a 3D surface segmentation framework. Proc SPIE Int Soc Opt Eng 2021;11596:115962B. PMID: 33758460; PMCID: PMC7983301; DOI: 10.1117/12.2582205.
Abstract
In this paper, we present FlyBy CNN, a novel deep learning-based approach for 3D shape segmentation. FlyByCNN consists of sampling the surface of the 3D object from different viewpoints and extracting surface features such as the normal vectors. The generated 2D images are then analyzed via 2D convolutional neural networks such as RUNETs. We test our framework in a dental application for segmentation of intra-oral surfaces. The RUNET is trained for the segmentation task using image pairs of surface features and image labels as ground truth. The resulting labels from each segmented image are mapped back onto the surface via our sampling approach, which yields a one-to-one correspondence between image pixels and triangles in the surface model. The segmentation task achieved an accuracy of 0.9.
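The final step described above, putting per-pixel labels back onto the surface, can be pictured with the sketch below: given a rendered face-index buffer for each view (pixel to triangle id, with -1 for background) and the network's per-pixel labels, each triangle takes the majority label across all views. The face-index buffer is assumed to come from the rendering step, and the voting scheme is an illustrative simplification.

```python
# Minimal sketch of mapping per-pixel labels back to mesh triangles by majority
# vote over all rendered views. Inputs are assumed: face_id_buffers are 2D int
# arrays (pixel -> triangle id, -1 = background), label_images are the matching
# per-pixel predictions.
import numpy as np
from collections import Counter, defaultdict

def labels_to_faces(face_id_buffers, label_images, n_faces):
    votes = defaultdict(Counter)
    for face_ids, labels in zip(face_id_buffers, label_images):
        mask = face_ids >= 0
        for fid, lab in zip(face_ids[mask], labels[mask]):
            votes[int(fid)][int(lab)] += 1
    face_labels = np.zeros(n_faces, dtype=int)
    for fid, counter in votes.items():
        face_labels[fid] = counter.most_common(1)[0][0]  # majority label
    return face_labels
```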
Affiliation(s)
- Louis Boumbolo
  - University of North Carolina, Chapel Hill, United States
- E Benavides
  - University of Michigan, Ann Arbor, United States
- Hector Rios
  - University of Michigan, Ann Arbor, United States
- Fabiana Soki
  - University of Michigan, Ann Arbor, United States
- Gisele Neiva
  - University of Michigan, Ann Arbor, United States
- Martin Styner
  - University of North Carolina, Chapel Hill, United States
- Juan C Prieto
  - University of North Carolina, Chapel Hill, United States
7. Zhao Y, Li P, Gao C, Liu Y, Chen Q, Yang F, Meng D. TSASNet: tooth segmentation on dental panoramic X-ray images by a two-stage attention segmentation network. Knowl Based Syst 2020. DOI: 10.1016/j.knosys.2020.106338.
8. Yuan T, Wang Y, Hou Z, Wang J. Tooth segmentation and gingival tissue deformation framework for 3D orthodontic treatment planning and evaluating. Med Biol Eng Comput 2020;58:2271-2290. PMID: 32700290; DOI: 10.1007/s11517-020-02230-9.
Abstract
In this study, we propose an integrated tooth segmentation and gingival tissue deformation simulation framework for designing and evaluating orthodontic treatment plans, especially those using invisible aligners. First, the bio-characteristic information of the digital impression is analyzed quantitatively and visualized. With the derived information, the transitional regions between teeth and between teeth and gingiva are extracted as the solution domain for the segmentation boundaries. A boundary detection approach is then proposed for tooth segmentation and region division of the digital impression. After tooth segmentation, we propose a deformation simulation framework driven by an energy function based on the biological deformation properties of gingival tissues. The correctness and applicability of the proposed segmentation and gingival tissue deformation simulation framework are demonstrated with typical cases and qualitative analysis. Experimental results show that the segmentation boundaries calculated by the proposed method are accurate and that local details of the digital impression under study are well preserved during deformation simulation. Qualitative analysis of the gingival tissues' surface area and volume variations indicates that the proposed gingival tissue deformation simulation framework is consistent with clinical gingival deformation characteristics and can be used to predict the rationality of a treatment plan through both visual inspection and numerical simulation. The proposed tooth segmentation and gingival tissue deformation simulation framework is shown to be effective and practical, although accurate quantitative analysis based on clinical results remains an open problem in this study. Combined with tooth rearrangement steps, it can be used to design orthodontic treatment plans and to output data for the production of invisible aligners.
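As a stand-in for the energy-driven gingival deformation described above, the sketch below performs simple Laplacian relaxation, pulling each free gingiva vertex toward the average of its neighbours while boundary vertices stay fixed. It only illustrates the iterative, energy-minimizing flavour of such a simulation; the paper's actual energy function based on gingival biomechanics is not reproduced.

```python
# Illustrative stand-in for an energy-driven mesh deformation: Laplacian
# relaxation of free vertices toward their neighbourhood average, with fixed
# (boundary) vertices left untouched. Step size and iteration count are assumed.
import numpy as np

def laplacian_relax(verts, neighbours, fixed, iters=50, lam=0.5):
    """verts: (V, 3); neighbours: list of index lists; fixed: boolean mask (V,)."""
    v = verts.copy()
    for _ in range(iters):
        avg = np.array([v[nb].mean(axis=0) if nb else v[i]
                        for i, nb in enumerate(neighbours)])
        v[~fixed] += lam * (avg[~fixed] - v[~fixed])  # move free vertices only
    return v
```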
Affiliation(s)
- Tianran Yuan
  - Huaiyin Institute of Technology, Huai'an, China
  - Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yimin Wang
  - Huaiyin Institute of Technology, Huai'an, China
- Zhiwei Hou
  - Huaiyin Institute of Technology, Huai'an, China
- Jun Wang
  - Nanjing University of Aeronautics and Astronautics, Nanjing, China
9. 3D intelligent scissors for dental mesh segmentation. Comput Math Methods Med 2020;2020:1394231. PMID: 32089728; PMCID: PMC7013310; DOI: 10.1155/2020/1394231.
Abstract
Tooth segmentation is a crucial technological component of the digital dentistry system. The limitations of live-wire segmentation are twofold: (1) computing the wire that serves as the segmentation boundary is time-consuming, and (2) a great deal of interaction is unavoidable for dental meshes. To overcome these disadvantages, 3D intelligent scissors for dental mesh segmentation based on live-wire are presented. Two tensor-based anisotropic metrics that make the wire lie along valleys and ridges are defined, and a time-saving anisotropic Dijkstra algorithm is adopted. In addition, to improve the smoothness of the path traced back by the traditional Dijkstra algorithm, a 3D midpoint smoothing algorithm is proposed. Experiments show that the method is effective for dental mesh segmentation and that the proposed tool outperforms the traditional live-wire approach in time complexity and interactivity.
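The backbone of a live-wire style tool is a shortest-path search over mesh edges. The sketch below is a plain Dijkstra search in which the edge weight is Euclidean length scaled by a placeholder per-vertex feature cost (low in valleys); the paper's tensor-based anisotropic metrics and midpoint smoothing are not reproduced.

```python
# Minimal sketch of Dijkstra shortest paths over mesh edges for live-wire style
# boundary tracing. The weight function is a placeholder: edge length scaled by
# a per-vertex feature cost that is assumed to be low along valleys/ridges.
import heapq
import numpy as np

def dijkstra_path(verts, adjacency, feature_cost, src, dst):
    """adjacency: {v: [neighbour indices]}; feature_cost: (V,) array."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, np.inf):
            continue  # stale heap entry
        for v in adjacency[u]:
            nd = d + np.linalg.norm(verts[u] - verts[v]) * feature_cost[v]
            if nd < dist.get(v, np.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:            # assumes dst is reachable from src
        path.append(prev[path[-1]])
    return path[::-1]
```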
10. Kahaki SMM, Nordin MJ, Ahmad NS, Arzoky M, Ismail W. Deep convolutional neural network designed for age assessment based on orthopantomography data. Neural Comput Appl 2019. DOI: 10.1007/s00521-019-04449-6.
11. Kim S, Choi S. Automatic tooth segmentation of dental mesh using a transverse plane. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:4122-4125. PMID: 30441262; DOI: 10.1109/embc.2018.8513318.
Abstract
This paper proposes an automatic method to separate the gingiva and individual teeth from a dental mesh. We define a transverse plane that produces a cross-section of the tooth lingual and labial surfaces while preserving the shape of individual teeth. The vertices above the transverse plane, which belong to the teeth, are projected onto the plane and partitioned into individual teeth. We apply region growing to the remaining non-segmented parts to determine which cluster each vertex belongs to. The proposed approach is fully automatic, i.e., segmentation requires no user interaction for feature point search or tooth boundary marking. The proposed segmentation method is applied to several dental mesh models to demonstrate its robustness.
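The region-growing step mentioned above can be pictured as breadth-first growth over mesh edges: unlabeled vertices are absorbed into whichever labeled cluster reaches them first. The sketch below assumes seed labels and a vertex adjacency map as inputs, with 0 marking "unlabeled"; these conventions are illustrative.

```python
# Minimal sketch of region growing over a mesh: breadth-first expansion of seed
# labels along vertex adjacency. Label 0 = unlabeled (an assumed convention).
from collections import deque

def grow_regions(labels, adjacency):
    """labels: list with 0 for unlabeled vertices; adjacency: {v: [neighbour indices]}."""
    labels = list(labels)
    queue = deque(v for v, lab in enumerate(labels) if lab != 0)  # seed vertices
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if labels[v] == 0:
                labels[v] = labels[u]  # absorb into the first cluster to arrive
                queue.append(v)
    return labels
```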
12. Ju M, Choi Y, Seo J, Sa J, Lee S, Chung Y, Park D. A Kinect-based segmentation of touching-pigs for real-time monitoring. Sensors (Basel) 2018;18:1746. PMID: 29843479; PMCID: PMC6021839; DOI: 10.3390/s18061746.
Abstract
Segmenting touching-pigs in real time is an important issue for surveillance cameras intended for the 24-h tracking of individual pigs. However, methods to do so have not yet been reported. We particularly focus on the segmentation of touching-pigs in a crowded pig room with low-contrast images obtained using a Kinect depth sensor. We reduce the execution time by combining object detection techniques based on a convolutional neural network (CNN) with image processing techniques, instead of applying time-consuming operations such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (i.e., You Only Look Once, YOLO) to solve the separation problem for touching-pigs. If the quality of the YOLO output is not satisfactory, we then try to find a possible boundary line between the touching-pigs by analyzing their shape. Our experimental results show that this method is effective for separating touching-pigs in terms of both accuracy (i.e., 91.96%) and execution time (i.e., real-time execution), even with low-contrast images obtained using a Kinect depth sensor.
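As a loose illustration of the depth-image preprocessing such a pipeline might start from, the sketch below thresholds a Kinect-style depth map into a foreground mask and extracts connected blobs, flagging unusually large blobs as candidate touching-pig regions for the detection and shape-analysis stages. The threshold values and the area heuristic are assumptions, not values from the paper.

```python
# Illustrative preprocessing sketch: depth thresholding plus connected components
# to find candidate "merged" blobs. The near/far depth window (in mm) and the
# blob-size threshold are assumed values for illustration only.
import cv2
import numpy as np

def split_foreground(depth: np.ndarray, near=500, far=2500, big_blob=8000):
    mask = ((depth > near) & (depth < far)).astype(np.uint8)   # foreground mask
    n, comp = cv2.connectedComponents(mask)                    # label blobs
    suspects = [i for i in range(1, n) if (comp == i).sum() > big_blob]
    return comp, suspects  # 'suspects' = candidate touching-pig blobs
```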
Affiliation(s)
- Miso Ju
  - Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Younchang Choi
  - Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Jihyun Seo
  - Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Jaewon Sa
  - Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Sungju Lee
  - Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Yongwha Chung
  - Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea
- Daihee Park
  - Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea