1
Wang S, Liang S, Chang Q, Zhang L, Gong B, Bai Y, Zuo F, Wang Y, Xie X, Gu Y. STSN-Net: Simultaneous Tooth Segmentation and Numbering Method in Crowded Environments with Deep Learning. Diagnostics (Basel) 2024; 14:497. [PMID: 38472969] [DOI: 10.3390/diagnostics14050497] [Received: 12/14/2023] [Revised: 01/25/2024] [Accepted: 02/01/2024] [Indexed: 03/14/2024]
Abstract
Accurate tooth segmentation and numbering are the cornerstones of efficient automatic dental diagnosis and treatment. In this paper, a multitask learning architecture is proposed for accurate tooth segmentation and numbering in panoramic X-ray images. A graph convolution network was applied for the automatic annotation of the target region, a modified convolutional neural network-based detection subnetwork (DSN) was used for tooth recognition and boundary regression, and a region segmentation subnetwork (RSSN) was used for region segmentation. The features extracted by RSSN and DSN were fused to optimize the quality of boundary regression, yielding strong results across multiple evaluation metrics. Specifically, the proposed framework achieved a top F1 score of 0.9849, a top Dice score of 0.9629, and an mAP (IoU = 0.5) of 0.9810. This framework holds great promise for enhancing the clinical efficiency of dentists in tooth segmentation and numbering tasks.
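For readers reproducing the evaluation, a minimal sketch (not the authors' code) of the two mask/detection metrics reported above: the Dice coefficient over binary segmentation masks, here represented as sets of foreground pixel coordinates, and the F1 score from detection counts.

```python
# Illustrative implementations of Dice and F1, the metrics reported for
# STSN-Net. Masks are modeled as sets of (row, col) foreground pixels.

def dice(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) for two foreground pixel sets."""
    if not pred and not gt:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(pred & gt) / (len(pred) + len(gt))

def f1(tp, fp, fn):
    """F1 from detection counts; equals 2*TP / (2*TP + FP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

pred = {(0, 0), (0, 1), (1, 0)}
gt = {(0, 0), (0, 1), (1, 1)}
print(round(dice(pred, gt), 3))  # 2*2/(3+3) = 0.667
print(round(f1(tp=97, fp=2, fn=1), 4))
```

The counts in the usage example are arbitrary; the paper reports only the final scores, not the underlying confusion counts.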
Affiliation(s)
- Shaofeng Wang
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Shuang Liang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Qiao Chang
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Li Zhang
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Beiwen Gong
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Yuxing Bai
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
- Feifei Zuo
- LargeV Instrument Corp., Ltd., Beijing 100084, China
- Yajie Wang
- LargeV Instrument Corp., Ltd., Beijing 100084, China
- Xianju Xie
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
- Yu Gu
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
2
Gong Z, Feng W, Su X, Choi C. System for automatically assessing the likelihood of inferior alveolar nerve injury. Comput Biol Med 2024; 169:107923. [PMID: 38199211] [DOI: 10.1016/j.compbiomed.2024.107923] [Received: 11/20/2023] [Revised: 12/20/2023] [Accepted: 01/01/2024] [Indexed: 01/12/2024]
Abstract
Inferior alveolar nerve (IAN) injury is a severe complication associated with mandibular third molar (MM3) extraction. Consequently, the likelihood of IAN injury must be assessed before performing such an extraction. However, existing deep learning methods for classifying the likelihood of IAN injury that rely on mask images often suffer from limited accuracy and a lack of interpretability. In this paper, we propose an automated system based on panoramic radiographs, featuring a novel segmentation model, SS-TransUnet, and a classification algorithm, CD-IAN injury class. Our objective was to enhance the precision of MM3 and mandibular canal (MC) segmentation and the accuracy of IAN injury likelihood classification, ultimately reducing the occurrence of IAN injuries and providing an interpretable foundation for diagnosis. The proposed segmentation model demonstrated 0.9% and 2.6% improvements in Dice coefficient for MM3 and MC, accompanied by reductions in the 95% Hausdorff distance to 1.619 and 1.886, respectively. Additionally, our classification algorithm achieved an accuracy of 0.846, surpassing existing deep learning-based models by 3.8%, confirming the effectiveness of our system.
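The 95% Hausdorff distance used above is the 95th percentile of nearest-neighbor distances between two contours, which makes it robust to a few outlier boundary pixels. A minimal sketch (an assumption about the standard definition, not the authors' implementation):

```python
# Illustrative 95% Hausdorff distance between two point sets (contours).
# For each point we take the distance to the nearest point of the other
# set, then report the 95th percentile of all such distances.
import math

def _nearest(p, pts):
    """Distance from point p to its nearest neighbor in pts."""
    return min(math.dist(p, q) for q in pts)

def hd95(a, b):
    """Symmetric 95th-percentile Hausdorff distance between sets a and b."""
    dists = sorted([_nearest(p, b) for p in a] + [_nearest(q, a) for q in b])
    k = max(0, math.ceil(0.95 * len(dists)) - 1)  # 95th-percentile index
    return dists[k]

# Two parallel horizontal contours one pixel apart:
a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(hd95(a, b))  # every point is exactly 1.0 from the other contour
```

Percentile conventions vary slightly between toolkits (e.g. interpolation choices), so exact values may differ in the third decimal from library implementations.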
Affiliation(s)
- Ziyang Gong
- Department of Computer Engineering, Gachon University, Seongnam-si, 13120, Republic of Korea
- Weikang Feng
- College of Information Science and Engineering, Hohai University, Changzhou, 213000, China
- Xin Su
- College of Information Science and Engineering, Hohai University, Changzhou, 213000, China
- Chang Choi
- Department of Computer Engineering, Gachon University, Seongnam-si, 13120, Republic of Korea
3
Sun D, Wang J, Zuo Z, Jia Y, Wang Y. STS-TransUNet: Semi-supervised Tooth Segmentation Transformer U-Net for dental panoramic image. Math Biosci Eng 2024; 21:2366-2384. [PMID: 38454687] [DOI: 10.3934/mbe.2024104] [Indexed: 03/09/2024]
Abstract
In this paper, we introduce a novel deep learning method for dental panoramic image segmentation, which is crucial in oral medicine and orthodontics for accurate diagnosis and treatment planning. Traditional methods often fail to effectively combine global and local context, and struggle with unlabeled data, limiting performance in varied clinical settings. We address these issues with an advanced TransUNet architecture, enhancing feature retention and utilization by connecting the input and output layers directly. Our architecture further employs spatial and channel attention mechanisms in the decoder segments for targeted region focus, and deep supervision techniques to overcome the vanishing gradient problem for more efficient training. Additionally, our network includes a self-learning algorithm using unlabeled data, boosting generalization capabilities. Named the Semi-supervised Tooth Segmentation Transformer U-Net (STS-TransUNet), our method demonstrated superior performance on the MICCAI STS-2D dataset, proving its effectiveness and robustness in tooth segmentation tasks.
Affiliation(s)
- Duolin Sun
- University of Science and Technology of China, Hefei, China
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Jianqing Wang
- Hangzhou Sai Future Technology Co., Ltd, Hangzhou, China
- Zhaoyu Zuo
- University of Science and Technology of China, Hefei, China
- Yixiong Jia
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Yimou Wang
- University of Science and Technology of China, Hefei, China
4
Felsch M, Meyer O, Schlickenrieder A, Engels P, Schönewolf J, Zöllner F, Heinrich-Weltzien R, Hesenius M, Hickel R, Gruhn V, Kühnisch J. Detection and localization of caries and hypomineralization on dental photographs with a vision transformer model. NPJ Digit Med 2023; 6:198. [PMID: 37880375] [PMCID: PMC10600213] [DOI: 10.1038/s41746-023-00944-2] [Received: 03/06/2023] [Accepted: 10/13/2023] [Indexed: 10/27/2023]
Abstract
Caries and molar-incisor hypomineralization (MIH) are among the most prevalent diseases worldwide and need to be reliably diagnosed. The use of dental photographs and artificial intelligence (AI) methods may potentially contribute to realizing accurate and automated diagnostic visual examinations in the future. Therefore, the present study aimed to develop an AI-based algorithm that can detect, classify and localize caries and MIH. This study included an image set of 18,179 anonymous photographs. Pixelwise image labeling was achieved by trained and calibrated annotators using the Computer Vision Annotation Tool (CVAT). All annotations were made according to standard methods and were independently checked by an experienced dentist. The entire image set was divided into training (N = 16,679), validation (N = 500) and test sets (N = 1000). The AI-based algorithm was trained and finetuned over 250 epochs by using image augmentation and adapting a vision transformer network (SegFormer-B5). Statistics included the determination of the intersection over union (IoU), average precision (AP) and accuracy (ACC). For the finetuned model, the overall IoU, AP and ACC were 0.959, 0.977 and 0.978, respectively. The corresponding values for the most relevant caries classes of non-cavitations (0.630, 0.813 and 0.990) and dentin cavities (0.692, 0.830, and 0.997) were high. MIH-related demarcated opacity (0.672, 0.827, and 0.993) and atypical restoration (0.829, 0.902, and 0.999) showed similar results. Here, we report that the model achieves excellent precision for pixelwise detection and localization of caries and MIH. Nevertheless, the model needs to be further improved and externally validated.
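For multi-class pixelwise segmentation as described above, per-class IoU and overall pixel accuracy are computed directly from the predicted and reference label maps. A minimal sketch (not the study's code) over flattened label arrays:

```python
# Illustrative per-class IoU and pixel accuracy for semantic segmentation.
# Label maps are flattened into 1-D lists of integer class ids.

def class_iou(pred, gt, cls):
    """IoU for one class: |pred∩gt| / |pred∪gt| over pixels of that class."""
    inter = sum(p == cls and g == cls for p, g in zip(pred, gt))
    union = sum(p == cls or g == cls for p, g in zip(pred, gt))
    return inter / union if union else 1.0  # class absent in both: perfect

def accuracy(pred, gt):
    """Fraction of pixels assigned the correct class."""
    return sum(p == g for p, g in zip(pred, gt)) / len(gt)

# Hypothetical class ids: 0 = sound, 1 = caries, 2 = demarcated opacity
gt   = [0, 0, 1, 1, 2, 2, 0, 1]
pred = [0, 0, 1, 0, 2, 2, 0, 1]
print(round(class_iou(pred, gt, 1), 3))  # 2 inter / 3 union = 0.667
print(accuracy(pred, gt))                # 7 of 8 pixels correct = 0.875
```

Averaging `class_iou` across classes gives the mean IoU often reported alongside AP in segmentation benchmarks.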
Affiliation(s)
- Marco Felsch
- Department of Conservative Dentistry and Periodontology, School of Dentistry, Ludwig-Maximilians University of Munich, Munich, Germany
- Ole Meyer
- Institute for Software Engineering, University of Duisburg-Essen, Essen, Germany
- Anne Schlickenrieder
- Department of Conservative Dentistry and Periodontology, School of Dentistry, Ludwig-Maximilians University of Munich, Munich, Germany
- Paula Engels
- Department of Conservative Dentistry and Periodontology, School of Dentistry, Ludwig-Maximilians University of Munich, Munich, Germany
- Jule Schönewolf
- Department of Conservative Dentistry and Periodontology, School of Dentistry, Ludwig-Maximilians University of Munich, Munich, Germany
- Felicitas Zöllner
- Department of Conservative Dentistry and Periodontology, School of Dentistry, Ludwig-Maximilians University of Munich, Munich, Germany
- Roswitha Heinrich-Weltzien
- Department of Orthodontics, Section of Preventive and Paediatric Dentistry, University Hospital Jena, Jena, Germany
- Marc Hesenius
- Institute for Software Engineering, University of Duisburg-Essen, Essen, Germany
- Reinhard Hickel
- Department of Conservative Dentistry and Periodontology, School of Dentistry, Ludwig-Maximilians University of Munich, Munich, Germany
- Volker Gruhn
- Institute for Software Engineering, University of Duisburg-Essen, Essen, Germany
- Jan Kühnisch
- Department of Conservative Dentistry and Periodontology, School of Dentistry, Ludwig-Maximilians University of Munich, Munich, Germany
5
You H, Wang J, Ma R, Chen Y, Li L, Song C, Dong Z, Feng S, Zhou X. Clinical Interpretability of Deep Learning for Predicting Microvascular Invasion in Hepatocellular Carcinoma by Using Attention Mechanism. Bioengineering (Basel) 2023; 10:948. [PMID: 37627833] [PMCID: PMC10451856] [DOI: 10.3390/bioengineering10080948] [Received: 06/19/2023] [Revised: 07/26/2023] [Accepted: 08/03/2023] [Indexed: 08/27/2023]
Abstract
Preoperative prediction of microvascular invasion (MVI) is essential for management decisions in hepatocellular carcinoma (HCC). Deep learning-based prediction models of MVI are numerous but lack clinical interpretation due to their "black-box" nature. Consequently, we aimed to use an attention-guided feature fusion network, including intra- and inter-attention modules, to solve this problem. This retrospective study recruited 210 HCC patients who underwent gadoxetate-enhanced MRI examination before surgery. The MRIs on pre-contrast, arterial, portal, and hepatobiliary phases (hepatobiliary phase: HBP) were used to develop single-phase and multi-phase models. Attention weights provided by attention modules were used to obtain visual explanations of predictive decisions. The four-phase fusion model achieved the highest area under the curve (AUC) of 0.92 (95% CI: 0.84-1.00), and the other models yielded AUCs of 0.75-0.91. Attention heatmaps of collaborative-attention layers revealed that tumor margins in all phases and peritumoral areas in the arterial phase and HBP were salient regions for MVI prediction. Heatmaps of weights in fully connected layers showed that the HBP contributed the most to MVI prediction. Our study is the first to implement self-attention and collaborative-attention to reveal the relationship between deep features and MVI, improving the clinical interpretation of prediction models. This clinical interpretability offers radiologists and clinicians more confidence to apply deep learning models in clinical practice, helping HCC patients formulate personalized therapies.
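The AUC figures reported above have a useful probabilistic reading: AUC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch via the equivalent Mann-Whitney U statistic (an illustration of the metric, not the study's evaluation code):

```python
# Illustrative AUC via pairwise ranking: count positive-negative score
# pairs ranked correctly, with ties counted as half.

def auc(scores, labels):
    """AUC = P(score of random positive > score of random negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for 3 MVI-positive and 2 MVI-negative cases:
scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
print(auc(scores, labels))  # 5 of 6 positive-negative pairs ranked correctly
```

This pairwise formulation agrees with the trapezoidal area under the ROC curve and makes clear why AUC is insensitive to monotone rescaling of the scores.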
Affiliation(s)
- Shiting Feng
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, 58th the Second Zhongshan Road, Guangzhou 510080, China; (H.Y.); (J.W.); (R.M.); (Y.C.); (L.L.); (C.S.); (Z.D.)
- Xiaoqi Zhou
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, 58th the Second Zhongshan Road, Guangzhou 510080, China; (H.Y.); (J.W.); (R.M.); (Y.C.); (L.L.); (C.S.); (Z.D.)
6
Gardiyanoğlu E, Ünsal G, Akkaya N, Aksoy S, Orhan K. Automatic Segmentation of Teeth, Crown-Bridge Restorations, Dental Implants, Restorative Fillings, Dental Caries, Residual Roots, and Root Canal Fillings on Orthopantomographs: Convenience and Pitfalls. Diagnostics (Basel) 2023; 13:1487. [PMID: 37189586] [DOI: 10.3390/diagnostics13081487] [Received: 12/22/2022] [Revised: 02/26/2023] [Accepted: 03/01/2023] [Indexed: 05/17/2023]
Abstract
BACKGROUND: The aim of our study was to provide successful automatic segmentation of various objects on orthopantomographs (OPGs). METHODS: 8138 OPGs obtained from the archives of the Department of Dentomaxillofacial Radiology were included. OPGs were converted into PNGs and transferred to the segmentation tool's database. All teeth, crown-bridge restorations, dental implants, composite-amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by two experts using the manual-drawing semantic segmentation technique. RESULTS: The intra-class correlation coefficient (ICC) for both inter- and intra-observer manual segmentation was excellent (ICC > 0.75): the intra-observer ICC was 0.994 and the inter-observer ICC was 0.989, with no significant difference between observers (p = 0.947). Across all OPGs, the DSC and accuracy values were 0.85 and 0.95 for tooth segmentation, 0.88 and 0.99 for dental caries, 0.87 and 0.99 for dental restorations, 0.93 and 0.99 for crown-bridge restorations, 0.94 and 0.99 for dental implants, 0.78 and 0.99 for root canal fillings, and 0.78 and 0.99 for residual roots, respectively. CONCLUSIONS: With faster, automated diagnoses on 2D as well as 3D dental images, dentists can achieve higher diagnostic rates in a shorter time, even without excluding cases.
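The ICC values above quantify rater agreement. A minimal sketch of one common variant, the two-way random-effects, absolute-agreement, single-rater form ICC(2,1) from Shrout and Fleiss; the abstract does not state which ICC model was used, so the choice of form here is an assumption:

```python
# Illustrative ICC(2,1) from a two-way ANOVA decomposition.
# data: one row per segmented target, one column per rater/session.

def icc2_1(data):
    """Two-way random, absolute agreement, single rater: ICC(2,1)."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between targets
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    sst = sum((x - grand) ** 2 for row in data for x in row)
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters with a constant offset of 1 on three targets: consistency is
# perfect, but absolute agreement is penalized by the offset.
scores = [[1, 2], [2, 3], [3, 4]]
print(round(icc2_1(scores), 3))  # 0.667
```

The > 0.75 "excellent" cutoff quoted in the abstract follows the conventional interpretation bands for ICC point estimates.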
Affiliation(s)
- Emel Gardiyanoğlu
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Gürkan Ünsal
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- DESAM Institute, Near East University, 99138 Nicosia, Cyprus
- Nurullah Akkaya
- Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, 99138 Nicosia, Cyprus
- Seçil Aksoy
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
- Kaan Orhan
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06560 Ankara, Turkey