1
Kargilis DC, Xu W, Reddy S, Ramesh SSK, Wang S, Le AD, Rajapakse CS. Deep learning segmentation of mandible with lower dentition from cone beam CT. Oral Radiol 2024. [PMID: 39141154; DOI: 10.1007/s11282-024-00770-6]
Abstract
OBJECTIVES: This study aimed to train a 3D U-Net convolutional neural network (CNN) for mandible and lower dentition segmentation from cone-beam computed tomography (CBCT) scans.
METHODS: In an ambispective cross-sectional design, CBCT scans from two hospitals (2009-2019 and 2021-2022) constituted an internal dataset and an external validation set, respectively. Manual segmentation informed CNN training, and evaluations employed the Dice similarity coefficient (DSC) for volumetric accuracy. A blinded oral and maxillofacial surgeon performed qualitative grading of CBCT scans and object meshes. Statistical analyses included independent t-tests and ANOVA to compare DSC across patient subgroups of gender, race, body mass index (BMI), test dataset, age, and degree of metal artifact. Tests were powered for a minimum detectable DSC difference of 0.025, with alpha of 0.05 and power of 0.8.
RESULTS: 648 CBCT scans from 490 patients were included in the study. The CNN achieved high accuracy (average DSC: 0.945 internal, 0.940 external). No DSC differences were observed across test set, gender, BMI, or race. Significant differences in DSC were identified by age group and degree of metal artifact. The majority (80%) of object meshes produced by both manual and automatic segmentation were rated as acceptable or higher quality.
CONCLUSION: We developed a model for automatic mandible and lower dentition segmentation from CBCT scans in a demographically diverse cohort with a high degree of metal artifact. The model demonstrated good accuracy on internal and external test sets, with the majority of outputs rated acceptable by a clinical grader.
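Several entries in this list report segmentation accuracy as the Dice similarity coefficient (DSC). For readers unfamiliar with the metric, a minimal NumPy sketch follows; the function name and binary-mask layout are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # convention: two empty masks count as perfect overlap
    return 2.0 * intersection / total
```

The metric ranges from 0 (no overlap) to 1 (identical masks), so an average DSC of 0.945, as reported above, indicates near-complete agreement between predicted and manual segmentations.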
Affiliation(s)
- Daniel C Kargilis
- University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Johns Hopkins University, Baltimore, USA
- Winnie Xu
- University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Samir Reddy
- University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Steven Wang
- University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Anh D Le
- University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
- Chamith S Rajapakse
- University of Pennsylvania, 1 Founders Pavilion, 3400 Spruce Street, Philadelphia, PA, 19104-4283, USA
2
Neves CA, Chemaly TE, Fu F, Blevins NH. Deep Learning Method for Rapid Simultaneous Multistructure Temporal Bone Segmentation. Otolaryngol Head Neck Surg 2024; 170:1570-1580. [PMID: 38769857; DOI: 10.1002/ohn.764]
Abstract
OBJECTIVE: To develop and validate a deep learning algorithm for automated segmentation of key temporal bone structures from clinical computed tomography (CT) datasets.
STUDY DESIGN: Cross-sectional study.
SETTING: A total of 325 CT scans from a clinical database.
METHODS: A state-of-the-art deep learning (DL) algorithm (SwinUNETR) was used to train a model for rapid segmentation of 9 key temporal bone structures in a dataset of 325 clinical CTs. The dataset was manually annotated by a specialist to serve as the ground truth, then randomly split into training (n = 260) and testing (n = 65) sets. Model performance was objectively assessed on the held-out test set using metrics including Dice coefficient, balanced accuracy, Hausdorff distance, and processing time.
RESULTS: The model achieved an average Dice coefficient of 0.87 across all structures, an average balanced accuracy of 0.94, an average Hausdorff distance of 0.79 mm, and an average processing time of 9.1 seconds per CT.
CONCLUSION: The present DL model for automated simultaneous segmentation of multiple temporal bone structures from CT achieved high accuracy by commonly employed objective measures. The results demonstrate the method's potential to improve preoperative evaluation and intraoperative guidance in otologic surgery.
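The Hausdorff distance reported above is a boundary-distance metric: the largest distance from a point on one surface to its nearest point on the other. A brute-force sketch for small point sets, assuming boundaries have already been extracted as coordinate arrays (this is an illustration, not the paper's implementation):

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (N, d) and b (M, d)."""
    # Pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Farthest nearest-neighbor distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For large segmentation meshes, a KD-tree nearest-neighbor query is preferable to materializing the full N×M distance matrix.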
Affiliation(s)
- Caio A Neves
- Department of Otolaryngology-Head and Neck Surgery, Stanford University, Stanford, California, USA
- Faculty of Medicine, University of Brasilia (UnB), Brasilia, Brazil
- Trishia El Chemaly
- Department of Otolaryngology-Head and Neck Surgery, Stanford University, Stanford, California, USA
- Fanrui Fu
- Department of Otolaryngology-Head and Neck Surgery, Stanford University, Stanford, California, USA
- Nikolas H Blevins
- Department of Otolaryngology-Head and Neck Surgery, Stanford University, Stanford, California, USA
3
Bolton L, Young K, Ray J, Chawdhary G. Virtual temporal bone simulators and their use in surgical training: a narrative review. J Laryngol Otol 2024; 138:356-360. [PMID: 37973532; DOI: 10.1017/s0022215123002025]
Abstract
OBJECTIVE: Temporal bone dissection is a difficult skill to acquire, a challenge recently compounded by the reduction in conventional surgical training opportunities during the coronavirus disease 2019 pandemic. Consequently, there has been renewed interest in ear simulation as an adjunct to surgical training. We review state-of-the-art virtual temporal bone simulators for surgical training.
MATERIALS AND METHODS: A narrative review of the current literature was performed following a Medline search using a pre-determined search strategy.
RESULTS AND ANALYSIS: Sixty-one studies were included. There are five validated temporal bone simulators: Voxel-Man, CardinalSim, the Ohio State University Simulator, Melbourne University's Virtual Reality Surgical Simulation, and the Visible Ear Simulator. The merits of each are reviewed, alongside their role in surgical training.
CONCLUSION: Temporal bone simulators have been demonstrated to be useful adjuncts to conventional surgical training methods and are likely to play an increasing role in the future.
Affiliation(s)
- Lauren Bolton
- ENT Offices, York Hospital, York and Scarborough Teaching Hospitals NHS Foundation Trust, York, UK
- Kenneth Young
- ENT, Castle Hill Hospital, Hull University Teaching Hospital, Hull, UK
- Jaydip Ray
- ENT, Royal Hallamshire Hospital, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
- Gaurav Chawdhary
- ENT, Royal Hallamshire Hospital, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield, UK
4
Andersen SAW, Hittle B, Värendh M, Lee J, Varadarajan V, Powell KA, Wiet GJ. Further Validity Evidence for Patient-Specific Virtual Reality Temporal Bone Surgical Simulation. Laryngoscope 2024; 134:1403-1409. [PMID: 37650640; DOI: 10.1002/lary.31016]
Abstract
OBJECTIVE: Patient-specific virtual reality (VR) simulation of cochlear implant (CI) surgery potentially enables preoperative rehearsal and planning. We aim to gather supporting validity evidence for patient-specific simulation through analysis of virtual performance and comparison with postoperative imaging.
METHODS: Prospective, multi-institutional study. Pre- and postoperative cone-beam CT scans of CI surgical patients were obtained and processed for patient-specific VR simulation. The virtual performances of five trainees and four attendings were recorded and (1) compared with volumes removed during actual surgery as determined from postoperative imaging, and (2) assessed by two blinded raters using the Copenhagen Cochlear Implant Surgery Assessment Tool (CISAT). The volumes compared were the cortical mastoidectomy, facial recess, and round window (RW) cochleostomy, as well as violations of the facial nerve and chorda.
RESULTS: Trainees drilled more volume in the cortical mastoidectomy and facial recess, whereas attendings drilled more volume for the RW cochleostomy and made more violations. Except for the cochleostomy, attendings removed volumes closer to those determined from postoperative imaging. Trainees achieved a higher CISAT performance score than attendings (22.0 vs. 18.4 points), most likely due to the lack of certain visual cues.
CONCLUSION: Performance of trainees and attendings in patient-specific VR simulation of CI surgery differed both as assessed by raters and in comparison with actual drilled volumes. The presented approach of volume comparison is novel and might be used for further validation of patient-specific VR simulation before clinical implementation for preoperative rehearsal in temporal bone surgery.
LEVEL OF EVIDENCE: N/A.
Affiliation(s)
- Steven Arild Wuyts Andersen
- Copenhagen Hearing and Balance Center, Department of Otorhinolaryngology, Rigshospitalet, Copenhagen, Denmark
- Brad Hittle
- Department of Biomedical Informatics, Ohio State University, Columbus, Ohio, USA
- Maria Värendh
- Department of Otorhinolaryngology, Örebro University Hospital, Örebro University, Örebro, Sweden
- Department of Otorhinolaryngology, Department of Clinical Sciences Lund, Lund University, Lund, Sweden
- Julian Lee
- Department of Otorhinolaryngology, The Ohio State University, Columbus, Ohio, USA
- Department of Otolaryngology, Nationwide Children's Hospital, Columbus, Ohio, USA
- Kimerly A Powell
- Department of Biomedical Informatics, Ohio State University, Columbus, Ohio, USA
- Gregory J Wiet
- Department of Otorhinolaryngology, The Ohio State University, Columbus, Ohio, USA
- Department of Otolaryngology, Nationwide Children's Hospital, Columbus, Ohio, USA
5
Pipeline for Automated Processing of Clinical Cone-Beam Computed Tomography for Patient-Specific Temporal Bone Simulation: Validation and Clinical Feasibility. Otol Neurotol 2023; 44:e88-e94. [PMID: 36624596; DOI: 10.1097/mao.0000000000003771]
Abstract
OBJECTIVE: Patient-specific simulation allows the surgeon to plan and rehearse the surgical approach ahead of time. Preoperative clinical imaging for this purpose requires time-consuming manual processing and segmentation of landmarks such as the facial nerve. We aimed to evaluate an automated pipeline with minimal manual interaction for processing clinical cone-beam computed tomography (CBCT) temporal bone imaging for patient-specific virtual reality (VR) simulation.
STUDY DESIGN: Prospective image processing of retrospective imaging series.
SETTING: Academic hospital.
METHODS: Eleven CBCTs were selected based on quality and used for validation of the processing pipeline. A larger naturalistic sample of 36 CBCTs was obtained to explore parameters for successful processing and feasibility for patient-specific VR simulation. Visual inspection and quantitative metrics were used to validate the accuracy of automated segmentation against manual segmentation. The range of acceptable rotational offsets and the variability of translation point selection were determined. Finally, feasibility was evaluated in relation to image acquisition quality, processing time, and suitability for VR simulation.
RESULTS: The performance of automated segmentation was acceptable compared with manual segmentation, as reflected in the quantitative metrics. Total processing time for new datasets averaged 8.3 minutes per dataset, of which less than 30 seconds was manual. Two of the 36 datasets failed because of extreme rotational offset, but overall the registration routine was robust to rotation and to manual selection of a translational reference point. Another seven datasets had successful automated segmentation but insufficient suitability for VR simulation.
CONCLUSION: Automated processing of CBCT imaging has potential for preoperative VR simulation but requires further refinement.
6
Timonen T, Dietz A, Linder P, Lehtimäki A, Löppönen H, Elomaa AP, Iso-Mustajärvi M. The effect of virtual reality on temporal bone anatomy evaluation and performance. Eur Arch Otorhinolaryngol 2022; 279:4303-4312. [PMID: 34837519; PMCID: PMC9363303; DOI: 10.1007/s00405-021-07183-9]
Abstract
PURPOSE: There are only limited data on the application of virtual reality (VR) to the evaluation of temporal bone anatomy. The aim of the present study was to compare the VR environment with traditional cross-sectional viewing of computed tomography images in a simulated preoperative planning setting in novice and expert surgeons.
METHODS: A novice group (n = 5) and an expert group (n = 5) were formed based on otosurgery experience. Participants were asked to identify 24 anatomical landmarks, perform 11 distance measurements between surgically relevant anatomical structures, and place 10 fiducial markers on five cadaver temporal bones, in both the VR environment and cross-sectional viewing in a PACS interface. Data on performance time and user experience (i.e., subjective validation) were collected.
RESULTS: The novice group made significantly more errors (p < 0.001) with significantly longer performance times (p = 0.001) in cross-sectional viewing than the expert group. In the VR environment, there were no significant differences (errors or time) between the groups. The performance of novices improved faster in VR. Novices showed significantly faster task performance (p = 0.003) and a trend towards fewer errors (p = 0.054) in VR compared with cross-sectional viewing; no such difference between the methods was observed in the expert group. Mean overall user-experience scores were significantly higher for VR than for cross-sectional viewing in both groups (p < 0.001).
CONCLUSION: In the VR environment, novices performed the anatomical evaluation of the temporal bone faster and with fewer errors than in traditional cross-sectional viewing, which supports its efficiency for the evaluation of complex anatomy.
Affiliation(s)
- Tomi Timonen
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, 70210 Kuopio, PL 100, 70029, Kuopio, Finland
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, 70210 Kuopio, PL 100, 70029, Kuopio, Finland
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Pia Linder
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, 70210 Kuopio, PL 100, 70029, Kuopio, Finland
- Antti Lehtimäki
- Department of Radiology, Kuopio University Hospital, Kuopio, Finland
- Heikki Löppönen
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, 70210 Kuopio, PL 100, 70029, Kuopio, Finland
- School of Medicine, Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Antti-Pekka Elomaa
- Microsurgery Centre of Eastern Finland, Kuopio, Finland
- Department of Neurosurgery, Kuopio University Hospital, Kuopio, Finland
- Matti Iso-Mustajärvi
- Department of Otorhinolaryngology, Kuopio University Hospital, Puijonlaaksontie 2, 70210 Kuopio, PL 100, 70029, Kuopio, Finland
- Microsurgery Centre of Eastern Finland, Kuopio, Finland