1. He SX, Ma C, Yuan ZY, Xu TF, Xie QT, Wang YX, Huang XP. Feasibility of augmented reality using dental arch-based registration applied to navigation in mandibular distraction osteogenesis: a phantom experiment. BMC Oral Health 2024;24:1321. PMID: 39478554; PMCID: PMC11523659; DOI: 10.1186/s12903-024-05105-9.
Abstract
OBJECTIVE Distraction osteogenesis is a primary treatment for severe mandibular hypoplasia. Achieving the ideal direction of mandibular movement through precise control of the distraction vector remains a challenge in this surgery. The aim of this study was therefore to apply optical see-through (OST) augmented reality (AR) technology for intraoperative navigation during mandibular distractor installation and to evaluate its feasibility and effectiveness in a phantom experiment. METHODS Phantoms were made from 3D-printed mandibular models based on preoperative CT scans and dental arch scans of real patients. Ten sets of models were included, each consisting of two identical mandible models assigned to the AR group and the freehand group. Ten mandibular distraction osteogenesis surgical plans were designed in software, with each plan shared between the two groups. Surgeons performed bilateral mandibular distraction osteogenesis either under AR navigation guidance or with reference to the preoperative surgical plan displayed on a computer screen. The angular errors of the distraction vectors and the distance errors of the distractor positions under the two methods were analyzed and compared. RESULTS Forty distractors were implanted, 20 in each group. In the intra-group comparison between sides, the AR group exhibited a three-dimensional spatial angle error of 1.88° (0.59, 2.48) on the left and 2.71° (1.33, 3.55) on the right (P = 0.085), indicating no significant side-to-side bias in guidance. Comparing the AR group with the traditional freehand (FH) group, the average angle error was 1.94° (1.30, 2.93) in the AR group versus 5.06° (3.61, 9.22) in the freehand group (P < 0.0001), a 61.6% improvement in accuracy. The average displacement error was 1.53 ± 0.54 mm in the AR group versus 3.56 ± 1.89 mm in the freehand group (P < 0.0001), a 57% improvement in accuracy. CONCLUSION AR-based intraoperative navigation in mandibular distraction osteogenesis is accurate and feasible. A large randomized controlled trial with long-term follow-up is needed to confirm these findings. TRIAL REGISTRATION Chinese Clinical Trial Registry, ChiCTR2300068417; registered 17 February 2023.
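To make the error metrics above concrete, the sketch below shows one plausible way to compute them: the 3D angle between planned and achieved distraction vectors, and the Euclidean displacement of the distractor position. All vectors and coordinates are illustrative values, not data from the study.

```python
import numpy as np

def angle_deg(v_planned: np.ndarray, v_actual: np.ndarray) -> float:
    """3D angle between planned and achieved distraction vectors, in degrees."""
    cosang = v_planned @ v_actual / (np.linalg.norm(v_planned) * np.linalg.norm(v_actual))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical vectors: planned direction vs. direction measured on the
# postoperative scan of the phantom.
planned = np.array([0.00, 0.82, 0.57])
actual = np.array([0.03, 0.80, 0.60])
print(f"angular error: {angle_deg(planned, actual):.2f} deg")

# Distractor position error as a plain Euclidean distance (mm), again with
# made-up planned vs. achieved coordinates.
pos_planned = np.array([12.1, 4.0, 7.5])
pos_actual = np.array([11.0, 4.4, 8.6])
print(f"position error: {np.linalg.norm(pos_actual - pos_planned):.2f} mm")
```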
Affiliation(s)
- Shi-Xi He, Cheng Ma, Zong-Yi Yuan, Tian-Feng Xu, Qing-Tiao Xie, Ya-Xi Wang, Xuan-Ping Huang
  Department of Oral and Maxillofacial Surgery, College & Hospital of Stomatology, Guangxi Medical University, Shuangyong Road 10, Qingxiu District, Nanning, Guangxi, China
2. Wang X, Yang C, Liu Z, Zhang J, Xue C, Xing L, Zheng Y, Geng C, Yin X. R-MFE-TCN: A correlation prediction model between body surface and tumor during respiratory movement. Med Phys 2024;51:6075-6089. PMID: 38801342; DOI: 10.1002/mp.17183.
Abstract
BACKGROUND 2D CT image-guided radiofrequency ablation (RFA) is a promising minimally invasive treatment that can destroy liver tumors without removing them. However, CT images provide only limited, static information, and the tumor moves with the patient's respiration. Accurately locating tumors under free-breathing conditions therefore remains an urgent problem. PURPOSE The purpose of this study is to propose a respiratory correlation prediction model for a mixed reality surgical assistance system, the Riemannian and Multivariate Feature Enhanced Temporal Convolutional Network (R-MFE-TCN), and to achieve accurate respiratory correlation prediction. METHODS The model adopts a respiration-oriented Riemannian information enhancement strategy to expand the diversity of the dataset. A new Multivariate Feature Enhancement (MFE) module is proposed to retain respiratory information so that the network can fully exploit the correlation between internal and external data: a dual-channel design retains multivariate respiratory features, while multi-head self-attention captures periodic peak-to-valley information in the respiratory signal. This information significantly improves the prediction performance of the network. The PSO algorithm is used for hyperparameter optimization. In the experiment, internal and external respiratory motion trajectories of seven patients were obtained from the dataset, with the first six patients selected as the training set. The respiratory signal sampling frequency was 21 Hz. RESULTS Extensive experiments on the dataset demonstrate the good performance of this method, which improves prediction accuracy while remaining robust, and reduces delay deviation under long-window prediction. At a 400 ms horizon, the average RMSE and MAE are 0.0453 and 0.0361 mm, respectively, better than competing methods. CONCLUSION The R-MFE-TCN can be extended to respiratory correlation prediction in different clinical situations, meeting the accuracy requirements for respiratory delay prediction in surgical assistance.
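For readers unfamiliar with the backbone named in the abstract, here is a minimal PyTorch sketch of a temporal convolutional network: causal, dilated 1D convolutions with residual connections, predicting a short horizon of future respiratory samples. The paper's Riemannian augmentation, MFE module, multi-head self-attention, and PSO hyperparameter search are not reproduced; every layer size and the three-channel input are assumptions for illustration only.

```python
# Minimal TCN backbone sketch (assumed architecture, not the paper's R-MFE-TCN):
# causal dilated convolutions with residual connections over a breathing history.
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        # Left-pad so the convolution never looks at future samples (causality).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, time)
        out = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return self.relu(out) + x  # residual connection, as in standard TCNs

class TinyTCN(nn.Module):
    def __init__(self, in_features: int = 3, hidden: int = 32, horizon: int = 8):
        super().__init__()
        self.inp = nn.Conv1d(in_features, hidden, kernel_size=1)
        # Exponentially growing dilations widen the receptive field cheaply.
        self.blocks = nn.Sequential(*[CausalConvBlock(hidden, 3, 2 ** i) for i in range(4)])
        self.head = nn.Linear(hidden, horizon)  # predict the next `horizon` samples

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.blocks(self.inp(x))
        return self.head(h[:, :, -1])  # features at the latest time step

# A 21 Hz surrogate signal, as in the paper's dataset: 400 ms ahead is ~8 samples.
history = torch.randn(1, 3, 63)  # (batch, xyz of an external marker, 3 s history)
print(TinyTCN()(history).shape)  # torch.Size([1, 8])
```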
Affiliation(s)
- Xuehu Wang, Chang Yang, Ziqi Liu, Jushuo Zhang
  College of Electronic and Information Engineering, Hebei University, Baoding, China
  Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding, China
- Chao Xue
  Senior Department of Orthopedics, the Fourth Medical Center of PLA General Hospital, Beijing, China
- Lihong Xing, Xiaoping Yin
  Affiliated Hospital of Hebei University, Baoding, China
- Yongchang Zheng
  Department of Liver Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS & PUMC), Beijing, China
- Chen Geng
  Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
3. Ha HG, Gu K, Jeung D, Hong J, Lee H. Simulated augmented reality-based calibration of optical see-through head-mounted display for surgical navigation. Int J Comput Assist Radiol Surg 2024;19:1647-1657. PMID: 38777946; DOI: 10.1007/s11548-024-03164-5.
Abstract
PURPOSE Calibration of an optical see-through head-mounted display is critical for augmented reality-based surgical navigation. While conventional methods have advanced, calibration errors remain significant. Moreover, prior research has focused primarily on calibration accuracy and procedure, neglecting the impact on the overall surgical navigation system. Consequently, these enhancements do not necessarily translate into accurate augmented reality in the head-mounted display, owing to systemic errors that include those of calibration. METHOD This study introduces a simulated augmented reality-based calibration to address these issues. By replicating the augmented reality view that appears in the head-mounted display, the method achieves a calibration that compensates for, and thereby reduces, augmented reality errors. The process involves two distinct calibration approaches, followed by adjustment of the transformation matrix to minimize displacement in the simulated augmented reality. RESULTS The efficacy of this method was assessed through two accuracy evaluations: registration accuracy and augmented reality accuracy. Experimental results showed an average translational error of 2.14 mm and an average rotational error of 1.06° across axes in both approaches. Additionally, augmented reality accuracy, measured by the ratio of overlaid regions, increased to approximately 95%. These findings confirm the enhancement in both calibration and augmented reality accuracy with the proposed method. CONCLUSION The study presents a calibration method using simulated augmented reality that minimizes augmented reality errors. Requiring minimal manual intervention, this approach offers a more robust and precise calibration technique for augmented reality applications in surgical navigation.
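As background on what such a calibration estimates, the sketch below shows the classic SPAAM-style direct linear transform that many OST-HMD calibration pipelines start from: a homogeneous least-squares solve for a 3x4 projection mapping tracked 3D points to the 2D screen positions where the user aligned them. The paper's simulated-AR refinement of the transformation matrix is not reproduced, and the function names and synthetic check are assumptions.

```python
# SPAAM-style DLT sketch (a standard OST-HMD calibration baseline; an
# assumption here, not the paper's exact procedure).
import numpy as np

def spaam_dlt(pts3d: np.ndarray, pts2d: np.ndarray) -> np.ndarray:
    """Estimate a 3x4 projection from (N,3) tracked points and (N,2) screen
    alignments, N >= 6, via homogeneous least squares."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        P = np.array([X, Y, Z, 1.0])
        rows.append(np.concatenate([P, np.zeros(4), -u * P]))
        rows.append(np.concatenate([np.zeros(4), P, -v * P]))
    # The solution is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.vstack(rows))
    return vt[-1].reshape(3, 4)

def project(G: np.ndarray, pts3d: np.ndarray) -> np.ndarray:
    h = (G @ np.c_[pts3d, np.ones(len(pts3d))].T).T
    return h[:, :2] / h[:, 2:3]

# Synthetic check: alignments generated by a known projection are recovered.
rng = np.random.default_rng(1)
G_true = np.vstack([rng.normal(size=(2, 4)), [0.0, 0.0, 1.0, 2.0]])
pts = rng.uniform(0.5, 2.0, size=(10, 3))
uv = project(G_true, pts)
print(np.allclose(project(spaam_dlt(pts, uv), pts), uv, atol=1e-6))  # True
```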
Affiliation(s)
- Ho-Gun Ha, Kyeongmo Gu, Hyunki Lee
  Division of Intelligent Robot, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu 42988, Republic of Korea
- Deokgi Jeung, Jaesung Hong
  Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu 42988, Republic of Korea
4. Tel A, Raccampo L, Vinayahalingam S, Troise S, Abbate V, Orabona GD, Sembronio S, Robiony M. Complex craniofacial cases through augmented reality guidance in surgical oncology: a technical report. Diagnostics (Basel) 2024;14:1108. PMID: 38893634; PMCID: PMC11171943; DOI: 10.3390/diagnostics14111108.
Abstract
Augmented reality (AR) is a promising technology for enhancing image-guided surgery and represents an ideal bridge between precise virtual planning and computer-aided execution of surgical maneuvers in the operating room. In craniofacial surgical oncology, AR brings a digital, three-dimensional representation of the anatomy into the surgeon's view and helps to identify tumor boundaries and optimal surgical paths. Intraoperatively, real-time AR guidance provides surgeons with accurate spatial information, supporting accurate tumor resection and preservation of critical structures. In this paper, the authors review current evidence on AR applications in craniofacial surgery, focusing on real surgical applications, and compare the existing literature with their own experience during an AR- and navigation-guided craniofacial resection. They then analyze which technological trajectories will shape the future of AR and define new perspectives of application for this transformative technology.
Affiliation(s)
- Alessandro Tel, Luca Raccampo, Salvatore Sembronio, Massimo Robiony
  Clinic of Maxillofacial Surgery, Head-Neck and NeuroScience Department, University Hospital of Udine, p.le S. Maria della Misericordia 15, 33100 Udine, Italy
- Shankeeth Vinayahalingam
  Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Stefania Troise, Vincenzo Abbate, Giovanni Dell’Aversana Orabona
  Neurosciences Reproductive and Odontostomatological Sciences Department, University of Naples “Federico II”, 80131 Naples, Italy
5. Shao L, Fu T, Lin Y, Xiao D, Ai D, Zhang T, Fan J, Song H, Yang J. Facial augmented reality based on hierarchical optimization of similarity aspect graph. Comput Methods Programs Biomed 2024;248:108108. PMID: 38461712; DOI: 10.1016/j.cmpb.2024.108108.
Abstract
BACKGROUND Existing face-matching methods require a point cloud to be drawn on the real face for registration; irregular deformation of the patient's skin introduces many outliers into this point cloud, resulting in low registration accuracy. METHODS This work proposes a non-contact pose estimation method based on hierarchical optimization of a similarity aspect graph. The method constructs a distance-weighted, triangle-constrained similarity measure to describe the similarity between views by automatically identifying 2D and 3D facial feature points. A mutual similarity clustering method is proposed to construct a hierarchical aspect graph with 3D poses as nodes. A Monte Carlo tree search strategy is then used to search the hierarchical aspect graph for the optimal pose of the facial 3D model, enabling accurate registration of the model to the real face. RESULTS The proposed method was evaluated in accuracy verification experiments on phantoms and volunteers and compared with four state-of-the-art pose calibration methods. It achieved average fusion errors of 1.13 ± 0.20 mm in the head phantom experiments and 0.92 ± 0.08 mm in the volunteer experiments, the best fusion performance among all compared methods. CONCLUSIONS Our experiments demonstrate the effectiveness of the proposed pose estimation method for facial augmented reality.
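As a toy illustration of searching a hierarchical aspect graph of candidate poses, the sketch below greedily descends the hierarchy by view similarity. The paper instead uses a Monte Carlo tree search and a distance-weighted, triangle-constrained similarity over automatically detected 2D/3D feature points; the cosine similarity, node layout, and random descriptors here are simplifying assumptions.

```python
# Toy greedy descent of a hierarchical aspect graph of candidate poses
# (the paper's MCTS and its tailored similarity measure are not reproduced).
from dataclasses import dataclass, field
import numpy as np

@dataclass
class AspectNode:
    pose: np.ndarray        # e.g., a 6-vector of rotation + translation params
    descriptor: np.ndarray  # view descriptor rendered at this pose
    children: list = field(default_factory=list)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity stands in for the paper's distance-weighted,
    # triangle-constrained measure.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def coarse_to_fine_pose(root: AspectNode, live_descriptor: np.ndarray) -> np.ndarray:
    node = root
    while node.children:  # at each level, follow the most similar view
        node = max(node.children, key=lambda c: similarity(c.descriptor, live_descriptor))
    return node.pose

# Two-level toy graph with random descriptors.
rng = np.random.default_rng(0)
leaves = [AspectNode(rng.normal(size=6), rng.normal(size=16)) for _ in range(4)]
root = AspectNode(np.zeros(6), rng.normal(size=16), children=leaves)
live = leaves[2].descriptor + 0.01 * rng.normal(size=16)
print(coarse_to_fine_pose(root, live))  # recovers (approximately) leaves[2].pose
```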
Affiliation(s)
- Long Shao, Hong Song
  School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu, Yucong Lin
  School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Deqiang Xiao, Danni Ai, Jingfan Fan, Jian Yang
  Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tao Zhang
  Department of Stomatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
6. Yang S, Wang Y, Ai D, Geng H, Zhang D, Xiao D, Song H, Li M, Yang J. Augmented reality navigation system for biliary interventional procedures with dynamic respiratory motion correction. IEEE Trans Biomed Eng 2024;71:700-711. PMID: 38241137; DOI: 10.1109/tbme.2023.3316290.
Abstract
OBJECTIVE Biliary interventional procedures require physicians to track the interventional instrument tip (Tip) precisely with X-ray imaging. However, Tip positioning relies heavily on the physician's experience because of the limitations of X-ray imaging and respiratory interference, which can lead to biliary damage, prolonged operation time, and increased X-ray radiation exposure. METHODS We construct an augmented reality (AR) navigation system for biliary interventional procedures comprising system calibration, respiratory motion correction, and fusion navigation. First, the magnetic and 3D computed tomography (CT) coordinate systems are aligned through system calibration. Second, a respiratory motion correction method based on manifold regularization is proposed to correct the misalignment of the two coordinate systems caused by respiratory motion. Third, the virtual biliary tract, liver, and Tip derived from CT are overlaid on the corresponding positions on the patient for dynamic virtual-real fusion. RESULTS Our system achieved average alignment errors of 0.75 ± 0.17 mm on phantoms and 2.79 ± 0.46 mm on patients. Navigation experiments conducted on phantoms achieved an average Tip positioning error of 0.98 ± 0.15 mm and an average fusion error of 1.67 ± 0.34 mm after correction. CONCLUSION Our system can automatically register the Tip to the corresponding location in CT and dynamically overlay the 3D virtual model onto the patient to provide accurate and intuitive AR navigation. SIGNIFICANCE This study demonstrates the clinical potential of our system for assisting physicians during biliary interventional procedures. The system enables dynamic visualization of the virtual model on the patient, reducing reliance on contrast agents and X-ray exposure.
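The system-calibration step described above is, at its core, a point-based rigid alignment between tracker and CT coordinates. Below is a minimal sketch of the textbook Kabsch/Umeyama solve for such an alignment, with a fiducial registration error helper as one way to report millimeter-scale alignment error; the paper's manifold-regularized respiratory correction is not reproduced.

```python
# Kabsch/Umeyama rigid alignment of paired fiducials (tracker space -> CT space);
# a textbook solve, not the paper's full calibration pipeline.
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """R, t minimizing sum ||R @ src_i + t - dst_i||^2 over paired (N,3) points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def fre(R, t, src, dst):
    """Fiducial registration error: mean residual distance after alignment (mm)."""
    return float(np.linalg.norm(src @ R.T + t - dst, axis=1).mean())

# Synthetic check with a known rotation and translation.
rng = np.random.default_rng(2)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))          # force a proper rotation
src = rng.normal(size=(8, 3))
dst = src @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_align(src, dst)
print(fre(R, t, src, dst))  # ~0 for noise-free fiducials
```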
7. Condino S, Cutolo F, Carbone M, Cercenelli L, Badiali G, Montemurro N, Ferrari V. Registration sanity check for AR-guided surgical interventions: experience from head and face surgery. IEEE J Transl Eng Health Med 2023;12:258-267. PMID: 38410181; PMCID: PMC10896424; DOI: 10.1109/jtehm.2023.3332088.
Abstract
Achieving and maintaining adequate image registration accuracy is an open challenge in image-guided surgery. This work explores and assesses the efficacy of a registration sanity check method for augmented reality-guided navigation (AR-RSC) based on visual inspection of virtual 3D models of landmarks. We analyze AR-RSC sensitivity and specificity by recruiting 36 subjects to assess the registration accuracy of a set of 114 AR images generated from camera images acquired during an AR-guided orthognathic intervention. Translational or rotational errors of known magnitude, up to ±1.5 mm/±15.5°, were artificially added to the image set to simulate different registration errors. The study analyzes the performance of AR-RSC when varying (1) the virtual models selected for misalignment evaluation (models of brackets, incisor teeth, and gingival margins in our experiment), (2) the type of registration error (translation/rotation), and (3) the level of user experience with AR technologies. Results show that: (1) the sensitivity and specificity of AR-RSC depend on the virtual models (globally, a median true positive rate of up to 79.2% was reached with brackets and a median true negative rate of up to 64.3% with incisor teeth); (2) some error components are more difficult to identify visually; and (3) the level of user experience does not affect the method. In conclusion, the proposed AR-RSC, which was also tested in the operating room, could be an efficient method to monitor and optimize registration accuracy during an intervention, but special attention should be paid to selecting the AR data used for visual inspection of registration accuracy.
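For clarity on how the sensitivity and specificity figures above arise, the sketch below computes true positive and true negative rates from per-image ground truth (error artificially added or not) and rater responses. The counts in the example are made up for illustration.

```python
import numpy as np

def sens_spec(truth: np.ndarray, flagged: np.ndarray):
    """truth: 1 = registration error present; flagged: 1 = rater reported error."""
    tp = np.sum((truth == 1) & (flagged == 1))
    tn = np.sum((truth == 0) & (flagged == 0))
    fn = np.sum((truth == 1) & (flagged == 0))
    fp = np.sum((truth == 0) & (flagged == 1))
    return tp / (tp + fn), tn / (tn + fp)  # (true positive rate, true negative rate)

# Hypothetical responses for eight AR images.
truth = np.array([1, 1, 1, 0, 0, 0, 1, 0])
flagged = np.array([1, 1, 0, 0, 1, 0, 1, 0])
print(sens_spec(truth, flagged))  # (0.75, 0.75)
```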
Affiliation(s)
- Sara Condino, Fabrizio Cutolo, Marina Carbone, Giovanni Badiali, Vincenzo Ferrari
  Department of Information Engineering, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli
  EDIMES Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Nicola Montemurro
  Department of Neurosurgery, Azienda Ospedaliera Universitaria Pisana (AOUP), 56127 Pisa, Italy
8. Yao JF, Yang Y, Wang XC, Zhang XP. Systematic review of digital twin technology and applications. Vis Comput Ind Biomed Art 2023;6:10. PMID: 37249731; DOI: 10.1186/s42492-023-00137-4.
Abstract
As one of the most important applications of digitalization, intelligence, and service technologies, the digital twin (DT) breaks through the constraints that time, space, cost, and security place on physical entities; it extends and optimizes their functions and enhances their application value. DTs have been widely studied in academia and industry. In this study, the concept and definition of the DT, as used by scholars and practitioners across industries, are summarized, and the internal associations between DT and related technologies are explained. Four stages in the history of DT development are identified, and the fundamentals of the technology, its evaluation indexes, and model frameworks are reviewed. A conceptual ternary model of the DT based on time, space, and logic is then proposed, and the technology and application status of typical DT systems are described. Finally, the current technical challenges of DT technology are analyzed, and directions for future development are discussed.
Affiliation(s)
- Jun-Feng Yao
  Center for Digital Media Computing, School of Film, Xiamen University, Xiamen 361005, China
  School of Informatics, Xiamen University, Xiamen 361005, China
  Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage of Fujian and Taiwan, Ministry of Culture and Tourism, Xiamen 361005, China
- Yong Yang, Xue-Cheng Wang
  Center for Digital Media Computing, School of Film, Xiamen University, Xiamen 361005, China
- Xiao-Peng Zhang
  State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 101408, China
9. Chen Y, Liu L, Qiu S, Hu C, Wang L, Li Y, Tan X, Gao Y, Huang D. Application of real-time augmented reality-guided osteotomy and apex location in endodontic microsurgery: a surgical simulation study based on 3D-printed alveolar bone model. J Endod 2023:S0099-2399(23)00281-9. PMID: 37211311; DOI: 10.1016/j.joen.2023.05.011.
Abstract
INTRODUCTION Augmented reality (AR) is a novel visualization technique in which pre-generated virtual 3D content is superimposed on the surgical site. This study aimed to validate the viability of AR-guided endodontic microsurgery (ARG) and to compare objective and subjective outcomes of surgical simulation between ARG and freehand endodontic microsurgery (FH) on customized 3D-printed models. METHODS We created and printed a customized 3D alveolar bone model with artificial periapical lesions (APLs) based on cone-beam computed tomography (CBCT). Eight models containing 96 APLs were equally divided between the ARG and FH groups. Surgical trajectories were planned on re-scanned printed models. Four inexperienced residents (IRs) performed ARG and FH on the models and completed pre- and intraoperative confidence questionnaires for the subjective outcomes. Postoperative CBCT scans of the models were reconstructed and analyzed, and all procedures were timed. Pairwise Wilcoxon rank-sum tests were used to compare objective outcomes; Kruskal-Wallis tests followed by post-hoc pairwise Wilcoxon rank-sum tests were used to compare subjective outcomes. RESULTS Compared with the FH group, the ARG group showed significantly reduced deviation in the volume of bone removed, in root-end resection, and in bevel angle, with improved IR confidence (P < 0.05); it also showed significantly increased surgical time and volume of unremoved APL (P < 0.05). CONCLUSIONS We customized an APL model through 3D printing and developed and validated a low-cost AR application framework, based on free AR software, for endodontic microsurgery. ARG allowed IRs to perform more conservative and precise surgical procedures with enhanced confidence.
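As an illustration of the statistical pipeline described in METHODS, the sketch below runs a pairwise Wilcoxon rank-sum test and a Kruskal-Wallis test with SciPy. All measurement values are fabricated placeholders, not the study's data.

```python
from scipy.stats import ranksums, kruskal

# Hypothetical per-procedure measurements, e.g. deviation of bevel angle (deg).
ar_group = [1.2, 0.9, 1.5, 1.1, 1.3, 0.8]
fh_group = [2.8, 3.4, 2.1, 3.9, 2.6, 3.0]

stat, p = ranksums(ar_group, fh_group)
print(f"Wilcoxon rank-sum: W = {stat:.2f}, p = {p:.4f}")

# Kruskal-Wallis across more than two conditions, followed in practice by
# post-hoc pairwise rank-sum tests with a multiplicity correction.
h, p_kw = kruskal(ar_group, fh_group, [1.8, 2.2, 1.6, 2.0, 1.9, 2.4])
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")
```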
Affiliation(s)
- Yue Chen, Yantong Li, Xinqiao Tan
  State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Liu Liu, Shenghao Qiu, Liu Wang, Yuan Gao, Dingming Huang
  State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
  Department of Conservative Dentistry and Endodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Chengsi Hu
  School of Information Engineering, Guangdong University of Technology, Guangzhou, China
10. Zhao R, Zhu Z, Shao L, Meng F, Lei Z, Li X, Zhang T. Augmented reality guided in reconstruction of mandibular defect with fibular flap: a cadaver study. J Stomatol Oral Maxillofac Surg 2022;124:101318. PMID: 36280109; DOI: 10.1016/j.jormas.2022.10.017.
Abstract
BACKGROUND Augmented reality (AR) navigation has developed rapidly in recent years and can overcome some limitations of existing technologies. This study investigated a novel AR-based method for fibula free flap (FFF) osteotomy through a cadaver study. METHODS One mandible, seven fibulas, and seven lower-limb specimens underwent computed tomography (CT) examination. We used the professional software ProPlan CMF 3.0 to design a defective mandible model and created fourteen virtual reconstruction plans using the fibulas and lower-limb specimens. An AR-based intraoperative guidance software prototype was developed on the Unity real-time development platform, and the virtual plans were transferred into this prototype. We used AR-based surgical navigation to guide the FFF osteotomies and used the resulting fibular segments to reconstruct the defective mandible model. After reconstruction, all segments were scanned by CT. Osteotomy accuracy was evaluated by measuring the length and angular deviations between the virtual plan and the final result; reconstruction precision was reflected by the volume overlap rate and the average surface distance between the planned and obtained reconstructions. RESULTS The length difference, angular deviation, volume overlap rate, and average surface distance were 1.03 ± 0.68 mm, 5.04 ± 2.61°, 95.35 ± 1.81%, and 1.02 ± 0.27 mm, respectively, for the in vitro group, and 1.18 ± 0.84 mm, 5.45 ± 1.47°, 95.31 ± 2.09%, and 1.22 ± 0.12 mm for the in vivo group. CONCLUSIONS Given the favorable results of these cadaver experiments, an AR-guided FFF osteotomy system may become a useful approach to assist FFF osteotomy for the reconstruction of mandibular defects.
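The two reconstruction metrics above can be computed directly on binary segmentation volumes. Below is a plausible sketch assuming a Dice-style overlap (the abstract does not state which overlap definition was used) and a symmetric average surface distance via Euclidean distance transforms; the toy spheres and 1 mm spacing are assumptions.

```python
import numpy as np
from scipy import ndimage

def volume_overlap_rate(planned: np.ndarray, obtained: np.ndarray) -> float:
    """Dice-style overlap of two boolean volumes; one common definition."""
    inter = np.logical_and(planned, obtained).sum()
    return 2.0 * inter / (planned.sum() + obtained.sum())

def average_surface_distance(a: np.ndarray, b: np.ndarray,
                             spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean surface-to-surface distance of two boolean volumes."""
    surf_a = a & ~ndimage.binary_erosion(a)   # one-voxel-thick boundary of a
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return float((dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean()) / 2.0)

# Toy example: two overlapping spheres on a 1 mm isotropic grid.
z, y, x = np.ogrid[:64, :64, :64]
planned = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 20 ** 2
obtained = (z - 34) ** 2 + (y - 32) ** 2 + (x - 31) ** 2 < 20 ** 2
print(volume_overlap_rate(planned, obtained))
print(average_surface_distance(planned, obtained))
```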