1
Suenaga H, Sakakibara A, Koyama J, Hoshi K. A clinical presentation of markerless augmented reality assisted surgery for resection of a dentigerous cyst in the maxillary sinus. J Stomatol Oral Maxillofac Surg 2024; 125:101767. [PMID: 38246585] [DOI: 10.1016/j.jormas.2024.101767]
Abstract
Dentigerous cysts (DC) in the maxillary sinus are rare and pose challenges for effective treatment. Despite various available surgical techniques, a definitive approach remains debated. This study introduces a markerless Augmented Reality Assisted Surgery (ARAS) system that utilizes tooth image recognition and surgical simulation to enhance the precision of maxillary sinus DC extractions. Using advanced technology, such as 3-dimensional (3D) intraoral scanning and CT imaging for accurate data capture, the system aligns virtual models with patient anatomy without external markers, demonstrating a minimally invasive surgical solution. The ARAS system enabled precise surgical planning and realization of a DC extraction in the maxillary sinus by creating a bone window in direct contact with the cyst, assisting in complete removal with minimal risk to adjacent structures. The ARAS system may aid surgeons in visualizing patient anatomy during surgery, with overlays of relevant medical images, aiding in precise localization and minimizing tissue damage.
Affiliation(s)
- Hideyuki Suenaga
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan.
- Ayuko Sakakibara
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Juri Koyama
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Kazuto Hoshi
- Department of Oral-Maxillofacial Surgery and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
2
Lee D, Choi A, Mun JH. Deep Learning-Based Fine-Tuning Approach of Coarse Registration for Ear-Nose-Throat (ENT) Surgical Navigation Systems. Bioengineering (Basel) 2024; 11:941. [PMID: 39329683] [PMCID: PMC11428421] [DOI: 10.3390/bioengineering11090941]
Abstract
Accurate registration between medical images and patient anatomy is crucial for surgical navigation systems in minimally invasive surgeries. This study introduces a novel deep learning-based refinement step to enhance the accuracy of surface registration without disrupting established workflows. The proposed method integrates a machine learning model between conventional coarse registration and ICP fine registration. A deep-learning model was trained using simulated anatomical landmarks with introduced localization errors. The model architecture features global feature-based learning, an iterative prediction structure, and independent processing of rotational and translational components. Validation with silicon-masked head phantoms and CT imaging compared the proposed method to both conventional registration and a recent deep-learning approach. The results demonstrated significant improvements in target registration error (TRE) across different facial regions and depths. The average TRE for the proposed method (1.58 ± 0.52 mm) was significantly lower than that of the conventional (2.37 ± 1.14 mm) and previous deep-learning (2.29 ± 0.95 mm) approaches (p < 0.01). The method showed a consistent performance across various facial regions and enhanced registration accuracy for deeper areas. This advancement could significantly enhance precision and safety in minimally invasive surgical procedures.
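The study above inserts a learned refinement between conventional coarse registration and the final ICP fine-registration stage. As background for that last stage, a minimal point-to-point ICP loop can be sketched as follows; this is a generic illustration of the algorithm named in the abstract, not the authors' implementation:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping points src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=30):
    """Point-to-point ICP: iteratively refine the alignment of source onto target."""
    src = np.asarray(source, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # closest-point correspondences (brute force; a k-d tree scales better)
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

Because ICP only converges to the nearest local minimum, it needs a reasonable initial alignment, which is exactly why the coarse-registration step that the proposed model refines matters.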
Affiliation(s)
- Dongjun Lee
- Department of Biomechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Ahnryul Choi
- Department of Biomedical Engineering, College of Medicine, Chungbuk National University, Cheongju 28644, Republic of Korea
- Joung Hwan Mun
- Department of Biomechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
3
Wilkat M, Saigo L, Kübler N, Rana M, Schrader F. Intraoral Scanning Enables Virtual-Splint-Based Non-Invasive Registration Protocol for Maxillofacial Surgical Navigation. J Clin Med 2024; 13:5196. [PMID: 39274408] [PMCID: PMC11396243] [DOI: 10.3390/jcm13175196]
Abstract
Background/Objectives: Surgical navigation has advanced maxillofacial surgery since the 1990s, bringing benefits for various indications. Traditional registration methods use fiducial markers that are either invasively bone-anchored or attached to a dental vacuum splint and offer high accuracy but necessitate additional imaging with increased radiation exposure. We propose a novel, non-invasive registration protocol using a CAD/CAM dental splint based on high-resolution intraoral scans. Methods: The effectiveness of this method was experimentally evaluated with an ex vivo 3D-printed skull measuring the target registration error (TRE). Surgical application is demonstrated in two clinical cases. Results: In the ex vivo model, the new CAD/CAM-splint-based method achieved a mean TRE across the whole facial skull of 0.97 ± 0.29 mm, which was comparable to traditional techniques like using bone-anchored screws (1.02 ± 0.23 mm) and dental vacuum splints (1.01 ± 0.33 mm), while dental anatomical landmarks showed a lower accuracy with a mean TRE of 1.84 ± 0.44 mm. Multifactorial ANOVA confirmed significant differences in TRE based on the registration method and the navigated level of the facial skull (p < 0.001). In clinical applications, the presented method demonstrated high accuracy for both midfacial and mandibular surgeries. Conclusions: Our results suggest that this non-invasive CAD/CAM-splint-based method is a viable alternative to traditional fiducial marker techniques, with the potential for broad application in maxillofacial surgery. This approach retains high accuracy while eliminating the need for supplementary imaging and reduces patient radiation exposure. Further clinical trials are necessary to confirm these findings and optimize splint design for enhanced navigational accuracy.
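Target registration error (TRE), the accuracy metric reported throughout the abstract above, is conventionally computed by estimating a rigid registration from paired fiducials and then measuring the residual distance at independent target points. A minimal sketch of that computation (generic, not the study's code; the point coordinates in the usage example below are made up for illustration):

```python
import numpy as np

def fiducial_registration(fids_image, fids_patient):
    """Paired-point rigid registration (image space -> patient space) via Kabsch."""
    ci, cp = fids_image.mean(axis=0), fids_patient.mean(axis=0)
    H = (fids_image - ci).T @ (fids_patient - cp)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cp - R @ ci

def target_registration_error(R, t, targets_image, targets_patient):
    """Distance between each mapped image target and its true patient-space position."""
    mapped = targets_image @ R.T + t
    return np.linalg.norm(mapped - targets_patient, axis=1)
```

The key property, reflected in the study's region- and depth-dependent results, is that TRE is measured at anatomical targets away from the fiducials, so it grows with distance from the fiducial configuration even when the fiducial fit itself is tight.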
Affiliation(s)
- Max Wilkat
- Department of Oral and Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Leonardo Saigo
- Department of Oral and Maxillofacial Surgery, National Dental Centre Singapore, 5 Second Hospital Ave., Singapore 168938, Singapore
- Norbert Kübler
- Department of Oral and Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Majeed Rana
- Department of Oral and Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
- Felix Schrader
- Department of Oral and Maxillofacial Surgery, Heinrich Heine University Hospital Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany
4
Ha HG, Gu K, Jeung D, Hong J, Lee H. Simulated augmented reality-based calibration of optical see-through head-mounted display for surgical navigation. Int J Comput Assist Radiol Surg 2024; 19:1647-1657. [PMID: 38777946] [DOI: 10.1007/s11548-024-03164-5]
Abstract
PURPOSE Calibration of an optical see-through head-mounted display is critical for augmented reality-based surgical navigation. While conventional methods have advanced, calibration errors remain significant. Moreover, prior research has focused primarily on calibration accuracy and procedure, neglecting the impact on the overall surgical navigation system. Consequently, these enhancements do not necessarily translate to accurate augmented reality in the optical see-through head mount due to systemic errors, including those in calibration. METHOD This study introduces a simulated augmented reality-based calibration to address these issues. By replicating the augmented reality that appeared in the optical see-through head mount, the method achieves calibration that compensates for augmented reality errors, thereby reducing them. The process involves two distinct calibration approaches, followed by adjusting the transformation matrix to minimize displacement in the simulated augmented reality. RESULTS The efficacy of this method was assessed through two accuracy evaluations: registration accuracy and augmented reality accuracy. Experimental results showed an average translational error of 2.14 mm and rotational error of 1.06° across axes in both approaches. Additionally, augmented reality accuracy, measured by the overlay regions' ratio, increased to approximately 95%. These findings confirm the enhancement in both calibration and augmented reality accuracy with the proposed method. CONCLUSION The study presents a calibration method using simulated augmented reality, which minimizes augmented reality errors. This approach, requiring minimal manual intervention, offers a more robust and precise calibration technique for augmented reality applications in surgical navigation.
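The abstract above reports augmented reality accuracy as the ratio of the overlay regions. One plausible reading of such a metric (the paper's exact definition may differ) is the fraction of the reference overlay region that the rendered AR overlay covers, computed on binary masks:

```python
import numpy as np

def overlay_ratio(rendered, reference):
    """Fraction of the reference overlay region covered by the rendered AR overlay.

    Both inputs are 2D arrays interpreted as binary masks; 1.0 means the
    rendered overlay fully covers the reference region.
    """
    rendered = np.asarray(rendered).astype(bool)
    reference = np.asarray(reference).astype(bool)
    return (rendered & reference).sum() / reference.sum()
```

Under this reading, a reported ~95% means the rendered overlay misses only about 5% of the intended region after the simulated-AR calibration adjustment.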
Affiliation(s)
- Ho-Gun Ha
- Division of Intelligent Robot, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
- Kyeongmo Gu
- Division of Intelligent Robot, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
- Deokgi Jeung
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
- Jaesung Hong
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
- Hyunki Lee
- Division of Intelligent Robot, Daegu Gyeongbuk Institute of Science and Technology (DGIST), 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea.
5
Finos K, Datta S, Sedrakyan A, Milsom JW, Pua BB. Mixed reality in interventional radiology: a focus on first clinical use of XR90 augmented reality-based visualization and navigation platform. Expert Rev Med Devices 2024; 21:679-688. [PMID: 39054630] [DOI: 10.1080/17434440.2024.2379925]
Abstract
INTRODUCTION Augmented reality (AR) and virtual reality (VR) are emerging tools in interventional radiology (IR), enhancing IR education, preprocedural planning, and intraprocedural guidance. AREAS COVERED This review identifies current applications of AR/VR in IR, with a focus on studies that assess the clinical impact of AR/VR. We outline the relevant technology and assess current limitations and future directions in this space. We found that the use of AR in IR lags behind other surgical fields, and the majority of the data exists in case series or small-scale studies. Educational use of AR/VR improves learning of anatomy and procedure steps and shortens procedural learning curves. Preprocedural use of AR/VR decreases procedure times, especially in complex procedures. Intraprocedural AR for live tracking is accurate to within 5 mm in live patients and to as little as 0.75 mm in phantoms, offering decreased procedure time and radiation exposure. Challenges include cost, ergonomics, rapid segmentation, and organ motion. EXPERT OPINION The use of AR/VR in interventional radiology may lead to safer and more efficient procedures. However, more data from larger studies are needed to better understand where AR/VR confers the most benefit in interventional radiology clinical practice.
Affiliation(s)
- Kyle Finos
- Division of Interventional Radiology, New York Presbyterian Hospital/Weill Cornell Medicine, New York, USA
- Sanjit Datta
- Division of Interventional Radiology, New York Presbyterian Hospital/Weill Cornell Medicine, New York, USA
- Art Sedrakyan
- Population Health Science, New York Presbyterian Hospital/Weill Cornell Medicine, New York, USA
- Jeffrey W Milsom
- Division of Colorectal Surgery, New York Presbyterian Hospital/Weill Cornell Medicine, New York, USA
- Bradley B Pua
- Division of Interventional Radiology, New York Presbyterian Hospital/Weill Cornell Medicine, New York, USA
6
Taleb A, Leclerc S, Hussein R, Lalande A, Bozorg-Grayeli A. Registration of preoperative temporal bone CT-scan to otoendoscopic video for augmented-reality based on convolutional neural networks. Eur Arch Otorhinolaryngol 2024; 281:2921-2930. [PMID: 38200355] [DOI: 10.1007/s00405-023-08403-0]
Abstract
PURPOSE Patient-to-image registration is a preliminary step required in surgical navigation based on preoperative images. Human intervention and fiducial markers hamper this task as they are time-consuming and introduce potential errors. We aimed to develop a fully automatic 2D registration system for augmented reality in ear surgery. METHODS CT-scans and corresponding oto-endoscopic videos were collected from 41 patients (58 ears) undergoing ear examination (vestibular schwannoma before surgery, profound hearing loss requiring cochlear implant, suspicion of perilymphatic fistula, contralateral ears in cases of unilateral chronic otitis media). Two to four images were selected from each case. For the training phase, data from patients (75% of the dataset) and 11 cadaveric specimens were used. Tympanic membranes and malleus handles were contoured on both video images and CT-scans by expert surgeons. The algorithm used a U-Net network for detecting the contours of the tympanic membrane and the malleus on both preoperative CT-scans and endoscopic video frames. Then, contours were processed and registered through an iterative closest point algorithm. Validation was performed on 4 cases and testing on 6 cases. Registration error was measured by overlaying both images and measuring the average and Hausdorff distances. RESULTS The proposed registration method yielded a precision compatible with ear surgery with a 2D mean overlay error of 0.65 ± 0.60 mm for the incus and 0.48 ± 0.32 mm for the round window. The average Hausdorff distance for these 2 targets was 0.98 ± 0.60 mm and 0.78 ± 0.34 mm respectively. An outlier case with higher errors (2.3 mm and 1.5 mm average Hausdorff distance for incus and round window respectively) was observed in relation to a high discrepancy between the projection angle of the reconstructed CT-scan and the video image. The maximum duration for the overall process was 18 s. 
CONCLUSIONS A fully automatic 2D registration method based on a convolutional neural network and applied to ear surgery was developed. The method did not rely on any external fiducial markers nor human intervention for landmark recognition. The method was fast and its precision was compatible with ear surgery.
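The contour-to-contour registration step described above (iterative closest point on the segmented tympanic membrane and malleus contours) can be illustrated with a minimal 2D rigid ICP; the hypothetical ellipse contour in the usage example stands in for the anatomical contours produced by the U-Net:

```python
import numpy as np

def icp_2d(src_contour, dst_contour, iters=20):
    """Rigidly align 2D contour points (src) onto a reference contour (dst)."""
    src = np.asarray(src_contour, dtype=float)
    for _ in range(iters):
        # nearest-neighbour correspondences on the reference contour
        d2 = ((src[:, None, :] - dst_contour[None, :, :]) ** 2).sum(-1)
        matched = dst_contour[d2.argmin(axis=1)]
        # best 2D rigid transform (Kabsch) for the current correspondences
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        src = (src - cs) @ R.T + cm
    return src
```

Restricting the transform to 2D rotation and translation mirrors the paper's setting, where the registered entities are planar contours in the endoscopic image rather than full 3D surfaces; the observed sensitivity to the projection angle of the reconstructed CT is a direct consequence of that 2D restriction.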
Affiliation(s)
- Ali Taleb
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France.
- Sarah Leclerc
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- Alain Lalande
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- Medical Imaging Department, Dijon University Hospital, 21000, Dijon, France
- Alexis Bozorg-Grayeli
- ICMUB Laboratory UMR CNRS 6302, University of Burgundy Franche Comte, 21000, Dijon, France
- ENT Department, Dijon University Hospital, 21000, Dijon, France