1
Li W, Fan J, Li S, Zheng Z, Tian Z, Ai D, Song H, Chen X, Yang J. An incremental registration method for endoscopic sinus and skull base surgery navigation: From phantom study to clinical trials. Med Phys 2023; 50:226-239. [PMID: 35997999] [DOI: 10.1002/mp.15941]
Abstract
PURPOSE Surface-based image-to-patient registration in current surgical navigation is mainly achieved with a 3D scanner, which has several limitations in clinical practice, such as an uncontrollable scanning range, complicated operation, and even a high failure rate. An accurate, robust, and easy-to-perform image-to-patient registration method is therefore urgently required. METHODS An incremental point cloud registration method was proposed for surface-based image-to-patient registration. The point cloud in image space was extracted from the computed tomography (CT) image, and a template matching method was applied to remove redundant points. The corresponding point cloud in patient space was incrementally collected with an optically tracked pointer, while a nearest point distance (NPD) constraint was applied to ensure the uniformity of the collected points. A coarse-to-fine registration method under the constraints of coverage ratio (CR) and outliers ratio (OR) was then proposed to obtain the optimal rigid transformation from image to patient space. The proposed method was integrated into a recently developed endoscopic navigation system, and a phantom study and clinical trials were conducted to evaluate its performance. RESULTS The phantom study revealed that the proposed constraints greatly improved the accuracy and robustness of registration. The comparative experiments revealed that the proposed registration method significantly outperformed the scanner-based method and achieved accuracy comparable to the fiducial-based method. In the clinical trials, the average registration duration was 1.24 ± 0.43 min, the target registration error (TRE) of 294 marker points (59 patients) was 1.25 ± 0.40 mm, and the lower 97.5% confidence limit of the success rate of positioning marker points exceeded the expected value (97.56% vs. 95.00%), indicating that the accuracy of the proposed method met the clinical requirements (TRE ⩽ 2 mm, p < 0.05). CONCLUSIONS The proposed method offers both high accuracy and convenience, a combination absent from the scanner-based and fiducial-based methods. Our findings will help improve the quality of endoscopic sinus and skull base surgery.
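None of the entries in this listing includes code. As a rough illustration of the core operation underlying surface-based image-to-patient registration, the following is a minimal NumPy sketch of least-squares rigid alignment via SVD (the classic Kabsch solution), assuming known point correspondences. This is the inner step of an ICP-style loop, not the authors' incremental algorithm with NPD/CR/OR constraints; all names are illustrative.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points.
    Classic SVD (Kabsch) solution; one inner step of an
    ICP-style surface registration loop.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                      # proper rotation (det = +1)
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known rotation + translation from noise-free points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(pts, pts @ R_true.T + t_true)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In practice the correspondences are unknown, so methods like the one above are wrapped in an iterative closest-point loop that alternates nearest-neighbour matching with this closed-form solve.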
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Xiaohong Chen
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
2
Lan K, Tao B, Wang F, Wu Y. Accuracy evaluation of 3D-printed noninvasive adhesive marker for dynamic navigation implant surgery in a maxillary edentulous model: An in vitro study. Med Eng Phys 2022; 103:103783. [PMID: 35500986] [DOI: 10.1016/j.medengphy.2022.103783]
Abstract
Dynamic computer-aided implant surgery (DCAIS) can improve dental implantation accuracy and reduce surgical risks. In the registration procedure of DCAIS, the type and number of registration markers significantly impact accuracy. One problem with DCAIS in clinical application is that only invasive screw markers can be used for implantation in edentulous patients, which can cause additional trauma and scar formation and usually increases patient discomfort. In this experiment, a personalized 3D-printed edentulous maxillary model was used to simulate clinical situations, and a 3D-printed noninvasive adhesive marker (3D-PNAM) was designed to address this problem. Six target screws were implanted into the model's maxillary alveolar ridge as targets for accuracy analysis. The study used target registration error (TRE) as an index to evaluate the accuracy of invasive screw markers and noninvasive adhesive markers. Results showed that 3D-PNAMs achieved the same accuracy as screw markers, and that placing at least six registration markers in the maxilla was needed for good registration accuracy. In future clinical studies, the registration markers should be further refined according to the clinical needs and anatomical characteristics of the application area.
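Several entries in this listing use target registration error (TRE) as the accuracy index. As a generic illustration (not the pipeline of any study above), the following minimal NumPy sketch computes per-target TRE given an already-estimated rigid registration; the toy transform and point values are illustrative.

```python
import numpy as np

def target_registration_error(R, t, targets_image, targets_patient):
    """TRE: residual distance at target points NOT used for registration.

    R, t: rigid transform mapping image space to patient space.
    targets_image / targets_patient: (N, 3) measured target coordinates.
    Returns per-target errors in the same units as the inputs (e.g. mm).
    """
    mapped = targets_image @ R.T + t
    return np.linalg.norm(mapped - targets_patient, axis=1)

# Toy example: identity registration with a 0.5 mm offset at each target.
R, t = np.eye(3), np.zeros(3)
img = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
pat = img + np.array([0.5, 0.0, 0.0])
tre = target_registration_error(R, t, img, pat)
print(tre.mean())  # 0.5
```

Note that TRE is distinct from fiducial registration error (FRE), which is measured at the points used to drive the registration and can be optimistically small.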
Affiliation(s)
- Kengliang Lan
- Graduate student, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, China
- Baoxin Tao
- Graduate student, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, China
- Feng Wang
- Associate Professor, Department of Oral Implantology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, China
- Yiqun Wu
- Professor, Department of Second Dental Center, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, College of Stomatology, Shanghai Jiao Tong University, National Center for Stomatology, National Clinical Research Center for Oral Diseases, Shanghai Key Laboratory of Stomatology, Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, China
3
Augmented reality visualization in brain lesions: a prospective randomized controlled evaluation of its potential and current limitations in navigated microneurosurgery. Acta Neurochir (Wien) 2022; 164:3-14. [PMID: 34904183] [PMCID: PMC8761141] [DOI: 10.1007/s00701-021-05045-1]
Abstract
Background Augmented reality (AR) has the potential to support complex neurosurgical interventions by seamlessly integrating visual information. This study examines intraoperative visualization parameters and the clinical impact of AR in brain tumor surgery. Methods Fifty-five intracranial lesions, operated after randomization either with an AR-navigated microscope (n = 39) or conventional neuronavigation (n = 16), were included prospectively. Surgical resection time, duration/type/mode of AR, displayed objects (n, type), pointer-based navigation checks (n), usability of control, quality indicators, and overall surgical usefulness of AR were assessed. Results The AR display was used during 44.4% of resection time. The predominant AR type was navigation view (75.7%), followed by target volumes (20.1%). The predominant AR mode was picture-in-picture (PiP) (72.5%), followed by overlay display (23.3%). In 43.6% of cases, the view of important anatomical structures was partially or entirely blocked by AR information. A total of 7.7% of cases used MRI navigation only, 30.8% used one, 23.1% used two, and 38.5% used three or more object segmentations in AR navigation. Overall, 66.7% of surgeons found AR visualization helpful in the individual surgical case. AR depth information and accuracy were rated acceptable (median 3.0 vs. median 5.0 for conventional neuronavigation). The mean utilization of the navigation pointer was 2.6×/resection hour (AR) vs. 9.7×/resection hour (neuronavigation); navigation effort was significantly reduced with AR (P < 0.001). Conclusions The main benefit of HUD-based AR visualization in brain tumor surgery is the integrated continuous display allowing for pointer-less navigation. Navigation view (PiP) provides the highest usability while blocking the operative field less frequently. Visualization quality will benefit from improvements in registration accuracy and depth impression. German clinical trials registration number: DRKS00016955.
Supplementary Information The online version contains supplementary material available at 10.1007/s00701-021-05045-1.
4
Li W, Fan J, Li S, Tian Z, Zheng Z, Ai D, Song H, Yang J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front Neurorobot 2021; 15:636772. [PMID: 34054454] [PMCID: PMC8160243] [DOI: 10.3389/fnbot.2021.636772]
Abstract
Three-dimensional scanners have been widely applied in image-guided surgery (IGS) given their potential to solve the image-to-patient registration problem. Reliable calibration between a 3D scanner and an external tracker is especially important for these applications. This study proposes a novel method for calibrating the extrinsic parameters of a 3D scanner in the coordinate system of an optical tracker. We bound an optical marker to a 3D scanner and designed a dedicated 3D benchmark for calibration. We then proposed a two-step calibration method based on a point-set registration technique and a nonlinear optimization algorithm to obtain the extrinsic matrix of the 3D scanner, using the repeat scan registration error (RSRE) as the cost function in the optimization. We evaluated the performance of the proposed method on a recaptured verification dataset through RSRE and Chamfer distance (CD). In comparison with calibration based on a 2D checkerboard, the proposed method achieved a lower RSRE (1.73 mm vs. 2.10, 1.94, and 1.83 mm) and CD (2.83 mm vs. 3.98, 3.46, and 3.17 mm). We also constructed a surgical navigation system to further explore the application of the tracked 3D scanner in image-to-patient registration, and conducted a phantom study to verify the accuracy of the proposed method and analyze the relationship between calibration accuracy and target registration error (TRE). The proposed scanner-based image-to-patient registration method was also compared with the fiducial-based method, with TRE and operation time (OT) used to evaluate the registration results. The proposed method achieved improved registration efficiency (50.72 ± 6.04 vs. 212.97 ± 15.91 s in the head phantom study). Although its TRE met the clinical requirements, its accuracy was lower than that of the fiducial-based method (1.79 ± 0.17 mm vs. 0.92 ± 0.16 mm in the head phantom study). We summarize and analyze the limitations of the scanner-based image-to-patient registration method and discuss its possible development.
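The abstract above uses Chamfer distance (CD) between point clouds as a verification metric. As an illustration only, here is a minimal brute-force NumPy sketch of a symmetric Chamfer distance; conventions vary (squared vs. plain distances, sum vs. mean), and the paper's exact formulation may differ.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3).

    Mean nearest-neighbour distance a->b and b->a, averaged.
    Brute-force pairwise distances; fine for small clouds, use a k-d tree
    for large ones.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example: two unit-spaced clouds offset by 0.1 along x.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
b = a + np.array([0.1, 0.0, 0.0])
print(chamfer_distance(a, b))  # ≈ 0.1
```

Unlike TRE, Chamfer distance needs no known correspondences, which is why it suits the repeat-scan verification setting described above.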
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
5
Wang J, Liu H, Ke J, Hu L, Zhang S, Yang B, Sun S, Guo N, Ma F. Image-guided cochlear access by non-invasive registration: a cadaveric feasibility study. Sci Rep 2020; 10:18318. [PMID: 33110188] [PMCID: PMC7591497] [DOI: 10.1038/s41598-020-75530-7]
Abstract
Image-guided cochlear implant surgery is expected to reduce the volume of mastoidectomy, accelerate recovery, and improve safety. The purpose of this cadaveric study was to investigate the safety and effectiveness of image-guided cochlear implant surgery with a non-invasive registration method. We developed a visual positioning frame that uses the maxillary dentition as a registration tool and completed tunnel experiments on 5 cadaver specimens (8 cases in total). The accuracy at the entry point and the target point was 0.471 ± 0.276 mm and 0.671 ± 0.268 mm, respectively. The shortest distances from the margin of the tunnel to the facial nerve and the ossicular chain were 0.790 ± 0.709 mm and 1.960 ± 0.630 mm, respectively. All facial nerves, tympanic membranes, and ossicular chains were completely preserved. The high accuracy achieved in this preliminary study suggests that the non-invasive registration method can meet the accuracy requirements of cochlear implant surgery. Based on this accuracy, we speculate that the method can also be applied to neurosurgery, orbitofacial surgery, lateral skull base surgery, and anterior skull base surgery with satisfactory accuracy.
Affiliation(s)
- Jiang Wang
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Hongsheng Liu
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Jia Ke
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Lei Hu
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Shaoxing Zhang
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Biao Yang
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Shilong Sun
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China
- Na Guo
- The Robotics Institute, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Furong Ma
- Department of Otorhinolaryngology - Head and Neck Surgery, Peking University Third Hospital, Peking University, No. 49 North Garden Road, Haidian District, Beijing, 100191, China