1
Lee D, Choi A, Mun JH. Deep Learning-Based Fine-Tuning Approach of Coarse Registration for Ear-Nose-Throat (ENT) Surgical Navigation Systems. Bioengineering (Basel) 2024; 11:941. [PMID: 39329683 PMCID: PMC11428421 DOI: 10.3390/bioengineering11090941] [Received: 08/14/2024] [Revised: 09/12/2024] [Accepted: 09/17/2024] [Indexed: 09/28/2024]
Abstract
Accurate registration between medical images and patient anatomy is crucial for surgical navigation systems in minimally invasive surgery. This study introduces a novel deep learning-based refinement step that enhances the accuracy of surface registration without disrupting established workflows. The proposed method inserts a machine learning model between conventional coarse registration and iterative closest point (ICP) fine registration. The deep learning model was trained on simulated anatomical landmarks with introduced localization errors. The model architecture features global feature-based learning, an iterative prediction structure, and independent processing of rotational and translational components. Validation with silicone-masked head phantoms and CT imaging compared the proposed method against both conventional registration and a recent deep learning approach. The results demonstrated significant improvements in target registration error (TRE) across different facial regions and depths. The average TRE of the proposed method (1.58 ± 0.52 mm) was significantly lower than that of the conventional (2.37 ± 1.14 mm) and previous deep learning (2.29 ± 0.95 mm) approaches (p < 0.01). The method showed consistent performance across facial regions and improved registration accuracy in deeper areas. This advancement could significantly enhance precision and safety in minimally invasive surgical procedures.
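The fine-registration stage named above is the classical iterative closest point algorithm. As a point of reference (this is plain ICP, not the authors' deep-learning refinement), a minimal point-to-point ICP with a closed-form SVD update might look like this:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation R and translation t with R @ src_i + t ~ dst_i (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp_point_to_point(src, dst, iters=20):
    """Refine an already-coarse alignment of src onto dst by alternating
    brute-force nearest-neighbour matching with the closed-form update."""
    cur = src.copy()
    for _ in range(iters):
        # pairwise distances; fine for small clouds, use a KD-tree at scale
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        R, t = best_rigid_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur
```

Because ICP only converges to a nearby local minimum, the quality of the coarse (or learned) initialization largely determines the final TRE, which is exactly the gap the refinement step above targets.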
Affiliation(s)
- Dongjun Lee
- Department of Biomechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Ahnryul Choi
- Department of Biomedical Engineering, College of Medicine, Chungbuk National University, Cheongju 28644, Republic of Korea
- Joung Hwan Mun
- Department of Biomechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
2
Sadahiro H, Fujitsuku S, Sugimoto K, Kawano A, Fujii N, Nomura S, Takahashi M, Ishihara H. Bony Surface-Matching Registration of Neuronavigation with Sectioned 3-Dimensional Skull in Prone Position. World Neurosurg 2024; 187:236-242.e1. [PMID: 38750893 DOI: 10.1016/j.wneu.2024.05.028] [Received: 05/02/2024] [Accepted: 05/06/2024] [Indexed: 06/03/2024]
Abstract
BACKGROUND Neuronavigation has become an essential system for brain tumor resection. It is sometimes difficult to obtain accurate registration of the neuronavigation system with the patient in the prone position. Bony surface-matching registration should be more precise than skin surface-matching registration; however, it is difficult to establish with only limited exposed bone. We created a new bony surface-matching method that registers against a sectioned 3-dimensional (3D) virtual skull in the neuronavigation system. In this study, bony surface-matching with sectioned 3D registration was applied to provide precise registration for brain tumor resection in the prone position. METHODS From May 2023 to April 2024, 17 patients who underwent brain tumor resection in the prone position were enrolled. The StealthStation S8 navigation system (Medtronic, Dublin, Ireland) was used. Bony surface-matching registration with a whole 3D skull in the neuronavigation system was performed. Next, a sectioned 3D skull was made according to the surgical location and compared with the whole 3D skull registration. A phantom model was also used to validate the whole and sectioned 3D skull registrations. RESULTS Whole 3D skull registration was successful in only 2 patients (11.8%), whereas sectioned 3D skull registration was successful in 16 patients (94.1%). The examinations with a phantom skull model also showed the superiority of sectioned 3D skull registration over whole 3D skull registration. CONCLUSIONS Sectioned 3D skull registration was superior to whole 3D skull registration and could provide accurate registration with limited exposed bone.
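The central idea, restricting the skull model to the region of exposed bone before surface matching, can be sketched as a point-cloud crop. The spherical region of interest and its parameters below are illustrative assumptions, not the authors' sectioning procedure:

```python
import numpy as np

def section_cloud(model_pts, exposure_center, radius):
    """Keep only the skull-model points within `radius` of the exposed bone
    region, mimicking a 'sectioned' 3D skull for bony surface matching."""
    keep = np.linalg.norm(model_pts - exposure_center, axis=1) <= radius
    return model_pts[keep]
```

Matching only against the sectioned cloud prevents surface points sampled on the small exposed area from being pulled toward distant, irrelevant parts of the whole-skull model.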
Affiliation(s)
- Hirokazu Sadahiro
- Department of Neurosurgery and Clinical Neuroscience, Yamaguchi University School of Medicine, Yamaguchi, Japan
- Shunsuke Fujitsuku
- Department of Neurosurgery and Clinical Neuroscience, Yamaguchi University School of Medicine, Yamaguchi, Japan
- Kazutaka Sugimoto
- Department of Neurosurgery and Clinical Neuroscience, Yamaguchi University School of Medicine, Yamaguchi, Japan
- Akiko Kawano
- Department of Neurosurgery and Clinical Neuroscience, Yamaguchi University School of Medicine, Yamaguchi, Japan
- Natsumi Fujii
- Department of Neurosurgery and Clinical Neuroscience, Yamaguchi University School of Medicine, Yamaguchi, Japan
- Sadahiro Nomura
- Department of Neurosurgery and Clinical Neuroscience, Yamaguchi University School of Medicine, Yamaguchi, Japan
- Masakazu Takahashi
- Graduate School of Innovation of Technology Management, Yamaguchi University, Yamaguchi, Japan
- Hideyuki Ishihara
- Department of Neurosurgery and Clinical Neuroscience, Yamaguchi University School of Medicine, Yamaguchi, Japan
3
Cai D, Wang X, Hu W, Mo J, Liu H, Li X, Zheng X, Ding X, An J, Hua Y, Zhang J, Zhang K, Zhang C. The 3-Dimensional Intelligent Structured Light Technique: A New Registration Method in Stereotactic Neurosurgery. Oper Neurosurg (Hagerstown) 2024:01787389-990000000-01145. [PMID: 38687040 DOI: 10.1227/ons.0000000000001184] [Received: 09/27/2023] [Accepted: 02/28/2024] [Indexed: 05/02/2024]
Abstract
BACKGROUND AND OBJECTIVES Surface-based facial scanning registration has emerged as an essential registration method in robot-assisted neuronavigation surgery, providing a marker-free way to align a patient's facial surface with the imaging data. Three-dimensional (3D) structured light was developed as an advanced registration method built on surface-based facial scanning. We introduce 3D structured light as a new registration method for robot-assisted neurosurgery and assess its accuracy, efficiency, and safety by analyzing the relevant operative results. METHODS We analyzed the results of 47 patients who underwent Ommaya reservoir implantation (n = 17) or stereotactic biopsy (n = 30) assisted by 3D structured light at our hospital from January 2022 to May 2023. The accuracy and additional operative results were analyzed. RESULTS For Ommaya reservoir implantation, the target point error was 3.2 ± 2.2 mm and the entry point error was 3.3 ± 2.4 mm, with an operation duration of 35.8 ± 8.3 minutes. For stereotactic biopsy, the target point error was 2.3 ± 1.3 mm and the entry point error was 2.7 ± 1.2 mm, with an operation duration of 24.5 ± 6.3 minutes. CONCLUSION The 3D structured light technique reduces patient discomfort and offers a simpler procedure, improving clinical efficiency with sufficient accuracy and safety to meet the clinical requirements of puncture and navigation.
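The two accuracy metrics reported above are straightforward Euclidean distances between the planned and achieved points of the trajectory. A minimal sketch (the coordinates below are hypothetical, not study data):

```python
import math

def point_error(planned, actual):
    """Euclidean distance (mm) between a planned and an achieved 3D point,
    as used for both the target point error and the entry point error."""
    return math.dist(planned, actual)

# hypothetical planned vs. achieved trajectory endpoints, in mm
target_err = point_error((10.0, 22.0, 31.0), (11.2, 21.4, 30.1))
entry_err = point_error((2.0, 15.0, 48.0), (2.9, 15.8, 47.5))
```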
Affiliation(s)
- Du Cai
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xiu Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Wenhan Hu
- Department of Neuroelectrophysiology, Beijing Neurosurgical Institute, Beijing, China
- Stereotactic and Functional Neurosurgery Laboratory, Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Jiajie Mo
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Huanguang Liu
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Stereotactic and Functional Neurosurgery Laboratory, Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Xiaoyan Li
- Department of Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xixi Zheng
- Department of Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Xiaosheng Ding
- Department of Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Juan An
- Department of Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yichun Hua
- Department of Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jianguo Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Stereotactic and Functional Neurosurgery Laboratory, Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Kai Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Stereotactic and Functional Neurosurgery Laboratory, Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
- Chao Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Stereotactic and Functional Neurosurgery Laboratory, Beijing Neurosurgical Institute, Capital Medical University, Beijing, China
4
Zhai S, Wei Z, Wu X, Xing L, Yu J, Qian J. Feasibility evaluation of radiotherapy positioning system guided by augmented reality and point cloud registration. J Appl Clin Med Phys 2024; 25:e14243. [PMID: 38229472 PMCID: PMC11005969 DOI: 10.1002/acm2.14243] [Received: 09/20/2023] [Revised: 10/16/2023] [Accepted: 12/04/2023] [Indexed: 01/18/2024]
Abstract
PURPOSE To develop a radiotherapy positioning system based on point cloud registration (PCR) and augmented reality (AR), and to verify its feasibility. METHODS The optimal steps of PCR were investigated, and virtual positioning experiments were designed to evaluate its accuracy and speed. AR was implemented with Unity 3D and Vuforia for initial position correction, followed by PCR for precision registration, to realize the proposed radiotherapy positioning system. Feasibility was evaluated through phantom positioning tests as well as real human positioning tests; the latter comprised breath-holding and free-breathing tests. Evaluation metrics included six degree-of-freedom (DOF) deviations and distance (D) errors. Additionally, the interaction between CBCT and the proposed system was envisaged through CBCT-optical cross-source PCR. RESULTS Point-to-plane iterative closest point (ICP), statistical filtering, uniform down-sampling, and the optimal sampling ratio were determined for the PCR procedure. In the virtual positioning tests, a single registration took only 0.111 s, and the average D error for 15 patients was 0.015 ± 0.029 mm. Errors in the phantom tests were at the sub-millimeter level, with an average D error of 0.6 ± 0.2 mm. In the real human positioning tests, the average accuracy of breath-holding positioning remained at the sub-millimeter level: the errors on the X, Y, and Z axes were 0.59 ± 0.12 mm, 0.54 ± 0.12 mm, and 0.52 ± 0.09 mm, and the average D error was 0.96 ± 0.22 mm. In free-breathing positioning, the average errors on the X, Y, and Z axes were still less than 1 mm; although the mean D error was 1.93 ± 0.36 mm, it falls within a clinically acceptable margin. CONCLUSION The AR- and PCR-guided radiotherapy positioning system enables markerless, radiation-free, and high-accuracy positioning, and is feasible in real-world scenarios.
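Of the PCR steps listed in the results, the point-to-plane ICP update is the least obvious. A minimal linearized sketch of a single update, with correspondences and surface normals assumed already available (the full pipeline, with filtering and down-sampling, is not reproduced here):

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP update: minimize
    sum_i (((src_i + w x src_i + t) - dst_i) . n_i)^2 over a small
    rotation vector w and translation t, then apply the motion to src."""
    A = np.hstack([np.cross(src, normals), normals])  # N x 6 design matrix
    b = np.einsum('ij,ij->i', dst - src, normals)     # signed distance along n_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    w, t = x[:3], x[3:]
    # small-angle motion: p -> p + w x p + t
    return src + np.cross(w, src) + t
```

Minimizing distance along the normal rather than point-to-point distance lets flat surface patches slide tangentially, which is why point-to-plane ICP typically converges in far fewer iterations on body-surface clouds.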
Affiliation(s)
- Shaozhuang Zhai
- School of Basic Medical Sciences, Anhui Medical University, Hefei, P.R. China
- Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei, P.R. China
- Ziwen Wei
- Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei, P.R. China
- Xiaolong Wu
- Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei, P.R. China
- Ligang Xing
- Department of Radiation Oncology, School of Medicine, Shandong University, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Jinming Yu
- Department of Radiation Oncology, School of Medicine, Shandong University, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Junchao Qian
- School of Basic Medical Sciences, Anhui Medical University, Hefei, P.R. China
- Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei, P.R. China
- Department of Radiation Oncology, School of Medicine, Shandong University, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
5
Watanabe G, Conching A, Nishioka S, Steed T, Matsunaga M, Lozanoff S, Noh T. Themes in neuronavigation research: A machine learning topic analysis. World Neurosurg X 2023; 18:100182. [PMID: 37013107 PMCID: PMC10066551 DOI: 10.1016/j.wnsx.2023.100182] [Received: 11/12/2022] [Revised: 02/22/2023] [Accepted: 03/16/2023] [Indexed: 03/19/2023]
Abstract
Objective To understand trends in neuronavigation, we employed machine learning methods to perform a broad literature review that would be impractical by manual inspection. Methods PubMed was queried for articles with "Neuronavigation" in any field from inception to 2020. Articles were designated neuronavigation-focused (NF) if "Neuronavigation" was a major MeSH term. The latent Dirichlet allocation topic modeling technique was used to identify themes of NF research. Results There were 3896 articles, of which 1727 (44%) were designated NF. Between 1999-2009 and 2010-2020, the number of NF publications experienced 80% growth; between 2009-2014 and 2015-2020, there was a 0.3% decline. Eleven themes covered 1367 (86%) NF articles. "Resection of Eloquent Lesions" comprised the highest number of articles (243), followed by "Accuracy and Registration" (242), "Patient Outcomes" (156), "Stimulation and Mapping" (126), "Planning and Visualization" (123), "Intraoperative Tools" (104), "Placement of Ventricular Catheters" (86), "Spine Surgery" (85), "New Systems" (80), "Guided Biopsies" (61), and "Surgical Approach" (61). All topics except "Planning and Visualization", "Intraoperative Tools", and "New Systems" exhibited a monotonic positive trend. When analyzing subcategories, clinical assessments or usage of existing neuronavigation systems (77%) outnumbered modification or development of new apparatuses (18%). Conclusion NF research appears to focus on the clinical assessment of neuronavigation and, to a lesser extent, on the development of new systems. Although neuronavigation has made significant strides, NF research output appears to have plateaued in the last decade.
6
Li W, Fan J, Li S, Zheng Z, Tian Z, Ai D, Song H, Chen X, Yang J. An incremental registration method for endoscopic sinus and skull base surgery navigation: From phantom study to clinical trials. Med Phys 2023; 50:226-239. [PMID: 35997999 DOI: 10.1002/mp.15941] [Received: 11/18/2021] [Revised: 06/30/2022] [Accepted: 08/02/2022] [Indexed: 01/27/2023]
Abstract
PURPOSE Surface-based image-to-patient registration in current surgical navigation is mainly achieved with a 3D scanner, which has several limitations in clinical practice, such as an uncontrollable scanning range, complicated operation, and even a high failure rate. An accurate, robust, and easy-to-perform image-to-patient registration method is urgently required. METHODS An incremental point cloud registration method was proposed for surface-based image-to-patient registration. The point cloud in image space was extracted from the computed tomography (CT) image, and a template matching method was applied to remove redundant points. The corresponding point cloud in patient space was incrementally collected with an optically tracked pointer, while a nearest point distance (NPD) constraint was applied to ensure the uniformity of the collected points. A coarse-to-fine registration method under coverage ratio (CR) and outliers ratio (OR) constraints was then proposed to obtain the optimal rigid transformation from image to patient space. The proposed method was integrated into the recently developed endoscopic navigation system, and a phantom study and clinical trials were conducted to evaluate its performance. RESULTS The phantom study revealed that the proposed constraints greatly improved the accuracy and robustness of registration. The comparative experiments revealed that the proposed registration method significantly outperformed the scanner-based method and achieved accuracy comparable to the fiducial-based method. In the clinical trials, the average registration duration was 1.24 ± 0.43 min, the target registration error (TRE) of 294 marker points (59 patients) was 1.25 ± 0.40 mm, and the lower 97.5% confidence limit of the success rate of positioning marker points exceeded the expected value (97.56% vs. 95.00%), indicating that the accuracy of the proposed method met the clinical requirements (TRE ⩽ 2 mm, p < 0.05). CONCLUSIONS The proposed method offers both high accuracy and convenience, advantages that are not combined in either the scanner-based or the fiducial-based method. Our findings will help improve the quality of endoscopic sinus and skull base surgery.
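The NPD constraint described in the methods amounts to rejecting any newly collected pointer sample that lies too close to the points already gathered, which keeps the acquired cloud spatially uniform. A minimal sketch (the threshold value is an illustrative assumption):

```python
import numpy as np

def accept_point(new_pt, collected, npd_min):
    """Nearest point distance (NPD) constraint: accept a newly sampled
    surface point only if it is at least npd_min from every point
    collected so far."""
    if len(collected) == 0:
        return True
    d = np.linalg.norm(np.asarray(collected) - new_pt, axis=1)
    return float(d.min()) >= npd_min

# incremental collection demo: the middle sample is rejected as redundant
collected = []
for p in [[0.0, 0.0, 0.0], [0.2, 0.0, 0.0], [1.0, 0.0, 0.0]]:
    if accept_point(np.array(p), collected, npd_min=0.5):
        collected.append(np.array(p))
```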
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Xiaohong Chen
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
7
Yoo H, Sim T. Automated Machine Learning (AutoML)-based Surface Registration Methodology for Image-guided Surgical Navigation System. Med Phys 2022; 49:4845-4860. [PMID: 35543150 DOI: 10.1002/mp.15696] [Received: 10/20/2021] [Revised: 04/05/2022] [Accepted: 04/19/2022] [Indexed: 11/06/2022]
Abstract
BACKGROUND While the surface registration technique has the advantages of being relatively safe and requiring a short operation time, it generally suffers from low accuracy. PURPOSE This research proposes automated machine learning (AutoML)-based surface registration to improve the accuracy of image-guided surgical navigation systems. METHODS The proposed surface registration concept is as follows: first, using a neural network model, a new point cloud that matches the facial information acquired by a passive probe of an optical tracking system (OTS) is extracted from the facial information obtained by computed tomography (CT). The target registration error (TRE), representing the accuracy of surface registration, is then calculated by applying the iterative closest point (ICP) algorithm to the newly extracted point cloud and the OTS information. In this process, the hyperparameters used in the neural network model and the ICP algorithm are automatically optimized using Bayesian optimization with expected improvement to yield improved registration accuracy. RESULTS Using the proposed surface registration methodology, the average TRE for targets located in the sinus space and nasal cavity of the soft phantoms was 0.939 ± 0.375 mm, a 57.8% improvement over the average TRE of 2.227 ± 0.193 mm obtained with the conventional surface registration method (p < 0.01). In a further evaluation of the methodology, the average TREs computed by the proposed methodology and the conventional method were 0.767 ± 0.132 mm and 2.615 ± 0.378 mm, respectively. Additionally, for one healthy adult, the clinical applicability of AutoML-based surface registration is also presented. CONCLUSION Our findings showed that registration accuracy can be improved while maintaining the advantages of the surface registration technique.
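The acquisition function driving the hyperparameter search above, expected improvement under a Gaussian surrogate posterior, has a closed form. A minimal sketch for a minimization objective (the surrogate model itself, e.g. a Gaussian process, is omitted):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected Improvement for minimization: given the surrogate's posterior
    mean mu and std sigma at a candidate hyperparameter setting, and the best
    objective value f_best observed so far, return the expected reduction."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # N(0,1) density
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # N(0,1) CDF
    return (f_best - mu) * cdf + sigma * pdf
```

The next candidate is the setting that maximizes this quantity, trading off exploiting low predicted TRE against exploring uncertain regions of the hyperparameter space.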
Affiliation(s)
- Hakje Yoo
- Korea University Research Institute for Medical Bigdata Science, College of Medicine, Korea University, 73 Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Taeyong Sim
- Department of Artificial Intelligence, Sejong University, 209, Neungdong-ro, Gwangjin-gu, Seoul, 05006, Republic of Korea
8
Li W, Fan J, Li S, Tian Z, Ai D, Song H, Yang J. Homography-based robust pose compensation and fusion imaging for augmented reality based endoscopic navigation system. Comput Biol Med 2021; 138:104864. [PMID: 34634638 DOI: 10.1016/j.compbiomed.2021.104864] [Received: 07/26/2021] [Revised: 08/23/2021] [Accepted: 09/09/2021] [Indexed: 11/17/2022]
Abstract
BACKGROUND Augmented reality (AR)-based fusion imaging in endoscopic surgery relies on the quality of image-to-patient registration and camera calibration, and these two offline steps are usually performed independently to obtain the target transformations separately. An optimal solution can be obtained under independent conditions but may not be globally optimal; all residual errors accumulate and eventually lead to inaccurate AR fusion. METHODS After a careful analysis of the principle of AR imaging, a robust online calibration framework was proposed for an endoscopic camera to enable accurate AR fusion. A 2D checkerboard-based homography estimation algorithm was proposed to estimate the local pose of the endoscopic camera, and the least squares method was used to calculate the compensation matrix in combination with the optical tracking system. RESULTS In comparison with conventional methods, the proposed compensation method improved the performance of AR fusion, reducing physical error by up to 82%, reducing pixel error by up to 83%, and improving target coverage by up to 6%. Experiments simulating mechanical noise revealed that the proposed compensation method effectively corrected the fusion errors caused by rotation of the endoscopic tube without recalibrating the camera. Furthermore, the simulation results demonstrated the robustness of the proposed compensation method to noise. CONCLUSIONS Overall, the experimental results proved the effectiveness of the proposed compensation method and online calibration framework, and revealed considerable potential for clinical practice.
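Checkerboard-based homography estimation of the kind mentioned in the methods is typically done with the direct linear transform (DLT). A minimal sketch (plain DLT from point correspondences only, without the paper's compensation matrix or optical-tracking integration):

```python
import numpy as np

def estimate_homography(pts_src, pts_dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    pts_dst ~ H @ pts_src (homogeneous) from >= 4 correspondences,
    e.g. detected checkerboard corners, via the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)   # right singular vector of smallest singular value
    return H / H[2, 2]         # fix the projective scale
```

In practice the pixel coordinates are normalized before building the design matrix to condition the SVD; that standard refinement is omitted here for brevity.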
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
9
The status of medical physics in radiotherapy in China. Phys Med 2021; 85:147-157. [PMID: 34010803 DOI: 10.1016/j.ejmp.2021.05.007] [Received: 11/10/2020] [Revised: 05/01/2021] [Accepted: 05/03/2021] [Indexed: 01/09/2023]
Abstract
PURPOSE To present an overview of the status of medical physics in radiotherapy in China, including facilities and devices, occupation, education, and research. MATERIALS AND METHODS The information about medical physics in clinics was obtained from the 9th nationwide survey conducted by the China Society for Radiation Oncology in 2019. The data on medical physics in education and research were collected from the publications of official and professional organizations. RESULTS By 2019, there were 1463 hospitals or institutes registered to practice radiotherapy, and the number of accelerators per million population was 1.5. There were 4172 medical physicists working in radiation oncology clinics; the ratio between the numbers of radiation oncologists and medical physicists was 3.51. Approximately 95% of medical physicists have undergraduate or graduate degrees in nuclear physics or biomedical engineering, and 86% have certificates issued by the Chinese Society of Medical Physics. There has been fast growth in publications by authors from mainland China in the top international medical physics and radiotherapy journals since 2018. CONCLUSIONS Demand for medical physicists in radiotherapy increased quickly in the past decade, and the distribution of radiotherapy facilities in China became more balanced. However, high-quality continuing education and training programs for medical physicists are deficient in most areas, the role of medical physicists in the clinic has not been clearly defined, and their contributions have not been fully recognized by the community.
10
Li W, Fan J, Li S, Tian Z, Zheng Z, Ai D, Song H, Yang J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front Neurorobot 2021; 15:636772. [PMID: 34054454 PMCID: PMC8160243 DOI: 10.3389/fnbot.2021.636772] [Received: 12/02/2020] [Accepted: 04/13/2021] [Indexed: 11/13/2022]
Abstract
Three-dimensional scanners have been widely applied in image-guided surgery (IGS) given their potential to solve the image-to-patient registration problem. Performing a reliable calibration between a 3D scanner and an external tracker is especially important for these applications. This study proposes a novel method for calibrating the extrinsic parameters of a 3D scanner in the coordinate system of an optical tracker. We bound an optical marker to a 3D scanner and designed a dedicated 3D benchmark for calibration. We then proposed a two-step calibration method based on the pointset registration technique and a nonlinear optimization algorithm to obtain the extrinsic matrix of the 3D scanner, applying the repeat scan registration error (RSRE) as the cost function in the optimization process. Subsequently, we evaluated the performance of the proposed method on a recaptured verification dataset through RSRE and Chamfer distance (CD). In comparison with the calibration method based on a 2D checkerboard, the proposed method achieved a lower RSRE (1.73 mm vs. 2.10, 1.94, and 1.83 mm) and CD (2.83 mm vs. 3.98, 3.46, and 3.17 mm). We also constructed a surgical navigation system to further explore the application of the tracked 3D scanner in image-to-patient registration. A phantom study was conducted to verify the accuracy of the proposed method and analyze the relationship between the calibration accuracy and the target registration error (TRE). The proposed scanner-based image-to-patient registration method was also compared with the fiducial-based method, with TRE and operation time (OT) used to evaluate the registration results. The proposed registration method achieved improved registration efficiency (50.72 ± 6.04 vs. 212.97 ± 15.91 s in the head phantom study). Although the TRE of the proposed registration method met the clinical requirements, its accuracy was lower than that of the fiducial-based registration method (1.79 ± 0.17 mm vs. 0.92 ± 0.16 mm in the head phantom study). We summarize the limitations of the scanner-based image-to-patient registration method and discuss its possible development.
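Of the two evaluation metrics named above, the Chamfer distance is easy to state precisely: it sums the average nearest-neighbour distance in each direction between two clouds. A brute-force sketch suitable for small clouds:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (N x 3) and Q (M x 3):
    mean nearest-neighbour distance P->Q plus mean nearest-neighbour
    distance Q->P, as used alongside RSRE to score the calibration."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # N x M distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

The O(NM) distance matrix makes this quadratic in cloud size; real scanner clouds would be compared with a KD-tree instead.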
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China