1
Mancino AV, Milano FE, Risk MR, Ritacco LE. Open-source navigation system for tracking dissociated parts with multi-registration. Int J Comput Assist Radiol Surg 2023; 18:2167-2177. [PMID: 36881354] [DOI: 10.1007/s11548-023-02853-x]
Abstract
PURPOSE During reconstructive surgery, knee and hip replacement, and orthognathic surgery, small misalignments in the pose of prostheses and bones can lead to severe complications, so translational and angular accuracy is critical. However, traditional image-based surgical navigation lacks orientation data between structures, and imageless systems are unsuitable for cases of deformed anatomy. We introduce an open-source navigation system using a multiple-registration approach that can track instruments, implants, and bones to precisely guide the surgeon in emulating a preoperative plan. METHODS We derived the analytical error of our method and designed a set of phantom experiments to measure its precision and accuracy. Additionally, we trained two classification models to predict system reliability from fiducial-point and surface-matching registration data. Finally, to demonstrate the feasibility of the procedure, we conducted a complete workflow on plastic bones for a real clinical case of a patient with fibrous dysplasia and anatomical misalignment of the right femur. RESULTS The system was able to track the dissociated fragments of the clinical case, with average alignment errors in the anatomical phantoms of [Formula: see text] mm and [Formula: see text]. While the fiducial-points registration showed satisfactory results given enough points and covered volume, the surface refinement step is mandatory when attempting surface-matching registrations. CONCLUSION We believe that our device could bring significant advantages for the personalized treatment of complex surgical cases and that its multi-registration capability is convenient for cases of intraoperative registration loosening.
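The fiducial-points registration described above reduces to a rigid paired-point alignment. A minimal sketch of the standard SVD-based (Kabsch) solution with a fiducial registration error (FRE) helper — an illustration of the generic technique, not the authors' implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    over paired fiducial points (rows of src and dst correspond)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fre(src, dst, R, t):
    """Fiducial registration error: RMS residual after alignment."""
    res = (R @ src.T).T + t - dst
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

In a navigation context, a small FRE over well-spread fiducials is the usual sanity check before trusting the transform, which matches the abstract's point that enough points and covered volume are needed.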
Affiliation(s)
- A V Mancino
- Instituto Tecnológico de Buenos Aires, Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- Instituto de Medicina Traslacional e Ingeniería Biomédica, Buenos Aires, Argentina
- Computer Assisted Surgery Unit, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
- F E Milano
- Instituto Tecnológico de Buenos Aires, Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- M R Risk
- Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- Instituto de Medicina Traslacional e Ingeniería Biomédica, Buenos Aires, Argentina
- L E Ritacco
- Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- Instituto de Medicina Traslacional e Ingeniería Biomédica, Buenos Aires, Argentina
- Computer Assisted Surgery Unit, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina
2
Fan X, Zhu Q, Tu P, Joskowicz L, Chen X. A review of advances in image-guided orthopedic surgery. Phys Med Biol 2023; 68. [PMID: 36595258] [DOI: 10.1088/1361-6560/acaae9]
Abstract
Orthopedic surgery remains technically demanding due to complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased surgical risk and improved operative results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR) and robotics to image-guided spine surgery, joint arthroplasty, fracture reduction and bone tumor resection. For the pre-operative stage, key technologies of AI- and DL-based medical image segmentation, 3D visualization and surgical planning are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration and real-time navigation is reviewed. Furthermore, the combination of surgical navigation systems with AR and robotic technology is also discussed. Finally, the current issues and prospects of IGOS systems are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers and researchers involved in the research and development of this area.
Affiliation(s)
- Xingqi Fan
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Qiyang Zhu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Leo Joskowicz
- School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
3
Shi J, Liu S, Zhu Z, Deng Z, Bian G, He B. Augmented reality for oral and maxillofacial surgery: The feasibility of a marker-free registration method. Int J Med Robot 2022; 18:e2401. [DOI: 10.1002/rcs.2401]
Affiliation(s)
- Jiafeng Shi
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou, China
- Shaofeng Liu
- Department of Oral and Maxillofacial Surgery, The First Affiliated Hospital, Laboratory of Facial Plastic and Reconstruction, Fujian Medical University, Fuzhou, China
- Zhaoju Zhu
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou, China
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Zhen Deng
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou, China
- Guibin Bian
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Bingwei He
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Engineering Research Center of Joint Intelligent Medical Engineering, Fuzhou, China
4
Chen Z, Wang Y, Li X, Wang K, Li Z, Yang P. An automatic measurement system of distal femur morphological parameters using 3D slicer software. Bone 2022; 156:116300. [PMID: 34958998] [DOI: 10.1016/j.bone.2021.116300]
Abstract
In the field of joint surgery, the computer-aided design of knee prostheses suitable for the Chinese population requires a large quantity of anatomical knee data. In this study, we propose a new method that uses 3D Slicer software to automatically measure the morphological parameters of the distal femur. First, 141 femur samples were segmented from CT data to establish the femoral shape library. Next, balanced iterative reducing and clustering using hierarchies (BIRCH) combined with iterative closest point (ICP) and generalised procrustes analysis (GPA) were used to achieve fast registration of the femur samples. The statistical model was automatically calculated from the registered femur samples, and an orthopaedic surgeon marked the points on the statistical model. Finally, we developed an automatic measurement system using 3D Slicer software, and a deformable model matching method was applied to establish the point correspondence between the statistical model and the other samples. By matching points on the statistical model to corresponding points in other samples, we measured all other samples. We marked six points and measured eight parameters. We evaluated the performance of automatic matching by comparing the points marked manually with those matched automatically and verified the accuracy of the system by comparing the manual and automatic measurement results. The results indicated that the average error of the automatic matching points was 1.03 mm, and the average length error and average angle error measured automatically by the system were 0.37 mm and 0.63°, respectively. These errors were smaller than the intra-rater and inter-rater errors measured manually by two different surgeons, which showed that the accuracy of our automatic method was high. 
Taken together, this study established an accurate and automatic measurement system for the distal femur based on the secondary development of 3D Slicer software, to assist orthopaedic surgeons in completing large-scale measurements and to further improve the design of knee prostheses specific to the Chinese population.
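Once landmark correspondence is established as described above, each length and angle parameter reduces to vector arithmetic on matched points. A generic sketch of such measurements — the actual landmark and parameter definitions of the system are not reproduced here:

```python
import numpy as np

def length_mm(p, q):
    """Euclidean distance between two landmark points (units follow the input, e.g. mm)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

def angle_deg(a, vertex, b):
    """Angle at `vertex` formed by landmark points a and b, in degrees."""
    u = np.asarray(a, float) - np.asarray(vertex, float)
    v = np.asarray(b, float) - np.asarray(vertex, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip to avoid NaN from floating-point values just outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Comparing such automatically derived values against manual measurements (as the study does) then needs only mean absolute differences over the sample set.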
Affiliation(s)
- Zhen Chen
- College of Computer Science, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121, PR China
- Yagang Wang
- College of Computer Science, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121, PR China
- Xinghua Li
- Department of Radiology, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
- Kunzheng Wang
- Department of Bone and Joint Surgery, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
- Zhe Li
- Department of Bone and Joint Surgery, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
- Pei Yang
- Department of Bone and Joint Surgery, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
5
3D-Slicer Software-Assisted Neuroendoscopic Surgery in the Treatment of Hypertensive Cerebral Hemorrhage. Comput Math Methods Med 2022; 2022:7156598. [PMID: 35222690] [PMCID: PMC8881139] [DOI: 10.1155/2022/7156598]
Abstract
Objective. To explore 3D-Slicer software-assisted neuroendoscopic treatment for patients with hypertensive cerebral hemorrhage. Methods. A total of 120 patients with hypertensive cerebral hemorrhage were selected and randomly divided into a control group and a 3D-Slicer group of 60 cases each. Patients in the control group underwent craniotomy with traditional imaging-based positioning, and patients in the 3D-Slicer group underwent 3D-Slicer-guided precision puncture treatment. We evaluated the hematoma clearance rate, nerve function, ability of daily living, complication rate, and prognosis. Results. The 3D-Slicer group was better than the control group on all indicators: compared with the control group, it had fewer complications, a slightly higher hematoma clearance rate, better recovery of nerve function and daily living ability before and after surgery, and a lower incidence of poor prognosis. Conclusion. 3D-Slicer software-assisted neuroendoscopic treatment of hypertensive intracerebral hemorrhage achieves better hematoma clearance, which benefits the patient's early recovery and reduces damage to the patient's brain nerves.
6
Yang R, Li C, Tu P, Ahmed A, Ji T, Chen X. Development and Application of Digital Maxillofacial Surgery System Based on Mixed Reality Technology. Front Surg 2022; 8:719985. [PMID: 35174201] [PMCID: PMC8841731] [DOI: 10.3389/fsurg.2021.719985]
Abstract
Objective To realize three-dimensional visual output of surgical navigation information by cross-linking a mixed reality display device with a high-precision optical navigator. Methods A quaternion-based point-alignment algorithm was applied to realize the positioning configuration of the mixed reality display device and the high-precision optical navigator, together with real-time patient tracking and calibration; a mixed reality surgical system based on visual positioning and tracking was developed using an open-source SDK and development tools. In this study, four patients were selected for mixed reality-assisted tumor resection and reconstruction and were re-examined 1 month after the operation. We reconstructed postoperative CT, used 3DMeshMetric to form the error distribution map, and completed the error analysis and quality control. Results We realized the cross-linking of the mixed reality display equipment and the high-precision optical navigator, developed a digital maxillofacial surgery system based on mixed reality technology, and successfully implemented mixed reality-assisted tumor resection and reconstruction in 4 cases. Conclusions The maxillofacial digital surgery system based on mixed reality technology can superimpose and display three-dimensional navigation information in the surgeon's field of vision, and it solves the visual-conversion and space-conversion problems of existing navigation systems. It improves the efficiency of digitally assisted surgery, effectively reduces the surgeon's dependence on spatial experience and imagination, and protects important anatomical structures during surgery. It has significant clinical application value and potential.
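Quaternion-based point alignment of the kind mentioned above is commonly formulated as Horn's closed-form absolute-orientation method: build a 4x4 matrix from the cross-covariance of the paired points and take the eigenvector of its largest eigenvalue as the rotation quaternion. A sketch under that assumption — the paper's exact variant may differ:

```python
import numpy as np

def horn_quaternion_align(src, dst):
    """Horn's closed-form absolute orientation: returns the unit quaternion
    (w, x, y, z), rotation matrix R and translation t aligning paired points
    src -> dst."""
    sc, dc = src - src.mean(0), dst - dst.mean(0)
    S = sc.T @ dc                                 # cross-covariance, S[a, b] = sum src_a * dst_b
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],        S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],       -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],        S[1,2]+S[2,1],       -S[0,0]-S[1,1]+S[2,2]],
    ])
    vals, vecs = np.linalg.eigh(N)
    q = vecs[:, -1]                               # eigenvector of the largest eigenvalue
    w, x, y, z = q
    R = np.array([                                # rotation matrix from the unit quaternion
        [1-2*(y*y+z*z), 2*(x*y-z*w),   2*(x*z+y*w)],
        [2*(x*y+z*w),   1-2*(x*x+z*z), 2*(y*z-x*w)],
        [2*(x*z-y*w),   2*(y*z+x*w),   1-2*(x*x+y*y)],
    ])
    t = dst.mean(0) - R @ src.mean(0)
    return q, R, t
```

The quaternion form is popular in navigation because it is closed-form, numerically stable, and cannot return a reflection.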
Affiliation(s)
- Rong Yang
- Shanghai Key Laboratory of Stomatology/Shanghai Institute of Stomatology, Department of Oral and Maxillofacial Head and Neck Oncology, National Clinical Research Center for Oral Diseases, School of Medicine, The Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Chenyao Li
- Shanghai Key Laboratory of Stomatology/Shanghai Institute of Stomatology, Department of Oral and Maxillofacial Head and Neck Oncology, National Clinical Research Center for Oral Diseases, School of Medicine, The Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
- Puxun Tu
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Abdelrehem Ahmed
- Department of Craniomaxillofacial and Plastic Surgery, Faculty of Dentistry, Alexandria University, Alexandria, Egypt
- Tong Ji
- Shanghai Key Laboratory of Stomatology/Shanghai Institute of Stomatology, Department of Oral and Maxillofacial Head and Neck Oncology, National Clinical Research Center for Oral Diseases, School of Medicine, The Ninth People's Hospital, Shanghai Jiao Tong University, Shanghai, China
- *Correspondence: Tong Ji
- Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
7
Zhang R, Chung ACS. MedQ: Lossless ultra-low-bit neural network quantization for medical image segmentation. Med Image Anal 2021; 73:102200. [PMID: 34416578] [DOI: 10.1016/j.media.2021.102200]
Abstract
Implementing deep convolutional neural networks (CNNs) with boolean arithmetic is ideal for eliminating the notoriously high computational expense of deep learning models. However, although lossless model compression via weight-only quantization has been achieved in previous works, it remains an open problem how to reduce the computation precision of CNNs without losing performance, especially for medical image segmentation tasks where data dimensionality is high and annotation is scarce. This paper presents a novel CNN quantization framework that can squeeze a deep model (both parameters and activations) to extremely low bitwidth, e.g., 1∼2 bits, while maintaining its high performance. In the new method, we first design a strong baseline quantizer with an optimizable quantization range. Then, to relieve the back-propagation difficulty caused by the discontinuous quantization function, we design a radical residual connection scheme that allows gradients to flow through every quantized layer freely. Moreover, a tanh-based derivative function is used to further boost gradient flow, and a distributional loss is employed to regularize the model output. Extensive experiments and ablation studies are conducted on two well-established public 3D segmentation datasets, i.e., BRATS2020 and LiTS. Experimental results show that our framework not only outperforms state-of-the-art quantization approaches significantly, but also achieves lossless performance on both datasets with ternary (2-bit) quantization.
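For intuition, ternary (2-bit) quantization of the kind discussed above maps each value to {−α, 0, +α}. A generic sketch with a fixed threshold of α/2 — the paper's quantizer instead learns its quantization range, which is not reproduced here:

```python
import numpy as np

def ternary_quantize(w, alpha):
    """Map each entry of w to {-alpha, 0, +alpha}, thresholding at alpha/2.
    During training, the non-differentiable rounding is typically bypassed with
    a straight-through estimator so gradients can reach the latent weights."""
    q = np.zeros_like(w, dtype=float)
    q[w > alpha / 2] = alpha
    q[w < -alpha / 2] = -alpha
    return q
```

The payoff of such quantization is that multiply-accumulates against ternary weights reduce to additions, subtractions and skips, which is what makes the "boolean arithmetic" framing in the abstract possible.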
Affiliation(s)
- Rongzhao Zhang
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
- Albert C S Chung
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
8
A hybrid feature-based patient-to-image registration method for robot-assisted long bone osteotomy. Int J Comput Assist Radiol Surg 2021; 16:1507-1516. [PMID: 34176070] [DOI: 10.1007/s11548-021-02439-5]
Abstract
PURPOSE The purpose of this study is to provide a simple, feasible and effective patient-to-image registration method for robot-assisted long bone osteotomy, which has rarely been systematically reported. The practical requirement is to meet an accuracy of 1 mm or better without bone-implanted markers. METHODS A hybrid feature-based registration method termed CR-RAMSICP is proposed. Point-based coarse registration (CR) is accomplished using the optical retro-reflective markers attached to a tracked rigid body fixed outside the bone. In surface-based fine registration, an improved iterative closest point (ICP) algorithm based on a range-adaptive matching strategy (termed RAMSICP) is presented to achieve robust, precise matching between the asymmetric patient and image point clouds, avoiding convergence to a local minimum. RESULTS A series of registration experiments based on isolated porcine ilia were carried out. The results illustrate that CR-RAMSICP not only significantly outperforms CR and CR-ICP in accuracy and reproducibility, but also exhibits better robustness to CR errors and lower sensitivity to the distribution and number of fiducial points in the patient point cloud than CR-ICP. CONCLUSION The proposed registration method, CR-RAMSICP, can stably satisfy the desired registration accuracy without the use of bone-implanted markers such as fiducial screws. Besides, the RAMSICP algorithm used in fine registration is convenient to program because no complex metrics or models are involved.
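The fine-registration step above builds on the classic point-to-point ICP loop: alternate nearest-neighbour correspondence with a closed-form rigid alignment. A basic sketch of that loop — the range-adaptive matching strategy that distinguishes RAMSICP is an additional filter on the correspondences and is not reproduced here:

```python
import numpy as np

def icp(src, dst, iters=30):
    """Basic point-to-point ICP: alternate nearest-neighbour matching with a
    closed-form (SVD/Kabsch) rigid alignment step. Returns src moved onto dst."""
    cur = src.astype(float).copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds; use a KD-tree at scale)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid alignment of cur onto its matched points
        cc, mc = cur - cur.mean(0), matched - matched.mean(0)
        U, _, Vt = np.linalg.svd(cc.T @ mc)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        cur = (R @ cc.T).T + matched.mean(0)
    return cur
```

This loop converges only locally, which is exactly why the paper pairs it with a point-based coarse registration: ICP needs a reasonable initial pose to find the right correspondences.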
9
Gao Y, Qin C, Tao B, Hu J, Wu Y, Chen X. An electromagnetic tracking implantation navigation system in dentistry with virtual calibration. Int J Med Robot 2021; 17:e2215. [PMID: 33369868] [DOI: 10.1002/rcs.2215]
Abstract
BACKGROUND Dental implant placement navigation systems based on optical tracking have been widely used in clinics. However, an electromagnetic (EM) navigation method, which does not suffer from line-of-sight occlusion, has not yet been described. METHODS This work proposes an EM-guided navigation method with virtual calibration, named TianShu-ESNS. Model (12 implants) and animal (pig head: six implants) experiments were conducted to evaluate its performance and stability. RESULTS The mean virtual calibration error was 0.83 ± 0.20 mm. The mean deviations at the entry point, end point and angle in the phantom experiment of TianShu-ESNS were 1.23 ± 0.17 mm, 1.59 ± 0.20 mm and 1.83 ± 0.27°, respectively. In the animal experiment, the corresponding deviations were 1.25 ± 0.07 mm, 1.57 ± 0.35 mm and 1.90 ± 0.60°, respectively. CONCLUSIONS The experimental results show that TianShu-ESNS with the virtual calibration method could serve as a promising tool to eliminate the line-of-sight occlusion problem and simplify the operation procedure in dental implant placement.
Affiliation(s)
- Yao Gao
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chunxia Qin
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Baoxin Tao
- Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Junlei Hu
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yiqun Wu
- Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
10
Secondary development based on 3D Slicer extension modules. Journal of Complexity in Health Sciences 2020. [DOI: 10.21595/chs.2020.21267]
11
Wu Y, Wang F, Huang W, Fan S. Real-Time Navigation in Zygomatic Implant Placement. Oral Maxillofac Surg Clin North Am 2019; 31:357-367. [DOI: 10.1016/j.coms.2019.03.001]
12
The development of non-contact user interface of a surgical navigation system based on multi-LSTM and a phantom experiment for zygomatic implant placement. Int J Comput Assist Radiol Surg 2019; 14:2147-2154. [PMID: 31300964] [DOI: 10.1007/s11548-019-02031-y]
Abstract
PURPOSE Image-guided surgical navigation systems (SNS) have proved to be an increasingly important assistance tool for minimally invasive surgery. However, standard human-computer interaction (HCI) devices such as the keyboard and mouse can act as vectors for infectious agents, posing risks to patients and surgeons. To solve this interaction problem, we proposed an optimized LSTM structure based on a depth camera to recognize gestures and applied it to an in-house oral and maxillofacial surgical navigation system (Qin et al. in Int J Comput Assist Radiol Surg 14(2):281-289, 2019). METHODS The proposed optimized LSTM structure, named multi-LSTM, allows multiple input layers and takes into account the relationships between inputs. To combine gesture recognition with the SNS, four left-hand signs waving along four directions were designed to correspond to four mouse operations, and the motion of the right hand was used to control the movement of the cursor. Finally, a phantom study for zygomatic implant placement was conducted to evaluate the feasibility of multi-LSTM as an HCI.
RESULTS 3D hand trajectories of both wrist and elbow from 10 participants were collected to train the recognition network. Tenfold cross-validation was performed for sign classification, and the mean accuracy was 96% ± 3%. In the phantom study, four implants were successfully placed; the average deviations between planned and placed implants were 1.22 mm and 1.70 mm for the entry and end points, respectively, while the angular deviation ranged from 0.4° to 2.9°. CONCLUSION The results showed that this non-contact user interface based on multi-LSTM could serve as a promising tool to eliminate the disinfection problem in the operating room and alleviate the manipulation complexity of surgical navigation systems.
13
Towards More Structure: Comparing TNM Staging Completeness and Processing Time of Text-Based Reports versus Fully Segmented and Annotated PET/CT Data of Non-Small-Cell Lung Cancer. Contrast Media Mol Imaging 2018; 2018:5693058. [PMID: 30515067] [PMCID: PMC6236664] [DOI: 10.1155/2018/5693058]
Abstract
Results of PET/CT examinations are communicated as text-based reports which are frequently not fully structured. Incomplete or missing staging information can be a significant source of staging and treatment errors. We compared standard text-based reports to a manual full-3D-segmentation-based approach with respect to TNM completeness and processing time. TNM information was extracted retrospectively from 395 reports. Moreover, the RIS time stamps of these reports were analyzed. 2995 lesions were manually segmented on the corresponding image data using a set of 41 classification labels (TNM features + location). Information content and processing time of reports and segmentations were compared using descriptive statistics and modelling. The TNM/UICC stage was mentioned explicitly in only 6% (n=22) of the text-based reports. In 22% (n=86), information was incomplete, most frequently affecting T stage (19%, n=74), followed by N stage (6%, n=22) and M stage (2%, n=9). Full NSCLC-lesion segmentation required a median time of 13.3 min, while the median of the shortest estimator of the text-based reporting time (R1) was 18.1 min (p=0.01). Tumor stage (UICC I/II: 5.2 min, UICC III/IV: 20.3 min, p < 0.001), lesion size (p < 0.001), and lesion count (n=1: 4.4 min, n=12: 37.2 min, p < 0.001) correlated significantly with segmentation time, but not with the estimators of text-based reporting time. Many text-based reports lack staging information. A segmentation-based reporting approach tailored to the staging task improves report quality with manageable processing time and helps to avoid erroneous therapy decisions based on incomplete reports. Furthermore, segmented data may be used for multimedia enhancement and automation.
14
An oral and maxillofacial navigation system for implant placement with automatic identification of fiducial points. Int J Comput Assist Radiol Surg 2018; 14:281-289. [DOI: 10.1007/s11548-018-1870-z]
15
Lee CY, Chan H, Ujiie H, Fujino K, Kinoshita T, Irish JC, Yasufuku K. Novel Thoracoscopic Navigation System With Augmented Real-Time Image Guidance for Chest Wall Tumors. Ann Thorac Surg 2018; 106:1468-1475. [PMID: 30120940] [DOI: 10.1016/j.athoracsur.2018.06.062]
Abstract
BACKGROUND We developed a thoracoscopic surgical navigation system with real-time augmented image guidance to assess its potential benefits for minimally invasive resection of chest wall tumors. The accuracy of localization of the tumor and resection margin and the effect on task workload and confidence were evaluated in a chest wall tumor phantom. METHODS After scanning a realistic tumor phantom by cone-beam computed tomography and registering the data into the system, the three-dimensional contoured tumor and resection margin were displayed. Fifteen surgeons were asked to localize the tumor margin and surgical margins with the thoracoscope alone. The same procedure was performed with the surgical navigation system activated, and results were compared between the two attempts. A questionnaire and the National Aeronautics and Space Administration Task Load Index were completed afterward. RESULTS The surgical navigation system significantly reduced localization error for the medial (p = 0.002) and superior tumor margins (p < 0.001), which were difficult to visualize by thoracoscopy alone. All surgical resection margins were improved circumferentially, including margins that were readily visible by thoracoscopy. National Aeronautics and Space Administration Task Load Index scores showed a statistically significant reduction in workload on all subscales. There was a more than 50% mean reduction in workload for performance (10.1 vs 4.4, p = 0.001) and frustration (13.0 vs 5.4, p = 0.001). CONCLUSIONS This study showed that the thoracoscopic surgical navigation system providing augmented image guidance decreased tumor localization error for regions difficult to visualize thoracoscopically and also reduced surgical margin error circumferentially, regardless of thoracoscopic visibility. The system also reduced workload and increased surgeons' confidence in localizing challenging chest wall tumors.
Affiliation(s)
- Chang Young Lee
- Division of Thoracic Surgery, Toronto General Hospital, University Health Network, Toronto, Ontario, Canada; Department of Thoracic and Cardiovascular Surgery, Yonsei University College of Medicine, Seoul, Korea
- Harley Chan
- Princess Margaret Cancer Centre and Guided Therapeutics Program-TECHNA Institute, University Health Network, Toronto, Ontario, Canada
- Hideki Ujiie
- Division of Thoracic Surgery, Toronto General Hospital, University Health Network, Toronto, Ontario, Canada
- Kosuke Fujino
- Division of Thoracic Surgery, Toronto General Hospital, University Health Network, Toronto, Ontario, Canada
- Tomonari Kinoshita
- Division of Thoracic Surgery, Toronto General Hospital, University Health Network, Toronto, Ontario, Canada
- Jonathan C Irish
- Princess Margaret Cancer Centre and Guided Therapeutics Program-TECHNA Institute, University Health Network, Toronto, Ontario, Canada; Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Kazuhiro Yasufuku
- Division of Thoracic Surgery, Toronto General Hospital, University Health Network, Toronto, Ontario, Canada; Princess Margaret Cancer Centre and Guided Therapeutics Program-TECHNA Institute, University Health Network, Toronto, Ontario, Canada.
16
Li Q, Huang C, Yao Z, Chen Y, Ma L. Continuous dynamic gesture spotting algorithm based on Dempster-Shafer Theory in the augmented reality human computer interaction. Int J Med Robot 2018; 14:e1931. [PMID: 29956447 DOI: 10.1002/rcs.1931] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2018] [Revised: 05/13/2018] [Accepted: 05/22/2018] [Indexed: 11/08/2022]
Abstract
BACKGROUND Human-computer interaction (HCI) is an important feature of augmented reality (AR) technology, and naturalness is the inevitable trend of HCI. Apart from language, gesture is the most natural and frequently used auxiliary interaction mode in daily interactions. However, meaningless, subconscious gesture intervals often occur between two adjacent dynamic gestures. Continuous dynamic gesture spotting is therefore the premise and basis of dynamic gesture recognition, yet no mature, unified algorithm exists to solve this problem. AIMS To fully realize natural HCI based on gesture recognition, a general AR application development platform is presented in this paper. METHODS Based on the position and pose tracking data of the user's hand, a dynamic gesture spotting algorithm based on evidence theory is proposed. First, through analysis of the speed changes of hand motion during dynamic gestures, three knowledge rules are summarized. Then, accurate dynamic gesture spotting is realized through evidence reasoning. Moreover, the algorithm detects the starting point of a gesture during the rising trend of hand motion speed, eliminating the delay between spotting and recognition and thus ensuring real-time performance. Finally, the algorithm is verified in several AR applications developed on the platform. RESULTS There are two main experimental results. First, six users participated in the dynamic gesture spotting experiment, and the gesture spotting accuracy met the demand. Second, the accuracy of recognition after spotting was higher than that of simultaneous recognition and spotting.
CONCLUSION The proposed continuous dynamic gesture spotting algorithm based on Dempster-Shafer theory can extract almost all effective dynamic gestures in the HCI of our AR platform and, on this basis, effectively improves the accuracy of subsequent dynamic gesture recognition.
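The evidence-reasoning step described in this abstract rests on Dempster's rule of combination, which fuses mass functions from independent evidence sources (here, the speed-based knowledge rules). A minimal Python sketch, assuming a two-hypothesis frame (gesture vs. non-gesture) and illustrative mass values that are not taken from the paper:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination over a common frame of discernment.
    Masses are dicts mapping frozenset hypotheses to belief mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # intersecting hypotheses reinforce each other
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # disjoint hypotheses contribute to the conflict mass K
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    # Normalize by 1 - K to redistribute the conflicting mass
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Frame: is the current hand-motion segment a meaningful gesture (G) or not (N)?
G, N = frozenset({"G"}), frozenset({"N"})
GN = G | N  # the full frame, representing ignorance

# Hypothetical evidence from two rules, e.g. "speed is rising" and
# "speed exceeds a threshold" (the paper's actual three rules are not given here).
m_speed_rising = {G: 0.6, N: 0.1, GN: 0.3}
m_speed_high = {G: 0.5, N: 0.2, GN: 0.3}

m = combine(m_speed_rising, m_speed_high)  # belief shifts strongly toward G
```

With these illustrative masses, the combined belief in "gesture" rises to roughly 0.76 while conflict mass (K = 0.17) is redistributed, which is the mechanism by which several weak speed cues together yield a confident start-of-gesture decision.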
Affiliation(s)
- Qiming Li
- Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai, China; Department of Computer Science and Technology, Shanghai Maritime University, Shanghai, China
- Chen Huang
- Department of Computer Science and Technology, Shanghai University, Shanghai, China
- Zhengwei Yao
- Digital Media and HCI Research Center, Hangzhou Normal University, Hangzhou, China
- Yimin Chen
- Department of Computer Science and Technology, Shanghai University, Shanghai, China
- Lizhuang Ma
- Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai, China
17
An Human-Computer Interactive Augmented Reality System for Coronary Artery Diagnosis Planning and Training. J Med Syst 2017; 41:159. [DOI: 10.1007/s10916-017-0805-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2017] [Accepted: 08/24/2017] [Indexed: 10/18/2022]