1. Vendries V, Ungi T, Harry J, Kunz M, Podlipská J, MacKenzie L, Venne G. Three-dimensional ultrasound for knee osteophyte depiction: a comparative study to computed tomography. Int J Comput Assist Radiol Surg 2021; 16:1749-1759. [PMID: 34313914] [PMCID: PMC8580923] [DOI: 10.1007/s11548-021-02456-4]
Abstract
Purpose Osteophytes are common radiographic markers of osteoarthritis. However, they are not accurately depicted using conventional imaging, which hampers surgical interventions that rely on pre-operative images. Studies have shown that ultrasound (US) is promising for detecting osteophytes and monitoring the progression of osteoarthritis. Furthermore, three-dimensional (3D) ultrasound reconstructions may offer a means to quantify osteophytes. The purpose of this study was to compare the accuracy of osteophyte depiction in the knee joint between 3D US and conventional computed tomography (CT). Methods Eleven human cadaveric knees were pre-screened for the presence of osteophytes. Three osteoarthritic knees were selected, and 3D US and CT images were obtained, segmented, and digitally reconstructed in 3D. After dissection, high-resolution structured light scanner (SLS) images of the joint surfaces were obtained. Surface matching and root mean square (RMS) error analyses of surface distances were performed to assess the accuracy of each modality in capturing osteophytes. The RMS errors were compared between the 3D US, CT, and SLS models. Results Average RMS errors for 3D US versus SLS and for CT versus SLS were 0.87 mm ± 0.33 mm (average ± standard deviation) and 0.95 mm ± 0.32 mm, respectively. No statistically significant difference was found between 3D US and CT. Comparative observations of the imaging modalities suggested that 3D US better depicted osteophytes with cartilage and fibrocartilage tissue characteristics than CT. Conclusion 3D US can improve the depiction of osteophytes with a cartilaginous portion compared to CT, and can provide useful information about the presence and extent of osteophytes.
While algorithmic improvements for automatic segmentation and registration of US are needed for a more robust investigation of osteophyte depiction accuracy, this investigation demonstrates the potential of 3D US in routine diagnostic evaluations and pre-operative planning for osteoarthritis.
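The RMS surface-distance analysis described in this abstract reduces to computing closest-point distances between two surface models and taking their root mean square. A minimal sketch (the study's actual surface-matching pipeline is not given here; the point-cloud representation, SciPy KD-tree, and toy data are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_error(test_points, reference_points):
    """RMS of closest-point distances from each test vertex to the reference surface."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(test_points)  # nearest-neighbour distance per test vertex
    return float(np.sqrt(np.mean(distances ** 2)))

# Toy example: a flat 5 x 5 grid (reference) vs. the same grid lifted by 0.5 mm.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
reference = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
test = reference + np.array([0.0, 0.0, 0.5])
print(f"RMS error: {rms_surface_error(test, reference):.2f} mm")  # 0.50 mm
```

In practice the two models would first be rigidly aligned (e.g. by ICP) before the residual distances are summarized this way.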
Affiliation(s)
- Valeria Vendries
- Anatomical Sciences Program and Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, K7L 3N6, Canada
- Tamas Ungi
- School of Computing, Queen's University, Kingston, ON, K7L 3N6, Canada
- Jordan Harry
- Anatomical Sciences Program and Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, K7L 3N6, Canada
- Manuela Kunz
- School of Computing, Queen's University, Kingston, ON, K7L 3N6, Canada
- Jana Podlipská
- Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Les MacKenzie
- Anatomical Sciences Program and Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, K7L 3N6, Canada
- Gabriel Venne
- Department of Anatomy and Cell Biology, McGill University, Montreal, QC, H3A 0G4, Canada
2. Li Z, Chen X, Zhang X, Yan J, Song Y, Huo Y, Lin J. Better precision of a new robotically assisted system for total knee arthroplasty compared to conventional techniques: A sawbone model study. Int J Med Robot 2021; 17:e2263. [PMID: 33837616] [DOI: 10.1002/rcs.2263]
Abstract
BACKGROUND The purpose of this study was to compare the accuracy of the new HURWA robotic-assisted total knee arthroplasty (TKA) technique with that of the conventional technique in a sawbone model. METHODS The HURWA robotic-assisted TKA system was applied in the robotic group. After bone resection, all sawbones were scanned with a structured light scanning system, and bone resection levels and femoral and tibial coronal and sagittal angles were recorded. RESULTS Compared to the conventional technique, the HURWA robotic-assisted system significantly improved the accuracy of the bone resection levels and angles. In the robotic group, the error of all bone resection levels was below 0.6 mm (with standard deviation [SD] below 0.6 mm), and the error of all bone resection angles was below 0.6° (with SD below 0.4°). CONCLUSION Our data suggest that this novel HURWA robotic-assisted system can significantly improve the accuracy of bone resection levels and angles.
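The resection-angle errors reported above amount to comparing a planned cut plane against the achieved one. A minimal sketch of that comparison (representing each plane by its normal vector is an assumption for illustration, not the authors' measurement code):

```python
import numpy as np

def cut_plane_angle_error(planned_normal, achieved_normal):
    """Angular deviation in degrees between planned and achieved resection planes."""
    a = np.asarray(planned_normal, dtype=float)
    b = np.asarray(achieved_normal, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # abs() ignores normal orientation; clip guards arccos against rounding error.
    return float(np.degrees(np.arccos(np.clip(abs(a @ b), 0.0, 1.0))))

# Achieved plane tilted 0.6 degrees about the x-axis relative to the plan.
tilt = np.radians(0.6)
print(f"{cut_plane_angle_error([0, 0, 1], [0, np.sin(tilt), np.cos(tilt)]):.1f} deg")  # 0.6 deg
```

The resection-level error would be measured analogously as the signed distance between the two planes along the planned normal.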
Affiliation(s)
- Zheng Li
- Department of Orthopaedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xin Chen
- Department of Orthopaedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiaofeng Zhang
- BEIJING HURWA-ROBOT Medical Technology Co., Ltd., Beijing, China
- Jun Yan
- BEIJING HURWA-ROBOT Medical Technology Co., Ltd., Beijing, China
- Youdong Song
- BEIJING HURWA-ROBOT Medical Technology Co., Ltd., Beijing, China
- Yujia Huo
- BEIJING HURWA-ROBOT Medical Technology Co., Ltd., Beijing, China
- Jin Lin
- Department of Orthopaedic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
3. He G, Mustahsan VM, Bielski MR, Kao I, Khan FA. Report on a novel bone registration method: A rapid, accurate, and radiation-free technique for computer- and robotic-assisted orthopedic surgeries. J Orthop 2021; 23:227-232. [PMID: 33613005] [DOI: 10.1016/j.jor.2021.01.010]
Abstract
Introduction Computer- and robotic-assisted technologies have recently been introduced into orthopedic surgery to improve accuracy. Each requires intraoperative "bone registration," but existing methods are time consuming, often inaccurate, and/or require bulky and costly equipment that produces substantial radiation. Methods We developed a novel method of bone registration using a compact 3D structured light surface scanner that can scan thousands of points simultaneously without any ionizing radiation. Visible light is projected in a specific pattern onto a 3 × 3 cm² area of exposed bone, which deforms the pattern in a way determined by the local bone geometry. A quantitative analysis reconstructs this local geometry and compares it to the preoperative imaging, thereby effecting rapid bone registration. A registration accuracy study using our novel method was conducted on 24 CT-scanned femur Sawbones®. We simulated exposures typically seen during knee/hip arthroplasty and common bone tumor resections. The registration accuracy of our technique was quantified by measuring the discrepancy of known points (i.e., pre-drilled holes) on the bone. Results Our technique demonstrated a registration accuracy of 0.44 ± 0.22 mm. This compared favorably with literature-reported values of 0.68 ± 0.14 mm (p = 0.001) for the paired-point technique [13] and 0.86 ± 0.38 mm for intraoperative CT-based techniques [14] (not enough reported data to calculate a p-value). Conclusion We have developed a novel method of bone registration for computer- and robotic-assisted surgery using 3D surface scanning technology that is rapid, compact, and radiation-free. We have demonstrated increased accuracy compared to existing methods (using historical controls).
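Point-based registration accuracy of the kind measured above is conventionally computed by solving the least-squares rigid alignment between corresponding landmarks and reporting the residual discrepancy. A minimal sketch using the generic Kabsch (SVD) algorithm (this is the textbook method, not the paper's own pipeline; the function names and toy data are assumptions):

```python
import numpy as np

def kabsch_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def registration_error(src, dst, R, t):
    """Mean Euclidean discrepancy of landmark points after registration."""
    return float(np.linalg.norm(src @ R.T + t - dst, axis=1).mean())

# Toy check: recover a known 30-degree rotation about z plus a translation.
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
src = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5]])
dst = src @ R_true.T + np.array([1.0, -2.0, 3.0])
R, t = kabsch_rigid_transform(src, dst)
print(f"residual: {registration_error(src, dst, R, t):.6f} mm")
```

With noise-free correspondences the residual is zero to machine precision; with real measurements (e.g. pre-drilled holes), the residual is the reported registration error.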
Affiliation(s)
- Guangyu He
- Department of Mechanical Engineering, Stony Brook University, Stony Brook, NY, USA
- Vamiq M Mustahsan
- Department of Mechanical Engineering, Stony Brook University, Stony Brook, NY, USA
- Imin Kao
- Department of Mechanical Engineering, Stony Brook University, Stony Brook, NY, USA
- Fazel A Khan
- Department of Orthopedics, Stony Brook University Hospital, Stony Brook, NY, USA
4. Beaulieu K, Alkins R, Ellis RE, Kunz M. Technical report: Rapid intraoperative reconstruction of cranial implants using additively manufactured moulds. Proc Inst Mech Eng H 2020; 234:1011-1017. [PMID: 32627709] [DOI: 10.1177/0954411920936051]
Abstract
During craniotomies, a portion of the calvarium, or skull, is removed to gain access to the intracranial space. When it is not possible to re-implant the flap, surgeons may repair the defect intraoperatively or at a later date. Because larger defects are more difficult to repair intraoperatively, we investigated a method for creating patient-specific moulds for ad hoc bone flap reconstruction using rapid prototyping. Patient-specific moulds were created from light-scanned models of the defect, using custom software and rapid prototyping. Polymethylmethacrylate bone implants were created for three retrospective craniotomy cases and evaluated on the accuracy with which they reconstructed the original flap and skull. Implants created using our moulding method reconstructed the original flap and skull with average accuracies of 0.82 mm and 1.3 mm, respectively. The average skull reconstruction accuracy obtained by surgeons performing freehand implant reconstruction was 1.49 mm. The time needed to generate moulds ranged from 2 h 45 min to 6 h 20 min. Improvements to current printing technology will make this procedure technically feasible for future cranial procedures.
Affiliation(s)
- Ryan Alkins
- Department of Surgery, Kingston Health Sciences Centre, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada; Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Randy E Ellis
- School of Computing, Queen's University, Kingston, ON, Canada; Department of Surgery, Kingston Health Sciences Centre, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada; Department of Mechanical and Materials Engineering, Queen's University, Kingston, ON, Canada
- Manuela Kunz
- School of Computing, Queen's University, Kingston, ON, Canada; Department of Surgery, Kingston Health Sciences Centre, Queen's University, Kingston, ON, Canada
5. Chan B, Rudan JF, Mousavi P, Kunz M. Intraoperative integration of structured light scanning for automatic tissue classification: a feasibility study. Int J Comput Assist Radiol Surg 2020; 15:641-649. [PMID: 32144629] [DOI: 10.1007/s11548-020-02129-8]
Abstract
PURPOSE Structured light scanning is a promising inexpensive and accurate intraoperative imaging modality. Integration of these scanners in surgical workflows has the potential to enable rapid registration and augment preoperative imaging, in a practical and timely manner in the operating theatre. Previously, we have demonstrated the intraoperative feasibility of such scanners to capture anatomical surface information with high accuracy. The purpose of this study was to investigate the feasibility of automatically characterizing anatomical tissues from textural and spatial information captured by such scanners using machine learning. Assisted or automatic identification of relevant components of a captured scan is essential for effective integration of the technology in surgical workflow. METHODS During a clinical study, 3D surface scans for seven total knee arthroplasty patients were collected, and textural and spatial features for cartilage, bone, and ligament tissue were collected and annotated. These features were used to train and evaluate machine learning models. As part of our preliminary preparation, three fresh-frozen knee cadaver specimens were also used where 3D surface scans with texture information were collected during different dissection stages. The resulting models were manually segmented to isolate texture information for muscles, tendon, cartilage, and bone. This information, and detailed labels from dissections, provided an in-depth, finely annotated dataset for building machine learning classifiers. RESULTS For characterizing bone, cartilage, and ligament in the intraoperative surface models, random forest and neural network-based models achieved an accuracy of close to 80%, whereas an accuracy of close to 90% was obtained when only characterizing bone and cartilage. Average accuracy of 76-82% was reached for cadaver data in two-, three-, and four-class tissue separation. 
CONCLUSIONS The results of this project demonstrate the feasibility of machine learning methods to accurately classify multiple types of anatomical tissue. The ability to automatically characterize tissues in intraoperatively collected surface models would streamline the surgical workflow of using structured light scanners, paving the way to applications such as 3D documentation of surgery in addition to rapid registration and augmentation of preoperative imaging.
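The classification step described in this abstract can be illustrated with a small random-forest example on synthetic per-vertex features (the feature layout, class labels, and scikit-learn usage here are illustrative assumptions, not the study's actual data or code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-vertex features (e.g. colour/texture statistics
# plus surface-normal components); 3 classes: bone, cartilage, ligament.
n_per_class, n_features = 200, 6
labels = np.repeat([0, 1, 2], n_per_class)
features = rng.normal(size=(labels.size, n_features)) + 1.5 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

On real scanner data the features would come from the textured surface model rather than a random generator, and held-out patients (not a random split of vertices) would give a more honest accuracy estimate.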
Affiliation(s)
- Brandon Chan
- School of Computing, Queen's University, 557 Goodwin Hall, Kingston, ON, K7L 2N8, Canada
- John F Rudan
- Department of Surgery, Kingston Health Sciences Centre, Queen's University, 76 Stuart Street, Kingston, ON, K7L 2V7, Canada
- Parvin Mousavi
- School of Computing, Queen's University, 557 Goodwin Hall, Kingston, ON, K7L 2N8, Canada
- Manuela Kunz
- School of Computing, Queen's University, 557 Goodwin Hall, Kingston, ON, K7L 2N8, Canada
- National Research Council Canada, 1200 Montreal Rd, Building M-50, Ottawa, ON, K1A 0R6, Canada
6. Replicating Skull Base Anatomy With 3D Technologies: A Comparative Study Using 3D-scanned and 3D-printed Models of the Temporal Bone. Otol Neurotol 2020; 41:e392-e403. [DOI: 10.1097/mao.0000000000002524]
7. Yang X, Narasimhan S, Luo M, Thompson RC, Chambless LB, Morone PJ, He L, Dawant BM, Miga MI. Development and evaluation of a "trackerless" surgical planning and guidance system based on 3D Slicer. J Med Imaging (Bellingham) 2019; 6:035002. [PMID: 31528660] [DOI: 10.1117/1.jmi.6.3.035002]
Abstract
Conventional optical tracking systems use cameras sensitive to near-infrared (NIR) light and NIR-illuminated or actively illuminating markers to localize instrumentation and the patient in the physical space of the operating room (OR). This technology is widely used within the neurosurgical theater and is a staple in the standard of care for craniotomy planning. Such planning is largely conducted at the time of the procedure in the OR, with the patient in a fixed head orientation. We propose a framework to achieve this in the OR without conventional tracking technology, i.e., a "trackerless" approach. Briefly, we investigate an extension of 3D Slicer that combines surgical planning and craniotomy designation. While taking advantage of the well-developed 3D Slicer platform, we implement advanced features to aid the neurosurgeon in planning the location of the anticipated craniotomy relative to the preoperatively imaged tumor in a physical-to-virtual setup, and then subsequently aid the true physical procedure by correlating that physical-to-virtual plan with an intraoperative magnetic resonance imaging-to-physical registered field-of-view display. These steps are performed such that the craniotomy can be designated without the use of conventional optical tracking technology. To test this approach, four experienced neurosurgeons performed experiments on five different surgical cases using our 3D Slicer module as well as the conventional procedure for comparison. The results suggest that our planning system provides a simple, cost-efficient, and reliable solution for surgical planning and delivery without the use of conventional tracking technologies. We hypothesize that the combination of this craniotomy planning approach and our past developments in cortical surface registration and deformation tracking using stereo-pair data from the surgical microscope may provide a fundamental realization of an integrated trackerless surgical guidance platform.
Affiliation(s)
- Xiaochen Yang
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Saramati Narasimhan
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Ma Luo
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Reid C Thompson
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Lola B Chambless
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Peter J Morone
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Le He
- Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States
- Benoit M Dawant
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States; Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University Medical Center, Department of Radiology, Nashville, Tennessee, United States
- Michael I Miga
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University Medical Center, Department of Neurological Surgery, Nashville, Tennessee, United States; Vanderbilt University Medical Center, Department of Radiology, Nashville, Tennessee, United States