1. Canton SP, Austin CN, Steuer F, Dadi S, Sharma N, Kass NM, Fogg D, Clayton E, Cunningham O, Scott D, LaBaze D, Andrews EG, Biehl JT, Hogan MV. Feasibility and Usability of Augmented Reality Technology in the Orthopaedic Operating Room. Curr Rev Musculoskelet Med 2024; 17:117-128. [PMID: 38607522; PMCID: PMC11068703; DOI: 10.1007/s12178-024-09888-w]
Abstract
PURPOSE OF REVIEW: Augmented reality (AR) has gained popularity in various sectors, including gaming, entertainment, and healthcare. The desire for improved surgical navigation within orthopaedic surgery has led to evaluation of the feasibility and usability of AR in the operating room (OR). However, the safe and effective use of AR technology in the OR requires a proper understanding of its capabilities and limitations. This review describes the fundamental elements of AR, highlights limitations on its use within orthopaedic surgery, and discusses potential areas for development.

RECENT FINDINGS: To date, studies have demonstrated that AR technology can enhance navigation and performance in orthopaedic procedures. General hardware and software limitations of the technology include the registration process, ergonomics, and battery life. Other limitations relate to human factors, such as inattentional blindness, which may prevent surgeons from noticing complications within the surgical field. Furthermore, prolonged use of AR can cause eye strain and headache due to phenomena such as the vergence-accommodation conflict. AR technology may prove to be a better alternative to current orthopaedic navigation systems, but these limitations should be mitigated to further improve its feasibility and usability in the OR setting. It is important for non-clinicians and clinicians to work together to guide the development of future iterations of AR technology and its implementation into the OR workflow.
Affiliation(s)
- Stephen P Canton
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA.
- Fritz Steuer
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Srujan Dadi
- Rowan-Virtua School of Osteopathic Medicine, Stratford, NJ, USA
- Nikhil Sharma
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Nicolás M Kass
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- David Fogg
- Texas Tech University Health Sciences Center El Paso, El Paso, TX, USA
- Elizabeth Clayton
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
- Onaje Cunningham
- University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Devon Scott
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
- Dukens LaBaze
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
- Edward G Andrews
- Department of Neurological Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Jacob T Biehl
- School of Computing and Information, University of Pittsburgh, Pittsburgh, PA, USA
- MaCalus V Hogan
- Department of Orthopaedic Surgery, University of Pittsburgh, 3471 Fifth Ave, Pittsburgh, PA, 15213, USA
2. Bian D, Lin Z, Lu H, Zhong Q, Wang K, Tang X, Zang J. The application of extended reality technology-assisted intraoperative navigation in orthopedic surgery. Front Surg 2024; 11:1336703. [PMID: 38375409; PMCID: PMC10875025; DOI: 10.3389/fsurg.2024.1336703]
Abstract
Extended reality (XR) technology refers to any setting in which real-world objects are enhanced with computer technology, encompassing virtual reality, augmented reality, and mixed reality. Augmented and mixed reality technologies have been widely applied in orthopedic clinical practice, including teaching, preoperative planning, intraoperative navigation, and surgical outcome evaluation. The primary goal of this narrative review is to summarize the effectiveness and advantages of XR-technology-assisted intraoperative navigation in trauma, joint, spine, and bone tumor surgery, and to discuss the current shortcomings of these intraoperative navigation applications. We reviewed the titles of more than 200 studies obtained from PubMed using the following search terms: extended reality, mixed reality, augmented reality, virtual reality, intraoperative navigation, and orthopedic surgery. Of those studies, 69 related papers were selected for abstract review; finally, the full text of 55 studies was analyzed and reviewed, classified into four groups (trauma, joint, spine, and bone tumor surgery) according to content. Most of the studies we reviewed showed that XR-technology-assisted intraoperative navigation can effectively improve the accuracy of implant placement, such as that of screws and prostheses; reduce postoperative complications caused by inaccurate implantation; facilitate the achievement of tumor-free surgical margins; shorten the surgical duration; reduce radiation exposure for patients and surgeons; minimize the additional damage caused by the need for visual exposure during surgery; and provide richer, more efficient intraoperative communication, thereby facilitating academic exchange, medical assistance, and remote healthcare.
Affiliation(s)
- Dongxiao Bian
- Department of Musculoskeletal Tumor, Peking University People’s Hospital, Beijing, China
- Zhipeng Lin
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Hao Lu
- Traumatic Orthopedic Department, Peking University People’s Hospital, Beijing, China
- Qunjie Zhong
- Arthritis Clinic and Research Center, Peking University People’s Hospital, Beijing, China
- Kaifeng Wang
- Spinal Surgery Department, Peking University People’s Hospital, Beijing, China
- Xiaodong Tang
- Department of Musculoskeletal Tumor, Peking University People’s Hospital, Beijing, China
- Jie Zang
- Department of Musculoskeletal Tumor, Peking University People’s Hospital, Beijing, China
3. Chen Z, Cruciani L, Lievore E, Fontana M, De Cobelli O, Musi G, Ferrigno G, De Momi E. Spatio-temporal layers based intra-operative stereo depth estimation network via hierarchical prediction and progressive training. Comput Methods Programs Biomed 2024; 244:107937. [PMID: 38006707; DOI: 10.1016/j.cmpb.2023.107937]
Abstract
BACKGROUND AND OBJECTIVE: The safety of robotic surgery can be enhanced through augmented vision or artificial constraints on robot motion, and intraoperative depth estimation is the cornerstone of these applications because it provides precise position information about the surgical scene in 3D space. High-quality depth estimation of endoscopic scenes has remained a challenging problem, and the development of deep learning offers new potential to address it.

METHODS: A deep learning-based approach is proposed to recover 3D information from intraoperative scenes. To this end, a fully 3D encoder-decoder network integrating spatio-temporal layers is designed; it adopts hierarchical prediction and progressive learning to enhance prediction accuracy and shorten training time.

RESULTS: The network achieves a depth estimation accuracy of MAE 2.55±1.51 mm and RMSE 5.23±1.40 mm on 8 surgical videos with a resolution of 1280×1024, performing better than six other state-of-the-art methods trained on the same data.

CONCLUSIONS: The network delivers promising depth estimation performance on intraoperative stereo images, allowing integration into robot-assisted surgery to enhance safety.
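The MAE and RMSE figures above are the standard depth-map error metrics, and stereo depth itself follows the classic triangulation relation depth = f·B/d. A minimal numpy sketch of both (illustrative only; the paper's own evaluation pipeline and camera parameters are not given in this listing, so all values here are assumptions):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Classic stereo relation: depth = f * B / d, with focal length in pixels,
    baseline in mm, and disparity in pixels (assumed rectified cameras)."""
    return focal_px * baseline_mm / disparity_px

def depth_errors(pred, gt):
    """MAE and RMSE between predicted and ground-truth depth maps (same units, e.g. mm)."""
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    diff = pred - gt
    return float(np.mean(np.abs(diff))), float(np.sqrt(np.mean(diff ** 2)))
```

For example, with an assumed 800 px focal length and 60 mm baseline, an 8 px disparity corresponds to 6000 mm of depth; `depth_errors` then summarizes the per-pixel residuals the way the MAE/RMSE results above do.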
Affiliation(s)
- Ziyang Chen
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy.
- Laura Cruciani
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Elena Lievore
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
- Matteo Fontana
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
- Ottavio De Cobelli
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy; University of Milan, Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, Milan, Italy
- Gennaro Musi
- European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy; University of Milan, Department of Oncology and Onco-haematology, Faculty of Medicine and Surgery, Milan, Italy
- Giancarlo Ferrigno
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy
- Elena De Momi
- Politecnico di Milano, Department of Electronics, Information and Bioengineering, Milano, 20133, Italy; European Institute of Oncology, Department of Urology, IRCCS, Milan, 20141, Italy
4. Ying M, Wang Y, Yang K, Wang H, Liu X. A deep learning knowledge distillation framework using knee MRI and arthroscopy data for meniscus tear detection. Front Bioeng Biotechnol 2024; 11:1326706. [PMID: 38292305; PMCID: PMC10825958; DOI: 10.3389/fbioe.2023.1326706]
Abstract
Purpose: To construct a deep learning knowledge distillation framework exploring the use of MRI alone, or combined with distilled arthroscopy information, for meniscus tear detection.

Methods: A database of 199 paired knee arthroscopy-MRI exams was used to develop a multimodal teacher network and an MRI-based student network, both built on residual neural network architectures. A knowledge distillation framework comprising the multimodal teacher network T and the monomodal student network S was proposed. The loss functions of mean squared error (MSE) and cross-entropy (CE) were optimized to enable the student network S to learn arthroscopic information from the teacher network T, ultimately resulting in a distilled student network S_T. A coronal proton density (PD)-weighted fat-suppressed MRI sequence was used in this study. Fivefold cross-validation was employed, and accuracy, sensitivity, specificity, F1-score, receiver operating characteristic (ROC) curves, and area under the ROC curve (AUC) were used to evaluate medial and lateral meniscal tear detection for the models: the undistilled student model S, the distilled student model S_T, and the teacher model T.

Results: The AUCs of the undistilled student model S, the distilled student model S_T, and the teacher model T for medial meniscus (MM) and lateral meniscus (LM) tear detection were 0.773/0.672, 0.792/0.751, and 0.834/0.746, respectively. The distilled student model S_T had higher AUCs than the undistilled model S. After knowledge distillation, the distilled student model demonstrated promising results, with accuracy (0.764/0.734), sensitivity (0.838/0.661), and F1-score (0.680/0.754) for medial and lateral tear detection better than the undistilled model's accuracy (0.734/0.648), sensitivity (0.733/0.607), and F1-score (0.620/0.673).

Conclusion: Through the knowledge distillation framework, the MRI-based student model S benefited from the multimodal teacher model T and achieved improved meniscus tear detection performance.
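The combined MSE + CE objective described above is the common feature-distillation recipe: an MSE term pulls the student's internal representation toward the teacher's, while cross-entropy fits the student's predictions to the labels. A minimal numpy sketch of such a loss (the weighting factor `alpha` and all variable names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_feat, teacher_feat, student_logits, labels, alpha=0.5):
    """alpha * MSE(student features, frozen teacher features)
       + (1 - alpha) * CE(student predictions, ground-truth labels)."""
    mse = np.mean((student_feat - teacher_feat) ** 2)
    probs = softmax(student_logits)
    n = len(labels)
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    return alpha * mse + (1.0 - alpha) * ce
```

When the student's features match the teacher's, only the supervised CE term remains, which is why a well-distilled student can approach the teacher's AUC while seeing MRI alone at inference time.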
Affiliation(s)
- Mengjie Ying
- Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yufan Wang
- Engineering Research Center for Digital Medicine of the Ministry of Education, Shanghai, China
- School of Biomedical Engineering and Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Kai Yang
- Department of Radiology, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Haoyuan Wang
- Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xudong Liu
- Department of Orthopedics, Shanghai Sixth People’s Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
5. He F, Qi X, Feng Q, Zhang Q, Pan N, Yang C, Liu S. Research on augmented reality navigation of in vitro fenestration of stent-graft based on deep learning and virtual-real registration. Comput Assist Surg (Abingdon) 2023; 28:2289339. [PMID: 38059572; DOI: 10.1080/24699322.2023.2289339]
Abstract
OBJECTIVES: In vitro fenestration of stent-grafts (IVFS) demands high-precision navigation methods to achieve optimal surgical outcomes. This study proposes an augmented reality (AR) navigation method for IVFS that provides an in situ overlay display to locate fenestration positions.

METHODS: We propose an AR navigation method to assist doctors in performing IVFS. A deep learning-based aorta segmentation algorithm achieves automatic and rapid aorta segmentation, and a Vuforia-based virtual-real registration and marker recognition algorithm is integrated to ensure an accurate in situ AR image.

RESULTS: The proposed method provides a three-dimensional in situ AR image, with a fiducial registration error of 2.070 mm after virtual-real registration. The aorta segmentation experiment obtained a Dice similarity coefficient of 91.12% and a Hausdorff distance of 2.59, better than the conventional algorithms before improvement.

CONCLUSIONS: The proposed method can intuitively and accurately locate fenestration positions and can therefore assist doctors in performing IVFS.
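The fiducial registration error quoted above is conventionally the RMS distance between corresponding fiducials after a rigid (rotation + translation) alignment. A compact sketch of that alignment via the Kabsch/SVD method (a generic formulation; the paper's Vuforia-based pipeline is not reproduced here):

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst (Kabsch/SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between transformed src fiducials and their dst counterparts."""
    mapped = (R @ np.asarray(src, float).T).T + t
    return float(np.sqrt(np.mean(np.sum((mapped - dst) ** 2, axis=1))))
```

With noise-free correspondences the FRE is essentially zero; real marker detection noise is what produces residuals on the order of the 2.070 mm reported above.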
Affiliation(s)
- Fengfeng He
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiaoyu Qi
- Department of Vascular Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qingmin Feng
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qiang Zhang
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ning Pan
- School of Biomedical Engineering, South-Central Minzu University, Wuhan, China
- Chao Yang
- Department of Vascular Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shenglin Liu
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
6. Yavari E, Moosa S, Cohen D, Cantu-Morales D, Nagai K, Hoshino Y, de Sa D. Technology-assisted anterior cruciate ligament reconstruction improves tunnel placement but leads to no change in clinical outcomes: a systematic review and meta-analysis. Knee Surg Sports Traumatol Arthrosc 2023; 31:4299-4311. [PMID: 37329370; DOI: 10.1007/s00167-023-07481-1]
Abstract
PURPOSE: To investigate the effect of technology-assisted anterior cruciate ligament reconstruction (ACLR) on post-operative clinical outcomes and tunnel placement compared with conventional arthroscopic ACLR.

METHODS: CENTRAL, MEDLINE, and Embase were searched from January 2000 to November 17, 2022. Articles were included if there was intraoperative use of computer-assisted navigation, robotics, diagnostic imaging, computer simulations, or 3D printing (3DP). Two reviewers searched, screened, and evaluated the included studies for data quality. Data were abstracted using descriptive statistics and pooled as relative risk ratios (RR) or mean differences (MD), both with 95% confidence intervals (CI), where appropriate.

RESULTS: Eleven studies were included, with 775 patients in total, the majority male (70.7%). Ages ranged from 14 to 54 years (391 patients) and follow-up ranged from 12 to 60 months (775 patients). Subjective International Knee Documentation Committee (IKDC) scores increased in the technology-assisted surgery group (473 patients; P = 0.02; MD 1.97, 95% CI 0.27 to 3.66). There was no difference in objective IKDC scores (447 patients; RR 1.02, 95% CI 0.98 to 1.06), Lysholm scores (199 patients; MD 1.14, 95% CI -1.03 to 3.30), or negative pivot-shift tests (278 patients; RR 1.07, 95% CI 0.97 to 1.18) between the two groups. With technology-assisted surgery, 6 of 8 studies (351 of 451 patients) reported more accurate femoral tunnel placement and 6 of 10 studies (321 of 561 patients) reported more accurate tibial tunnel placement in at least one measure. One study (209 patients) demonstrated a significant increase in cost with computer-assisted navigation (mean 1158€) versus conventional surgery (mean 704€). The two studies using 3DP templates cited production costs of $10 to $42 USD. There was no difference in adverse events between the two groups.

CONCLUSION: Clinical outcomes do not differ between technology-assisted and conventional surgery. Computer-assisted navigation is more expensive and time-consuming, while 3DP is inexpensive and does not lengthen operating times. Technology allows ACLR tunnels to be located more accurately in radiologically ideal positions, but anatomic placement remains undetermined because of the variability and inaccuracy of the evaluation systems used.

LEVEL OF EVIDENCE: Level III.
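The risk ratios with 95% CIs quoted above follow the usual log-scale construction: the CI is built around log(RR) using a Wald standard error, then exponentiated. A small sketch from 2×2 event counts (the counts in the usage note are hypothetical, shown only to make the statistic concrete):

```python
import math

def risk_ratio_ci(events_a, total_a, events_b, total_b, z=1.96):
    """Risk ratio of group A vs group B with a Wald 95% CI computed on the log scale."""
    rr = (events_a / total_a) / (events_b / total_b)
    # standard error of log(RR) for independent binomial proportions
    se = math.sqrt(1/events_a - 1/total_a + 1/events_b - 1/total_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```

For instance, hypothetical counts of 90/100 events versus 85/100 give an RR slightly above 1 with a CI spanning 1, mirroring the non-significant pivot-shift result reported above.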
Affiliation(s)
- Ehsan Yavari
- Michael G. DeGroote School of Medicine, McMaster University, Waterloo Regional Campus, Kitchener, ON, N2G 1C5, Canada.
- Sabreena Moosa
- Michael G. DeGroote School of Medicine, McMaster University, Waterloo Regional Campus, Kitchener, ON, N2G 1C5, Canada
- Dan Cohen
- Division of Orthopaedic Surgery, Department of Surgery, McMaster University, Hamilton, ON, Canada
- Kanto Nagai
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, Japan
- Yuichi Hoshino
- Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe, Japan
- Darren de Sa
- Division of Orthopaedic Surgery, Department of Surgery, McMaster University, 1280 Main Street West, MUMC 4E14, Hamilton, ON, L8S 4L8, Canada
7. Fang C, Mo P, Chan H, Cheung J, Wong JSH, Wong TM, Mak YK, Ching K, Ho G, Leung F. Can a Wireless Full-HD Head Mounted Display System Improve Knee Arthroscopy Performance? - A Randomized Study Using a Knee Simulator. Surg Innov 2023; 30:477-485. [PMID: 36448618; PMCID: PMC10403956; DOI: 10.1177/15533506221142960]
Abstract
INTRODUCTION: Our prototype wireless full-HD augmented reality head-mounted display (AR-HMD) aims to eliminate surgeon head turning and reduce theater clutter. Learning and performance versus TV monitors (TVM) were evaluated in simulated knee arthroscopy.

METHODS: 19 surgeons and 19 novices were randomized into either the control group (A) or the intervention group (B) and tasked with performing 5 simulated loose-body retrieval procedures on a bench-top knee arthroscopy simulator. A cross-over design was adopted: subjects alternated between devices during trials 1-3, the "unfamiliar" phase, then used the same device consecutively in trials 4-5 to assess performance in a more "familiarized" state. Measured outcomes were time-to-completion and incidence of bead drops.

RESULTS: In the unfamiliar phase, HMD had a 67% longer mean time-to-completion than TVM (194.7 ± 152.6 s vs 116.7 ± 78.7 s, P < .001). Once familiarized, HMD remained inferior to TVM, with 48% longer completion times (133.8 ± 123.3 s vs 90.6 ± 55 s, P = .052). Cox regression revealed that device type (OR = 0.526, CI 0.391-0.709, P < .001) and number of procedure repetitions (OR = 1.186, CI 1.072-1.311, P = .001) were significantly and independently related to faster time-to-completion, whereas experience was not a significant factor (OR = 1.301, CI 0.971-1.741, P = .078). Bead drops were similar between the groups in both the unfamiliar (HMD: 27 vs TVM: 22, P = .65) and familiarized phases (HMD: 11 vs TVM: 17, P = .97).

CONCLUSION: Arthroscopic procedures continue to be better performed under conventional TVM. However, similar quality levels can be reached with HMD when given more time. Given the theoretical advantages, further research into improving HMD designs is advocated.
Affiliation(s)
- Christian Fang
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Pinky Mo
- The University of Hong Kong, Hong Kong
- Holy Chan
- The University of Hong Kong, Hong Kong
- Jake Cheung
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Janus Siu Him Wong
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Tak-Man Wong
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Yan-Kit Mak
- Department of Orthopaedics and Traumatology, Pamela Youde Nethersole Eastern Hospital, Hong Kong
- Kathine Ching
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Grace Ho
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
- Frankie Leung
- Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong
8. Pierzchajlo N, Stevenson TC, Huynh H, Nguyen J, Boatright S, Arya P, Chakravarti S, Mehrki Y, Brown NJ, Gendreau J, Lee SJ, Chen SG. Augmented Reality in Minimally Invasive Spinal Surgery: A Narrative Review of Available Technology. World Neurosurg 2023; 176:35-42. [PMID: 37059357; DOI: 10.1016/j.wneu.2023.04.030]
Abstract
INTRODUCTION: Spine surgery has undergone significant changes in approach and technique. With the adoption of intraoperative navigation, minimally invasive spinal surgery (MISS) has arguably become the gold standard. Augmented reality (AR) has now emerged as a front-runner in anatomical visualization and narrower operative corridors; in effect, AR is poised to revolutionize surgical training and operative outcomes. Our study examines the current literature on AR-assisted MISS, synthesizes findings, and creates a narrative highlighting the history and future of AR in spine surgery.

MATERIALS AND METHODS: Relevant literature was gathered from the PubMed (Medline) database from 1975 to 2023. Pedicle screw placement models were the primary AR intervention and were compared with the outcomes of traditional MISS.

RESULTS: We found that AR devices on the market show promising clinical outcomes in preoperative training and intraoperative use. Three prominent systems are XVision, HoloLens, and ImmersiveTouch. In the studies reviewed, surgeons, residents, and medical students had opportunities to operate AR systems, showcasing their educational potential across each phase of learning; one facet described training with cadaver models to gauge accuracy in pedicle screw placement. AR-MISS exceeded free-hand methods without unique complications or contraindications.

CONCLUSIONS: While still in its infancy, AR has already proven beneficial for educational training and intraoperative MISS applications. We believe that with continued research and advancement of this technology, AR is poised to become a dominant player in the fundamentals of surgical education and MISS operative technique.
Affiliation(s)
- Huey Huynh
- Mercer University, School of Medicine, Savannah, GA, USA
- Jimmy Nguyen
- Mercer University, School of Medicine, Savannah, GA, USA
- Priya Arya
- Mercer University, School of Medicine, Savannah, GA, USA
- Yusuf Mehrki
- Department of Neurosurgery, University of Florida, Jacksonville, FL, USA
- Nolan J Brown
- Department of Neurosurgery, University of California Irvine, Orange, CA, USA
- Julian Gendreau
- Department of Biomedical Engineering, Johns Hopkins Whiting School of Engineering, Baltimore, MD, USA
- Seung Jin Lee
- Department of Neurosurgery, Mayo Clinic, Jacksonville, FL, USA
- Selby G Chen
- Department of Neurosurgery, Mayo Clinic, Jacksonville, FL, USA
9. León-Muñoz VJ, Moya-Angeler J, López-López M, Lisón-Almagro AJ, Martínez-Martínez F, Santonja-Medina F. Integration of Square Fiducial Markers in Patient-Specific Instrumentation and Their Applicability in Knee Surgery. J Pers Med 2023; 13:jpm13050727. [PMID: 37240897; DOI: 10.3390/jpm13050727]
Abstract
Computer technologies play a crucial role in orthopaedic surgery and are essential in personalising treatments. Recent advances allow the use of augmented reality (AR) for many orthopaedic procedures, including different types of knee surgery. AR mediates the interaction between virtual environments and the physical world, allowing the two to intermingle (AR superimposes information on real objects in real time) through an optical device, and allows different processes to be personalised for each patient. This article describes the integration of fiducial markers in planning knee surgeries and provides a narrative review of the latest publications on AR applications in knee surgery. Augmented reality-assisted knee surgery is an emerging set of techniques that can increase accuracy, efficiency, and safety, and decrease radiation exposure (in some surgical procedures, such as osteotomies) relative to conventional methods. Initial clinical experience with AR projection based on ArUco-type artificial marker sensors has shown promising results and received positive operator feedback. Once initial clinical safety and efficacy have been demonstrated, continued experience should be studied to validate this technology and generate further innovation in this rapidly evolving field.
Affiliation(s)
- Vicente J León-Muñoz
- Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain
- Instituto de Cirugía Avanzada de la Rodilla (ICAR), 30005 Murcia, Spain
- Joaquín Moya-Angeler
- Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain
- Instituto de Cirugía Avanzada de la Rodilla (ICAR), 30005 Murcia, Spain
- Mirian López-López
- Subdirección General de Tecnologías de la Información, Servicio Murciano de Salud, 30100 Murcia, Spain
- Alonso J Lisón-Almagro
- Department of Orthopaedic Surgery and Traumatology, Hospital General Universitario Reina Sofía, 30003 Murcia, Spain
- Francisco Martínez-Martínez
- Department of Orthopaedic Surgery and Traumatology, Hospital Clínico Universitario Virgen de la Arrixaca, 30120 Murcia, Spain
- Fernando Santonja-Medina
- Department of Orthopaedic Surgery and Traumatology, Hospital Clínico Universitario Virgen de la Arrixaca, 30120 Murcia, Spain
- Department of Surgery, Pediatrics and Obstetrics & Gynecology, Faculty of Medicine, University of Murcia, 30120 Murcia, Spain
10. Brockmeyer P, Wiechens B, Schliephake H. The Role of Augmented Reality in the Advancement of Minimally Invasive Surgery Procedures: A Scoping Review. Bioengineering (Basel) 2023; 10:bioengineering10040501. [PMID: 37106688; PMCID: PMC10136262; DOI: 10.3390/bioengineering10040501]
Abstract
The purpose of this review was to analyze the evidence on the role of augmented reality (AR) in the improvement of minimally invasive surgical (MIS) procedures. A scoping literature search of the PubMed and ScienceDirect databases was performed to identify articles published in the last five years that addressed the direct impact of AR technology on MIS procedures or that addressed an area of education or clinical care that could potentially be used for MIS development. A total of 359 studies were screened and 31 articles were reviewed in depth and categorized into three main groups: Navigation, education and training, and user-environment interfaces. A comparison of studies within the different application groups showed that AR technology can be useful in various disciplines to advance the development of MIS. Although AR-guided navigation systems do not yet offer a precision advantage, benefits include improved ergonomics and visualization, as well as reduced surgical time and blood loss. Benefits can also be seen in improved education and training conditions and improved user-environment interfaces that can indirectly influence MIS procedures. However, there are still technical challenges that need to be addressed to demonstrate added value to patient care and should be evaluated in clinical trials with sufficient patient numbers or even in systematic reviews or meta-analyses.
Affiliation(s)
- Phillipp Brockmeyer
- Department of Oral and Maxillofacial Surgery, University Medical Center Goettingen, D-37075 Goettingen, Germany
- Bernhard Wiechens
- Department of Orthodontics, University Medical Center Goettingen, D-37075 Goettingen, Germany
- Henning Schliephake
- Department of Oral and Maxillofacial Surgery, University Medical Center Goettingen, D-37075 Goettingen, Germany
Collapse
|
11
|
Jeung D, Jung K, Lee HJ, Hong J. Augmented reality-based surgical guidance for wrist arthroscopy with bone-shift compensation. Comput Methods Programs Biomed 2023; 230:107323. [PMID: 36608430 DOI: 10.1016/j.cmpb.2022.107323]
Abstract
BACKGROUND AND OBJECTIVES The intraoperative joint condition differs from that captured in preoperative CT/MR images because of the motion applied during surgery, which can lead to an inaccurate approach to surgical targets. This study aims to provide real-time augmented reality (AR)-based surgical guidance for wrist arthroscopy based on a bone-shift model through an in vivo computed tomography (CT) study. METHODS To accurately visualize concealed wrist bones on the intra-articular arthroscopic image, we propose a surgical guidance system with a novel bone-shift compensation method using noninvasive fiducial markers. First, to measure the effect of traction during surgery, two noninvasive fiducial markers were attached before surgery. In addition, two virtual link models connecting the wrist bones were implemented. When wrist traction occurs during the operation, the movement of the fiducial markers is measured, and bone-shift compensation is applied to move the virtual links in the direction of the traction. The proposed bone-shift compensation method was verified with the in vivo CT data of 10 participants. Finally, to introduce AR, camera calibration for the arthroscope parameters was performed, and a patient-specific template was used for registration between the patient and the wrist bone model. As a result, a virtual bone model with three-dimensional information could be accurately projected onto a two-dimensional arthroscopic image plane. RESULTS The proposed method estimated the position of the wrist bones in the traction state within a margin of 1.4 mm. After bone-shift compensation was applied, the target point error was reduced by 33.6% in the lunate, 63.3% in the capitate, 55.0% in the scaphoid, and 74.8% in the trapezoid compared with preoperative wrist CT. In addition, a phantom experiment simulating the real surgical environment was conducted. The AR display expanded the field of view (FOV) of the arthroscope and helped visualize the anatomical structures around the bones.
CONCLUSIONS This study demonstrated the successful handling of AR error caused by wrist traction using the proposed method. In addition, the method allowed accurate AR visualization of the concealed bones and expansion of the limited FOV of the arthroscope. The proposed bone-shift compensation can also be applied to other joints, such as the knees or shoulders, by representing their bone movements using corresponding virtual links. In addition, the movement of the joint skin during surgery can be measured using noninvasive fiducial markers in the same manner as that used for the wrist joint.
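The projection step this abstract describes — mapping a calibrated 3D bone model onto the 2D arthroscopic image plane — can be sketched with a standard pinhole camera model. The intrinsic matrix, pose, and point below are illustrative placeholders, not values from the paper's arthroscope calibration:

```python
import numpy as np

def project_point(K, R, t, p_world):
    """Project a 3D point (model coordinates) onto the 2D image plane
    using a pinhole camera model: x ~ K [R | t] p."""
    p_cam = R @ p_world + t            # model -> camera coordinates
    u, v, w = K @ p_cam                # homogeneous image coordinates
    return np.array([u / w, v / w])    # pixel coordinates

# Illustrative intrinsics (focal length and principal point are made up).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # identity orientation for the sketch
t = np.array([0.0, 0.0, 100.0])        # bone 100 mm in front of the camera

pixel = project_point(K, R, t, np.array([10.0, -5.0, 0.0]))
# pixel is [400.0, 200.0]: the point lands right of and below the principal point
```

Camera calibration estimates K (and lens distortion, omitted here); registration, as in the paper's patient-specific template, supplies R and t.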
Affiliation(s)
- Deokgi Jeung, Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
- Kyunghwa Jung, Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea; Korea Research Institute of Standards and Science, Daejeon, South Korea
- Hyun-Joo Lee, Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, South Korea
- Jaesung Hong, Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea

12
Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023; 68. [PMID: 36580681 DOI: 10.1088/1361-6560/acaf23]
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. The types of AR visualization are divided into two categories: in situ visualization and non-in situ visualization. The content rendered in AR visualization varies widely. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in surgical fields. However, most AR applications have been evaluated through model experiments and animal experiments, with relatively few clinical experiments, indicating that current AR navigation methods are still in an early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as future development trends. Although AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
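Of the registration families this review lists, point-based rigid registration has a compact closed-form solution via SVD (the Kabsch/Horn method). The sketch below is a generic illustration of that method, not code from any of the reviewed systems:

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    estimated from paired 3D points via SVD (Kabsch/Horn method)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known transform from noiseless corresponding points.
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))                  # e.g. fiducial positions in CT space
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true                  # same fiducials in tracker space
R_est, t_est = rigid_register(src, dst)        # recovers R_true, t_true
```

In marker-based systems the paired points come from fiducials localized in both the image and patient coordinate frames; surface registration typically wraps a solver like this inside an iterative closest-point loop.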
Affiliation(s)
- Longfei Ma, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China

13
Abstract
Augmented reality (AR) is an innovative system that enhances the real world by superimposing virtual objects on reality. The aim of this study was to analyze the application of AR in medicine and which of its technical solutions are the most used. We carried out a scoping review of the articles published between 2019 and February 2022. The initial search yielded a total of 2649 articles. After applying filters, removing duplicates and screening, we included 34 articles in our analysis. The analysis of the articles highlighted that AR has been traditionally and mainly used in orthopedics in addition to maxillofacial surgery and oncology. Regarding the display application in AR, the Microsoft HoloLens Optical Viewer is the most used method. Moreover, for the tracking and registration phases, the marker-based method with a rigid registration remains the most used system. Overall, the results of this study suggested that AR is an innovative technology with numerous advantages, finding applications in several new surgery domains. Considering the available data, it is not possible to clearly identify all the fields of application and the best technologies regarding AR.
14
Augmented Reality: Mapping Methods and Tools for Enhancing the Human Role in Healthcare HMI. Appl Sci (Basel) 2022. [DOI: 10.3390/app12094295]
Abstract
Background: Augmented Reality (AR) represents an innovative technology to improve data visualization and strengthen human perception. Among Human–Machine Interaction (HMI) applications, medicine can benefit most from the adoption of these digital technologies. In this perspective, the literature on orthopedic surgery techniques based on AR was evaluated, focusing on identifying the limitations and challenges of AR-based healthcare applications, to support the research and development of further studies. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. To improve the review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were followed. Results: The authors selected sixty-two articles meeting the inclusion criteria, which were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and according to the surgical procedure used. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. This review also points further research toward problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.
15
Pan J, Yu D, Li R, Huang X, Wang X, Zheng W, Zhu B, Liu X. Multi-modality guidance-based surgical navigation for percutaneous endoscopic transforaminal discectomy. Comput Methods Programs Biomed 2021; 212:106460. [PMID: 34736173 DOI: 10.1016/j.cmpb.2021.106460]
Abstract
OBJECTIVE Fluoroscopic guidance is a critical step for the puncture procedure in percutaneous endoscopic transforaminal discectomy (PETD). However, two-dimensional observations of the three-dimensional anatomic structure suffer from the effects of projective simplification. To accurately assess the spatial relations between the patient's vertebral tissues and the puncture needle, surgeons need to acquire a considerable number of fluoroscopic images from different orientations. This process significantly increases the radiation risk for both the patient and surgeons. METHODS In this paper, we propose an augmented reality (AR) surgical navigation system for PETD based on multi-modality information, which combines fluoroscopy, optical tracking, and a depth camera. To register the fluoroscopic image with the intraoperative video, we design a lightweight non-invasive fiducial with markers and detect the markers using a deep learning method. The system can display the intraoperative video fused with the registered fluoroscopic images. We also present a self-adaptive calibration and transformation method between a 6-DOF optical tracking device and a depth camera, which operate in different coordinate systems. RESULTS With a substantially reduced frequency of fluoroscopic imaging, the system can accurately track and superimpose the virtual puncture needle on fluoroscopy images in real time. In in vivo animal experiments in the operating theatre, the system achieved an average positioning accuracy of 1.98 mm and an orientation accuracy of 1.19°. The clinical validation results show that the system significantly lowered the frequency of fluoroscopic imaging (42.7%) and reduced the radiation risk for both the patient and surgeons. CONCLUSION Coupled with the user study, both the quantitative and qualitative results indicate that our navigation system has the potential to be highly useful in clinical practice.
Compared with existing navigation systems, which usually require large, high-cost medical equipment such as the O-arm, cone-beam CT, or robots, our navigation system needs no special equipment and can be implemented with common operating-room equipment, such as a C-arm and a desktop computer, even in small hospitals.
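The coordinate-system bridging this abstract describes — expressing measurements from one device (e.g. the depth camera) in the frame of another (e.g. the optical tracker) — reduces to composing 4×4 homogeneous transforms once the calibration is known. A generic sketch, with placeholder rotation and translation values rather than the paper's actual calibration result:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder calibration: the depth-camera frame is the tracker frame
# translated by (100, 0, 50) mm, with identity rotation for simplicity.
T_cam_tracker = make_T(np.eye(3), np.array([100.0, 0.0, 50.0]))

# A tool tip reported by the optical tracker, in homogeneous coordinates.
p_tracker = np.array([10.0, 20.0, 30.0, 1.0])
p_cam = T_cam_tracker @ p_tracker              # same point in depth-camera coordinates

# Transforms invert and chain cleanly, e.g. round-tripping the point:
p_back = np.linalg.inv(T_cam_tracker) @ p_cam  # recovers p_tracker
```

Chaining such transforms (tracker → camera → fluoroscope image) is what lets the virtual needle be superimposed on the fluoroscopic view from a single calibration.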
Affiliation(s)
- Junjun Pan, State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; PENG CHENG Laboratory, Shenzhen 518000, China
- Dongfang Yu, State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Ranyang Li, State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; PENG CHENG Laboratory, Shenzhen 518000, China
- Xin Huang, The Pain Medicine Center, Peking University Third Hospital, Beijing, China
- Xinliang Wang, State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Wenhao Zheng, State Key Laboratory of Virtual Reality Technology and Systems, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
- Bin Zhu, The Pain Medicine Center, Peking University Third Hospital, Beijing, China
- Xiaoguang Liu, The Pain Medicine Center, Peking University Third Hospital, Beijing, China

16
Huang T, Li R, Li Y, Zhang X, Liao H. Augmented reality-based autostereoscopic surgical visualization system for telesurgery. Int J Comput Assist Radiol Surg 2021; 16:1985-1997. [PMID: 34363583 DOI: 10.1007/s11548-021-02463-5]
Abstract
PURPOSE The visualization of remote surgical scenes is the key to realizing the remote operation of surgical robots. However, current non-endoscopic surgical robot systems lack an effective visualization tool to offer sufficient surgical scene information and depth perception. METHODS We propose a novel autostereoscopic surgical visualization system integrating 3D intraoperative scene reconstruction, autostereoscopic 3D display, and augmented reality-based image fusion. The preoperative organ structure and the intraoperative surface point cloud are obtained from medical imaging and an RGB-D camera, respectively, and aligned by an automatic marker-free intraoperative registration algorithm. After registration, preoperative meshes with precalculated illumination and the intraoperative textured point cloud are blended in real time. Finally, the fused image is shown on a 3D autostereoscopic display device to achieve depth perception. RESULTS A prototype of the autostereoscopic surgical visualization system was built. The system had a horizontal image resolution of 1.31 mm, a vertical image resolution of 0.82 mm, an average rendering rate of 33.1 FPS, an average registration rate of 20.5 FPS, and average registration errors of approximately 3 mm. A telesurgical robot prototype based on the 3D autostereoscopic display was built. Quantitative evaluation experiments showed that our system achieved operational accuracy (1.79 ± 0.87 mm) similar to that of the conventional system (1.95 ± 0.71 mm), while having advantages in completion time (a 34.11% reduction) and path length (a 35.87% reduction). Post-experimental questionnaires indicated that the system was user-friendly for both novices and experts. CONCLUSION We propose a 3D surgical visualization system with augmented instruction and depth perception for telesurgery. The qualitative and quantitative evaluation results illustrate the accuracy and efficiency of the proposed system, which therefore shows great promise for robotic surgery and telesurgery.
Affiliation(s)
- Tianqi Huang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ruiyang Li, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yangxi Li, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xinran Zhang, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China