1
Kos TM, Colombo E, Bartels LW, Robe PA, van Doormaal TPC. Evaluation Metrics for Augmented Reality in Neurosurgical Preoperative Planning, Surgical Navigation, and Surgical Treatment Guidance: A Systematic Review. Oper Neurosurg (Hagerstown) 2023; 26:01787389-990000000-01007. [PMID: 38146941] [PMCID: PMC11008635] [DOI: 10.1227/ons.0000000000001009]
Abstract
BACKGROUND AND OBJECTIVE
Recent years have shown an advancement in the development of augmented reality (AR) technologies for preoperative visualization, surgical navigation, and intraoperative guidance in neurosurgery. However, proving added value for AR in clinical practice is challenging, partly because of a lack of standardized evaluation metrics. We performed a systematic review to provide an overview of the reported evaluation metrics for AR technologies in neurosurgical practice and to establish a foundation for the assessment and comparison of such technologies.
METHODS
PubMed, Embase, and Cochrane were searched systematically for publications on the assessment of AR for cranial neurosurgery on September 22, 2022. The findings were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
RESULTS
The systematic search yielded 830 publications; 114 were screened full text, and 80 were included for analysis. Among the included studies, 5% dealt with preoperative visualization using AR, with user perception as the most frequently reported metric. The majority (75%) researched AR technology for surgical navigation, with registration accuracy, clinical outcome, and time measurements as the most frequently reported metrics. The remaining 20% studied the use of AR for intraoperative guidance, with registration accuracy, task outcome, and user perception as the most frequently reported metrics.
CONCLUSION
For quality benchmarking of AR technologies in neurosurgery, evaluation metrics should be specific to the risk profile and clinical objectives of the technology. A key focus should be on using validated questionnaires to assess user perception; ensuring clear and unambiguous reporting of registration accuracy, precision, robustness, and system stability; and accurately measuring task performance in clinical studies. We provide an overview suggesting which evaluation metrics to use per AR application and innovation phase, aiming to improve the assessment of the added value of AR for neurosurgical practice and to facilitate its integration into the clinical workflow.
Affiliation(s)
- Tessa M. Kos
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Elisa Colombo
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- L. Wilbert Bartels
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Pierre A. Robe
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Tristan P. C. van Doormaal
- Department of Neurosurgery, Clinical Neuroscience Center, Universitätsspital Zürich, Zurich, Switzerland
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
2
Jung S, Moon Y, Kim J, Kim K. Deep Neural Network-Based Visual Feedback System for Nasopharyngeal Swab Sampling. Sensors (Basel) 2023; 23:8443. [PMID: 37896536] [PMCID: PMC10610820] [DOI: 10.3390/s23208443]
Abstract
During the coronavirus disease 2019 (COVID-19) pandemic, robot-based systems for swab sampling were developed to reduce the burden on healthcare workers and their risk of infection. Teleoperated sampling systems are especially valued because they fundamentally prevent contact with suspected COVID-19 patients. However, the limited field of view of the installed cameras prevents the operator from recognizing the position and deformation of the swab inserted into the nasal cavity, which greatly degrades operating performance. To overcome this limitation, this study proposes a visual feedback system that monitors and reconstructs the shape of a nasopharyngeal (NP) swab using augmented reality (AR). The sampling device contained three load cells and measured the interaction force applied to the swab, while the shape information was captured using a motion-tracking program. These datasets were used to train a one-dimensional convolutional neural network (1DCNN) model, which estimated the coordinates of three feature points of the swab in the 2D X-Y plane. Based on these points, the virtual shape of the swab, reflecting the curvature of the actual one, was reconstructed and overlaid on the visual display. The accuracy of the 1DCNN model was evaluated on a 2D plane under ten different bending conditions. The results demonstrate that the x-values of the predicted points show errors of under 0.590 mm for P0, while those of P1 and P2 show a biased error of about -1.5 mm with constant standard deviations. For the y-values, the error of all feature points under positive bending is uniformly under 1 mm, whereas the error under negative bending increases with the amount of deformation. Finally, experiments using a collaborative robot validated the system's ability to visualize the actual swab's position and deformation on camera images of 2D and 3D phantoms.
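The mapping described above (load-cell force signals in, 2D coordinates of three swab feature points out) can be sketched as a plain forward pass. All layer sizes, parameter names, and the random weights below are illustrative assumptions, not the authors' trained architecture.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution: x is (channels, length), w is (out_ch, in_ch, k)."""
    out_ch, in_ch, k = w.shape
    length = x.shape[1] - k + 1
    out = np.zeros((out_ch, length))
    for o in range(out_ch):
        for t in range(length):
            out[o, t] = np.sum(w[o] * x[:, t:t + k]) + b[o]
    return out

def predict_swab_points(force_seq, params):
    """Map a force time-series (3 load cells x T samples) to the (x, y)
    coordinates of three hypothetical feature points P0, P1, P2."""
    h = np.maximum(conv1d(force_seq, params["w1"], params["b1"]), 0.0)  # ReLU
    coords = params["w2"] @ h.reshape(-1) + params["b2"]  # 6 values: 3 points x (x, y)
    return coords.reshape(3, 2)

rng = np.random.default_rng(0)
T, k, hidden = 32, 5, 4
params = {
    "w1": rng.normal(size=(hidden, 3, k)) * 0.1,
    "b1": np.zeros(hidden),
    "w2": rng.normal(size=(6, hidden * (T - k + 1))) * 0.01,
    "b2": np.zeros(6),
}
points = predict_swab_points(rng.normal(size=(3, T)), params)
```

In the paper's setting the weights would come from training against motion-tracked ground truth; this sketch only shows the data flow.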
Affiliation(s)
- Suhun Jung
- Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Yonghwan Moon
- School of Mechanical Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, Republic of Korea
- Augmented Safety System with Intelligence Sensing and Tracking, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Jeongryul Kim
- Artificial Intelligence and Robot Institute, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Keri Kim
- Augmented Safety System with Intelligence Sensing and Tracking, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Division of Bio-Medical Science and Technology, University of Science and Technology, 217 Gajeong-ro, Yuseong-gu, Daejeon 34113, Republic of Korea
3
Ragnhildstveit A, Li C, Zimmerman MH, Mamalakis M, Curry VN, Holle W, Baig N, Uğuralp AK, Alkhani L, Oğuz-Uğuralp Z, Romero-Garcia R, Suckling J. Intra-operative applications of augmented reality in glioma surgery: a systematic review. Front Surg 2023; 10:1245851. [PMID: 37671031] [PMCID: PMC10476869] [DOI: 10.3389/fsurg.2023.1245851]
Abstract
Background
Augmented reality (AR) is increasingly being explored in neurosurgical practice. By visualizing patient-specific, three-dimensional (3D) models in real time, surgeons can improve their spatial understanding of complex anatomy and pathology, thereby optimizing intra-operative navigation, localization, and resection. Here, we aimed to capture applications of AR in glioma surgery, their current status, and their future potential.
Methods
A systematic review of the literature was conducted, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. PubMed, Embase, and Scopus electronic databases were queried from inception to October 10, 2022. Leveraging the Population, Intervention, Comparison, Outcomes, and Study design (PICOS) framework, study eligibility was evaluated in the qualitative synthesis. Data regarding AR workflow, surgical application, and associated outcomes were then extracted. The quality of evidence was additionally examined, using hierarchical classes of evidence in neurosurgery.
Results
The search returned 77 articles. Forty were subject to title and abstract screening, while 25 proceeded to full-text screening. Of these, 22 articles met the eligibility criteria and were included in the final review. During abstraction, studies were classified as "development" or "intervention" based on their primary aims. Overall, AR was qualitatively advantageous, owing to enhanced visualization of gliomas and critical structures, frequently aiding maximal safe resection. Non-rigid applications were also useful in disclosing and compensating for intra-operative brain shift. Nevertheless, there was high variance in registration methods and measurements, which considerably impacted projection accuracy. Most studies were of low-level evidence, yielding heterogeneous results.
Conclusions
AR has increasing potential for glioma surgery, with the capacity to positively influence the onco-functional balance. However, technical and design limitations are readily apparent. The field must consider the importance of consistency and replicability, as well as the level of evidence, to effectively converge on standard approaches that maximize patient benefit.
Affiliation(s)
- Anya Ragnhildstveit
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Chao Li
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, England
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England
- Michail Mamalakis
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Victoria N. Curry
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
- Willis Holle
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Physics and Astronomy, The University of Utah, Salt Lake City, UT, United States
- Noor Baig
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, United States
- Layth Alkhani
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Biology, Stanford University, Stanford, CA, United States
- Rafael Romero-Garcia
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Instituto de Biomedicina de Sevilla (IBiS) HUVR/CSIC/Universidad de Sevilla/CIBERSAM, ISCIII, Dpto. de Fisiología Médica y Biofísica
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge, England
4
Tsui D, Jo M, Nguyen B, Ahadian F, Talke FE. Optical Surgical Navigation: A Promising Low-cost Alternative. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082918] [DOI: 10.1109/embc40787.2023.10340384]
Abstract
State-of-the-art computer-assisted surgery relies on infrared-based cameras for precise positional measurements. However, the cost of purchasing these systems acts as a barrier for smaller healthcare facilities to adopt them. Recently, low-cost optical tracking with cameras has emerged as a promising alternative, but differences in operating room conditions and patient anatomy can cause inconsistencies between procedures. Therefore, it is essential to identify and evaluate individual factors that may affect a procedure. In this study, we evaluate fiducial ArUco markers as a low-cost alternative to traditional markers. To evaluate their effectiveness, we designed a ground truth testing platform, which enables us to measure the real-time difference between the predicted and actual positions. We investigated the effects of warping, line-of-sight obstruction, and operating room lighting as variables that could influence marker tracking in the operating room. Each variable was isolated and simplified to quantifiable modifications to the physical marker and X-Y platform environment. We find that our navigation system is a promising approach for use in computer-navigated surgery, and future work will focus on implementing image processing techniques to improve the accuracy of optical marker tracking.
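The ground-truth platform described above reduces to comparing the tracker's predicted marker positions against known positions on the X-Y stage. A minimal sketch of that error measurement, with made-up coordinates standing in for real tracking output:

```python
import numpy as np

def tracking_errors(predicted, actual):
    """Per-frame Euclidean distance (mm) between predicted and ground-truth
    marker positions; both are (N, 2) arrays on the X-Y platform."""
    d = np.linalg.norm(np.asarray(predicted, float) - np.asarray(actual, float), axis=1)
    return d.mean(), d.std()

# Hypothetical data: three ground-truth platform positions and the
# corresponding (slightly off) positions reported by marker tracking.
actual = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
predicted = actual + np.array([[0.5, 0.0], [0.0, -0.5], [0.3, 0.4]])
mean_err, std_err = tracking_errors(predicted, actual)
```

Each experimental variable (warping, occlusion, lighting) would be isolated and the same statistic recomputed per condition.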
5
Kögl FV, Léger É, Haouchine N, Torio E, Juvekar P, Navab N, Kapur T, Pieper S, Golby A, Frisken S. A Tool-free Neuronavigation Method based on Single-view Hand Tracking. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 11:1307-1315. [PMID: 37457380] [PMCID: PMC10348700] [DOI: 10.1080/21681163.2022.2163428]
Abstract
This work presents a novel tool-free neuronavigation method that can be used with a single RGB commodity camera. Compared with freehand craniotomy placement methods, the proposed system is more intuitive and less error-prone. The proposed method also has several advantages over standard neuronavigation platforms. First, it has a much lower cost, since it does not require an optical tracking camera or an electromagnetic field generator, which are typically the most expensive parts of a neuronavigation system, making it much more accessible. Second, it requires minimal setup, meaning that it can be performed at the bedside and in circumstances where using a standard neuronavigation system is impractical. Our system relies on machine-learning-based hand pose estimation that acts as a proxy for optical tool tracking, enabling a 3D-3D pre-operative to intra-operative registration. Qualitative assessment from clinical users showed that the concept is clinically relevant. Quantitative assessment showed that, on average, a target registration error (TRE) of 1.3 cm can be achieved. Furthermore, the system is framework-agnostic, meaning that future improvements to hand-tracking frameworks would directly translate to higher accuracy.
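The 3D-3D registration and the TRE reported above can be illustrated with a standard least-squares rigid alignment (Kabsch/SVD) followed by the TRE computation. This is a generic sketch of the metric, not the paper's hand-tracking pipeline; the fiducial and target coordinates are synthetic.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t (Kabsch)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def target_registration_error(R, t, targets_pre, targets_intra):
    """Distance between registered pre-op targets and their intra-op positions."""
    mapped = (R @ np.asarray(targets_pre, float).T).T + t
    return np.linalg.norm(mapped - np.asarray(targets_intra, float), axis=1)

# Synthetic check: a known 30-degree rotation about z plus a translation.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
fiducials = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.], [0., 0., 100.]])
R, t = rigid_register(fiducials, fiducials @ R_true.T + t_true)
target = np.array([[30.0, 40.0, 50.0]])
tre = target_registration_error(R, t, target, target @ R_true.T + t_true)
```

With noise-free correspondences the recovered transform matches the ground truth and the TRE is essentially zero; real hand-tracking noise is what produces the centimetre-scale TRE reported in the study.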
Affiliation(s)
- Fryderyk Victor Kögl
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Étienne Léger
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nazim Haouchine
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Erickson Torio
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Parikshit Juvekar
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Tina Kapur
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Steve Pieper
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Isomics, Inc., Cambridge, MA, USA
- Alexandra Golby
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
6
de Almeida AGC, Fernandes de Oliveira Santos B, Oliveira JLM. A Neuronavigation System Using a Mobile Augmented Reality Solution. World Neurosurg 2022; 167:e1261-e1267. [DOI: 10.1016/j.wneu.2022.09.014]
Abstract
BACKGROUND
Image-guided surgery has shown great utility in neurosurgery, especially in allowing for more accurate surgical planning and navigation. The current gold standard for image-guided neurosurgery is neuronavigation, which provides millimetric accuracy on such tasks. However, these approaches often require a complicated setup and have a high cost, hindering their potential in low- and middle-income countries. The aim of this study was to develop and evaluate the performance of a mobile-based augmented reality neuronavigation solution under different conditions in a preclinical environment.
METHODS
The application was developed using the Swift programming language and was tested on a replica of a human scalp under variable lighting, with different numbers of registration points and target point position conditions. For each condition, reference points were input into the application, and the target points were computed for 10 iterations. The mean registration error and target error were used to assess the performance of the application.
RESULTS
In the best-case scenario, the proposed solution had a mean target error of 2.6 ± 1.6 mm.
CONCLUSIONS
Our approach provides a viable, low-cost, easy-to-use, portable method for locating points on the scalp surface with an accuracy of 2.6 ± 1.6 mm in the best-case scenario.
Affiliation(s)
- Bruno Fernandes de Oliveira Santos
- Health Sciences Graduate Program, Federal University of Sergipe, Aracaju, Sergipe, Brazil; Department of Neurosurgery, Fundação de Beneficência Hospital de Cirurgia, Aracaju, Sergipe, Brazil
- Joselina L. M. Oliveira
- Department of Medicine, Federal University of Sergipe, Aracaju, Sergipe, Brazil; Health Sciences Graduate Program, Federal University of Sergipe, Aracaju, Sergipe, Brazil
7
Chiou SY, Zhang ZY, Liu HL, Yan JL, Wei KC, Chen PY. Augmented Reality Surgical Navigation System for External Ventricular Drain. Healthcare (Basel) 2022; 10:1815. [PMID: 36292263] [PMCID: PMC9601392] [DOI: 10.3390/healthcare10101815]
Abstract
Augmented reality surgery systems are playing an increasing role in the operating room, but applying such systems to neurosurgery presents particular challenges. In addition to using augmented reality technology to display the surgical target position in 3D in real time, the application must also display the scalpel entry point and scalpel orientation, with accurate superposition on the patient. To improve the intuitiveness, efficiency, and accuracy of external ventricular drain (EVD) surgery, this paper proposes an augmented reality surgical navigation system which accurately superimposes the surgical target position, scalpel entry point, and scalpel direction on a patient's head and displays these data on a tablet. The accuracy of the optical measurement system (NDI Polaris Vicra) was first independently tested, and then complemented by the design of functions to help the surgeon quickly identify the surgical target position and determine the preferred entry point. A tablet PC was used to display the superimposed images of the surgical target, entry point, and scalpel on top of the patient, allowing for correct scalpel orientation. Digital Imaging and Communications in Medicine (DICOM) results from the patient's computed tomography were used to create a phantom and its associated AR model. This model was then imported into the application, which was executed on the tablet. In the preoperative phase, the technician first spent 5-7 min superimposing the virtual image of the head and the scalpel. The surgeon then took 2 min to identify the intended target position and entry point on the tablet, which then dynamically displayed the superimposed image of the head, target position, entry point position, and scalpel (including the scalpel tip and orientation). Multiple experiments were successfully conducted on the phantom, along with six practical trials of clinical neurosurgical EVD. In the 2D-plane-superposition model, the optical measurement system (NDI Polaris Vicra) provided highly accurate visualization (2.01 ± 1.12 mm). In hospital-based clinical trials, the average technician preparation time was 6 min, while the surgeon required an average of 3.5 min to set the target and entry-point positions and accurately overlay the orientation with an NDI surgical stick. In the preparation phase, the average time required for DICOM-formatted image processing and program import was 120 ± 30 min. The accuracy of the designed augmented reality optical surgical navigation system met clinical requirements and can provide a visual and intuitive guide for neurosurgeons. The surgeon can use the tablet application to obtain real-time DICOM-formatted images of the patient, change the position of the surgical entry point, and instantly obtain an updated surgical path and angle. The proposed design can be used as the basis for various augmented reality brain surgery navigation systems in the future.
Affiliation(s)
- Shin-Yan Chiou
- Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan
- Department of Nuclear Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
- Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Zhi-Yue Zhang
- Department of Electrical Engineering, College of Engineering, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan
- Hao-Li Liu
- Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan
- Jiun-Lin Yan
- Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- Kuo-Chen Wei
- Department of Neurosurgery, New Taipei City TuCheng Hospital, New Taipei City 236, Taiwan
- Pin-Yuan Chen
- Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204, Taiwan
- School of Medicine, Chang Gung University, Kwei-Shan, Taoyuan 333, Taiwan
- Correspondence: Tel. +886-2-2431-3131
8
Zhang R, Alpdogan S, Kong S, Muhammad S. Application of computer-aided image reconstruction and image guide in parasagittal meningioma resection. Egypt J Neurosurg 2022. [DOI: 10.1186/s41984-022-00157-x]
Abstract
Background
In recent years, smaller-sized (diameter < 2.5 cm) meningiomas are increasingly being diagnosed owing to the growing use of cranial imaging. Symptomatic meningiomas need to be removed surgically. It is therefore extremely important to locate the lesion exactly in order to tailor the craniotomy, especially if a neuro-navigation system is not available. Many hospitals in underdeveloped countries cannot afford the high cost of neuro-navigation equipment. Hence, it is relevant to find low-cost, effective methods of lesion localization for surgery.
Methods
Localization markers placed in advance allow preoperative CT images of the patient to be acquired and used to create and calculate a three-dimensional (3D) virtual graph on a computer. With the 3D graph, the spatial distance of the tumor from the markers is calculated and the tumor location is projected onto the scalp using the Pythagorean theorem for right triangles. This enables precise preoperative localization of intracranial microlesions.
Results
The location of the tumor was consistent with that of the pre-operative virtual image, and the craniotomy was exact. The patient was discharged 3 days later without any neurological deficits.
Conclusions
This method is simple, reliable, inexpensive, and accurate in localizing small lesions. It can partially compensate for the lack of neuro-navigation and is suitable for widespread application in hospitals in developing countries.
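The distance geometry in the Methods reduces to a right-triangle calculation: if the straight-line distance from a scalp marker to the tumor and the tumor's depth below the (locally flat) scalp are known, the in-plane offset of the scalp projection follows from the Pythagorean theorem. A minimal sketch with illustrative numbers, not the paper's measurements:

```python
import math

def scalp_offset(marker_to_tumor_mm, tumor_depth_mm):
    """In-plane distance (mm) from the marker to the tumor's scalp projection,
    treating the local scalp as a flat plane (right-triangle assumption)."""
    if tumor_depth_mm > marker_to_tumor_mm:
        raise ValueError("depth cannot exceed the straight-line distance")
    return math.sqrt(marker_to_tumor_mm ** 2 - tumor_depth_mm ** 2)

# Example: 50 mm marker-to-tumor distance, 30 mm depth -> 40 mm along the scalp.
offset = scalp_offset(50.0, 30.0)
```

Repeating this for two or three markers and intersecting the resulting offsets pins down the projection point on the scalp.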
9
Real-time augmented reality application in presurgical planning and lesion scalp localization by a smartphone. Acta Neurochir (Wien) 2022; 164:1069-1078. [PMID: 34448914] [DOI: 10.1007/s00701-021-04968-z]
Abstract
OBJECTIVE
A smartphone augmented reality (AR) application (app) was explored for clinical use in presurgical planning and lesion scalp localization.
METHODS
We programmed an AR app on a smartphone. The accuracy of the AR app was tested on a 3D-printed head model, using the Euclidean distance of displacement of virtual objects. For clinical validation, 14 patients with brain tumors were included in the study. Preoperative MRI images were used to generate 3D models for AR content. The 3D models were then transferred to the smartphone AR app. Tumor scalp localization was marked, and a surgical corridor was planned on the patient's head by viewing AR images on the smartphone screen. Standard neuronavigation was applied to evaluate the accuracy of the smartphone. Max-margin distance (MMD) and area overlap ratio (AOR) were measured to quantitatively validate the clinical accuracy of the smartphone AR technique.
RESULTS
In model validation, the total mean Euclidean distance of virtual object displacement using the smartphone AR app was 4.7 ± 2.3 mm. In clinical validation, the mean duration of AR app usage was 168.5 ± 73.9 s. The total mean MMD was 6.7 ± 3.7 mm, and the total mean AOR was 79%.
CONCLUSIONS
The smartphone AR app provides a new way to observe intracranial anatomy in situ, and it makes surgical planning more intuitive and efficient. Localization accuracy is satisfactory for lesions larger than 15 mm.
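The abstract does not give formulas for its metrics, but the area overlap ratio admits a straightforward reading: the fraction of the neuronavigation-defined lesion outline that the AR-projected outline covers. A sketch of that reading on synthetic pixel masks (the masks and the definition are assumptions for illustration):

```python
import numpy as np

def area_overlap_ratio(ar_mask, nav_mask):
    """Fraction of the neuronavigation lesion area covered by the AR outline
    (one plausible definition of AOR; masks are boolean pixel arrays)."""
    a = np.asarray(ar_mask, bool)
    b = np.asarray(nav_mask, bool)
    return np.logical_and(a, b).sum() / b.sum()

# Two 4x4-pixel square regions on a 10x10 grid, offset by 2 pixels:
nav = np.zeros((10, 10), bool); nav[0:4, 0:4] = True   # 16 px reference
ar = np.zeros((10, 10), bool);  ar[0:4, 2:6] = True    # overlaps 8 px
aor = area_overlap_ratio(ar, nav)
```

An AOR of 1.0 would mean the AR projection fully covers the neuronavigation outline; the study's mean of 79% corresponds to partial coverage.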
10
Moon HC, Park SJ, Kim YD, Kim KM, Kang H, Lee EJ, Kim MS, Kim JW, Kim YH, Park CK, Kim YG, Dho YS. Navigation of frameless fixation for gamma knife radiosurgery using fixed augmented reality. Sci Rep 2022; 12:4486. [PMID: 35296720] [PMCID: PMC8927150] [DOI: 10.1038/s41598-022-08390-y]
Abstract
Augmented reality (AR) offers a new approach to medical treatment. We aimed to evaluate frameless (mask) fixation navigation using a 3D-printed patient model with fixed-AR technology for gamma knife radiosurgery (GKRS). Fixed-AR navigation was developed using the inside-out method with visual-inertial odometry algorithms, and a flexible Quick Response (QR) marker was created for object-feature recognition. Virtual 3D patient models for AR rendering were created via 3D scanning utilizing TrueDepth and cone-beam computed tomography (CBCT) to generate a new GammaKnife Icon™ model. A 3D-printed patient model included fiducial markers, and the virtual 3D patient models were used to validate registration accuracy. Registration accuracy between the initial frameless fixation and fixed-AR-navigated re-fixation was validated visually and quantitatively. The quantitative validation used set-up errors, fiducial marker coordinates, and high-definition motion management (HDMM) values. The 3D-printed model and the virtual models overlapped correctly under frameless fixation. Virtual models from both 3D scanning and CBCT were sufficient to support the navigated frameless re-fixation. Although the CBCT virtual model consistently delivered more accurate results, 3D scanning was sufficient. Frameless re-fixation accuracy navigated in virtual models had mean set-up errors within 1 mm and 1.5° in all axes. Mean fiducial marker differences from coordinates in virtual models were within 2.5 mm in all axes, and mean 3D errors were within 3 mm. Mean HDMM difference values in virtual models were within 1.5 mm of initial HDMM values. The variability of fixed-AR navigation is small enough to allow repositioning of the frameless fixation without CBCT scanning when treating patients undergoing fractionated GKRS for multiple large (>3 cm) metastatic lesions who have difficulty enduring long beam-on times. This system could be applied to novel GKRS navigation for frameless fixation with reduced preparation time.
Affiliation(s)
- Hyeong Cheol Moon
- Department of Neurosurgery, Chungbuk National University Hospital, Cheongju, Republic of Korea
- Kyung Min Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Ho Kang
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Eun Jung Lee
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Min-Sung Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Jin Wook Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Yong Hwy Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Chul-Kee Park
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Young Gyu Kim
- Department of Neurosurgery, Chungbuk National University Hospital, Cheongju, Republic of Korea; Department of Neurosurgery, Chungbuk National University College of Medicine, Cheongju, Republic of Korea
- Yun-Sik Dho
- Department of Neurosurgery, Chungbuk National University Hospital, Cheongju, Republic of Korea; Department of Neurosurgery, Chungbuk National University College of Medicine, Cheongju, Republic of Korea
Collapse
|
11
|
Pan J, Yu D, Li R, Huang X, Wang X, Zheng W, Zhu B, Liu X. Multi-modality guidance based surgical navigation for percutaneous endoscopic transforaminal discectomy. Comput Methods Programs Biomed 2021; 212:106460. [PMID: 34736173] [DOI: 10.1016/j.cmpb.2021.106460]
Abstract
OBJECTIVE Fluoroscopic guidance is a critical step for the puncture procedure in percutaneous endoscopic transforaminal discectomy (PETD). However, two-dimensional observations of the three-dimensional anatomic structure suffer from the effects of projective simplification. To accurately assess the spatial relations between the patient's vertebral tissues and the puncture needle, surgeons need to acquire a considerable number of fluoroscopic images from different orientations. This process significantly increases the radiation risk for both the patient and the surgeons. METHODS In this paper, we propose an augmented reality (AR) surgical navigation system for PETD based on multi-modality information, combining fluoroscopy, optical tracking, and a depth camera. To register the fluoroscopic image with the intraoperative video, we design a lightweight non-invasive fiducial with markers and detect the markers with a deep learning method. The system displays the intraoperative video fused with the registered fluoroscopic images. We also present a self-adaptive calibration and transformation method between a 6-DOF optical tracking device and a depth camera, which operate in different coordinate systems. RESULTS With a substantially reduced frequency of fluoroscopy imaging, the system can accurately track and superimpose the virtual puncture needle on fluoroscopy images in real time. In in vivo animal experiments in the operating theatre, the system reached an average positioning accuracy of 1.98 mm and an orientation accuracy of 1.19°. In clinical validation, the system significantly lowered the frequency of fluoroscopy imaging (by 42.7%) and reduced the radiation risk for both the patient and the surgeons. CONCLUSION Together with a user study, both the quantitative and qualitative results indicate that our navigation system has the potential to be highly useful in clinical practice.
Compared with existing navigation systems, which are usually equipped with a variety of large, high-cost medical devices such as the O-arm, cone-beam CT, and robots, our navigation system needs no special equipment and can be implemented with equipment commonly available in the operating room, such as a C-arm and a desktop computer, even in small hospitals.
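The calibration step described in this abstract, aligning a 6-DOF optical tracker and a depth camera that report points in different coordinate frames, typically rests on a least-squares rigid alignment of corresponding points. As an illustrative sketch (the standard Kabsch/Umeyama solution in NumPy, not the paper's self-adaptive method; all data here are synthetic):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Kabsch/Umeyama). src, dst: (N, 3) arrays of corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Reflection correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: the same fiducials seen by a "depth camera" and an
# "optical tracker" whose frames differ by a known rotation and offset.
rng = np.random.default_rng(0)
pts_cam = rng.uniform(-0.5, 0.5, size=(10, 3))
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.05])
pts_trk = pts_cam @ R_true.T + t_true

R, t = rigid_transform(pts_cam, pts_trk)
residual = np.linalg.norm(pts_cam @ R.T + t - pts_trk, axis=1).max()
```

With noise-free correspondences the recovered transform matches the ground truth to machine precision; with real tracking data the residual becomes the registration error that systems like this one report.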
12
Chidambaram S, Stifano V, Demetres M, Teyssandier M, Palumbo MC, Redaelli A, Olivi A, Apuzzo MLJ, Pannullo SC. Applications of augmented reality in the neurosurgical operating room: A systematic review of the literature. J Clin Neurosci 2021; 91:43-61. [PMID: 34373059] [DOI: 10.1016/j.jocn.2021.06.032]
Abstract
Advancements in imaging techniques are key forces of progress in neurosurgery. The importance of accurate visualization of intraoperative anatomy cannot be overemphasized and is commonly delivered through traditional neuronavigation. Augmented Reality (AR) technology has been tested and applied widely in various neurosurgical subspecialties in intraoperative, clinical use and shows promise for the future. This systematic review of the literature explores the ways in which AR technology has been successfully brought into the operating room (OR) and incorporated into clinical practice. A comprehensive literature search was performed in the following databases from inception to April 2020: Ovid MEDLINE, Ovid EMBASE, and The Cochrane Library. Retrieved studies were then screened for eligibility against predefined inclusion/exclusion criteria. A total of 54 articles were included in this systematic review. The studies were subgrouped into brain and spine subspecialties and analyzed for their incorporation of AR in the neurosurgical clinical setting. AR technology has the potential to greatly enhance intraoperative visualization and guidance in neurosurgery beyond traditional neuronavigation systems. However, there are several key challenges to scaling the use of this technology and bringing it into standard operative practice, including accurate and efficient brain segmentation of magnetic resonance imaging (MRI) scans, accounting for brain shift, reducing coregistration errors, and improving the AR device hardware. There is also exciting potential for future work combining AR with multimodal imaging techniques and artificial intelligence to further enhance its impact in neurosurgery.
13
Fernandes de Oliveira Santos B, de Araujo Paz D, Fernandes VM, Dos Santos JC, Chaddad-Neto FEA, Sousa ACS, Oliveira JLM. Minimally invasive supratentorial neurosurgical approaches guided by Smartphone app and compass. Sci Rep 2021; 11:6778. [PMID: 33762597] [PMCID: PMC7991647] [DOI: 10.1038/s41598-021-85472-3]
Abstract
The precise location in the scalp of specifically planned points can help to achieve less invasive approaches. This study aims to develop a smartphone app, evaluate the precision and accuracy of the developed tool, and describe a series of cases using the referred technique. The application was developed with the React Native framework for Android and iOS. A phantom was printed based on the patient's CT scan, which was used for the calculation of accuracy and precision of the method. The points of interest were marked with an "x" on the patient's head, with the aid of the app and a compass attached to a skin marker pen. Then, two experienced neurosurgeons checked the plausibility of the demarcations based on the anatomical references. Both evaluators marked the frontal, temporal and parietal targets with a difference of less than 5 mm from the corresponding intended point, in all cases. The overall average accuracy observed was 1.6 ± 1.0 mm. The app was used in the surgical planning of trepanations for ventriculoperitoneal (VP) shunts and for drainage of abscesses, and in the definition of craniotomies for meningiomas, gliomas, brain metastases, intracranial hematomas, cavernomas, and arteriovenous malformation. The sample consisted of 88 volunteers who exhibited the following pathologies: 41 (46.6%) had brain tumors, 17 (19.3%) had traumatic brain injuries, 16 (18.2%) had spontaneous intracerebral hemorrhages, 2 (2.3%) had cavernomas, 1 (1.1%) had arteriovenous malformation (AVM), 4 (4.5%) had brain abscesses, and 7 (7.9%) had a VP shunt placement. In cases approached by craniotomy, with the exception of AVM, straight incisions and minicraniotomy were performed. Surgical planning with the aid of the NeuroKeypoint app is feasible and reliable. It has enabled neurological surgeries by craniotomy and trepanation in an accurate, precise, and less invasive manner.
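The accuracy figure reported in this abstract (1.6 ± 1.0 mm) is the kind of statistic commonly computed as the mean and standard deviation of per-point Euclidean errors between marked and intended target positions. A minimal sketch with hypothetical phantom coordinates (the function name and data are illustrative, not taken from the study):

```python
import numpy as np

def point_marking_error(marked, intended):
    """Per-point Euclidean error (mm) between marked and intended
    target positions; returns (mean, sample std) across all markings."""
    marked = np.asarray(marked, dtype=float)
    intended = np.asarray(intended, dtype=float)
    errors = np.linalg.norm(marked - intended, axis=1)
    return errors.mean(), errors.std(ddof=1)

# Hypothetical markings (mm) of three intended scalp targets on a phantom.
intended = [[0, 0, 0], [40, 10, 5], [-20, 35, 8]]
marked   = [[1.2, -0.5, 0.3], [41.0, 10.8, 4.1], [-19.1, 36.2, 8.9]]
mean_err, sd_err = point_marking_error(marked, intended)
```

Reporting the distribution of these per-target errors, rather than a single number, is what lets a study claim that every marking fell within a clinically acceptable threshold (here, 5 mm).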
14
Mondal SB, Achilefu S. Virtual and Augmented Reality Technologies in Molecular and Anatomical Imaging. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00066-1]
15
Augmented Reality Interface for Complex Anatomy Learning in the Central Nervous System: A Systematic Review. J Healthc Eng 2020; 2020:8835544. [PMID: 32963749] [PMCID: PMC7501559] [DOI: 10.1155/2020/8835544]
Abstract
Health care is being transformed by the growing use of medical information systems, electronic records, and smart wearable and handheld devices. The central nervous system controls the activities of the mind and the human body. Rapid medical and computational advances concerning the central nervous system enable practitioners and researchers to extract and visualize insight from these systems. Augmented reality incorporates virtual and real objects, running interactively in real time in a real environment, and its application to the central nervous system is a thought-provoking task. Gesture interaction-based augmented reality for the central nervous system has enormous potential for reducing the cost of care, improving its quality, and reducing waste and error. To make this process smooth, it would be helpful to present a comprehensive report of the available state-of-the-art work so that doctors and practitioners can easily use it in the decision-making process. This study summarizes the published material on gesture interaction-based augmented reality approaches in the central nervous system. It follows a systematic literature review protocol, which systematically collects, analyses, and derives facts from the collected papers, covering material published over 10 years. Based on predefined inclusion, exclusion, and quality criteria, 78 papers were selected and included. The study identifies work on augmented reality in the nervous system, its applications and techniques, and gesture interaction approaches in the nervous system.
The findings show a year-on-year rise in articles and numerous existing studies relating augmented reality and gesture interaction approaches to different systems of the human body, especially the nervous system. By organizing and summarizing the existing published work on augmented reality, this research helps practitioners and researchers survey most of the existing studies on augmented reality-based gesture interaction approaches for the nervous system, which can then serve as support for future work on complex anatomy learning.
16
Liu T, Tai Y, Zhao C, Wei L, Zhang J, Pan J, Shi J. Augmented reality in neurosurgical navigation: a survey. Int J Med Robot 2020; 16:e2160. [PMID: 32890440] [DOI: 10.1002/rcs.2160]
Abstract
BACKGROUND Neurosurgery has exceptionally high requirements for minimal invasiveness and safety. This survey analyzes the practical application of AR in neurosurgical navigation and describes future trends in augmented reality neurosurgical navigation systems. METHODS We searched the keywords "augmented reality", "virtual reality", "neurosurgery", "surgical simulation", "brain tumor surgery", "neurovascular surgery", "temporal bone surgery", and "spinal surgery" in Google Scholar, World Neurosurgery, PubMed and Science Direct, and collected 85 articles published over the past five years in areas related to this survey. RESULTS A detailed study of the application of AR in neurosurgery found that AR is constantly improving the overall efficiency of doctor training and treatment and can help neurosurgeons learn and practice surgical procedures with zero risk. CONCLUSIONS Neurosurgical navigation is essential in neurosurgery. Despite certain technical limitations, it remains a necessary tool in the pursuit of maximum safety and minimal invasiveness.
17
Léger É, Reyes J, Drouin S, Popa T, Hall JA, Collins DL, Kersten-Oertel M. MARIN: an open-source mobile augmented reality interactive neuronavigation system. Int J Comput Assist Radiol Surg 2020; 15:1013-1021. [DOI: 10.1007/s11548-020-02155-6]
18
Tornari C, Tedla M, Surda P. Rhinology: Simulation Training (Part 1). Curr Otorhinolaryngol Rep 2020. [DOI: 10.1007/s40136-020-00272-z]
Abstract
Purpose of Review
Recently, there has been an expansion of novel technologies in simulation training. Different models target different aspects of training. The aim of this review was to examine existing evidence about training simulators in rhinology, their incorporation into real training programmes and translation of these skills into the operating room. The first part focuses on the virtual and augmented reality simulators. The second part describes the role of physical (i.e. non-computer-based) models of endoscopic sinus surgery.
Recent Findings
Virtual reality simulators are still evolving and facing challenges due to their inherent cost and lack of realism in terms of the type of haptic feedback they provide. On the other hand, augmented reality seems to be a promising platform with a growing number of applications in preoperative planning, intraoperative navigation and education. Limitations in validity, registration error and level of evidence prevent the adoption of augmented reality on a wider scale or in clinical practice.
Summary
Simulation training is a maturing field that shows reasonable evidence for a number of models. The incorporation of these models into real training programmes requires further evaluation to ensure that training opportunities are being maximized.
19
Using a Smartphone as an Exoscope Where an Operating Microscope is not Available. World Neurosurg 2019; 132:114-117. [DOI: 10.1016/j.wneu.2019.08.137]
20
Carl B, Bopp M, Saß B, Voellger B, Nimsky C. Implementation of augmented reality support in spine surgery. Eur Spine J 2019; 28:1697-1711. [DOI: 10.1007/s00586-019-05969-4]
21
Tan SY, Arshad H, Abdullah A. Distinctive accuracy measurement of binary descriptors in mobile augmented reality. PLoS One 2019; 14:e0207191. [PMID: 30605474] [PMCID: PMC6317785] [DOI: 10.1371/journal.pone.0207191]
Abstract
Mobile Augmented Reality (MAR) requires a descriptor that is robust to changes in viewing conditions in real-time applications. Many different descriptors have been proposed in the literature, for example floating-point descriptors (SIFT and SURF) and binary descriptors (BRIEF, ORB, BRISK and FREAK). According to the literature, floating-point descriptors are not suitable for real-time applications because their operating speed does not satisfy real-time constraints. Binary descriptors have been developed with compact sizes and lower computation requirements. However, it is unclear which binary descriptors are most appropriate for MAR. Hence, a distinctive and efficient accuracy measurement of four state-of-the-art binary descriptors, namely BRIEF, ORB, BRISK and FREAK, was performed using the Mikolajczyk and ALOI datasets to identify the most appropriate descriptor for MAR in terms of computation time and robustness to brightness, scale and rotation changes. The results showed that FREAK is the most appropriate descriptor for MAR, as it can produce applications that are efficient (shortest computation time) and robust to scale, rotation and brightness changes.
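The speed advantage of binary descriptors that this comparison measures comes from their matching step: similarity is a Hamming distance, one XOR plus a popcount per descriptor pair. A toy sketch with random 256-bit "descriptors" (illustrative only; real BRIEF/ORB/BRISK/FREAK descriptors are computed from image patches):

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit strings stored as ints."""
    return bin(a ^ b).count("1")

def match(query, database):
    """Return (index, distance) of the nearest database descriptor."""
    return min(enumerate(hamming(query, d) for d in database),
               key=lambda pair: pair[1])

random.seed(42)
database = [random.getrandbits(256) for _ in range(1000)]

# A query descriptor: database entry 17 with 5 bits flipped, standing in
# for the appearance change caused by a new viewpoint or lighting.
query = database[17]
for bit in random.sample(range(256), 5):
    query ^= 1 << bit

idx, dist = match(query, database)
```

Because unrelated 256-bit descriptors differ in roughly half their bits, the slightly corrupted query still matches its original entry by a wide margin; floating-point descriptors need a Euclidean distance per pair instead, which is what makes them slower for the same task.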
22
Léger É, Drouin S, Collins DL, Popa T, Kersten-Oertel M. Quantifying attention shifts in augmented reality image-guided neurosurgery. Healthc Technol Lett 2017; 4:188-192. [PMID: 29184663] [PMCID: PMC5683248] [DOI: 10.1049/htl.2017.0062]
Abstract
Image-guided surgery (IGS) has allowed for more minimally invasive procedures, leading to better patient outcomes, reduced risk of infection, less pain, shorter hospital stays and faster recoveries. One drawback that has emerged with IGS is that the surgeon must shift their attention from the patient to the monitor for guidance, yet both cognitive and motor tasks are negatively affected by attention shifts. Augmented reality (AR), which merges the real-world surgical scene with preoperative virtual patient images and plans, has been proposed as a solution to this drawback. In this work, we studied the impact of two different types of AR IGS set-ups (mobile AR and desktop AR) and traditional navigation on attention shifts for the specific task of craniotomy planning. We found a significant difference in the time taken to perform the task and in attention shifts between traditional navigation and the AR set-ups, but no significant difference between the different AR set-ups. With mobile AR, however, users felt that the system was easier to use and that their performance was better. These results suggest that regardless of where the AR visualisation is shown to the surgeon, AR may reduce attention shifts, leading to more streamlined and focused procedures.
23
Presurgical Planning for Supratentorial Lesions with Free Slicer Software and Sina App. World Neurosurg 2017; 106:193-197. [DOI: 10.1016/j.wneu.2017.06.146]
24
Hernandez D, Garimella R, Eltorai AEM, Daniels AH. Computer-assisted Orthopaedic Surgery. Orthop Surg 2017; 9:152-158. [PMID: 28589561] [PMCID: PMC6584434] [DOI: 10.1111/os.12323]
Abstract
Nowadays, operating rooms can be inefficient and overcrowded. Patient data and images are at times not well integrated and displayed in a timely fashion. This lack of coordination may cause further reductions in efficiency, jeopardize patient safety, and increase costs. Fortunately, technology has much to offer the surgical disciplines, and recent and ongoing operating room innovations have advanced preoperative planning and surgical procedures by providing visual, navigational, and mechanical computerized assistance. The field of computer-assisted surgery (CAS) broadly refers to the surgical interface between surgeons and machines. It is also part of the ongoing initiatives to move away from invasive to less invasive or even noninvasive procedures. CAS can be applied preoperatively, intraoperatively, and/or postoperatively to improve the outcome of orthopaedic surgical procedures, as it has the potential for greater precision, control, and flexibility in carrying out surgical tasks and enables much better visualization of the operating field than conventional methods have afforded. CAS is an active research discipline, which brings together orthopaedic practitioners with traditional technical disciplines such as engineering, computer science, and robotics. However, to achieve the best outcomes, teamwork, open communication, and willingness to adapt and adopt new skills and processes are critical. Because of the relatively short time period over which CAS has developed, long-term follow-up studies have not yet been possible. Consequently, this review aims to outline current CAS applications, limitations, and promising future developments that will continue to shape the operating room (OR) environment, particularly within orthopaedic and spine surgery.