1. Villa M, Sancho J, Rosa G, Chavarrias M, Juarez E, Sanz C. HyperMRI: hyperspectral and magnetic resonance fusion methodology for neurosurgery applications. Int J Comput Assist Radiol Surg 2024; 19:1367-1374. PMID: 38761318; PMCID: PMC11230967; DOI: 10.1007/s11548-024-03102-5.
Abstract
PURPOSE Magnetic resonance imaging (MRI) is a common technique in image-guided neurosurgery (IGN). Recent research explores the integration of methods such as ultrasound and tomography, with hyperspectral (HS) imaging gaining attention for its non-invasive, real-time tissue classification capabilities. The main challenge is the registration process, which often requires manual intervention. This work introduces an automatic, markerless method for aligning HS images with MRI. METHODS This work presents a multimodal system that combines RGB-Depth (RGBD) and HS cameras. The RGBD camera captures the patient's facial geometry, which is registered to the preoperative MR using the iterative closest point (ICP) algorithm. Once MR-depth registration is complete, the HS data are integrated using a calibrated homography transformation. The incorporation of external tracking with a novel calibration method allows the camera to move from the registration position to the craniotomy area. This methodology streamlines the fusion of RGBD, HS and MR images within the craniotomy area. RESULTS Using the described system and an anthropomorphic phantom head, the system was characterised by registering the patient's face in 25 and 5 positions, yielding a fiducial registration error of 1.88 ± 0.19 mm and a target registration error of 4.07 ± 1.28 mm, respectively. CONCLUSIONS This work proposes a new methodology to automatically register MR and HS information with sufficient accuracy. It can support neurosurgeons in guiding diagnosis using multimodal data over an augmented reality representation. Although still a preliminary prototype, the system shows significant promise, driven by its cost-effectiveness and user-friendly design.
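The MR-depth registration step above relies on ICP, whose inner loop repeatedly estimates a best-fit rigid transform between matched point sets. As an illustrative sketch only (not the authors' implementation), the closed-form SVD (Kabsch) solution for known correspondences can be written as:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    This closed-form SVD solution is the inner step of ICP: given
    matched point pairs, find the rotation R and translation t that
    minimise sum ||R @ src_i + t - dst_i||^2.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)  # centroids
    H = (src - c_src).T @ (dst - c_dst)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
src = rng.standard_normal((50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
```

A full ICP additionally alternates this solve with nearest-neighbour matching between the depth-camera point cloud and the MR surface until the residual stops improving.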
Affiliation(s)
- Manuel Villa
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Jaime Sancho
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Gonzalo Rosa
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Eduardo Juarez
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Cesar Sanz
- CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
2. Taha BA, Addie AJ, Kadhim AC, Azzahran AS, Haider AJ, Chaudhary V, Arsad N. Photonics-powered augmented reality skin electronics for proactive healthcare: multifaceted opportunities. Mikrochim Acta 2024; 191:250. PMID: 38587660; DOI: 10.1007/s00604-024-06314-3.
Abstract
Rapid technological advancements have created opportunities for new solutions in various industries, including healthcare. One exciting direction is the combination of skin-based technologies and augmented reality (AR). These dermatological devices allow continuous, non-invasive measurement of vital signs and biomarkers, enabling real-time diagnosis of anomalies, with applications in telemedicine, oncology, dermatology, and early diagnostics. Despite the many potential benefits, there is a substantial information gap regarding the use of flexible photonics in conjunction with augmented reality for medical purposes. This review explores the current state of dermal augmented reality and flexible optics in skin-conforming sensing platforms, examining the obstacles faced thus far, including technical hurdles, demanding clinical validation standards, and problems with user acceptance. Our main areas of interest are chiroptical properties and health-platform applications, such as optogenetic pixels, spectroscopic imagers, and optical biosensors. Skin-conformal circular dichroism and circularly polarized light enable thorough physical inspection with these augmented reality devices, supporting diabetes tracking, skin cancer diagnosis, and preventative cardiovascular care, notably blood pressure screening. We demonstrate how early prevention can be achieved, drawing on case studies and emergency detection. Finally, we address the real-world obstacles that hinder the full realization of these materials' potential in advancing proactive, preventative, personalized medicine, including technical constraints, clinical validation gaps, and barriers to widespread adoption.
Affiliation(s)
- Bakr Ahmed Taha
- Photonics Technology Lab, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, UKM, 43600, Bangi, Malaysia
- Ali J Addie
- Center of Advanced Materials, Directorate of Materials Research, Ministry of Science and Technology, Baghdad, Iraq
- Ahmed C Kadhim
- Communication Engineering Department, University of Technology, Baghdad, Iraq
- Ahmad S Azzahran
- Electrical Engineering Department, Northern Border University, Arar, Kingdom of Saudi Arabia
- Adawiya J Haider
- Applied Sciences Department, Laser Science and Technology Branch, University of Technology, Baghdad, Iraq
- Vishal Chaudhary
- Research Cell & Department of Physics, Bhagini Nivedita College, University of Delhi, New Delhi, 110045, India
- Norhana Arsad
- Photonics Technology Lab, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, UKM, 43600, Bangi, Malaysia
3. Cui Y, Zhou Y, Zhang H, Yuan Y, Wang J, Zhang Z. Application of Glasses-Free Augmented Reality Localization in Neurosurgery. World Neurosurg 2023; 180:e296-e301. PMID: 37757949; DOI: 10.1016/j.wneu.2023.09.064.
Abstract
OBJECTIVE The accurate localization of intracranial lesions is critical in neurosurgery. Most surgeons locate neurosurgical sites through skull surface markers, combined with neuroimaging examination and marking lines. This project's primary purpose was to develop an augmented reality (AR) tool for surgical positioning viewable with the naked eye. METHODS Brain models with intracranial lesions were predesigned using computerized tomography scans, and the Digital Imaging and Communications in Medicine (DICOM) data were segmented and modeled with 3D Slicer software. The processed data were imported into a smartphone 3D viewing application (Persp 3D) and used with a Remebot surgical robot. Localization of the intracranial lesions was performed, and the AR localization error was calculated against standard robot localization. RESULTS After mastering the AR localization registration method, surgeons achieved an average localization error of 1.39 ± 0.82 mm. CONCLUSIONS The error of AR positioning in surgical simulation tests based on brain modeling was at the millimeter level, verifying the feasibility of clinical application. More efficient registration remains a need to be addressed.
Affiliation(s)
- Yahui Cui
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Yupeng Zhou
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Haipeng Zhang
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Yuxiao Yuan
- Department of Radiology, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Juan Wang
- Operating Room, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Zuyong Zhang
- Department of Neurosurgery, Hangzhou Xixi Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
4. Verhellen A, Elprama SA, Scheerlinck T, Van Aerschot F, Duerinck J, Van Gestel F, Frantz T, Jansen B, Vandemeulebroucke J, Jacobs A. Exploring technology acceptance of head-mounted device-based augmented reality surgical navigation in orthopaedic surgery. Int J Med Robot 2023:e2585. PMID: 37830305; DOI: 10.1002/rcs.2585.
Abstract
BACKGROUND This study used the Unified Theory of Acceptance and Use of Technology (UTAUT) to investigate the acceptance of head-mounted device (HMD)-based augmented reality (AR) surgical navigation. METHODS An experiment was conducted in which participants drilled 12 predefined holes using freehand drilling, proprioceptive control, and AR assistance. Technology acceptance was assessed through a survey and non-participant observations. RESULTS Participants' intention to use AR correlated (p < 0.05) with social influence (Spearman's rho (rs) = 0.599), perceived performance improvement (rs = 0.592), and attitude towards AR (rs = 0.542). CONCLUSIONS While most participants acknowledged the potential of AR, they also highlighted persistent barriers to adoption, such as issues related to user-friendliness, time efficiency, and device discomfort. To overcome these challenges, future AR surgical navigation systems should focus on enhancing surgical performance while minimising disruptions to workflows and operating times. Engaging orthopaedic surgeons in the development process can facilitate the creation of tailored solutions and accelerate adoption.
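The associations reported above use Spearman's rank correlation, which is simply the Pearson correlation computed on ranks (with ties averaged). A minimal sketch, with hypothetical Likert-scale responses invented purely for illustration:

```python
import numpy as np

def rankdata(a):
    """Assign 1-based ranks, averaging ranks over tied values."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a, kind="stable")
    ranks = np.empty(len(a))
    sorted_a = a[order]
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and sorted_a[j + 1] == sorted_a[i]:
            j += 1                               # extend the tie group
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average rank for the group
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = rankdata(x), rankdata(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical 5-point Likert responses for two survey constructs.
intention_to_use = [5, 4, 4, 3, 2, 5, 1, 3]
social_influence = [4, 4, 5, 3, 2, 5, 2, 3]
rho = spearman_rho(intention_to_use, social_influence)
```

In practice `scipy.stats.spearmanr` computes the same statistic and additionally returns a p-value, which is how significance thresholds such as p < 0.05 are assessed.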
Affiliation(s)
- Thierry Scheerlinck
- Department of Orthopedic Surgery and Traumatology - Research Group BEFY-ORTHO, Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Fiene Van Aerschot
- Department of Orthopedic Surgery and Traumatology - Research Group BEFY-ORTHO, Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Johnny Duerinck
- Department of Neurosurgery - Research Group Center for Neurosciences (C4N-NEUR), Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Frederick Van Gestel
- Department of Neurosurgery - Research Group Center for Neurosciences (C4N-NEUR), Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel, Brussel, Belgium
- Taylor Frantz
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussel, Belgium
- Bart Jansen
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussel, Belgium
- Jef Vandemeulebroucke
- Department of Radiology - Department of Electronics and Informatics (ETRO), Universitair Ziekenhuis Brussel - Vrije Universiteit Brussel - Imec, Brussel, Belgium
- An Jacobs
- IMEC-SMIT, Vrije Universiteit Brussel, Brussel, Belgium
5. Chidambaram S, Anthony D, Jansen T, Vigo V, Fernandez Miranda JC. Intraoperative augmented reality fiber tractography complements cortical and subcortical mapping. World Neurosurg X 2023; 20:100226. PMID: 37456694; PMCID: PMC10344792; DOI: 10.1016/j.wnsx.2023.100226.
Abstract
Augmented reality (AR) has been found advantageous for enhancing the visualization of complex neuroanatomy, both intraoperatively and in neurosurgical education. Another key tool for enhanced visualization, specifically of white matter tracts, is diffusion tensor imaging (DTI) processed with high-definition fiber tractography (HDFT). An enduring challenge in the structural-functional correlation of white matter tracts is the difficulty of clearly assigning function to a given fiber tract when modalities are evaluated separately rather than in an integrated fashion. Combining AR with fiber tractography shows promise in closing this gap. Through a series of three awake craniotomies for glioma resection, this study demonstrates a technique that provides the first and most direct evidence of fiber tract stimulation and in vivo assignment of function or deficit, achieved through the intraoperative, real-time fusion of electrical cortical stimulation, AR, and HDFT. This technique has qualitatively proven helpful in guiding intraoperative decision making on the extent of glioma resection. Future studies could examine larger, prospective cohorts of glioma patients who undergo this methodology and further correlate post-operative imaging results with patient functional outcomes.
Affiliation(s)
- Juan C. Fernandez Miranda
- Corresponding author. Department of Neurological Surgery, Stanford University, 213 Quarry Rd, Rm 2851, MC 5957, Palo Alto, CA, 94304, USA
6. Ragnhildstveit A, Li C, Zimmerman MH, Mamalakis M, Curry VN, Holle W, Baig N, Uğuralp AK, Alkhani L, Oğuz-Uğuralp Z, Romero-Garcia R, Suckling J. Intra-operative applications of augmented reality in glioma surgery: a systematic review. Front Surg 2023; 10:1245851. PMID: 37671031; PMCID: PMC10476869; DOI: 10.3389/fsurg.2023.1245851.
Abstract
Background Augmented reality (AR) is increasingly being explored in neurosurgical practice. By visualizing patient-specific, three-dimensional (3D) models in real time, surgeons can improve their spatial understanding of complex anatomy and pathology, thereby optimizing intra-operative navigation, localization, and resection. Here, we aimed to capture applications of AR in glioma surgery, their current status and future potential. Methods A systematic review of the literature was conducted, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. The PubMed, Embase, and Scopus electronic databases were queried from inception to October 10, 2022. Leveraging the Population, Intervention, Comparison, Outcomes, and Study design (PICOS) framework, study eligibility was evaluated in the qualitative synthesis. Data regarding AR workflow, surgical application, and associated outcomes were then extracted. The quality of evidence was additionally examined, using hierarchical classes of evidence in neurosurgery. Results The search returned 77 articles. Forty were subjected to title and abstract screening, while 25 proceeded to full-text screening. Of these, 22 articles met the eligibility criteria and were included in the final review. During abstraction, studies were classified as "development" or "intervention" based on their primary aims. Overall, AR was qualitatively advantageous, due to enhanced visualization of gliomas and critical structures, frequently aiding maximal safe resection. Non-rigid applications were also useful in disclosing and compensating for intra-operative brain shift. Nonetheless, there was high variance in registration methods and measurements, which considerably impacted projection accuracy. Most studies were of low-level evidence, yielding heterogeneous results. Conclusions AR has increasing potential for glioma surgery, with the capacity to positively influence the onco-functional balance. However, technical and design limitations are readily apparent. The field must consider the importance of consistency and replicability, as well as the level of evidence, to effectively converge on standard approaches that maximize patient benefit.
Affiliation(s)
- Anya Ragnhildstveit
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Chao Li
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, England
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, England
- Michail Mamalakis
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Victoria N. Curry
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States
- Willis Holle
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Physics and Astronomy, The University of Utah, Salt Lake City, UT, United States
- Noor Baig
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, United States
- Layth Alkhani
- Integrated Research Literacy Group, Draper, UT, United States
- Department of Biology, Stanford University, Stanford, CA, United States
- Rafael Romero-Garcia
- Department of Psychiatry, University of Cambridge, Cambridge, England
- Instituto de Biomedicina de Sevilla (IBiS) HUVR/CSIC/Universidad de Sevilla/CIBERSAM, ISCIII, Dpto. de Fisiología Médica y Biofísica
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge, England
7. Satoh M, Nakajima T, Watanabe E, Kawai K. Augmented Reality in Stereotactic Neurosurgery: Current Status and Issues. Neurol Med Chir (Tokyo) 2023; 63:137-140. PMID: 36682793; PMCID: PMC10166603; DOI: 10.2176/jns-nmc.2022-0278.
Abstract
Stereotactic neurosurgery is an established technique, but it has several limitations. In frame-based stereotaxy using a stereotactic frame, frame setting errors may decrease the accuracy of the procedure. Frameless stereotaxy using neuronavigation requires surgeons to shift their view from the surgical field to the navigation display and to advance the needle while holding a physically uncomfortable position. To overcome these limitations, several researchers have applied augmented reality in stereotactic neurosurgery. Augmented reality enables surgeons to visualize information about the target and preplanned trajectory superimposed over the actual surgical field. In frame-based stereotaxy, one group applied tablet computer-based augmented reality to check for setting errors of the stereotactic frame, thereby improving the safety of the procedure. Several researchers have reported performing frameless stereotaxy guided by head-mounted-display-based augmented reality, which enables surgeons to advance the needle in a more natural posture. These studies have shown that augmented reality can address the limitations of stereotactic neurosurgery. At the same time, they have also revealed the limited accuracy of current augmented reality systems for small targets, indicating that further development of augmented reality systems is needed.
Affiliation(s)
- Makoto Satoh
- Department of Neurosurgery, Jichi Medical University
- Eiju Watanabe
- Department of Neurosurgery, Jichi Medical University
- Kensuke Kawai
- Department of Neurosurgery, Jichi Medical University
8. Kögl FV, Léger É, Haouchine N, Torio E, Juvekar P, Navab N, Kapur T, Pieper S, Golby A, Frisken S. A Tool-free Neuronavigation Method based on Single-view Hand Tracking. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2022; 11:1307-1315. PMID: 37457380; PMCID: PMC10348700; DOI: 10.1080/21681163.2022.2163428.
Abstract
This work presents a novel tool-free neuronavigation method that can be used with a single commodity RGB camera. Compared with freehand craniotomy placement methods, the proposed system is more intuitive and less error-prone. The proposed method also has several advantages over standard neuronavigation platforms. First, it has a much lower cost, since it does not require an optical tracking camera or electromagnetic field generator, typically the most expensive parts of a neuronavigation system, making it much more accessible. Second, it requires minimal setup, so it can be performed at the bedside and in circumstances where using a standard neuronavigation system is impractical. The system relies on machine-learning-based hand pose estimation that acts as a proxy for optical tool tracking, enabling a 3D-3D pre-operative to intra-operative registration. Qualitative assessment from clinical users showed that the concept is clinically relevant. Quantitative assessment showed that, on average, a target registration error (TRE) of 1.3 cm can be achieved. Furthermore, the system is framework-agnostic, meaning that future improvements to hand-tracking frameworks would translate directly into higher accuracy.
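The target registration error (TRE) quoted here, like the fiducial registration error (FRE) in the first entry, is the residual distance between registered points and their ground-truth positions: FRE is measured on the fiducials used for the fit, TRE on independent anatomical targets. A minimal sketch, with invented point data:

```python
import numpy as np

def registration_error(points, targets, R, t):
    """RMS distance between registered points and ground-truth targets.

    Applied to the fiducials used for the fit this gives the FRE;
    applied to independent anatomical targets it gives the TRE.
    """
    mapped = points @ R.T + t  # apply the rigid registration
    return float(np.sqrt(np.mean(np.sum((mapped - targets) ** 2, axis=1))))

# Hypothetical example: an identity registration against matching
# targets gives zero error.
R = np.eye(3)
t = np.zeros(3)
fiducials = np.array([[0.0, 0.0, 0.0],
                      [10.0, 0.0, 0.0],
                      [0.0, 10.0, 0.0],
                      [0.0, 0.0, 10.0]])
err = registration_error(fiducials, fiducials, R, t)  # 0.0
```

A low FRE does not guarantee a low TRE, which is why studies such as the first entry above report both.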
Affiliation(s)
- Fryderyk Victor Kögl
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Étienne Léger
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nazim Haouchine
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Erickson Torio
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Parikshit Juvekar
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Whiting School of Engineering, Johns Hopkins University, Baltimore, USA
- Tina Kapur
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Steve Pieper
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Isomics, Inc., Cambridge, MA, USA
- Alexandra Golby
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
- Sarah Frisken
- Harvard Medical School, Brigham and Women’s Hospital, Boston, MA, USA
9. Moon RDC, Barua NU. Usability of mixed reality in awake craniotomy planning. Br J Neurosurg 2022:1-5. PMID: 36537230; DOI: 10.1080/02688697.2022.2152429.
Abstract
PURPOSE This study aimed to describe our institutional use of a commercially available mixed reality viewer within a multi-disciplinary planning workflow for awake craniotomy surgery and to report an assessment of its usability. MATERIALS AND METHODS 3-Tesla MRI scans, including 32-direction diffusion tensor sequences, were reconstructed with BrainLab Elements auto-segmentation software. Magic Leap mixed reality headsets were registered to a shared virtual viewing space to display the image reconstructions. The System Usability Scale was used to assess the usability of the mixed reality system. RESULTS The awake craniotomy planning workflow uses the mixed reality viewer to facilitate a stepwise discussion through four progressive anatomical layers: the skin, cerebral cortex, subcortical white matter tracts, and tumour with surrounding vasculature. At each stage, relevant members of the multi-disciplinary team review key operative considerations, including patient positioning, cortical and subcortical speech mapping protocols, and surgical approaches to the tumour. The mixed reality system was used for multi-disciplinary awake craniotomy planning in 10 consecutive procedures over a 5-month period. Ten participants (2 anaesthetists, 5 neurosurgical trainees, 2 speech therapists, 1 neuropsychologist) completed System Usability Scale assessments, reporting a mean score of 71.5. Feedback highlighted the benefit of being able to rehearse important steps in the procedure, including patient positioning and anaesthetic access, and of visualising the testing protocol for cortical and subcortical speech mapping. CONCLUSIONS This study supports the use of mixed reality for multi-disciplinary awake craniotomy planning, with an acceptable degree of interface usability. We highlight the need to consider the requirements of non-technical, non-neurosurgical team members when introducing mixed reality activities.
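For context, the System Usability Scale score reported here (mean 71.5) is derived from ten 5-point Likert items: odd (positively worded) items contribute (response - 1), even (negatively worded) items contribute (5 - response), and the summed contributions are multiplied by 2.5 to give a 0-100 score. A sketch with hypothetical respondents:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (r - 1);
    even-numbered items (negatively worded) contribute (5 - r).
    The sum is scaled by 2.5 onto a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (odd-numbered)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical respondents: one maximally positive (agrees with all
# positive items, disagrees with all negative items), one the reverse.
best = sus_score([5, 1] * 5)   # 100.0
worst = sus_score([1, 5] * 5)  # 0.0
```

A mean score around 71.5 sits above the commonly cited SUS average of 68, consistent with the study's conclusion of acceptable usability.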
Affiliation(s)
- Richard D C Moon
- Department of Neurosurgery, Southmead Hospital, North Bristol NHS Trust, Bristol, UK
10. Mofatteh M, Mashayekhi MS, Arfaie S, Chen Y, Mirza AB, Fares J, Bandyopadhyay S, Henich E, Liao X, Bernstein M. Augmented and virtual reality usage in awake craniotomy: a systematic review. Neurosurg Rev 2022; 46:19. PMID: 36529827; PMCID: PMC9760592; DOI: 10.1007/s10143-022-01929-7.
Abstract
Augmented and virtual reality (AR, VR) are becoming promising tools in neurosurgery. AR and VR can reduce the challenges of conventional approaches by simulating and mimicking environments of the surgeon's choice. Awake craniotomy (AC) enables the resection of lesions from eloquent brain areas while monitoring higher cortical and subcortical functions. Evidence suggests that both surgeons and patients benefit from the various applications of AR and VR in AC. This paper investigates the application of AR and VR in AC and assesses their prospective utility in neurosurgery. A systematic review of the literature was performed using the PubMed, Scopus, and Web of Science databases in accordance with the PRISMA guidelines. Our search yielded 220 articles; six articles comprising 118 patients were included in this review. VR was used in four papers and AR in the other two. Tumour was the most common pathology, in 108 patients, followed by vascular lesions in eight patients. VR was used for intraoperative mapping of language, vision, and social cognition, while AR was incorporated in preoperative training of white matter dissection and intraoperative visualisation and navigation. Overall, patients and surgeons were satisfied with the applications of AR and VR in their cases. AR and VR can be safely incorporated during AC to supplement, augment, or even replace conventional approaches in neurosurgery. Future investigations are required to assess the feasibility of AR and VR in the various phases of AC.
Affiliation(s)
- Mohammad Mofatteh
- School of Medicine, Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, UK
- Saman Arfaie
- Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec, Canada
- Department of Molecular and Cell Biology, University of California Berkeley, Berkeley, CA, USA
- Yimin Chen
- Department of Neurology, Foshan Sanshui District People's Hospital, Foshan, China
- Jawad Fares
- Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Northwestern Medicine Malnati Brain Tumor Institute, Feinberg School of Medicine, Lurie Comprehensive Cancer Center, Northwestern University, Chicago, IL, USA
- Soham Bandyopadhyay
- Nuffield Department of Surgical Sciences, Oxford University Global Surgery Group, University of Oxford, Oxford, UK
- Clinical Neurosciences, Clinical & Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, Hampshire, UK
- Wessex Neurological Centre, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Edy Henich
- Department of Medicine, McGill University, Montreal, Quebec, Canada
- Xuxing Liao
- Department of Neurosurgery, Foshan Sanshui District People's Hospital, Foshan, China
- Mark Bernstein
- Division of Neurosurgery, Department of Surgery, University of Toronto, University Health Network, Toronto, Ontario, Canada
- Temmy Latner Center for Palliative Care, Mount Sinai Hospital, University of Toronto, Toronto, Ontario, Canada
11. Park JJ, Tiefenbach J, Demetriades AK. The role of artificial intelligence in surgical simulation. Front Med Technol 2022; 4:1076755. PMID: 36590155; PMCID: PMC9794840; DOI: 10.3389/fmedt.2022.1076755.
Abstract
Artificial intelligence (AI) plays an integral role in enhancing the quality of surgical simulation, which is increasingly becoming a popular tool for enriching a surgeon's training experience. This spans the spectrum from facilitating preoperative planning to intraoperative visualisation and guidance, ultimately with the aim of improving patient safety. Although arguably still in the early stages of widespread clinical application, AI technology enables personal evaluation and provides personalised feedback in surgical training simulations. Several forms of surgical visualisation technology currently used for anatomical education and presurgical assessment rely on different AI algorithms. However, while it is promising to see clinical examples and technological reports attesting to the efficacy of AI-supported surgical simulators, the barriers to widespread commercialisation of such devices and software remain complex and multifactorial. High implementation and production costs, scarcity of reports evidencing the superiority of such technology, and intrinsic technological limitations remain at the forefront. As AI technology is key to driving the future of surgical simulation, this paper reviews the literature delineating its current state, challenges, and prospects. In addition, a consolidated list of FDA/CE-approved AI-powered medical devices for surgical simulation is presented, to shed light on the existing gap between academic achievements and the universal commercialisation of AI-enabled simulators. We call for further clinical assessment of AI-supported surgical simulators to support novel regulatory-body-approved devices and usher surgery into a new era of surgical education.
Affiliation(s)
- Jay J. Park: Department of General Surgery, Norfolk and Norwich University Hospital, Norwich, United Kingdom; Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom
- Jakov Tiefenbach: Neurological Institute, Cleveland Clinic, Cleveland, OH, United States
- Andreas K. Demetriades: Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom; Department of Neurosurgery, Royal Infirmary of Edinburgh, Edinburgh, United Kingdom
12
State of the Art and Future Prospects of Virtual and Augmented Reality in Veterinary Medicine: A Systematic Review. Animals (Basel) 2022; 12:ani12243517. [PMID: 36552437 PMCID: PMC9774422 DOI: 10.3390/ani12243517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 12/06/2022] [Accepted: 12/12/2022] [Indexed: 12/15/2022] Open
Abstract
Virtual reality and augmented reality are new but rapidly expanding topics in medicine. In virtual reality, users are immersed in a three-dimensional environment, whereas in augmented reality, computer-generated images are superimposed on the real world. Despite advances in human medicine, the number of published articles in veterinary medicine is low. These cutting-edge technologies can be used in combination with existing methods in veterinary medicine to achieve diagnostic, therapeutic, and educational goals. The purpose of our review was to evaluate studies on the use of virtual reality and augmented reality in veterinary medicine, as well as in human medicine with animal trials, and to report their results and the state of the art. We collected the articles included in our review by screening the Scopus, PubMed, and Web of Science databases. Of the 24 included studies, 11 and 13 articles concerned virtual reality and augmented reality, respectively. Based on these articles, we determined that using these technologies has a positive impact on the scientific output of students and residents, can reduce training costs, and can be used in training and educational programs. Furthermore, using these tools can promote ethical standards. We noted the absence of standard operating protocols and the cost of equipment as study limitations.
13
Paro MR, Hersh DS, Bulsara KR. History of Virtual Reality and Augmented Reality in Neurosurgical Training. World Neurosurg 2022; 167:37-43. [PMID: 35977681 DOI: 10.1016/j.wneu.2022.08.042] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 08/08/2022] [Accepted: 08/09/2022] [Indexed: 01/11/2023]
Abstract
Virtual reality (VR) and augmented reality (AR) are rapidly growing technologies. Both have been applied within neurosurgery for presurgical planning and intraoperative navigation, but VR and AR technology is particularly promising for the education of neurosurgical trainees. With the increasing demand for high-impact yet efficient educational strategies, VR- and AR-based simulators allow neurosurgical residents to practice technical skills in a low-risk setting. Initial studies have confirmed that such simulators increase trainees' confidence, improve their understanding of operative anatomy, and enhance surgical techniques. Knowledge of the history and conceptual underpinnings of these technologies is useful for understanding their current and future applications in neurosurgical training. The technological precursors of VR and AR were introduced as early as the 1800s and draw from the fields of entertainment, flight simulation, and education. However, the computer software and processing speeds needed to develop widespread VR- and AR-based surgical simulators have emerged only within the last 15 years. During that time, several devices have been rapidly adopted by neurosurgeons, and some programs have begun to incorporate them into the residency curriculum. With ever-improving technology, VR and AR are promising additions to a multi-modal training program, enabling neurosurgical residents to maximize their efforts in preparation for the operating room. In this review, we outline the historical development of the VR and AR systems that are used in neurosurgical training and discuss representative examples of the current technology.
Affiliation(s)
- Mitch R Paro: UConn School of Medicine, Farmington, Connecticut, USA
- David S Hersh: Division of Neurosurgery, Connecticut Children's, Hartford, Connecticut, USA; Department of Surgery, UConn School of Medicine, Farmington, Connecticut, USA
- Ketan R Bulsara: Department of Surgery, UConn School of Medicine, Farmington, Connecticut, USA; Division of Neurosurgery, UConn School of Medicine, Farmington, Connecticut, USA
14
Singh R, Singh R, Baby B, Suri A. Effect of the Segmentation Threshold on Computed Tomography-Based Reconstruction of Skull Bones with Reference Optical Three-Dimensional Scanning. World Neurosurg 2022; 166:e34-e43. [PMID: 35718274 DOI: 10.1016/j.wneu.2022.06.050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Revised: 06/09/2022] [Accepted: 06/10/2022] [Indexed: 12/15/2022]
Abstract
BACKGROUND A variety of applications related to neurosurgical procedures, education, and training require accurate reconstruction of the involved structures from medical images such as computed tomography (CT). This study evaluates the quality of CT-based reconstruction of dry skull bones for advanced neurosurgical applications. The accuracy and precision of these models were examined against reference optical scanning. METHODS Three consecutive CT and optical scans of different skull bones were acquired and used to develop three-dimensional models. The accuracy of the three-dimensional models was examined by manual inspection of defined anatomical landmarks of the skull. Reproducibility was examined by deviation analysis of the models developed from repeated CT and optical scans. RESULTS Precision was excellent in both techniques, with a deviation error of less than 0.1 mm. On the interscan evaluation of the CT versus optical scan models, deviations of more than 0.1 mm were observed in 16 out of 21 instances. CT reconstruction using standard segmentation algorithms with the default bone segmentation threshold results in missing bone portions. The segmentation threshold was therefore varied to reconstruct the missing bone regions, and its effect on iso-surface generation was evaluated. The threshold variation increased mean surface deviations by up to 0.6 mm. CONCLUSIONS The study reveals that bone structure, complexity, and segmentation threshold lead to variability in CT reconstruction. The trade-off between the desired model and the accepted mean deviation should be weighed according to the requirements of the intended application.
Affiliation(s)
- Ramandeep Singh: Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi, India
- Rajdeep Singh: Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi, India
- Britty Baby: Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi, India; Amar Nath and Shashi Khosla School of Information Technology, Indian Institute of Technology Delhi, New Delhi, India
- Ashish Suri: Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi, India
15
The intraoperative use of augmented and mixed reality technology to improve surgical outcomes: A systematic review. Int J Med Robot 2022; 18:e2450. [DOI: 10.1002/rcs.2450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 07/23/2022] [Accepted: 07/27/2022] [Indexed: 11/07/2022]
16
Mozaffari K, Foster CH, Rosner MK. Practical Use of Augmented Reality Modeling to Guide Revision Spine Surgery: An Illustrative Case of Hardware Failure and Overriding Spondyloptosis. Oper Neurosurg (Hagerstown) 2022; 23:212-216. [PMID: 35972084 PMCID: PMC9362336 DOI: 10.1227/ons.0000000000000307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Accepted: 04/03/2022] [Indexed: 02/04/2023] Open
Abstract
BACKGROUND AND IMPORTANCE Augmented reality (AR) is a novel technology with broadening applications in neurosurgery. In deformity spine surgery, it has primarily been directed toward more precise placement of pedicle screws. However, AR may also be used to generate high-fidelity three-dimensional (3D) spine models for cases of advanced deformity with existing instrumentation. We present a case in which an AR-generated 3D model was used to facilitate and expedite the removal of embedded instrumentation and guide the reduction of an overriding spondyloptotic deformity. CLINICAL PRESENTATION A young adult with a remote history of a motor vehicle accident treated with long-segment posterior spinal stabilization presented with increasing back pain and difficulty sitting upright in a wheelchair. Imaging revealed pseudoarthrosis with multiple rod fractures resulting in an overriding spondyloptosis of T6 on T9. An AR-generated 3D model was useful in the intraoperative localization of rod breaks and other extensively embedded instrumentation. Real-time model thresholding expedited the safe explantation of the defunct system and correction of the spondyloptosis deformity. CONCLUSION An AR-generated 3D model proved instrumental in a revision case of hardware failure and high-grade spinal deformity.
Affiliation(s)
- Khashayar Mozaffari: Department of Neurological Surgery, The George Washington University Hospital, Washington, District of Columbia, USA
17
A Scoping Review of Deep Learning in Cancer Nursing Combined with Augmented Reality: the Era of Intelligent Nursing is Coming. Asia Pac J Oncol Nurs 2022; 9:100135. [PMID: 36276884 PMCID: PMC9579790 DOI: 10.1016/j.apjon.2022.100135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Accepted: 08/22/2022] [Indexed: 11/30/2022] Open
Abstract
Artificial intelligence (AI) has been developing rapidly in the field of medicine. As a new research hotspot within AI, deep learning (DL) has been widely applied to cancer risk assessment, symptom recognition, and cancer detection. Applying DL to the care of cancer patients can therefore address nursing care issues such as the time and energy consumed, lower accuracy, and lower efficiency. In addition, augmented reality (AR) has great navigation potential, combining computer-generated virtual elements with the real world. Thus, DL combined with AR may offer patients with cancer a new model of nursing care that is more intelligent, mobile, and adapted to the information age than traditional nursing. With the advent of the era of intelligent nursing, future nursing models can draw on the DL + AR model not only to meet the needs of patients with cancer but also to reduce nursing workload, save healthcare resources, and improve work efficiency, the quality of nursing care, and the quality of life of cancer patients.
18
Ahmad HS, Yoon JW. Intra-operative wearable visualization in spine surgery: past, present, and future. JOURNAL OF SPINE SURGERY (HONG KONG) 2022; 8:132-138. [PMID: 35441103 PMCID: PMC8990397 DOI: 10.21037/jss-21-95] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Accepted: 01/27/2022] [Indexed: 04/15/2023]
Abstract
The history of modern surgery has run parallel to the invention and development of intra-operative visualization techniques. The first operating room, built in 1804 at Pennsylvania Hospital, demonstrates this principle: illumination of the surgical field by the Sun through an overhead skylight allowed surgeries to proceed even prior to the invention of anesthesia or sterile technique. Surgeries had to begin around the time the Sun was at its zenith; without adequate light from the Sun and skylight, surgeons were unable to achieve adequate visualization. In the years since, new visualization instruments have expanded the scope and success of surgical intervention. Spine surgery in particular has benefited greatly from improved visualization technologies, due to the complex, closely intertwined nervous, vascular, and musculoskeletal structures that surgeons must manipulate. Over time, new technologies have also come to occupy smaller footprints, leading to the rise of wearable tools that surgeons don intra-operatively to better visualize the surgical field. As surgical techniques shift to more minimally invasive methods, reliable, high-fidelity, and ergonomic wearables are of growing importance. Here, we discuss the past and present of wearable visualization tools, from the first surgical loupes to cutting-edge augmented reality (AR) goggles, and comment on how emerging innovations will continue to revolutionize spine surgery.
Affiliation(s)
- Hasan S Ahmad: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Jang W Yoon: Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
19
Cercenelli L, Babini F, Badiali G, Battaglia S, Tarsitano A, Marchetti C, Marcelli E. Augmented Reality to Assist Skin Paddle Harvesting in Osteomyocutaneous Fibular Flap Reconstructive Surgery: A Pilot Evaluation on a 3D-Printed Leg Phantom. Front Oncol 2022; 11:804748. [PMID: 35071009 PMCID: PMC8770836 DOI: 10.3389/fonc.2021.804748] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 12/10/2021] [Indexed: 11/13/2022] Open
Abstract
Background Augmented reality (AR) represents an evolution of navigation-assisted surgery, providing surgeons with a virtual aid contextually merged with the real surgical field. We recently reported a case series of AR-assisted fibular flap harvesting for mandibular reconstruction. However, the registration accuracy between the real and the virtual content needs to be systematically evaluated before widely promoting this tool in clinical practice. In this paper, after describing the AR-based protocol implemented for both a tablet and HoloLens 2 smart glasses, we evaluated in a first test session the achievable registration accuracy with the two display solutions, and in a second test session the success rate in executing the AR-guided skin paddle incision task on a 3D-printed leg phantom. Methods From a real computed tomography dataset, 3D virtual models of a human leg, including the fibula, arteries, and skin with the planned paddle profile for harvesting, were obtained. All virtual models were imported into the Unity software to develop a marker-less AR application suitable for use both via tablet and via the HoloLens 2 headset. The registration accuracy of both solutions was verified on a 3D-printed leg phantom obtained from the virtual models, by repeatedly applying the tracking function and computing pose deviations between the AR-projected virtual skin paddle profile and the real one transferred to the phantom via a CAD/CAM cutting guide. The success rate in completing the AR-guided task of skin paddle harvesting was evaluated using CAD/CAM templates positioned on the phantom model surface. Results On average, the marker-less AR protocol showed comparable registration errors (ranging within 1-5 mm) for the tablet-based and HoloLens-based solutions. Registration accuracy appears to be quite sensitive to ambient light conditions. We found a good success rate in completing the AR-guided task within an error margin of 4 mm (97% and 100% for tablet and HoloLens, respectively). All subjects reported greater usability and ergonomics for the HoloLens 2 solution. Conclusions The results revealed that the proposed marker-less AR-based protocol can guarantee a registration error within 1-5 mm for assisting skin paddle harvesting in the clinical setting. Optimal lighting conditions and further improvement of marker-less tracking technologies have the potential to increase the efficiency and precision of this AR-assisted reconstructive surgery.
Affiliation(s)
- Laura Cercenelli: eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, Bologna, Italy
- Federico Babini: eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, Bologna, Italy
- Giovanni Badiali: Maxillofacial Surgery Unit, Head and Neck Department, IRCCS Azienda Ospedaliera Universitaria di Bologna, Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum University of Bologna, Bologna, Italy
- Salvatore Battaglia: Maxillofacial Surgery Unit, Policlinico San Marco University Hospital, University of Catania, Catania, Italy
- Achille Tarsitano: Maxillofacial Surgery Unit, Head and Neck Department, IRCCS Azienda Ospedaliera Universitaria di Bologna, Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum University of Bologna, Bologna, Italy
- Claudio Marchetti: Maxillofacial Surgery Unit, Head and Neck Department, IRCCS Azienda Ospedaliera Universitaria di Bologna, Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum University of Bologna, Bologna, Italy
- Emanuela Marcelli: eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, Bologna, Italy
20
Cercenelli L, De Stefano A, Billi AM, Ruggeri A, Marcelli E, Marchetti C, Manzoli L, Ratti S, Badiali G. AEducaAR, Anatomical Education in Augmented Reality: A Pilot Experience of an Innovative Educational Tool Combining AR Technology and 3D Printing. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19031024. [PMID: 35162049 PMCID: PMC8834017 DOI: 10.3390/ijerph19031024] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 01/14/2022] [Accepted: 01/15/2022] [Indexed: 01/27/2023]
Abstract
Gross anatomy knowledge is an essential element of medical students' education, and nowadays cadaver-based instruction remains the main instructional tool able to provide three-dimensional (3D) and topographical comprehension. The aim of this study was to develop and test a prototype of an innovative tool for medical education in human anatomy based on the combination of augmented reality (AR) technology and a tangible 3D-printed model that can be explored and manipulated by trainees, thus favoring a three-dimensional and topographical learning approach. After development, the tool, called AEducaAR (Anatomical Education with Augmented Reality), was tested and evaluated by 62 second-year medical students attending the human anatomy course at the International School of Medicine and Surgery of the University of Bologna. Students were divided into two groups: AEducaAR-based learning ("AEducaAR group") was compared with standard learning using a human anatomy atlas ("Control group"). Both groups completed an objective test and an anonymous questionnaire. In the objective test, the results showed no significant difference between the two learning methods; in the questionnaire, however, students showed enthusiasm and interest in the new tool and highlighted its training potential in open-ended comments. Therefore, the presented AEducaAR tool, once implemented, may contribute to enhancing students' motivation for learning, increasing long-term memory retention, and improving 3D comprehension of anatomical structures. Moreover, this new tool might help medical students become familiar with innovative medical devices and technologies useful in their future careers.
Affiliation(s)
- Laura Cercenelli: eDIMES Lab-Laboratory of Bioengineering, Department of Experimental Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Alessia De Stefano: Cellular Signalling Laboratory, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40126 Bologna, Italy
- Anna Maria Billi: Cellular Signalling Laboratory, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40126 Bologna, Italy
- Alessandra Ruggeri: Cellular Signalling Laboratory, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40126 Bologna, Italy
- Emanuela Marcelli: eDIMES Lab-Laboratory of Bioengineering, Department of Experimental Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Claudio Marchetti: Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40126 Bologna, Italy; Department of Maxillo-Facial Surgery, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy
- Lucia Manzoli: Cellular Signalling Laboratory, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40126 Bologna, Italy
- Stefano Ratti (corresponding author): Cellular Signalling Laboratory, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40126 Bologna, Italy
- Giovanni Badiali: Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40126 Bologna, Italy; Department of Maxillo-Facial Surgery, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40138 Bologna, Italy