1
Cattari N, Cutolo F, La Placa L, Ferrari V. Visualization modality for augmented reality guidance of in-depth tumour enucleation procedures. Healthc Technol Lett 2024;11:101-107. PMID: 38638490; PMCID: PMC11022226; DOI: 10.1049/htl2.12058.
Abstract
Recent research has reported that the use of wearable augmented reality (AR) systems, such as head-mounted displays, for the in situ visualisation of ultrasound (US) images can improve the outcomes of US-guided biopsies through reduced procedure completion times and improved accuracy. Here, the authors build on these developments and present the first AR system for guiding an in-depth tumour enucleation procedure under US guidance. The system features an innovative visualisation modality with cutting trajectories that 'sink' into the tissue according to the depth reached by the electric scalpel, tracked in real time, and a virtual-to-virtual alignment between the scalpel's tip and the trajectory. The system estimates the scalpel's tip position with high accuracy (mean depth error of 0.4 mm and mean radial error of 1.34 mm). Furthermore, a preliminary user study demonstrated that the system allowed users to successfully guide an in-depth tumour enucleation procedure (i.e. preserving the safety margin around the lesion).
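The reported tip-tracking accuracy separates the estimation error into a depth (axial) component and a radial (perpendicular) component with respect to the insertion axis. A minimal sketch of such a decomposition (an illustrative reconstruction, not the authors' code; the tip positions and axis below are hypothetical):

```python
import math

def axial_radial_error(true_tip, est_tip, axis):
    """Split the tip-estimation error into a signed depth (axial) component
    along the insertion axis and a radial (perpendicular) component."""
    # error vector from the ground-truth tip to the estimated tip
    e = [est_tip[i] - true_tip[i] for i in range(3)]
    # normalise the insertion axis
    norm = math.sqrt(sum(a * a for a in axis))
    d = [a / norm for a in axis]
    depth = sum(e[i] * d[i] for i in range(3))       # projection on the axis
    perp = [e[i] - depth * d[i] for i in range(3)]   # perpendicular residual
    radial = math.sqrt(sum(p * p for p in perp))
    return abs(depth), radial

# hypothetical case: axis along +z, estimate 0.4 mm too deep, 1.0 mm sideways
depth_err, radial_err = axial_radial_error((0.0, 0.0, 10.0),
                                           (1.0, 0.0, 10.4),
                                           (0.0, 0.0, 1.0))
print(round(depth_err, 3), round(radial_err, 3))  # → 0.4 1.0
```

Averaging these two components over repeated measurements yields figures directly comparable to the mean depth and radial errors quoted above.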
Affiliation(s)
- Nadia Cattari
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- EndoCAS Centre, University of Pisa, Pisa, Tuscany, Italy
- Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- EndoCAS Centre, University of Pisa, Pisa, Tuscany, Italy
- Luciana La Placa
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, Pisa, Tuscany, Italy
- EndoCAS Centre, University of Pisa, Pisa, Tuscany, Italy
2
Ruggiero F, Cercenelli L, Emiliani N, Badiali G, Bevini M, Zucchelli M, Marcelli E, Tarsitano A. Preclinical Application of Augmented Reality in Pediatric Craniofacial Surgery: An Accuracy Study. J Clin Med 2023;12:2693. PMID: 37048777; PMCID: PMC10095377; DOI: 10.3390/jcm12072693.
Abstract
Background: Augmented reality (AR) allows the overlapping and integration of virtual information with the real environment: the camera of the AR device reads the object and integrates the virtual data. It has been widely applied to the medical and surgical sciences in recent years and has the potential to enhance intraoperative navigation. Materials and methods: In this study, the authors assess the accuracy of AR guidance when using the commercial HoloLens 2 head-mounted display (HMD) in pediatric craniofacial surgery. Fronto-orbital remodeling (FOR) was selected as the procedure to test (specifically, frontal and nasal osteotomies were considered). Six people (three surgeons and three engineers) were recruited to perform the osteotomies on a 3D-printed stereolithographic model under AR guidance. By means of calibrated CAD/CAM cutting guides with different grooves, the authors measured the accuracy of the osteotomies performed, testing accuracy levels of ±1.5 mm, ±1 mm, and ±0.5 mm. Results: With the HoloLens 2, the majority of participants were able to trace the trajectories of the frontal and nasal osteotomies with an accuracy of ±1.5 mm. For the nasal osteotomy, 80% achieved an accuracy of ±1 mm and 61% achieved ±0.5 mm; for the frontal osteotomy, 52% achieved ±1 mm and 33% achieved ±0.5 mm. Conclusions: Despite this being an in vitro study, the authors report encouraging results for the prospective use of AR on actual patients.
Affiliation(s)
- Federica Ruggiero
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Maxillo-Facial Surgery Unit, AUSL Bologna, 40124 Bologna, Italy
- Laura Cercenelli
- Laboratory of Bioengineering—eDIMES Lab, Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40138 Bologna, Italy
- Nicolas Emiliani
- Laboratory of Bioengineering—eDIMES Lab, Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Mirko Bevini
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
- Mino Zucchelli
- Pediatric Neurosurgery, IRCCS Istituto delle Scienze Neurologiche di Bologna, Via Altura 3, 40138 Bologna, Italy
- Emanuela Marcelli
- Laboratory of Bioengineering—eDIMES Lab, Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40138 Bologna, Italy
- Achille Tarsitano
- Department of Biomedical and Neuromotor Science, University of Bologna, 40138 Bologna, Italy
- Oral and Maxillo-Facial Surgery Unit, IRCCS Azienda Ospedaliero-Universitaria di Bologna, Via Albertoni 15, 40138 Bologna, Italy
3
Bogomolova K, Vorstenbosch MATM, El Messaoudi I, Holla M, Hovius SER, van der Hage JA, Hierck BP. Effect of binocular disparity on learning anatomy with stereoscopic augmented reality visualization: A double center randomized controlled trial. Anat Sci Educ 2023;16:87-98. PMID: 34894205; PMCID: PMC10078652; DOI: 10.1002/ase.2164.
Abstract
Binocular disparity provides one of the important depth cues within stereoscopic three-dimensional (3D) visualization technology. However, there is limited research on its effect on learning within a 3D augmented reality (AR) environment. This study evaluated the effect of binocular disparity on the acquisition of anatomical knowledge and perceived cognitive load in relation to visual-spatial abilities. In a double-center randomized controlled trial, first-year (bio)medical undergraduates studied lower extremity anatomy in an interactive 3D AR environment with either a stereoscopic 3D view (n = 32) or a monoscopic 3D view (n = 34). Visual-spatial abilities were tested with a mental rotation test. Anatomical knowledge was assessed by a validated 30-item written test and a 30-item specimen test. Cognitive load was measured by the NASA-TLX questionnaire. Students in the stereoscopic 3D and monoscopic 3D groups performed equally well in terms of percentage of correct answers (written test: 47.9 ± 15.8 vs. 49.1 ± 18.3; P = 0.635; specimen test: 43.0 ± 17.9 vs. 46.3 ± 15.1; P = 0.429) and reported similar cognitive load scores (6.2 ± 1.0 vs. 6.2 ± 1.3; P = 0.992). Regardless of intervention, visual-spatial abilities were positively associated with specimen test scores (η2 = 0.13, P = 0.003), perceived representativeness of the anatomy test questions (P = 0.010), and subjective improvement in anatomy knowledge (P < 0.001). In conclusion, binocular disparity does not improve learning anatomy. Motion parallax should be considered as another important depth cue that contributes to depth perception during learning in a stereoscopic 3D AR environment.
Affiliation(s)
- Katerina Bogomolova
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, the Netherlands
- Inssaf El Messaoudi
- Department of Orthopedics, Faculty of Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Micha Holla
- Department of Orthopedics, Faculty of Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Steven E. R. Hovius
- Department of Plastic and Reconstructive Surgery, Radboud University Medical Center, Nijmegen, the Netherlands
- Jos A. van der Hage
- Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, the Netherlands
- Beerend P. Hierck
- Department of Anatomy and Physiology, Clinical Sciences, Veterinary Medicine Faculty, Utrecht, the Netherlands
4
Condino S, Sannino S, Cutolo F, Giannini A, Simoncini T, Ferrari V. Single feature constrained manual registration method for Augmented Reality applications in gynecological laparoscopic interventions. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:566-571. PMID: 36086356; DOI: 10.1109/embc48229.2022.9871263.
Abstract
Augmented Reality (AR) can avoid some of the drawbacks of Minimally Invasive Surgery and may provide opportunities for developing innovative tools to assist surgeons. In laparoscopic surgery, achieving easy and sufficiently accurate registration is an open challenge. This is particularly true in procedures, such as laparoscopic abdominal sacrocolpopexy, that lack a sufficient number of visible anatomical landmarks to be used as a reference for registration. In an attempt to address the above limitations, we developed and preliminarily tested a constrained manual registration procedure based on the identification of a single anatomical landmark in the laparoscopic images and the intraoperative measurement of the laparoscope orientation. Tests in a rigid in-vitro environment show good accuracy (median error of 2.4 mm, obtained in about 4 min) and good preliminary feedback from the technical staff who tested the system. Further experimentation in a more realistic environment is needed to validate these positive results. Clinical Relevance - This paper provides a new registration method for the development of AR educational videos and AR-based navigation systems for laparoscopic interventions.
5
Mendicino AR, Condino S, Carbone M, Cutolo F, Cattari N, Andreani L, Parchi PD, Capanna R, Ferrari V. Augmented Reality as a Tool to Guide Patient-Specific Templates Placement in Pelvic Resections. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:3481-3484. PMID: 36086331; DOI: 10.1109/embc48229.2022.9871766.
Abstract
Patient-specific templates (PST) have become a useful tool for guiding osteotomies in complex surgical scenarios such as pelvic resections. The design of the surgical template results in sharper, less jagged resection margins than freehand cuts. However, correct placement can be difficult in some anatomical regions and cannot be verified during surgery. Conventionally, pelvic resections are performed using Computer Assisted Surgery (CAS), and in recent years Augmented Reality (AR) has been proposed in the literature as an additional tool to support PST placement. This work presents an AR task that simplifies and improves the accuracy of template positioning by displaying virtual content; the focus of the work is the creation of the virtual guides displayed during the AR task. The system was validated on a patient-specific phantom designed to provide a realistic setup, with encouraging results. The use of AR simplifies the surgical task and optimizes the correct positioning of the cutting template: an average error of 2.19 mm was obtained, lower than that reported for state-of-the-art solutions. In addition, supporting PST placement through AR guidance is less time-consuming than the standard procedure, which relies solely on anatomical landmarks as reference.
6
Cercenelli L, Babini F, Badiali G, Battaglia S, Tarsitano A, Marchetti C, Marcelli E. Augmented Reality to Assist Skin Paddle Harvesting in Osteomyocutaneous Fibular Flap Reconstructive Surgery: A Pilot Evaluation on a 3D-Printed Leg Phantom. Front Oncol 2022;11:804748. PMID: 35071009; PMCID: PMC8770836; DOI: 10.3389/fonc.2021.804748.
Abstract
Background: Augmented Reality (AR) represents an evolution of navigation-assisted surgery, providing surgeons with a virtual aid contextually merged with the real surgical field. We recently reported a case series of AR-assisted fibular flap harvesting for mandibular reconstruction. However, the registration accuracy between the real and the virtual content needs to be systematically evaluated before widely promoting this tool in clinical practice. In this paper, after describing the AR-based protocol implemented for both a tablet and the HoloLens 2 smart glasses, we evaluated in a first test session the achievable registration accuracy with the two display solutions, and in a second test session the success rate in executing the AR-guided skin paddle incision task on a 3D-printed leg phantom. Methods: From a real computed tomography dataset, 3D virtual models of a human leg, including fibula, arteries, and skin with the planned paddle profile for harvesting, were obtained. All virtual models were imported into the Unity software to develop a marker-less AR application suitable for use both via tablet and via the HoloLens 2 headset. The registration accuracy for both solutions was verified on a 3D-printed leg phantom obtained from the virtual models, by repeatedly applying the tracking function and computing pose deviations between the AR-projected virtual skin paddle profile and the real one transferred to the phantom via a CAD/CAM cutting guide. The success rate in completing the AR-guided task of skin paddle harvesting was evaluated using CAD/CAM templates positioned on the phantom model surface. Results: On average, the marker-less AR protocol showed comparable registration errors (ranging within 1-5 mm) for the tablet-based and HoloLens-based solutions. Registration accuracy seems to be quite sensitive to ambient light conditions. We found a good success rate in completing the AR-guided task within an error margin of 4 mm (97% and 100% for tablet and HoloLens, respectively). All subjects reported greater usability and ergonomics for the HoloLens 2 solution. Conclusions: Results revealed that the proposed marker-less AR-based protocol may guarantee a registration error within 1-5 mm for assisting skin paddle harvesting in the clinical setting. Optimal lighting conditions and further improvement of marker-less tracking technologies have the potential to increase the efficiency and precision of this AR-assisted reconstructive surgery.
Affiliation(s)
- Laura Cercenelli
- eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, Bologna, Italy
- Federico Babini
- eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, Bologna, Italy
- Giovanni Badiali
- Maxillofacial Surgery Unit, Head and Neck Department, IRCCS Azienda Ospedaliera Universitaria di Bologna, Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum University of Bologna, Bologna, Italy
- Salvatore Battaglia
- Maxillofacial Surgery Unit, Policlinico San Marco University Hospital, University of Catania, Catania, Italy
- Achille Tarsitano
- Maxillofacial Surgery Unit, Head and Neck Department, IRCCS Azienda Ospedaliera Universitaria di Bologna, Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum University of Bologna, Bologna, Italy
- Claudio Marchetti
- Maxillofacial Surgery Unit, Head and Neck Department, IRCCS Azienda Ospedaliera Universitaria di Bologna, Department of Biomedical and Neuromotor Sciences, Alma Mater Studiorum University of Bologna, Bologna, Italy
- Emanuela Marcelli
- eDIMES Lab - Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, Bologna, Italy
7
Morales Mojica CM, Velazco-Garcia JD, Pappas EP, Birbilis TA, Becker A, Leiss EL, Webb A, Seimenis I, Tsekos NV. A Holographic Augmented Reality Interface for Visualizing of MRI Data and Planning of Neurosurgical Procedures. J Digit Imaging 2021;34:1014-1025. PMID: 34027587; DOI: 10.1007/s10278-020-00412-3.
Abstract
The recent introduction of wireless head-mounted displays (HMD) promises to enhance 3D image visualization by immersing the user into 3D morphology. This work introduces a prototype holographic augmented reality (HAR) interface for the 3D visualization of magnetic resonance imaging (MRI) data for the purpose of planning neurosurgical procedures. The computational platform generates a HAR scene that fuses pre-operative MRI sets, segmented anatomical structures, and a tubular tool for planning an access path to the targeted pathology. The operator can manipulate the presented images and segmented structures and perform path-planning using voice and gestures. On-the-fly, the software uses defined forbidden-regions to prevent the operator from harming vital structures. In silico studies using the platform with a HoloLens HMD assessed its functionality and the computational load and memory for different tasks. A preliminary qualitative evaluation revealed that holographic visualization of high-resolution 3D MRI data offers an intuitive and interactive perspective of the complex brain vasculature and anatomical structures. This initial work suggests that immersive experiences may be an unparalleled tool for planning neurosurgical procedures.
Affiliation(s)
- Cristina M Morales Mojica
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
- Jose D Velazco-Garcia
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
- Eleftherios P Pappas
- Medical Physics Laboratory, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Aaron Becker
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, USA
- Ernst L Leiss
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
- Andrew Webb
- C.J. Gorter Center for High Field MRI, Leiden University Medical Center, Leiden, Netherlands
- Ioannis Seimenis
- Medical Physics Laboratory, School of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Nikolaos V Tsekos
- MRI Lab, Department of Computer Science, University of Houston, 4800 Calhoun Road PGH 501, Houston, TX, USA
8
Mixed Reality Interaction and Presentation Techniques for Medical Visualisations. Adv Exp Med Biol 2020. PMID: 33211310; DOI: 10.1007/978-3-030-47483-6_7.
Abstract
Mixed, augmented, and virtual reality technologies are burgeoning, with new applications and use cases appearing rapidly. This chapter provides a brief overview of the fundamental display presentation methods: head-worn, hand-held, and projector-based displays. We present a summary of visualisation methods that employ these technologies in the medical domain, with a diverse range of examples spanning diagnostics and exploration, intervention and clinical use, interaction and gestures, and education.
9
Cercenelli L, Carbone M, Condino S, Cutolo F, Marcelli E, Tarsitano A, Marchetti C, Ferrari V, Badiali G. The Wearable VOSTARS System for Augmented Reality-Guided Surgery: Preclinical Phantom Evaluation for High-Precision Maxillofacial Tasks. J Clin Med 2020;9:3562. PMID: 33167432; PMCID: PMC7694536; DOI: 10.3390/jcm9113562.
Abstract
BACKGROUND: In the context of guided surgery, augmented reality (AR) represents a groundbreaking improvement. The Video and Optical See-Through Augmented Reality Surgical System (VOSTARS) is a new wearable AR head-mounted display (HMD), recently developed as an advanced navigation tool for maxillofacial and plastic surgery and other non-endoscopic surgeries. In this study, we report the results of phantom tests with VOSTARS aimed at evaluating its feasibility and accuracy in performing maxillofacial surgical tasks. METHODS: An early prototype of VOSTARS was used. A Le Fort 1 osteotomy was selected as the experimental task to be performed under VOSTARS guidance. A dedicated set-up was prepared, including the design of a maxillofacial phantom, an ad hoc tracker anchored to the occlusal splint, and cutting templates for accuracy assessment. Both qualitative and quantitative assessments were carried out. RESULTS: VOSTARS, used in combination with the designed maxilla tracker, showed excellent tracking robustness under operating-room lighting. Accuracy tests showed that 100% of Le Fort 1 trajectories were traced with an accuracy of ±1.0 mm and that, on average, 88% of the trajectory's length was within ±0.5 mm accuracy. CONCLUSIONS: Our preliminary results suggest that the VOSTARS system can be a feasible and accurate solution for guiding maxillofacial surgical tasks, paving the way for its validation in clinical trials and for a wide spectrum of maxillofacial applications.
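Per-trajectory accuracy figures of the kind reported here (e.g., 88% of a trajectory's length within ±0.5 mm) can be computed from sampled point-wise deviations of the traced line from the planned osteotomy. A minimal sketch, assuming uniformly sampled deviations; the sample values below are hypothetical, not the study's data:

```python
def fraction_within(deviations, margin_mm):
    """Fraction of a traced trajectory's length whose point-wise deviation
    from the planned line stays within +/- margin_mm. Assumes the trajectory
    was sampled at a uniform step, so the count ratio equals the length ratio."""
    inside = sum(1 for d in deviations if abs(d) <= margin_mm)
    return inside / len(deviations)

# hypothetical sampled deviations (mm) of one traced osteotomy line
devs = [0.1, 0.3, -0.2, 0.6, 0.4, -0.8, 0.2, 0.1]
frac_1mm = fraction_within(devs, 1.0)   # all 8 samples within +/-1.0 mm
frac_05mm = fraction_within(devs, 0.5)  # 6 of 8 samples within +/-0.5 mm
print(frac_1mm, frac_05mm)  # → 1.0 0.75
```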
Affiliation(s)
- Laura Cercenelli
- eDIMES Lab—Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Correspondence: Tel.: +39-0516364603
- Marina Carbone
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Sara Condino
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Emanuela Marcelli
- eDIMES Lab—Laboratory of Bioengineering, Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
- Achille Tarsitano
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy
- Claudio Marchetti
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy
- Vincenzo Ferrari
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Giovanni Badiali
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, University of Bologna, 40138 Bologna, Italy
10
Augmented-Reality-Based 3D Emotional Messenger for Dynamic User Communication with Smart Devices. Electronics 2020. DOI: 10.3390/electronics9071127.
Abstract
With the development of Internet technologies, chat environments have migrated from PCs to mobile devices. Conversations have moved from phone calls and text messages to mobile messaging services or "messengers," which has led to a significant surge in the use of mobile messengers such as Line and WhatsApp. However, because these messengers mainly use text as the communication medium, they have the inherent disadvantage of not effectively representing the user's nonverbal expressions. In this context, we propose a new emotional communication messenger that improves upon the limitations of the static expressions in current messenger applications. We develop a chat messenger based on augmented reality (AR) technology using smartglasses, a type of wearable device. To this end, we select a server model that is suitable for AR, and we apply an effective emotional expression method based on 16 different basic emotions classified as per Russell's model. In our app, these emotions can be expressed via emojis, animations, particle effects, and sound clips. Finally, we verify the efficacy of our messenger by conducting a user study comparing it with current 2D-based messenger services. Our messenger service can serve as a prototype for future AR-based messenger apps.
11
Condino S, Fida B, Carbone M, Cercenelli L, Badiali G, Ferrari V, Cutolo F. Wearable Augmented Reality Platform for Aiding Complex 3D Trajectory Tracing. Sensors (Basel) 2020;20:1612. PMID: 32183212; PMCID: PMC7146390; DOI: 10.3390/s20061612.
Abstract
Augmented reality (AR) head-mounted displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Nevertheless, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we show the results of a user study aimed at validating, qualitatively and quantitatively, a recently developed AR platform specifically conceived for guiding complex 3D trajectory tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, subjects were asked to perform 3D trajectory tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform can be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin lower than 1 mm. The results confirm that the proposed AR platform can boost the profitable adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
Affiliation(s)
- Sara Condino
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Benish Fida
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Marina Carbone
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Laura Cercenelli
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
- Giovanni Badiali
- Maxillofacial Surgery Unit, Department of Biomedical and Neuromotor Sciences and S. Orsola-Malpighi Hospital, Alma Mater Studiorum University of Bologna, 40138 Bologna, Italy
- Vincenzo Ferrari
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
12
Ambiguity-Free Optical-Inertial Tracking for Augmented Reality Headsets. Sensors (Basel) 2020;20:1444. PMID: 32155808; PMCID: PMC7085738; DOI: 10.3390/s20051444.
Abstract
The increasing capability of computing power and mobile graphics has made possible the release of self-contained augmented reality (AR) headsets featuring efficient head-anchored tracking solutions. Ego-motion estimation based on well-established infrared tracking of markers ensures sufficient accuracy and robustness. Unfortunately, wearable visible-light stereo cameras with a short baseline, operating under uncontrolled lighting conditions, suffer from tracking failures and ambiguities in pose estimation. To improve the accuracy of optical self-tracking and its resiliency to marker occlusions, degraded camera calibrations, and inconsistent lighting, in this work we propose a sensor-fusion approach based on Kalman filtering that integrates optical tracking data with inertial tracking data when computing motion correlation. To measure improvements in AR overlay accuracy, experiments were performed with a custom-made AR headset designed for supporting complex manual tasks performed under direct vision. Experimental results show that the proposed solution improves the head-mounted display (HMD) tracking accuracy by one third. It also improves robustness by capturing the orientation of the target scene when some of the markers are occluded and when the optical tracking yields unstable and/or ambiguous results due to the limitations of using head-anchored stereo tracking cameras under uncontrollable lighting conditions.
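The optical-inertial fusion described above can be illustrated with a deliberately simplified 1-D Kalman filter: the inertial rate drives the prediction, the marker-based measurement drives the correction, and occluded frames simply skip the update so the filter coasts on the inertial estimate. This is an illustrative sketch under those assumptions, not the paper's filter (which operates on full 6-DoF headset poses):

```python
class OrientationKF:
    """Minimal scalar Kalman filter: gyro rate propagates the orientation,
    an optical marker measurement corrects it when available."""
    def __init__(self, q=0.01, r=0.5):
        self.x = 0.0   # orientation estimate (deg)
        self.p = 1.0   # estimate variance
        self.q = q     # process (gyro) noise variance
        self.r = r     # measurement (optical) noise variance

    def predict(self, rate, dt):
        # inertial propagation: integrate the angular rate
        self.x += rate * dt
        self.p += self.q

    def update(self, z):
        # optical correction: blend measurement in by the Kalman gain
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)

kf = OrientationKF()
for step in range(50):
    kf.predict(rate=10.0, dt=0.1)        # gyro reports 1 deg per step
    if step % 5 != 0:                    # markers visible 4 frames out of 5
        kf.update(z=(step + 1) * 1.0)    # absolute marker-based orientation
print(round(kf.x, 2))  # → 50.0
```

With noiseless synthetic data the estimate tracks the true 1 deg/step motion exactly; the point of the structure is that occluded frames degrade gracefully instead of losing the pose.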
13
Low-Computational Cost Stitching Method in a Three-Eyed Endoscope. J Healthc Eng 2019;2019:5613931. PMID: 31316742; PMCID: PMC6604418; DOI: 10.1155/2019/5613931.
Abstract
Aortic valve replacement is the only definitive treatment for aortic stenosis, a highly prevalent condition in the elderly population. Minimally invasive surgery has brought numerous benefits to this intervention, and robotics recently provided additional improvements in terms of telemanipulation, motion scaling, and smaller incisions. Difficulty in obtaining a clear and wide field of vision is a major challenge in minimally invasive aortic valve surgery: the surgeon orients with difficulty because of the lack of direct view and limited space. This work focuses on the development of a computer vision methodology, for a three-eyed endoscopic vision system, to ease minimally invasive instrument guidance during aortic valve surgery. Specifically, it presents an efficient image-stitching method that improves spatial awareness and overcomes the orientation problems that arise when the cameras are decentralized with respect to the main axis of the aorta and are non-parallel. The proposed approach was tested for the navigation of an innovative robotic system for minimally invasive valve surgery. Based on the specific geometry of the setup and the intrinsic parameters of the three cameras, we estimate the plane-induced homographic transformation that merges the views of the operative-site plane into a single stitched image. To evaluate the deviation from correct image alignment, we performed quantitative tests by stitching a chessboard pattern. The tests showed a minimum error, relative to the image size, of 0.46 ± 0.15% measured at the homography distance of 40 mm and a maximum error of 6.09 ± 0.23% at the maximum offset of 10 mm. Three surgeons experienced in aortic valve replacement by mini-sternotomy and mini-thoracotomy performed experimental tests comparing navigation and orientation capabilities in a silicone aorta with and without the stitched image. The tests showed that the stitched image allows for good orientation and navigation within the aorta and, furthermore, provides greater safety while releasing the valve than driving from the three separate views. The average processing time for stitching three views into one image is 12.6 ms, proving that the method is not computationally expensive and thus leaves room for further real-time processing.
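The plane-induced homography at the core of such a stitching step has a standard closed form: for a world plane n·X = d expressed in camera-1 coordinates and a relative pose (R, t) of camera 2 with respect to camera 1, H = K₂(R + t·nᵀ/d)K₁⁻¹. Below is a minimal numpy sketch of this standard construction (our own illustration; the function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def plane_induced_homography(K1, K2, R, t, n, d):
    """Homography mapping image points of the world plane n.X = d
    (plane expressed in camera-1 coordinates) from camera 1 to camera 2,
    where (R, t) is camera 2's pose relative to camera 1:
        H = K2 (R + t n^T / d) K1^{-1}
    """
    H = K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)
    return H / H[2, 2]  # normalise so H[2, 2] = 1

def apply_homography(H, uv):
    """Map a pixel (u, v) through H using homogeneous coordinates."""
    x = H @ np.array([uv[0], uv[1], 1.0])
    return x[:2] / x[2]
```

Points lying on the chosen plane are mapped exactly between views, which is why the reported error grows with the offset of the scene from the homography distance.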
|
14
|
Letter to the Editor on “Augmented Reality Based Navigation for Computer Assisted Hip Resurfacing: A Proof of Concept Study”. Ann Biomed Eng 2019; 47:2151-2153. [DOI: 10.1007/s10439-019-02299-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 04/30/2019] [Accepted: 05/29/2019] [Indexed: 01/20/2023]
|
15
|
Viglialoro R, Esposito N, Condino S, Cutolo F, Guadagni S, Gesi M, Ferrari M, Ferrari V. Augmented Reality to Improve Surgical Simulation. Lessons Learned Towards the Design of a Hybrid Laparoscopic Simulator for Cholecystectomy. IEEE Trans Biomed Eng 2018; 66:2091-2104. [PMID: 30507490 DOI: 10.1109/tbme.2018.2883816] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Indexed: 02/05/2023]
Abstract
Hybrid surgical simulators based on Augmented Reality (AR) solutions benefit from the advantages of both box trainers and Virtual Reality simulators. This paper reports the results of a long development stage of a hybrid simulator for laparoscopic cholecystectomy that integrates real and virtual components. We first outline the specifications of the AR simulator and then explain the strategy adopted for implementing it, based on a careful selection of its simulated anatomical components and characterized by real-time tracking of both the target anatomy and the laparoscope. The former is tracked by means of an electromagnetic field generator, while the latter requires an additional camera for video tracking. The new system was evaluated in terms of AR visualization accuracy, realism, and hardware robustness. The results show that the accuracy of the AR visualization is adequate for training purposes, and the qualitative evaluation confirms the robustness and realism of the simulator. The AR simulator satisfies all the initial specifications in terms of anatomical appearance, modularity, reusability, minimization of spare-parts cost, and the ability to record surgical errors and to track the Calot's triangle and the laparoscope in real time. The proposed system could be an effective training tool for learning the task of identifying and isolating Calot's triangle in laparoscopic cholecystectomy. Moreover, the presented strategy could be applied to simulate other surgical procedures involving the identification and isolation of generic tubular structures, such as blood vessels, the biliary tree, and nerves, which are not directly visible.
|
16
|
Fida B, Cutolo F, di Franco G, Ferrari M, Ferrari V. Augmented reality in open surgery. Updates Surg 2018; 70:389-400. [PMID: 30006832 DOI: 10.1007/s13304-018-0567-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Received: 05/30/2018] [Accepted: 07/08/2018] [Indexed: 12/17/2022]
Abstract
Augmented reality (AR) has been successfully providing surgeons with extensive visual information about surgical anatomy to assist them throughout the procedure. AR allows surgeons to view the surgical field through a superimposed 3D virtual model of the anatomical details. However, open surgery presents new challenges. This study provides a comprehensive overview of the available literature on the use of AR in open surgery, in both clinical and simulated settings. In this way, we aim to analyze current trends and solutions to help developers and end-users discuss and understand the benefits and shortcomings of these systems in open surgery. We performed a PubMed search of the available literature, updated to January 2018, using the terms (1) "augmented reality" AND "open surgery", (2) "augmented reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic", (3) "mixed reality" AND "open surgery", (4) "mixed reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic". The aspects evaluated were the real data source, the virtual data source, the visualization processing modality, the tracking modality, the registration technique, and the AR display type. The initial search yielded 502 studies. After removing duplicates and reading the abstracts, a total of 13 relevant studies were selected. In 1 of the 13 studies, in vitro experiments were performed, while the rest were carried out in a clinical setting, including pancreatic, hepatobiliary, and urogenital surgeries. AR systems in open surgery appear to be versatile and reliable tools in the operating room. However, some technological limitations need to be addressed before they can be implemented in routine practice.
Affiliation(s)
- Benish Fida
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Fabrizio Cutolo
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Gregorio di Franco
- General Surgery Unit, Department of Surgery, Translational and New Technologies, University of Pisa, Pisa, Italy
| | - Mauro Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy; Vascular Surgery Unit, Cisanello University Hospital AOUP, Pisa, Italy
| | - Vincenzo Ferrari
- Department of Information Engineering, University of Pisa, Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| |
|
17
|
Abstract
In non-orthoscopic video see-through (VST) head-mounted displays (HMDs), depth perception through stereopsis is adversely affected by sources of spatial perception error. Solutions for parallax-free and orthoscopic VST HMDs have been considered to ensure proper space perception, but at the expense of increased bulk and weight. In this work, we present a hybrid video-optical see-through HMD whose geometry explicitly violates the rigorous conditions of orthostereoscopy. To properly recover natural stereo fusion of the scene within the personal space in a region around a predefined distance from the observer, we partially resolve the eye-camera parallax by warping the camera images through a perspective-preserving homography that accounts for the geometry of the VST HMD and refers to that distance. To validate our solution, we conducted objective and subjective tests assessing its efficacy in recovering natural depth perception in the space around said reference distance. The results show that the quasi-orthoscopic setting of the HMD, together with the perspective-preserving image warping, allows a correct perception of relative depths to be recovered. The perceived distortion of space around the reference plane proved not to be as severe as predicted by the mathematical models.
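The perspective-preserving warp described here is itself a plane-induced homography referred to the chosen reference distance. The sketch below illustrates the idea under the simplifying assumption of a pure eye-camera translation; the names and that assumption are ours, while the paper's actual warp accounts for the full VST HMD geometry:

```python
import numpy as np

def parallax_warp(K_cam, K_eye, t_cam_to_eye, d_ref):
    """Homography re-rendering the camera image as if captured from the
    eye position; exact for a fronto-parallel plane at distance d_ref.
    The pure-translation model (R = I) is a simplifying assumption."""
    n = np.array([0.0, 0.0, 1.0])                      # plane normal along the optical axis
    M = np.eye(3) + np.outer(t_cam_to_eye, n) / d_ref  # plane-induced motion for R = I
    H = K_eye @ M @ np.linalg.inv(K_cam)
    return H / H[2, 2]

def warp_point(H, uv):
    """Map a pixel (u, v) through homography H."""
    x = H @ np.array([uv[0], uv[1], 1.0])
    return x[:2] / x[2]
```

Pixels of points lying on the reference plane are re-projected exactly, while points off that plane retain a residual parallax — consistent with the abstract's claim that fusion is recovered in a region around the reference distance.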
|
18
|
Cutolo F, Meola A, Carbone M, Sinceri S, Cagnazzo F, Denaro E, Esposito N, Ferrari M, Ferrari V. A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom. Comput Assist Surg (Abingdon) 2017; 22:39-53. [PMID: 28754068 DOI: 10.1080/24699322.2017.1358400] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Indexed: 10/19/2022]
Affiliation(s)
- Fabrizio Cutolo
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Information Engineering, University of Pisa, Pisa, Italy
| | - Antonio Meola
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Marina Carbone
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Sara Sinceri
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | | | - Ennio Denaro
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Nicola Esposito
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
| | - Mauro Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Vascular Surgery, Pisa University Medical School, Pisa, Italy
| | - Vincenzo Ferrari
- Department of Translational Research and New Technologies in Medicine and Surgery, EndoCAS Center, University of Pisa, Pisa, Italy
- Department of Information Engineering, University of Pisa, Pisa, Italy
| |
|
19
|
Recent Advances on Wearable Electronics and Embedded Computing Systems for Biomedical Applications. Electronics 2017. [DOI: 10.3390/electronics6010012] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Indexed: 11/17/2022]
|