1. Kim SK, Lee Y, Hwang HR, Park SY. 3D human anatomy augmentation over a mannequin for the training of nursing skills. Technol Health Care 2024; 32:1523-1533. [PMID: 37781830] [DOI: 10.3233/thc-230586]
Abstract
BACKGROUND An in-depth understanding of human anatomy is the foundation for safety in nursing practice. Augmented reality is an emerging technology that can be used for integrative learning in nursing education. OBJECTIVE The study aimed to develop a human anatomy-based skill training system and pilot test its usability and feasibility. METHODS Twenty-seven nursing students participated in 3D anatomy-based skill training for intramuscular injection and Levin tube feeding using HoloLens 2. Various user interfaces, including pictures, videos, animated graphics, and annotation boxes, gave users a comprehensive understanding of the step-by-step procedures for these techniques. A one-group pre-post test was conducted to observe changes in skill performance competency, usability, and learning satisfaction. RESULTS After study participation, a statistically significant improvement in skill performance competency (p < 0.05) was observed. The usability results showed that students were satisfied with the usefulness of the program (9.55 ± 0.49) and scored highly on the intention to participate in other educational programs (9.62 ± 0.59). A high level of learning satisfaction was achieved (9.55 ± 0.49), with positive responses regarding students' engagement and excitement in applying cutting-edge technology. CONCLUSION The 3D anatomy-based nursing skill training showed good potential to improve learning outcomes and facilitate engagement in self-directed practice. It can be integrated into undergraduate nursing education as a supplementary teaching tool, helping to combine knowledge and practice.
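As a rough illustration of the one-group pre-post comparison described above, the sketch below runs a paired test on hypothetical pre- and post-training skill scores; the abstract does not state which statistical test the authors used, so both the test choice and the data are assumptions.

```python
# Hypothetical paired pre/post comparison (the abstract does not state the test used).
import numpy as np
from scipy import stats

pre_scores = np.array([62, 70, 58, 75, 66, 71, 64, 69])   # assumed pre-training skill scores
post_scores = np.array([78, 84, 72, 88, 80, 83, 79, 85])  # assumed post-training skill scores

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)  # paired test on the same students
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```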
Affiliation(s)
- Sun Kyung Kim
- Department of Nursing, Mokpo National University, Jeonnam, Korea
- Department of Biomedicine, Health and Life Convergence Sciences, BK21 Four, Mokpo National University, Jeonnam, Korea
- Biomedical and Healthcare Research Institute, Mokpo National University, Jeonnam, Korea
- Youngho Lee
- Department of Computer Engineering, Mokpo National University, Jeonnam, Korea
- Hye Ri Hwang
- Department of Nursing, Mokpo National University, Jeonnam, Korea
- Su Yeon Park
- Department of Nursing, Mokpo National University, Jeonnam, Korea
2. He F, Qi X, Feng Q, Zhang Q, Pan N, Yang C, Liu S. Research on augmented reality navigation of in vitro fenestration of stent-graft based on deep learning and virtual-real registration. Comput Assist Surg (Abingdon) 2023; 28:2289339. [PMID: 38059572] [DOI: 10.1080/24699322.2023.2289339]
Abstract
OBJECTIVES In vitro fenestration of stent-graft (IVFS) demands high-precision navigation methods to achieve optimal surgical outcomes. This study proposes an augmented reality (AR) navigation method for IVFS that provides an in situ overlay display to locate fenestration positions. METHODS We propose an AR navigation method to assist doctors in performing IVFS. A deep learning-based algorithm achieves automatic and rapid aorta segmentation, and Vuforia-based virtual-real registration and marker recognition algorithms are integrated to ensure an accurate in situ AR image. RESULTS The proposed method provides a three-dimensional in situ AR image, with a fiducial registration error of 2.070 mm after virtual-real registration. The aorta segmentation experiment obtained a Dice similarity coefficient of 91.12% and a Hausdorff distance of 2.59, outperforming the conventional algorithms before improvement. CONCLUSIONS The proposed method can intuitively and accurately locate fenestration positions and can therefore assist doctors in performing IVFS.
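The two segmentation metrics reported above are standard; the following sketch shows how a Dice similarity coefficient and a symmetric Hausdorff distance can be computed on binary masks with NumPy and SciPy. The synthetic masks are illustrative, not data from the paper.

```python
# Dice similarity coefficient and symmetric Hausdorff distance on binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Maximum of the two directed Hausdorff distances between foreground point sets."""
    p = np.argwhere(pred).astype(float)
    g = np.argwhere(gt).astype(float)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Synthetic example: two overlapping square masks.
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool); gt[12:42, 12:42] = True
print(f"Dice = {dice_coefficient(pred, gt):.4f}, HD = {hausdorff_distance(pred, gt):.2f}")
```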
Affiliation(s)
- Fengfeng He
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiaoyu Qi
- Department of Vascular Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qingmin Feng
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qiang Zhang
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ning Pan
- School of Biomedical Engineering, South-Central Minzu University, Wuhan, China
- Chao Yang
- Department of Vascular Surgery, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shenglin Liu
- Institute of Biomedical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
3. Mamone V, Ferrari V, D’Amato R, Condino S, Cattari N, Cutolo F. Head-Mounted Projector for Manual Precision Tasks: Performance Assessment. Sensors (Basel) 2023; 23:3494. [PMID: 37050554] [PMCID: PMC10098766] [DOI: 10.3390/s23073494]
Abstract
The growing interest in augmented reality applications has led to an in-depth look at the performance of head-mounted displays and their testing in numerous domains. Other devices for augmenting the real world with virtual information are presented less frequently, and reports usually focus on describing the device rather than analyzing its performance. This is the case for projected augmented reality, which, compared with head-worn AR displays, has the advantage of being simultaneously accessible by multiple users while preserving user awareness of the environment and the feeling of immersion. This work provides a general evaluation of a custom-made head-mounted projector for aiding precision manual tasks, using an experimental protocol designed to investigate spatial and temporal registration and their combination. The test results show that the accuracy (0.6 ± 0.1 mm spatial registration error) and motion-to-photon latency (113 ± 12 ms) make the proposed solution suitable for guiding precision tasks.
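For readers unfamiliar with how a figure such as "0.6 ± 0.1 mm spatial registration error" is typically obtained, the minimal sketch below summarizes Euclidean distances between overlaid target points and their reference positions; the point sets and noise level are hypothetical, not the paper's measurements.

```python
# Spatial registration error as mean ± std of point-to-point distances (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)
reference_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                          [0.0, 10.0, 0.0], [10.0, 10.0, 0.0]])                   # mm, ground truth
projected_pts = reference_pts + rng.normal(scale=0.5, size=reference_pts.shape)   # measured overlay

errors = np.linalg.norm(projected_pts - reference_pts, axis=1)
print(f"spatial registration error: {errors.mean():.2f} ± {errors.std():.2f} mm")
```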
Affiliation(s)
- Virginia Mamone
- EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy
- Azienda Ospedaliero Universitaria Pisana, 56126 Pisa, Italy
- Vincenzo Ferrari
- EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Renzo D’Amato
- EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Sara Condino
- EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Nadia Cattari
- EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- EndoCAS Center for Computer-Assisted Surgery, University of Pisa, 56124 Pisa, Italy
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
4. Ma L, Huang T, Wang J, Liao H. Visualization, registration and tracking techniques for augmented reality guided surgery: a review. Phys Med Biol 2023; 68. [PMID: 36580681] [DOI: 10.1088/1361-6560/acaf23]
Abstract
Augmented reality (AR) surgical navigation has developed rapidly in recent years. This paper reviews and analyzes the visualization, registration, and tracking techniques used in AR surgical navigation systems, as well as the application of these AR systems in different surgical fields. AR visualization is divided into two categories, in situ visualization and non-in situ visualization, and the rendered content varies widely. The registration methods include manual registration, point-based registration, surface registration, marker-based registration, and calibration-based registration. The tracking methods consist of self-localization, tracking with integrated cameras, external tracking, and hybrid tracking. Moreover, we describe the applications of AR in different surgical fields. However, most AR applications have been evaluated through model and animal experiments, and there are relatively few clinical experiments, indicating that current AR navigation methods are still at an early stage of development. Finally, we summarize the contributions and challenges of AR in the surgical fields, as well as future development trends. Although AR-guided surgery has not yet reached clinical maturity, we believe that if the current development trend continues, it will soon reveal its clinical utility.
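Of the registration families listed above, point-based registration is the most compact to illustrate. The sketch below shows the classic SVD-based rigid alignment of paired fiducial points (one common formulation, not code from the review); the function, variable names, and sample points are illustrative.

```python
# Point-based rigid registration via SVD (Arun/Kabsch-style closed-form solution).
import numpy as np

def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Return R, t minimizing ||R @ src_i + t - dst_i|| over paired 3D points (N, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Usage: align image-space fiducials to patient/tracker-space fiducials and
# report the fiducial registration error (FRE).
src = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ R_true.T + np.array([5.0, -3.0, 12.0])
R, t = rigid_registration(src, dst)
fre = np.linalg.norm((src @ R.T + t) - dst, axis=1).mean()
print(f"FRE = {fre:.6f}")
```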
Affiliation(s)
- Longfei Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Tianqi Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Jie Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, People's Republic of China
5. Liu J, Sun W, Zhao Y, Zheng G. Ultrasound Probe and Hand-Eye Calibrations for Robot-Assisted Needle Biopsy. Sensors (Basel) 2022; 22:9465. [PMID: 36502167] [PMCID: PMC9740029] [DOI: 10.3390/s22239465]
Abstract
In robot-assisted ultrasound-guided needle biopsy, it is essential to calibrate the ultrasound probe and to perform hand-eye calibration of the robot in order to establish a link between intra-operatively acquired ultrasound images and robot-assisted needle insertion. Based on a high-precision optical tracking system, novel methods for ultrasound probe and robot hand-eye calibration are proposed. Specifically, we first fix optically trackable markers to the ultrasound probe and to the robot, respectively. We then design a five-wire phantom to calibrate the ultrasound probe. Finally, for hand-eye calibration, we propose an effective method that takes advantage of the steady movement of the robot without requiring an additional calibration frame or solving the AX=XB equation. After calibration, our system allows in situ definition of target lesions and aiming trajectories from intra-operatively acquired ultrasound images in order to align the robot for precise needle biopsy. Comprehensive experiments were conducted to evaluate the accuracy of the individual components of our system as well as the overall system accuracy. The experimental results demonstrated the efficacy of the proposed methods.
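For context, the conventional hand-eye formulation that the authors deliberately avoid solves AX = XB from paired robot and camera motions. The sketch below demonstrates that formulation with OpenCV's solver on synthetic, self-consistent poses; it is not the paper's calibration method.

```python
# Conventional AX = XB hand-eye calibration (shown for context; not the paper's method).
import cv2
import numpy as np

rs = np.random.RandomState(0)

def random_pose(rs):
    """Random rigid transform as a 4x4 homogeneous matrix."""
    R, _ = cv2.Rodrigues(rs.uniform(-1.0, 1.0, 3))
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rs.uniform(-0.3, 0.3, 3)
    return T

X_true = random_pose(rs)          # unknown camera-to-gripper transform to recover
T_base_target = random_pose(rs)   # fixed calibration target expressed in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):               # several distinct robot poses
    T_base_gripper = random_pose(rs)
    T_cam_target = np.linalg.inv(X_true) @ np.linalg.inv(T_base_gripper) @ T_base_target
    R_g2b.append(T_base_gripper[:3, :3]); t_g2b.append(T_base_gripper[:3, 3].reshape(3, 1))
    R_t2c.append(T_cam_target[:3, :3]);   t_t2c.append(T_cam_target[:3, 3].reshape(3, 1))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
# Residuals should be near zero on exact synthetic data.
print(np.linalg.norm(R_est - X_true[:3, :3]), np.linalg.norm(t_est.ravel() - X_true[:3, 3]))
```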
6. Lu Z, Lv Y, Ai Z, Suo K, Gong X, Wang Y. Calibration of a Catadioptric System and 3D Reconstruction Based on Surface Structured Light. Sensors (Basel) 2022; 22:7385. [PMID: 36236487] [PMCID: PMC9573738] [DOI: 10.3390/s22197385]
Abstract
To address the small field of view in 3D reconstruction, a 3D reconstruction system based on a catadioptric camera and a projector was built, with a conventional camera introduced to calibrate the catadioptric camera and projector system. Firstly, the intrinsic parameters of the catadioptric camera and the conventional camera are calibrated separately, and the projection system is then calibrated with the conventional camera. Secondly, a common coordinate system is introduced, the positions of the catadioptric camera and the projector in this coordinate system are computed respectively, and the relative pose between the catadioptric camera and the projector is obtained. Finally, the projector projects structured-light fringes and the reconstruction is performed with the catadioptric camera. The experimental results show that the reconstruction error is 0.75 mm and the relative error is 0.0068 for a target of about 1 m. The calibration and reconstruction methods proposed in this paper achieve satisfactory geometric reconstruction accuracy.
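A much-simplified sketch of the final reconstruction step is shown below: decoded fringe correspondences between the camera and the projector (treated as an inverse pinhole camera) are triangulated into 3D points. The paper uses a catadioptric camera model, so the pinhole intrinsics, relative pose, and pixel correspondences here are purely assumed values.

```python
# Triangulating decoded camera-projector correspondences (simplified pinhole model).
import cv2
import numpy as np

K_cam = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])     # assumed camera intrinsics
K_proj = np.array([[900.0, 0, 512], [0, 900.0, 384], [0, 0, 1]])    # projector as inverse camera
R = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))[0]                     # projector rotation w.r.t. camera
t = np.array([[100.0], [0.0], [0.0]])                                # assumed baseline in mm

P_cam = K_cam @ np.hstack([np.eye(3), np.zeros((3, 1))])             # 3x4 projection matrices
P_proj = K_proj @ np.hstack([R, t])

cam_pts = np.array([[300.0, 220.0], [340.0, 250.0]]).T               # decoded camera pixels, 2xN
proj_pts = np.array([[500.0, 370.0], [530.0, 400.0]]).T              # matching projector pixels, 2xN

pts_h = cv2.triangulatePoints(P_cam, P_proj, cam_pts, proj_pts)      # homogeneous 4xN result
pts_3d = (pts_h[:3] / pts_h[3]).T                                    # Nx3 points in the camera frame
print(pts_3d)
```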
Affiliation(s)
- Yaowen Lv
- Correspondence: ; Tel.: +86-15044100195
7. A Multi-User Collaborative AR System for Industrial Applications. Sensors (Basel) 2022; 22:1319. [PMID: 35214221] [PMCID: PMC8878014] [DOI: 10.3390/s22041319]
Abstract
Augmented reality (AR) applications are increasingly being used in various fields (e.g., design, maintenance, assembly, repair, and training), as AR techniques help improve efficiency and reduce costs. Moreover, collaborative AR systems extend this applicability by enabling collaborative environments for different roles. In this paper, we propose a multi-user collaborative AR system (the "multi-user collaborative system", or MUCSys), composed of three ends: MUCStudio, MUCView, and MUCServer. MUCStudio constructs industrial content with CAD model transformation, simplification, database updating, marker design, scene editing, and exportation, while MUCView handles sensor data analysis, real-time localization, scene loading, annotation editing, and virtual–real rendering. MUCServer, as the bridge between MUCStudio and MUCView, provides collaborative and database services. To achieve this, we implemented algorithms for local map establishment, global map registration, optimization, and network synchronization. The system provides AR services for diverse industrial processes through three collaborative modes: remote support, collaborative annotation, and editing. Based on the system, applications for cutting machines were presented to improve efficiency and reduce costs, covering cutting head design, production line sales, and cutting machine inspection. Finally, a user study was conducted to evaluate the user experience of the system.
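As a purely hypothetical illustration of the network synchronization mentioned above, the sketch below defines the kind of annotation-update message a MUCView client might send through MUCServer; the schema is an assumption for illustration and is not described in the paper.

```python
# Hypothetical annotation-update message for client/server synchronization (assumed schema).
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class AnnotationUpdate:
    session_id: str      # shared collaborative session
    user_id: str         # author of the edit (remote expert or on-site operator)
    anchor_id: str       # marker or map point the annotation is attached to
    pose: list           # 4x4 transform, row-major, in the shared global map frame
    text: str            # annotation content
    timestamp: float     # used for ordering/conflict resolution on the server

msg = AnnotationUpdate("cutting-machine-inspection", "expert-01", "marker-03",
                       [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1],
                       "check nozzle alignment before the next run", time.time())
payload = json.dumps(asdict(msg))  # what would be broadcast to the other MUCView clients
print(payload)
```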