1
Zhang G, Nguyen TN, Fooladi-Talari H, Salvador T, Thomas K, Crowley D, Dingeman RS, Shekhar R. Augmented reality for point-of-care ultrasound-guided vascular access in pediatric patients using Microsoft HoloLens 2: a preliminary evaluation. J Med Imaging (Bellingham) 2024; 11:062604. PMID: 39280781; PMCID: PMC11393663; DOI: 10.1117/1.jmi.11.6.062604.
Abstract
Significance Conventional ultrasound-guided vascular access procedures are challenging due to the need for anatomical understanding, precise needle manipulation, and hand-eye coordination. Recently, augmented reality (AR)-based guidance has emerged as an aid to improve procedural efficiency and, potentially, outcomes. However, its application in pediatric vascular access has not been comprehensively evaluated. Aim We developed an AR ultrasound application, HoloUS, using the Microsoft HoloLens 2 to display live ultrasound images directly in the proceduralist's field of view. We present our evaluation of the effect of using the Microsoft HoloLens 2 for point-of-care ultrasound (POCUS)-guided vascular access in 30 pediatric patients. Approach A custom software module was developed on a tablet capable of capturing the moving ultrasound image from any ultrasound machine's screen. The captured image was compressed and sent to the HoloLens 2 via a hotspot, without needing Internet access. On the HoloLens 2, we developed a custom software module to receive, decompress, and display the live ultrasound image. Hand-gesture and voice-command features were implemented for the user to reposition, resize, and change the gain and contrast of the image. We evaluated 30 (15 successful control and 12 successful interventional) cases completed in a single-center, prospective, randomized study. Results The mean overall rendering latency and the rendering frame rate of the HoloUS application were 139.30 ms (σ = 32.02 ms) and 30 frames per second, respectively. The average procedure completion time was 17.3% shorter using AR guidance. The numbers of puncture attempts and needle redirections were similar between the two groups, and the number of head adjustments was minimal in the interventional group. Conclusion We present our evaluation of the results from the first study using the Microsoft HoloLens 2 to investigate AR-based POCUS-guided vascular access in pediatric patients.
Our evaluation confirmed clinical feasibility and potential improvement in procedural efficiency.
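The capture, compress, and send pipeline described in the Approach can be sketched as below. This is a minimal stand-in for illustration only: the frame size, the zlib codec, and the 30 fps budget check are assumptions, and the abstract does not state which compression scheme HoloUS actually uses.

```python
import time
import zlib
from statistics import mean, stdev

FRAME_BUDGET_MS = 1000.0 / 30.0  # ~33.3 ms per frame at 30 fps

def compress_frame(frame_bytes, level=1):
    """Compress one captured frame before sending it to the headset.

    A fast, low compression level keeps the per-frame cost small; the
    receiver restores the frame with zlib.decompress().
    """
    return zlib.compress(frame_bytes, level)

def latency_stats(latencies_ms):
    """Mean and standard deviation of per-frame rendering latencies (ms)."""
    return mean(latencies_ms), stdev(latencies_ms)

# Synthetic 640x480 8-bit frame standing in for a captured ultrasound image.
frame = bytes(640 * 480)
t0 = time.perf_counter()
packet = compress_frame(frame)
compress_ms = (time.perf_counter() - t0) * 1000.0
restored = zlib.decompress(packet)
```

In a real system the `packet` would be sent over the hotspot socket; the per-stage times would then be summed against `FRAME_BUDGET_MS` to verify the 30 fps target.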
Affiliation(s)
- Gesiren Zhang
- Children's National Hospital, Washington, DC, United States
- Tyler Salvador
- Children's National Hospital, Washington, DC, United States
- Kia Thomas
- Children's National Hospital, Washington, DC, United States
- Howard University College of Medicine, Washington, DC, United States
- Daragh Crowley
- Children's National Hospital, Washington, DC, United States
- University College Cork, Cork, Ireland
- R Scott Dingeman
- Children's National Hospital, Washington, DC, United States
- George Washington University School of Medicine and Health Sciences, Washington, DC, United States
- Raj Shekhar
- Children's National Hospital, Washington, DC, United States
- AusculTech Dx, Silver Spring, Maryland, United States
- George Washington University School of Medicine and Health Sciences, Washington, DC, United States
2
Tomašević O, Ivančić A, Mejić L, Lužanin Z, Jorgovanović N. Depth-Sensing-Based Algorithm for Chest Morphology Assessment in Children with Cerebral Palsy. Sensors (Basel, Switzerland) 2024; 24:5575. PMID: 39275488; PMCID: PMC11398239; DOI: 10.3390/s24175575.
Abstract
This study introduced a depth-sensing-based approach with robust algorithms for tracking relative morphological changes in the chests of patients undergoing physical therapy. The challenge addressed was the periodic change in morphological parameters induced by breathing: because the recording was continuous, the parameters were extracted at the moments of maximum and minimum chest volume (inspiration and expiration) and then analyzed. The parameters were derived from morphological transverse cross-sections (CSs), extracted at the moments of maximal and minimal depth variation, and the reliability of the results was expressed through the coefficient of variation (CV) of the resulting curves. Across all subjects and levels of observed anatomy, the mean CV of CS depth values was below 2%, and the mean CV of the CS area was below 1%. To demonstrate the reproducibility of the measurements (extraction of morphological parameters), 10 subjects were recorded in two consecutive sessions two weeks apart, an interval over which no changes in the monitored parameters were expected; statistical tests showed no significant difference between the sessions, confirming the reproducibility hypothesis. Additionally, based on the representative CSs for the inspiration and expiration moments, chest mobility in quiet breathing was examined, and the statistical test again showed no difference between the two sessions. These findings support the proposed algorithm as a valuable tool for evaluating the impact of rehabilitation exercises on chest morphology.
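The reliability metric used throughout, the coefficient of variation, is simply the standard deviation expressed as a percentage of the mean; the example values below are synthetic, not the paper's data.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV = std / mean, expressed as a percentage of the mean."""
    values = np.asarray(values, dtype=float)
    return float(np.std(values) / np.mean(values) * 100.0)

# Example: depth values (mm) of one cross-section across repeated breaths.
cs_depths = [99.0, 100.0, 101.0, 100.0]
cv = coefficient_of_variation(cs_depths)
```

A CV below 2%, as reported for the CS depth values, indicates the repeated curves vary by less than 2% of their mean.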
Affiliation(s)
- Olivera Tomašević
- Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
- Luka Mejić
- Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
- Zorana Lužanin
- Faculty of Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
- Nikola Jorgovanović
- Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia
3
Do TLP, Sanhae K, Hwang L, Lee S. Real-Time Spatial Mapping in Architectural Visualization: A Comparison among Mixed Reality Devices. Sensors (Basel, Switzerland) 2024; 24:4727. PMID: 39066124; PMCID: PMC11280614; DOI: 10.3390/s24144727.
Abstract
Recent advancements in communication technology have catalyzed the widespread adoption of realistic content, with augmented reality (AR) emerging as a pivotal tool for seamlessly integrating virtual elements into real-world environments. In construction, architecture, and urban design, the integration of mixed reality (MR) technology enables rapid interior spatial mapping, providing clients with immersive experiences through which to envision their desired designs. The rapid advancement of MR devices, or devices that integrate MR capabilities, offers users numerous opportunities for enhanced entertainment experiences. However, to support designers at a high level of expertise, it is crucial to ensure the accuracy and reliability of the data provided by these devices. This study explored the potential of utilizing spatial mapping within various methodologies for surveying architectural interiors. The objective was to identify optimized spatial mapping procedures and determine the most effective applications for their use. Experiments were conducted to evaluate interior survey performance using a HoloLens 2 and an iPhone 13 Pro for spatial mapping, as well as photogrammetry. The findings indicate that the HoloLens 2 is best suited for the tasks examined within the scope of these experiments. Nonetheless, based on the acquired parameters, the authors also propose approaches for applying the other technologies in specific real-world scenarios.
Affiliation(s)
- Tam Le Phuc Do
- Department of Immersive Content Convergence, Kwangwoon University, Seoul 01897, Republic of Korea; (T.L.P.D.); (K.S.); (L.H.)
- Kang Sanhae
- Department of Immersive Content Convergence, Kwangwoon University, Seoul 01897, Republic of Korea; (T.L.P.D.); (K.S.); (L.H.)
- Leehwan Hwang
- Department of Immersive Content Convergence, Kwangwoon University, Seoul 01897, Republic of Korea; (T.L.P.D.); (K.S.); (L.H.)
- Seunghyun Lee
- Ingenium College, Kwangwoon University, Seoul 01897, Republic of Korea
4
Hou J, Hübner P, Schmidt J, Iwaszczuk D. Indoor Mapping with Entertainment Devices: Evaluating the Impact of Different Mapping Strategies for Microsoft HoloLens 2 and Apple iPhone 14 Pro. Sensors (Basel, Switzerland) 2024; 24:1062. PMID: 38400220; PMCID: PMC10893111; DOI: 10.3390/s24041062.
Abstract
Due to their low cost and portability, using entertainment devices for indoor mapping applications has become a hot research topic. However, the impact of user behavior on indoor mapping evaluation with entertainment devices is often overlooked in previous studies. This article aims to assess the indoor mapping performance of entertainment devices under different mapping strategies. We chose two entertainment devices, the HoloLens 2 and iPhone 14 Pro, for our evaluation work. Based on our previous mapping experience and user habits, we defined four simplified indoor mapping strategies: straight-forward mapping (SFM), left-right alternating mapping (LRAM), round-trip straight-forward mapping (RT-SFM), and round-trip left-right alternating mapping (RT-LRAM). First, we acquired triangle mesh data under each strategy with the HoloLens 2 and iPhone 14 Pro. Then, we compared the changes in data completeness and accuracy between the different devices and indoor mapping applications. Our findings show that compared to the iPhone 14 Pro, the triangle mesh accuracy acquired by the HoloLens 2 has more stable performance under different strategies. Notably, the triangle mesh data acquired by the HoloLens 2 under the RT-LRAM strategy can effectively compensate for missing wall and floor surfaces, mainly caused by furniture occlusion and the low frame rate of the depth-sensing camera. However, the iPhone 14 Pro is more efficient in terms of mapping completeness and can acquire a complete triangle mesh more quickly than the HoloLens 2. In summary, choosing an entertainment device for indoor mapping requires a combination of specific needs and scenes. If accuracy and stability are important, the HoloLens 2 is more suitable; if efficiency and completeness are important, the iPhone 14 Pro is better.
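Accuracy comparisons like the one above typically reduce to distances between the acquired triangle-mesh vertices and a reference model. A minimal sketch of such a cloud-to-reference accuracy metric follows; it uses brute-force nearest neighbours on toy data and does not reproduce the paper's actual evaluation pipeline.

```python
import numpy as np

def cloud_accuracy(acquired, reference):
    """Mean distance from each acquired vertex to its nearest reference vertex.

    Brute-force nearest neighbour is fine for small meshes; real scans would
    use a KD-tree or an octree for the lookup.
    """
    # (N, 1, 3) - (1, M, 3) -> (N, M) pairwise distances
    d = np.linalg.norm(acquired[:, None, :] - reference[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
acq = ref + 0.01  # a scan offset by 1 cm along every axis
err = cloud_accuracy(acq, ref)
```

A completeness metric would run in the opposite direction, measuring how much of the reference surface has an acquired vertex nearby.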
Affiliation(s)
- Jiwei Hou
- Remote Sensing and Image Analysis, Department of Civil and Environmental Engineering, Technical University of Darmstadt, 64287 Darmstadt, Germany; (J.H.); (D.I.)
- Patrick Hübner
- Remote Sensing and Image Analysis, Department of Civil and Environmental Engineering, Technical University of Darmstadt, 64287 Darmstadt, Germany; (J.H.); (D.I.)
- Jakob Schmidt
- Geodetic Measurement Systems and Sensor Technology, Department of Civil and Environmental Engineering, Technical University of Darmstadt, 64287 Darmstadt, Germany
- Dorota Iwaszczuk
- Remote Sensing and Image Analysis, Department of Civil and Environmental Engineering, Technical University of Darmstadt, 64287 Darmstadt, Germany; (J.H.); (D.I.)
5
Fraser R, Bettati P, Young J, Rathgeb A, Sirsi S, Fei B. A Fast and Interactive Augmented Reality System for PET/CT-guided Intervention of Neuroblastoma. Proceedings of SPIE, the International Society for Optical Engineering 2024; 12928:129281D. PMID: 38708144; PMCID: PMC11069343; DOI: 10.1117/12.3008663.
Abstract
Neuroblastoma is the most common extracranial solid tumor in children and can be fatal if untreated. High-intensity focused ultrasound (HIFU) is a non-invasive technique for treating tissue deep within the body; it avoids ionizing radiation and thus the long-term side effects associated with radiation-based treatments. The goal of this project was to develop the rendering component of an augmented reality (AR) system with potential applications for image-guided HIFU treatment of neuroblastoma. Our project focuses on taking 3D models of neuroblastoma lesions obtained from PET/CT and displaying them in our AR system in near real-time for use by physicians. We used volume ray casting with raster graphics as our preferred rendering method, as it allows for real-time editing of our 3D radiologic data. Unique features of our AR system include intuitive hand gestures and virtual user interfaces that allow the user to interact with the rendered data and process PET/CT images for optimal visualization. We implemented features for setting a custom transfer function, setting custom intensity cutoff points, and extracting regions of interest via cutting planes. In the future, we hope to incorporate this work into a complete system for focused ultrasound treatment by adding ultrasound simulation, visualization, and deformable registration.
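The front-to-back compositing that volume ray casting performs along each ray, including the custom transfer function and intensity cutoff mentioned above, can be sketched for a single ray. The `cutoff_tf` transfer function here is a hypothetical illustration, not the system's actual mapping, and a real renderer would run this per pixel on the GPU.

```python
def composite_ray(samples, transfer_fn):
    """Front-to-back alpha compositing of intensity samples along one ray.

    transfer_fn maps a scalar intensity to (color, alpha); editing it changes
    the rendering without touching the underlying PET/CT volume.
    """
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer_fn(s)
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:  # early ray termination once nearly opaque
            break
    return color, alpha

# A hypothetical transfer function with an intensity cutoff: voxels below
# the threshold are fully transparent, as with the cutoff feature above.
def cutoff_tf(intensity, threshold=0.3):
    if intensity < threshold:
        return 0.0, 0.0
    return intensity, 0.5

color, alpha = composite_ray([0.1, 0.5, 0.8], cutoff_tf)
```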
Affiliation(s)
- Rowan Fraser
- Center for Imaging and Surgical Innovation, University of Texas at Dallas, TX
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX
- Patric Bettati
- Center for Imaging and Surgical Innovation, University of Texas at Dallas, TX
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX
- Jeff Young
- Center for Imaging and Surgical Innovation, University of Texas at Dallas, TX
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX
- Armand Rathgeb
- Center for Imaging and Surgical Innovation, University of Texas at Dallas, TX
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX
- Shashank Sirsi
- Center for Imaging and Surgical Innovation, University of Texas at Dallas, TX
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX
- Baowei Fei
- Center for Imaging and Surgical Innovation, University of Texas at Dallas, TX
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX
6
Ran C, Zhang X, Yu H, Wang Z, Wang S, Yang J. Combined Filtering Method for Offshore Oil and Gas Platform Point Cloud Data Based on KNN_PCF and Hy_WHF and Its Application in 3D Reconstruction. Sensors (Basel, Switzerland) 2024; 24:615. PMID: 38257706; PMCID: PMC10818742; DOI: 10.3390/s24020615.
Abstract
With the increasing scale of deep-sea oil exploration and drilling platforms, the assessment, maintenance, and optimization of marine structures have become crucial. Traditional detection and manual measurement methods are inadequate for meeting these demands, but three-dimensional laser scanning technology offers a promising solution. However, the complexity of the marine environment, including waves and wind, often leads to problematic point cloud data characterized by noise points and redundancy. To address this challenge, this paper proposes a method that combines K-Nearest-Neighborhood filtering with a hyperbolic function-based weighted hybrid filtering. The experimental results demonstrate the exceptional performance of the algorithm in processing point cloud data from offshore oil and gas platforms. The method improves noise point filtering efficiency by approximately 11% and decreases the total error by 0.6 percentage points compared to existing technologies. Not only does this method accurately process anomalies in high-density areas; it also removes noise while preserving important details. Furthermore, the research method presented in this paper is particularly suited for processing large point cloud data in complex marine environments. It enhances data accuracy and optimizes the three-dimensional reconstruction of offshore oil and gas platforms, providing reliable dimensional information for land-based prefabrication of these platforms.
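The KNN stage of such a filter is commonly implemented as a statistical outlier removal over mean neighbour distances. The sketch below is a generic version of that idea on toy data; the paper's KNN_PCF and Hy_WHF specifics, including the hyperbolic weighting, are not reproduced here.

```python
import numpy as np

def knn_outlier_filter(points, k=3, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    mean + std_ratio * std of all such distances, a standard KNN-based
    statistical noise filter for point clouds.
    """
    # Pairwise distances, sorted per row; column 0 is the zero self-distance.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# A tight cluster of platform points plus one far-away noise point.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                [0.1, 0.1, 0.0], [5.0, 5.0, 5.0]])
clean = knn_outlier_filter(pts, k=3)
```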
Affiliation(s)
- Chunqing Ran
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; (C.R.); (H.Y.); (Z.W.); (S.W.)
- Xiaobo Zhang
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; (C.R.); (H.Y.); (Z.W.); (S.W.)
- Hao Yu
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; (C.R.); (H.Y.); (Z.W.); (S.W.)
- College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
- Zhengyang Wang
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; (C.R.); (H.Y.); (Z.W.); (S.W.)
- Shengli Wang
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; (C.R.); (H.Y.); (Z.W.); (S.W.)
- Jichao Yang
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; (C.R.); (H.Y.); (Z.W.); (S.W.)
7
González-Rueda JR, Galparsoro-Catalán A, de Paz-Hermoso VM, Riad-Deglow E, Zubizarreta-Macho Á, Pato-Mourelo J, Hernández-Montero S, Montero-Martín J. Accuracy of zygomatic dental implant placement using computer-aided static and dynamic navigation systems compared with a mixed reality appliance. An in vitro study. J Clin Exp Dent 2023; 15:e1035-e1044. PMID: 38186921; PMCID: PMC10767737; DOI: 10.4317/jced.61097.
Abstract
Background To analyze and compare the accuracy of zygomatic dental implant placement carried out using static navigation surgery, dynamic navigation surgery, and an augmented reality appliance. Material and Methods Eighty (80) zygomatic dental implants were randomly assigned to one of four study groups: A: static navigation implant surgery (n = 20) (GI); B: dynamic navigation implant surgery (n = 20) (NI); C: augmented reality appliance implant placement (n = 20) (ARI); and D: free-hand technique (n = 20) (FHI). A preoperative cone-beam computed tomography (CBCT) scan of the existing situation was performed to plan the surgical approach for the computer-assisted implant surgery study groups. Four zygomatic dental implants were placed in anatomy-based polyurethane models (n = 20) manufactured by stereolithography, and a postoperative CBCT scan was taken. Subsequently, the preoperative planning and postoperative CBCT scans were uploaded to dental implant software to analyze the coronal global, apical global, and angular deviations. Results were analyzed using linear regression models with repeated measures to assess the differences according to group, according to position, and the interaction between both variables. If statistically significant differences were detected, 2-to-2 comparisons were made between the groups/positions. Results The results did not show statistically significant differences between the coronal global deviations of GI (5.54 ± 1.72 mm), NI (5.43 ± 2.13 mm), ARI (5.64 ± 1.11 mm) and FHI (4.75 ± 1.58 mm). However, the results showed statistically significant differences between the apical global deviations of FHI (3.20 ± 1.45 mm) and NI (4.92 ± 1.89 mm) (p = 0.0078), FHI and GI (5.33 ± 2.14 mm) (p = 0.0005), and FHI and ARI (4.88 ± 1.54 mm) (p = 0.0132).
In addition, the results also showed statistically significant differences between the angular deviations of FHI (8.47° ± 4.40°) and NI (7.36° ± 4.12°) (p = 0.0086) and between GI (5.30° ± 2.80°) and ARI (9.60° ± 4.25°) (p = 0.0005). Conclusions The free-hand technique provided greater accuracy of zygomatic dental implant placement than the computer-assisted implant surgery techniques, and zygomatic dental implants placed in the anterior region were more accurate than those in the posterior region. However, this was an in vitro study, and further clinical studies must be conducted to extrapolate the results to the clinical setting. Key words: Implantology, computer assisted implant surgery, image-guided surgery, augmented reality, navigation surgery, zygomatic implants.
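Angular deviation between a planned and a placed implant axis, one of the three reported metrics, reduces to the angle between two direction vectors. The sketch below illustrates only that geometric step; the software's coronal and apical global deviation computations are not reproduced.

```python
import math

def angular_deviation_deg(planned_axis, placed_axis):
    """Angle in degrees between the planned and the placed implant axes."""
    dot = sum(p * q for p, q in zip(planned_axis, placed_axis))
    norm = (math.sqrt(sum(p * p for p in planned_axis))
            * math.sqrt(sum(q * q for q in placed_axis)))
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# A placed implant tilted 5 degrees away from a vertically planned axis.
placed = (0.0, math.sin(math.radians(5)), math.cos(math.radians(5)))
dev = angular_deviation_deg((0.0, 0.0, 1.0), placed)
```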
Affiliation(s)
- Juan-Ramón González-Rueda
- Department of Implant Surgery, Faculty of Health Sciences, Alfonso X el Sabio University, 28691 Madrid, Spain
- Agustín Galparsoro-Catalán
- Department of Implant Surgery, Faculty of Health Sciences, Alfonso X el Sabio University, 28691 Madrid, Spain
- Elena Riad-Deglow
- Department of Implant Surgery, Faculty of Health Sciences, Alfonso X el Sabio University, 28691 Madrid, Spain
- Álvaro Zubizarreta-Macho
- Department of Implant Surgery, Faculty of Health Sciences, Alfonso X el Sabio University, 28691 Madrid, Spain
- Department of Surgery, Faculty of Medicine, University of Salamanca, 37008 Salamanca, Spain
- Jesús Pato-Mourelo
- Department of Surgery, Faculty of Dentistry, University of Navarra, 31009 Pamplona (Navarra), Spain
- Sofía Hernández-Montero
- Department of Implant Surgery, Faculty of Health Sciences, Alfonso X el Sabio University, 28691 Madrid, Spain
- Javier Montero-Martín
- Department of Surgery, Faculty of Medicine, University of Salamanca, 37008 Salamanca, Spain
8
Zaccardi S, Frantz T, Beckwée D, Swinnen E, Jansen B. On-Device Execution of Deep Learning Models on HoloLens2 for Real-Time Augmented Reality Medical Applications. Sensors (Basel, Switzerland) 2023; 23:8698. PMID: 37960398; PMCID: PMC10648161; DOI: 10.3390/s23218698.
Abstract
The integration of Deep Learning (DL) models with the HoloLens2 Augmented Reality (AR) headset has enormous potential for real-time AR medical applications. Currently, most applications execute the models on an external server that communicates with the headset via Wi-Fi. This client-server architecture introduces undesirable delays and lacks reliability for real-time applications. However, due to HoloLens2's limited computation capabilities, running the DL model directly on the device and achieving real-time performance is not trivial. Therefore, this study has two primary objectives: (i) to systematically evaluate two popular frameworks for executing DL models on HoloLens2, Unity Barracuda and Windows Machine Learning (WinML), using the inference time as the primary evaluation metric; (ii) to provide benchmark values for state-of-the-art DL models that can be integrated into different medical applications (e.g., Yolo and Unet models). In this study, we executed DL models with various complexities and analyzed inference times ranging from a few milliseconds to seconds. Our results show that Unity Barracuda is significantly faster than WinML (p-value < 0.005). With our findings, we sought to provide practical guidance and reference values for future studies aiming to develop single, portable AR systems for real-time medical assistance.
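A generic warm-up-then-measure inference benchmark, of the kind such comparisons rely on, might look like the following. The stand-in workload, warm-up count, and use of the median are illustrative assumptions, not the paper's actual protocol.

```python
import time
import statistics

def benchmark_inference(run_inference, warmup=5, iters=50):
    """Median inference time in milliseconds for a callable wrapping a model.

    Warm-up runs are excluded so one-off graph compilation or memory
    allocation does not skew the timing.
    """
    for _ in range(warmup):
        run_inference()
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference()
        times.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(times)

# Stand-in workload in place of a DL model's forward pass.
latency_ms = benchmark_inference(lambda: sum(i * i for i in range(10000)))
```

On-device, `run_inference` would wrap a Barracuda or WinML model call; the median is less sensitive than the mean to occasional scheduling spikes.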
Affiliation(s)
- Silvia Zaccardi
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, 1050 Brussel, Belgium; (T.F.); (B.J.)
- Rehabilitation Research Group (RERE), Vrije Universiteit Brussel, 1090 Brussel, Belgium; (D.B.); (E.S.)
- IMEC, 3001 Leuven, Belgium
- Taylor Frantz
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, 1050 Brussel, Belgium; (T.F.); (B.J.)
- IMEC, 3001 Leuven, Belgium
- David Beckwée
- Rehabilitation Research Group (RERE), Vrije Universiteit Brussel, 1090 Brussel, Belgium; (D.B.); (E.S.)
- Eva Swinnen
- Rehabilitation Research Group (RERE), Vrije Universiteit Brussel, 1090 Brussel, Belgium; (D.B.); (E.S.)
- Bart Jansen
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, 1050 Brussel, Belgium; (T.F.); (B.J.)
- IMEC, 3001 Leuven, Belgium
9
Xu Z, Yang Y, Zhu Y, Fan J. Mixed reality drills of indoor earthquake safety considering seismic damage of nonstructural components. Sci Rep 2023; 13:16461. PMID: 37777548; PMCID: PMC10543390; DOI: 10.1038/s41598-023-43533-9.
Abstract
Damaged indoor nonstructural components often cause casualties during earthquakes. To improve the indoor earthquake safety capacity of occupants, a mixed reality (MR) drill method for indoor earthquake safety considering seismic damage of nonstructural components is proposed. First, an MR device, HoloLens, is used to capture indoor point clouds, and the indoor three-dimensional scene is reconstructed from the point clouds. Subsequently, seismic motion models of the indoor components are established, so that the indoor nonstructural seismic damage scene can be constructed using a physics engine and displayed on HoloLens. Finally, a guidance algorithm toward a safe zone was designed for the drills. Taking a typical office as an example, an indoor earthquake safety drill was performed. The drill results show that the proposed MR method can increase the average efficiency of moving to a safe zone by 43.1%. Therefore, the outcome of this study can effectively improve the earthquake safety ability of occupants, thereby reducing casualties.
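The abstract does not specify how the safe-zone guidance algorithm works; one minimal stand-in is a breadth-first search over an occupancy grid of the damaged room, finding the shortest route around fallen components.

```python
from collections import deque

def steps_to_safe_zone(grid, start):
    """Breadth-first search on an occupancy grid: 0 = free, 1 = blocked by
    fallen components, 2 = safe zone. Returns the minimum number of moves
    from `start` to any safe cell, or -1 if no safe cell is reachable.
    """
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if grid[r][c] == 2:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != 1 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

# A tiny office: a toppled shelf (1s) blocks the direct route to the safe
# zone (2) in the corner.
office = [
    [0, 1, 2],
    [0, 1, 0],
    [0, 0, 0],
]
```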
Affiliation(s)
- Zhen Xu
- Research Institute of Urbanization and Urban Safety, School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Yajun Yang
- Research Institute of Urbanization and Urban Safety, School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Yian Zhu
- Research Institute of Urbanization and Urban Safety, School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Jingjing Fan
- Research Institute of Urbanization and Urban Safety, School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing, 100083, China
10
Zheng C, Jarecki A, Lee K. Integrated system architecture with mixed-reality user interface for virtual-physical hybrid swarm simulations. Sci Rep 2023; 13:14761. PMID: 37679356; PMCID: PMC10485072; DOI: 10.1038/s41598-023-40623-6.
Abstract
This paper introduces a hybrid robotic swarm system architecture that combines virtual and physical components and enables human-swarm interaction through mixed reality (MR) devices. The system comprises three main modules: (1) the virtual module, which simulates robotic agents, (2) the physical module, consisting of real robotic agents, and (3) the user interface (UI) module. To facilitate communication between the modules, the UI module connects with the virtual module using Photon Network and with the physical module through the Robot Operating System (ROS) bridge. Additionally, the virtual and physical modules communicate via the ROS bridge. The virtual and physical agents form a hybrid swarm by integrating these three modules. The human-swarm interface based on MR technology enables one or multiple human users to interact with the swarm in various ways. Users can create and assign tasks, monitor real-time swarm status and activities, or control and interact with specific robotic agents. To validate the system-level integration and embedded swarm functions, two experimental demonstrations were conducted: (a) two users playing planner and observer roles, assigning five tasks for the swarm to allocate the tasks autonomously and execute them, and (b) a single user interacting with the hybrid swarm consisting of two physical agents and 170 virtual agents by creating and assigning a task list and then controlling one of the physical robots to complete a target identification mission.
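The abstract does not detail how the swarm allocates the assigned tasks among its agents. Purely as a hypothetical stand-in, a nearest-free-agent greedy allocation over 2D positions could look like this; the agent names, positions, and task list are invented for the example.

```python
def allocate_tasks(agents, tasks):
    """Greedy allocation: each task goes to the nearest currently free agent.

    agents: name -> (x, y) position; tasks: task name -> (x, y) location.
    Returns task name -> agent name. A simple illustration of swarm task
    allocation, not the paper's actual scheme.
    """
    free = dict(agents)
    assignment = {}
    for task, pos in tasks.items():
        if not free:
            break  # more tasks than agents: leftover tasks stay unassigned
        best = min(free, key=lambda a: (free[a][0] - pos[0]) ** 2
                                       + (free[a][1] - pos[1]) ** 2)
        assignment[task] = best
        del free[best]
    return assignment

agents = {"r1": (0, 0), "r2": (10, 0)}
tasks = {"t1": (9, 1), "t2": (1, 1)}
plan = allocate_tasks(agents, tasks)
```

Real swarms typically use auction- or consensus-based allocation so the decision is distributed rather than computed centrally as here.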
Affiliation(s)
- Chuanqi Zheng
- Mechanical Engineering, Texas A&M University, College Station, 77845, TX, USA
- Annalisa Jarecki
- Mechanical Engineering, Texas A&M University, College Station, 77845, TX, USA
- Kiju Lee
- Mechanical Engineering, Texas A&M University, College Station, 77845, TX, USA
- Engineering Technology & Industrial Distribution, Texas A&M University, College Station, 77845, TX, USA
11
Shabir D, Anjum A, Hamza H, Padhan J, Al-Ansari A, Yaacoub E, Mohammed A, Navkar NV. Development and Evaluation of a Mixed-Reality Tele-ultrasound System. Ultrasound in Medicine & Biology 2023; 49:1867-1874. PMID: 37263893; DOI: 10.1016/j.ultrasmedbio.2023.04.017.
Abstract
OBJECTIVE The objective of this feasibility study was to develop and assess a tele-ultrasound system that would enable an expert sonographer (situated at the remote site) to provide real-time guidance to an operator (situated at the imaging site) using a mixed-reality environment. METHODS An architecture and operational workflow for the system were designed, and a prototype was developed that enables guidance in the form of audiovisual cues. The visual cues comprise holograms of the ultrasound images and the ultrasound probe, rendered to the operator on a head-mounted display device. The position and orientation of the ultrasound probe's hologram are remotely controlled by the expert sonographer and guide the placement of the physical ultrasound probe at the imaging site. The developed prototype was evaluated for its performance on a network. In addition, a user study (with 12 participants) was conducted to assess the operator's ability to align the probe under different guidance modes. RESULTS The network performance revealed that the view of the imaging site and the ultrasound images were transferred to the remote site in 233 ± 42 and 158 ± 38 ms, respectively. The expert sonographer was able to transfer data on the position and orientation of the ultrasound probe's hologram to the imaging site in 78 ± 13 ms. The user study indicated that the audiovisual cues are sufficient for an operator to position and orient a physical probe for accurate depiction of the targeted tissue (p < 0.001). The translational and rotational probe-placement errors were 1.4 ± 0.6 mm and 5.4 ± 2.2°, respectively. CONCLUSION The work illustrates the feasibility of using a mixed-reality environment for effective communication between an expert sonographer (ultrasound physician) and an operator. Further studies are required to determine its applicability in a clinical setting during tele-ultrasound.
Affiliation(s)
- Dehlela Shabir
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Arshak Anjum
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Hawa Hamza
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Elias Yaacoub
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Amr Mohammed
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Nikhil V Navkar
- Department of Surgery, Hamad Medical Corporation, Doha, Qatar
12
Engström J, Jevinger Å, Olsson CM, Persson JA. Some Design Considerations in Passive Indoor Positioning Systems. Sensors (Basel) 2023; 23:5684. [PMID: 37420850 PMCID: PMC10301307 DOI: 10.3390/s23125684]
Abstract
User location is becoming an increasingly common and important feature for a wide range of services. Smartphone owners increasingly use location-based services as providers add context-enhanced functionality such as car-driving routes, COVID-19 tracking, crowdedness indicators, and suggestions for nearby points of interest. However, positioning a user indoors remains problematic due to fading of the radio signal caused by multipath and shadowing, both of which depend in complex ways on the indoor environment. Location fingerprinting is a common positioning method in which Received Signal Strength (RSS) measurements are compared against a reference database of previously stored RSS values. Because of their size, these reference databases are often stored in the cloud; however, server-side positioning computations make preserving the user's privacy problematic. Under the assumption that a user does not want to communicate his or her location, we pose the question of whether a passive system with client-side computations can substitute for fingerprinting-based systems, which commonly communicate actively with a server. We compare two passive indoor location systems, based on multilateration and on sensor fusion with an Unscented Kalman Filter (UKF), against fingerprinting, and show how they can provide accurate indoor positioning in a busy office environment without compromising the user's privacy.
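The multilateration approach this abstract names can be illustrated with a minimal client-side sketch (not the authors' implementation; the anchor layout, the exact ranges, and the linearized least-squares formulation are illustrative assumptions):

```python
import math

def multilaterate(anchors, dists):
    """Estimate a 2D position from >= 3 anchor positions and measured ranges.

    Subtracting the first range equation from the others linearizes the
    problem, which is then solved via the 2x2 normal equations.
    """
    (x1, y1), d1 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2.0 * (xi - x1), 2.0 * (yi - y1)))
        rhs.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    # Accumulate A^T A p = A^T b for the 2-parameter state p = (x, y).
    s11 = sum(a * a for a, _ in rows)
    s12 = sum(a * b for a, b in rows)
    s22 = sum(b * b for _, b in rows)
    t1 = sum(a * r for (a, _), r in zip(rows, rhs))
    t2 = sum(b * r for (_, b), r in zip(rows, rhs))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# Demo: three anchors, ranges measured to a hypothetical true position (2, 1).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (2.0, 1.0)
dists = [math.dist(a, true_pos) for a in anchors]
x, y = multilaterate(anchors, dists)
```

With noise-free ranges the linearized solution recovers the true position exactly; with RSS-derived ranges the least-squares formulation averages out some of the ranging error.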
Affiliation(s)
- Jimmy Engström
- Sony Europe B.V., 223 62 Lund, Sweden
- Internet of Things and People Research Center, Department of Computer Science and Media Technology, Malmö University, 205 06 Malmö, Sweden
- Åse Jevinger
- Internet of Things and People Research Center, Department of Computer Science and Media Technology, Malmö University, 205 06 Malmö, Sweden
- Carl Magnus Olsson
- Internet of Things and People Research Center, Department of Computer Science and Media Technology, Malmö University, 205 06 Malmö, Sweden
- Jan A. Persson
- Internet of Things and People Research Center, Department of Computer Science and Media Technology, Malmö University, 205 06 Malmö, Sweden
13
Gu W, Knopf J, Cast J, Higgins LD, Knopf D, Unberath M. Nail it! Vision-based drift correction for accurate mixed reality surgical guidance. Int J Comput Assist Radiol Surg 2023. [PMID: 37231201 DOI: 10.1007/s11548-023-02950-x]
Abstract
PURPOSE Mixed reality-guided surgery through head-mounted displays (HMDs) is gaining interest among surgeons. However, precise tracking of the HMD relative to the surgical environment is crucial for successful outcomes. Without fiducial markers, spatial tracking of the HMD suffers from millimeter- to centimeter-scale drift, resulting in misaligned visualization of registered overlays. Methods and workflows capable of automatically correcting for drift after patient registration are essential to ensuring accurate execution of surgical plans. METHODS We present a mixed reality surgical navigation workflow that continuously corrects for drift after patient registration using only image-based methods. We demonstrate its feasibility and capabilities using the Microsoft HoloLens on glenoid pin placement in total shoulder arthroplasty. A phantom study was conducted in which each of five users placed pins on six glenoids of different deformity, followed by a cadaver study by an attending surgeon. RESULTS In both studies, all users were satisfied with the registration overlay before drilling the pin. Postoperative CT scans showed, on average, 1.5 mm entry-point deviation and 2.4° pin-orientation error in the phantom study, and 2.5 mm and 1.5° in the cadaver study. A trained user takes around 90 s to complete the workflow. Our method also outperformed the HoloLens's native tracking in drift correction. CONCLUSION Our findings suggest that image-based drift correction can keep mixed reality environments precisely aligned with patient anatomy, enabling pin placement with consistently high accuracy. These techniques constitute a next step toward purely image-based mixed reality surgical guidance, without requiring patient markers or external tracking hardware.
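The paper's drift correction is specific to its HoloLens imaging pipeline, but the underlying re-registration step, estimating a corrective rigid transform from matched landmarks, can be sketched in 2D. This is a hypothetical illustration under assumed landmark coordinates, using a closed-form 2D Procrustes solution rather than the authors' method:

```python
import math

def rigid_correction_2d(drifted, reference):
    """Closed-form rotation + translation mapping drifted landmark
    observations back onto their reference poses (2D Procrustes)."""
    n = len(drifted)
    cxp = sum(x for x, _ in drifted) / n
    cyp = sum(y for _, y in drifted) / n
    cxq = sum(x for x, _ in reference) / n
    cyq = sum(y for _, y in reference) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(drifted, reference):
        px, py, qx, qy = px - cxp, py - cyp, qx - cxq, qy - cyq
        s_cos += px * qx + py * qy   # dot product: cosine accumulator
        s_sin += px * qy - py * qx   # cross product: sine accumulator
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cxq - (c * cxp - s * cyp)
    ty = cyq - (s * cxp + c * cyp)
    return theta, tx, ty

def apply_transform(theta, tx, ty, p):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

# Demo: landmarks observed under a known drift of 0.1 rad and (0.5, -0.3);
# the estimator recovers the corrective transform exactly for noise-free data.
drifted = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
reference = [apply_transform(0.1, 0.5, -0.3, p) for p in drifted]
theta, tx, ty = rigid_correction_2d(drifted, reference)
```

In 3D the same least-squares alignment is typically solved with an SVD (Kabsch algorithm); the 2D closed form above keeps the sketch dependency-free.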
Affiliation(s)
- Wenhao Gu
- Johns Hopkins University, Baltimore, MD, USA
- John Cast
- Johns Hopkins University, Baltimore, MD, USA
- David Knopf
- Arthrex Inc., 1 Arthrex Way, Naples, FL, USA
14
Alfakhori M, Sardi Barzallo JS, Coors V. Occlusion Handling for Mobile AR Applications in Indoor and Outdoor Scenarios. Sensors (Basel) 2023; 23:4245. [PMID: 37177449 PMCID: PMC10180934 DOI: 10.3390/s23094245]
Abstract
When producing an engaging augmented reality (AR) user experience, it is crucial to create AR content that mimics the behavior of real-life objects as closely as possible. A critical aspect of achieving this is ensuring that digital objects obey line-of-sight rules and are partially or completely occluded, just as real-world objects would be. This study explores the use of a pre-existing 3D representation of the physical environment as an occlusion mask that governs the rendering of each pixel. Specifically, the research aligns a Level of Detail (LOD) 1 building model and a 3D mesh model with their real-world counterparts and evaluates the effectiveness of occlusion with each model in an outdoor setting. Despite the mesh model containing more detailed information, the overall results do not show an improvement. In an indoor scenario, the researchers leverage the scanning capability of the HoloLens 2 to create a pre-scanned representation, which helps overcome the limited range and delay of on-the-fly mesh reconstruction.
Affiliation(s)
- Muhammad Alfakhori
- Centre for Geodesy and Geoinformatics, Stuttgart University of Applied Sciences (HFT Stuttgart), 70174 Stuttgart, Germany
- Juan Sebastián Sardi Barzallo
- Centre for Geodesy and Geoinformatics, Stuttgart University of Applied Sciences (HFT Stuttgart), 70174 Stuttgart, Germany
- Volker Coors
- Centre for Geodesy and Geoinformatics, Stuttgart University of Applied Sciences (HFT Stuttgart), 70174 Stuttgart, Germany
15
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637 DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main driver of the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy covering use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, for whom AR-enhanced medical simulators are emerging as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature and pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
16
An HBIM Methodology for the Accurate and Georeferenced Reconstruction of Urban Contexts Surveyed by UAV: The Case of the Castle of Charles V. Remote Sens 2022. [DOI: 10.3390/rs14153688]
Abstract
The potential of using a UAV survey as the basis for generating a context mesh is illustrated through experiments on the case study, the Crotone fortress (Castle of Charles V). We propose a systematic general methodology and two procedural workflows for importing the triangulated model, with its real geographical coordinates preserved, into the Autodesk Revit environment through a Dynamo visual programming (VPL) script. First, texturing the mesh of the urban context was tested, using the real-sized photogrammetric orthoimage as a Revit material; then, the reproduction of discretised detailed areas of the urban context was tested. These were imported via Dynamo by reading the coordinates of the vertices of every face of the triangulated model and associating with each of them the corresponding real colorimetric data. Starting from the georeferenced context of the photogrammetric mesh, nine federated BIM models were produced: the general context models, the detailed models and the architectural model of the fortress.
17
Benchmarking Built-In Tracking Systems for Indoor AR Applications on Popular Mobile Devices. Sensors (Basel) 2022; 22:5382. [PMID: 35891058 PMCID: PMC9320911 DOI: 10.3390/s22145382]
Abstract
As one of the most promising technologies for next-generation mobile platforms, Augmented Reality (AR) has the potential to radically change the way users interact with real environments enriched with various digital information. To achieve this potential, it is of fundamental importance to track and maintain accurate registration between real and computer-generated objects, so assessing tracking capabilities is crucial. In this paper, we present a benchmark evaluation of the tracking performance of some of the most popular AR handheld devices, which can be regarded as a representative set of the devices for sale in the global market. In particular, eight different next-generation devices, including smartphones and tablets, were considered. Experiments were conducted in a laboratory using an external tracking system, following a methodology of three main stages: calibration, data acquisition, and data evaluation. The results showed that the selected devices, in combination with their AR SDKs, have different tracking performance depending on the covered trajectory.
18
von Haxthausen F, Moreta-Martinez R, Pose Díez de la Lastra A, Pascau J, Ernst F. UltrARsound: in situ visualization of live ultrasound images using HoloLens 2. Int J Comput Assist Radiol Surg 2022; 17:2081-2091. [PMID: 35776399 PMCID: PMC9515035 DOI: 10.1007/s11548-022-02695-z]
Abstract
Purpose Augmented Reality (AR) has the potential to simplify ultrasound (US) examinations, which usually require a skilled and experienced sonographer to mentally align narrow 2D cross-sectional US images with the 3D anatomy of the patient. This work describes and evaluates a novel approach that tracks retroreflective spheres attached to the US probe using an inside-out technique with the AR glasses HoloLens 2, so that live US images can be displayed in situ on the imaged anatomy. Methods The Unity application UltrARsound performs spatial tracking of the US probe and the attached retroreflective markers using the depth camera integrated into the AR glasses, eliminating the need for an external tracking system. A Kalman filter is implemented to improve the noisy measurements of the camera. US images are streamed wirelessly via the PLUS toolkit to HoloLens 2. The technical evaluation comprises static and dynamic tracking accuracy, and the frequency and latency of displayed images. Results Tracking achieved a median accuracy of 1.98 mm/1.81° in the static setting when using the Kalman filter. In the dynamic scenario, the median error was 2.81 mm/1.70°. The tracking frequency is currently limited to 20 Hz, and 83% of the displayed US images had a latency lower than 16 ms. Conclusions We showed that spatial tracking of retroreflective spheres with the depth camera of HoloLens 2 is feasible, achieving promising accuracy for in situ visualization of live US images. Tracking requires no additional hardware and no modifications to HoloLens 2, making it a cheap and easy-to-use approach. Moreover, the low latency of the displayed images enables real-time perception for the sonographer.
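The Kalman filtering named in the Methods can be sketched in its simplest scalar form, a random-walk model smoothing noisy measurements. This is an illustrative toy with made-up noise parameters, not the UltrARsound implementation, which filters 3D marker poses:

```python
class Kalman1D:
    """Scalar Kalman filter for a random-walk state:
    x_k = x_{k-1} + w (process noise q), z_k = x_k + v (measurement noise r)."""

    def __init__(self, x0, p0, q, r):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process / measurement noise variances

    def update(self, z):
        self.p += self.q                  # predict: uncertainty grows by q
        k = self.p / (self.p + self.r)    # Kalman gain
        self.x += k * (z - self.x)        # correct with the innovation
        self.p *= 1.0 - k                 # posterior variance shrinks
        return self.x

# Demo: smooth hypothetical noisy depth-camera readings of a marker near 5.0.
kf = Kalman1D(x0=0.0, p0=1.0, q=1e-3, r=0.25)
estimates = [kf.update(z) for z in [5.3, 4.8, 5.1, 4.9, 5.2, 5.0]]
```

A small process variance q relative to the measurement variance r yields heavier smoothing at the cost of slower response to genuine probe motion; tuning that trade-off is the practical difficulty in tracking applications.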
Affiliation(s)
- Felix von Haxthausen
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain; Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Schleswig-Holstein, Germany
- Rafael Moreta-Martinez
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain
- Alicia Pose Díez de la Lastra
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain
- Javier Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Avda. Universidad 30, 28911, Leganés, Spain
- Floris Ernst
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Schleswig-Holstein, Germany
19
Pose-Díez-de-la-Lastra A, Moreta-Martinez R, García-Sevilla M, García-Mato D, Calvo-Haro JA, Mediavilla-Santos L, Pérez-Mañanes R, von Haxthausen F, Pascau J. HoloLens 1 vs. HoloLens 2: Improvements in the New Model for Orthopedic Oncological Interventions. Sensors (Basel) 2022; 22:4915. [PMID: 35808407 PMCID: PMC9269857 DOI: 10.3390/s22134915]
Abstract
This work analyzes the use of Microsoft HoloLens 2 in orthopedic oncological surgeries and compares it to its predecessor (Microsoft HoloLens 1). Specifically, we developed two equivalent applications, one for each device, and evaluated the augmented reality (AR) projection accuracy in an experimental scenario using phantoms based on two patients. We achieved automatic registration between the virtual and real worlds using patient-specific surgical guides on each phantom. The guides contained a small adaptor for a 3D-printed AR marker whose characteristic patterns were easily recognized by both devices. The newer model improved the AR projection accuracy by almost 25%, and both yielded an RMSE below 3 mm. After ascertaining the second model's improvement in this respect, we went a step further with Microsoft HoloLens 2 and tested it during the surgical intervention of one of the patients, collecting the surgeons' feedback on comfort, usability, and ergonomics. Our goal was to estimate whether the improved technical features of the newer model facilitate its implementation in actual surgical scenarios. All of the results point to Microsoft HoloLens 2 being better in all the aspects affecting surgical interventions and support its use in future experiences.
Affiliation(s)
- Alicia Pose-Díez-de-la-Lastra
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Rafael Moreta-Martinez
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Mónica García-Sevilla
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- David García-Mato
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- José Antonio Calvo-Haro
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Servicio de Cirugía Ortopédica y Traumatología, Hospital General Universitario Gregorio Marañón, 28007 Madrid, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Lydia Mediavilla-Santos
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Servicio de Cirugía Ortopédica y Traumatología, Hospital General Universitario Gregorio Marañón, 28007 Madrid, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Rubén Pérez-Mañanes
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Servicio de Cirugía Ortopédica y Traumatología, Hospital General Universitario Gregorio Marañón, 28007 Madrid, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Felix von Haxthausen
- Institute for Robotics and Cognitive Systems, University of Lübeck, 23562 Lübeck, Germany
- Javier Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
- Correspondence: Tel.: +34-91-624-8196
20
Evans E, Dass M, Muter WM, Tuthill C, Tan AQ, Trumbower RD. A Wearable Mixed Reality Platform to Augment Overground Walking: A Feasibility Study. Front Hum Neurosci 2022; 16:868074. [PMID: 35754777 PMCID: PMC9218429 DOI: 10.3389/fnhum.2022.868074]
Abstract
Humans routinely modify their walking speed to adapt to functional goals and physical demands. However, damage to the central nervous system (CNS) often results in abnormal modulation of walking speed and increased risk of falls. There is considerable interest in treatment modalities that can provide safe and salient training opportunities and feedback about walking performance, and that may augment less reliable sensory feedback within the CNS after injury or disease. Fully immersive virtual reality technologies show benefits in boosting training-related gains in walking performance; however, they lack views of the real world, which may limit functional carryover. Augmented reality and mixed reality head-mounted displays (MR-HMDs) provide partially immersive environments that extend the virtual reality benefits of interacting with virtual objects to an unobstructed view of the real world. Despite this potential advantage, the feasibility of using MR-HMD visual feedback to promote goal-directed changes in overground walking speed remains unclear. Thus, we developed and evaluated a novel mixed reality application for the Microsoft HoloLens MR-HMD that provided real-time walking speed targets and augmented visual feedback during overground walking. We tested the application in a group of adults not living with disability and examined whether they could use the targets and visual feedback to walk at 85%, 100%, and 115% of each individual's self-selected speed. We examined whether individuals were able to meet each target gait speed and explored differences in accuracy across repeated trials and at the different speeds. Additionally, given the importance of task-specificity to therapeutic interventions, we examined whether walking speed adjustment strategies were consistent with those observed during usual overground walking, and whether walking with the MR-HMD increased variability in gait parameters. Overall, participants matched their overground walking speed to the target speed of the MR-HMD visual feedback conditions (all p-values > 0.05). The percent inaccuracy was approximately 5% across all speed-matching conditions and remained consistent across walking trials after the first overall trial. Walking with the MR-HMD did not result in more variability in walking speed; however, we observed more variability in stride length and stride time when walking with MR-HMD feedback compared to walking without it. The findings offer support for mixed reality-based visual feedback as a method to provoke goal-specific changes in overground walking behavior. Further studies are necessary to determine the clinical safety and efficacy of this MR-HMD technology for providing extrinsic sensory feedback in combination with traditional treatments in rehabilitation.
Affiliation(s)
- Emily Evans
- Spaulding Rehabilitation Hospital, Cambridge, MA, United States; Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, United States
- Megan Dass
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA, United States
- William M Muter
- Spaulding Rehabilitation Hospital, Cambridge, MA, United States
- Christopher Tuthill
- Spaulding Rehabilitation Hospital, Cambridge, MA, United States; Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, United States
- Andrew Q Tan
- Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, United States; Department of Integrative Physiology, University of Colorado Boulder, Boulder, CO, United States
- Randy D Trumbower
- Spaulding Rehabilitation Hospital, Cambridge, MA, United States; Department of Physical Medicine and Rehabilitation, Harvard Medical School, Boston, MA, United States
21
Zhou Z, Yang Z, Jiang S, Zhuo J, Zhu T, Ma S. Augmented reality surgical navigation system based on the spatial drift compensation method for glioma resection surgery. Med Phys 2022; 49:3963-3979. [PMID: 35383964 DOI: 10.1002/mp.15650]
Abstract
BACKGROUND The number of patients who suffer from glioma has been increasing, and this malignancy is a serious threat to human health. The mainstream treatment for glioma is surgical resection; accurate resection can therefore improve postoperative patient recovery. PURPOSE Many studies have investigated surgical navigation guided by mixed reality, with good outcomes. However, the limitations of mixed reality, such as spatial drift caused by environmental changes, limit its clinical application. We therefore present a mixed reality surgical navigation system for glioma resection in which preoperative information is fused precisely with the real patient using a spatial drift compensation method, achieving clinically suitable accuracy. METHODS A head-mounted device was used to display the virtual information, and a markerless spatial registration method was applied to precisely align the virtual anatomy with the real patient preoperatively. High-accuracy preoperative and intraoperative movement and spatial drift compensation methods were used to increase the positional accuracy of the system while the patient's head is fixed to the bed frame. Several experiments were designed to validate the accuracy and efficacy of the system. RESULTS Phantom experiments were performed to test the efficacy and accuracy of the system under ideal conditions, and clinical tests were conducted to assess its performance in clinical application. The accuracy of spatial registration was 1.18 mm in the phantom experiments and 1.86 mm in the clinical application. CONCLUSIONS We present a mixed reality-based, multimodality-fused surgical navigation system that assists surgeons in intuitively identifying the glioma boundary intraoperatively. The experimental results indicate that the system has suitable accuracy and efficacy for clinical use.
Affiliation(s)
- Zeyang Zhou
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Zhiyong Yang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China; Centre for Advanced Mechanisms and Robotics, Tianjin University, Tianjin, 300350, China
- Jie Zhuo
- Department of Neurosurgery, Tianjin Huanhu Hospital, Tianjin, 300200, China
- Tao Zhu
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Shixing Ma
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
22
The Identification, Development, and Evaluation of BIM-ARDM: A BIM-Based AR Defect Management System for Construction Inspections. Buildings 2022. [DOI: 10.3390/buildings12020140]
Abstract
This article presents findings from a three-stage research project consisting of the identification, development, and evaluation of a defect management Augmented Reality (AR) prototype that incorporates Building Information Modelling (BIM) technologies. In the first stage, we conducted a workshop with four construction-industry representatives to capture their opinions on the potentials of, and barriers to, integrating BIM and AR in the construction industry. The workshop findings led to the second stage: the development of an on-site BIM-based AR defect management (BIM-ARDM) system for construction inspections. Finally, a study was conducted to evaluate BIM-ARDM against the current paper-based defect management inspection approach employed on construction sites. The study revealed that BIM-ARDM significantly outperformed the current approach in terms of usability, workload, performance, completion time, identifying defects, locating building elements, and assisting the user with the inspection task.
23
Point Cloud Validation: On the Impact of Laser Scanning Technologies on the Semantic Segmentation for BIM Modeling and Evaluation. Remote Sensing 2022. [DOI: 10.3390/rs14030582]
Abstract
Building Information models created from laser scanning inputs are becoming increasingly commonplace, but the automation of the modeling and evaluation is still a subject of ongoing research. Current advancements mainly target the data interpretation steps, i.e., instance and semantic segmentation, by developing advanced deep learning models. However, these steps are highly influenced by the characteristics of the laser scanning technologies themselves, which also affect the reconstruction and evaluation potential. In this work, the impact of different data acquisition techniques and technologies on these procedures is studied. More specifically, we quantify the capacity of static, trolley, backpack, and head-worn mapping solutions, and the impact of their semantic segmentation results on BIM modeling and analysis procedures. For the analysis, international standards and specifications are used wherever possible. From the experiments, the suitability of each platform is established, along with the pros and cons of each system. Overall, this work provides a much-needed update on point cloud validation for further fueling BIM automation.
24
Schmucker M, Haag M. Automated Size Recognition in Pediatric Emergencies Using Machine Learning and Augmented Reality: Within-Group Comparative Study. JMIR Form Res 2021; 5:e28345. [PMID: 34542416 PMCID: PMC8491115 DOI: 10.2196/28345]
Abstract
Background Pediatric emergencies are rare events, so emergency physicians gain little experience with them, and outcomes are correspondingly poor. Anatomical peculiarities and individual adjustments make treatment during pediatric emergencies susceptible to error, and critical mistakes occur especially in the calculation of weight-based drug doses. Accordingly, there is a strong need for a ubiquitous assistance service that can, for example, automate dose calculation; however, few approaches exist owing to the complexity of the problem. Objective Technically, such an assistance service is possible, among other approaches, with an app that uses a depth camera integrated into smartphones or head-mounted displays to provide a 3D understanding of the environment. The goal of this study was to automate this technology as much as possible in order to develop and statistically evaluate an assistance service whose measurement performance is not significantly worse than that of an emergency ruler (the state of the art). Methods An assistance service was developed that uses machine learning to recognize patients and then automatically determine their size. Based on the size, the weight is automatically derived, and the dosages are calculated and presented to the physician. To evaluate the app, a small within-group study was conducted with 17 children, each measured both with the app installed on a smartphone with a built-in depth camera and with a state-of-the-art emergency ruler. Results According to the statistical results (one-sample t test; P=.42; α=.05), there is no significant difference between the measurement performance of the app and the emergency ruler under the test conditions (indoor, daylight). The newly developed measurement method is thus not technically inferior to the established one in terms of accuracy.
Conclusions An assistance service with an integrated augmented reality emergency ruler is technically possible, although some groundwork is still needed. The results of this study clear the way for further research, for example, usability testing.
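As an illustration of the statistical test used in the abstract above, a one-sample t test on per-child differences between the two methods can be sketched as follows. All measurement values here are simulated, not the study's data; the critical value 2.120 is the standard two-sided 5% cutoff for 16 degrees of freedom.

```python
# Illustrative sketch of the within-group analysis: each child is measured
# once with the app and once with the emergency ruler, and a one-sample
# t test on the per-child differences checks for a systematic bias.
# All measurement values below are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
ruler_cm = rng.normal(loc=110.0, scale=15.0, size=17)        # reference heights
app_cm = ruler_cm + rng.normal(loc=0.0, scale=1.5, size=17)  # app measurements

diff = app_cm - ruler_cm
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(diff.size))

# Two-sided critical value for alpha = 0.05 and df = 16 is about 2.120;
# |t| below that threshold means no significant difference is detected.
significant = abs(t_stat) > 2.120
print(round(float(t_stat), 3), significant)
```

In practice, `scipy.stats.ttest_1samp(diff, 0.0)` returns the exact two-sided p-value directly.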
Affiliation(s)
- Michael Schmucker
- GECKO Institute, Heilbronn University of Applied Sciences, Heilbronn, Germany
- Martin Haag
- GECKO Institute, Heilbronn University of Applied Sciences, Heilbronn, Germany
25
Review of Microsoft HoloLens Applications over the Past Five Years. Applied Sciences 2021. [DOI: 10.3390/app11167259]
Abstract
Since the Microsoft HoloLens first appeared in 2016, it has been used in a variety of industries. This study reviews academic papers on applications of the HoloLens across those industries. A review was performed to summarize the results of 44 papers (dated between January 2016 and December 2020) and to outline the research trends in applying the HoloLens to different fields. This study determined that the HoloLens is employed in medical and surgical aids and systems, medical education and simulation, industrial engineering, architecture, civil engineering, and other engineering fields. The findings contribute towards classifying the current uses of the HoloLens in various industries and identifying the types of visualization techniques and functions employed.
26
Condino S, Cutolo F, Cattari N, Colangeli S, Parchi PD, Piazza R, Ruinato AD, Capanna R, Ferrari V. Hybrid Simulation and Planning Platform for Cryosurgery with Microsoft HoloLens. Sensors 2021; 21:4450. [PMID: 34209748 PMCID: PMC8272062 DOI: 10.3390/s21134450]
Abstract
Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advancement of devices, along with improvements in surgical technique, has turned cryosurgery from an experimental option into an established one for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient’s preoperative images. Several works have aimed at modelling cryoablation through heat transfer simulations; however, most software applications do not meet key requirements for routine clinical use, such as high computational speed and user-friendliness. This work aims to develop an intuitive platform for anatomical understanding and pre-operative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator, which exploits the potential of 3D printing and the augmented reality functionalities of Microsoft HoloLens. The proposed platform was preliminarily validated for the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, to make surgical planning easier than with the traditional method, and to strengthen the memorization of the surgical plan.
Affiliation(s)
- Sara Condino
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Fabrizio Cutolo
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
- Nadia Cattari
- EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Simone Colangeli
- Orthopaedic and Traumatology Division, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, 56124 Pisa, Italy
- Paolo Domenico Parchi
- Orthopaedic and Traumatology Division, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, 56124 Pisa, Italy
- Roberta Piazza
- EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Alfio Damiano Ruinato
- EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Rodolfo Capanna
- Orthopaedic and Traumatology Division, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, 56124 Pisa, Italy
- Vincenzo Ferrari
- Information Engineering Department, University of Pisa, 56126 Pisa, Italy
27
Hu X, Baena FRY, Cutolo F. Head-Mounted Augmented Reality Platform for Markerless Orthopaedic Navigation. IEEE J Biomed Health Inform 2021; 26:910-921. [PMID: 34115600 DOI: 10.1109/jbhi.2021.3088442]
Abstract
Visual augmented reality (AR) has the potential to improve the accuracy, efficiency, and reproducibility of computer-assisted orthopaedic surgery (CAOS). AR head-mounted displays (HMDs) further allow target observation without eye shifts and an egocentric view. Recently, a markerless tracking and registration (MTR) algorithm was proposed to avoid the artificial markers that are conventionally pinned into the target anatomy for tracking, as their use prolongs the surgical workflow, introduces human-induced errors, and necessitates additional surgical invasion of the patient. However, such an MTR-based method has neither been explored for surgical applications nor integrated into current AR HMDs, making ergonomic HMD-based markerless AR CAOS navigation hard to achieve. To these aims, we present a versatile, device-agnostic, and accurate HMD-based AR platform. Our software platform, supporting both video see-through (VST) and optical see-through (OST) modes, integrates two proposed fast calibration procedures using a specially designed calibration tool. According to the camera-based evaluation, our AR platform achieves a display error of 6.31 ± 2.55 arcmin for VST and 7.72 ± 3.73 arcmin for OST. A proof-of-concept markerless surgical navigation system to assist in femoral bone drilling was then developed based on the platform and Microsoft HoloLens 1. According to the user study, both the VST and OST markerless navigation systems are reliable, with the OST system providing the best usability. The measured navigation error is 4.90 ± 1.04 mm and 5.96° ± 2.22° for the VST system and 4.36 ± 0.80 mm and 5.65° ± 1.42° for the OST system.
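A display error expressed in arcminutes, as in the evaluation above, relates a positional overlay misalignment to the visual angle it subtends. A small hypothetical sketch of that conversion follows; the offset and viewing distance are invented values, not taken from the paper.

```python
# Hypothetical sketch: an AR display error can be expressed as the visual
# angle (in arcminutes) subtended by a positional overlay misalignment.
# The offset and viewing distance below are invented illustration values.
import math

def overlay_error_arcmin(offset_mm: float, distance_mm: float) -> float:
    """Visual angle subtended by an overlay offset seen from a given distance."""
    return math.degrees(math.atan2(offset_mm, distance_mm)) * 60.0

# A 1 mm misalignment viewed from 500 mm subtends roughly 6.9 arcmin.
print(round(overlay_error_arcmin(1.0, 500.0), 2))
```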
28
Gu W, Shah K, Knopf J, Navab N, Unberath M. Feasibility of image-based augmented reality guidance of total shoulder arthroplasty using Microsoft HoloLens 1. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2021. [DOI: 10.1080/21681163.2020.1835556]
Affiliation(s)
- Wenhao Gu
- Johns Hopkins University, Baltimore, USA
29
Meng FH, Zhu ZH, Lei ZH, Zhang XH, Shao L, Zhang HZ, Zhang T. Feasibility of the application of mixed reality in mandible reconstruction with fibula flap: A cadaveric specimen study. Journal of Stomatology, Oral and Maxillofacial Surgery 2021; 122:e45-e49. [PMID: 33434746 DOI: 10.1016/j.jormas.2021.01.005]
Abstract
BACKGROUND In recent years, a new technology, mixed reality (MR), has emerged that overcomes a key limitation of augmented reality (AR): the inability to interact with holograms. This study aimed to investigate the feasibility of applying MR in mandible reconstruction with a fibula flap. METHODS Computed tomography (CT) examination was performed on one cadaveric mandible and ten fibula bones. Using the professional software Proplan CMF 3.0 (Materialise, Leuven, Belgium), we created a defective mandibular model and simulated the reconstruction design with these 10 fibula bones. The surgical plans were transferred to the HoloLens, which we used to guide the osteotomy and shaping of the fibular bone. After the fibular segments were fixed using the Ti template, all segments underwent CT examination. The pre- and post-operative models were compared by measuring the deviations in the location of the fibular osteotomies, the angular orientation of the fibular segments, and the intergonial angle distances. RESULTS The mean deviations in the location of the fibular osteotomies, the angular orientation of the fibular segments, and the intergonial angle distance were 2.11 ± 1.31 mm, 2.85° ± 1.97°, and 7.24 ± 3.42 mm, respectively. CONCLUSION The experimental results revealed that slight deviations remained in the accuracy of the fibular osteotomy. With further development, the technology has the potential to improve the efficiency and precision of reconstructive surgery.
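As a hedged sketch of how an angular deviation between planned and actual fibular segments, as reported above, can be computed: the angle between the two segment direction vectors. The segment directions below are invented illustration values, not the study's measurements.

```python
# Hedged sketch: the angular deviation between a planned and an actual
# fibular segment can be computed as the angle between their direction
# vectors. The segment directions below are invented illustration values.
import numpy as np

def angular_deviation_deg(planned, actual):
    """Angle in degrees between two 3D direction vectors."""
    cos_theta = np.dot(planned, actual) / (np.linalg.norm(planned) * np.linalg.norm(actual))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

planned_axis = np.array([30.0, 0.0, 0.0])  # planned segment direction (mm)
actual_axis = np.array([30.0, 1.5, 0.0])   # slightly deviated placement
print(round(angular_deviation_deg(planned_axis, actual_axis), 2))  # ≈ 2.86 degrees
```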
Affiliation(s)
- F H Meng
- Chinese PLA General Hospital, Department of Oral and Maxillofacial Surgery, 100853, Beijing, China
- Z H Zhu
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
- Z H Lei
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
- X H Zhang
- Shenzhen Luohu Hospital Group Luohu People's Hospital, Department of Oral and Maxillofacial Surgery, 518020, Shenzhen, China
- L Shao
- Beijing Institute of Technology, Optoelectronic College, 100081, Beijing, China
- H Z Zhang
- Chinese PLA General Hospital, Department of Oral and Maxillofacial Surgery, 100853, Beijing, China
- T Zhang
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, 100730, Beijing, China
30
Manni F, Mamprin M, Holthuizen R, Shan C, Burström G, Elmi-Terander A, Edström E, Zinger S, de With PHN. Multi-view 3D skin feature recognition and localization for patient tracking in spinal surgery applications. Biomed Eng Online 2021; 20:6. [PMID: 33413426 PMCID: PMC7792004 DOI: 10.1186/s12938-020-00843-7]
Abstract
BACKGROUND Minimally invasive spine surgery is dependent on accurate navigation. Computer-assisted navigation is increasingly used in minimally invasive surgery (MIS), but current solutions require reference markers in the surgical field for both patient and instrument tracking. PURPOSE To improve reliability and facilitate the clinical workflow, this study proposes a new marker-free tracking framework based on skin feature recognition. METHODS The Maximally Stable Extremal Regions (MSER) and Speeded Up Robust Features (SURF) algorithms are applied for skin feature detection. The proposed tracking framework is based on a multi-camera setup for obtaining multi-view acquisitions of the surgical area. Features can then be accurately detected using MSER and SURF and afterwards localized by triangulation. The triangulation error is used for assessing the localization quality in 3D. RESULTS The framework was tested on a cadaver dataset and in eight clinical cases. The detected features across the entire patient dataset had an overall triangulation error of 0.207 mm for MSER and 0.204 mm for SURF. The localization accuracy was compared to a system with conventional markers, serving as ground truth; an average accuracy of 0.627 mm and 0.622 mm was achieved for MSER and SURF, respectively. CONCLUSIONS This study demonstrates that skin feature localization for patient tracking in a surgical setting is feasible. The technology shows promising results in terms of detected features and localization accuracy. In the future, the framework may be further improved by exploiting extended feature processing using modern optical imaging techniques for clinical applications where patient tracking is crucial.
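The localization step described above triangulates a feature matched in two camera views. As a hedged illustration of that geometry, a linear (DLT) triangulation sketch follows; the camera matrices and 3D point are synthetic stand-ins, not the paper's calibration.

```python
# Hedged illustration of the localization step: a feature matched in two
# camera views is triangulated by linear (DLT) triangulation, and the
# residual to ground truth gives the triangulation error. The camera
# matrices and 3D point are synthetic, not the paper's calibration.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear DLT triangulation of one point from two projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identity pose and a 100 mm baseline along x.
K = np.diag([800.0, 800.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([20.0, -10.0, 500.0])  # a feature half a metre from the cameras
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.linalg.norm(X_est - X_true))  # near zero in this noise-free case
```

With noisy detections, the same residual becomes the triangulation error the paper reports.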
Affiliation(s)
- Francesca Manni
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Marco Mamprin
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Caifeng Shan
- Shandong University of Science and Technology, Qingdao, China
- Gustav Burström
- Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Adrian Elmi-Terander
- Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Erik Edström
- Department of Neurosurgery, Karolinska University Hospital and Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Peter H N de With
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
31
BIM-Based Registration and Localization of 3D Point Clouds of Indoor Scenes Using Geometric Features for Augmented Reality. Remote Sensing 2020. [DOI: 10.3390/rs12142302]
Abstract
Augmented reality can improve construction and facility management by visualizing an as-planned model on its corresponding surface for fast, easy, and correct information retrieval. This requires the localization and registration of an as-built model within an as-planned model. However, localization and registration in indoor environments often fail, owing to self-similarity within the environment, relatively large as-planned models, and the presence of additional unplanned objects. Therefore, this paper proposes a computer vision-based method to (1) homogenize indoor as-planned and as-built models, (2) reduce the search space of model matching, and (3) localize the structure (e.g., a room) for registration of the scanned area in its as-planned model. The method extracts a representative horizontal cross section from the as-built and as-planned point clouds to make the models similar, restricts unnecessary transformations to reduce the search space, and matches line features to estimate the registration transformation matrix. The performance of the method, in terms of registration accuracy, is evaluated on as-built point clouds of rooms and a hallway on a building floor. A rotational error of 0.005 rad and a translational error of 0.088 m are observed in the experiments. Hence, geometric features described on a representative cross section, combined with transformation restrictions, can be a computationally cost-effective solution for indoor localization and registration.
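The registration step above estimates a transformation matrix from corresponding features. A minimal sketch of one standard way to do this, under simplifying assumptions, is the closed-form Kabsch/Procrustes solution on matched 2D cross-section points; the point sets and transform below are invented illustration values, and the paper itself matches line features rather than points.

```python
# Sketch of the registration step under simplifying assumptions: given matched
# 2D points on the as-built and as-planned cross sections, the rigid transform
# follows in closed form from the Kabsch/Procrustes method.
# Point sets below are invented illustration values.
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rotation and translation mapping src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # forbid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -1.2])

src = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])  # room corners (m)
dst = src @ R_true.T + t_true                                     # as-planned pose

R_est, t_est = estimate_rigid_2d(src, dst)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # exact correspondences recover the transform
```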
32
Manni F, Elmi-Terander A, Burström G, Persson O, Edström E, Holthuizen R, Shan C, Zinger S, van der Sommen F, de With PHN. Towards Optical Imaging for Spine Tracking without Markers in Navigated Spine Surgery. Sensors 2020; 20:3641. [PMID: 32610555 PMCID: PMC7374436 DOI: 10.3390/s20133641]
Abstract
Surgical navigation systems are increasingly used for complex spine procedures to avoid neurovascular injuries and minimize the risk of reoperations. Accurate patient tracking is one of the prerequisites for optimal motion compensation and navigation. Most current optical tracking systems use dynamic reference frames (DRFs) attached to the spine for patient movement tracking. However, the spine itself is subject to intrinsic movements, which can impact the accuracy of the navigation system. In this study, we aimed to detect actual patient spine features in different image views captured by the optical cameras of an augmented reality surgical navigation (ARSN) system. Using optical images from open spinal surgery cases, acquired by two gray-scale cameras, spinal landmarks were identified and matched in the different camera views. A computer vision framework was created for preprocessing the spine images and detecting and matching local invariant image regions. We compared four feature detection algorithms, Speeded Up Robust Features (SURF), Maximally Stable Extremal Regions (MSER), Features from Accelerated Segment Test (FAST), and Oriented FAST and Rotated BRIEF (ORB), to elucidate the best approach. The framework was validated in 23 patients, and the 3D triangulation error of the matched features was < 0.5 mm. Thus, the findings indicate that spine feature detection can be used for accurate tracking in navigated surgery.
Affiliation(s)
- Francesca Manni
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Adrian Elmi-Terander
- Department of Clinical Neuroscience, Karolinska Institutet, SE-171 46 Stockholm, Sweden, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Gustav Burström
- Department of Clinical Neuroscience, Karolinska Institutet, SE-171 46 Stockholm, Sweden, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Oscar Persson
- Department of Clinical Neuroscience, Karolinska Institutet, SE-171 46 Stockholm, Sweden, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Erik Edström
- Department of Clinical Neuroscience, Karolinska Institutet, SE-171 46 Stockholm, Sweden, and Department of Neurosurgery, Karolinska University Hospital, SE-171 46 Stockholm, Sweden
- Caifeng Shan
- Philips Research, High Tech Campus 36, 5656 AE Eindhoven, The Netherlands
- Svitlana Zinger
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- Peter H. N. de With
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
33
Quantifying Spatiotemporal Gait Parameters with HoloLens in Healthy Adults and People with Parkinson's Disease: Test-Retest Reliability, Concurrent Validity, and Face Validity. Sensors 2020; 20:3216. [PMID: 32517076 PMCID: PMC7313704 DOI: 10.3390/s20113216]
Abstract
Microsoft’s HoloLens, a mixed-reality headset, provides, in addition to holograms, rich position data of the head, which can be used to quantify what the wearer is doing (e.g., walking) and to parameterize such acts (e.g., speed). The aim of the current study is to determine the test-retest reliability, concurrent validity, and face validity of the HoloLens 1 for quantifying spatiotemporal gait parameters. This was done in a group of 23 healthy young adults (mean age 21 years) walking at slow, comfortable, and fast speeds, as well as in a group of 24 people with Parkinson’s disease (mean age 67 years) walking at comfortable speed. Walking was concurrently measured with the HoloLens 1 and a previously validated markerless reference motion-registration system. We comprehensively evaluated the HoloLens 1 for parameterizing walking (i.e., walking speed, step length, and cadence) in terms of test-retest reliability (i.e., consistency over repetitions) and concurrent validity (i.e., between-systems agreement), using the intraclass correlation coefficient (ICC) and Bland–Altman’s bias and limits of agreement. Test-retest reliability and between-systems agreement were excellent for walking speed (ICC ≥ 0.861), step length (ICC ≥ 0.884), and cadence (ICC ≥ 0.765), with narrower between-systems than over-repetitions limits of agreement. Face validity was demonstrated by significantly different walking speeds, step lengths, and cadences across walking-speed conditions. To conclude, walking speed, step length, and cadence can be reliably and validly quantified from the position data of the wearable HoloLens 1 measurement system, not only for a broad range of speeds in healthy young adults, but also for self-selected comfortable speed in people with Parkinson’s disease.
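As a rough illustration of how spatiotemporal gait parameters can be derived from a head-position stream such as the HoloLens provides: the synthetic walk, sampling rate, and zero-crossing step-detection rule below are assumptions for the sketch, not the study's algorithm.

```python
# Rough illustration (not the study's algorithm): spatiotemporal gait
# parameters derived from a head-position stream such as the HoloLens
# provides. The walk is synthetic; steps are counted as zero crossings
# of the lateral sway signal (one crossing per step).
import numpy as np

fs = 60.0                                  # assumed sampling rate (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)         # 10 s walk
forward = 1.2 * t                          # forward head position (m), 1.2 m/s
lateral = 0.02 * np.sin(np.pi * 1.8 * t)   # sway: one cycle per two steps

walking_speed = (forward[-1] - forward[0]) / (t[-1] - t[0])   # m/s

signs = np.signbit(lateral).astype(int)
steps = int(np.sum(np.diff(signs) != 0))   # each sway zero crossing = 1 step
cadence = steps / (t[-1] - t[0]) * 60.0    # steps per minute
step_length = walking_speed / (cadence / 60.0)                # m per step

print(round(walking_speed, 2), round(cadence, 1), round(step_length, 2))
```

Real head trajectories would need filtering and more robust step detection before such estimates are trustworthy.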