1
Lim CH, Cha MC, Lee SC. Physical loads on upper extremity muscles while interacting with virtual objects in an augmented reality context. Applied Ergonomics 2024; 120:104340. [PMID: 38964218] [DOI: 10.1016/j.apergo.2024.104340]
Abstract
Augmented reality (AR) environments are emerging as prominent user interfaces and attracting significant attention. However, the associated physical strain on users presents a considerable challenge. Against this background, this study explores the impact of movement distance (MD) and target-to-user distance (TTU) on physical load during drag-and-drop (DND) tasks in an AR environment. A user experiment was conducted using a 5 × 5 within-subject design with MD (16, 32, 48, 64, and 80 cm) and TTU (40, 80, 120, 160, and 200 cm) as variables. Physical load was assessed using normalized electromyography (NEMG) (%MVC) indicators of the upper extremity muscles and the physical demand item of the NASA Task Load Index (TLX). The results revealed significant variations in physical load with MD and TTU. Specifically, both NEMG and subjective physical workload values increased with increasing MD. Moreover, NEMG increased with decreasing TTU, whereas subjective physical workload scores increased with increasing TTU. Significant interaction effects of MD and TTU on NEMG were also observed. These findings suggest that considering MD and TTU when developing content for interacting with AR objects could help alleviate user load.
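The %MVC normalization used in this study is a standard EMG convention: each task-level amplitude is divided by the amplitude recorded during a maximal voluntary contraction. A minimal illustrative sketch (the function name and numeric values are hypothetical, not taken from the paper):

```python
def normalize_emg(task_rms_uv, mvc_rms_uv):
    """Express a task EMG amplitude (e.g., RMS in microvolts) as a
    percentage of the maximal voluntary contraction (MVC) amplitude."""
    if mvc_rms_uv <= 0:
        raise ValueError("MVC amplitude must be positive")
    return 100.0 * task_rms_uv / mvc_rms_uv

# Illustrative values: a task RMS of 45 uV against an MVC RMS of 300 uV
nemg = normalize_emg(45.0, 300.0)  # 15.0 %MVC
```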
Affiliation(s)
- Chae Heon Lim
- Department of Human Computer Interaction, Hanyang University ERICA, Ansan, Republic of Korea
- Min Chul Cha
- Division of Media and Communication, Hankuk University of Foreign Studies, Seoul, Republic of Korea
- Seul Chan Lee
- Department of Human Computer Interaction, Hanyang University ERICA, Ansan, Republic of Korea.
2
Astrologo AN, Nano S, Klemm EM, Shefelbine SJ, Dennerlein JT. Determining the effects of AR/VR HMD design parameters (mass and inertia) on cervical spine joint torques. Applied Ergonomics 2024; 116:104183. [PMID: 38071785] [DOI: 10.1016/j.apergo.2023.104183]
Abstract
This study aimed to determine gravitational and dynamic torques and muscle activity of the neck across a series of design parameters of head mounted displays (mass, center of mass, and counterweights) associated with virtual and augmented reality (VR/AR). Twenty young adult participants completed five movement types (Slow and Fast Flexion/Extension and Rotation, and Search) while wearing a custom-designed prototype headset that varied the three design parameters: display mass (0, 200, 500, and 750 g), distance of the display's center of mass in front of the eyes (approximately 1, 3, and 5 cm anteriorly), and counterweights of 0, 166, 332, and 500 g to balance the display mass of 500 g at 7 cm. Inverse dynamics of a link segment model of the head and headset provided estimates of the torques about the joint between the skull and the occiput-first cervical vertebrae (OC1) and joint between the C7 and T1 vertebrae (C7). Surface electromyography (EMG) measured bilateral muscle activity of the splenius and upper trapezius muscles. Adding 750 g of display mass nearly doubled root mean square joint torques across all movement types. Increasing the distance of the display mass in front of the eyes by 4 cm increased torques about OC1 for the Slow and Fast Rotation and Search movements by approximately 20%. Adding a counterweight decreased torques about OC1 during the rotation and search tasks but did not decrease the torques experienced in the lower cervical spine (C7). For the flexion/extension axis, the magnitude of the dynamic torque component was 20% or less of the total torque experienced whereas for the rotation axis the magnitude of the dynamic torque component was greater than 50% of the total torque. Surface EMG root mean square values significantly varied across movement types with the fast rotation having the largest values; however, they did not vary significantly across the headset configurations.
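The gravitational component of these joint torques follows from a simple link-segment relation: torque equals mass times g times the horizontal moment arm. A hedged sketch of that relation, using illustrative values drawn from the study's conditions (the function and exact moment arms are assumptions, not the authors' model):

```python
G = 9.81  # gravitational acceleration, m/s^2

def gravitational_torque_nm(mass_kg, moment_arm_m):
    """Static torque (N*m) a head-mounted mass exerts about a cervical
    joint; the moment arm's sign encodes anterior (+) vs posterior (-)."""
    return mass_kg * G * moment_arm_m

# 500 g display ~5 cm anterior to the joint vs. a 500 g counterweight ~7 cm posterior
display_torque = gravitational_torque_nm(0.500, 0.05)   # flexing torque
counter_torque = gravitational_torque_nm(0.500, -0.07)  # extending torque
net_torque = display_torque + counter_torque            # counterweight overcorrects slightly
```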
Affiliation(s)
- Sarah Nano
- Department of Bioengineering, Northeastern University, Boston, MA, USA
- Elizabeth M Klemm
- Department of Bioengineering, Northeastern University, Boston, MA, USA
- Sandra J Shefelbine
- Department of Bioengineering, Northeastern University, Boston, MA, USA; Department of Mechanical & Industrial Engineering, Northeastern University, Boston, MA, USA
- Jack T Dennerlein
- Sargent College of Health and Rehabilitation Sciences, Boston University, Boston, MA, USA.
3
DuTell V, Gibaldi A, Focarelli G, Olshausen BA, Banks MS. High-fidelity eye, head, body, and world tracking with a wearable device. Behav Res Methods 2024; 56:32-42. [PMID: 35879503] [PMCID: PMC10794349] [DOI: 10.3758/s13428-022-01888-3]
Abstract
We describe the design and performance of a high-fidelity wearable head-, body-, and eye-tracking system that offers significant improvements over previous such devices. The device's sensors include a binocular eye tracker, an RGB-D scene camera, a high-frame-rate scene camera, and two visual odometry sensors, for a total of ten cameras, which we synchronize and record at a combined data rate of over 700 MB/s. The sensors are operated by a mini-PC optimized for fast data collection and powered by a small battery pack. The device records a subject's eye, head, and body positions, simultaneously with RGB and depth data from the subject's visual environment, measured with high spatial and temporal resolution. The headset weighs only 1.4 kg, and the backpack with batteries 3.9 kg. The device can be comfortably worn by the subject, allowing a high degree of mobility. Together, this system overcomes many limitations of previous such systems, allowing high-fidelity characterization of the dynamics of natural vision.
Affiliation(s)
- Vasha DuTell
- Wertheim School of Optometry and Vision Science, UC Berkeley, Minor Hall, Berkeley, CA, USA.
- Redwood Center for Theoretical Neuroscience, UC Berkeley, Evans Hall, Berkeley, CA, USA.
- Agostino Gibaldi
- Wertheim School of Optometry and Vision Science, UC Berkeley, Minor Hall, Berkeley, CA, USA
- Giulia Focarelli
- Wertheim School of Optometry and Vision Science, UC Berkeley, Minor Hall, Berkeley, CA, USA
- Bruno A Olshausen
- Wertheim School of Optometry and Vision Science, UC Berkeley, Minor Hall, Berkeley, CA, USA
- Redwood Center for Theoretical Neuroscience, UC Berkeley, Evans Hall, Berkeley, CA, USA
- Martin S Banks
- Wertheim School of Optometry and Vision Science, UC Berkeley, Minor Hall, Berkeley, CA, USA
4
Du Y, Liu K, Ju Y, Wang H. A comfort analysis of AR glasses on physical load during long-term wearing. Ergonomics 2023; 66:1325-1339. [PMID: 36377507] [DOI: 10.1080/00140139.2022.2146207]
Abstract
The present study investigated the effect of the physical load of augmented reality (AR) glasses on subjective assessments during an extended video viewing task. Ninety-six subjects were recruited and were divided by spectacle use, sex, age, and body mass index (BMI). Four glasses frame weights were assessed. To investigate their effectiveness, a novel prototype adopting three design interventions, (1) adjustable frame width, (2) ergonomic temples, and (3) a fixed centre of gravity, was evaluated using subjective discomfort ratings (nose, ear, and overall). Subjective discomfort in all regions increased significantly with increasing physical load on the nose. In addition, non-spectacle users, women, older users, and participants in the middle BMI category reported higher discomfort than other groups. This finding could have important implications for the ergonomic design of AR glasses and could help identify design considerations relevant to the emerging wearable display industry. Practitioner summary: This research explores the influence of the physical load of augmented reality (AR) glasses. It found that discomfort increased with added nose load. Non-spectacle users, women, older users, and participants in the middle BMI category were more sensitive to discomfort. The results have important implications for the design of glasses-type wearables.
Affiliation(s)
- Yujia Du
- School of Design, Hunan University, Changsha, China
- Kexiang Liu
- School of Design, Hunan University, Changsha, China
- Yuxin Ju
- School of Design, Hunan University, Changsha, China
- Haining Wang
- School of Design, Hunan University, Changsha, China
5
Trinidad-Fernández M, Bossavit B, Salgado-Fernández J, Abbate-Chica S, Fernández-Leiva AJ, Cuesta-Vargas AI. Head-Mounted Display for Clinical Evaluation of Neck Movement Validation with Meta Quest 2. Sensors (Basel, Switzerland) 2023; 23:3077. [PMID: 36991788] [PMCID: PMC10056752] [DOI: 10.3390/s23063077]
Abstract
Neck disorders have a significant impact on people because of their high incidence. Head-mounted display (HMD) systems, such as the Meta Quest 2, grant access to immersive virtual reality (iVR) experiences. This study aims to validate the Meta Quest 2 HMD system as an alternative for screening neck movement in healthy people. The device provides data about the position and orientation of the head and, thus, neck mobility around the three anatomical axes. The authors developed a VR application that asks participants to perform six neck movements (rotation, flexion, and lateralization on both sides), which allows the collection of the corresponding angles. An InertiaCube3 inertial measurement unit (IMU) was also attached to the HMD to serve as the reference standard for criterion validity. The mean absolute error (MAE), the percentage of error (%MAE), and the criterion validity and agreement were calculated. The study shows that the average absolute errors do not exceed 1° (average = 0.48 ± 0.09°). The rotational movement's average %MAE is 1.61 ± 0.82%. The head orientations obtain a correlation between 0.70 and 0.96. The Bland-Altman analysis reveals good agreement between the HMD and IMU systems. Overall, the study shows that the angles provided by the Meta Quest 2 HMD system are valid for calculating the rotational angles of the neck in each of the three axes. The obtained results demonstrate an acceptable error percentage and a very small absolute error when measuring the degrees of neck rotation; therefore, the sensor can be used for screening neck disorders in healthy people.
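The MAE and %MAE metrics can be reproduced directly from paired angle series. A minimal sketch with illustrative data (not the study's recordings; here %MAE is taken relative to the reference range of motion, which is one common convention):

```python
def mae(reference_deg, measured_deg):
    """Mean absolute error between two equal-length angle series (degrees)."""
    if len(reference_deg) != len(measured_deg):
        raise ValueError("series must have equal length")
    return sum(abs(r - m) for r, m in zip(reference_deg, measured_deg)) / len(reference_deg)

def pct_mae(reference_deg, measured_deg):
    """MAE expressed as a percentage of the reference range of motion."""
    rom = max(reference_deg) - min(reference_deg)
    return 100.0 * mae(reference_deg, measured_deg) / rom

imu_deg = [0.0, 20.0, 40.0, 60.0]   # reference (IMU) rotation angles
hmd_deg = [0.5, 19.6, 40.4, 59.5]   # headset-reported angles
```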
Affiliation(s)
- Manuel Trinidad-Fernández
- Grupo de Investigación Clinimetría, Departamento de Fisioterapia, Universidad de Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga y Plataforma en Nanomedicina (IBIMA), Plataforma Bionand, 29590 Málaga, Spain
- Benoît Bossavit
- ITIS Software, Departamento de Lenguajes y Ciencias de la Computación, Universidad de Málaga, Andalucía Tech, 29071 Málaga, Spain
- Javier Salgado-Fernández
- Departamento de Expresión Gráfica, Diseño y Proyectos, Escuela de Ingenierías Industriales, Universidad de Málaga, 29071 Málaga, Spain
- Susana Abbate-Chica
- Grupo de Investigación Clinimetría, Departamento de Fisioterapia, Universidad de Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga y Plataforma en Nanomedicina (IBIMA), Plataforma Bionand, 29590 Málaga, Spain
- Antonio J. Fernández-Leiva
- ITIS Software, Departamento de Lenguajes y Ciencias de la Computación, Universidad de Málaga, Andalucía Tech, 29071 Málaga, Spain
- Antonio I. Cuesta-Vargas
- Grupo de Investigación Clinimetría, Departamento de Fisioterapia, Universidad de Málaga, 29071 Málaga, Spain
- Instituto de Investigación Biomédica de Málaga y Plataforma en Nanomedicina (IBIMA), Plataforma Bionand, 29590 Málaga, Spain
- School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD 4000, Australia
6
de’Sperati C, Dalmasso V, Moretti M, Høeg ER, Baud-Bovy G, Cozzi R, Ippolito J. Enhancing Visual Exploration through Augmented Gaze: High Acceptance of Immersive Virtual Biking by Oldest Olds. International Journal of Environmental Research and Public Health 2023; 20:1671. [PMID: 36767037] [PMCID: PMC9914324] [DOI: 10.3390/ijerph20031671]
Abstract
The diffusion of virtual reality applications dedicated to aging urges us to appraise its acceptance by target populations, especially the oldest olds. We investigated whether immersive virtual biking, and specifically a visuomotor manipulation aimed at improving visual exploration (augmented gaze), was well accepted by elders living in assisted residences. Twenty participants (mean age 89.8 years, five males) performed three 9 min virtual biking sessions pedalling on a cycle ergometer while wearing a Head-Mounted Display which immersed them inside a 360-degree pre-recorded biking video. In the second and third sessions, the relationship between horizontal head rotation and contingent visual shift was experimentally manipulated (augmented gaze), the visual shift being twice (gain = 2.0) or thrice (gain = 3.0) the amount of head rotation. User experience, motion sickness and visual exploration were measured. We found (i) very high user experience ratings, regardless of the gain; (ii) no effect of gain on motion sickness; and (iii) increased visual exploration (slope = +46%) and decreased head rotation (slope = -18%) with augmented gaze. The improvement in visual exploration capacity, coupled with the lack of intolerance signs, suggests that augmented gaze can be a valuable tool to improve the "visual usability" of certain virtual reality applications for elders, including the oldest olds.
Affiliation(s)
- Claudio de’Sperati
- Laboratory of Action, Perception and Cognition, School of Psychology, Vita-Salute San Raffaele University, 20132 Milan, Italy
- Vittorio Dalmasso
- Laboratory of Action, Perception and Cognition, School of Psychology, Vita-Salute San Raffaele University, 20132 Milan, Italy
- Michela Moretti
- Laboratory of Action, Perception and Cognition, School of Psychology, Vita-Salute San Raffaele University, 20132 Milan, Italy
- Emil Rosenlund Høeg
- Multisensory Experience Laboratory, Department of Architecture, Design and Media Technology, Aalborg University, 2450 Copenhagen, Denmark
- Gabriel Baud-Bovy
- Laboratory of Action, Perception and Cognition, School of Psychology, Vita-Salute San Raffaele University, 20132 Milan, Italy
- Roberto Cozzi
- RSA San Giuseppe, Associazione Monte Tabor, 20132 Milan, Italy
- Jacopo Ippolito
- RSA San Giuseppe, Associazione Monte Tabor, 20132 Milan, Italy
7
Guinet AL, Bouyer G, Otmane S, Desailly E. Visual Feedback in Augmented Reality to Walk at Predefined Speed: Cross-Sectional Study Including Children With Cerebral Palsy. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2322-2331. [PMID: 35951576] [DOI: 10.1109/tnsre.2022.3198243]
Abstract
In an augmented reality environment, the range of possible real-time visual feedback is extensive. This study aimed to compare the impact of six augmented reality scenarios, combining four visual feedback characteristics, on achieving a target walking speed. The six scenarios were developed for the Microsoft HoloLens augmented reality headset. The four feedback characteristics varied were: color; spatial anchoring; speed of the feedback; and persistence. Each characteristic could take different values (for example, the color could be unicolor, bicolor, or gradient). Participants walked two consecutive trials for each scenario: at their maximal speed and at an intermediate speed. Mean speed, percentage of time spent above or around target speed, and time to reach target speed were compared between scenarios using mixed linear models. A total of 25 children with disabilities were included. The feasibility and user experience were excellent. Mean speed during scenario 6, which displayed feedback with gradient color, anchored to the world, moving at a speed equal to the player's own speed, and disappearing over time, was significantly higher than in the other scenarios and the control (p = 0.003). Participants spent 80.98% of the time above target speed during scenario 6. This scenario combined the best set of feedback characteristics for exceeding the target walking speed (p = 0.0058). Scenarios 5 and 6, which shared the same feedback characteristics for spatial anchoring (world-locked) and feedback speed (equal to the player's speed), decreased the time to reach the target speed (p = 0.019). Delivering multi-modal feedback has been recognized as more effective for improving motor performance. Our results showed that not all visual feedback had the same impact on performance. Further studies are required to test the weight of each feedback characteristic and their possible interactions within each scenario.
This study was registered in the ClinicalTrials.gov database (NCT04460833).
8
Vlahovic S, Suznjevic M, Skorin-Kapov L. A survey of challenges and methods for Quality of Experience assessment of interactive VR applications. Journal on Multimodal User Interfaces 2022; 16. [PMCID: PMC9051501] [DOI: 10.1007/s12193-022-00388-0]
Abstract
User acceptance of virtual reality (VR) applications is dependent on multiple aspects, such as usability, enjoyment, and cybersickness. To fully realize the disruptive potential of VR technology in light of recent technological advancements (e.g., advanced headsets, immersive graphics), gaining a deeper understanding of underlying factors and dimensions impacting and contributing to the overall end-user experience is of great benefit to hardware manufacturers, software and content developers, and service providers. To provide insight into user behaviour and preferences, researchers conduct user studies exploring the influence of various user-, system-, and context-related factors on the overall Quality of Experience (QoE) and its dimensions. When planning and executing such studies, researchers are faced with numerous methodological challenges related to study design aspects, such as specification of dependent and independent variables, subjective and objective assessment methods, preparation of test materials, test environment, and participant recruitment. Approaching these challenges from a multidisciplinary perspective, this paper reviews different aspects of performing perception-based QoE assessment for interactive VR applications and presents options and recommendations for research methodology design. We provide an overview of different influence factors and dimensions that may affect the overall QoE, with a focus on presence, immersion, and discomfort. Furthermore, we address ethical and practical issues regarding participant choice and test material, present different assessment methods and measures commonly used in VR research, and discuss approaches to choosing study duration and location. Lastly, we provide a concise analysis of key challenges that need to be addressed in future studies centered around VR QoE.
Affiliation(s)
- Sara Vlahovic
- Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
- Mirko Suznjevic
- Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
- Lea Skorin-Kapov
- Faculty of Electrical Engineering and Computing, University of Zagreb, Unska 3, 10000 Zagreb, Croatia
9
Horsak B, Simonlehner M, Schöffer L, Dumphart B, Jalaeefar A, Husinsky M. Overground Walking in a Fully Immersive Virtual Reality: A Comprehensive Study on the Effects on Full-Body Walking Biomechanics. Front Bioeng Biotechnol 2021; 9:780314. [PMID: 34957075] [PMCID: PMC8693458] [DOI: 10.3389/fbioe.2021.780314]
Abstract
Virtual reality (VR) is an emerging technology offering tremendous opportunities to aid gait rehabilitation. To date, real walking with users immersed in virtual environments with head-mounted displays (HMDs) is possible either with treadmills or with room-scale (overground) VR setups. Especially for the latter, there is growing interest in applications for interactive gait training, as they could allow for more self-paced and natural walking. This study investigated whether walking in an overground VR environment has relevant effects on 3D gait biomechanics. A convenience sample of 21 healthy individuals underwent standard 3D gait analysis during four randomly assigned walking conditions: the real laboratory (RLab), a virtual laboratory resembling the real world (VRLab), a smaller version of the VRLab (VRLab-), and a version twice as long as the VRLab (VRLab+). To immerse the participants in the virtual environment, we used a VR HMD, which was operated wirelessly and calibrated so that the virtual labs matched the real world. Walking speed and a single measure of gait kinematic variability (GaitSD) served as primary outcomes, next to standard spatio-temporal parameters, their coefficients of variation (CV%), kinematics, and kinetics. Briefly, participants demonstrated a slower walking pattern (-0.09 ± 0.06 m/s) and small accompanying kinematic and kinetic changes. Participants also showed markedly increased gait variability in lower extremity kinematics and spatio-temporal parameters. No differences were found between walking in VRLab+ and VRLab-. Most of the kinematic and kinetic differences were too small to be regarded as relevant, but increased kinematic variability (+57%), increased percent double support time (+4%), and increased step width variability (+38%) indicate gait adaptations toward a more conservative or cautious gait due to instability induced by the VR environment.
We suggest considering these effects in the design of VR-based overground training devices. Our study lays the foundation for upcoming developments in the field of VR-assisted gait rehabilitation as it describes how VR in overground walking scenarios impacts our gait pattern. This information is of high relevance when one wants to develop purposeful rehabilitation tools.
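The CV% variability measure reported above is simply the standard deviation expressed as a percentage of the mean. A small sketch with illustrative step-width values (hypothetical numbers, not the study's data):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: sample SD as a percentage of the mean."""
    mean = statistics.mean(values)
    if mean == 0:
        raise ValueError("mean must be non-zero")
    return 100.0 * statistics.stdev(values) / mean

step_widths_m = [0.10, 0.11, 0.09, 0.10]  # illustrative step widths (m)
step_width_cv = cv_percent(step_widths_m)
```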
Affiliation(s)
- Brian Horsak
- Center for Digital Health and Social Innovation, St. Pölten University of Applied Sciences, St Pölten, Austria
- Mark Simonlehner
- Department of Health, Institute of Health Sciences, St. Pölten University of Applied Sciences, St Pölten, Austria
- Lucas Schöffer
- Department of Media and Digital Technologies, Institute of Creative\Media/Technologies, St. Pölten University of Applied Sciences, St Pölten, Austria
- Bernhard Dumphart
- Department of Health, Institute of Health Sciences, St. Pölten University of Applied Sciences, St Pölten, Austria
- Arian Jalaeefar
- Department of Media and Digital Technologies, Institute of Creative\Media/Technologies, St. Pölten University of Applied Sciences, St Pölten, Austria
- Matthias Husinsky
- Department of Media and Digital Technologies, Institute of Creative\Media/Technologies, St. Pölten University of Applied Sciences, St Pölten, Austria
10
Chidambaram S, Stifano V, Demetres M, Teyssandier M, Palumbo MC, Redaelli A, Olivi A, Apuzzo MLJ, Pannullo SC. Applications of augmented reality in the neurosurgical operating room: A systematic review of the literature. J Clin Neurosci 2021; 91:43-61. [PMID: 34373059] [DOI: 10.1016/j.jocn.2021.06.032]
Abstract
Advancements in imaging techniques are key forces of progress in neurosurgery. The importance of accurate visualization of intraoperative anatomy cannot be overemphasized and is commonly delivered through traditional neuronavigation. Augmented Reality (AR) technology has been tested and applied widely in various neurosurgical subspecialties in intraoperative, clinical use and shows promise for the future. This systematic review of the literature explores the ways in which AR technology has been successfully brought into the operating room (OR) and incorporated into clinical practice. A comprehensive literature search was performed in the following databases from inception to April 2020: Ovid MEDLINE, Ovid EMBASE, and The Cochrane Library. Studies retrieved were then screened for eligibility against predefined inclusion/exclusion criteria. A total of 54 articles were included in this systematic review. The studies were subgrouped into brain and spine subspecialties and analyzed for their incorporation of AR in the neurosurgical clinical setting. AR technology has the potential to greatly enhance intraoperative visualization and guidance in neurosurgery beyond traditional neuronavigation systems. However, there are several key challenges to scaling the use of this technology and bringing it into standard operative practice, including accurate and efficient brain segmentation of magnetic resonance imaging (MRI) scans, accounting for brain shift, reducing coregistration errors, and improving the AR device hardware. There is also an exciting potential for future work combining AR with multimodal imaging techniques and artificial intelligence to further enhance its impact in neurosurgery.
Affiliation(s)
- Vito Stifano
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Institute of Neurosurgery, Catholic University, Rome, Italy
- Michelle Demetres
- Samuel J. Wood & C.V. Starr Biomedical Information Center, Weill Cornell Medical College/New York Presbyterian Hospital, New York, NY, USA
- Maria Chiara Palumbo
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Alberto Redaelli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Alessandro Olivi
- Department of Neurosurgery, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy; Institute of Neurosurgery, Catholic University, Rome, Italy
- Susan C Pannullo
- Department of Neurosurgery, Weill Cornell Medical College, NY, USA.
11
Healey LA, Derouin AJ, Callaghan JP, Cronin DS, Fischer SL. Night Vision Goggle and Counterweight Use Affect Neck Muscle Activity During Reciprocal Scanning. Aerosp Med Hum Perform 2021; 92:172-181. [PMID: 33754975] [DOI: 10.3357/amhp.5673.2021]
Abstract
BACKGROUND: Mass, moment of inertia, and amplitude of neck motion were altered during a reciprocal scanning task to investigate how night vision goggle (NVG) use is mechanistically associated with neck trouble among rotary-wing aircrew. METHODS: Thirty subjects were measured while scanning between targets at 2 amplitudes (near and far) and under 4 head-supported mass conditions (combinations of helmet, NVGs, and counterweights). Electromyography (EMG) was measured bilaterally from the sternocleidomastoid and upper neck extensors. Kinematics were measured from the trunk and head. RESULTS: Scanning between the far-amplitude targets required higher peak angular accelerations (7% increase) and neck EMG (between 1.2% and 4.5% increase), lower muscle cocontraction ratios (6.7% decrease), and fewer gaps in EMG (up to a 59% decrease) relative to the near targets. Increasing the mass of the helmet had modest effects on neck EMG, while increasing the moment of inertia did not. DISCUSSION: Target amplitude, not head-supported mass configuration, had the greater effect on exposure metrics. Use of NVGs restricts field of view, requiring an increased amplitude of neck movement. This may play an important role in understanding links between neck trouble and NVG use. Healey LA, Derouin AJ, Callaghan JP, Cronin DS, Fischer SL. Night vision goggle and counterweight use affect neck muscle activity during reciprocal scanning. Aerosp Med Hum Perform. 2021; 92(3):172-181.
12
Almajid R, Tucker C, Keshner E, Vasudevan E, Wright WG. Effects of wearing a head-mounted display during a standard clinical test of dynamic balance. Gait Posture 2021; 85:78-83. [PMID: 33517040] [DOI: 10.1016/j.gaitpost.2021.01.020]
Abstract
BACKGROUND: The use of virtual reality (VR) in clinical settings has increased with the introduction of affordable, easy-to-use head-mounted displays (HMDs). However, some have raised concerns about the effects that HMDs have on posture and locomotion, even without the projection of a virtual scene, which may differ across ages. RESEARCH QUESTION: How does HMD wear impact kinematic measures in younger and older adults? METHODS: Twelve healthy young and sixteen older adults participated in two testing conditions: 1) TUG with no HMD and 2) TUG with an HMD displaying a scene of the actual environment (TUG-HMD). The dependent variables were the pitch, yaw, and roll peak trunk velocities (PTVs) in each TUG component, turning cadence, and the time to complete the TUG and its components: SIT-TO-STAND, TURN, WALK, and STAND-TO-SIT. RESULTS: Wearing the HMD decreased turning cadence and pitch and yaw PTVs in all TUG components, decreased roll PTV in SIT-TO-STAND and TURN, and increased the time taken to complete all TUG components in all participants. Wearing the HMD decreased the pitch PTV in SIT-TO-STAND in older relative to younger adults. Wearing an HMD affected TUG performance in younger and older adults, which should be considered when an HMD is used for VR applications in rehabilitation. SIGNIFICANCE: Our findings highlight the importance of considering the physical effect of HMD wear in clinical testing, which may not be present with non-wearable VR technologies.
Collapse
Affiliation(s)
- Rania Almajid
- Department of Physical Therapy, West Coast University, 590 N Vermont Ave, Los Angeles, CA, 90004, USA; Department of Physical Therapy, Temple University, 1801 N Broad St., Philadelphia, PA, 19122, USA.
| | - Carole Tucker
- Department of Physical Therapy, Temple University, 1801 N Broad St., Philadelphia, PA, 19122, USA.
| | - Emily Keshner
- Department of Physical Therapy, Temple University, 1801 N Broad St., Philadelphia, PA, 19122, USA.
| | - Erin Vasudevan
- Department of Health and Rehabilitation Sciences, School of Health Technology and Management, Stony Brook University, 101 Nicolls Road, Health Sciences Center, Stony Brook, 11794, USA.
| | - William Geoffrey Wright
- Department of Physical Therapy, Temple University, 1801 N Broad St., Philadelphia, PA, 19122, USA; Department of Bioengineering, Temple University 1801 N Broad St., Philadelphia, PA, 19122, USA.
| |
Collapse
|
13
|
Zorzal ER, Paulo SF, Rodrigues P, Mendes JJ, Lopes DS. An immersive educational tool for dental implant placement: A study on user acceptance. Int J Med Inform 2020; 146:104342. [PMID: 33310434 DOI: 10.1016/j.ijmedinf.2020.104342] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Revised: 11/11/2020] [Accepted: 11/13/2020] [Indexed: 10/22/2022]
Abstract
BACKGROUND Tools for training and education of dental students can improve their ability to perform technical procedures such as dental implant placement. A shortage of training can negatively affect dental implantologists' performance during intraoperative procedures, resulting in a lack of surgical precision and, consequently, inadequate implant placement, which may lead to unsuccessful implant-supported restorations or other complications. OBJECTIVE We designed and developed IMMPLANT, a virtual reality educational tool to assist implant placement learning, which allows users to freely manipulate 3D dental models (e.g., a simulated patient's mandible and implant) with their dominant hand while operating a touchscreen device to assist 3D manipulation. METHODS The proposed virtual reality tool combines an immersive head-mounted display, a small hand-tracking device, and a smartphone, all connected to a laptop. The operator's dominant hand is tracked to quickly and coarsely manipulate either the 3D dental model or the virtual implant, while the non-dominant hand holds a smartphone converted into a controller that assists button activation and provides greater input precision for 3D implant positioning and inclination. We evaluated IMMPLANT's usability and acceptance during training sessions with 16 dental professionals. RESULTS The user acceptance study revealed that IMMPLANT constitutes a versatile, portable, and complementary tool to assist implant placement learning, as it promotes immersive visualization and spatial manipulation of 3D dental anatomy. CONCLUSIONS IMMPLANT is a promising virtual reality tool to assist student learning and 3D dental visualization for implant placement education. IMMPLANT may also be easily incorporated into training programs for dental students.
Collapse
Affiliation(s)
- Ezequiel Roberto Zorzal
- ICT/UNIFESP, Instituto de Ciência e Tecnologia, Universidade Federal de São Paulo, Brazil; INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal.
| | | | - Pedro Rodrigues
- Clinical Research Unit (CRU), Centro de Investigação Interdisciplinar Egas Moniz (CiiEM), Instituto Universitário Egas Moniz, Almada, Portugal
| | - José João Mendes
- Clinical Research Unit (CRU), Centro de Investigação Interdisciplinar Egas Moniz (CiiEM), Instituto Universitário Egas Moniz, Almada, Portugal
| | - Daniel Simões Lopes
- INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal.
| |
Collapse
|
14
|
Logeswaran A, Munsch C, Chong YJ, Ralph N, McCrossnan J. The role of extended reality technology in healthcare education: Towards a learner-centred approach. Future Healthc J 2020; 8:e79-e84. [PMID: 33791482 DOI: 10.7861/fhj.2020-0112] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
The use of extended reality (XR) technologies is growing rapidly in a range of industries from gaming to aviation. However, how this technology should be implemented in healthcare education is not well documented in the literature. Learner-driven implementation of educational technology has previously been shown to be more effective than a technology-driven approach. In this paper we conduct a narrative literature review of relevant papers to explore the role of XR technologies in learner-driven approaches to healthcare education. This paper aims to evaluate the position of XR technologies in learner-centred pedagogical models, determine which functions of XR technologies can improve learner-centred approaches in healthcare education, and explore whether XR technologies can improve learning outcomes in healthcare education. We conclude that XR technologies have unique attributes that can improve learning outcomes when compared to traditional learning methods, but there is currently a shortfall in learner-centred implementation of XR technologies in healthcare education, where these technologies have the capacity to cause a paradigm shift.
Collapse
|
15
|
Experimental Setup Employed in the Operating Room Based on Virtual and Mixed Reality: Analysis of Pros and Cons in Open Abdomen Surgery. JOURNAL OF HEALTHCARE ENGINEERING 2020; 2020:8851964. [PMID: 32832048 PMCID: PMC7428968 DOI: 10.1155/2020/8851964] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/08/2020] [Revised: 07/20/2020] [Accepted: 07/24/2020] [Indexed: 12/19/2022]
Abstract
Currently, surgeons in operating rooms must divide their attention between the patient's body and flat, low-quality surgical monitors in order to get all the information needed to successfully complete surgeries. The way the data are displayed disrupts the surgeon's view, which may affect performance, and other members of the surgical team lack proper visual tools to aid the surgeon. The idea underlying this paper is to exploit mixed reality to support surgeons during surgical procedures. In particular, the proposed experimental setup, employed in the operating room, is based on an architecture that puts together the Microsoft HoloLens, a Digital Imaging and Communications in Medicine (DICOM) player, and a mixed reality visualization tool (i.e., Spectator View) developed using the Mixed Reality Toolkit in Unity with the Windows 10 SDK. The suggested approach enables visual information on the patient's body, as well as the results of medical screenings, to be visualized on the surgeon's headset. Additionally, the architecture enables any data and details to be shared by team members or by external users during surgical operations. The paper analyses in detail the advantages and drawbacks that the surgeons found when they wore the Microsoft HoloLens headset during all ten open-abdomen surgeries conducted at the IRCCS Hospital “Giovanni Paolo II” in Bari, Italy. A Likert-scale survey demonstrates how the use of the suggested tools can increase execution speed by allowing multitasking, i.e., by checking medical images at high resolution without leaving the operating table and the patient. On the other hand, the survey also reveals increased physical stress and reduced comfort due to the weight of the Microsoft HoloLens device, along with drawbacks due to limited battery autonomy.
Additionally, the survey seems to encourage the use of the DICOM Viewer and Spectator View both for surgical education and for improving surgery outcomes. Note that the real use of the conceived platform in the operating room is a remarkable feature of this paper, since most if not all studies conducted so far in the literature exploit mixed reality only in simulated environments, not in real operating rooms. In conclusion, the study clearly highlights that, despite the improvements the current technology will require in the forthcoming years, mixed reality represents a promising technique that will soon enter operating rooms to support surgeons during surgical procedures in many hospitals across the world.
Collapse
|
16
|
Zorzal ER, Campos Gomes JM, Sousa M, Belchior P, da Silva PG, Figueiredo N, Lopes DS, Jorge J. Laparoscopy with augmented reality adaptations. J Biomed Inform 2020; 107:103463. [PMID: 32562897 DOI: 10.1016/j.jbi.2020.103463] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2020] [Revised: 05/27/2020] [Accepted: 05/30/2020] [Indexed: 11/29/2022]
Abstract
One of the most promising applications of Optical See-Through Augmented Reality is minimally invasive laparoscopic surgery, which currently suffers from problems such as surgeon discomfort and fatigue caused by looking at a display positioned outside the surgeon's visual field, made worse by the length of the procedure. This fatigue is especially felt in the surgeon's neck, which is strained by adopting unnatural postures to visualise the laparoscopic video feed. Throughout this paper, we present work in Augmented Reality, as well as developments in surgery and in Augmented Reality applied both to surgery in general and to laparoscopy in particular, to address these issues. We applied user and task analysis methods to learn about practices performed in the operating room by observing surgeons in their working environment, in order to understand, in detail, how they performed their tasks and achieved their intended goals. Drawing on observations and analysis of video recordings of laparoscopic surgeries, we identified relevant constraints and design requirements. Besides proposals to approach the ergonomic issues, we present the design and implementation of a multimodal interface to enhance the laparoscopic procedure. Our method makes the procedure more comfortable for surgeons by allowing them to keep the laparoscopic video in their viewing area regardless of neck posture. Our interface also makes it possible to access patient imaging data without interrupting the operation, and to communicate with team members through a pointing reticle. We evaluated how surgeons perceived the implemented prototype, in terms of usefulness and usability, via a think-aloud protocol used to conduct qualitative evaluation sessions, which we describe in detail in this paper.
In addition to checking the advantages of the prototype compared to traditional laparoscopic settings, we administered a System Usability Scale questionnaire to measure usability and a NASA Task Load Index questionnaire to rate perceived workload and assess the prototype's effectiveness. Our results show that surgeons consider that our prototype can improve surgeon-to-surgeon communication by using head pose as a means of pointing. Surgeons also believe that our approach can afford a more comfortable posture throughout the surgery and enhance hand-eye coordination, as physicians no longer need to twist their necks to look at screens placed outside the field of operation.
Collapse
Affiliation(s)
- Ezequiel Roberto Zorzal
- ICT/UNIFESP, Instituto de Ciência e Tecnologia, Universidade Federal de São Paulo, Brazil; INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal.
| | | | - Maurício Sousa
- INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal; Champalimaud Foundation, Lisbon, Portugal
| | - Pedro Belchior
- INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal
| | | | | | - Daniel Simões Lopes
- INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal
| | - Joaquim Jorge
- INESC-ID Lisboa, Instituto Superior Técnico, Universidade de Lisboa, Portugal.
| |
Collapse
|
17
|
Muhla F, Clanché F, Duclos K, Meyer P, Maïaux S, Colnat-Coulbois S, Gauchard GC. Impact of using immersive virtual reality over time and steps in the Timed Up and Go test in elderly people. PLoS One 2020; 15:e0229594. [PMID: 32168361 PMCID: PMC7069621 DOI: 10.1371/journal.pone.0229594] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Accepted: 02/10/2020] [Indexed: 11/18/2022] Open
Abstract
Today, falls constitute a substantial health problem, especially in the elderly, and the diagnostic tests used by clinicians often present low sensitivity and specificity. This is the case for the Timed Up and Go test, which lacks contextualization with regard to everyday life, limiting the relevance of its diagnosis. Virtual reality enables the creation of immersive, reproducible, and secure environments close to situations encountered in daily life, and as such could improve fall-risk assessment. This study aims to evaluate the effect of immersive virtual reality, delivered by wearing a virtual reality headset with a non-disturbing virtual environment, on Timed Up and Go completion compared to the real world. Thirty-one elderly people (73.7 ± 9 years old) volunteered to participate in the study, and the mean times and numbers of steps to complete a Timed Up and Go were compared in two conditions: a real-world clinical condition and a virtual reality condition. The results showed that the mean completion times and most of the mean numbers of steps of the Timed Up and Go in the virtual reality condition differed significantly from those in the clinical condition. These results suggest that there is a virtual reality effect and that this effect is significantly correlated with the time taken to complete the Timed Up and Go. This information will be of interest for quantifying the potential contribution of the virtual reality effect to motor control measured in a virtual task using controlled virtual disturbances.
Collapse
Affiliation(s)
- Frédéric Muhla
- UFR STAPS, Faculty of Sport Science, Université de Lorraine, Villers-lès-Nancy, France
- EA 3450 DevAH, Development, Adaptation and Handicap, Faculty of Medicine, Université de Lorraine, Vandœuvre-lès-Nancy, France
| | - Fabien Clanché
- UFR STAPS, Faculty of Sport Science, Université de Lorraine, Villers-lès-Nancy, France
- EA 3450 DevAH, Development, Adaptation and Handicap, Faculty of Medicine, Université de Lorraine, Vandœuvre-lès-Nancy, France
| | - Karine Duclos
- UFR STAPS, Faculty of Sport Science, Université de Lorraine, Villers-lès-Nancy, France
- EA 3450 DevAH, Development, Adaptation and Handicap, Faculty of Medicine, Université de Lorraine, Vandœuvre-lès-Nancy, France
| | | | | | | | - Gérome C. Gauchard
- UFR STAPS, Faculty of Sport Science, Université de Lorraine, Villers-lès-Nancy, France
- EA 3450 DevAH, Development, Adaptation and Handicap, Faculty of Medicine, Université de Lorraine, Vandœuvre-lès-Nancy, France
| |
Collapse
|
18
|
Mechanical Energy Expenditure-based Comfort Evaluation Model for Gesture Interaction. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2019; 2018:9861697. [PMID: 30719035 PMCID: PMC6335735 DOI: 10.1155/2018/9861697] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/14/2018] [Accepted: 10/31/2018] [Indexed: 11/17/2022]
Abstract
As an advanced interaction mode, gestures are widely used in human-computer interaction (HCI). This paper proposes a comfort evaluation model based on mechanical energy expenditure (MEE) and mechanical efficiency (ME) to predict the comfort of gestures. The proposed model takes nineteen muscles and seven degrees of freedom into consideration, based on muscle and joint data, and is capable of simulating the MEE and ME of both static and dynamic gestures. Comfort scores (CSs) can therefore be calculated by normalizing the MEE and ME and assigning them different decision weights. Compared with traditional measurement-based comfort prediction methods, the proposed model, on the one hand, provides a quantitative value for the comfort of gestures without using electromyography (EMG) or other measuring devices; on the other hand, from the ergonomic perspective, the results provide an intuitive indicator of which action carries a higher risk of fatigue or injury for joints and muscles. Experiments were conducted to validate the effectiveness of the proposed model. Comparing the proposed comfort evaluation model with a model based on the range of motion (ROM) and a model based on the method for movement and gesture assessment (MMGA) reveals a slight difference, attributable to those models' neglect of dynamic gestures and of the relative kinematic characteristics during the movements of dynamic gestures. Therefore, considering the feedback of perceived effects and the gesture recognition rate in HCI, designers can better optimize gesture design by making use of the proposed comfort evaluation model.
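The normalize-and-weight step described above can be sketched as follows. This is only an illustration of the weighting scheme, not the paper's model: the normalization method, weight values, and example inputs are all assumptions.

```python
def comfort_scores(mee, me, w_mee=0.5, w_me=0.5):
    """Combine mechanical energy expenditure (lower = more comfortable) and
    mechanical efficiency (higher = more comfortable) into a 0-1 comfort
    score per candidate gesture, using min-max normalization and weights."""
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    mee_n, me_n = minmax(mee), minmax(me)
    # Invert normalized MEE so that less energy expended scores higher.
    return [w_mee * (1.0 - e) + w_me * f for e, f in zip(mee_n, me_n)]

# Three hypothetical gestures: MEE in joules, ME dimensionless.
cs = comfort_scores(mee=[10.0, 30.0, 20.0], me=[0.8, 0.5, 0.6])
```

Here the first gesture (lowest MEE, highest ME) scores best; the decision weights let a designer trade off the two criteria.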
Collapse
|
19
|
Lin CJ, Widyaningrum R. The effect of parallax on eye fixation parameter in projection-based stereoscopic displays. APPLIED ERGONOMICS 2018; 69:10-16. [PMID: 29477316 DOI: 10.1016/j.apergo.2017.12.020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2017] [Revised: 11/20/2017] [Accepted: 12/29/2017] [Indexed: 06/08/2023]
Abstract
Stereoscopic displays are a promising technology to explore as 3D virtual applications become widespread. Thus, this study investigated the effect of parallax on eye fixation in projection-based stereoscopic displays. The experiment was conducted at three levels of parallax, in which virtual balls were projected at the screen, and at 20 cm and 50 cm in front of the screen. The two important findings of this study are that parallax has significant effects on fixation duration, time to first fixation, number of fixations, and accuracy, and that participants had more accurate fixations, fewer fixations, shorter fixation durations, and shorter times to first fixation when the virtual ball was projected at the screen than at the other two levels of parallax.
Collapse
Affiliation(s)
- Chiuhsiang Joe Lin
- Department of Industrial Management, National Taiwan University of Science and Technology, No.43, Sec. 4, Keelung Rd., Da'an Dist., Taipei 10607, Taiwan, ROC.
| | - Retno Widyaningrum
- Department of Industrial Management, National Taiwan University of Science and Technology, No.43, Sec. 4, Keelung Rd., Da'an Dist., Taipei 10607, Taiwan, ROC
| |
Collapse
|
20
|
Chihara T, Seo A. Evaluation of physical workload affected by mass and center of mass of head-mounted display. APPLIED ERGONOMICS 2018; 68:204-212. [PMID: 29409636 DOI: 10.1016/j.apergo.2017.11.016] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/10/2017] [Revised: 11/27/2017] [Accepted: 11/29/2017] [Indexed: 06/07/2023]
Abstract
A head-mounted display (HMD) with inappropriate mass and center of mass (COM) increases the physical workload of HMD users. The aim of this study was to investigate the effects of mass and COM of HMD on physical workload. Twelve subjects participated in this study. The mass and posteroanterior COM position were 0.8, 1.2, or 1.6 kg and -7.0, 0.0, or 7.0 cm, respectively. The subjects gazed at the target objects in four test postures: the neutral, look-up, body-bending, and look-down postures. The normalized joint torques for the neck and the lumbar region were calculated based on the measured segment angles. The results showed that the neck joint torque was significantly affected by mass and COM and it increased as the HMD mass increased for all test postures. The COM position that minimized the neck joint torque varied depending on the test postures, and the recommended ranges of COM were identified.
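The mass and COM effects reported above can be illustrated with a simplified static moment calculation. This is not the paper's normalized-torque method (which used measured segment angles); the lever arm and example values below are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def neck_torque(mass_kg, com_offset_m, pitch_deg=0.0, lever_m=0.10):
    """Static gravitational moment (N*m) of a headset about the neck joint.
    lever_m is an assumed horizontal joint-to-headset distance in neutral
    posture; com_offset_m shifts it anteriorly (+) or posteriorly (-),
    and pitch_deg tilts the head, shortening the horizontal moment arm."""
    arm = (lever_m + com_offset_m) * math.cos(math.radians(pitch_deg))
    return mass_kg * G * arm

# 1.2 kg HMD with COM 7 cm anterior: moment arm 0.17 m in neutral posture.
tau = neck_torque(1.2, 0.07)
```

Even this toy model reproduces the two qualitative trends in the abstract: torque grows with HMD mass, and a posterior COM shift reduces the neck moment in neutral posture.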
Collapse
Affiliation(s)
- Takanori Chihara
- Kanazawa University, Kakuma-machi, Kanazawa, Ishikawa 920-1192, Japan.
| | - Akihiko Seo
- Tokyo Metropolitan University, 6-6 Asahigaoka, Hino, Tokyo 191-0065, Japan.
| |
Collapse
|
21
|
Lim J, Palmer CJ, Busa MA, Amado A, Rosado LD, Ducharme SW, Simon D, Van Emmerik REA. Additional helmet and pack loading reduce situational awareness during the establishment of marksmanship posture. ERGONOMICS 2017; 60:824-836. [PMID: 27594581 DOI: 10.1080/00140139.2016.1222001] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The pickup of visual information is critical for controlling movement and maintaining situational awareness in dangerous situations. Altered coordination while wearing protective equipment may affect the likelihood of injury or death. This investigation examined the consequences of load magnitude and distribution on situational awareness, segmental coordination and head gaze in several protective equipment ensembles. Twelve soldiers stepped down onto force plates and were instructed to quickly and accurately identify visual information while establishing marksmanship posture in protective equipment. Time to discriminate visual information was extended when additional pack and helmet loads were added, with the small increase in helmet load having the largest effect. Greater head-leading and in-phase trunk-head coordination were found with lighter pack loads, while trunk-leading coordination increased and head gaze dynamics were more disrupted with heavier pack loads. Additional armour load in the vest had no consequences for time to discriminate, coordination or head dynamics. This suggests that the addition of head-borne load should be carefully considered when integrating new technology, and that up-armouring does not necessarily have negative consequences for marksmanship performance. Practitioner Summary: Understanding the trade-space between protection and reductions in task performance continues to challenge those developing personal protective equipment. These methods provide an approach that can help optimise equipment design and loading techniques by quantifying changes in task performance and the emergent coordination dynamics that underlie that performance.
Collapse
Affiliation(s)
- Jongil Lim
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
| | - Christopher J Palmer
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
- Naval Special Warfare Command, N8 Survival Systems, Coronado, CA, USA
| | - Michael A Busa
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
| | - Avelino Amado
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
| | - Luis D Rosado
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
| | - Scott W Ducharme
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
| | - Darnell Simon
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
| | - Richard E A Van Emmerik
- Motor Control Laboratory, Department of Kinesiology, University of Massachusetts Amherst, Amherst, MA, USA
| |
Collapse
|
22
|
Spanlang B, Normand JM, Borland D, Kilteni K, Giannopoulos E, Pomés A, González-Franco M, Perez-Marcos D, Arroyo-Palacios J, Muncunill XN, Slater M. How to Build an Embodiment Lab: Achieving Body Representation Illusions in Virtual Reality. Front Robot AI 2014. [DOI: 10.3389/frobt.2014.00009] [Citation(s) in RCA: 126] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
|
23
|
Subramanian SK, Levin MF. Viewing medium affects arm motor performance in 3D virtual environments. J Neuroeng Rehabil 2011; 8:36. [PMID: 21718542 PMCID: PMC3145562 DOI: 10.1186/1743-0003-8-36] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2010] [Accepted: 06/30/2011] [Indexed: 12/04/2022] Open
Abstract
Background 2D and 3D virtual reality platforms are used for designing individualized training environments for post-stroke rehabilitation. Virtual environments (VEs) are viewed using media like head-mounted displays (HMDs) and large screen projection systems (SPS), which can influence the quality of perception of the environment. We assessed whether there were differences in arm pointing kinematics when subjects with and without stroke viewed a 3D VE through two different media: HMD and SPS. Methods Two groups of subjects participated (healthy control, n = 10, aged 53.6 ± 17.2 yrs; stroke, n = 20, 66.2 ± 11.3 yrs). Arm motor impairment and spasticity were assessed in the stroke group, which was divided into mild (n = 10) and moderate-to-severe (n = 10) sub-groups based on Fugl-Meyer scores. Subjects pointed (8 times each) to 6 randomly presented targets located at two heights in the ipsilateral, middle and contralateral arm workspaces. Movements were repeated in the same VE viewed using the HMD (Kaiser XL50) and the SPS. Movement kinematics were recorded using an Optotrak system (Certus, 6 markers, 100 Hz). Upper limb motor performance (precision, velocity, trajectory straightness) and movement pattern (elbow and shoulder ranges and trunk displacement) outcomes were analyzed using repeated measures ANOVAs. Results For all groups, there were no differences between the two media in endpoint trajectory straightness, shoulder flexion and shoulder horizontal adduction ranges, or sagittal trunk displacement. All subjects, however, made larger errors in the vertical direction using the HMD compared to the SPS. Healthy subjects also made larger errors in the sagittal direction, moved more slowly overall, and used less elbow extension range for the lower central target using the HMD compared to the SPS. The mild and moderate-to-severe sub-groups made larger RMS errors with the HMD.
The only advantage of the HMD was that movements in the moderate-to-severe stroke sub-group were 22% faster than with the SPS. Conclusions Despite the similarity in the majority of movement kinematics, movements made using the HMD were slower and showed larger errors. Use of the SPS may be a more comfortable and effective option for viewing VEs for upper limb rehabilitation post-stroke. This has implications for the use of VR applications to enhance upper limb recovery.
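The RMS endpoint errors compared above reduce to a simple computation over repeated pointing trials. A sketch with hypothetical endpoint coordinates, not the study's data:

```python
import math

def rms_error(endpoints, target):
    """Root-mean-square 3D distance (same units as input) between pointing
    endpoints and their target across repeated trials."""
    sq = [sum((p - t) ** 2 for p, t in zip(pt, target)) for pt in endpoints]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical endpoints (cm) of two trials around a target at the origin.
err = rms_error([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)], (0.0, 0.0, 0.0))
```

Per-axis errors (vertical vs. sagittal, as analyzed in the paper) follow the same pattern with the sum restricted to one coordinate.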
Collapse
Affiliation(s)
- Sandeep K Subramanian
- School of Physical and Occupational Therapy, McGill University, 3654 Promenade Sir William Osler, Montreal, Qc. H3G 1Y5, Canada
| | | |
Collapse
|