1. Chen J, Wright WG, Keshner E, Darvish K. Design and usability of a system for the study of head orientation. Front Rehabil Sci 2022; 3:978882. [PMID: 36386774 PMCID: PMC9663472 DOI: 10.3389/fresc.2022.978882]
Abstract
The ability to control head orientation relative to the body is a multisensory process that mainly depends on proprioceptive, vestibular, and visual sensory systems. A system to study the sensory integration of head orientation was developed and tested. A test seat with a five-point harness was assembled to provide passive postural support. A lightweight head-mounted display was designed for mounting multiaxis accelerometers and a mini-CCD camera to provide the visual input to virtual reality goggles with a 39° horizontal field of view. A digitally generated sinusoidal signal was delivered to a motor-driven computer-controlled sled on a 6-m linear railing system. A data acquisition system was designed to collect acceleration data. A pilot study was conducted to test the system. Four young, healthy subjects were seated with their trunks fixed to the seat. The subjects received a sinusoidal anterior–posterior translation with peak accelerations of 0.06g at 0.1 Hz and 0.12g at 0.2, 0.5, and 1.1 Hz. Four sets of visual conditions were randomly presented along with the translation. These conditions included eyes open, looking forward, backward, and sideways, and also eyes closed. Linear acceleration data were collected from linear accelerometers placed on the head, trunk, and seat and were processed using MATLAB. The head motion was analyzed using fast Fourier transform to derive the gain and phase of head pitch acceleration relative to seat linear acceleration. A randomization test for two independent variables tested the significance of visual and inertial effects on response gain and phase shifts. Results show that the gain was close to one, with no significant difference among visual conditions across frequencies. The phase was shown to be dependent on the head strategy each subject used.
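The FFT-based gain-and-phase analysis described above can be sketched as follows. This is a minimal illustration only, not the authors' MATLAB code; the signal names, sample rate, and the 0.2 Hz drive frequency are assumptions for the example:

```python
import numpy as np

def gain_and_phase(head_acc, seat_acc, fs, stim_freq):
    """Gain and phase (degrees) of head acceleration relative to seat
    acceleration at the stimulus frequency, taken from the FFT bin
    nearest the sinusoidal drive."""
    n = len(head_acc)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - stim_freq)))  # bin at the drive frequency
    h = np.fft.rfft(head_acc)[k]
    s = np.fft.rfft(seat_acc)[k]
    gain = np.abs(h) / np.abs(s)
    phase = np.degrees(np.angle(h / s))  # negative = head lags seat
    return gain, phase

# Synthetic check: head lags seat by 30 degrees with gain 0.9 at 0.2 Hz
fs = 100.0                          # sample rate in Hz (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)  # 60 s = 12 full cycles at 0.2 Hz
seat = np.sin(2 * np.pi * 0.2 * t)
head = 0.9 * np.sin(2 * np.pi * 0.2 * t - np.radians(30.0))
g, p = gain_and_phase(head, seat, fs, 0.2)
```

Using an integer number of stimulus cycles, as here, avoids spectral leakage, so the single-bin estimate recovers the gain and phase directly.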
Affiliation(s)
- Ji Chen (correspondence), Department of Mechanical Engineering, University of the District of Columbia, Washington, DC, United States
- Emily Keshner, Department of Physical Therapy, Temple University, Philadelphia, PA, United States
- Kurosh Darvish, Department of Mechanical Engineering, Temple University, Philadelphia, PA, United States
2. The impact of external and internal focus of attention on visual dependence and EEG alpha oscillations during postural control. J Neuroeng Rehabil 2022; 19:81. [PMID: 35883085 PMCID: PMC9316701 DOI: 10.1186/s12984-022-01059-7]
Abstract
Background: The ability to maintain upright posture requires successful integration of multiple sensory inputs (visual, vestibular, and somatosensory). When one or more sensory systems become unreliable, the postural control system must “down-weight” (or reduce the influence of) those senses and rely on other senses to maintain postural stability. As individuals age, their ability to successfully reweight sensory inputs diminishes, leading to increased fall risk. The present study investigates whether manipulating attentional focus can improve the ability to prioritize different sensory inputs for postural control. Methods: Forty-two healthy adults stood on a balance board while wearing a virtual reality (VR) head-mounted display. The VR environment created a multisensory conflict amongst the different sensory signals as participants were tasked with maintaining postural stability on the balance board. Postural sway and scalp electroencephalography (EEG) were measured to assess visual weighting and cortical activity changes. Participants were randomized into groups that received different instructions on where to focus their attention during the balance task. Results: Following the instructions to direct attention toward the movement of the board (external focus group) was associated with lower visual weighting and better balance performance than when not given any instructions on attentional focus (control group). Following the instructions to direct attention towards movement of the feet (internal focus group) did not lead to any changes in visual weighting or balance performance. Both external and internal focus groups exhibited increased EEG alpha power (8–13 Hz) over the occipital cortex as compared to the control group. Conclusions: Current results suggest that directing one’s attention externally, away from one’s body, may optimize sensory integration for postural control when visual inputs are incongruent with somatosensory and vestibular inputs. Current findings may be helpful for clinicians and researchers in developing strategies to improve sensorimotor mechanisms for balance.
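The occipital alpha-power measure reported above can be approximated from a raw EEG trace with a simple FFT periodogram. A minimal sketch, not the study's analysis pipeline; the sample rate, synthetic trace, and comparison band are illustrative assumptions:

```python
import numpy as np

def band_power(signal, fs, band):
    """Integrated power of `signal` within `band` (Hz), computed from
    a one-sided FFT periodogram."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(np.fft.rfft(signal)) ** 2) / (fs * n)  # power spectral density
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

# Synthetic occipital trace: a 10 Hz alpha rhythm plus broadband noise
rng = np.random.default_rng(0)
fs = 250.0                            # sample rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, (8.0, 13.0))  # alpha band, 8-13 Hz
beta = band_power(eeg, fs, (20.0, 25.0))  # comparison band
```

In practice a Welch estimate over artifact-rejected epochs would be used; the single periodogram here is just to show where the 8–13 Hz figure comes from.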
3. Pawlitzki E, Schlenstedt C, Schmidt N, Rotkirch I, Gövert F, Hartwigsen G, Witt K. Spatial orientation and postural control in patients with Parkinson's disease. Gait Posture 2018; 60:50-54. [PMID: 29153480 DOI: 10.1016/j.gaitpost.2017.11.011]
Abstract
Postural instability is one of the most disabling and risky symptoms of advanced Parkinson's disease (PD). The purpose of this study was to investigate whether and how this is mediated by centrally impaired spatial orientation. We therefore performed a spatial orientation study in 21 PD patients (mean age 68 years, SD 8.5 years, 9 women) in the medication ON condition and 21 healthy controls (mean age 68.9 years, SD 5.5 years, 14 women). We compared their spatial responses on the horizontal axis (Sakashita's visual target cancellation task), the vertical axis (bucket test), and the sagittal axis (tilt table test), and assessed postural stability using the Fullerton Advanced Balance (FAB) Scale. We found larger deviations on the vertical axis in PD patients, although the direct comparisons of performance in PD patients and healthy controls did not reveal significant differences. While the total scores of the FAB Scale were significantly worse in PD (25.9 points, SD 7.2 points) compared to controls (35.1 points, SD 2.3 points, p<0.01), the results from the spatial orientation tasks did not correlate with the FAB Scale. In summary, our results argue against a relation between perceptual deficits of spatial information and postural control in PD. These results are in favor of a deficit in higher-order integration of spatial stimuli in PD that might influence balance control.
Affiliation(s)
- E Pawlitzki, C Schlenstedt, N Schmidt, I Rotkirch, F Gövert: University Medical Center Schleswig-Holstein, Christian-Albrechts-University, Department of Neurology, Arnold-Heller-Straße 3, 24105 Kiel, Germany
- G Hartwigsen: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Stephanstraße 1a, 04103 Leipzig, Germany
- K Witt: University Medical Center Schleswig-Holstein, Christian-Albrechts-University, Department of Neurology, Arnold-Heller-Straße 3, 24105 Kiel, Germany; Department of Neurology, School of Medicine and Health Sciences - European Medical School, University Oldenburg, Steinweg 13-17, 26122 Oldenburg, Germany
4. Dakin CJ, Rosenberg A. Gravity estimation and verticality perception. Handb Clin Neurol 2018; 159:43-59. [PMID: 30482332 DOI: 10.1016/b978-0-444-63916-5.00003-3]
Abstract
Gravity is a defining force that governs the evolution of mechanical forms, shapes and anchors our perception of the environment, and imposes fundamental constraints on our interactions with the world. Within the animal kingdom, humans are relatively unique in having evolved a vertical, bipedal posture. Although a vertical posture confers numerous benefits, it also renders us less stable than quadrupeds, increasing susceptibility to falls. The ability to accurately and precisely estimate our orientation relative to gravity is therefore of utmost importance. Here we review sensory information and computational processes underlying gravity estimation and verticality perception. Central to gravity estimation and verticality perception is multisensory cue combination, which serves to improve the precision of perception and resolve ambiguities in sensory representations by combining information from across the visual, vestibular, and somatosensory systems. We additionally review experimental paradigms for evaluating verticality perception, and discuss how particular disorders affect the perception of upright. Together, the work reviewed here highlights the critical role of multisensory cue combination in gravity estimation, verticality perception, and creating stable gravity-centered representations of our environment.
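The multisensory cue combination the authors review is commonly formalized as reliability-weighted averaging: each cue is weighted in proportion to its inverse variance, and the combined estimate is more precise than any single cue. A minimal sketch; the tilt values and variances below are illustrative assumptions, not data from the chapter:

```python
import numpy as np

def combine_cues(estimates, variances):
    """Maximum-likelihood (reliability-weighted) cue combination.
    Weights are proportional to inverse variance; the combined
    variance is smaller than that of any individual cue."""
    inv = 1.0 / np.asarray(variances, dtype=float)
    weights = inv / inv.sum()
    combined = float(np.dot(weights, estimates))
    combined_var = float(1.0 / inv.sum())
    return combined, combined_var

# Hypothetical tilt estimates (degrees) from visual, vestibular, and
# somatosensory channels, each with its own variance
mu, var = combine_cues([2.0, 5.0, 3.0], [1.0, 4.0, 2.0])
```

The combined variance (4/7 ≈ 0.57) falls below the best single cue's variance (1.0), which is the precision benefit the review attributes to multisensory integration.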
Affiliation(s)
- Christopher J Dakin, Department of Kinesiology and Health Science, Utah State University, Logan, UT, United States
- Ari Rosenberg, Department of Neuroscience, School of Medicine and Public Health, University of Wisconsin - Madison, Madison, WI, United States
5. Wright WG. Using virtual reality to augment perception, enhance sensorimotor adaptation, and change our minds. Front Syst Neurosci 2014; 8:56. [PMID: 24782724 PMCID: PMC3986528 DOI: 10.3389/fnsys.2014.00056]
Abstract
Technological advances that involve human sensorimotor processes can have both intended and unintended effects on the central nervous system (CNS). This mini review focuses on the use of virtual environments (VE) to augment brain functions by enhancing perception, eliciting automatic motor behavior, and inducing sensorimotor adaptation. VE technology is becoming increasingly prevalent in medical rehabilitation, training simulators, gaming, and entertainment. Although these VE applications have often been shown to optimize outcomes, whether it be to speed recovery, reduce training time, or enhance immersion and enjoyment, there are inherent drawbacks to environments that can potentially change sensorimotor calibration. Across numerous VE studies over the years, we have investigated the effects of combining visual and physical motion on perception, motor control, and adaptation. Recent results from our research involving exposure to dynamic passive motion within a visually-depicted VE reveal that short-term exposure to augmented sensorimotor discordance can result in systematic aftereffects that last beyond the exposure period. Whether these adaptations are advantageous or not, remains to be seen. Benefits as well as risks of using VE-driven sensorimotor stimulation to enhance brain processes will be discussed.
Affiliation(s)
- W Geoffrey Wright, Physical Therapy and Bioengineering, Motion Analysis and Perception Laboratory, Temple University, Philadelphia, PA, USA
6. Wright WG, Agah MR, Darvish K, Keshner EA. Head stabilization shows visual and inertial dependence during passive stimulation: implications for virtual rehabilitation. IEEE Trans Neural Syst Rehabil Eng 2013; 21:191-7. [PMID: 23314779 DOI: 10.1109/tnsre.2012.2237040]
Abstract
Sensorimotor coordination relies on the fine calibration and integration of visual, vestibular, and somatosensory input. Using virtual environments (VE) allows for the dissociation of visual and inertial inputs to manipulate human behavioral outputs. Our goal was to employ VE technology in a novel manner to investigate how head stabilization is affected by spatiotemporal properties of dynamic visual input when combined with passive motion on a linear sled. Healthy adults (n = 12) wore a head-mounted display during naso-occipital sinusoidal horizontal whole body translations while seated. Subjects were secured in a seat with a five-point harness, with the head free to move. Frequency and amplitude of sinusoidal input (i.e., inertial conditions) were set to create overlapping conditions of maximum acceleration (amax) or velocity (vmax). Four inertial conditions were combined with four visual conditions (VIS). VIS were created so that direction of optic flow either matched direction of passive motion or did not. The effect of near and far fixation distance within the VE was also tested. Head kinematics were collected with a three-axis gyro. Head stability showed a complex interaction dependent on changes in weighting of visual and inertial inputs that changed with the sled driving frequency. Inertial condition affected amplitude (p < 0.0000) and phase (p < 0.0000) of head pitch angular velocity. In the absence of visual input, head pitch velocity amplitude increased (p < 0.01). An interaction effect between inertial and VIS conditions on head yaw occurred in SW (p < 0.05). There was also a significant interaction of depth of field and inertial condition on amplitude (p < 0.001) and phase (p < 0.05) of head yaw velocity in SW, especially during high vmax conditions. We conclude visual flow can organize lateral cervical responses despite being discordant with inertial input. When using VE for rehabilitation, possible unintended, involuntary or reflexive motor responses that may not be present in traditional training environments should be taken into consideration.
Affiliation(s)
- W Geoffrey Wright, Department of Bioengineering, Temple University, Philadelphia, PA 19140, USA
7. Correia Grácio BJ, de Winkel KN, Groen EL, Wentink M, Bos JE. The time constant of the somatogravic illusion. Exp Brain Res 2012; 224:313-21. [PMID: 23124839 DOI: 10.1007/s00221-012-3313-3]
Abstract
Without visual feedback, humans perceive tilt when experiencing a sustained linear acceleration. This tilt illusion is commonly referred to as the somatogravic illusion. Although the physiological basis of the illusion seems to be well understood, the dynamic behavior is still subject to discussion. In this study, the dynamic behavior of the illusion was measured experimentally for three motion profiles with different frequency content. Subjects were exposed to pure centripetal accelerations in the lateral direction and were asked to indicate their tilt percept by means of a joystick. Variable-radius centrifugation during constant angular rotation was used to generate these motion profiles. Two self-motion perception models were fitted to the experimental data and were used to obtain the time constant of the somatogravic illusion. Results showed that the time constant of the somatogravic illusion was on the order of two seconds, in contrast to the higher time constant found in fixed-radius centrifugation studies. Furthermore, the time constant was significantly affected by the frequency content of the motion profiles. Motion profiles with higher frequency content revealed shorter time constants which cannot be explained by self-motion perception models that assume a fixed time constant. Therefore, these models need to be improved with a mechanism that deals with this variable time constant. Apart from the fundamental importance, these results also have practical consequences for the simulation of sustained accelerations in motion simulators.
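The dynamics discussed above are often approximated by a first-order low-pass filter whose output (perceived tilt) relaxes toward the gravito-inertial tilt angle with time constant τ. A discrete-time sketch using the roughly 2 s time constant reported here; the step input, tilt amplitude, and sample step are illustrative assumptions, not one of the fitted perception models:

```python
import numpy as np

def perceived_tilt(gif_tilt, tau, dt):
    """First-order low-pass model of the somatogravic illusion:
    perceived tilt relaxes toward the gravito-inertial tilt angle
    with time constant tau (seconds)."""
    out = np.zeros_like(gif_tilt)
    for i in range(1, len(gif_tilt)):
        out[i] = out[i - 1] + (dt / tau) * (gif_tilt[i - 1] - out[i - 1])
    return out

# Step of 10 degrees of sustained gravito-inertial tilt, tau = 2 s
dt = 0.001
t = np.arange(0.0, 10.0, dt)
stim = np.full_like(t, 10.0)
resp = perceived_tilt(stim, tau=2.0, dt=dt)
```

After one time constant (t = 2 s) the perceived tilt reaches about 63% of the sustained tilt; a fixed τ of this kind is exactly what the abstract argues cannot capture the frequency dependence the authors observed.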
Affiliation(s)
- B J Correia Grácio, Faculty of Aerospace Engineering, Control and Simulation Division, Delft University of Technology, P.O. Box 5058, 2600 GB Delft, The Netherlands
8. Bringoux L, Lepecq JC, Danion F. Does visually induced self-motion affect grip force when holding an object? J Neurophysiol 2012; 108:1685-94. [PMID: 22723677 DOI: 10.1152/jn.00407.2012]
Abstract
Accurate control of grip force during object manipulation is necessary to prevent the object from slipping, especially to compensate for the action of gravitational and inertial forces resulting from hand/object motion. The goal of the current study was to assess whether the control of grip force was influenced by visually induced self-motion (i.e., vection), which would normally be accompanied by changes in object load. The main task involved holding a 400-g object between the thumb and the index finger while being seated within a virtual immersive environment that simulated the vertical motion of an elevator across floors. Different visual motions were tested, including oscillatory (0.21 Hz) and constant-speed displacements of the virtual scene. Different arm-loading conditions were also tested: with or without the hand-held object and with or without oscillatory arm motion (0.9 Hz). At the perceptual level, ratings from participants showed that both oscillatory and constant-speed motion of the elevator rapidly induced a long-lasting sensation of self-motion. At the sensorimotor level, vection compellingness altered arm movement control. Spectral analyses revealed that arm motion was entrained by the oscillatory motion of the elevator. However, we found no evidence that grip force used to hold the object was visually affected. Specifically, spectral analyses revealed no component in grip force that would mirror the virtual change in object load associated with the oscillatory motion of the elevator, thereby allowing the grip-to-load force coupling to remain unaffected. Altogether, our findings show that the neural mechanisms underlying vection interfere with arm movement control but do not interfere with the delicate modulation of grip force. More generally, those results provide evidence that the strength of the coupling between the sensorimotor system and the perceptual level can be modulated depending on the effector.
Affiliation(s)
- Lionel Bringoux, Institute of Movement Sciences, Aix-Marseille University and Centre National de la Recherche Scientifique, Marseille, France
9. Wright WG. Linear vection in virtual environments can be strengthened by discordant inertial input. Annu Int Conf IEEE Eng Med Biol Soc 2010; 2009:1157-60. [PMID: 19963991 DOI: 10.1109/iembs.2009.5333425]
Abstract
Visual and gravitoinertial sensory inputs are integrated by the central nervous system to provide a compelling and veridical sense of spatial orientation and motion. Although it is known that visual input alone can drive this perception, questions remain as to how vestibular/proprioceptive (i.e., inertial) inputs integrate with visual input to affect this process. This was investigated further by combining sinusoidal vertical linear oscillation (5 amplitudes between 0 m and +/-0.8 m) with two different virtual visual inputs. Visual scenes were viewed in a large field-of-view head-mounted display (HMD), which depicted an enriched, high-resolution, dynamic image of the actual test chamber from the perspective of a subject seated in the linear motion device. The scene depicted either horizontal (+/-0.7 m) or vertical (+/-0.8 m) linear 0.2 Hz sinusoidal translation. Horizontal visual motion with vertical inertial motion represents a 90-degree spatial shift. Vertical visual motion with vertical inertial motion, whereby the highest physical point matches the lowest visual point and vice versa, represents a 180-degree temporal shift, i.e., the opposite of what one experiences in reality. Inertial-only stimulation without visual input was identified as vertical linear oscillation with accurate reports of acceleration peaks and troughs, but a slight tendency to underestimate amplitude. Visual-only (stationary) stimulation was less compelling than combined visual+inertial conditions. In visual+inertial conditions, visual input dominated the direction of perceived self-motion; however, increasing the inertial amplitude increased how compelling this non-veridical perception was. That is, perceived vertical self-motion was most compelling when inertial stimulation was maximal, despite perceiving "up" when physically "down" and vice versa. Similarly, perceived horizontal self-motion was most compelling when vertical inertial motion was at maximum amplitude. "Cross-talk" between visual and vestibular channels was suggested by reports of small vertical components of perceived self-motion combined with a dominant horizontal component. In conclusion, direction of perceived self-motion was dominated by visual motion; however, the compellingness of this illusion was strengthened by increasing discordant inertial input. Thus, spatial mapping of inertial systems may be completely labile, while amplitude coding of the input intensifies the percept.
10. Bringoux L, Bourdin C, Lepecq JC, Sandor PMB, Pergandi JM, Mestre D. Interaction between reference frames during subjective vertical estimates in a tilted immersive virtual environment. Perception 2010; 38:1053-71. [PMID: 19764307 DOI: 10.1068/p6089]
Abstract
Numerous studies highlighted the influence of a tilted visual frame on the perception of the visual vertical ('rod-and-frame effect' or RFE). Here, we investigated whether this influence can be modified in a virtual immersive environment (CAVE-like) by the structure of the visual scene and by the adjustment mode allowing visual or visuo-kinaesthetic control (V and VK mode, respectively). The way this influence might dynamically evolve throughout the adjustment was also investigated in two groups of subjects with the head unrestrained or restrained upright. RFE observed in the immersive environment was qualitatively comparable to that obtained in a real display (portable rod-and-frame test; Oltman 1968, Perceptual and Motor Skills 26 503-506). Moreover, RFE in the immersive environment appeared significantly influenced by the structure of the visual scene and by the adjustment mode: the more geometrical and meaningful 3-D features the visual scene contained, the greater the RFE. The RFE was also greater when the subjective vertical was assessed under visual control only, as compared to visuo-kinaesthetic control. Furthermore, the results showed a significant RFE increase throughout the adjustment, indicating that the influence of the visual scene upon subjective vertical might dynamically evolve over time. The latter effect was more pronounced for structured visual scenes and under visuo-kinaesthetic control. On the other hand, no difference was observed between the two groups of subjects having the head restrained or unrestrained. These results are discussed in terms of dynamic combination between coexisting reference frames for spatial orientation.
Affiliation(s)
- Lionel Bringoux, Institut des Sciences du Mouvement "Etienne-Jules Marey", CNRS-Université de la Méditerranée, UMR 6233, 163 avenue de Luminy CP 910, F 13288 Marseille Cedex 9, France
11. Guenther AL, Bartl K, Nauderer J, Schneider E, Huesmann A, Brandt T, Glasauer S. Modality-dependent indication of the subjective vertical during combined linear and rotational movements. Ann N Y Acad Sci 2009; 1164:376-9. [DOI: 10.1111/j.1749-6632.2009.03849.x]
12. Compensatory manual motor responses while object wielding during combined linear visual and physical roll tilt stimulation. Exp Brain Res 2008; 192:683-94. [PMID: 18830585 DOI: 10.1007/s00221-008-1581-8]
Abstract
Dynamic signals from multiple sensory channels must be integrated by the central nervous system to create a unified perception of self-motion and spatial orientation. Using immersive virtual environments, we altered the relative contribution of visual and inertial inputs and evaluated the effects on perceptuomotor outputs. Subjects seated in a tilting chair were exposed to a combined 0.25 Hz sinusoidal roll-tilt (+/-7.5 degrees) about the naso-occipital axis while viewing one of four visual conditions. One visual condition was in darkness, and the other three depicted 2 m of sinusoidal horizontal or vertical linear motion either synchronous or asynchronous with the roll-tilt. Subjects performed a perceptuomotor task of aligning a handheld object to gravitational vertical (GV) with the entire arm free to move in six degrees of freedom. Subjects were tested with two objects, a joystick and a glass of water, in counter-balanced order. Specific visual effects were as follows: (1) the phase leads of object tilt relative to chair/subject roll-tilt were affected by visual condition, (2) horizontal translation of the object was entrained with visual velocity, rather than with visual acceleration or maximum roll-tilt, and (3) when vertical visual motion was viewed during chair/subject roll-tilt, vertical object translation increased. Although the head-fixed scene meant visual vertical cues were always aligned with the subject's median sagittal plane, object tilt showed sensitivity to inertial roll-tilt (gain > 0.5) that was not significantly different from the dark condition. Two object effects were found: (1) tilt deviation from GV was greater when wielding a joystick compared to a full glass of water, and (2) the phase of horizontal visual motion relative to subject roll-tilt affected the amplitude of horizontal translation for the joystick but not for the glass of water. In conclusion, an attentional shift driven by postural assumptions can account for the two object effects; however, the visual effects suggest that a process for deriving the net gravitoinertial force from visual and inertial cues is involved. Inertial signals dominated the perception of verticality, but visual linear translation affected the spatiotemporal dynamics of the manual motor responses during object wielding.
13. Vingerhoets RAA, Medendorp WP, Van Gisbergen JAM. Body-tilt and visual verticality perception during multiple cycles of roll rotation. J Neurophysiol 2008; 99:2264-80. [PMID: 18337369 DOI: 10.1152/jn.00704.2007]
Abstract
To assess the effects of degrading canal cues for dynamic spatial orientation in human observers, we tested how judgments about visual-line orientation in space (subjective visual vertical task, SVV) and estimates of instantaneous body tilt (subjective body-tilt task, SBT) develop in the course of three cycles of constant-velocity roll rotation. These abilities were tested across the entire tilt range in separate experiments. For comparison, we also obtained SVV data during static roll tilt. We found that as tilt increased, dynamic SVV responses became strongly biased toward the head pole of the body axis (A-effect), as if body tilt was underestimated. However, on entering the range of near-inverse tilts, SVV responses adopted a bimodal pattern, alternating between A-effects (biased toward head-pole) and E-effects (biased toward feet-pole). Apart from an onset effect, this tilt-dependent pattern of systematic SVV errors repeated itself in subsequent rotation cycles with little sign of worsening performance. Static SVV responses were qualitatively similar and consistent with previous reports but showed smaller A-effects. By contrast, dynamic SBT errors were small and unimodal, indicating that errors in visual-verticality estimates were not caused by errors in body-tilt estimation. We discuss these results in terms of predictions from a canal-otolith interaction model extended with a leaky integrator and an egocentric bias mechanism. We conclude that the egocentric-bias mechanism becomes more manifest during constant velocity roll-rotation and that perceptual errors due to incorrect disambiguation of the otolith signal are small despite the decay of canal signals.
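The leaky integrator invoked in the model above can be sketched in discrete time: integrating angular velocity with a leak makes the angle estimate saturate at τ·ω during constant-velocity rotation instead of growing without bound, mimicking the loss of useful canal information over sustained rotation. The parameter values here are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

def leaky_integrate(omega, tau, dt):
    """Leaky integration of angular velocity:
    d(angle)/dt = omega - angle / tau.
    A constant-velocity input drives the output toward tau * omega."""
    angle = np.zeros_like(omega)
    for i in range(1, len(omega)):
        angle[i] = angle[i - 1] + dt * (omega[i - 1] - angle[i - 1] / tau)
    return angle

# Constant-velocity roll at 30 deg/s for 120 s, leak time constant 20 s
dt = 0.01
t = np.arange(0.0, 120.0, dt)
omega = np.full_like(t, 30.0)
est = leaky_integrate(omega, tau=20.0, dt=dt)
true_angle = 30.0 * t  # perfect integration would grow without bound
```

The saturating estimate versus the linearly growing true angle illustrates why constant-velocity rotation is a useful probe for degraded canal cues.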
Affiliation(s)
- R A A Vingerhoets, Department of Biophysics, Nijmegen Institute for Cognition and Information, Radboud University Nijmegen, Nijmegen, The Netherlands