1. Bischof WF, Anderson NC, Kingstone A. A tutorial: Analyzing eye and head movements in virtual reality. Behav Res Methods 2024. [PMID: 39117987] [DOI: 10.3758/s13428-024-02482-5]
Abstract
This tutorial provides instruction on how to use the eye tracking technology built into virtual reality (VR) headsets, emphasizing the analysis of head and eye movement data when an observer is situated in the center of an omnidirectional environment. We begin with a brief description of how VR eye movement research differs from previous forms of eye movement research, as well as identifying some outstanding gaps in the current literature. We then introduce the basic methodology used to collect VR eye movement data both in general and with regard to the specific data that we collected to illustrate different analytical approaches. We continue with an introduction of the foundational ideas regarding data analysis in VR, including frames of reference, how to map eye and head position, and event detection. In the next part, we introduce core head and eye data analyses focusing on determining where the head and eyes are directed. We then expand on what has been presented, introducing several novel spatial, spatio-temporal, and temporal head-eye data analysis techniques. We conclude with a reflection on what has been presented, and how the techniques introduced in this tutorial provide the scaffolding for extensions to more complex and dynamic VR environments.
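The "mapping of eye and head position" step described in this tutorial is typically a conversion of 3-D direction vectors into the spherical (azimuth/elevation) coordinates of the omnidirectional scene. A minimal sketch of that conversion, assuming a right/up/forward axis convention (the function name and convention are illustrative, not the tutorial's code):

```python
import math

def direction_to_spherical(x, y, z):
    """Convert a gaze- or head-direction vector (x = right, y = up,
    z = forward) into (azimuth, elevation) in degrees.

    Azimuth is the rotation about the vertical axis (0 = straight ahead);
    elevation is the angle above or below the horizontal plane.
    """
    norm = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, z))
    elevation = math.degrees(math.asin(y / norm))
    return azimuth, elevation
```

With this convention, looking straight ahead maps to (0, 0) and looking straight up to an elevation of 90 degrees, which makes fixation heat maps directly comparable across head-centered and world-centered frames of reference.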
Affiliation(s)
- Walter F Bischof
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Nicola C Anderson
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
2. van Opstal AJ. Neural encoding of instantaneous kinematics of eye-head gaze shifts in monkey superior colliculus. Commun Biol 2023; 6:927. [PMID: 37689726] [PMCID: PMC10492853] [DOI: 10.1038/s42003-023-05305-z]
Abstract
The midbrain superior colliculus is a crucial sensorimotor stage for programming and generating saccadic eye-head gaze shifts. Although it is well established that superior colliculus cells encode a neural command that specifies the amplitude and direction of the upcoming gaze-shift vector, there is controversy about the role of the firing-rate dynamics of these neurons during saccades. In our earlier work, we proposed a simple quantitative model that explains how the recruited superior colliculus population may specify the detailed kinematics (trajectories and velocity profiles) of head-restrained saccadic eye movements. Here we show that the same principles may apply to a wide range of saccadic eye-head gaze shifts with strongly varying kinematics, despite the substantial nonlinearities and redundancy involved in programming and executing rapid goal-directed eye-head gaze shifts to peripheral targets. Our findings could provide additional evidence for an important role of the superior colliculus in the optimal control of saccades.
Affiliation(s)
- A John van Opstal
- Section Neurophysics, Donders Centre for Neuroscience, Radboud University, Nijmegen, The Netherlands
3. Alizadeh A, Van Opstal AJ. Dynamic control of eye-head gaze shifts by a spiking neural network model of the superior colliculus. Front Comput Neurosci 2022; 16:1040646. [PMID: 36465967] [PMCID: PMC9714624] [DOI: 10.3389/fncom.2022.1040646]
Abstract
INTRODUCTION To reorient gaze (the eye's direction in space) towards a target is an underdetermined problem, as infinitely many combinations of eye and head movements can specify the same gaze-displacement vector. Yet, behavioral measurements show that the primate gaze-control system selects a specific contribution of eye and head movements to the saccade, which depends on the initial eye-in-head orientation. Single-unit recordings in the primate superior colliculus (SC) during head-unrestrained gaze shifts have further suggested that cells may encode the instantaneous trajectory of a desired straight gaze path in a feedforward way by the total cumulative number of spikes in the neural population, and that the instantaneous gaze kinematics are thus determined by the neural firing rates. The recordings also indicated that the latter are modulated by the initial eye position. We recently proposed a conceptual model that accounts for many of the observed properties of eye-head gaze shifts and for the potential role of the SC in gaze control. METHODS Here, we extend and test the model by incorporating a spiking neural network of the SC motor map, whose output drives the eye-head motor control circuitry by linear cumulative summation of the individual spike effects of each recruited SC neuron. We propose a simple neural mechanism on SC cells that explains the modulatory influence of feedback from an initial eye-in-head position signal on their spiking activity. The same signal also determines the onset delay of the head movement with respect to the eye. Moreover, the downstream eye and head burst generators were taken to be linear, as our earlier work had indicated that much of the nonlinear main-sequence kinematics of saccadic eye movements may be due to neural encoding at the collicular level, rather than in the brainstem.
RESULTS AND DISCUSSION We investigate how the spiking activity of the SC population drives gaze to the intended target location within a dynamic local gaze-velocity feedback circuit that yields realistic eye- and head-movement kinematics and dynamic SC gaze-movement fields.
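The "linear cumulative summation of individual spike effects" described above can be illustrated with a toy computation: each spike of each recruited neuron contributes a small, fixed gaze displacement (a "spike vector"), and the planned displacement is the sum over all spikes. A hedged sketch of that idea only; the names and units are assumptions, not the published model:

```python
def cumulative_gaze_displacement(spike_trains, spike_vectors):
    """Sum the fixed per-spike displacement of every recruited neuron
    over its spike train, giving the total planned gaze displacement.

    spike_trains:  {neuron_id: list of spike times (s)}
    spike_vectors: {neuron_id: (dx, dy) contribution of one spike (deg)}
    """
    dx_total = dy_total = 0.0
    for neuron_id, spikes in spike_trains.items():
        dx, dy = spike_vectors[neuron_id]
        dx_total += dx * len(spikes)
        dy_total += dy * len(spikes)
    return dx_total, dy_total
```

Because each spike adds a fixed mini-vector, the instantaneous gaze velocity is set directly by the population firing rate, which is the linkage between rate dynamics and kinematics that the model exploits.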
Affiliation(s)
- A. John Van Opstal
- Department of Biophysics, Donders Centre for Neuroscience, Radboud University, Nijmegen, Netherlands
4. Efficient training approaches for optimizing behavioral performance and reducing head fixation time. PLoS One 2022; 17:e0276531. [DOI: 10.1371/journal.pone.0276531]
Abstract
The use of head fixation has become routine in systems neuroscience. However, whether behavior changes with head fixation, and whether animals can learn aspects of a task while freely moving and transfer this knowledge to the head-fixed condition, have not been examined in much detail. Here, we used a novel floating platform, the "Air-Track", which simulates free movement in a real-world environment, to address the effect of head fixation, and we developed methods to accelerate training of behavioral tasks for head-fixed mice. We trained mice in a Y-maze two-choice discrimination task. One group was trained while head-fixed and compared to a separate group that was pre-trained while freely moving and then trained on the same task while head-fixed. Pre-training significantly reduced the time needed to relearn the discrimination task while head-fixed. Freely moving and head-fixed mice displayed similar behavioral patterns; however, head fixation significantly slowed movement speed. The speed of movement of the head-fixed mice depended on the weight of the platform. We conclude that home-cage pre-training improves learning performance of head-fixed mice and that, while head fixation obviously limits some aspects of movement, the patterns of behavior observed in head-fixed and freely moving mice are similar.
5. Kim K, Lee JH. The effect of feedback in virtual attention training on orienting attention in individuals with sluggish cognitive tempo. J Atten Disord 2022; 26:1640-1652. [PMID: 35491754] [DOI: 10.1177/10870547221090664]
Abstract
OBJECTIVE This study assessed the effectiveness of feedback in a virtual attention training program designed to improve the attentional characteristics of sluggish cognitive tempo (SCT). METHOD An SCT group (N = 60) and a control group (N = 30) were identified, and the attention network test-revised (ANT-R) was administered to measure attention characteristics. Based on these results, a virtual reality (VR) feedback attention training program was developed to improve the efficiency of engagement and disengagement of attention in SCT. The 60 participants with SCT were assigned to two conditions: VR feedback (n = 30) and no feedback (n = 30). RESULTS The VR attention training program with feedback significantly improved the attention-orienting network. CONCLUSION These findings suggest that immediate feedback is necessary for effective attention training in SCT, and that feedback may also enable continuous intervention.
Affiliation(s)
- Kyunghwa Kim
- College of Social Science, Chung-Ang University, Seoul, South Korea
- Jang-Han Lee
- College of Social Science, Chung-Ang University, Seoul, South Korea
6. Zahler SH, Taylor DE, Wong JY, Adams JM, Feinberg EH. Superior colliculus drives stimulus-evoked directionally biased saccades and attempted head movements in head-fixed mice. eLife 2021; 10:e73081. [PMID: 34970968] [PMCID: PMC8747496] [DOI: 10.7554/eLife.73081]
Abstract
Animals investigate their environments by directing their gaze towards salient stimuli. In the prevailing view, mouse gaze shifts entail head rotations followed by brainstem-mediated eye movements, including saccades to reset the eyes. These 'recentering' saccades are attributed to head movement-related vestibular cues. However, microstimulating mouse superior colliculus (SC) elicits directed head and eye movements resembling SC-dependent sensory-guided gaze shifts in other species, suggesting that mouse gaze shifts may be more flexible than has been recognized. We investigated this possibility by tracking eye and attempted head movements in a head-fixed preparation that eliminates head movement-related sensory cues. We found tactile stimuli evoke directionally biased saccades coincident with attempted head rotations. Differences in saccade endpoints across stimuli are associated with distinct stimulus-dependent relationships between initial eye position and saccade direction and amplitude. Optogenetic perturbations revealed SC drives these gaze shifts. Thus, head-fixed mice make sensory-guided, SC-dependent gaze shifts involving coincident, directionally biased saccades and attempted head movements. Our findings uncover flexibility in mouse gaze shifts and provide a foundation for studying head-eye coupling.
Affiliation(s)
- Sebastian H Zahler
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
- David E Taylor
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
- Joey Y Wong
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
- Julia M Adams
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
- Evan H Feinberg
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
7.
Abstract
Walking animals are faced with making a trade-off between maintaining a stable posture and gait and pursuing other goals such as keeping a straight path. A new study on exploratory walking in flies provides a sophisticated quantitative account of this behavioural problem, with some intriguing discoveries.
Affiliation(s)
- Manuel Zimmer
- Department of Neuroscience and Developmental Biology, University of Vienna, Vienna Biocenter (VBC), Djerassiplatz 1, 1030 Vienna, Austria; Research Institute of Molecular Pathology (IMP), Vienna Biocenter (VBC), Campus-Vienna-Biocenter 1, 1030 Vienna, Austria
8. Cruz TL, Pérez SM, Chiappe ME. Fast tuning of posture control by visual feedback underlies gaze stabilization in walking Drosophila. Curr Biol 2021; 31:4596-4607.e5. [PMID: 34499851] [PMCID: PMC8556163] [DOI: 10.1016/j.cub.2021.08.041]
Abstract
Locomotion requires a balance between mechanical stability and movement flexibility to achieve behavioral goals despite noisy neuromuscular systems, but rarely is it considered how this balance is orchestrated. We combined virtual reality tools with quantitative analysis of behavior to examine how Drosophila uses self-generated visual information (reafferent visual feedback) to control gaze during exploratory walking. We found that flies execute distinct motor programs coordinated across the body to maximize gaze stability. However, the presence of inherent variability in leg placement relative to the body jeopardizes fine control of gaze due to posture-stabilizing adjustments that lead to unintended changes in course direction. Surprisingly, whereas visual feedback is dispensable for head-body coordination, we found that self-generated visual signals tune postural reflexes to rapidly prevent turns rather than to promote compensatory rotations, a long-standing idea for visually guided course control. Together, these findings support a model in which visual feedback orchestrates the interplay between posture and gaze stability in a manner that is both goal dependent and motor-context specific.
Affiliation(s)
- Tomás L Cruz
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
- M Eugenia Chiappe
- Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal
9. Jana S, Gopal A, Murthy A. Computational Mechanisms Mediating Inhibitory Control of Coordinated Eye-Hand Movements. Brain Sci 2021; 11:607. [PMID: 34068477] [PMCID: PMC8150398] [DOI: 10.3390/brainsci11050607]
Abstract
Significant progress has been made in understanding the computational and neural mechanisms that mediate eye and hand movements made in isolation. However, less is known about the mechanisms that control these movements when they are coordinated. Here, we outline our computational approaches using accumulation-to-threshold and race-to-threshold models to elucidate the mechanisms that initiate and inhibit these movements. We suggest that, depending on the behavioral context, the initiation and inhibition of coordinated eye-hand movements can operate in two modes: coupled and decoupled. The coupled mode operates when the task context requires a tight coupling between the effectors: a common command initiates both effectors, and a unitary inhibitory process is responsible for stopping them. Conversely, the decoupled mode operates when the task context demands weaker coupling between the effectors: separate commands initiate the eye and hand, and separate inhibitory processes are responsible for stopping them. We hypothesize that higher-order control processes assess the behavioral context and choose the most appropriate mode. This computational mechanism can explain the heterogeneous results observed across many studies that have investigated the control of coordinated eye-hand movements and may also serve as a general framework for understanding the control of complex multi-effector movements.
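The race-to-threshold logic described above can be sketched as a deterministic race between a GO accumulator and a STOP accumulator that begins rising only after the stop-signal delay; whichever crosses threshold first determines whether the movement is initiated or cancelled. This is an illustrative, noise-free sketch of the general race-model idea, not the authors' fitted model:

```python
def race_to_threshold(go_rate, stop_rate, stop_delay, threshold=1.0, dt=0.001):
    """Deterministic race between a GO accumulator (starts at t = 0) and a
    STOP accumulator (starts accumulating after `stop_delay` seconds).
    Returns ('go', t) or ('stop', t) for whichever crosses `threshold` first.
    """
    go = stop = 0.0
    t = 0.0
    while True:
        t += dt
        go += go_rate * dt          # GO evidence rises from trial onset
        if t > stop_delay:
            stop += stop_rate * dt  # STOP evidence rises after the delay
        if go >= threshold:
            return "go", t
        if stop >= threshold:
            return "stop", t
```

In the coupled mode one such race would gate both effectors; in the decoupled mode the eye and the hand would each run their own race with separate GO and STOP processes.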
Affiliation(s)
- Sumitash Jana
- Department of Psychology, University of California San Diego, La Jolla, CA 92093, USA
- Atul Gopal
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD 20814, USA
- Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka 560012, India
10. Bischof WF, Anderson NC, Doswell MT, Kingstone A. Visual exploration of omnidirectional panoramic scenes. J Vis 2020; 20(7):23. [PMID: 32692829] [PMCID: PMC7424099] [DOI: 10.1167/jov.20.7.23]
Abstract
How do we explore the visual environment around us, and how are head and eye movements coordinated during our exploration? To investigate this question, we had observers look at omnidirectional panoramic scenes, composed of both landscape and fractal images, using a virtual reality viewer while their eye and head movements were tracked. We analyzed the spatial distribution of eye fixations, the distribution of saccade directions, the spatial distribution of head positions, and the distribution of head shifts, as well as the relation between eye and head movements. The results show that, for landscape scenes, eye and head behavior best fit the allocentric frame defined by the scene horizon, especially when head tilt (i.e., head rotation around the view axis) is considered. For fractal scenes, which have an isotropic texture, eye and head movements were executed primarily along the cardinal directions in world coordinates. The results also show that eye and head movements are closely linked in space and time in a complementary way, with stimulus-driven eye movements predominantly leading the head movements. Our study is the first to systematically examine eye and head movements in a panoramic virtual reality environment, and the results demonstrate that such an environment constitutes a powerful and informative research alternative to traditional methods for investigating looking behavior.
11. Hadjidimitrakis K. Coupling of head and hand movements during eye-head-hand coordination: there is more to reaching than meets the eye. J Neurophysiol 2020; 123:1579-1582. [PMID: 32233904] [DOI: 10.1152/jn.00099.2020]
Abstract
Does arm reaching affect eye-head shifts? Does the head alter eye-hand coordinated movements? Sensorimotor research has focused on either eye-head or eye-hand coordination, with only occasional works studying all these effectors together. Arora et al. (Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. J Neurophysiol 122: 1946-1961, 2019) examined eye-head-hand coordination for the first time in nonhuman primates and provide evidence suggesting that head and hand movements are more coupled than traditionally considered.
12. Pandey S, Simhadri S, Zhou Y. Rapid Head Movements in Common Marmoset Monkeys. iScience 2020; 23:100837. [PMID: 32058952] [PMCID: PMC6997856] [DOI: 10.1016/j.isci.2020.100837]
Abstract
Gaze shifts, the directing of the eyes to an approaching predator, preferred food source, or potential mate, have universal biological significance for the survival of a species. Our knowledge of gaze behavior is based primarily on visually triggered responses, whereas head orientation triggered by auditory stimuli remains poorly characterized. The common marmoset (Callithrix jacchus) is a diurnal, small-bodied (∼350 g), New World monkey species known for its rich behavioral repertoire during social interactions. We used a lightweight head tracking system to measure marmosets' reflexive head orientations toward a natural stimulus presented from behind. We found that marmosets could rotate the head at angular velocities above 1,000°/s and maintained target accuracy for a wide range of rotation amplitudes (up to 250°). This unusual, saccadic head-orienting behavior offers opportunities for understanding the many biological factors that have shaped the evolution of sensorimotor control of gaze orientation by the primate brain.
Highlights:
- Marmosets can make rapid, reflexive head turns in response to natural stimuli
- The peak velocity of marmoset head turns can exceed that of primate eye saccades
- When the environment is lit, head movements are faster than when it is dark
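Peak angular velocities such as the >1,000°/s head turns reported here are typically estimated from tracked orientation samples by finite differences. A generic sketch of that computation (illustrative only, not the authors' tracking pipeline):

```python
def peak_angular_velocity(angles_deg, dt_s):
    """Estimate the peak angular velocity (deg/s) of a head rotation from
    orientation samples `angles_deg` taken every `dt_s` seconds, using
    first-order finite differences between consecutive samples.
    """
    return max(abs(b - a) / dt_s for a, b in zip(angles_deg, angles_deg[1:]))
```

For example, orientation samples at 1 kHz whose largest step between consecutive frames is 1.5° yield a peak estimate of 1,500°/s; in practice the raw trace would be low-pass filtered first to keep sensor noise from inflating the derivative.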
Affiliation(s)
- Swarnima Pandey
- College of Health Solutions, Arizona State University, 975 S. Myrtle Avenue, Coor Hall 3470, Tempe, AZ 85287, USA
- Sravanthi Simhadri
- College of Health Solutions, Arizona State University, 975 S. Myrtle Avenue, Coor Hall 3470, Tempe, AZ 85287, USA
- Yi Zhou
- College of Health Solutions, Arizona State University, 975 S. Myrtle Avenue, Coor Hall 3470, Tempe, AZ 85287, USA
13. Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. [PMID: 31533015] [DOI: 10.1152/jn.00072.2019]
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the Reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates, but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
14. Zubair HN, Chu KMI, Johnson JL, Rivers TJ, Beloozerova IN. Gaze coordination with strides during walking in the cat. J Physiol 2019; 597:5195-5229. [PMID: 31460673] [DOI: 10.1113/JP278108]
Abstract
KEY POINTS Vision plays a crucial role in guiding locomotion in complex environments, but the coordination between gaze and stride is not well understood. The coordination of gaze shifts, fixations, constant gaze, and slow gaze with strides was examined in cats walking on different surfaces. It was found that gaze behaviours are coordinated with strides even when walking on a flat surface in complete darkness, occurring in a sequential order during different phases of the stride. During walking on complex surfaces, gaze behaviours are typically more tightly coordinated with strides, particularly at faster speeds, shifting only slightly in phase. These findings indicate that the coordination of gaze behaviours with strides is not vision-driven but is part of the whole-body locomotion synergy; the visual environment and locomotor task modulate it. The results may be relevant to developing diagnostic tools and rehabilitation approaches for patients with locomotor deficits. ABSTRACT Vision plays a crucial role in guiding locomotion in complex environments. However, the coordination between gaze and stride is not well understood. We investigated this coordination in cats walking on a flat surface in darkness or light, along a horizontal ladder, and on a pathway with small stones. We recorded vertical and horizontal eye movements and 3-D head movement, and calculated where gaze intersected the walkway. The coordination with strides of gaze shifts away from the animal, gaze shifts toward it, fixations, constant gaze, and slow gaze was investigated. We found that even during walking on the flat surface in the darkness, all gaze behaviours were coordinated with strides. Gaze shifts and slow gaze toward the animal started in the beginning of each forelimb's swing and ended in its second half. Fixations peaked throughout the beginning and middle of swing. Gaze shifts away began throughout the second half of swing of each forelimb and ended when both forelimbs were in stance. Constant gaze and slow gaze away occurred in the beginning of stance. However, not every behaviour occurred during every stride. Light had a small effect. The ladder and stones typically increased the coordination and caused gaze behaviours to occur 3% earlier in the cycle. At faster speeds, the coordination was often tighter, and some gaze behaviours occurred 2-16% later in the cycle. The findings indicate that the coordination of gaze with strides is not vision-driven but is part of the whole-body locomotion synergy; the visual environment and locomotor task modulate it.
Affiliation(s)
- Humza N Zubair
- Barrow Neurological Institute, St Joseph's Hospital and Medical Center, Phoenix, AZ, USA
- Kevin M I Chu
- Barrow Neurological Institute, St Joseph's Hospital and Medical Center, Phoenix, AZ, USA; Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Justin L Johnson
- Barrow Neurological Institute, St Joseph's Hospital and Medical Center, Phoenix, AZ, USA
- Trevor J Rivers
- Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, KS, USA
- Irina N Beloozerova
- Barrow Neurological Institute, St Joseph's Hospital and Medical Center, Phoenix, AZ, USA; School of Biological Sciences, Georgia Institute of Technology, Atlanta, GA, USA
15. Genetically Defined Functional Modules for Spatial Orienting in the Mouse Superior Colliculus. Curr Biol 2019; 29:2892-2904.e8. [PMID: 31474533] [PMCID: PMC6739420] [DOI: 10.1016/j.cub.2019.07.083]
Abstract
In order to explore and interact with their surroundings, animals need to orient toward specific positions in space. Throughout the animal kingdom, head movements represent a primary form of orienting behavior. The superior colliculus (SC) is a fundamental structure for the generation of orienting responses, but how genetically distinct groups of collicular neurons contribute to these spatially tuned behaviors remains largely to be defined. Here, through the genetic dissection of the murine SC, we identify a functionally and genetically homogeneous subclass of glutamatergic neurons defined by the expression of the paired-like homeodomain transcription factor Pitx2. We show that the optogenetic stimulation of Pitx2ON neurons drives three-dimensional head displacements characterized by stepwise, saccade-like kinematics. Furthermore, during naturalistic foraging behavior, the activity of Pitx2ON neurons precedes and predicts the onset of spatially tuned head movements. Intriguingly, we reveal that Pitx2ON neurons are clustered in an orderly array of anatomical modules that tile the entire intermediate layer of the SC. Such a modular organization gives origin to a discrete and discontinuous representation of the motor space, with each Pitx2ON module subtending a defined portion of the animal’s egocentric space. The modularity of Pitx2ON neurons provides an anatomical substrate for the convergence of spatially coherent sensory and motor signals of cortical and subcortical origins, thereby promoting the recruitment of appropriate movement vectors. Overall, these data support the view of the superior colliculus as a selectively addressable and modularly organized spatial-motor register. 
Pitx2 expression labels a functionally homogeneous class of projecting SC neurons
Pitx2ON neurons drive three-dimensional head movements during foraging behavior
Pitx2ON neurons are organized in an orderly array of anatomical modules
Modularity of Pitx2ON neurons defines a discrete motor map for spatial orienting
Collapse
|
16
|
Visual Performance and Perception as a Target of Saccadic Strategies in Patients With Unilateral Vestibular Loss. Ear Hear 2019; 39:1176-1186. [PMID: 29578887 DOI: 10.1097/aud.0000000000000576] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To evaluate the ability of saccadic strategies developed during vestibular compensation to reduce the effect of an impaired vestibulo-ocular reflex (VOR) on retinal smear and the sensation of image motion. DESIGN Twenty patients with unilateral vestibular loss were examined with a video head impulse test before and after vestibular rehabilitation (VR) with the use of gaze stabilization and refixation saccade training. Head and eye velocity functions were processed to infer retinal eccentricity and, through its correlation with visual acuity (VA), several measurements are proposed to evaluate the influence of VR on saccade behavior and visual performance. To isolate the effect of saccades on the findings and avoid bias due to gain differences, only patients whose VOR gain values remained unchanged after VR were included. RESULTS An improved contribution of covert saccades and a reduction of overt saccade latency were measured after VR. We found significant differences when assessing both the interval below 70% VA (50.25 ms), which is considered the limit of moderate low vision, and below 50% VA (39.515 ms), which is the limit for severe low vision. The time to recover a VA of 75% (near normal) was reduced in all patients (median: 56.472 ms). CONCLUSION Despite the absence of VOR gain improvement, patients with unilateral vestibular loss are able to develop saccadic strategies that shorten the interval of retinal smear and image motion. The proposed measurements might be of use to evaluate VR outcomes and visually induced impairment.
Collapse
|
17
|
Schindler A, Bartels A. Human V6 Integrates Visual and Extra-Retinal Cues during Head-Induced Gaze Shifts. iScience 2018; 7:191-197. [PMID: 30267680 PMCID: PMC6153141 DOI: 10.1016/j.isci.2018.09.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2018] [Revised: 07/13/2018] [Accepted: 09/04/2018] [Indexed: 11/18/2022] Open
Abstract
A key question in vision research concerns how the brain compensates for self-induced eye and head movements to form the world-centered, spatiotopic representations we perceive. Although human V3A and V6 integrate eye movements with vision, it is unclear which areas integrate head motion signals with visual retinotopic representations, as fMRI typically prevents the execution of head movements. Here we examined whether human early visual areas V3A and V6 integrate these signals. A previously introduced paradigm allowed participants to move their heads during trials but stabilized the head during data acquisition, utilizing the delay between blood-oxygen-level-dependent (BOLD) and neural signals. Visual stimuli simulated either a stable environment or one with arbitrary head-coupled visual motion. Importantly, both conditions were matched in retinal and head motion. Contrasts revealed differential responses in human V6. Given the lack of vestibular responses in primate V6, these results suggest multi-modal integration of visual signals with neck efference copy or proprioception in V6.
Setup with head-mounted goggles and head movement during fMRI
Simulation of forward flow in stable or unstable world during head rotation
Human V6 integrates visual self-motion with head motion signals
Likely mediated by efference copy or proprioception as V6 lacks vestibular input
Collapse
Affiliation(s)
- Andreas Schindler
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany; Centre for Integrative Neuroscience & MEG Center, University of Tübingen, Tübingen 72076, Germany.
| | - Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, Tübingen 72076, Germany; Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany.
| |
Collapse
|
18
|
Mork R, Falkenberg HK, Fostervold KI, Thorud HMS. Visual and psychological stress during computer work in healthy, young females-physiological responses. Int Arch Occup Environ Health 2018; 91:811-830. [PMID: 29850947 PMCID: PMC6132651 DOI: 10.1007/s00420-018-1324-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Accepted: 05/22/2018] [Indexed: 11/26/2022]
Abstract
Purpose Among computer workers, visual complaints and neck pain are highly prevalent. This study explores how simulated occupational stressors during computer work, such as glare and psychosocial stress, affect physiological responses in young females with normal vision. Methods The study was a within-subject laboratory experiment with a counterbalanced, repeated-measures design. Forty-three females performed four 10-min computer-work sessions with different stress exposures: (1) minimal stress; (2) visual stress (direct glare); (3) psychological stress; and (4) combined visual and psychological stress. Muscle activity and muscle blood flow in the trapezius, muscle blood flow in the orbicularis oculi, heart rate, blood pressure, blink rate and postural angles were continuously recorded. Immediately after each computer-work session, fixation disparity was measured and a questionnaire regarding perceived workstation lighting and stress was completed. Results Exposure to direct glare resulted in increased trapezius muscle blood flow, an increased blink rate, and forward bending of the head. Psychological stress induced a transient increase in trapezius muscle activity and a more forward-bent posture. Bending forward towards the computer screen was correlated with higher productivity (reading speed), indicating a concentration or stress response. A forward-bent posture was also associated with changes in fixation disparity. Furthermore, during computer work per se, trapezius muscle activity and blood flow, orbicularis oculi muscle blood flow, and heart rate were increased compared with rest. Conclusions Exposure to glare and psychological stress during computer work influenced the trapezius muscle, posture, and blink rate in young, healthy females with normal binocular vision, but in different ways. Accordingly, both visual and psychological factors must be taken into account when optimizing computer workstations to reduce physiological responses that may cause excessive eyestrain and musculoskeletal load.
Collapse
Affiliation(s)
- Randi Mork
- Department of Public Health Science, Norwegian University of Life Sciences, Ås, Norway.
- Department of Optometry, Radiography and Lighting Design, University of South-Eastern Norway, National Centre for Optics, Vision and Eye Care, P.O. Box 235, 3603, Kongsberg, Norway.
| | - Helle K Falkenberg
- Department of Optometry, Radiography and Lighting Design, University of South-Eastern Norway, National Centre for Optics, Vision and Eye Care, P.O. Box 235, 3603, Kongsberg, Norway
| | | | - Hanne Mari S Thorud
- Department of Optometry, Radiography and Lighting Design, University of South-Eastern Norway, National Centre for Optics, Vision and Eye Care, P.O. Box 235, 3603, Kongsberg, Norway
| |
Collapse
|
19
|
Botschko Y, Yarkoni M, Joshua M. Smooth Pursuit Eye Movement of Monkeys Naive to Laboratory Setups With Pictures and Artificial Stimuli. Front Syst Neurosci 2018; 12:15. [PMID: 29719503 PMCID: PMC5913553 DOI: 10.3389/fnsys.2018.00015] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2018] [Accepted: 03/28/2018] [Indexed: 12/03/2022] Open
Abstract
When animal behavior is studied in a laboratory environment, the animals are often extensively trained to shape their behavior. A crucial question is whether the behavior observed after training is part of the natural repertoire of the animal or represents an outlier relative to the animal's natural capabilities. This can be investigated by assessing the extent to which the target behavior is manifested during the initial stages of training and by examining the time course of learning. We explored this issue by examining smooth pursuit eye movements in monkeys naïve to smooth pursuit tasks. We recorded the eye movements of monkeys from the first days of training on a step-ramp paradigm, using bright spots, monkey pictures and scrambled versions of the pictures as moving targets. We found that during the initial stages of training, pursuit initiation was largest for the monkey pictures and, in some direction conditions, was close to target velocity. When pursuit initiation was large, the monkeys mostly continued to track the target with smooth pursuit movements while correcting for displacement errors with small saccades. Two weeks of training increased pursuit eye velocity in all stimulus conditions, whereas further extensive training enhanced pursuit only slightly more. Training also decreased the coefficient of variation of the eye velocity. Anisotropies that grade pursuit across directions were observed from the first day of training and mostly persisted across training. Thus, smooth pursuit in the step-ramp paradigm appears to be part of the natural repertoire of monkeys' behavior, and training adjusts this naturally predisposed behavior.
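The step-ramp (Rashbass) paradigm used in this study can be sketched in a few lines: the target first steps away from fixation and then ramps back through it at constant velocity, so that a well-timed pursuit response requires no initial catch-up saccade. The parameter values below are illustrative, not those used in the study.

```python
import numpy as np

def step_ramp_target(step_deg=3.0, ramp_deg_per_s=15.0, t_end=1.0, dt=0.001):
    """Rashbass step-ramp: the target steps in one direction, then ramps
    back through the fixation point at constant velocity (illustrative
    values). Returns time (s) and target position (deg)."""
    t = np.arange(0.0, t_end, dt)
    # Step opposite to the ramp direction, so the target re-crosses
    # fixation at roughly t = step / speed, minimizing early retinal error.
    pos = -step_deg + ramp_deg_per_s * t
    return t, pos

t, pos = step_ramp_target()
crossing = t[np.argmin(np.abs(pos))]  # time the target re-crosses fixation
```

With a 3-deg step and a 15 deg/s ramp, the target crosses fixation at 0.2 s, about when a pursuit response to the motion onset is getting under way.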
Collapse
Affiliation(s)
- Yehudit Botschko
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Merav Yarkoni
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Mati Joshua
- Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
Collapse
|
21
|
Interaction between the oculomotor and postural systems during a dual-task: Compensatory reductions in head sway following visually-induced postural perturbations promote the production of accurate double-step saccades in standing human adults. PLoS One 2017; 12:e0173678. [PMID: 28296958 PMCID: PMC5351857 DOI: 10.1371/journal.pone.0173678] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2016] [Accepted: 02/25/2017] [Indexed: 11/19/2022] Open
Abstract
Humans routinely scan their environment for useful information using saccadic eye movements and/or coordinated movements of the eyes and other body segments such as the head and the torso. Most previous eye movement studies were conducted with seated subjects and showed that single saccades and sequences of saccades (e.g., double-step saccades) made to briefly flashed stimuli were equally accurate and precise. As one can easily appreciate, most gaze shifts performed daily by a given person are not produced from a seated position, but rather from a standing position, either as subjects perform an action from an upright stance or as they walk from one place to another. In the experiments presented here, we developed a new dual-task paradigm in order to study the interaction between the gaze control system and the postural system. Healthy adults (n = 12) were required to both maintain balance and produce accurate single-step and double-step eye saccades from a standing position. Visually-induced changes in head sway were evoked using wide-field background stimuli that moved either in the mediolateral direction or in the anteroposterior direction. We found that, as in the seated condition, single- and double-step saccades were very precise and accurate when made from a standing position, but that a tighter control of head sway was necessary in the more complex double-step saccade condition for equivalent results to be obtained. Our perturbation results support the "common goal" hypothesis, which states that, if necessary, as was the case during the more complex oculomotor task, context-dependent modulations of the postural system can be triggered to reduce instability and thereby support the accomplishment of a suprapostural goal.
Collapse
|
22
|
Kurnikova A, Moore JD, Liao SM, Deschênes M, Kleinfeld D. Coordination of Orofacial Motor Actions into Exploratory Behavior by Rat. Curr Biol 2017; 27:688-696. [PMID: 28216320 DOI: 10.1016/j.cub.2017.01.013] [Citation(s) in RCA: 61] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2016] [Revised: 12/11/2016] [Accepted: 01/09/2017] [Indexed: 11/25/2022]
Abstract
The delineation of sensorimotor circuits that guide exploration begins with an understanding of the pattern of motor outputs [1]. These motor patterns provide a clue to the form of the underlying circuits [2-4] (but see [5]). We focus on the behaviors that rodents use to explore their peripersonal space through goal-directed positioning of their nose, head, and vibrissae. Rodents sniff in response to novel odors, reward expectation, and as part of social interactions [6-12]. Sniffing serves olfaction [13, 14], while whisking synchronized to sniffing serves vibrissa-based touch [6, 15, 16]. We quantify the ethology of exploratory nose and head movements in relation to breathing. We find that sniffing is accompanied by prominent lateral and vertical deflections of the nose, i.e., twitches, which are driven by activation of the deflector nasi muscles [17]. On the timescale of individual breaths, nose motion is rhythmic and has a maximum deflection following the onset of inspiration. On a longer timescale, excursions of the nose persist for several breaths and are accompanied by an asymmetry in vibrissa positioning toward the same side of the face. Such directed deflections can be triggered by a lateralized source of odor. Lastly, bobbing of the head as the animal cranes and explores is phase-locked to sniffing and to movement of the nose. These data, along with prior results on the resetting of the whisk cycle at the onset of inspiration [15, 16, 18], reveal that the onset of each breath initiates a "snapshot" of the orofacial sensory environment. VIDEO ABSTRACT.
Collapse
Affiliation(s)
- Anastasia Kurnikova
- Neurosciences Graduate Program, University of California San Diego, La Jolla, CA 92093, USA
| | - Jeffrey D Moore
- Department of Physics, University of California, San Diego, La Jolla, CA 92093, USA; Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA 02138, USA
| | - Song-Mao Liao
- Department of Physics, University of California, San Diego, La Jolla, CA 92093, USA
| | - Martin Deschênes
- Centre de Recherche Université Laval Robert-Giffard, Québec City, Québec G1J 2R3, Canada
| | - David Kleinfeld
- Department of Physics, University of California, San Diego, La Jolla, CA 92093, USA; Section of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA; Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA.
| |
Collapse
|
23
|
Shadmehr R. Distinct neural circuits for control of movement vs. holding still. J Neurophysiol 2017; 117:1431-1460. [PMID: 28053244 DOI: 10.1152/jn.00840.2016] [Citation(s) in RCA: 55] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2016] [Revised: 01/03/2017] [Accepted: 01/03/2017] [Indexed: 11/22/2022] Open
Abstract
In generating a point-to-point movement, the brain does more than produce the transient commands needed to move the body part; it also produces the sustained commands that are needed to hold the body part at its destination. In the oculomotor system, these functions are mapped onto two distinct circuits: a premotor circuit that specializes in generating the transient activity that displaces the eyes and a "neural integrator" that transforms that transient input into sustained activity that holds the eyes. Different parts of the cerebellum adaptively control the motor commands during these two phases: the oculomotor vermis participates in fine tuning the transient neural signals that move the eyes, monitoring the activity of the premotor circuit via efference copy, whereas the flocculus participates in controlling the sustained neural signals that hold the eyes, monitoring the activity of the neural integrator. Here, I review the oculomotor literature and then ask whether this separation of control between moving and holding is a design principle that may be shared with other modalities of movement. To answer this question, I consider neurophysiological and psychophysical data in various species during control of head movements, arm movements, and locomotion, focusing on the brain stem, motor cortex, and hippocampus, respectively. The review of the data raises the possibility that across modalities of motor control, circuits that are responsible for producing commands that change the sensory state of a body part are distinct from those that produce commands that maintain that sensory state.
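The division of labor described above, transient "move" commands versus the sustained "hold" signal produced by the neural integrator, can be illustrated with a minimal simulation: a brief velocity pulse fed through a leaky integrator yields a step-like position signal that holds the eye at its destination, drifting back only as slowly as the leak (the integrator time constant) dictates. The parameter values are illustrative, not drawn from the review.

```python
import numpy as np

def leaky_integrator(velocity_cmd, dt=0.001, tau=25.0):
    """Integrate a transient velocity command into a sustained position
    signal, as the oculomotor neural integrator is thought to do.
    tau (s) is the integrator time constant; a small tau makes the eye
    drift back toward center after the movement."""
    pos = np.zeros_like(velocity_cmd)
    for i in range(1, len(velocity_cmd)):
        pos[i] = pos[i - 1] + dt * (velocity_cmd[i] - pos[i - 1] / tau)
    return pos

dt = 0.001
t = np.arange(0.0, 1.0, dt)
pulse = np.where(t < 0.05, 200.0, 0.0)  # 50-ms, 200 deg/s saccadic pulse
eye = leaky_integrator(pulse, dt=dt, tau=25.0)
```

The pulse alone would return the eye to zero the instant it ends; the integrator converts it into a roughly 10-deg step that is held for the rest of the trial, which is the "holding still" function the review distinguishes from moving.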
Collapse
Affiliation(s)
- Reza Shadmehr
- Laboratory for Computational Motor Control, Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland
| |
Collapse
|
24
|
Solman GJF, Foulsham T, Kingstone A. Eye and head movements are complementary in visual selection. ROYAL SOCIETY OPEN SCIENCE 2017; 4:160569. [PMID: 28280554 PMCID: PMC5319320 DOI: 10.1098/rsos.160569] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2016] [Accepted: 12/20/2016] [Indexed: 06/06/2023]
Abstract
In the natural environment, visual selection is accomplished by a system of nested effectors, moving the head and body within space and the eyes within the visual field. However, it is not yet known whether the principles of selection for these different effectors are the same or different. We used a novel gaze-contingent display in which an asymmetric window of visibility (a horizontal or vertical slot) was yoked to either head or eye position. Participants showed highly systematic changes in behaviour, revealing clear differences in the principles underlying selection by eye and head. Eye movements were more likely to move in the direction of visible information: horizontally when viewing with a horizontal slot, and vertically with a vertical slot. Head movements showed the opposite and complementary pattern, moving to reveal new information (e.g. vertically with a horizontal slot and vice versa). These results are consistent with a nested system in which the head favours exploration of unknown regions, while the eye exploits what can be seen with finer-scale saccades.
Collapse
Affiliation(s)
- Grayden J. F. Solman
- University of Hawai'i at Mānoa, 2530 Dole Street, Sakamaki D401, Honolulu, HI 96822-2294, USA
| | | | - Alan Kingstone
- University of British Columbia, Vancouver, British Columbia, Canada
| |
Collapse
|
25
|
Jana S, Gopal A, Murthy A. Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination. J Neurophysiol 2016; 117:348-364. [PMID: 27784809 DOI: 10.1152/jn.00688.2016] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2016] [Accepted: 10/24/2016] [Indexed: 11/22/2022] Open
Abstract
Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. 
These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements.
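The common-accumulator architecture discussed above can be sketched as a single stochastic decision stage feeding both effectors, each movement then incurring its own, nearly constant efferent delay. The diffusion parameters and delay values below are hypothetical; the sketch simply shows why this architecture predicts a large difference in mean eye and hand RTs alongside nearly identical RT variances.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage(drift=4.0, threshold=1.0, noise=0.15, dt=0.001, n=2000):
    """First-passage times (s) of a single drift-diffusion accumulator,
    simulated with the Euler method (hypothetical parameters)."""
    sd = noise * np.sqrt(dt)
    rts = np.empty(n)
    for k in range(n):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + sd * rng.standard_normal()
            t += dt
        rts[k] = t
    return rts

# One shared decision stage drives both effectors; each effector adds its
# own short, low-variance efferent delay (values are hypothetical).
decision = first_passage()
eye_rt = decision + rng.normal(0.050, 0.005, decision.size)
hand_rt = decision + rng.normal(0.140, 0.005, decision.size)

mean_gap = hand_rt.mean() - eye_rt.mean()  # large difference in means
var_ratio = hand_rt.var() / eye_rt.var()   # but comparable variances
```

Because the shared accumulator contributes almost all of the RT variability, the hand's extra ~90 ms delay shifts the mean without inflating the variance, which is the signature the authors use to distinguish the architectures.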
Collapse
Affiliation(s)
- Sumitash Jana
- Center for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India; and
| | - Atul Gopal
- National Brain Research Center, Nainwal More, Manesar, Haryana, India
| | - Aditya Murthy
- Center for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India; and
| |
Collapse
|
26
|
Vestibular ablation and a semicircular canal prosthesis affect postural stability during head turns. Exp Brain Res 2016; 234:3245-3257. [PMID: 27405997 DOI: 10.1007/s00221-016-4722-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2015] [Accepted: 07/04/2016] [Indexed: 10/21/2022]
Abstract
In our study, we examined postural stability during head turns in two rhesus monkeys: one study contrasted the normal state with mild bilateral vestibular ablation, and a second contrasted severe bilateral vestibular ablation with and without prosthetic stimulation. The monkeys stood freely, unrestrained, on a balance platform and made voluntary head turns between visual targets. To quantify each animal's posture, motions of the head and trunk, as well as torque about the body's center of mass, were measured. In the mildly ablated animal, we observed less foretrunk sway in comparison with the normal state. When the canal prosthesis provided electrical stimulation to the severely ablated animal, it showed a decrease in trunk sway during head turns. Because the rhesus monkey with severe bilateral vestibular loss exhibited a decrease in trunk sway when receiving vestibular prosthetic stimulation, we propose that the prosthetic electrical stimulation partially restored head velocity information. Our results indicate that a semicircular canal prosthesis may be an effective way to improve postural stability in patients with severe peripheral vestibular dysfunction.
Collapse
|
27
|
Quinlivan B, Butler JS, Beiser I, Williams L, McGovern E, O'Riordan S, Hutchinson M, Reilly RB. Application of virtual reality head mounted display for investigation of movement: a novel effect of orientation of attention. J Neural Eng 2016; 13:056006. [PMID: 27518212 DOI: 10.1088/1741-2560/13/5/056006] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE To date, human kinematics research has relied on video processing, motion capture and magnetic search coil data acquisition techniques. However, the use of head-mounted display virtual reality systems, as a novel research tool, could facilitate novel studies into human movement and movement disorders. These systems have the unique ability to present immersive 3D stimuli while also allowing participants to make ecologically valid movement-based responses. APPROACH We employed one such system (Oculus Rift DK2) in this study to present visual stimuli and acquire head-turn data from a cohort of 40 healthy adults. Participants were asked to complete head movements towards eccentrically located visual targets following valid and invalid cues. Such tasks are commonly employed for investigating the effects of orientation of attention and are known as Posner cueing paradigms. Electrooculography was also recorded for a subset of 18 participants. MAIN RESULTS A delay in the onset of head movement and saccade onset was observed during invalid trials, both at the group and single-participant level. We found that participants initiated head turns 57.4 ms earlier during valid trials. A strong relationship between saccade onset and head movement onset was also observed during valid trials. SIGNIFICANCE This work represents the first time that the Posner cueing effect has been observed in the onset of head movement in humans. The results presented here highlight the role of head-mounted display systems as a novel and practical research tool for investigations of normal and abnormal movement patterns.
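Head-turn and saccade onsets in data of this kind are commonly detected with a simple velocity-threshold criterion. The sketch below applies such a criterion to a synthetic head-position trace; the 30 deg/s threshold and the trace itself are hypothetical, not taken from the study.

```python
import numpy as np

def movement_onset(position, dt, vel_threshold=30.0):
    """Return the time (s) at which absolute velocity first exceeds
    vel_threshold (deg/s), or None if it never does. A simple onset
    criterion; the threshold value is a hypothetical choice."""
    velocity = np.gradient(position, dt)
    above = np.flatnonzero(np.abs(velocity) > vel_threshold)
    return above[0] * dt if above.size else None

# Synthetic trace: head still for 250 ms, then a smooth 20-deg turn
# following a raised-cosine profile lasting 300 ms.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
phase = np.clip((t - 0.25) / 0.3, 0.0, 1.0)
pos = 20.0 * (1.0 - np.cos(np.pi * phase)) / 2.0

onset = movement_onset(pos, dt)
```

On this trace the detected onset lands shortly after 250 ms, once the smooth movement profile accelerates past the threshold; real data would typically be low-pass filtered before differentiation.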
Collapse
|
28
|
Effect of Direct Glare on Orbicularis Oculi and Trapezius During Computer Reading. Optom Vis Sci 2016; 93:738-49. [DOI: 10.1097/opx.0000000000000855] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
|
29
|
Lalazar H, Abbott LF, Vaadia E. Tuning Curves for Arm Posture Control in Motor Cortex Are Consistent with Random Connectivity. PLoS Comput Biol 2016; 12:e1004910. [PMID: 27224735 PMCID: PMC4880440 DOI: 10.1371/journal.pcbi.1004910] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2016] [Accepted: 04/12/2016] [Indexed: 11/28/2022] Open
Abstract
Neuronal responses characterized by regular tuning curves are typically assumed to arise from structured synaptic connectivity. However, many responses exhibit both regular and irregular components. To address the relationship between tuning curve properties and underlying circuitry, we analyzed neuronal activity recorded from primary motor cortex (M1) of monkeys performing a 3D arm posture control task and compared the results with a neural network model. Posture control is well suited for examining M1 neuronal tuning because it avoids the dynamic complexity of time-varying movements. As a function of hand position, the neuronal responses have a linear component, as has previously been described, as well as heterogeneous and highly irregular nonlinearities. These nonlinear components involve high spatial frequencies and therefore do not support explicit encoding of movement parameters. Yet both the linear and nonlinear components contribute to the decoding of EMG of major muscles used in the task. Remarkably, despite the presence of a strong linear component, a feedforward neural network model with entirely random connectivity can replicate the data, including both the mean and distributions of the linear and nonlinear components as well as several other features of the neuronal responses. This result shows that smoothness provided by the regularity in the inputs to M1 can impose apparent structure on neural responses, in this case a strong linear (also known as cosine) tuning component, even in the absence of ordered synaptic connectivity.
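The paper's core claim, that random connectivity acting on smooth inputs yields responses with a strong linear tuning component plus irregular nonlinearities, can be reproduced in miniature: random rectified projections of 3D hand position are fitted with a planar (linear) model, and the linear fit captures a large share of each unit's response variance. All sizes and weights below are arbitrary illustrative choices, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Smooth inputs: rectified linear functions of 3D hand position, mixed
# through a hidden layer with entirely random weights.
n_pos, n_in, n_units = 200, 40, 50
hand_pos = rng.uniform(-1.0, 1.0, size=(n_pos, 3))
inputs = np.maximum(hand_pos @ rng.normal(size=(3, n_in)), 0.0)
W_rand = rng.normal(size=(n_in, n_units)) / np.sqrt(n_in)
responses = np.maximum(inputs @ W_rand, 0.0)

# Fraction of each active unit's response variance captured by a planar
# (linear) fit to hand position: the unit's "linear tuning component".
X = np.column_stack([hand_pos, np.ones(n_pos)])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
resid = responses - X @ coef
var = responses.var(axis=0)
active = var > 1e-12
linear_r2 = 1.0 - resid.var(axis=0)[active] / var[active]
mean_linear_r2 = linear_r2.mean()
```

Even though no weight in this toy network is structured, the planar fit accounts for a substantial fraction of response variance, with the remainder forming the irregular nonlinear component, mirroring the paper's point that smooth inputs can impose apparent linear tuning.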
Collapse
Affiliation(s)
- Hagai Lalazar
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
| | - L. F. Abbott
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Department of Physiology and Cellular Biophysics, Columbia University, New York, New York, United States of America
| | - Eilon Vaadia
- Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
| |
Collapse
|
30
|
Lacour M, Helmchen C, Vidal PP. Vestibular compensation: the neuro-otologist's best friend. J Neurol 2016; 263 Suppl 1:S54-64. [PMID: 27083885 PMCID: PMC4833803 DOI: 10.1007/s00415-015-7903-4] [Citation(s) in RCA: 155] [Impact Index Per Article: 19.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2015] [Revised: 09/09/2015] [Accepted: 09/10/2015] [Indexed: 02/05/2023]
Abstract
Why vestibular compensation (VC) after an acute unilateral vestibular loss is the neuro-otologist’s best friend is the question at the heart of this paper. The different plasticity mechanisms underlying VC are first reviewed, and the authors present thereafter the dual concept of vestibulo-centric versus distributed learning processes to explain the compensation of deficits resulting from the static versus dynamic vestibular imbalance. The main challenges for the plastic events occurring in the vestibular nuclei (VN) during a post-lesion critical period are neural protection, structural reorganization and rebalance of VN activity on both sides. Data from animal models show that modulation of the ipsilesional VN activity by the contralateral drive substitutes for the normal push–pull mechanism. On the other hand, sensory and behavioural substitutions are the main mechanisms implicated in the recovery of the dynamic functions. These newly elaborated sensorimotor reorganizations are vicarious idiosyncratic strategies implicating the VN and multisensory brain regions. Imaging studies in unilateral vestibular loss patients show the implication of a large neuronal network (VN, commissural pathways, vestibulo-cerebellum, thalamus, temporoparietal cortex, hippocampus, somatosensory and visual cortical areas). Changes in gray matter volume in these multisensory brain regions are structural changes supporting the sensory substitution mechanisms of VC. Finally, the authors summarize the two ways to improve VC in humans (neuropharmacology and vestibular rehabilitation therapy), and they conclude that VC would follow a “top-down” strategy in patients with acute vestibular lesions. Future challenges to understand VC are proposed.
Affiliation(s)
- Michel Lacour
- Université Aix-Marseille/CNRS, UMR 7260, Fédération de Recherche 3C, Centre de St Charles, 3 Place Victor Hugo, 13331, Marseille Cedex 03, France; 21 Impasse des Vertus, 13710, Fuveau, France
- Christoph Helmchen
- Department of Neurology, University Hospitals Schleswig-Holstein, University of Lübeck, Ratzeburger Allee 160, 23538, Lübeck, Germany
- Pierre-Paul Vidal
- Université Paris Descartes/CNRS, UMR-MD-SSA, COGNAC-G (COGNition and Action Group), 45 Rue des Saints Pères, 75270, Paris Cedex 06, France
31
Muscle fatigue as an investigative tool in motor control: A review with new insights on internal models and posture–movement coordination. Hum Mov Sci 2015;44:225-233. DOI: 10.1016/j.humov.2015.09.006.
32
Gopal A, Murthy A. Eye-hand coordination during a double-step task: evidence for a common stochastic accumulator. J Neurophysiol 2015;114:1438-1454. PMID: 26084906; PMCID: PMC4556852; DOI: 10.1152/jn.00276.2015.
Abstract
Many studies of reaching and pointing have shown significant spatial and temporal correlations between eye and hand movements. Nevertheless, it remains unclear whether these correlations are incidental, arising from common inputs (independent model); whether they represent an interaction between otherwise independent eye and hand systems (interactive model); or whether they arise from a single dedicated eye-hand system (common command model). Subjects were instructed to redirect gaze and pointing movements in a double-step task in an attempt to decouple eye-hand movements and causally distinguish between the three architectures. We used a drift-diffusion framework in the context of a race model, which has previously been used to explain redirect behavior for eye and hand movements separately, to predict the pattern of eye-hand decoupling. We found that the common command architecture best explained the observed frequency of different eye and hand response patterns to the target step. A common stochastic accumulator for eye-hand coordination also predicts comparable variances despite significant differences in the means of the eye and hand reaction time (RT) distributions. Consistent with this prediction, we observed that the variances of the eye and hand RTs were similar, despite much larger hand RTs (∼90 ms). Moreover, changes in mean eye RTs, which also increased eye RT variance, produced a similar increase in the mean and variance of the associated hand RT. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
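The race-model logic used to explain redirect behavior can be sketched in a few lines: a GO accumulator for the initial movement plan races a STOP process that begins only after the target step. This is a minimal illustration with invented parameters, not the authors' fitted model:

```python
import random

def race_trial(tsd, rng, threshold=100.0, drift_go=1.0, drift_stop=2.0, noise=2.0):
    """One trial of a race between a GO accumulator (initial plan) and a STOP
    process that starts accumulating only after the target step delay (tsd).
    Returns True if the initial movement escapes, i.e., is not redirected."""
    go, stop, t = 0.0, 0.0, 0
    while True:
        t += 1
        go += drift_go + rng.gauss(0.0, noise)
        if t > tsd:
            stop += drift_stop + rng.gauss(0.0, noise)
        if go >= threshold:
            return True   # initial plan reached threshold first
        if stop >= threshold:
            return False  # stop process won; movement is redirected

def p_not_redirected(tsd, n=2000, seed=1):
    """Probability that the initial movement escapes, as a function of tsd."""
    rng = random.Random(seed)
    return sum(race_trial(tsd, rng) for _ in range(n)) / n
```

As in behavioral compensation functions, the probability of a non-redirected response grows as the target step occurs later relative to movement preparation.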
Affiliation(s)
- Atul Gopal
- National Brain Research Centre, Manesar, Haryana, India
- Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India
33
Daemi M, Crawford JD. A kinematic model for 3-D head-free gaze-shifts. Front Comput Neurosci 2015;9:72. PMID: 26113816; PMCID: PMC4461827; DOI: 10.3389/fncom.2015.00072.
Abstract
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
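The non-commutativity of rotations that such a model must respect is easy to demonstrate with quaternions, the standard representation for 3-D eye and head orientation. This standalone sketch is illustrative only and is not part of the authors' model:

```python
import math

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion (w, x, y, z) for a rotation about a unit axis."""
    half = math.radians(angle_deg) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(q, r):
    """Hamilton product: rotation r applied first, then q."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate vector v by quaternion q (computes q * v * q_conjugate)."""
    qc = (q[0], -q[1], -q[2], -q[3])
    p = quat_mul(quat_mul(q, (0.0, *v)), qc)
    return p[1:]

yaw90 = quat_from_axis_angle((0, 0, 1), 90)    # 90 deg horizontal rotation
pitch90 = quat_from_axis_angle((0, 1, 0), 90)  # 90 deg vertical rotation
v = (1.0, 0.0, 0.0)                            # initial gaze direction

a = rotate(quat_mul(yaw90, pitch90), v)  # pitch first, then yaw
b = rotate(quat_mul(pitch90, yaw90), v)  # yaw first, then pitch
# a and b end up pointing in different directions: order matters.
```

Because the two orders yield different final gaze directions, any controller that decomposes a 2-D gaze command into 3-D eye and head rotations must track orientation, not just displacement.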
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada; Brain in Action NSERC CREATE/DFG IRTG Program, Canada/Germany
34
Gopal A, Viswanathan P, Murthy A. A common stochastic accumulator with effector-dependent noise can explain eye-hand coordination. J Neurophysiol 2015;113:2033-2048. PMID: 25568161; DOI: 10.1152/jn.00802.2014.
Abstract
The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift diffusion framework, in which variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternate independent, interactive eye and hand accumulator model does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning.
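The common-accumulator architecture can be illustrated with a toy simulation: a single diffusion-to-threshold time is shared by both effectors, and each effector then adds its own noisy efferent delay. With a shared accumulator, eye and hand RTs come out highly correlated, with similar variances despite very different means. All parameters below are invented for illustration, not fitted to the paper's data:

```python
import random

def simulate_rts(n=4000, seed=7):
    """Common stochastic accumulator: one first-passage time T per trial is
    shared by eye and hand; each effector adds its own noisy delay (ms)."""
    rng = random.Random(seed)
    eye_rts, hand_rts = [], []
    for _ in range(n):
        # First-passage time of a drifting random walk (1 ms steps).
        x, t = 0.0, 0
        while x < 60.0:
            x += 0.5 + rng.gauss(0.0, 1.5)
            t += 1
        eye_rts.append(t + rng.gauss(30.0, 3.0))    # short eye delay
        hand_rts.append(t + rng.gauss(120.0, 3.0))  # longer hand delay
    return eye_rts, hand_rts

eye_rts, hand_rts = simulate_rts()
```

The mean hand RT exceeds the mean eye RT by roughly the delay difference, yet the two RT distributions have nearly identical spread because the shared accumulation time dominates the variance.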
Affiliation(s)
- Atul Gopal
- National Brain Research Centre, Manesar, Haryana, India
- Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, Karnataka, India
35
Lacour M, Bernard-Demanze L. Interaction between Vestibular Compensation Mechanisms and Vestibular Rehabilitation Therapy: 10 Recommendations for Optimal Functional Recovery. Front Neurol 2015;5:285. PMID: 25610424; PMCID: PMC4285093; DOI: 10.3389/fneur.2014.00285.
Abstract
This review questions the relationships between the plastic events responsible for the recovery of vestibular function after a unilateral vestibular loss (vestibular compensation), which have been well described in animal models in recent decades, and the vestibular rehabilitation (VR) therapy elaborated on a more empirical basis for vestibular loss patients. The main objective is not to propose a catalog of results but to provide clinicians with an understandable view of when and how to perform VR therapy, and of why VR may benefit from basic knowledge and may influence the recovery process. With this perspective, 10 major recommendations are proposed as ways to achieve optimal functional recovery. Among them are the crucial role of active and early VR therapy, coincident with a post-lesion sensitive period for neuronal network remodeling; the instructive role that VR therapy may play in this functional reorganization; the need for progression in the VR therapy protocol, which is based mainly on adaptation processes; the necessity of taking into account the sensorimotor, cognitive, and emotional profile of the patient in order to propose individual or "à la carte" VR therapies; and the importance of motivational and ecologic contexts. More general principles than these 10 could certainly be proposed, but these seem crucial for the fast recovery of vestibular loss patients and for ensuring a good quality of life.
Affiliation(s)
- Michel Lacour
- Laboratoire de Neurobiologie Intégrative et Adaptative, UMR 7260 CNRS/Université Aix-Marseille, Fédération de Recherche 3C, Centre de St Charles, Marseille, France
- Laurence Bernard-Demanze
- Laboratoire de Neurobiologie Intégrative et Adaptative, UMR 7260 CNRS/Université Aix-Marseille, Fédération de Recherche 3C, Centre de St Charles, Marseille, France
- Service d’otorhinolaryngologie et d’otoneurologie, CHU Nord, Assistance Publique-Hôpitaux de Marseille, Marseille, France
36
Yorzinski JL, Patricelli GL, Platt ML, Land MF. Eye and head movements shape gaze shifts in Indian peafowl. J Exp Biol 2015;218:3771-3776. DOI: 10.1242/jeb.129544.
Abstract
Animals selectively direct their visual attention toward relevant aspects of their environments. They can shift their attention using a combination of eye, head, and body movements. While we have a growing understanding of eye and head movements in mammals, we know little about these processes in birds. We therefore measured the eye and head movements of freely behaving Indian peafowl (Pavo cristatus) using a telemetric eye-tracker. Both eye and head movements contributed to gaze changes in peafowl, with eye movements playing a larger role in smaller gaze shifts than in larger ones. The duration and velocity of eye and head movements were positively related to the size of the eye and head movements, respectively. In addition, the coordination of eye and head movements in peafowl differed from that in mammals; peafowl exhibited a near absence of the vestibulo-ocular reflex, which may partly result from their ability to move their heads as quickly as their eyes.
Affiliation(s)
- Jessica L. Yorzinski
- Department of Biological Sciences and Department of Animal Sciences, Purdue University, 915 West State Street, West Lafayette IN 47907, USA
- Animal Behavior Graduate Group and Department of Evolution and Ecology, University of California, Davis, CA 95616, USA
- Gail L. Patricelli
- Animal Behavior Graduate Group and Department of Evolution and Ecology, University of California, Davis, CA 95616, USA
- Michael L. Platt
- Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marketing Department, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104, USA
- Michael F. Land
- School of Biological Sciences, University of Sussex, Brighton, BN1 9QG, United Kingdom
37
Solman GJF, Kingstone A. Balancing energetic and cognitive resources: memory use during search depends on the orienting effector. Cognition 2014;132:443-454. PMID: 24946208; DOI: 10.1016/j.cognition.2014.05.005.
Abstract
Search outside the laboratory involves tradeoffs among a variety of internal and external exploratory processes. Here we examine the conditions under which item-specific memory from prior exposures to a search array is used to guide attention during search. We extend the hypothesis that memory use increases as perceptual search becomes more difficult by turning to an ecologically important type of search difficulty: energetic cost. Using optical motion tracking, we introduce a novel head-contingent display system, which enables the direct comparison of search using head movements and search using eye movements. Consistent with the increased energetic cost of turning the head to orient attention, we find greater use of memory in head-contingent than in eye-contingent search, as reflected in both timing and orienting metrics. Our results extend theories of memory use in search to encompass embodied factors, and highlight the importance of accounting for the costs and constraints of the specific motor groups used in a given task when evaluating cognitive effects.
38
Gaveau V, Pisella L, Priot AE, Fukui T, Rossetti Y, Pélisson D, Prablanc C. Automatic online control of motor adjustments in reaching and grasping. Neuropsychologia 2013;55:25-40. PMID: 24334110; DOI: 10.1016/j.neuropsychologia.2013.12.005.
Abstract
Following Marc Jeannerod's seminal investigations of action and perception, specifically goal-directed movement, this review article addresses the visual and non-visual processes involved in guiding the hand in reaching or grasping tasks. The contributions of different sources of correction of ongoing movements are considered; these include visual feedback of the hand, as well as the often-neglected but important spatial updating and sharpening of goal localization that follow gaze-saccade orientation. The existence of an automatic online process guiding limb trajectory toward its goal is highlighted by a series of seminal experiments on goal-directed pointing movements. We then review psychophysical, electrophysiological, neuroimaging and clinical studies that have explored the properties of these automatic corrective mechanisms and their neural bases, and established their generality. Finally, the functional significance of automatic corrective mechanisms, referred to as motor flexibility, and their potential use in rehabilitation are discussed.
Affiliation(s)
- Valérie Gaveau
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Laure Pisella
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Anne-Emmanuelle Priot
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Institut de recherche biomédicale des armées (IRBA), BP 73, 91223 Brétigny-sur-Orge cedex, France
- Takao Fukui
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France
- Yves Rossetti
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Denis Pélisson
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Claude Prablanc
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France.
39
Zetterberg C, Forsman M, Richter H. Effects of visually demanding near work on trapezius muscle activity. J Electromyogr Kinesiol 2013;23:1190-1198. DOI: 10.1016/j.jelekin.2013.06.003.
40
Hierarchical control of two-dimensional gaze saccades. J Comput Neurosci 2013;36:355-382. PMID: 24062206; DOI: 10.1007/s10827-013-0477-1.
Abstract
Coordinating the movements of different body parts is a challenging process for the central nervous system because of several problems. Four of these main difficulties are: first, moving one part can move others; second, the parts can have different dynamics; third, some parts can have different motor goals; and fourth, some parts may be perturbed by outside forces. Here, we propose a novel approach for the control of linked systems with feedback loops for each part. The proximal parts have separate goals, but critically the most distal part has only the common goal. We apply this new control policy to eye-head coordination in two-dimensions, specifically head-unrestrained gaze saccades. Paradoxically, the hierarchical structure has controllers for the gaze and the head, but not for the eye (the most distal part). Our simulations demonstrate that the proposed control structure reproduces much of the published empirical data about gaze movements, e.g., it compensates for perturbations, accurately reaches goals for gaze and head from arbitrary initial positions, simulates the nine relationships of the head-unrestrained main sequence, and reproduces observations from lesion and single-unit recording experiments. We conclude by showing how our model can be easily extended to control structures with more linked segments, such as the control of coordinated eye on head on trunk movements.
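A one-dimensional caricature of this hierarchy can be sketched as follows: feedback controllers exist for gaze and for head, while the eye command is derived as their difference, so head motion is automatically compensated. Gains, step counts, and targets are arbitrary illustrative choices, not the paper's control laws:

```python
def simulate_gaze_shift(gaze_target, head_target, steps=200):
    """Toy 1-D hierarchical gaze/head control: separate proportional feedback
    loops for gaze (eye + head) and head; the eye, the most distal segment,
    has no controller of its own and simply absorbs the difference.
    Positions are in degrees. Illustrative only."""
    eye, head = 0.0, 0.0
    k_gaze, k_head = 0.3, 0.1
    for _ in range(steps):
        gaze = eye + head
        gaze_cmd = k_gaze * (gaze_target - gaze)  # gaze feedback loop
        head_cmd = k_head * (head_target - head)  # head feedback loop
        head += head_cmd
        eye += gaze_cmd - head_cmd  # head motion is compensated (VOR-like)
    return eye, head

eye_pos, head_pos = simulate_gaze_shift(40.0, 25.0)
```

Because the gaze loop closes on eye-plus-head position, gaze reaches its goal regardless of how much the head contributes, while the head independently settles at its own goal, leaving the eye at the difference.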
41
Nakagawa A, Sukigara M. Variable coordination of eye and head movements during the early development of attention: a longitudinal study of infants aged 12-36 months. Infant Behav Dev 2013;36:517-525. PMID: 23735866; DOI: 10.1016/j.infbeh.2013.04.002.
Abstract
This longitudinal study investigated the effects of attentional development on peripheral stimulus localization by analyzing the eye and head movements of toddlers as they matured from 12 to 36 months. On each trial, a central fixation point and a 30° peripheral stimulus were presented: in the gap condition, the fixation point disappeared 300 ms before the peripheral stimulus appeared; in the no-overlap condition, it disappeared at the same time as the peripheral stimulus appeared; and in the overlap condition, it remained present when the peripheral target occurred. Results showed that eye and head movement latencies were highly correlated in all conditions and at all ages. However, at 12 months, head movements were as fast as eye movements, whereas over subsequent development, eye movements became increasingly faster than head movements. These findings are indicative of a transition between 12 and 36 months, due either to a change in attentional control or to changes in the size of the visual field within which only eye movements occur.
Affiliation(s)
- Atsuko Nakagawa
- Graduate School of Humanities and Social Sciences, Nagoya City University, 1, Yamanohata, Mizuho-cho, Mizuho-ku, Nagoya 467-8501, Japan.
42
A biologically constrained architecture for developmental learning of eye–head gaze control on a humanoid robot. Auton Robots 2013. DOI: 10.1007/s10514-013-9335-2.
43
King WM. Getting ahead of oneself: anticipation and the vestibulo-ocular reflex. Neuroscience 2013;236:210-219. PMID: 23370320; DOI: 10.1016/j.neuroscience.2012.12.032.
Abstract
Compensatory counter-rotations of the eyes provoked by head turns are commonly attributed to the vestibulo-ocular reflex (VOR). A recent study in guinea pigs demonstrates, however, that this assumption is not always valid. During voluntary head turns, guinea pigs make highly accurate compensatory eye movements that occur with zero or even negative latencies with respect to the onset of the provoking head movements. Furthermore, these anticipatory eye movements occur in animals with bilateral peripheral vestibular lesions, confirming that they have an extravestibular origin. This discovery suggests that anticipatory responses might also occur in other species, including humans and non-human primates, but have been overlooked and mistakenly attributed to the VOR. This review compares primate and guinea pig vestibular physiology in light of these new findings. A unified model of vestibular and cerebellar pathways is presented that is consistent with current data in primates and guinea pigs. The model accurately simulates compensatory eye movements to active head turns (anticipatory responses) and to passive head perturbations (VOR-induced eye movements) in guinea pigs and in human subjects who use coordinated eye and head movements to shift gaze direction in space. Anticipatory responses provide new evidence and opportunities to study the role of extravestibular signals in motor control and sensory-motor transformations. Exercises that employ voluntary head turns are frequently used to improve visual stability in patients with vestibular hypofunction. A deeper understanding of the origin and physiology of anticipatory responses could therefore suggest new translational approaches to the rehabilitative training of patients with bilateral vestibular loss.
Affiliation(s)
- W M King
- Department of Otolaryngology and the Kresge Hearing Research Institute, University of Michigan, Ann Arbor, MI 48109, USA.
44
King WM, Shanidze N. Anticipatory eye movements stabilize gaze during self-generated head movements. Ann N Y Acad Sci 2011;1233:219-225. PMID: 21950997; DOI: 10.1111/j.1749-6632.2011.06165.x.
Abstract
Visual acuity and motion perception are degraded during head movements unless the eyes counter-rotate so as to stabilize the line of sight and the retinal image. The vestibulo-ocular reflex (VOR) is assumed to produce this ocular counter-rotation. Consistent with this assumption, oscillopsia is a common complaint of patients with bilateral vestibular weakness. Shanidze et al. recently described compensatory eye movements in normal guinea pigs that appear to anticipate self-generated head movements. These responses effectively stabilize gaze and occur independently of the vestibular system. These new findings suggest that the VOR stabilizes gaze during passive perturbations of the head in space, but anticipatory responses may supplement or even supplant the VOR during actively generated head movements. This report reviews these findings, potential neurophysiological mechanisms, and their potential application to human clinical treatment of patients with vestibular disease.
Affiliation(s)
- W M King
- Department of Otolaryngology, Head/Neck Surgery, University of Michigan, Ann Arbor, Michigan, USA.
45
Saeb S, Weber C, Triesch J. Learning the optimal control of coordinated eye and head movements. PLoS Comput Biol 2011;7:e1002253. PMID: 22072953; PMCID: PMC3207939; DOI: 10.1371/journal.pcbi.1002253.
Abstract
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models to underlie the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions, and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements. Human beings and many other species redirect their gaze towards targets of interest through rapid gaze shifts known as saccades. These are made approximately three to four times every second, and larger saccades result from fast and concurrent movement of the animal's eyes and head. Experimental studies have revealed that during saccades the motor system follows certain principles, such as respecting a specific relationship between the relative contributions of the eye and head motor systems to the total gaze shift. Various researchers have hypothesized that these principles are implications of optimality criteria in the brain, but it remains unclear how the brain can learn such optimal behavior. We propose a new model that uses a plausible learning mechanism to satisfy an optimality criterion. We show that after learning, the model is able to reproduce motor behavior with biologically plausible properties. In addition, it predicts the nature of the learning signals. Further experimental research is necessary to test the validity of our model.
Affiliation(s)
- Sohrab Saeb
- Frankfurt Institute for Advanced Studies (FIAS), Goethe University Frankfurt, Germany.
46
Farshadmanesh F, Byrne P, Keith GP, Wang H, Corneil BD, Crawford JD. Cross-validated models of the relationships between neck muscle electromyography and three-dimensional head kinematics during gaze behavior. J Neurophysiol 2011;107:573-590. PMID: 21994269; DOI: 10.1152/jn.00315.2011.
Abstract
The object of this study was to model the relationship between neck electromyography (EMG) and three-dimensional (3-D) head kinematics during gaze behavior. In two monkeys, we recorded 3-D gaze, head orientation, and bilateral EMG activity in the sternocleidomastoid, splenius capitis, complexus, biventer cervicis, rectus capitis posterior major, and occipital capitis inferior muscles. Head-unrestrained animals fixated and made gaze saccades between targets within a 60° × 60° grid. We performed a stepwise regression in which polynomial model terms were retained/rejected based on their tendency to increase/decrease a cross-validation-based measure of model generalizability. This revealed several results that could not have been predicted from knowledge of musculoskeletal anatomy. During head holding, EMG activity in most muscles was related to horizontal head orientation, whereas fewer muscles correlated to vertical head orientation and none to small random variations in head torsion. A fourth-order polynomial model, with horizontal head orientation as the only independent variable, generalized nearly as well as higher order models. For head movements, we added time-varying linear and nonlinear perturbations in velocity and acceleration to the previously derived static (head holding) models. The static models still explained most of the EMG variance, but the additional motion terms, which included horizontal, vertical, and torsional contributions, significantly improved the results. Several coordinate systems were used for both static and dynamic analyses, with Fick coordinates showing a marginal (nonsignificant) advantage. Thus, during gaze fixations, recruitment within the neck muscles from which we recorded contributed primarily to position-dependent horizontal orientation terms in our data set, with more complex multidimensional contributions emerging during the head movements that accompany gaze shifts. These are crucial components of the late neuromuscular transformations in a complete model of the 3-D head-neck system and should help constrain the study of premotor signals for head control during gaze behaviors.
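The cross-validation-guided stepwise regression described here can be sketched generically: candidate polynomial terms are kept only if they lower the cross-validated error. This is a forward-selection sketch on synthetic data, not the authors' pipeline (their models also involved multiple head-orientation variables, several coordinate systems, and motion terms):

```python
import numpy as np

def cv_mse(X, y, k=5, seed=0):
    """k-fold cross-validated mean squared error of a least-squares fit."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))

def stepwise_select(x, y, max_order=6):
    """Greedy forward selection of polynomial orders of x: a term is
    retained only if it decreases the cross-validated error."""
    X = np.ones((len(x), 1))  # start with the constant term (order 0)
    best = cv_mse(X, y)
    kept = [0]
    for p in range(1, max_order + 1):
        Xc = np.column_stack([X, x ** p])
        score = cv_mse(Xc, y)
        if score < best:
            X, best, kept = Xc, score, kept + [p]
    return kept, best

# Demo on synthetic data: an EMG-like signal that is a quadratic function of
# normalized head orientation plus noise (entirely made up for illustration).
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)  # normalized head orientation
y = 0.5 + 0.8 * x + 0.4 * x ** 2 + rng.normal(0.0, 0.05, 200)
kept, err = stepwise_select(x, y)  # kept should include orders 1 and 2
```

Cross-validation naturally penalizes higher-order terms whose apparent fit does not generalize, which is the property the authors exploit to decide model order.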
Affiliation(s)
- Farshad Farshadmanesh
- York Center for Vision Research, Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Ontario
47
Populin LC, Rajala AZ. Target modality determines eye-head coordination in nonhuman primates: implications for gaze control. J Neurophysiol 2011;106:2000-2011. PMID: 21795625; DOI: 10.1152/jn.00331.2011.
Abstract
We have studied eye-head coordination in nonhuman primates with acoustic targets, after finding that they are unable to make accurate saccadic eye movements to targets of this type with the head restrained. Three male macaque monkeys with experience in localizing sounds for rewards by pointing their gaze to the perceived location of sources served as subjects. Visual targets were used as controls. The experimental sessions were configured to minimize the chances that the subject could predict the modality of the target, as well as its location and time of presentation. The data show that eye and head movements are coordinated differently to generate gaze shifts to acoustic targets. Chiefly, the head invariably started to move before the eye and contributed more to the gaze shift. These differences were more striking for gaze shifts of <20-25° in amplitude, to which the head contributes very little or not at all when the target is visual. Thus acoustic and visual targets trigger gaze shifts with different patterns of eye-head coordination. This, coupled with the anatomical evidence implicating the superior colliculus as the link between auditory spatial processing and the motor system, suggests that separate signals are likely generated within this midbrain structure.
Affiliation(s)
- Luis C Populin
- Department of Neuroscience, University of Wisconsin-Madison, Madison, Wisconsin, USA.
48
Welsh TN. The relationship between attentional capture and deviations in movement trajectories in a selective reaching task. Acta Psychol (Amst) 2011; 137:300-8. [PMID: 21507363 DOI: 10.1016/j.actpsy.2011.03.011] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2010] [Revised: 03/17/2011] [Accepted: 03/19/2011] [Indexed: 11/19/2022] Open
Abstract
According to action-centered models of attention, attention and action systems are tightly linked, such that the capture of attention by an object automatically initiates response-producing processes. In support of this link, studies have shown that movements deviate towards or away from non-target stimuli. These deviations are thought to emerge because attentional capture by non-target stimuli generates responses that summate with target responses to produce a combined movement vector. The present study tested attention-action coupling by examining movement trajectories in the presence of non-target stimuli that do or do not capture attention. Previous research has revealed that non-target cue stimuli capture attention only when they share critical features with the target; cues that do not share this feature do not capture attention. Following these findings, participants in the present study aimed movements to the location of a single white square (onset singleton target) or a single red square presented with two white squares (color singleton target). In separate blocks, targets were preceded by non-predictive cues that did or did not share the target feature (color or onset singleton cues). The critical finding was that trajectory effects mirrored the temporal interference effects, in that deviations were observed only when cue and target properties matched and were absent when they did not. These data provide clear support for the link between attentional capture and the activation of response-producing processes.
Affiliation(s)
- Timothy N Welsh
- Faculty of Physical Education and Health, University of Toronto, 55 Harbord St., Toronto, ON, Canada M5S 2W6.
49
50
Goonetilleke SC, Doherty TJ, Corneil BD. A within-trial measure of the stop signal reaction time in a head-unrestrained oculomotor countermanding task. J Neurophysiol 2010; 104:3677-90. [PMID: 20962073 DOI: 10.1152/jn.00495.2010] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The countermanding (or stop-signal) task, which requires the cancellation of an impending response on the infrequent presentation of a stop signal, enables study of the contextual control of movement generation and suppression. Here we present a novel, empirical measure of the time needed to cancel an impending gaze shift, obtained by recording neck muscle activity during a head-unrestrained oculomotor countermanding paradigm. On a subset of stop signal trials, subjects generated small head movements toward a target even though gaze remained stable due to a compensatory vestibulo-ocular reflex. On such trials, we observed a burst of antagonist neck muscle activity during the small head-only error. Such antagonist neck muscle activity served as an active braking pulse, as its magnitude scaled with the kinematics of the head-only error. This activity was selective for trials in which the head was arrested in mid-flight and did not appear on trials without a stop signal, on noncancelled stop signal trials when the gaze shift was completed, or on stop signal trials without head motion. Importantly, the timing of this antagonist activity related best to the onset of the stop signal (lagging it by ∼180 ms) and strongly correlated with behavioral estimates of the time needed to cancel a movement (the stop signal reaction time). These results are consistent with the notion that such selective antagonist neck muscle activity arises as a peripheral expression of the oculomotor stop process that successfully cancelled the gaze shift. Studying movement cancellation within nested systems like the head-unrestrained gaze-shifting system offers a unique opportunity for investigating the underlying neural mechanisms, as the overall goal (i.e., to cancel a gaze shift) can be achieved despite motion of other components; on such individual trials, the oculomotor stop process is expressed as an active braking pulse.
Affiliation(s)
- Samanthi C Goonetilleke
- CIHR Group in Action and Perception, Department of Physiology and Pharmacology, University of Western Ontario, London, Ontario, Canada.