1
Abstract
On average, we redirect our gaze at a frequency of about 3 Hz. In real life, gaze shifts consist of eye and head movements. Much research has focused on how the accuracy of eye movements is monitored and calibrated. By contrast, little is known about how head movements remain accurate. I wondered whether serial dependencies between artificially induced errors in head movement targeting and the immediately following head movement might recalibrate movement accuracy. I also asked whether head movement targeting errors would influence visual localization. To this end, participants wore a head-mounted display and performed head movements to targets, which were displaced as soon as the start of the head movement was detected. I found that target displacements influenced head movement amplitudes in the same trial, indicating that participants could adjust their movement online to reach the new target location. However, I also found serial dependencies between the target displacement in trial n-1 and head movement amplitudes in the following trial n. I did not find serial dependencies between target displacements and visuomotor localization. The results reveal that serial dependencies recalibrate head movement accuracy.
Affiliation(s)
- Eckart Zimmermann
- Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
2
Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. [PMID: 31533015 DOI: 10.1152/jn.00072.2019]
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
3
Delle Monache S, Lacquaniti F, Bosco G. Ocular tracking of occluded ballistic trajectories: Effects of visual context and of target law of motion. J Vis 2019; 19:13. [PMID: 30952164 DOI: 10.1167/19.4.13]
Abstract
In tracking a moving target, the visual context may provide cues for an observer to interpret the causal nature of the target motion and extract features to which the visual system is weakly sensitive, such as target acceleration. This information could be critical when vision of the target is temporarily impeded, requiring visual motion extrapolation processes. Here we investigated how visual context influences ocular tracking of motion either congruent or not with natural gravity. To this end, 28 subjects tracked computer-simulated ballistic trajectories either perturbed in the descending segment with altered gravity effects (0g/2g) or retaining natural-like motion (1g). Shortly after the perturbation (550 ms), targets disappeared for either 450 or 650 ms and became visible again until landing. Target motion occurred with either quasi-realistic pictorial cues or a uniform background, presented in counterbalanced order. We analyzed saccadic and pursuit movements after 0g and 2g target-motion perturbations and for corresponding intervals of unperturbed 1g trajectories, as well as after corresponding occlusions. Moreover, we considered the eye-to-target distance at target reappearance. Tracking parameters differed significantly between scenarios: With a neutral background, eye movements did not depend consistently on target motion, whereas with pictorial background they showed significant dependence, denoting better tracking of accelerated targets. These results suggest that oculomotor control is tuned to realistic properties of the visual scene.
Affiliation(s)
- Sergio Delle Monache
- Department of Systems Medicine, Neuroscience Section, University of Rome Tor Vergata, Rome, Italy; Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
- Francesco Lacquaniti
- Department of Systems Medicine, Neuroscience Section, University of Rome Tor Vergata, Rome, Italy; Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
- Gianfranco Bosco
- Department of Systems Medicine, Neuroscience Section, University of Rome Tor Vergata, Rome, Italy; Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy; Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
4
Action scheduling in multitasking: A multi-phase framework of response-order control. Atten Percept Psychophys 2019; 81:1464-1487. [DOI: 10.3758/s13414-018-01660-w]
5
Stamenkovic A, Stapley PJ, Robins R, Hollands MA. Do postural constraints affect eye, head, and arm coordination? J Neurophysiol 2018; 120:2066-2082. [DOI: 10.1152/jn.00200.2018]
Abstract
If a whole body reaching task is produced when standing or adopting challenging postures, it is unclear whether changes in attentional demands or the sensorimotor integration necessary for balance control influence the interaction between visuomotor and postural components of the movement. Is gaze control prioritized by the central nervous system (CNS) to produce coordinated eye movements with the head and whole body regardless of movement context? Considering the coupled nature of visuomotor and whole body postural control during action, this study aimed to understand how changing equilibrium constraints (in the form of different postural configurations) influenced the initiation of eye, head, and arm movements. We quantified the eye-head metrics and segmental kinematics as participants executed either isolated gaze shifts or whole body reaching movements to visual targets. In total, four postural configurations were compared: seated, natural stance, with the feet together (narrow stance), or while balancing on a wooden beam. Contrary to our initial predictions, the lack of distinct changes in eye-head metrics; timing of eye, head, and arm movement initiation; and gaze accuracy, in spite of kinematic differences, suggests that the CNS integrates postural constraints into the control necessary to initiate gaze shifts. This may be achieved by adopting a whole body gaze strategy that allows for the successful completion of both gaze and reaching goals. NEW & NOTEWORTHY Differences in sequence of movement among the eye, head, and arm have been shown across various paradigms during reaching. Here we show that distinct changes in eye characteristics and movement sequence, coupled with stereotyped profiles of head and gaze movement, are not observed when adopting postures requiring changes to balance constraints. This suggests that a whole body gaze strategy is prioritized by the central nervous system with postural control subservient to gaze stability requirements.
Affiliation(s)
- Alexander Stamenkovic
- Neural Control of Movement Laboratory, School of Medicine, Faculty of Science, Medicine and Health, University of Wollongong, Wollongong, Australia
- Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, Australia
- Paul J. Stapley
- Neural Control of Movement Laboratory, School of Medicine, Faculty of Science, Medicine and Health, University of Wollongong, Wollongong, Australia
- Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, Australia
- Rebecca Robins
- Research Institute for Sports and Exercise Sciences, School of Sport and Exercise Sciences, Faculty of Science, Liverpool John Moores University, Liverpool, United Kingdom
- Mark A. Hollands
- Research Institute for Sports and Exercise Sciences, School of Sport and Exercise Sciences, Faculty of Science, Liverpool John Moores University, Liverpool, United Kingdom
6
Trunk involvement in performing upper extremity activities while seated in neurological patients with a flaccid trunk - A review. Gait Posture 2018. [PMID: 29524797 DOI: 10.1016/j.gaitpost.2018.02.028]
Abstract
BACKGROUND Trunk control is essential during seated activities. The trunk interacts with the upper extremities (UE) and head by being part of a kinematic chain and by providing a stable basis. When trunk control becomes impaired, it may have consequences for the execution of UE tasks. AIM To review trunk involvement in body movement and stability when performing seated activities and its relation with UE and head movements in neurological patients with a flaccid trunk, with a focus on childhood and development with age. METHODS AND PROCEDURES A search using PubMed was conducted and 32 out of 188 potentially eligible articles were included. OUTCOMES AND RESULTS Patients with a flaccid trunk (e.g. with spinal cord injury or cerebral palsy) tend to involve the trunk earlier while reaching than healthy persons. Different balance strategies are observed in different types of patients, like using the contralateral arm as counterweight, eliminating degrees of freedom, or reducing movement speed. CONCLUSIONS AND IMPLICATIONS The key role of the trunk in performing activities should be kept in mind when developing interventions to improve seated task performance in neurological patients with a flaccid trunk.
7
Abstract
Traditionally, movement kinematics are thought to reflect physical properties (e.g., position and time) of movement targets. However, targets may also evoke intentional goals like “to be in a certain position at a given time”. Therefore, kinematics may be viewed not as a reaction to stimuli, but rather as the means to attain intended goals. In the present study, participants performed continuous reversal movements. It was first shown that kinematics towards temporal and spatial targets differ from kinematics away from those targets. Further, kinematics are different for movements to temporal (relatively short movement times, high and late peak velocity) and spatial (relatively long movement times, early peak velocity) targets (Experiments 1 and 2). In order to obtain evidence for the influence of goal representations on kinematics, combinations of temporal and spatial targets were investigated in Experiments 3 and 4. Specifically, the conditions were: spatial targets always present with varying temporal targets, temporal targets always present with varying spatial targets, and combined and separate spatial and temporal targets. Not only the physical features, but also how the targets were represented as movement goals, were important. Thus, movement kinematics do not simply reflect stimulus properties, but rather the representation of the intended goal.
Affiliation(s)
- Martina Rieger
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Psychology, Cognition and Action, Leipzig, Germany.
8
Esposti R, Bruttini C, Bolzoni F, Cavallari P. Anticipatory Postural Adjustments associated with reaching movements are programmed according to the availability of visual information. Exp Brain Res 2017; 235:1349-1360. [DOI: 10.1007/s00221-017-4898-3]
9
Coats RO, Fath AJ, Astill SL, Wann JP. Eye and hand movement strategies in older adults during a complex reaching task. Exp Brain Res 2015; 234:533-47. [PMID: 26537959 DOI: 10.1007/s00221-015-4474-7]
Abstract
The kinematics of upper limb movements and the coordination of eye and hand movements are affected by ageing. These age differences are exacerbated when task difficulty is increased, but the exact nature of these differences remains to be established. We examined the performance of 12 older adults (mean age = 74) and 11 younger adults (mean age = 20) on a multi-phase prehension task. Participants had to reach for a target ball with their preferred hand, pick it up and place it in a tray, then reach for a second target ball and place that in the same tray. On half the trials (stabilising condition), participants were required to hold the tray just above the surface of the table with their non-preferred hand and keep it as still as possible. Hand and eye movements were recorded. Older adults took longer to complete their movements and reached lower peak velocities than the younger adults. Group differences were most apparent in the stabilising condition, suggesting that the added complexity had a greater effect on the performance of the older adults than the young. During pickup, older adults preferred to make an eye movement to the next target as soon as possible, but spent longer fixating the current target during placement, when accuracy requirements were higher. These latter observations suggest that older adults employed a task-dependent eye movement strategy, looking quickly to the next target to allow more time for planning and execution when possible, but fixating on their hand and successful placement of the ball when necessary.
Affiliation(s)
- Rachel O Coats
- School of Psychology, Faculty of Medicine and Health, University of Leeds, Leeds, LS2 9JT, UK
- Aaron J Fath
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Sarah L Astill
- School of Biomedical Sciences, Faculty of Biological Sciences, University of Leeds, Leeds, UK
- John P Wann
- Department of Psychology, Royal Holloway University London, Egham, UK
10
Eye movements and manual interception of ballistic trajectories: effects of law of motion perturbations and occlusions. Exp Brain Res 2014; 233:359-74. [DOI: 10.1007/s00221-014-4120-9]
11
Hong SK, Myung R. Temporal coupling of eye gaze and cursor on key buttons during text-entry tasks. Percept Mot Skills 2014; 118:86-95. [PMID: 24724515 DOI: 10.2466/22.25.pms.118k15w2]
Abstract
Coupling patterns of eye gaze and cursor movements on key buttons were investigated during a practical text-entry task. A text-entry task can be described as a series of goal-directed aiming tasks. In a typical goal-directed aiming task, eye movements generally lead cursor movements; eye gaze arrives at the target and starts moving to the next target before the cursor. However, in 10% of cases in this experiment, the cursor arrived at the target earlier than the eye gaze did, regardless of text entry speed. Eye gaze started toward the next target key button after the start of the cursor's movement in 57% of cases, which also varied with text-entry speed. The coupling patterns, which differed from those observed in typical goal-directed aiming tasks, might be due to the speed requirement of practical text-entry tasks, memory of key button positions, and the use of peripheral vision.
12
McKay SM, Fraser JE, Maki BE. Effects of uni- and multimodal cueing on handrail grasping and associated gaze behavior in older adults. Accid Anal Prev 2013; 59:407-414. [PMID: 23896044 DOI: 10.1016/j.aap.2013.06.031]
Abstract
INTRODUCTION It appears that age-related changes in visual attention may impair ability to acquire the visuospatial information needed to grasp a handrail effectively in response to sudden loss of balance. This, in turn, may increase risk of falling. To counter this problem, we developed a proximity-triggered cueing system that provides a visual cue (flashing lights) and/or verbal cue ("attention use the handrail") to attract attention to the handrail. This study examined the effect of handrail cueing on grasping of the rail and associated gaze behavior in a large cohort (n=160) of independent and ambulatory older adults (age 64-80). METHODS The handrail and cueing system was mounted on a large (2 m×6 m) motion platform configured to simulate a real-life environment. Subjects performed a daily-life task that required walking to the end of the platform, which was triggered to perturb balance by moving suddenly when they were adjacent to the rail. To prevent adaptation, each subject performed only one trial, and a deception was used to ensure that the perturbation was truly unexpected. Each subject was assigned to one of four cue conditions: visual, verbal, multimodal (visual-plus-verbal) or no cue. RESULTS Verbal cueing attracted overt visual attention to the handrail and markedly increased proactive grasping (prior to the onset of the balance perturbation) particularly when delivered unimodally. Subjects were otherwise much more likely to grasp the rail in reaction to the perturbation. A possible trend for visual cueing to improve the accuracy of these reactions was offset by adverse effects on reaction speed and on frequency of proactive grasping. CONCLUSIONS The results support the viability of using unimodal verbal cueing to reduce fall risk by increasing proactive handrail use. Conversely, they do not strongly support use of visual cueing (either alone or in combination with verbal cueing) and suggest that it may even have adverse effects. Further study is needed to evaluate effects of handrail cueing in a wide range of populations and real-life settings.
Affiliation(s)
- Sandra M McKay
- Toronto Rehabilitation Institute (University Health Network), Canada; Centre for Studies in Aging, Sunnybrook Health Sciences Centre, Canada
13
Effects of spatial-memory decay and dual-task interference on perturbation-evoked reach-to-grasp reactions in the absence of online visual feedback. Hum Mov Sci 2013; 32:328-42. [DOI: 10.1016/j.humov.2012.11.001]
14
White O, Lefèvre P, Wing AM, Bracewell RM, Thonnard JL. Active collisions in altered gravity reveal eye-hand coordination strategies. PLoS One 2012; 7:e44291. [PMID: 22984488 PMCID: PMC3440428 DOI: 10.1371/journal.pone.0044291]
Abstract
Most object manipulation tasks involve a series of actions demarcated by mechanical contact events, and gaze is usually directed to the locations of these events as the task unfolds. Typically, gaze foveates the target 200 ms in advance of the contact. This strategy improves manual accuracy through visual feedback and the use of gaze-related signals to guide the hand/object. Many studies have investigated eye-hand coordination in experimental and natural tasks; most of them highlighted a strong link between eye movements and hand or object kinematics. In this experiment, we analyzed gaze strategies in a collision task but in a very challenging dynamical context. Participants performed collisions while they were exposed to alternating episodes of microgravity, hypergravity and normal gravity. First, by isolating the effects of inertia in microgravity, we found that peak hand acceleration marked the transition between two modes of grip force control. Participants exerted grip forces that paralleled load force profiles, and then increased grip up to a maximum shifted after the collision. Second, we found that the oculomotor strategy adapted visual feedback of the controlled object around the collision, as demonstrated by longer durations of fixation after collision in new gravitational environments. Finally, despite large variability of arm dynamics in altered gravity, we found that saccades were remarkably time-locked to the peak hand acceleration in all conditions. In conclusion, altered gravity allowed light to be shed on predictive mechanisms used by the central nervous system to coordinate gaze, hand and grip motor actions during a mixed task that involved transport of an object and high impact loads.
Affiliation(s)
- Olivier White
- Unité de Formation et de Recherche en Sciences et Techniques des Activités Physiques et Sportives, Université de Bourgogne, Dijon, France
15
Germain-Robitaille M, Terrier R, Forestier N, Teasdale N. Hand–head coordination changes from discrete to reciprocal hand movements for various difficulty settings. Neurosci Lett 2012; 521:1-5. [DOI: 10.1016/j.neulet.2012.04.074]
16
Cheng KC, McKay SM, King EC, Maki BE. Reaching to recover balance in unpredictable circumstances: Is online visual control of the reach-to-grasp reaction necessary or sufficient? Exp Brain Res 2012; 218:589-99. [DOI: 10.1007/s00221-012-3051-6]
17
Does the “eyes lead the hand” principle apply to reach-to-grasp movements evoked by unexpected balance perturbations? Hum Mov Sci 2011; 30:368-83. [DOI: 10.1016/j.humov.2010.07.005]
18
Carnahan H, Marteniuk RG. Hand, Eye, and Head Coordination While Pointing to Perturbed Targets. J Mot Behav 2010; 26:135-46. [PMID: 15753066 DOI: 10.1080/00222895.1994.9941668]
Abstract
Normal human subjects were required to manually point to small visual targets that suddenly changed location upon finger movement initiation. They pointed either as fast or as accurately as possible. Movements of the eyes were measured by electrooculography, and the movements of the unrestrained limb and head were monitored by an optoelectric system (WATSMART), which allowed for the analysis of kinematic parameters in three-dimensional space. The temporal and kinematic reorganization of each body part in response to the target perturbations was variable, which indicated independent control for each part of the system. That is, the timing and nature of the reorganization varied for each body part. In addition, the pattern of reorganization depended upon the speed and accuracy demands of the movement task. As well, the movement termination patterns (eyes finished first, the finger reached the target, then the head stopped moving) were extremely consistent, indicating that movement termination may be a controlled variable. Finally, no evidence was found to suggest that visual information was used to amend arm movements early (before peak velocity) in the trajectory.
Affiliation(s)
- H Carnahan
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
19
Brouwer AM, Knill DC. Humans use visual and remembered information about object location to plan pointing movements. J Vis 2009; 9:24.1-19. [PMID: 19271894 DOI: 10.1167/9.1.24]
Abstract
We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to a target. In some trials, the target shifted unnoticed while the finger made the first movement. We regressed subjects' movement trajectories against the initial and shifted target locations to infer the weights that subjects gave to remembered and visual locations. We measured the reliability of vision and memory by adding conditions in which the target only appeared after subjects made the first movement (vision only) and in which the target was initially present but disappeared during the first movement (memory only). When both visual and remembered information were available, movement trajectories were biased to the remembered target location. The different weights that subjects gave to memory and visual information on average matched the weights predicted by the variance associated with the use of vision and memory alone. This suggests that humans integrate remembered information about object locations with peripheral visual information by taking into account the relative reliability of the two sources of information.
20
Suzuki M, Izawa A, Takahashi K, Yamazaki Y. The coordination of eye, head, and arm movements during rapid gaze orienting and arm pointing. Exp Brain Res 2007; 184:579-85. [PMID: 18060545 DOI: 10.1007/s00221-007-1222-7]
Abstract
This study aimed to investigate the coordination of multiple control actions involved in human horizontal gaze orienting or arm pointing to a common visual target. The subjects performed a visually triggered reaction time task in three conditions: (1) gaze orienting with a combined eye saccade and head rotation (EH), (2) arm pointing with gaze orienting by an eye saccade without head rotation (EA), and (3) arm pointing with gaze orienting by a combined eye saccade and head rotation (EHA). The subjects initiated eye movement first with nearly constant latencies across all tasks, followed by head movement in the EH task, by arm movement in the EA task, and by head and then arm movements in the EHA task. The differences of onset times between eye and head movements in the EH task, and between eye and arm movements in the EA task, were both preserved in the EHA task, leading to an eye-to-head-to-arm sequence. The onset latencies of eye and head in the EH task, eye and arm in the EA task, and eye, head and arm in the EHA task, were all positively correlated on a trial-by-trial basis. In the EHA task, however, the correlation coefficients of eye-head coupling and of eye-arm coupling were reduced and increased, respectively, compared to those estimated in the two-effector conditions (EH, EA). These results suggest that motor commands for different motor effectors are linked differently to achieve coordination in a task-dependent manner.
Affiliation(s)
- Masataka Suzuki
- Department of Psychology, Kinjo Gakuin University, Omori 2-1723, Moriyama, Nagoya 463-8521, Japan.
21
Stritzke M, Trommershäuser J. Eye movements during rapid pointing under risk. Vision Res 2007; 47:2000-9. [PMID: 17532361] [DOI: 10.1016/j.visres.2007.04.013]
Abstract
We recorded saccadic eye movements during visually guided rapid pointing movements under risk. We intended to determine whether saccadic end points are necessarily tied to the goals of rapid pointing movements or whether, when the visual features of a display and the goals of a pointing movement differ, saccades are driven by low-level features of the visual stimulus. Subjects pointed at a stimulus configuration consisting of a target region and a penalty region. Each target hit yielded a gain of points; each penalty hit incurred a loss of points. Late responses were penalized. Either the target or the penalty region was indicated by a disk that differed significantly from the background in luminance, while the other region was indicated by a thin circle. In subsequent experiments, we varied the visual salience of the stimulus configuration and found that manual responses followed near-optimal strategies maximizing expected gain, independent of the salience of the target region. We suggest that the final eye position is partially pre-programmed prior to hand movement initiation. While manipulations of the visual salience of the display determined the end point of the initial saccade, subsequent saccades were driven by the goal of the hand movement.
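The near-optimal expected-gain maximization this abstract refers to can be sketched in one dimension: with Gaussian endpoint scatter, the expected score of an aim point is the payoff of each region weighted by the probability of landing inside it. All numbers below (region extents, payoffs, motor variance) are illustrative assumptions, not values from the study.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_gain(aim, sigma, target=(0.0, 2.0), penalty=(-1.0, 0.0),
                  gain=100.0, loss=-500.0):
    """Expected score of aiming at `aim` with endpoint scatter `sigma`:
    each region's payoff times the probability of landing inside it."""
    p_target = phi((target[1] - aim) / sigma) - phi((target[0] - aim) / sigma)
    p_penalty = phi((penalty[1] - aim) / sigma) - phi((penalty[0] - aim) / sigma)
    return gain * p_target + loss * p_penalty
```

With a costly penalty region, this model predicts that subjects should aim slightly away from the penalty rather than at the target centre, the kind of shift the manual responses showed.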
Affiliation(s)
- Martin Stritzke
- Giessen University, Department of Psychology, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany.
22
Kim KH, Gillespie RB, Martin BJ. Head movement control in visually guided tasks: postural goal and optimality. Comput Biol Med 2006; 37:1009-19. [PMID: 17067566] [DOI: 10.1016/j.compbiomed.2006.08.019]
Abstract
This work investigates the control of horizontal head movements in the context of unconstrained visually guided head and arm/finger aiming tasks. In a first experiment, the head was free to move while gaze was directed at randomly presented eccentric targets distributed horizontally (0 degrees-120 degrees) at eye level. In a second experiment, the horizontal head orientation was constrained to predetermined positions (0 degrees, 15 degrees, 30 degrees, 45 degrees or 60 degrees rightward) while the right index finger aimed at targets with the arm fully extended. Kinematics of head movements in gaze displacements exhibits an initial component weakly correlated with target position, followed by multiple corrections. Since the eyes are assumed to already be aimed at the target when the corrections occur, it is suggested that one goal of head movement control is to achieve a desired final orientation (posture). This hypothesis is supported by results from the second experiment that reveal an association between eye/head orientation angles and errors exhibited in the visuo-spatial representation of the environment. The minimization of error then underlies the control of head movement as a postural response optimized for a given target and task condition.
Affiliation(s)
- K Han Kim
- Human Motion Simulation Laboratory, Center for Ergonomics, The University of Michigan, 1205 Beal Avenue, Ann Arbor, MI 48109-2117, USA
23
Carey DP, Della Sala S, Ietswaart M. Neuropsychological perspectives on eye-hand coordination in visually-guided reaching. Prog Brain Res 2003; 140:311-27. [PMID: 12508599] [DOI: 10.1016/s0079-6123(02)40059-3]
Abstract
Substantial progress has been made in understanding the neural control of movement in the past 30 years. Lower cost technology for tracking movements of the eyes and the hands has increased our understanding of these two systems and their interactions in both neurologically intact individuals and non-human primates. Nevertheless the neuropsychology of eye-hand coordination during visually-guided tasks such as reaching and grasping remains relatively understudied. This chapter reviews some of the relevant neurophysiology and neuropsychology of eye-hand coordination during visually-guided reaching. Current models emphasising coordinate transformations are discussed in light of new patient data showing a particular type of failure of eye-hand coordination during reaching.
Affiliation(s)
- David P Carey
- Neuropsychology Research Group, Department of Psychology, University of Aberdeen, Old Aberdeen AB24 2UB, UK
24
Abstract
Complex learned motor sequences can be composed of a combination of a small number of elementary actions. To investigate how the brain represents such sequences, we devised an oculomotor sequence task in which the monkey had to choose the target solely by the sequential context, not by the current stimulus combination. We found that many neurons in the supplementary eye field (SEF) became active with a specific target direction (D neuron) or a specific target/distractor combination (C neuron). Furthermore, such activity was often selective for one among several sequences that included the combination (S neuron). These results suggest that the SEF contributes to the generation of saccades in many learned sequences.
Affiliation(s)
- Xiaofeng Lu
- Department of Physiology, Juntendo University, School of Medicine, 2-1-1 Hongo, Bunkyo-ku, 113-8421, Tokyo, Japan
25
Abstract
In a number of studies, we have demonstrated that the spatial-temporal coupling of eye and hand movements is optimal for the pickup of visual information about the position of the hand and the target late in the hand's trajectory. Several experiments designed to examine temporal coupling have shown that the eyes arrive at the target area concurrently with the hand achieving peak acceleration. Between the time the hand reached peak velocity and the end of the movement, increased variability in the position of the shoulder and the elbow was accompanied by a decreased spatial variability in the hand. Presumably, this reduction in variability was due to the use of retinal and extra-retinal information about the relative positions of the eye, hand and target. However, the hand does not appear to be a slave to the eye. For example, we have been able to decouple eye movements and hand movements using Müller-Lyer configurations as targets. Predictable bias, found in primary and corrective saccadic eye movements, was not found for hand movements, if on-line visual information about the target was available during aiming. That is, the hand remained accurate even when the eye had a tendency to undershoot or overshoot the target position. However, biases of the hand were evident, at least in the initial portion of an aiming movement, when vision of the target was removed and vision of the hand remained. These findings accent the versatility of human motor control and have implications for current models of visual processing and limb control.
Affiliation(s)
- G Binsted
- Department of Psychology, University of Alberta, Edmonton, Canada
26
Hollands MA, Marple-Horvat DE. Coordination of eye and leg movements during visually guided stepping. J Mot Behav 2001; 33:205-16. [PMID: 11404215] [DOI: 10.1080/00222890109603151]
Abstract
In the present study, 2 related hypotheses were tested: first, that vision is used in a feedforward control mode during precision stepping onto visual targets and, second, that the oculomotor and locomotor control centers interact to produce coordinated eye and leg movements during that task. Participants' (N = 4) eye movements and step cycle transition events were monitored while they performed a task requiring precise foot placement at every step onto irregularly placed stepping stones under conditions in which the availability of visual information was either restricted or intermittently removed altogether. Accurate saccades, followed by accurate steps, to the next footfall target were almost always made even when the information had been invisible for as long as 500 ms. Despite delays in footlift caused by the temporary removal (and subsequent reinstatement) of visual information, the mean interval between the start of the eye movement and the start of the swing toward a target did not vary significantly (p >.05). In contrast, the mean interval between saccade onset away from a target and a foot landing on that target (stance onset) did vary significantly (p <.05) under the different experimental conditions. Those results support the stated hypotheses.
Affiliation(s)
- M A Hollands
- Neural Control Laboratory, Department of Kinesiology, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.
27
Elliott D, Helsen WF, Chua R. A century later: Woodworth's (1899) two-component model of goal-directed aiming. Psychol Bull 2001; 127:342-57. [PMID: 11393300] [DOI: 10.1037/0033-2909.127.3.342]
Abstract
In 1899, R. S. Woodworth published a seminal monograph, "The Accuracy of Voluntary Movement." As well as making a number of important empirical contributions, Woodworth presented a model of speed-accuracy relations in the control of upper limb movements. The model has come to be known as the two-component model because the control of speeded limb movements was hypothesized to entail both a central and a feedback-based component. Woodworth's (1899) ideas about the control of rapid aiming movements are evaluated in the context of current empirical and theoretical contributions.
Affiliation(s)
- D Elliott
- Department of Kinesiology, McMaster University, Hamilton, Ontario, Canada.
28
Abstract
We sought to determine the effectiveness of head posture as a contextual cue to facilitate adaptive transitions in manual control during visuomotor distortions. Subjects performed arm pointing movements by drawing on a digitizing tablet, with targets and movement trajectories displayed in real time on a computer monitor. Adaptation was induced by presenting the trajectories in an altered gain format on the monitor. The subjects were shown visual displays of their movements that corresponded to either 0.5 or 1.5 scaling of the movements made. Subjects were assigned to three groups: the head orientation group tilted the head towards the right shoulder when drawing under a 0.5 gain of display and towards the left shoulder when drawing under a 1.5 gain of display; the target orientation group had the home and target positions rotated counterclockwise when drawing under the 0.5 gain and clockwise for the 1.5 gain; the arm posture group changed the elbow angle of the arm they were not drawing with from full flexion to full extension with 0.5 and 1.5 gain display changes. To determine if contextual cues were associated with display alternations, the gain changes were returned to the standard (1.0) display. Aftereffects were assessed to determine the efficacy of the head orientation contextual cue compared to the two control cues. The head orientation cue was effectively associated with the multiple gains. The target orientation cue also demonstrated some effectiveness while the arm posture cue did not. The results demonstrate that contextual cues can be used to switch between multiple adaptive states. These data provide support for the idea that static head orientation information is a crucial component to the arm adaptation process. These data further define the functional linkage between head posture and arm pointing movements.
Affiliation(s)
- R D Seidler
- Motor Control Laboratory, Arizona State University, PO Box 870404, Tempe, AZ 85287-0404, USA
29
Helsen WF, Elliott D, Starkes JL, Ricker KL. Coupling of eye, finger, elbow, and shoulder movements during manual aiming. J Mot Behav 2000; 32:241-8. [PMID: 10975272] [DOI: 10.1080/00222890009601375]
Abstract
Temporal and spatial coupling of point of gaze (PG) and movements of the finger, elbow, and shoulder during a speeded aiming task were examined. Ten participants completed 40-cm aiming movements with the right arm, in a situation that allowed free movement of the eyes, head, arm, and trunk. On the majority of trials, a large initial saccade undershot the target slightly, and 1 or more smaller corrective saccades brought the eyes to the target position. The finger, elbow, and shoulder exhibited a similar pattern of undershooting their final positions, followed by small corrective movements. Eye movements usually preceded limb movements, and the eyes always arrived at the target well in advance of the finger. There was a clear temporal coupling between primary saccade completion and peak acceleration of the finger, elbow, and shoulder. The initiation of limb-segment movement usually occurred in a proximal-to-distal pattern. Increased variability in elbow and shoulder position as the movement progressed may have served to reduce variability in finger position. The spatial-temporal coupling of PG with the 3 limb segments was optimal for the pick up of visual information about the position of the finger and the target late in the movement.
Affiliation(s)
- W F Helsen
- Department of Kinesiology, Katholieke Universiteit Leuven, Belgium.
30
31
Abstract
Saccadic eye and hand movements made to step displacements in target position were measured under conditions designed to dissociate the output of the ocular and manual motor systems. This was accomplished by having subjects look and point, either with or without vision of the hand (closed or open loop, respectively) at peripheral targets starting from independent initial positions. The results showed that the amplitude of open loop pointing responses increased in size when accompanied by saccades that were larger than the required hand movement. Providing the subject with visual feedback of the hand during the response or asking them to visually fixate caused this effect to disappear. Taken together, this pattern of results suggests that when vision of the hand is unavailable the programming of saccade metrics influences the control of simultaneously produced pointing movements in an on-line manner.
32
Effects of Target Eccentricity on Temporal Costs of Point of Gaze and the Hand in Aiming. Motor Control 1997. [DOI: 10.1123/mcj.1.2.161]
33
Abstract
For prehensile tasks in which objects are located beyond the normal reaching space, the trunk is bent forward to assist in the transport of the wrist to the object. Such task behaviors raise complex motor control issues, such as how the trunk movement is incorporated into the motor plan. In this experiment, seated subjects were asked to reach and grasp a small and a large object placed on a table located beyond their maximal reach. Forward trunk bending was required to extend the reach distance. For such reaching movements, the wrist velocity showed a bell-shaped profile similar to those seen when the arm is the sole transport agent. In most trials, the trunk was the first to initiate movement, although there was no strict pattern of initiation order. The transport data showed that trunk and arm movement components were decoupled at the end of the reach. While the object was being grasped and lifted, the trunk continued moving for approximately 180 ms after the grasp. Wrist deceleration time, expressed in absolute and relative values, was sensitive to object size. The time from maximum peak aperture to the end of wrist movement was also significantly longer for grasping the small compared to the large object. No such relationships were observed for the trunk. Temporal coupling was observed only between the grip and the wrist transport component. Time to maximum aperture was significantly correlated with time to peak wrist deceleration and only rarely with time to peak trunk deceleration. When the trunk participates in the transport of the wrist to an object, these findings suggest that only the wrist component is directly related to the achievement of the grasp. Although the trunk assisted the arm in reaching the object, the kinematic parameters recorded did not reveal any evidence of direct coupling. These data suggest that planning takes place at the level of the hand and that the endpoint is the primary controlled variable.
Affiliation(s)
- M Saling
- Motor Control Laboratory, Arizona State University, Tempe 85287-0404, USA
34
Smeets JB, Hayhoe MM, Ballard DH. Goal-directed arm movements change eye-head coordination. Exp Brain Res 1996; 109:434-40. [PMID: 8817273] [DOI: 10.1007/bf00229627]
Abstract
We compared the head movements accompanying gaze shifts while our subjects executed different manual operations, requiring gaze shifts of about 30 degrees. The different tasks yielded different latencies between gaze shifts and hand movements, and different maximum velocities of the hand. These changes in eye-hand coordination had a clear effect on eye-head coordination: the latencies and maximum velocities of head and hand were correlated. The same correlation between movements of the head and hand was also found within a task. Therefore, the changes in eye-head coordination are not caused by changes in the strategy of the subjects. We conclude that head movements and saccades during gaze shifts are not based on the same command: head movements depend both on the actual saccade and on possible future gaze shifts.
Affiliation(s)
- J B Smeets
- Vakgroep Fysiologie, Erasmus Universiteit Rotterdam, The Netherlands.
35
Rossetti Y, Koga K, Mano T. Prismatic displacement of vision induces transient changes in the timing of eye-hand coordination. Percept Psychophys 1993; 54:355-64. [PMID: 8414894] [DOI: 10.3758/bf03205270]
Abstract
Eye-hand coordination was investigated during a task of finger pointing toward visual targets viewed through wedge prisms. Hand and eye latencies and movement times were identical during the control condition and at the end of prism exposure. A temporal reorganization of eye and hand movements was observed during the course of adaptation. During the earlier stage of prism exposure, the time gap between the end of the eye saccade and the onset of hand movement was increased from a control time of 23 to 68 msec. This suggests that a time-consuming process occurred during the early prism-exposure period. The evolution of this time gap was correlated with the evolution of pointing errors during the early stage of prism exposure, in such a way that both measures increased at the onset of prism exposure and decreased almost back to control values within about 10 trials. However, spatial error was not entirely corrected, even late in prism exposure when the temporal organization of eye and hand had returned to baseline. These data suggest that two different adaptive mechanisms were at work: a rather short-term mechanism, involved in normal coordination of spatially aligned eye and hand systems, and a long-term mechanism, responsible for remapping spatially misaligned systems. The former mechanism can be strategically employed to quickly optimize accuracy in a situation involving misalignment, but completely adaptive behavior must await the slower-acting latter mechanism to achieve long-term spatial alignment.
36
Brown SH, Kessler KR, Hefter H, Cooke JD, Freund HJ. Role of the cerebellum in visuomotor coordination. I. Delayed eye and arm initiation in patients with mild cerebellar ataxia. Exp Brain Res 1993; 94:478-88. [PMID: 8359262] [DOI: 10.1007/bf00230206]
Abstract
The initiation of coupled eye and arm movements was studied in six patients with mild cerebellar dysfunction and in six age-matched control subjects. The experimental paradigm consisted of 40 deg step-tracking elbow movements made under different feedback conditions. During tracking with the eyes only, saccadic latencies in patients were within normal limits. When patients were required to make coordinated eye and arm movements, however, eye movement onset was significantly delayed. In addition, removal of visual information about arm versus target position had a pronounced differential effect on movement latencies. When the target was extinguished for 3 s immediately following a step change in target position, both eye and arm onset times were further prolonged compared to movements made to continuously visible targets. When visual information concerning arm position was removed, onset times were reduced. Eye and arm latencies in control subjects were unaffected by changes in visual feedback. The results of this study clearly demonstrate that, in contrast to earlier reports of normal saccadic latencies associated with cerebellar dysfunction, initiation of both eye and arm movements is prolonged during coordinated visuomotor tracking, thus supporting a coordinative role for the cerebellum during oculo-manual tracking tasks.
Affiliation(s)
- S H Brown
- Department of Physiology, University of Western Ontario, London, Canada