1. Improving Haptic Response for Contextual Human Robot Interaction. Sensors 2022; 22:2040. [PMID: 35271188] [PMCID: PMC8914947] [DOI: 10.3390/s22052040]
Abstract
For haptic interaction, a user in a virtual environment needs to interact with proxies attached to a robot. The device must be at the location defined in the virtual environment at the right time. However, due to device limitations, delays are unavoidable. One solution for improving the device response is to infer the human's intended motion and move the robot toward the desired goal at the earliest possible time. This paper presents an experimental study to improve the prediction time and reduce the time the robot takes to reach the desired position. We developed motion strategies based on hand motion and eye-gaze direction to determine the point of user interaction in a virtual environment. To assess the performance of the strategies, we conducted a subject-based experiment using an exergame for reach-and-grab tasks designed for upper limb rehabilitation training. The experimental results revealed that eye-gaze-based prediction significantly improved the detection time by 37% and the time the robot takes to reach the target by 27%. Further analysis provided more insight into the effect of the eye-gaze window and the hand threshold on the device response for the experimental task.

2. Cámara C, López-Moliner J, Brenner E, de la Malla C. Looking away from a moving target does not disrupt the way in which the movement toward the target is guided. J Vis 2020; 20(5):5. [PMID: 32407436] [PMCID: PMC7409596] [DOI: 10.1167/jov.20.5.5]
Abstract
People usually follow a moving object with their gaze if they intend to interact with it. What would happen if they did not? We recorded eye and finger movements while participants moved a cursor toward a moving target. An unpredictable delay in updating the position of the cursor on the basis of that of the invisible finger made it essential to use visual information to guide the finger's ongoing movement. Decreasing the contrast between the cursor and the background from trial to trial made it difficult to see the cursor without looking at it. In separate experiments, either participants were free to hit the target anywhere along its trajectory or they had to move along a specified path. In the two experiments, participants tracked the cursor rather than the target with their gaze on 13% and 32% of the trials, respectively. They hit fewer targets when the contrast was low or a path was imposed. Not looking at the target did not disrupt the visual guidance that was required to deal with the delays that we imposed. Our results suggest that peripheral vision can be used to guide one item to another, irrespective of which item one is looking at.

3. Constancy of Preparatory Postural Adjustments for Reaching to Virtual Targets across Different Postural Configurations. Neuroscience 2020; 455:223-239. [PMID: 33246066] [DOI: 10.1016/j.neuroscience.2020.11.009]
Abstract
Postural and movement components must be coordinated without significant disturbance to balance when reaching from a standing position. Traditional theories propose that muscle activity prior to movement onset creates the mechanics to counteract the internal torques generated by the future limb movement, reducing possible instability via centre of mass (CoM) displacement. However, during goal-directed reach movements executed on a fixed base of support (BoS), preparatory postural adjustments (pPAs) promote movement of the CoM within the BoS. Considering this dichotomy, the current study investigated whether pPAs constitute part of a whole-body strategy that is tied to the efficient execution of movement rather than to the constraints of balance. We reasoned that if pPAs were tied primarily to balance control, they would modulate as a function of perceived instability. Alternatively, if tied to the dynamics necessary for movement initiation, they would remain unchanged, with feedback-based changes being sufficient to retain balance following volitional arm movement. Participants executed beyond-arm reaching movements in four different postural configurations that altered the quality of the BoS. These changes to stability did not drastically alter the tuning or timing of preparatory muscle activity, despite modifications to the arm and CoM trajectories necessary to complete the reaching movement. In contrast to traditional views, preparatory postural muscle activity is not always tuned for balance maintenance, or even as a calculation of upcoming instability, but may reflect a requirement of voluntary movement towards a pre-defined location.

4. Luo C, Franchak JM. Head and body structure infants' visual experiences during mobile, naturalistic play. PLoS One 2020; 15:e0242009. [PMID: 33170881] [PMCID: PMC7654772] [DOI: 10.1371/journal.pone.0242009]
Abstract
Infants’ visual experiences are important for learning, and may depend on how information is structured in the visual field. This study examined how objects are distributed in 12-month-old infants’ field of view in a mobile play setting. Infants wore a mobile eye tracker that recorded their field of view and eye movements while they freely played with toys and a caregiver. We measured how centered and spread object locations were in infants’ field of view, and investigated how infant posture, object looking, and object distance affected the centering and spread. We found that far toys were less centered in infants’ field of view while infants were prone compared to when sitting or upright. Overall, toys became more centered in view and less spread in location when infants were looking at toys regardless of posture and toy distance. In sum, this study showed that infants’ visual experiences are shaped by the physical relation between infants’ bodies and the locations of objects in the world. However, infants are able to compensate for postural and environmental constraints by actively moving their head and eyes when choosing to look at an object.
Affiliation(s)
- Chuan Luo
- Department of Psychology, University of California, Riverside, Riverside, California, United States of America
- John M. Franchak
- Department of Psychology, University of California, Riverside, Riverside, California, United States of America

5. Hadjidimitrakis K. Coupling of head and hand movements during eye-head-hand coordination: there is more to reaching than meets the eye. J Neurophysiol 2020; 123:1579-1582. [PMID: 32233904] [DOI: 10.1152/jn.00099.2020]
Abstract
Does arm reaching affect eye-head shifts? Does the head alter eye-hand coordinated movements? Sensorimotor research has focused on either eye-head or eye-hand coordination, with only occasional work studying all of these effectors together. Arora et al. (Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. J Neurophysiol 122: 1946-1961, 2019) examined eye-head-hand coordination for the first time in nonhuman primates and provided evidence suggesting that head and hand movements are more coupled than traditionally considered.

6. Franchak JM. Visual exploratory behavior and its development. Psychology of Learning and Motivation 2020. [DOI: 10.1016/bs.plm.2020.07.001]

7. Gregori V, Cognolato M, Saetta G, Atzori M, Gijsberts A. On the Visuomotor Behavior of Amputees and Able-Bodied People During Grasping. Front Bioeng Biotechnol 2019; 7:316. [PMID: 31799243] [PMCID: PMC6874164] [DOI: 10.3389/fbioe.2019.00316]
Abstract
Visual attention is often predictive for future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine the pixel distances between the gaze point, the target object, and the limb in each individual frame. Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, we note that the gaze behavior of amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects.
Affiliation(s)
- Valentina Gregori
- Department of Computer, Control, and Management Engineering, University of Rome La Sapienza, Rome, Italy
- VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
- Matteo Cognolato
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Gianluca Saetta
- Department of Neurology, University Hospital of Zurich, Zurich, Switzerland
- Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Arjan Gijsberts
- VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy

8. Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. [PMID: 31533015] [DOI: 10.1152/jn.00072.2019]
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task. Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada
- Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
- Department of Biology, York University, Toronto, Ontario, Canada
- Department of Psychology, York University, Toronto, Ontario, Canada
- School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada

9. de la Malla C, Rushton SK, Clark K, Smeets JBJ, Brenner E. The predictability of a target’s motion influences gaze, head, and hand movements when trying to intercept it. J Neurophysiol 2019; 121:2416-2427. [DOI: 10.1152/jn.00917.2017]
Abstract
Does the predictability of a target’s movement and of the interception location influence how the target is intercepted? In a first experiment, we manipulated the predictability of the interception location. A target moved along a haphazardly curved path, and subjects attempted to tap on it when it entered a hitting zone. The hitting zone was either a large ring surrounding the target’s starting position (ring condition) or a small disk that became visible before the target appeared (disk condition). The interception location gradually became apparent in the ring condition, whereas it was immediately apparent in the disk condition. In the ring condition, subjects pursued the target with their gaze. Their heads and hands gradually moved in the direction of the future tap position. In the disk condition, subjects immediately directed their gaze toward the hitting zone by moving both their eyes and heads. They also moved their hands to the future tap position sooner than in the ring condition. In a second and third experiment, we made the target’s movement more predictable. Although this made the targets easier to pursue, subjects now shifted their gaze to the hitting zone soon after the target appeared in the ring condition. In the disk condition, they still usually shifted their gaze to the hitting zone at the beginning of the trial. Together, the experiments show that predictability of the interception location is more important than predictability of target movement in determining how we move to intercept targets. NEW & NOTEWORTHY We show that if people are required to intercept a target at a known location, they direct their gaze to the interception point as soon as they can rather than pursuing the target with their eyes for as long as possible. The predictability of the interception location rather than the predictability of the path to that location largely determines how the eyes, head, and hand move.
Affiliation(s)
- Cristina de la Malla
- Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Spain
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Simon K. Rushton
- School of Psychology, Cardiff University, Cardiff, United Kingdom
- Kait Clark
- School of Psychology, Cardiff University, Cardiff, United Kingdom
- Department of Health and Social Sciences, University of the West of England, Bristol, United Kingdom
- Jeroen B. J. Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

10.
Affiliation(s)
- Jolande Fooken
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Center for Brain Health, University of British Columbia, Vancouver, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada

11. Voudouris D, Smeets JBJ, Fiehler K, Brenner E. Gaze when reaching to grasp a glass. J Vis 2018; 18(8):16. [PMID: 30167674] [DOI: 10.1167/18.8.16]
Abstract
People have often been reported to look near their index finger's contact point when grasping. They have only been reported to look near the thumb's contact point when grasping an opaque object at eye height with a horizontal grip, thus when the region near the index finger's contact point is occluded. To examine to what extent being able to see the digits' final trajectories influences where people look, we compared gaze when reaching to grasp a glass of water or milk that was placed at eye or hip height. Participants grasped the glass and poured its contents into another glass on their left. Surprisingly, most participants looked nearer to their thumb's contact point. To examine whether this was because gaze was biased toward the position of the subsequent action, which was to the left, we asked participants in a second experiment to grasp a glass and either place it or pour its contents into another glass either to their left or right. Most participants' gaze was biased to some extent toward the position of the next action, but gaze was not influenced consistently across participants. Gaze was also not influenced consistently across the experiments for individual participants, even for those who participated in both experiments. We conclude that gaze is not simply determined by the identity of the digit or by details of the contact points, such as their visibility, but that gaze is just as sensitive to other factors, such as where one will manipulate the object after grasping.
Affiliation(s)
- Jeroen B J Smeets
- Department of Human Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands
- Katja Fiehler
- Experimental Psychology, Justus-Liebig University, Giessen, Germany
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands

12. Stamenkovic A, Stapley PJ, Robins R, Hollands MA. Do postural constraints affect eye, head, and arm coordination? J Neurophysiol 2018; 120:2066-2082. [DOI: 10.1152/jn.00200.2018]
Abstract
When a whole body reaching task is performed while standing or adopting challenging postures, it is unclear whether changes in attentional demands or the sensorimotor integration necessary for balance control influence the interaction between the visuomotor and postural components of the movement. Is gaze control prioritized by the central nervous system (CNS) to produce coordinated eye movements with the head and whole body regardless of movement context? Considering the coupled nature of visuomotor and whole body postural control during action, this study aimed to understand how changing equilibrium constraints (in the form of different postural configurations) influenced the initiation of eye, head, and arm movements. We quantified eye-head metrics and segmental kinematics as participants executed either isolated gaze shifts or whole body reaching movements to visual targets. In total, four postural configurations were compared: seated, natural stance, standing with the feet together (narrow stance), or balancing on a wooden beam. Contrary to our initial predictions, the lack of distinct changes in eye-head metrics; in the timing of eye, head, and arm movement initiation; and in gaze accuracy, in spite of kinematic differences, suggests that the CNS integrates postural constraints into the control necessary to initiate gaze shifts. This may be achieved by adopting a whole body gaze strategy that allows for the successful completion of both gaze and reaching goals. NEW & NOTEWORTHY Differences in the sequence of movement among the eye, head, and arm have been shown across various paradigms during reaching. Here we show that distinct changes in eye characteristics and movement sequence, coupled with stereotyped profiles of head and gaze movement, are not observed when adopting postures that require changes to balance constraints. This suggests that a whole body gaze strategy is prioritized by the central nervous system, with postural control subservient to gaze stability requirements.
Affiliation(s)
- Alexander Stamenkovic
- Neural Control of Movement Laboratory, School of Medicine, Faculty of Science, Medicine and Health, University of Wollongong, Wollongong, Australia
- Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, Australia
- Paul J. Stapley
- Neural Control of Movement Laboratory, School of Medicine, Faculty of Science, Medicine and Health, University of Wollongong, Wollongong, Australia
- Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, Australia
- Rebecca Robins
- Research Institute for Sports and Exercise Sciences, School of Sport and Exercise Sciences, Faculty of Science, Liverpool John Moores University, Liverpool, United Kingdom
- Mark A. Hollands
- Research Institute for Sports and Exercise Sciences, School of Sport and Exercise Sciences, Faculty of Science, Liverpool John Moores University, Liverpool, United Kingdom

13. Malienko A, Harrar V, Khan AZ. Contrasting effects of exogenous cueing on saccades and reaches. J Vis 2018; 18(9):4. [DOI: 10.1167/18.9.4]
Affiliation(s)
- Anton Malienko
- Vision, Attention and Action Laboratory (VISATTAC), School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Vanessa Harrar
- Vision, Attention and Action Laboratory (VISATTAC), School of Optometry, University of Montreal, Montreal, Quebec, Canada
- Aarlenne Z. Khan
- Vision, Attention and Action Laboratory (VISATTAC), School of Optometry, University of Montreal, Montreal, Quebec, Canada

14. de la Malla C, Smeets JBJ, Brenner E. Potential Systematic Interception Errors are Avoided When Tracking the Target with One's Eyes. Sci Rep 2017; 7:10793. [PMID: 28883471] [PMCID: PMC5589827] [DOI: 10.1038/s41598-017-11200-5]
Abstract
Directing our gaze towards a moving target has two known advantages for judging its trajectory: the spatial resolution with which the target is seen is maximized, and signals related to the eyes' movements are combined with retinal cues to better judge the target's motion. We here explore whether tracking a target with one's eyes also prevents factors that are known to give rise to systematic errors in judging retinal speeds from resulting in systematic errors in interception. Subjects intercepted white or patterned disks that moved from left to right across a large screen at various constant velocities while either visually tracking the target or fixating the position at which they were required to intercept the target. We biased retinal motion perception by moving the pattern within the patterned targets. This manipulation led to large systematic errors in interception when subjects were fixating, but not when they were tracking the target. The reduction in the errors did not depend on how smoothly the eyes were tracking the target shortly before intercepting it. We propose that tracking targets with one's eyes when one wants to intercept them makes one less susceptible to biases in judging their motion.
Affiliation(s)
- Cristina de la Malla
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, NL-1081BT, Amsterdam, The Netherlands
- Jeroen B J Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, NL-1081BT, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, NL-1081BT, Amsterdam, The Netherlands

15. Kurz J, Hegele M, Reiser M, Munzert J. Impact of task difficulty on gaze behavior in a sequential object manipulation task. Exp Brain Res 2017; 235:3479-3486. [PMID: 28840269] [DOI: 10.1007/s00221-017-5062-9]
Abstract
Task difficulty affects both gaze behavior and hand movements. The present study therefore aimed to investigate how task difficulty modulates gaze behavior with respect to the balance between visually monitoring the ongoing action and prospectively collecting visual information about its future course. To do so, we examined sequences of reach and transport movements of water glasses whose task difficulty was varied through the filling level. Participants had to grasp water glasses with different filling levels (100, 94, 88, 82, and 76%) and transport them to a target. Subsequently, they had to grasp the next water glass and transport it to a target on the opposite side. Results showed significant differences in both gaze and movement kinematics for the higher filling levels, but no relevant differences between the 88, 82, and 76% filling levels. Results revealed a significant influence of task difficulty on the interaction between gaze and kinematics during transport and a strong influence of task difficulty on gaze during the release phase between different grasp-to-place movements. In summary, we found a movement and gaze pattern revealing an influence of task difficulty that was especially evident in the later phases of transport and release.
Affiliation(s)
- Johannes Kurz
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus-Liebig-University Giessen, Kugelberg 62, 35394 Giessen, Germany
- Mathias Hegele
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus-Liebig-University Giessen, Kugelberg 62, 35394 Giessen, Germany
- Mathias Reiser
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus-Liebig-University Giessen, Kugelberg 62, 35394 Giessen, Germany
- Jörn Munzert
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus-Liebig-University Giessen, Kugelberg 62, 35394 Giessen, Germany

16. Kreyenmeier P, Fooken J, Spering M. Context effects on smooth pursuit and manual interception of a disappearing target. J Neurophysiol 2017; 118:404-415. [PMID: 28515287] [DOI: 10.1152/jn.00217.2017]
Abstract
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points.
Affiliation(s)
- Philipp Kreyenmeier: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuro-Cognitive Psychology, Ludwig Maximilian University, Munich, Germany
- Jolande Fooken: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada
- Miriam Spering: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Center for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Information, Computing and Cognitive Systems, University of British Columbia, Vancouver, Canada; International Collaboration on Repair Discoveries, Vancouver, Canada
17

18
Gonzalez DA, Niechwiej-Szwedo E. The effects of monocular viewing on hand-eye coordination during sequential grasping and placing movements. Vision Res 2016; 128:30-38. [DOI: 10.1016/j.visres.2016.08.006]
19
Voudouris D, Smeets JBJ, Brenner E. Fixation Biases towards the Index Finger in Almost-Natural Grasping. PLoS One 2016; 11:e0146864. [PMID: 26766551] [PMCID: PMC4713150] [DOI: 10.1371/journal.pone.0146864]
Abstract
We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias.
Affiliation(s)
- Dimitris Voudouris: Department of Human Movement Sciences, VU University Amsterdam, Amsterdam, The Netherlands; Department of Psychology, Justus-Liebig University Giessen, Giessen, Germany
- Jeroen B. J. Smeets: Department of Human Movement Sciences, VU University Amsterdam, Amsterdam, The Netherlands
- Eli Brenner: Department of Human Movement Sciences, VU University Amsterdam, Amsterdam, The Netherlands
20
Desanghere L, Marotta JJ. The influence of object shape and center of mass on grasp and gaze. Front Psychol 2015; 6:1537. [PMID: 26528207] [PMCID: PMC4607879] [DOI: 10.3389/fpsyg.2015.01537]
Abstract
Recent experiments examining where participants look when grasping an object found that fixations favor the eventual index finger landing position on the object. Even though the act of picking up an object must involve complex high-level computations, such as the visual analysis of object contours, surface properties, and knowledge of an object's function and center of mass (COM) location, these investigations have generally used simple symmetrical objects, where COM and horizontal midline overlap. Less research has examined how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the object's horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the midline by altering the object's shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as was previously reported with symmetrical objects. Fixation positions were also more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, allowing both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects compared to their influence on grasp locations, with fixation locations more sensitive to these manipulations. Together, these findings demonstrate how object features differentially influence gaze vs. grasp positions during object interaction.
Affiliation(s)
- Loni Desanghere: Perception and Action Laboratory, Department of Psychology, University of Manitoba, Winnipeg, MB, Canada; Postgraduate Medical Education, College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Jonathan J. Marotta: Perception and Action Laboratory, Department of Psychology, University of Manitoba, Winnipeg, MB, Canada
21
Anticipatory gaze strategies when grasping moving objects. Exp Brain Res 2015; 233:3413-23. [PMID: 26289482] [DOI: 10.1007/s00221-015-4413-7]
Abstract
Grasping moving objects involves both spatial and temporal predictions. The hand is aimed at a location where it will meet the object, rather than the position at which the object is seen when the reach is initiated. Previous eye-hand coordination research from our laboratory, utilizing stationary objects, has shown that participants' initial gaze tends to be directed towards the eventual location of the index finger when making a precision grasp. This experiment examined how the speed and direction of a computer-generated block's movement affect gaze and selection of grasp points. Results showed that when the target first appeared, participants anticipated the target's eventual movement by fixating well ahead of its leading edge in the direction of eventual motion. Once target movement began, participants shifted their fixation to the leading edge of the target. Upon reach initiation, participants then fixated towards the top edge of the target. As seen in our previous work with stationary objects, final fixations tended towards the final index finger contact point on the target. Moreover, gaze and kinematic analyses revealed that it was direction that most influenced fixation locations and grasp points. Interestingly, participants fixated further ahead of the target's leading edge when the direction of motion was leftward, particularly at the slower speed, possibly the result of mechanical constraints of intercepting leftward-moving targets with one's right hand.
22
't Hart BM, Einhäuser W. Mind the step: complementary effects of an implicit task on eye and head movements in real-life gaze allocation. Exp Brain Res 2012; 223:233-49. [PMID: 23001370] [DOI: 10.1007/s00221-012-3254-x]
Abstract
Gaze in real-world scenarios is controlled by a huge variety of parameters, such as stimulus features, instructions or context, all of which have been studied systematically in laboratory studies. It is, however, unclear how these results transfer to real-world situations, when participants are largely unconstrained in their behavior. Here we measure eye and head orientation and gaze in two conditions, in which we ask participants to negotiate paths in a real-world outdoor environment. The implicit task set is varied by using paths of different irregularity: In one condition, the path consists of irregularly placed steps, and in the other condition, a cobbled road is used. With both paths located adjacently, the visual environment (i.e., context and features) for both conditions is virtually identical, as is the instruction. We show that terrain regularity causes differences in head orientation and gaze behavior, specifically in the vertical direction. Participants direct head and eyes lower when terrain irregularity increases. While head orientation is not affected otherwise, vertical spread of eye-in-head orientation also increases significantly for more irregular terrain. This is accompanied by altered patterns of eye movements, which compensate for the lower average gaze to still inspect the visual environment. Our results quantify the importance of implicit task demands for gaze allocation in the real world, and imply qualitatively distinct contributions of eyes and head in gaze allocation. This underlines the care that needs to be taken when inferring real-world behavior from constrained laboratory data.
Affiliation(s)
- Bernard Marius 't Hart: Neurophysics, Philipps-University Marburg, Karl-von-Frisch-Str. 8a (Altes MPI), 35032 Marburg, Germany
23
Do walkers follow their heads? Investigating the role of head rotation in locomotor control. Exp Brain Res 2012; 219:175-90. [PMID: 22466410] [DOI: 10.1007/s00221-012-3077-9]
Abstract
Eye and head rotations are normally correlated with changes in walking direction; however, it is unknown whether they play a causal role in the control of steering. The objective of the present study was to answer two questions about the role of head rotations in steering control when walking to a goal. First, are head rotations sufficient to elicit a change in walking direction? Second, are head rotations necessary to initiate a change in walking direction or guide steering to a goal? To answer these questions, participants either walked toward a goal located 7 m away or were cued to steer to the left or right by 37°. On a subset of trials, participants were either cued to voluntarily turn their heads to the left or right, or they underwent an involuntary head perturbation via a head-mounted air jet. The results showed that large voluntary head turns (35°) yielded slight path deviations (1°-2°) in the same or opposite direction as the head turn, depending on conditions, which have alternative explanations. Involuntary head rotations did not elicit path deviations despite comparable head rotation magnitudes. In addition, the walking trajectory when turning toward an eccentric goal was the same regardless of head orientation. Steering can thus be decoupled from head rotation during walking. We conclude that head rotations are neither a sufficient nor a necessary component of steering control, because they do not induce a turn and they are not required to initiate a turn or to guide the locomotor trajectory to a goal.
24
Jeong S, Arie H, Lee M, Tani J. Neuro-robotics study on integrative learning of proactive visual attention and motor behaviors. Cogn Neurodyn 2011; 6:43-59. [PMID: 23372619] [DOI: 10.1007/s11571-011-9176-7]
Abstract
The current paper proposes a novel model for integrative learning of proactive visual attention and sensory-motor control, inspired by the premotor theory of visual attention. The model couples a slow-dynamics network with a fast-dynamics network and inherits the authors' previously proposed multiple-timescales recurrent neural network model (MTRNN), which may correspond to fronto-parietal networks in the cortical brain. Neuro-robotics experiments with the proposed model in a task of manipulating multiple objects demonstrated that some degree of generalization over position and object-size variation can be achieved by seamlessly integrating proactive, object-related visual attention and the related sensory-motor control into a set of action primitives in the distributed neural activity of the fast-dynamics network. It was also shown that such action primitives can be combined compositionally to acquire novel actions in the slow-dynamics network. The experimental results substantiate the premotor theory of visual attention.
Affiliation(s)
- Sungmoon Jeong: School of Electronics Engineering, Kyungpook National University, 1370 Sankyuk-Dong, Puk-Gu, Taegu, 702-701, Korea
25
Terrier R, Forestier N, Berrigan F, Germain-Robitaille M, Lavallière M, Teasdale N. Effect of terminal accuracy requirements on temporal gaze-hand coordination during fast discrete and reciprocal pointings. J Neuroeng Rehabil 2011; 8:10. [PMID: 21320315] [PMCID: PMC3045308] [DOI: 10.1186/1743-0003-8-10]
Abstract
Background: Rapid discrete goal-directed movements are characterized by a well-known coordination pattern between gaze and hand displacements: the gaze always starts prior to the hand movement and reaches the target before the hand velocity peak. Surprisingly, the effect of target size on temporal gaze-hand coordination has not been directly investigated. Moreover, goal-directed movements are often produced in a reciprocal rather than a discrete manner. The objectives of this work were to assess the effect of target size on temporal gaze-hand coordination during fast (1) discrete and (2) reciprocal pointings.
Methods: Subjects performed fast discrete (experiment 1) and reciprocal (experiment 2) pointings with an amplitude of 50 cm and four target diameters (7.6, 3.8, 1.9 and 0.95 cm), leading to indexes of difficulty (ID = log2[2A/D]) of 3.7, 4.7, 5.7 and 6.7 bits. Gaze and hand displacements were synchronously recorded, and temporal gaze-hand coordination parameters were compared between experiments (discrete and reciprocal pointings) and IDs using analyses of variance (ANOVAs).
Results: The magnitude of the gaze-hand lead pattern was much higher for discrete than for reciprocal pointings. Moreover, while it was constant for discrete pointings, it decreased systematically with increasing ID for reciprocal pointings because of the longer duration of gaze anchoring on the target.
Conclusion: Overall, the temporal gaze-hand coordination analysis revealed that even for high IDs, fast reciprocal pointings cannot be considered a concatenation of discrete units. Moreover, our data clearly illustrate the smooth adaptation of temporal gaze-hand coordination to terminal accuracy requirements during fast reciprocal pointings. It will be interesting for further research to investigate whether the methodology used in experiment 2 allows assessing the effect of sensori-motor deficits on gaze-hand coordination.
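The index-of-difficulty values quoted in this entry follow directly from the stated Fitts-style formula, ID = log2(2A/D), with A = 50 cm and the four target diameters. A minimal check (function name is mine, not from the paper):

```python
import math

def index_of_difficulty(amplitude_cm: float, diameter_cm: float) -> float:
    """Fitts-style index of difficulty in bits: ID = log2(2A / D)."""
    return math.log2(2 * amplitude_cm / diameter_cm)

AMPLITUDE = 50.0  # movement amplitude in cm, as stated in the abstract
for d in (7.6, 3.8, 1.9, 0.95):
    # Reproduces the reported IDs of 3.7, 4.7, 5.7 and 6.7 bits (to one decimal)
    print(f"D = {d:>4} cm -> ID = {index_of_difficulty(AMPLITUDE, d):.1f} bits")
```

Halving the diameter adds exactly one bit, which is why the four IDs are evenly spaced.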
Affiliation(s)
- Romain Terrier: Laboratoire de Physiologie de l'Exercice (E.A. 4338), Département STAPS, UFR CISM, Université de Savoie, 73376 Le Bourget du lac cedex, France
26
Baldauf D, Deubel H. Attentional landscapes in reaching and grasping. Vision Res 2010; 50:999-1013. [DOI: 10.1016/j.visres.2010.02.008]
27
Thumser ZC, Oommen BS, Kofman IS, Stahl JS. Idiosyncratic variations in eye-head coupling observed in the laboratory also manifest during spontaneous behavior in a natural setting. Exp Brain Res 2008; 191:419-34. [PMID: 18704380] [DOI: 10.1007/s00221-008-1534-2]
Abstract
The tendency to generate head movements during saccades varies from person to person. Head movement tendencies can be measured as subjects fixate sequences of illuminated targets, but the extent to which such measures reflect eye-head coupling during more natural behaviors is unknown. We quantified head movement tendencies in 20 normal subjects in a conventional laboratory experiment and in an outdoor setting in which the subjects directed their gaze spontaneously. In the laboratory, head movement tendencies during centrifugal saccades could be described by the eye-only range (EOR), customary ocular motor range (COMR), and the customary head orientation range (CHOR). An analogous EOR, COMR, and CHOR could be extracted from the centrifugal saccades executed in the outdoor setting. An additional six measures were introduced to describe the preferred ranges of eyes-in-head and head-on-torso manifest throughout the outdoor recording, i.e., not limited to the orientations following centrifugal saccades. These 12 measured variables could be distilled by factor analysis to one indoor and six outdoor factors. The factors reflect separable tendencies related to preferred ranges of visual search, head eccentricity, and eye eccentricity. Multiple correlations were found between the indoor and outdoor factors. The results demonstrate that there are multiple types of head movement tendencies, but some of these influence behavior across rather different experimental settings and tasks. Thus behavior in the two settings likely relies on common neural mechanisms, and the laboratory assays of head movement tendencies succeed in probing the mechanisms underlying eye-head coupling during more natural behaviors.
Affiliation(s)
- Zachary C Thumser: Louis Stokes Cleveland Department of Veterans Affairs Medical Center, Cleveland, OH 44106, USA
28
Foulsham T, Kingstone A, Underwood G. Turning the world around: patterns in saccade direction vary with picture orientation. Vision Res 2008; 48:1777-90. [PMID: 18599105] [DOI: 10.1016/j.visres.2008.05.018]
29
Suzuki M, Izawa A, Takahashi K, Yamazaki Y. The coordination of eye, head, and arm movements during rapid gaze orienting and arm pointing. Exp Brain Res 2007; 184:579-85. [PMID: 18060545] [DOI: 10.1007/s00221-007-1222-7]
Abstract
This study aimed to investigate the coordination of multiple control actions involved in human horizontal gaze orienting or arm pointing to a common visual target. The subjects performed a visually triggered reaction time task in three conditions: (1) gaze orienting with a combined eye saccade and head rotation (EH), (2) arm pointing with gaze orienting by an eye saccade without head rotation (EA), and (3) arm pointing with gaze orienting by a combined eye saccade and head rotation (EHA). The subjects initiated eye movement first with nearly constant latencies across all tasks, followed by head movement in the EH task, by arm movement in the EA task, and by head and then arm movements in the EHA task. The differences of onset times between eye and head movements in the EH task, and between eye and arm movements in the EA task, were both preserved in the EHA task, leading to an eye-to-head-to-arm sequence. The onset latencies of eye and head in the EH task, eye and arm in the EA task, and eye, head and arm in the EHA task, were all positively correlated on a trial-by-trial basis. In the EHA task, however, the correlation coefficients of eye-head coupling and of eye-arm coupling were reduced and increased, respectively, compared to those estimated in the two-effector conditions (EH, EA). These results suggest that motor commands for different motor effectors are linked differently to achieve coordination in a task-dependent manner.
Affiliation(s)
- Masataka Suzuki: Department of Psychology, Kinjo Gakuin University, Omori 2-1723, Moriyama, Nagoya 463-8521, Japan
30
Ansuini C, Giosa L, Turella L, Altoè G, Castiello U. An object for an action, the same object for other actions: effects on hand shaping. Exp Brain Res 2007; 185:111-9. [PMID: 17909766] [DOI: 10.1007/s00221-007-1136-4]
Abstract
Objects can be grasped in several ways due to their physical properties, the context surrounding the object, and the goal of the grasping agent. The aim of the present study was to investigate whether the prior-to-contact grasping kinematics of the same object vary as a result of different goals of the person grasping it. Subjects were requested to reach toward and grasp a bottle filled with water, and then complete one of the following tasks: (1) Grasp it without performing any subsequent action; (2) Lift and throw it; (3) Pour the water into a container; (4) Place it accurately on a target area; (5) Pass it to another person. We measured the angular excursions at both metacarpal-phalangeal (mcp) and proximal interphalangeal (pip) joints of all digits, and abduction angles of adjacent digit pairs by means of resistive sensors embedded in a glove. The results showed that the presence and the nature of the task to be performed following grasping affect the positioning of the fingers during the reaching phase. We contend that a one-to-one association between a sensory stimulus and a motor response does not capture all the aspects involved in grasping. The theoretical approach within which we frame our discussion considers internal models of anticipatory control which may provide a suitable explanation of our results.
Affiliation(s)
- Caterina Ansuini: Dipartimento di Psicologia Generale, Università di Padova, via Venezia 8, 35131, Padova, Italy
31
Abstract
Human head movement control can be considered part of the oculomotor system, since the control of gaze involves coordination of the eyes and head. Humans show a remarkable degree of flexibility in eye-head coordination strategies; nonetheless, an individual will often demonstrate stereotypical patterns of eye-head behaviour for a given visual task. This review examines eye-head coordination in laboratory-based visual tasks, such as saccadic gaze shifts and combined eye-head pursuit, and in common tasks in daily life, such as reading. The effect of the aging process on eye-head coordination is then reviewed from infancy through to senescence. Consideration is also given to how pathology can affect eye-head coordination from the lowest through to the highest levels of oculomotor control, comparing conditions as diverse as eye movement restrictions and schizophrenia. Given the adaptability of the eye-head system, we postulate that this flexible system is under the control of the frontal cortical regions, which assist in planning, coordinating and executing behaviour. We provide evidence for this based on changes in eye-head coordination dependent on the context and expectation of presented visual stimuli, as well as from changes in eye-head coordination caused by frontal lobe dysfunction.
Affiliation(s)
- Frank Antony Proudlock: Ophthalmology Group, RKCSB, Leicester Royal Infirmary, University Hospitals of Leicester, University of Leicester, Leicester, UK
32
Mennie N, Hayhoe M, Sullivan B. Look-ahead fixations: anticipatory eye movements in natural tasks. Exp Brain Res 2006; 179:427-42. [PMID: 17171337] [DOI: 10.1007/s00221-006-0804-0]
Abstract
During performance of natural tasks subjects sometimes fixate objects that are manipulated several seconds later. Such early looks are known as "look-ahead fixations" (Pelz and Canosa in Vision Res 41(25-26):3587-3596, 2001). To date, little is known about their function. To investigate the possible role of these fixations, we measured fixation patterns in a model-building task. Subjects assembled models in two sequences where reaching and grasping were interrupted in one sequence by an additional action. Results show look-ahead fixations prior to 20% of the reaching and grasping movements, occurring on average 3 s before the reach. Their frequency was influenced by task sequence, suggesting that they are purposeful and have a role in task planning. To see if look-aheads influenced the subsequent eye movement during the reach, we measured eye-hand latencies and found they increased by 122 ms following a look-ahead to the target. The initial saccades to the target that accompanied a reach were also more accurate following a look-ahead. These results demonstrate that look-aheads influence subsequent visuo-motor coordination, and imply that visual information on the temporal and spatial structure of the scene was retained across intervening fixations and influenced subsequent movement programming. Additionally, head movements that accompanied look-aheads were significantly smaller in amplitude (by 10 degrees) than those that accompanied reaches to the same locations, supporting previous evidence that head movements play a role in the control of hand movements. This study provides evidence of the anticipatory use of gaze in acquiring information about objects for future manipulation.
Affiliation(s)
- Neil Mennie: School of Psychology, University of Nottingham, University Park, Nottingham, NG7 2RD, UK
33
Flanagan JR, Bowman MC, Johansson RS. Control strategies in object manipulation tasks. Curr Opin Neurobiol 2006; 16:650-9. [PMID: 17084619] [DOI: 10.1016/j.conb.2006.10.005]
Abstract
The remarkable manipulative skill of the human hand is not the result of rapid sensorimotor processes, nor of fast or powerful effector mechanisms. Rather, the secret lies in the way manual tasks are organized and controlled by the nervous system. At the heart of this organization is prediction. Successful manipulation requires the ability both to predict the motor commands required to grasp, lift, and move objects and to predict the sensory events that arise as a consequence of these commands.
Affiliation(s)
- J Randall Flanagan: Department of Psychology, Centre for Neuroscience Studies, Queen's University, Kingston, ON, K7L 3N6, Canada
34
Oommen BS, Smith RM, Stahl JS. The influence of future gaze orientation upon eye-head coupling during saccades. Exp Brain Res 2003; 155:9-18. [PMID: 15064879] [DOI: 10.1007/s00221-003-1694-z]
Abstract
Mammals with foveas (or analogous retinal specializations) frequently shift gaze without moving the head, and their behavior contrasts sharply with "afoveate" mammals, in which eye and head movements are strongly coupled. The ability to move the eyes without moving the head could reflect a gating mechanism that blocks a default eye-head synergy when an attempted head movement would be energetically wasteful. Based upon such considerations of efficiency, we predicted that for saccades to targets lying within the ocular motor range, the tendency to generate a head movement would depend upon a subject's expectations regarding future directions of gaze. We tested this hypothesis in two experiments with normal human subjects instructed to fixate sequences of lighted targets on a semicircular array. In the target direction experiment, we determined whether subjects were more likely to move the head during a small gaze shift if they expected that they would be momentarily required to make a second, larger shift in the same direction. Adding the onward-directed target increased significantly the distribution of final head positions (customary head orientation range, CHOR) observed during fixation of the primary target from 16.6+/-4.9 degrees to 25.2+/-7.8 degrees. The difference reflected an increase in the probability, and possibly the amplitude, of head movements. In the target duration experiment, we determined whether head movements were potentiated when subjects expected that gaze would be held in the vicinity of the target for a longer period of time. Prolonging fixation increased CHOR significantly from 53.7+/-18.8 degrees to 63.2+/-15.9 degrees. Larger head movements were evoked for any given target eccentricity, due to a narrowing in the gap between the x-intercepts of the head amplitude:target eccentricity relationship. 
The results are consistent with the idea that foveate mammals use knowledge of future gaze direction to influence the coupling of saccadic commands to premotor circuitry of the head. While the circuits ultimately mediating the coupling may lie within the brainstem, our results suggest that the cerebrum plays a supervisory role, since it is a likely seat of expectation regarding target behavior. Eye-head coupling may reflect separate gating and scaling mechanisms, and changes in head movement tendencies may reflect parametric modulation of either mechanism.
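The head amplitude:target eccentricity relationship and its x-intercept mentioned above can be estimated with an ordinary linear fit. A minimal sketch (the function name and the synthetic trial data are illustrative, not taken from the study):

```python
import numpy as np

def head_recruitment_fit(eccentricity_deg, head_amp_deg):
    """Fit a line to head-movement amplitude vs. target eccentricity
    and return its slope and x-intercept. The x-intercept estimates
    the eccentricity below which the head is not recruited."""
    slope, intercept = np.polyfit(eccentricity_deg, head_amp_deg, 1)
    return slope, -intercept / slope

# Synthetic trials: head amplitude rises 0.5 deg per deg of
# eccentricity beyond a 20-degree recruitment threshold.
ecc = np.array([25.0, 35.0, 45.0, 55.0])
head = 0.5 * (ecc - 20.0)
slope, x_intercept = head_recruitment_fit(ecc, head)
print(round(slope, 3), round(x_intercept, 1))  # → 0.5 20.0
```

A narrowing gap between the x-intercepts of two such fits, as reported for the prolonged-fixation condition, would show up directly as a shift in the returned intercept.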
Collapse
Affiliation(s)
- Brian S Oommen
- Departments of Neurology, Louis Stokes Cleveland Veterans Affairs Medical Center and Case Western Reserve University, Cleveland, OH 44106, USA
Collapse
|
35
|
Nagel M, Zangemeister WH. The effect of transcranial magnetic stimulation over the cerebellum on the synkinesis of coordinated eye and head movements. J Neurol Sci 2003; 213:35-45. [PMID: 12873753 DOI: 10.1016/s0022-510x(03)00145-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
We studied coordinated saccadic eye and head movements toward random and predictable horizontal visual targets while applying transcranial magnetic stimulation (TMS) over the cerebellum before the start of the gaze movement. We found three effects of TMS on eye/head movements under these conditions. SACCADIC LATENCY EFFECT: When stimulation took place shortly before movements commenced, latencies between target presentation and saccade onset were significantly shorter: for predictable targets and, to a lesser extent, for random targets, with TMS up to 75 ms before the start of the saccade, latencies were significantly decreased compared with no TMS. Without stimulation, latencies to random targets were within a range of 120-200 ms. EYE-HEAD INTERACTION EFFECT: Without TMS, for amplitudes greater than 25 degrees, head movements usually preceded eye movements, as expected, especially for predictive responses. With TMS applied shortly after the target display, the number of eye movements that preceded head movements was significantly increased (p<0.001), and the delay between eye and head movements was reduced or reversed (p<0.001), compared with gaze movements without TMS. SACCADIC PEAK VELOCITY EFFECT: Applying TMS at 5-25 ms after the position change of the 60-degree target, and 50-5 ms before the start of the eye movement, increased the mean peak velocity of synkinetic saccades up to 600 degrees/s, compared with 350-400 degrees/s without TMS. We conclude that transient functional cerebellar deficits caused by TMS can change the central synkinesis of eye-head coordination.
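The latency effect described above reduces to a comparison of mean saccadic latencies with and without stimulation. A minimal sketch with hypothetical latencies drawn from the 120-200 ms range quoted for random targets (the function name and values are illustrative):

```python
import statistics

def latency_effect(latencies_no_tms_ms, latencies_tms_ms):
    """Mean saccadic latency without and with cerebellar TMS, plus the
    reduction attributable to stimulation (positive = TMS shortened it)."""
    base = statistics.mean(latencies_no_tms_ms)
    stim = statistics.mean(latencies_tms_ms)
    return base, stim, base - stim

# Hypothetical per-trial latencies in milliseconds.
base, stim, reduction = latency_effect([150, 170, 160, 180],
                                       [130, 140, 135, 125])
print(base, stim, reduction)  # → 165 132.5 32.5
```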
Collapse
Affiliation(s)
- M Nagel
- Department of Psychiatry, University Hospital Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany.
Collapse
|
36
|
Terao Y, Andersson NEM, Flanagan JR, Johansson RS. Engagement of gaze in capturing targets for future sequential manual actions. J Neurophysiol 2002; 88:1716-25. [PMID: 12364501 DOI: 10.1152/jn.2002.88.4.1716] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We investigated the role of saccadic gaze fixations in encoding target locations for planning a future manual task consisting of a sequence of discrete target-oriented actions. We hypothesized that fixations of the individual targets are necessary for accurate encoding of target locations and that there is a transfer of sequence information from visual encoding to manual recall. Subjects viewed four targets presented at random positions on a screen. After various delays following target extinction, the subjects marked the remembered target locations on the screen with the tip of a hand-held stick. When the targets were presented simultaneously among distracting elements, the overall accuracy of marking increased with presentation time and total number of targets fixated because the subjects had to serially fixate the individual targets to locate them. Without distractors, the marking accuracy was similarly high regardless of duration of target presentation (0.25-8 s) and number of targets fixated; it was comparable to that with distractors when all four targets had been fixated. This indicates parallel encoding of target locations largely based on peripheral vision. Location memory was stable in these tasks over the delay periods investigated (0.5-8 s). With parallel encoding there was a "shrinkage" in the visuomotor transformation, i.e., the distances between the markings were systematically smaller than the corresponding inter-target distances. When the targets were presented sequentially without distractors, marking accuracy improved with the total number of targets fixated and shrinkage in the visuomotor transformation occurred only with parallel encoding, i.e., when subjects did not fixate the targets. In all experimental conditions for trials in which targets were fixated during encoding, there was little correspondence between the marking sequence and the sequence in which the targets were fixated. 
We conclude that subjects benefit from fixating targets for subsequent target-oriented manual actions when the targets are presented among distractors and when presented sequentially; when distinct targets are presented simultaneously against a blank background, they are efficiently encoded in parallel largely by peripheral vision.
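The "shrinkage" in the visuomotor transformation described above can be quantified as the ratio of inter-marking distances to the corresponding inter-target distances. A minimal sketch (the function name and the contracted-square example are hypothetical, not from the study):

```python
import numpy as np

def shrinkage_factor(targets, markings):
    """Ratio of mean inter-marking distance to mean inter-target
    distance; a value below 1 reflects systematic contraction of
    the remembered layout."""
    def mean_pairwise(points):
        pts = np.asarray(points, dtype=float)
        diffs = pts[:, None, :] - pts[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        i, j = np.triu_indices(len(pts), k=1)
        return dists[i, j].mean()
    return mean_pairwise(markings) / mean_pairwise(targets)

# Hypothetical trial: markings contracted 10% toward the centroid.
targets = np.array([(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)])
markings = targets.mean(axis=0) + 0.9 * (targets - targets.mean(axis=0))
print(round(shrinkage_factor(targets, markings), 3))  # → 0.9
```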
Collapse
Affiliation(s)
- Yasuo Terao
- Section for Physiology, Department of Integrative Medical Biology, Umeå University, SE-901 87 Umeå, Sweden.
Collapse
|
37
|
Herst AN, Epelboim J, Steinman RM. Temporal coordination of the human head and eye during a natural sequential tapping task. Vision Res 2001; 41:3307-19. [PMID: 11718775 DOI: 10.1016/s0042-6989(01)00158-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
The 'natural' temporal coordination of head and eye was examined as four subjects tapped a sequence of targets arranged in 3D on a worktable in front of them. The head started to move before the eye 48% of the time. Both the head and eye started to move 'simultaneously' (within 8 ms of each other) 37% of the time. The eye started to move before the head only 15% of the time. Gaze-shifts required to perform the tapping task were relatively large; 68% of them were between 27 degrees and 57 degrees. Gaze-shifts were symmetrical: there were almost as many lefts as rights. Very little inter- or intra-subject variability was observed. These results were not expected on the basis of prior studies of head/eye coordination performed under less natural conditions. They also were not expected given the results of two rather similar, relatively natural, prior experiments. We conclude that more observations under natural conditions will have to be made before we understand why, when and how human beings coordinate head and eyes as they perform everyday tasks in the work-a-day world.
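The head-first / simultaneous / eye-first proportions reported above follow from classifying each gaze shift by relative onset time, with the 8 ms simultaneity window used in the abstract. A minimal sketch (the function name and trial data are hypothetical):

```python
from collections import Counter

def classify_onset(head_onset_ms, eye_onset_ms, window_ms=8.0):
    """Label a gaze shift by which effector started first, counting
    onsets within `window_ms` of each other as simultaneous."""
    lead = eye_onset_ms - head_onset_ms  # positive: head started first
    if abs(lead) <= window_ms:
        return "simultaneous"
    return "head_first" if lead > 0 else "eye_first"

# Hypothetical (head_onset, eye_onset) pairs in milliseconds.
trials = [(100, 112), (100, 104), (120, 110), (90, 130)]
counts = Counter(classify_onset(h, e) for h, e in trials)
print(counts["head_first"], counts["simultaneous"], counts["eye_first"])  # → 2 1 1
```

Dividing each count by the number of trials gives the percentages quoted in the abstract.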
Collapse
Affiliation(s)
- A N Herst
- Department of Psychology, University of Maryland, College Park, MD 20742-4411, USA.
Collapse
|
39
|
Abstract
We analyzed the coordination between gaze behavior, fingertip movements, and movements of the manipulated object when subjects reached for and grasped a bar and moved it to press a target-switch. Subjects almost exclusively fixated certain landmarks critical for the control of the task. Landmarks at which contact events took place were obligatory gaze targets. These included the grasp site on the bar, the target, and the support surface where the bar was returned after target contact. Any obstacle in the direct movement path and the tip of the bar were optional landmarks. Subjects never fixated the hand or the moving bar. Gaze and hand/bar movements were linked with respect to landmarks, with gaze leading. The instant that gaze exited a given landmark coincided with a kinematic event at that landmark in a manner suggesting that subjects monitored critical kinematic events for phasic verification of task progress and subgoal completion. For both the obstacle and target, subjects directed saccades and fixations to sites that were offset from the physical extension of the objects. Fixations related to an obstacle appeared to specify a location around which the extending tip of the bar should travel. We conclude that gaze supports hand movement planning by marking key positions to which the fingertips or grasped object are subsequently directed. The salience of gaze targets arises from the functional sensorimotor requirements of the task. We further suggest that gaze control contributes to the development and maintenance of sensorimotor correlation matrices that support predictive motor control in manipulation.
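The gaze-leads-hand relationship described above is typically scored as the time by which gaze reaches a landmark before the contact event there. A minimal sketch with hypothetical timings for the landmarks named in the abstract (the function name and values are illustrative):

```python
def gaze_lead_times(gaze_arrival_s, contact_event_s):
    """For each landmark, time by which gaze arrived before the
    corresponding contact event (positive = gaze led the hand).
    Inputs map landmark name -> time in seconds; only landmarks
    present in both records are scored."""
    shared = gaze_arrival_s.keys() & contact_event_s.keys()
    return {name: contact_event_s[name] - gaze_arrival_s[name]
            for name in shared}

# Hypothetical event times (seconds from trial start).
gaze = {"grasp_site": 0.40, "target": 1.60, "support_surface": 2.90}
contact = {"grasp_site": 0.95, "target": 2.10, "support_surface": 3.55}
leads = gaze_lead_times(gaze, contact)
print(round(leads["grasp_site"], 2))  # → 0.55
```

Uniformly positive lead times across landmarks would correspond to the "gaze leading" pattern the study reports.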
Collapse
|