1. Malpica S, Martin D, Serrano A, Gutierrez D, Masia B. Task-Dependent Visual Behavior in Immersive Environments: A Comparative Study of Free Exploration, Memory and Visual Search. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4417-4425. [PMID: 37788210] [DOI: 10.1109/tvcg.2023.3320259]
Abstract
Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, guiding attention towards relevant areas based on the task or goal of the viewer. While this is well-known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks like memory or visual search on visual behavior; however, they have often done so with different stimuli, methodology, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior in a novel within-subjects design scheme. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were being recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.
2. Cataldo A, Di Luca M, Deroy O, Hayward V. Touching with the eyes: Oculomotor self-touch induces illusory body ownership. iScience 2023; 26:106180. [PMID: 36895648] [PMCID: PMC9988563] [DOI: 10.1016/j.isci.2023.106180]
Abstract
Self-touch plays a central role in the construction and plasticity of the bodily self. But which mechanisms support this role? Previous accounts emphasize the convergence of proprioceptive and tactile signals from the touching and the touched body parts. Here, we hypothesise that proprioceptive information is not necessary for self-touch modulation of body-ownership. Because eye movements do not rely on proprioceptive signals as limb movements do, we developed a novel oculomotor self-touch paradigm where voluntary eye movements generated corresponding tactile sensations. We then compared the effectiveness of eye versus hand self-touch movements in generating an illusion of owning a rubber hand. Voluntary oculomotor self-touch was as effective as hand-driven self-touch, suggesting that proprioception does not contribute to body ownership during self-touch. Self-touch may contribute to a unified sense of bodily self by binding voluntary actions toward our own body with their tactile consequences.
Affiliation(s)
- Antonio Cataldo
- Institute of Philosophy, School of Advanced Study, University of London, Senate House, London WC1E 7HU, UK
- Cognition, Values and Behaviour, Ludwig Maximilian University, 80333 München, Germany
- Institute of Cognitive Neuroscience, University College London, Alexandra House, 17 Queen Square, London WC1N 3AZ, UK
- Massimiliano Di Luca
- Formerly with Facebook Reality Labs, Redmond, WA, USA
- School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Ophelia Deroy
- Institute of Philosophy, School of Advanced Study, University of London, Senate House, London WC1E 7HU, UK
- Cognition, Values and Behaviour, Ludwig Maximilian University, 80333 München, Germany
- Vincent Hayward
- Institute of Philosophy, School of Advanced Study, University of London, Senate House, London WC1E 7HU, UK
- Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, 75005 Paris, France
3. Ayala N, Zafar A, Niechwiej-Szwedo E. Gaze behaviour: A window into distinct cognitive processes revealed by the Tower of London test. Vision Res 2022; 199:108072. [PMID: 35623185] [DOI: 10.1016/j.visres.2022.108072]
Abstract
The analysis of gaze behaviour during complex tasks provides a promising non-invasive method to examine how specific eye movement patterns relate to various aspects of cognition and action. Notably, the association between aspects of gaze behaviour and subsequent goal-directed action during high-level visuospatial problem solving remains elusive. Therefore, the current study comprehensively examined gaze behaviour using traditional and entropy-based gaze analyses in healthy adults (N = 27) while they performed the Freiburg version of the Tower of London task. Results demonstrated that both gaze analyses provided crucial temporal and spatial information related to planning, solution elaboration and execution. Specifically, gaze biases toward task-relevant areas (i.e., the work space) and an increase in gaze complexity (i.e., gaze transition entropy) during optimal performance reflected changes in cognitive demands as task difficulty increased. A comparison between optimal and non-optimal performance revealed sub-optimal gaze patterns that occurred in the early stages of planning, which were taken to reflect poor information extraction from the task environment and impaired maintenance of information in visuospatial working memory. Gaze behaviour during movement execution indicated an increased need to extract and process information from the goal space. Consequently, movement execution time increased in order to reverse erroneous movements and re-sequence the problem solution. Taken together, the traditional and entropy-based gaze analyses applied in the present study provide a promising approach to identify eye movement patterns that support neurocognitive performance on tasks relying on visuospatial planning and problem solving.
Affiliation(s)
- Naila Ayala
- Department of Kinesiology and Health Sciences, University of Waterloo, Canada
- Abdullah Zafar
- Department of Kinesiology and Health Sciences, University of Waterloo, Canada
4. Rand MK, Ringenbach SDR. Delay of gaze fixation during reaching movement with the non-dominant hand to a distant target. Exp Brain Res 2022; 240:1629-1647. [PMID: 35366070] [DOI: 10.1007/s00221-022-06357-z]
Abstract
The present study examined the effects of hand and task difficulty on eye-hand coordination related to gaze fixation behavior (i.e., fixating a gaze to the target until reach completion) in single reaching movements. Twenty right-handed young adults made reaches on a digitizer, while looking at a visual target and feedback of hand movements on a computer monitor. Task difficulty was altered by having three target distances. In a small portion of trials, visual feedback was randomly removed at the target presentation. The effect of a moderate amount of practice was also examined using a randomized trial schedule across target-distance and visual-feedback conditions in each hand. The results showed that the gaze distances covered during the early reaching phase were reduced, and the gaze fixation to the target was delayed when reaches were performed with the left hand and when the target distance increased. These results suggest that when the use of the non-dominant hand or an increased task difficulty reduces the predictability of hand movements and its sensory consequences, eye-hand coordination is modified to enhance visual monitoring of the reach progress prior to gaze fixation. The randomized practice facilitated this process. Nevertheless, variability of reach trajectory was more increased without visual feedback for right-hand reaches, indicating that control of the dominant arm integrates more visual feedback information during reaches. These results together suggest that the earlier gaze fixation and greater integration of visual feedback during right-hand reaches contribute to the faster and more accurate performance in the final reaching phase.
Affiliation(s)
- Miya K Rand
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany.
- College of Health Solutions, Arizona State University, Phoenix, AZ, USA.
5. Eye-hand coordination: memory-guided grasping during obstacle avoidance. Exp Brain Res 2021; 240:453-466. [PMID: 34787684] [DOI: 10.1007/s00221-021-06271-w]
Abstract
When reaching to grasp previously seen, now out-of-view objects, we rely on stored perceptual representations to guide our actions, likely encoded by the ventral visual stream. So-called memory-guided actions are numerous in daily life, for instance, as we reach to grasp a coffee cup hidden behind our morning newspaper. Little research has examined obstacle avoidance during memory-guided grasping, though it is possible obstacles with increased perceptual salience will provoke exacerbated avoidance maneuvers, like exaggerated deviations in eye and hand position away from obtrusive obstacles. We examined the obstacle avoidance strategies adopted as subjects reached to grasp a 3D target object under visually-guided (closed loop or open loop with full vision prior to movement onset) and memory-guided (short- or long-delay) conditions. On any given trial, subjects reached between a pair of flanker obstacles to grasp a target object. The positions and widths of the obstacles were manipulated, though their inner edges remained a constant distance apart. While reach and grasp behavior was consistent with the obstacle avoidance literature, in that reach, grasp, and gaze positions were biased away from obstacles most obtrusive to the reaching hand, our results reveal distinctive avoidance approaches undertaken depend on the availability of visual feedback. Contrary to expectation, we found subjects reaching to grasp after a long delay in the absence of visual feedback failed to modify their final fixation and grasp positions to accommodate the different positions of obstacles, demonstrating a more moderate, rather than exaggerative, obstacle avoidance strategy.
6. Goettker A, Fiehler K, Voudouris D. Somatosensory target information is used for reaching but not for saccadic eye movements. J Neurophysiol 2020; 124:1092-1102. [DOI: 10.1152/jn.00258.2020]
Abstract
A systematic investigation of contributions of different somatosensory modalities (proprioception, kinesthesia, tactile) for goal-directed movements is missing. Here we demonstrate that while eye movements are not affected by different types of somatosensory information, reach precision improves when two different types of information are available. Moreover, reach accuracy and gaze precision to unseen somatosensory targets improve when performing coordinated eye-hand movements, suggesting bidirectional contributions of efferent information in reach and eye movement control.
Affiliation(s)
- Alexander Goettker
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Dimitris Voudouris
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
7. Two types of memory-based (pantomime) reaches distinguished by gaze anchoring in reach-to-grasp tasks. Behav Brain Res 2020; 381:112438. [PMID: 31857149] [DOI: 10.1016/j.bbr.2019.112438]
Abstract
Comparisons of target-based reaching vs memory-based (pantomime) reaching have been used to obtain insight into the visuomotor control of reaching. The present study examined the contribution of gaze anchoring, reaching to a target that is under continuous gaze, to both target-based and memory-based reaching. Participants made target-based reaches for discs located on a table or food items located on a pedestal or they replaced the objects. They then made memory-based reaches in which they pantomimed their target-based reaches. Participants were fitted with hand sensors for kinematic tracking and an eye tracker to monitor gaze. When making target-based reaches, participants directed gaze to the target location from reach onset to offset without interrupting saccades. Similar gaze anchoring was present for memory-based reaches when the surface upon which the target had been placed remained. When the target and its surface were both removed there was no systematic relationship between gaze and the reach. Gaze anchoring was also present when participants replaced a target on a surface, a movement featuring a reach but little grasp. That memory-based reaches can be either gaze anchor-associated or gaze anchor-independent is discussed in relation to contemporary views of the neural control of reaching.
8. Mathew J, de Rugy A, Danion FR. How optimal is bimanual tracking? The key role of hand coordination in space. J Neurophysiol 2020; 123:511-521. [PMID: 31693447] [DOI: 10.1152/jn.00119.2019]
Abstract
When coordinating two hands to achieve a common goal, the nervous system has to assign responsibility to each hand. Optimal control theory suggests that this problem is solved by minimizing costs such as the variability of movement and effort. However, the natural tendency to produce similar movements during bimanual tasks has been somewhat ignored by this approach. We consider a task in which participants were asked to track a moving target by means of a single cursor controlled simultaneously by the two hands. Two types of hand-cursor mappings were tested: one in which the cursor position resulted from the average location of the two hands (Mean) and one in which the horizontal and vertical positions of the cursor were driven separately by each hand (Split). As expected, unimanual tracking performance was better with the dominant hand than with the more variable nondominant hand. More interestingly, instead of exploiting this effect by increasing the use of the dominant hand, the contributions from both hands remained symmetrical during bimanual cooperative tasks. Indeed, for both mappings, and even after 6 min of practice, the right and left hands remained strongly correlated, performing similar movements in extrinsic space. Persistence of this bimanual coupling demonstrates that participants prefer to maintain similar movements at the expense of unnecessary movements (in the Split task) and of increased noise from the nondominant hand (in the Mean task). Altogether, the findings suggest that bimanual tracking exploits hand coordination in space rather than minimizing motor costs associated with variability and effort.
New & Noteworthy: When two hands are coordinated to achieve a common goal, optimal control theory proposes that the brain assigns responsibility to each hand by minimizing movement variability and effort. Nevertheless, we show that participants perform bimanual tracking using similar contributions from the dominant and nondominant hands, despite unnecessary movements and a less accurate nondominant hand. Our findings suggest that bimanual tracking exploits hand coordination in space rather than minimizing motor costs associated with variability and effort.
Affiliation(s)
- James Mathew
- Aix Marseille Université, Centre National de la Recherche Scientifique, Institut de Neurosciences de la Timone, UMR 7289, Marseille, France
- Aymar de Rugy
- Université de Bordeaux, Centre National de la Recherche Scientifique, Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287, Bordeaux, France
- Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, The University of Queensland, Brisbane, Queensland, Australia
- Frederic R Danion
- Aix Marseille Université, Centre National de la Recherche Scientifique, Institut de Neurosciences de la Timone, UMR 7289, Marseille, France
9. Mathew J, Flanagan JR, Danion FR. Gaze behavior during visuomotor tracking with complex hand-cursor dynamics. J Vis 2019; 19:24. [PMID: 31868897] [DOI: 10.1167/19.14.24]
Abstract
The ability to track a moving target with the hand has been extensively studied, but few studies have characterized gaze behavior during this task. Here we investigate gaze behavior when participants learn a new mapping between hand and cursor motion, such that the cursor represented the position of a virtual mass attached to the grasped handle via a virtual spring. Depending on the experimental condition, haptic feedback consistent with mass-spring dynamics could also be provided. For comparison, a simple one-to-one hand-cursor mapping was also tested. We hypothesized that gaze would be drawn, at times, to the cursor in the mass-spring conditions, especially in the absence of haptic feedback. As expected, hand tracking performance was less accurate under the spring mapping, but gaze behavior was virtually unaffected by the spring mapping, regardless of whether haptic feedback was provided. Specifically, the relative gaze position between target and cursor, the rate of saccades, and the gain of smooth pursuit were similar under both mappings and both haptic feedback conditions. We conclude that even when participants are exposed to a challenging hand-cursor mapping, gaze is primarily concerned with ongoing target motion, suggesting that peripheral vision is sufficient to monitor cursor position and to update hand movement control.
Affiliation(s)
- James Mathew
- Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France
- Current affiliation: Institute of Neuroscience, Institute of Communication & Information Technologies, Electronics & Applied Mathematics, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Ontario, Canada
- Frederic R Danion
- Aix-Marseille Université, CNRS, Institut de Neurosciences de la Timone, Marseille, France
10. Foerster RM. The function of "looking-at-nothing" for sequential sensorimotor tasks: Eye movements to remembered action-target locations. J Eye Mov Res 2019; 12:10.16910/jemr.12.2.2. [PMID: 33828728] [PMCID: PMC7881903] [DOI: 10.16910/jemr.12.2.2]
Abstract
When performing manual actions, eye movements precede hand movements to target locations: before we grasp an object, we look at it. Eye-hand guidance is even preserved when visual targets are unavailable, e.g., when grasping behind an occlusion. This "looking-at-nothing" behavior might be functional, e.g., as a "deictic pointer" for manual control or as a memory-retrieval cue, or it might be a by-product of automatization. Here, it is studied whether looking at empty locations before acting on them is beneficial for sensorimotor performance. In five experiments, participants completed a click sequence on eight visual targets for 0-100 trials while they either had to fixate on the screen center or could move their eyes freely. During 50-100 consecutive trials, participants clicked the same sequence on a blank screen with free or fixed gaze. During both phases, participants looked at target locations when gaze shifts were allowed. With visual targets, target fixations led to faster, more precise clicking, fewer errors, and sparser cursor paths than central fixation. Without visual information, a tiny free-gaze benefit could sometimes be observed, and it was a memory rather than a motor-calculation benefit. Interestingly, central fixation during learning forced early explicit encoding, causing a strong benefit for acting on remembered targets later, independent of whether the eyes moved then.
Affiliation(s)
- Rebecca M Foerster
- Center for Interdisciplinary Research (ZiF) & Department of Psychology & Cluster of Excellence 'Cognitive Interaction Technology' (CITEC), Germany
11. Asymmetrical Relationship between Prediction and Control during Visuomotor Adaptation. eNeuro 2018; 5:eN-NWR-0280-18. [PMID: 30627629] [PMCID: PMC6325531] [DOI: 10.1523/eneuro.0280-18.2018]
Abstract
Current theories suggest that the ability to control the body and to predict its associated sensory consequences is key for skilled motor behavior. It is also suggested that these abilities need to be updated when the mapping between motor commands and sensory consequences is altered. Here we challenge this view by investigating the transfer of adaptation to rotated visual feedback between one task in which human participants had to control a cursor with their hand in order to track a moving target, and another in which they had to predict with their eyes the visual consequences of their hand movement on the cursor. Hand and eye tracking performances were evaluated respectively through cursor–target and eye–cursor distance. Results reveal a striking dissociation: although prior adaptation of hand tracking greatly facilitates eye tracking, the adaptation of eye tracking does not transfer to hand tracking. We conclude that although the update of control is associated with the update of prediction, prediction can be updated independently of control. To account for this pattern of results, we propose that task demands mediate the update of prediction and control. Although a joint update of prediction and control seemed mandatory for success in our hand tracking task, the update of control was only facultative for success in our eye tracking task. More generally, those results promote the view that prediction and control are mediated by separate neural processes and suggest that people can learn to predict movement consequences without necessarily promoting their ability to control these movements.
12. Danion FR, Flanagan JR. Different gaze strategies during eye versus hand tracking of a moving target. Sci Rep 2018; 8:10059. [PMID: 29968806] [PMCID: PMC6030130] [DOI: 10.1038/s41598-018-28434-6]
Abstract
The ability to visually track moving objects using smooth pursuit eye movements is critical in both perceptual and action tasks. Here, by asking participants to view a moving target or track it with their hand, we tested whether different task demands give rise to different gaze strategies. We hypothesized that during hand tracking, in comparison to eye tracking, the frequency of catch-up saccades would be lower and the smooth pursuit gain would be greater, because this limits the loss of stable retinal and extra-retinal information due to saccades. In our study, participants viewed a visual target that followed a smooth but unpredictable trajectory in a horizontal plane and were instructed to track the target either with their gaze or with a cursor controlled by a manipulandum. Although the mean distance between gaze and target was comparable in both tasks, we found, consistent with our hypothesis, an increase in smooth pursuit gain and a decrease in the frequency of catch-up saccades during hand tracking. We suggest that this difference in gaze behavior arises from different task demands: whereas keeping gaze close to the target is important in both tasks, obtaining stable retinal and extra-retinal information is critical for guiding hand movement.
Affiliation(s)
- Frederic R Danion
- Aix Marseille University, CNRS, Institut de Neurosciences de la Timone, Marseille, France.
- J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Ontario, Canada
| |
13. Foerster RM. "Looking-at-nothing" during sequential sensorimotor actions: Long-term memory-based eye scanning of remembered target locations. Vision Res 2018; 144:29-37. [PMID: 29432778] [DOI: 10.1016/j.visres.2018.01.005]
Abstract
Before acting, humans saccade to a target object to extract relevant visual information. Even when acting on remembered objects, locations previously occupied by relevant objects are fixated during imagery and memory tasks - a phenomenon called "looking-at-nothing". While looking-at-nothing was robustly found in tasks encouraging declarative memory build-up, results are mixed in the case of procedural sensorimotor tasks. Eye guidance to manual targets in complete darkness was observed in a task practiced for days beforehand, while investigations using only a single session did not find fixations to remembered action targets. Here, it is asked whether looking-at-nothing can be found in a single sensorimotor session, and thus independent of sleep consolidation, and how it progresses when visual information is repeatedly unavailable. Eye movements were investigated in a computerized version of the trail making test. Participants clicked on numbered circles in ascending sequence. Fifty trials were performed with the same spatial arrangement of 9 visual targets to enable long-term memory consolidation. During 50 consecutive trials, participants had to click the remembered target sequence on an empty screen. Participants scanned the visual targets and also the empty target locations sequentially with their eyes, although the latter less precisely than the former. Over the course of the memory trials, manual and oculomotor sequential target scanning became more similar to that in the visual trials. The results argue for robust looking-at-nothing during procedural sensorimotor tasks, provided that long-term memory information is sufficient.
Affiliation(s)
- Rebecca M Foerster
- Department of Psychology, Bielefeld University, Germany; Cluster of Excellence Cognitive Interaction Technology, Bielefeld University, Germany.
14. Foerster RM. Task-Irrelevant Expectation Violations in Sequential Manual Actions: Evidence for a "Check-after-Surprise" Mode of Visual Attention and Eye-Hand Decoupling. Front Psychol 2016; 7:1845. [PMID: 27933016] [PMCID: PMC5120088] [DOI: 10.3389/fpsyg.2016.01845]
Abstract
When performing sequential manual actions (e.g., cooking), visual information is prioritized according to the task, determining where and when to attend, look, and act. In well-practiced sequential actions, long-term memory (LTM)-based expectations specify which action targets might be found where and when. We have previously demonstrated (Foerster and Schneider, 2015b) that task-relevant violations of such expectations (e.g., a target location change) cause a regression from a memory-based mode of attentional selection to visual search. How might task-irrelevant expectation violations in such well-practiced sequential manual actions modify attentional selection? This question was investigated with a computerized version of the number-connection test. Participants clicked on nine spatially distributed numbered target circles in ascending order while eye movements were recorded as a proxy for covert attention. The targets' visual features and locations stayed constant for 65 prechange trials, allowing participants to practice the manual action sequence. Subsequently, a task-irrelevant expectation violation occurred and persisted for 20 change trials: action target number 4 appeared in a different font. In 15 reversion trials, number 4 returned to the original font. During the first task-irrelevant change trial, manual clicking was slower and eye scanpaths were larger and contained more fixations. The additional fixations were mainly checking fixations on the changed target while acting on later targets. Whereas the eyes repeatedly revisited the task-irrelevant change, cursor paths remained completely unaffected. Effects lasted for 2-3 change trials and did not reappear during reversion. In conclusion, an unexpected task-irrelevant change to a task-defining feature of a well-practiced manual sequence leads to eye-hand decoupling and a "check-after-surprise" mode of attentional selection.
Affiliation(s)
- Rebecca M Foerster
- Neuro-cognitive Psychology, Department of Psychology & Cluster of Excellence Cognitive Interaction Technology 'CITEC', Bielefeld University, Bielefeld, Germany
15. Gaze–grasp coordination in obstacle avoidance: differences between binocular and monocular viewing. Exp Brain Res 2015; 233:3489-3505. [DOI: 10.1007/s00221-015-4421-7]
16. The role of eye movements in motor sequence learning. Hum Mov Sci 2015; 40:220-236. [DOI: 10.1016/j.humov.2015.01.004]
17
Abstract
Coordinated eye movements are crucial for precision control of our hands. A commonly believed neural mechanism underlying eye-hand coordination is interaction between the neural networks controlling each effector, exchanging and matching information such as movement target location and onset time. Alternatively, eye-hand coordination may result simply from common inputs to independent eye and hand control pathways. Thus far, it remains unknown whether and where either of these two possible mechanisms exists. A candidate location for the former mechanism, interpathway communication, is the posterior parietal cortex (PPC), where distinct effector-specific areas reside. If the PPC were within the network for eye-hand coordination, perturbing it would affect both eye and hand movements that are concurrently planned. In contrast, if eye-hand coordination arises solely from common inputs, perturbing one effector pathway, e.g., the parietal reach region (PRR), would not affect the other effector. To test these hypotheses, we inactivated part of PRR in the macaque, located in the medial bank of the intraparietal sulcus encompassing the medial intraparietal area and area 5V. When each effector moved alone, PRR inactivation shortened reach but not saccade amplitudes, compatible with the known reach-selective activity of PRR. However, when both effectors moved concurrently, PRR inactivation shortened both reach and saccade amplitudes, and decoupled their reaction times. Therefore, consistent with the interpathway communication hypothesis, we propose that the planning of concurrent eye and hand movements causes the spatial information in PRR to influence the otherwise independent eye control pathways, and that their temporal coupling requires an intact PRR.
18
Paulus M, Fikkert P. Conflicting Social Cues: Fourteen- and 24-Month-Old Infants' Reliance on Gaze and Pointing Cues in Word Learning. JOURNAL OF COGNITION AND DEVELOPMENT 2013. [DOI: 10.1080/15248372.2012.698435] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
19
Flanagan JR, Rotman G, Reichelt AF, Johansson RS. The role of observers' gaze behaviour when watching object manipulation tasks: predicting and evaluating the consequences of action. Philos Trans R Soc Lond B Biol Sci 2013; 368:20130063. [PMID: 24018725 PMCID: PMC3758206 DOI: 10.1098/rstb.2013.0063] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
When watching an actor manipulate objects, observers, like the actor, naturally direct their gaze to each object as the hand approaches and typically maintain gaze on the object until the hand departs. Here, we probed the function of observers' eye movements, focusing on two possibilities: (i) that observers' gaze behaviour arises from processes involved in the prediction of the target object of the actor's reaching movement and (ii) that this gaze behaviour supports the evaluation of mechanical events that arise from interactions between the actor's hand and objects. Observers watched an actor reach for and lift one of two presented objects. The observers' task was either to predict the target object or judge its weight. Proactive gaze behaviour, similar to that seen in self-guided action observation, was seen in the weight judgement task, which requires evaluating mechanical events associated with lifting, but not in the target prediction task. We submit that an important function of gaze behaviour in self-guided action observation is the evaluation of mechanical events associated with interactions between the hand and object. By comparing predicted and actual mechanical events, observers, like actors, can gain knowledge about the world, including information about objects they may subsequently act upon.
Affiliation(s)
- J. Randall Flanagan
- Department of Psychology, Queen's University, Kingston, Ontario, Canada K7L 3N6
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada K7L 3N6
- Gerben Rotman
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada K7L 3N6
- Andreas F. Reichelt
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada K7L 3N6
- Roland S. Johansson
- Section for Physiology, Department of Integrative Medical Biology, Umeå University, 901 87 Umeå, Sweden
20
Prime SL, Marotta JJ. Gaze strategies during visually-guided versus memory-guided grasping. Exp Brain Res 2012; 225:291-305. [PMID: 23239197 DOI: 10.1007/s00221-012-3358-3] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2012] [Accepted: 11/22/2012] [Indexed: 11/28/2022]
Abstract
Vision plays a crucial role in guiding motor actions. But sometimes we cannot use vision and must rely on our memory to guide action, e.g., remembering where we placed our eyeglasses on the bedside table when reaching for them with the lights off. Recent studies show subjects look towards the index finger grasp position during visually-guided precision grasping. But where do people look during memory-guided grasping? Here, we explored the gaze behaviour of subjects as they grasped a centrally placed symmetrical block under open- and closed-loop conditions. In Experiment 1, subjects performed grasps in either a visually-guided task or a memory-guided task. The results show that during visually-guided grasping, gaze was first directed towards the index finger's grasp point on the block, suggesting gaze targets future grasp points during the planning of the grasp. Gaze during memory-guided grasping was aimed closer to the block's centre of mass from block presentation to the completion of the grasp. In Experiment 2, subjects performed an 'immediate grasping' task in which vision of the block was removed immediately at the onset of the reach. Similar to the visually-guided results from Experiment 1, gaze was primarily directed towards the index finger location. These results support the two-stream theory of vision in that motor planning with visual feedback at the onset of the movement is driven primarily by real-time visuomotor computations of the dorsal stream, whereas grasping remembered objects without visual feedback is driven primarily by the perceptual memory representations mediated by the ventral stream.
Affiliation(s)
- Steven L Prime
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand.
21
The brain uses efference copy information to optimise spatial memory. Exp Brain Res 2012; 224:189-97. [PMID: 23073714 DOI: 10.1007/s00221-012-3298-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2012] [Accepted: 10/03/2012] [Indexed: 10/27/2022]
Abstract
Does a motor response to a target improve the subsequent recall of the target position, or can we simply use peripheral position information to guide an accurate response? We suggest that a motor plan of the hand can be enhanced with actual motor and efference copy feedback (GoGo trials), which is absent in the passive observation of a stimulus (NoGo trials). To investigate this effect during eye and hand coordination movements, we presented stimuli in two formats (memory guided or visually guided) under three modality conditions (eyes only, hands only (with eyes fixated), or eyes and hand together). We found that during coordinated movements, both the eye and hand response times were facilitated when efference feedback of the movement was provided. Furthermore, both eye and hand movements to remembered locations were significantly more accurate in the GoGo than in the NoGo trial types. These results reveal that an efference copy of a motor plan enhances memory for a location, an enhancement that is not only observed in eye movements but is also translated downstream into a hand movement. These results have significant implications for how we plan, code and guide behavioural responses, and how we can optimise accuracy and timing to a given target.
22
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. [PMID: 21456958 DOI: 10.1146/annurev-neuro-061010-113749] [Citation(s) in RCA: 124] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3.
23
Bédard P, Wu M, Sanes JN. Brain activation related to combinations of gaze position, visual input, and goal-directed hand movements. Cereb Cortex 2010; 21:1273-82. [PMID: 20974688 DOI: 10.1093/cercor/bhq205] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Humans reach to and acquire objects by transforming visual targets into action commands. How the brain integrates goals specified in a visual reference frame into a frame suitable for an action plan requires clarification, in particular whether visual input, per se, interacts with gaze position to formulate action plans. To further evaluate brain control of visual-motor integration, we assessed brain activation using functional magnetic resonance imaging. Humans performed goal-directed movements toward visible or remembered targets while fixating gaze left or right from center. We dissociated movement planning from performance using a delayed-response task and manipulated target visibility by its availability throughout the delay or blanking it 500 ms after onset. We found strong effects of gaze orientation on brain activation during planning, and interactive effects of target visibility and gaze orientation on movement-related activation during performance in parietal and premotor cortices (PM), cerebellum, and basal ganglia, with more activation for rightward gaze at a visible target and no gaze modulation for movements directed toward remembered targets. These results demonstrate effects of gaze position on PM and movement-related processes and provide new information about how visual signals interact with gaze position in transforming visual inputs into motor goals.
Affiliation(s)
- Patrick Bédard
- Department of Neuroscience, Alpert Medical School of Brown University, Providence, RI 02912, USA
24
Byrne PA, Crawford JD. Cue Reliability and a Landmark Stability Heuristic Determine Relative Weighting Between Egocentric and Allocentric Visual Information in Memory-Guided Reach. J Neurophysiol 2010; 103:3054-69. [DOI: 10.1152/jn.01008.2009] [Citation(s) in RCA: 57] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans, we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies, we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact, we developed an MLE model that weights egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric–allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration—despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment—had a strong influence on egocentric–allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
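The reliability-based weighting at the core of this model is the standard maximum-likelihood cue-combination rule, in which each cue is weighted in proportion to its reliability (inverse variance). A minimal sketch of that rule (illustrative only; the function names are assumptions, and the paper's full model additionally applies the landmark-stability heuristic, which this sketch omits):

```python
import numpy as np

def mle_weights(sigma_ego, sigma_allo):
    """Reliability-based weights for two cues (reliability = 1/variance)."""
    r_ego, r_allo = 1.0 / sigma_ego**2, 1.0 / sigma_allo**2
    w_ego = r_ego / (r_ego + r_allo)
    return w_ego, 1.0 - w_ego

def combine(x_ego, x_allo, sigma_ego, sigma_allo):
    """Optimally combined position estimate and its standard deviation.

    The combined estimate always has lower variance than either
    cue alone -- the signature benefit of MLE cue integration.
    """
    w_ego, w_allo = mle_weights(sigma_ego, sigma_allo)
    x_hat = w_ego * x_ego + w_allo * x_allo
    sigma_hat = np.sqrt(1.0 / (1.0 / sigma_ego**2 + 1.0 / sigma_allo**2))
    return x_hat, sigma_hat
```

For example, an allocentric cue three times noisier than the egocentric cue receives only a tenth of the weight, which is the kind of shift the landmark manipulations in the study were designed to probe.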
Affiliation(s)
- Patrick A. Byrne
- Centre for Vision Research,
- Canadian Action and Perception Network, and
- J. Douglas Crawford
- Centre for Vision Research,
- Canadian Action and Perception Network, and
- Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
25
Degani AM, Danna-Dos-Santos A, Robert T, Latash ML. Kinematic synergies during saccades involving whole-body rotation: a study based on the uncontrolled manifold hypothesis. Hum Mov Sci 2010; 29:243-58. [PMID: 20346529 DOI: 10.1016/j.humov.2010.02.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2009] [Revised: 01/29/2010] [Accepted: 02/05/2010] [Indexed: 11/26/2022]
Abstract
We used the framework of the uncontrolled manifold hypothesis to study the coordination of body segments and eye movements in standing persons during the task of shifting the gaze to a target positioned behind the body. The task was performed at a comfortable speed and fast. Multi-segment and head-eye synergies were quantified as co-varied changes in elemental variables (body segment rotations and eye rotation) that stabilized (i.e., reduced the across-trials variability of) head rotation in space and gaze trajectory. Head position in space was stabilized by co-varied rotations of body segments prior to the action, during its later stages, and after its completion. The synergy index showed a drop that started prior to the action initiation (anticipatory synergy adjustment) and continued during the phase of quick head rotation. Gaze direction was stabilized only at movement completion and immediately after the saccade at movement initiation under the "fast" instruction. The study documents for the first time anticipatory synergy adjustments during whole-body actions. It shows multi-joint synergies stabilizing head trajectory in space. In contrast, there was no synergy between head and eye rotations during saccades that would achieve a relatively invariant gaze trajectory.
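The uncontrolled manifold (UCM) analysis behind this synergy index compares across-trials variance within the UCM (combinations of elemental variables that leave the performance variable, e.g., head rotation, unchanged) against variance orthogonal to it. A minimal sketch of that variance decomposition, assuming a linearised Jacobian of the element-to-performance mapping (the variable names are illustrative, not the authors' code):

```python
import numpy as np

def ucm_variance(deviations, jacobian):
    """Decompose across-trials variance into UCM and orthogonal parts.

    deviations : (n_trials, n_elements) demeaned elemental variables
    jacobian   : (n_perf, n_elements) linearised mapping from elemental
                 to performance variables at the mean configuration
    Returns per-degree-of-freedom variances and the synergy index.
    """
    n_trials, n_elem = deviations.shape
    # The null space of the Jacobian spans the UCM: deviations in it
    # do not change the performance variable.
    _, s, vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-12))
    ucm_basis = vt[rank:].T                   # (n_elem, n_elem - rank)
    within = deviations @ ucm_basis           # projections onto the UCM
    sq_within = np.sum(within ** 2)
    sq_total = np.sum(deviations ** 2)
    v_ucm = sq_within / (n_trials * (n_elem - rank))
    v_ort = (sq_total - sq_within) / (n_trials * rank)
    v_tot = sq_total / (n_trials * n_elem)
    # Positive index: variance is channelled into the UCM (a synergy)
    delta_v = (v_ucm - v_ort) / v_tot
    return v_ucm, v_ort, delta_v
```

A drop in `delta_v` before movement onset is what the abstract calls an anticipatory synergy adjustment.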
Affiliation(s)
- Adriana M Degani
- Department of Kinesiology, The Pennsylvania State University, University Park, PA 16802, USA
26
Keith GP, DeSouza JFX, Yan X, Wang H, Crawford JD. A method for mapping response fields and determining intrinsic reference frames of single-unit activity: applied to 3D head-unrestrained gaze shifts. J Neurosci Methods 2009; 180:171-84. [PMID: 19427544 DOI: 10.1016/j.jneumeth.2009.03.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2009] [Revised: 03/08/2009] [Accepted: 03/09/2009] [Indexed: 10/21/2022]
Abstract
Natural movements towards a target show metric variations between trials. When movements combine contributions from multiple body-parts, such as head-unrestrained gaze shifts involving both eye and head rotation, the individual body-part movements may vary even more than the overall movement. The goal of this investigation was to develop a general method for both mapping sensory or motor response fields of neurons and determining their intrinsic reference frames, where these movement variations are actually utilized rather than avoided. We used head-unrestrained gaze shifts, three-dimensional (3D) geometry, and naturalistic distributions of eye and head orientation to explore the theoretical relationship between the intrinsic reference frame of a sensorimotor neuron's response field and the coherence of the activity when this response field is fitted non-parametrically using different kernel bandwidths in different reference frames. We measured how well the regression surface predicts unfitted data using the PREdictive Sum-of-Squares (PRESS) statistic. The reference frame with the smallest PRESS statistic was categorized as the intrinsic reference frame if the PRESS statistic was significantly larger in other reference frames. We show that the method works best when targets are at regularly spaced positions within the response field's active region, and that the method identifies the best kernel bandwidth for response field estimation. We describe how gain-field effects may be dealt with, and how to test neurons within a population that fall on a continuum between specific reference frames. This method may be applied to any spatially coherent single-unit activity related to sensation and/or movement during naturally varying behaviors.
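The PRESS statistic at the heart of this method is the sum of squared leave-one-out prediction errors of the fitted regression. A minimal one-dimensional sketch with a Gaussian-kernel Nadaraya-Watson smoother (the smoother choice and 1D setting are simplifying assumptions; the actual method fits multi-dimensional response fields and adds significance testing across reference frames):

```python
import numpy as np

def nw_press(x, y, bandwidth):
    """Leave-one-out PRESS for Nadaraya-Watson kernel regression.

    Each trial's response is predicted from all *other* trials using
    Gaussian kernel weights; smaller PRESS means the fitted surface
    generalises better to held-out data. Comparing PRESS across
    candidate reference frames (and bandwidths) selects the frame in
    which the response field is most spatially coherent.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    press = 0.0
    for i in range(len(x)):
        w = np.exp(-0.5 * ((x - x[i]) / bandwidth) ** 2)
        w[i] = 0.0  # leave trial i out of its own prediction
        y_hat = np.dot(w, y) / w.sum()
        press += (y[i] - y_hat) ** 2
    return press
```

In use, one would evaluate `nw_press` on firing rates expressed in each candidate reference frame and over a range of bandwidths, then take the frame/bandwidth pair with the smallest value.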
Affiliation(s)
- Gerald P Keith
- Canadian Action and Perception Network, York University, 4700 Keele Street, Toronto, Ontario M3J1P3, Canada