1
Caruana N, Nalepka P, Perez GA, Inkley C, Munro C, Rapaport H, Brett S, Kaplan DM, Richardson MJ, Pellicano E. Autistic young people adaptively use gaze to facilitate joint attention during multi-gestural dyadic interactions. Autism 2024; 28:1565-1581. PMID: 38006222; PMCID: PMC11134991; DOI: 10.1177/13623613231211967.
Abstract
Lay abstract: Autistic people have been said to have 'problems' with joint attention, that is, looking where someone else is looking. Past studies of joint attention have used tasks that require autistic people to continuously look at and respond to eye-gaze cues. But joint attention can also be done using other social cues, like pointing. This study looked at whether autistic and non-autistic young people use another person's eye gaze during joint attention in a task that did not require them to look at their partner's face. In the task, each participant worked together with their partner to find a computer-generated object in virtual reality. Sometimes the participant had to help guide their partner to the object, and other times, they followed their partner's lead. Participants were told to point to guide one another but were not told to use eye gaze. Both autistic and non-autistic participants often looked at their partner's face during joint attention interactions and were faster to respond to their partner's hand-pointing when the partner also looked at the object before pointing. This shows that autistic people can and do use information from another person's eyes, even when they don't have to. It is possible that, by not forcing autistic young people to look at their partner's face and eyes, they were better able to gather information from their partner's face when needed, without being overwhelmed. This shows how important it is to design tasks that provide autistic people with opportunities to show what they can do.
2
Coudiere A, Danion FR. Eye-hand coordination all the way: from discrete to continuous hand movements. J Neurophysiol 2024; 131:652-667. PMID: 38381528; DOI: 10.1152/jn.00314.2023.
Abstract
The differentiation between continuous and discrete actions is key for behavioral neuroscience. Although many studies have characterized eye-hand coordination during discrete (e.g., reaching) and continuous (e.g., pursuit tracking) actions, all these studies were conducted separately, using different setups and participants. In addition, how eye-hand coordination might operate at the frontier between discrete and continuous movements remains unexplored. Here we filled these gaps by means of a task that could elicit different movement dynamics. Twenty-eight participants were asked to simultaneously track with their eyes and a joystick a visual target that followed an unpredictable trajectory and whose position was updated at different rates (from 1.5 to 240 Hz). This procedure allowed us to examine actions ranging from discrete point-to-point movements (low refresh rate) to continuous pursuit (high refresh rate). For comparison, we also tested a manual tracking condition with the eyes fixed and a pure eye tracking condition (hand fixed). The results showed an abrupt transition between discrete and continuous hand movements around 3 Hz, contrasting with a smooth trade-off between fixations and smooth pursuit. Nevertheless, hand and eye tracking accuracy remained strongly correlated, with each depending on whether the other effector was recruited. Moreover, gaze-cursor distance and lag were smaller when eye and hand performed the task conjointly than separately. Altogether, despite some dissimilarities in eye and hand dynamics when transitioning between discrete and continuous movements, our results emphasize that eye-hand coordination continues to operate smoothly and support the notion of synergies across eye movement types.

New & Noteworthy: The differentiation between continuous and discrete actions is key for behavioral neuroscience. By using a visuomotor task in which we manipulate the target refresh rate to trigger different movement dynamics, we explored eye-hand coordination all the way from discrete to continuous actions. Despite abrupt changes in hand dynamics, eye-hand coordination continues to operate via a gradual trade-off between fixations and smooth pursuit, an observation confirming the notion of synergies across eye movement types.
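To make the refresh-rate manipulation concrete, here is a minimal Python sketch of how a smooth target trajectory could be turned into the discrete, sample-and-hold stimuli this abstract describes. The trajectory, rates, and function names are illustrative assumptions, not the authors' actual code.

```python
# Hedged sketch: converting a continuous target trajectory into "low
# refresh rate" stimuli by holding each position for several frames.
import numpy as np

DISPLAY_HZ = 240.0   # assumed display/sampling rate
DURATION_S = 10.0    # assumed trial length

def sample_and_hold(trajectory: np.ndarray, display_hz: float, update_hz: float) -> np.ndarray:
    """Hold each target position for display_hz/update_hz frames, so the
    target jumps in discrete steps at the requested update rate."""
    step = max(int(round(display_hz / update_hz)), 1)
    held = trajectory.copy()
    for start in range(0, len(trajectory), step):
        held[start:start + step] = trajectory[start]
    return held

# Smooth, unpredictable trajectory: a sum of non-harmonic sinusoids (a
# common choice for pursuit targets; these frequencies are made up).
t = np.arange(0, DURATION_S, 1.0 / DISPLAY_HZ)
target_x = sum(a * np.sin(2 * np.pi * f * t) for a, f in [(1.0, 0.19), (0.6, 0.37), (0.3, 0.53)])

for update_hz in (1.5, 3.0, 60.0, 240.0):
    stimulus = sample_and_hold(target_x, DISPLAY_HZ, update_hz)
    n_jumps = np.count_nonzero(np.diff(stimulus))
    print(f"{update_hz:6.1f} Hz update -> {n_jumps} discrete position changes")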
Affiliation(s)
- Adrien Coudiere
- CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
- Frederic R Danion
- CNRS, Université de Poitiers, Université de Tours, CeRCA, Poitiers, France
3
Bloch C, Tepest R, Koeroglu S, Feikes K, Jording M, Vogeley K, Falter-Wagner CM. Interacting with autistic virtual characters: intrapersonal synchrony of nonverbal behavior affects participants' perception. Eur Arch Psychiatry Clin Neurosci 2024. PMID: 38270620; DOI: 10.1007/s00406-023-01750-3.
Abstract
Temporal coordination of communicative behavior is located not only between but also within interaction partners (e.g., gaze and gestures). This intrapersonal synchrony (IaPS) is assumed to constitute interpersonal alignment. Studies show systematic variations in IaPS in individuals with autism, which may affect the degree of interpersonal temporal coordination. In the current study, we reversed the approach and mapped the measured nonverbal behavior of interactants with and without ASD from a previous study onto virtual characters to study the effects of the differential IaPS on observers (N = 68), both with and without ASD (crossed design). During a communication task with both characters, who indicated targets with gaze and delayed pointing gestures, we measured response times, gaze behavior, and post hoc impression formation. Results show that character behavior indicative of ASD resulted in overall enlarged decoding times in observers, and this effect was even more pronounced in observers with ASD. A classification of observers' gaze types indicated differentiated decoding strategies. Whereas non-autistic observers presented with a rather consistent eyes-focused strategy associated with efficient and fast responses, observers with ASD presented with highly variable decoding strategies. In contrast to communication efficiency, impression formation was not influenced by IaPS. The results underline the importance of timing differences in both production and perception processes during multimodal nonverbal communication in interactants with and without ASD. In essence, the current findings locate the manifestation of reduced reciprocity in autism not merely in the person, but in the interactional dynamics of dyads.
Affiliation(s)
- Carola Bloch
- Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, 80336, Munich, Germany.
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937, Cologne, Germany.
- Ralf Tepest
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937, Cologne, Germany
- Sevim Koeroglu
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937, Cologne, Germany
- Kyra Feikes
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937, Cologne, Germany
- Mathis Jording
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Juelich, 52425, Juelich, Germany
- Kai Vogeley
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937, Cologne, Germany
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Juelich, 52425, Juelich, Germany
- Christine M Falter-Wagner
- Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, 80336, Munich, Germany
4
Matsumiya K, Furukawa S. Perceptual decisions interfere more with eye movements than with reach movements. Commun Biol 2023; 6:882. PMID: 37648896; PMCID: PMC10468498; DOI: 10.1038/s42003-023-05249-4.
Abstract
Perceptual judgements are formed through invisible cognitive processes. Reading out these judgements is essential for advancing our understanding of decision making and requires inferring covert cognitive states based on overt motor actions. Although intuition suggests that these actions must be related to the formation of decisions about where to move body parts, actions have been reported to be influenced by perceptual judgements even when the action is irrelevant to the perceptual judgement. However, despite performing multiple actions in our daily lives, how perceptual judgements influence multiple judgement-irrelevant actions is unknown. Here we show that perceptual judgements affect only saccadic eye movements when simultaneous judgement-irrelevant saccades and reaches are made, demonstrating that perceptual judgement-related signals continuously flow into the oculomotor system alone when multiple judgement-irrelevant actions are performed. This suggests that saccades are useful for making inferences about covert perceptual decisions, even when the actions are not tied to decision making.
Affiliation(s)
- Shota Furukawa
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
5
Bloch C, Tepest R, Jording M, Vogeley K, Falter-Wagner CM. Intrapersonal synchrony analysis reveals a weaker temporal coherence between gaze and gestures in adults with autism spectrum disorder. Sci Rep 2022; 12:20417. PMID: 36437262; PMCID: PMC9701674; DOI: 10.1038/s41598-022-24605-8.
Abstract
The temporal encoding of nonverbal signals within individuals, referred to as intrapersonal synchrony (IaPS), is an implicit process and essential feature of human communication. IaPS is thought to be a marker of nonverbal behavior characteristics in autism spectrum disorder (ASD), but direct empirical evidence has been lacking. The aim of this study was to quantify IaPS in adults during an experimentally controlled real-life interaction task. A sample of adults with a confirmed ASD diagnosis and a matched sample of typically-developed adults were tested (N = 48). Participants were required to indicate the appearance of a target invisible to their interaction partner nonverbally through gaze and pointing gestures. Special eye-tracking software allowed automated extraction of temporal delays between nonverbal signals and their intrapersonal variability with millisecond temporal resolution as indices for IaPS. Likelihood ratio tests of multilevel models showed enlarged delays between nonverbal signals in ASD. Larger delays were associated with greater intrapersonal variability in delays. The results provide a quantitative constraint on nonverbal temporality in typically-developed adults and suggest weaker temporal coherence between nonverbal signals in adults with ASD. The results provide a potential diagnostic marker and inspire predictive coding theories about the role of IaPS in interpersonal synchronization processes.
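The delay and variability indices described here lend themselves to a compact implementation. Below is a minimal Python sketch assuming per-trial gaze and pointing onset times in milliseconds; the function name and example numbers are hypothetical, and the study's group comparison (likelihood ratio tests of multilevel models) would sit on top of such per-participant indices.

```python
# Hedged sketch of IaPS indices: per-trial gaze-to-gesture delays and
# their within-person variability. Event times are hypothetical.
import numpy as np

def iaps_indices(gaze_onsets_ms, point_onsets_ms):
    """Return (mean delay, SD of delay) across trials for one participant.
    Positive delays mean the pointing gesture followed the gaze shift."""
    delays = np.asarray(point_onsets_ms, float) - np.asarray(gaze_onsets_ms, float)
    return delays.mean(), delays.std(ddof=1)

# Example: a participant whose gesture lags gaze by roughly 250 ms
gaze = [1000, 2450, 3980, 5210]
point = [1230, 2760, 4190, 5490]
mean_delay, delay_sd = iaps_indices(gaze, point)
print(f"mean gaze-to-point delay: {mean_delay:.0f} ms, variability (SD): {delay_sd:.0f} ms")
```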
Affiliation(s)
- Carola Bloch
- Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, Nussbaumstraße 7, 80336, Munich, Germany.
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany.
- Ralf Tepest
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Mathis Jording
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
- Kai Vogeley
- Department of Psychiatry and Psychotherapy, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich, Jülich, Germany
- Christine M Falter-Wagner
- Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Clinic, Ludwig-Maximilians-University, Nussbaumstraße 7, 80336, Munich, Germany
6
Caruana N, Inkley C, Nalepka P, Kaplan DM, Richardson MJ. Gaze facilitates responsivity during hand coordinated joint attention. Sci Rep 2021; 11:21037. PMID: 34702900; PMCID: PMC8548595; DOI: 10.1038/s41598-021-00476-3.
Abstract
The coordination of attention between individuals is a fundamental part of everyday human social interaction. Previous work has focused on the role of gaze information for guiding responses during joint attention episodes. However, in many contexts, hand gestures such as pointing provide another valuable source of information about the locus of attention. The current study developed a novel virtual reality paradigm to investigate the extent to which initiator gaze information is used by responders to guide joint attention responses in the presence of more visually salient and spatially precise pointing gestures. Dyads were instructed to use pointing gestures to complete a cooperative joint attention task in a virtual environment. Eye and hand tracking enabled real-time interaction and provided objective measures of gaze and pointing behaviours. Initiators displayed gaze behaviours that were spatially congruent with the subsequent pointing gestures. Responders overtly attended to the initiator’s gaze during the joint attention episode. However, both these initiator and responder behaviours were highly variable across individuals. Critically, when responders did overtly attend to their partner’s face, their saccadic reaction times were faster when the initiator’s gaze was also congruent with the pointing gesture, and thus predictive of the joint attention location. These results indicate that humans attend to and process gaze information to facilitate joint attention responsivity, even in contexts where gaze information is implicit to the task and joint attention is explicitly cued by more spatially precise and visually salient pointing gestures.
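As a rough illustration of the saccadic reaction time measure reported above, here is a minimal Python sketch using a simple velocity-threshold criterion. The threshold, sampling rate, and function name are assumptions for illustration, not the paper's actual analysis pipeline.

```python
# Hedged sketch: saccadic reaction time from eye-tracker samples using a
# velocity threshold (30 deg/s is a common but assumed criterion).
import numpy as np

def saccadic_rt(gaze_deg: np.ndarray, sample_hz: float, cue_index: int,
                vel_thresh_deg_s: float = 30.0) -> float | None:
    """Return ms from cue onset to the first sample whose eye velocity
    exceeds the threshold, or None if no saccade is detected."""
    velocity = np.abs(np.gradient(gaze_deg)) * sample_hz   # deg/s
    after_cue = np.flatnonzero(velocity[cue_index:] > vel_thresh_deg_s)
    if after_cue.size == 0:
        return None
    return after_cue[0] * 1000.0 / sample_hz

# Toy trace: fixation, then a 10-degree saccade ~200 ms after the cue
hz = 500.0
trace = np.zeros(500)
trace[200:220] = np.linspace(0.0, 10.0, 20)  # the saccade itself
trace[220:] = 10.0
print(saccadic_rt(trace, hz, cue_index=100))  # -> 200.0 (ms)
```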
Affiliation(s)
- Nathan Caruana
- Department of Cognitive Science, Macquarie University, 16 University Ave, Sydney, NSW, 2109, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
- Christine Inkley
- Department of Cognitive Science, Macquarie University, 16 University Ave, Sydney, NSW, 2109, Australia
- Patrick Nalepka
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
- Department of Psychology, Macquarie University, Sydney, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, Australia
- David M Kaplan
- Department of Cognitive Science, Macquarie University, 16 University Ave, Sydney, NSW, 2109, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, Australia
- Michael J Richardson
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
- Department of Psychology, Macquarie University, Sydney, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, Australia
7
Fooken J, Kreyenmeier P, Spering M. The role of eye movements in manual interception: a mini-review. Vision Res 2021; 183:81-90. PMID: 33743442; DOI: 10.1016/j.visres.2021.02.007.
Abstract
When we catch a moving object in mid-flight, our eyes and hands are directed toward the object. Yet, the functional role of eye movements in guiding interceptive hand movements is not yet well understood. This review synthesizes emergent views on the importance of eye movements during manual interception with an emphasis on laboratory studies published since 2015. We discuss the role of eye movements in forming visual predictions about a moving object, and for enhancing the accuracy of interceptive hand movements through feedforward (extraretinal) and feedback (retinal) signals. We conclude by proposing a framework that defines the role of human eye movements for manual interception accuracy as a function of visual certainty and object motion predictability.
Affiliation(s)
- Jolande Fooken
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada.
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada.
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, Canada
8
Battich L, Fairhurst M, Deroy O. Coordinating attention requires coordinated senses. Psychon Bull Rev 2020; 27:1126-1138.
Abstract
From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other's mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention, in all cases? Besides cases where visual information is missing, we show how combining it with other senses can be helpful, and even necessary for certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy and theoretical debates on the construction of a shared world.
Affiliation(s)
- Lucas Battich
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany.
- Graduate School of Systemic Neurosciences, Ludwig Maximilian University Munich, Munich, Germany.
- Merle Fairhurst
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany
- Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany
- Institut für Psychologie, Fakultät für Humanwissenschaften, Universität der Bundeswehr München, Munich, Germany
- Ophelia Deroy
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany
- Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany
- Institute of Philosophy, School of Advanced Study, University of London, London, UK
9
Heuer A, Ohl S, Rolfs M. Memory for action: a functional view of selection in visual working memory. Vis Cogn 2020. DOI: 10.1080/13506285.2020.1764156.
Affiliation(s)
- Anna Heuer
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Sven Ohl
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Martin Rolfs
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
10
Jana S, Murthy A. Spatiotemporal coupling between eye and hand trajectories during curved hand movements. J Mot Behav 2020; 53:47-58. PMID: 32046608; DOI: 10.1080/00222895.2020.1723481.
Abstract
Eye and hand movements are often made in isolation, but for reaching movements they are usually coupled. Despite this, evidence for spatial coupling between the eye and hand effectors is mixed and has usually been restricted to straight-line movements, while real-world hand movements have complex trajectories. Here, using a novel obstacle avoidance task in which an obstacle appeared on a small fraction of trials, we establish a stronger link between the saccade and hand trajectory during more naturalistic curved hand trajectories. We illustrate that the hand trajectory was coupled to the end-point of the saccade executed just prior to hand movement onset. Interestingly, while the saccade end-point was related to whether the hand trajectory followed a straight or a curved path, the y-component of the saccade end-point was related to whether the hand took a path passing over or below the obstacle. Further, we observed a relationship between saccades and hand sub-movements: the number and timing of saccades were related to the number of hand velocity peaks. These results illustrate a robust spatiotemporal and kinematic coupling between saccades and complex hand movement trajectories, suggesting a shared kinematic representation underlying eye-hand movements.
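The saccade/velocity-peak comparison suggests a simple kinematic analysis. Here is a minimal Python sketch of counting hand sub-movements as velocity peaks, assuming SciPy is available; the threshold and toy speed profile are illustrative, not the study's parameters.

```python
# Hedged sketch: counting hand sub-movements as peaks in the speed profile.
import numpy as np
from scipy.signal import find_peaks

def count_submovements(hand_speed: np.ndarray, min_peak_speed: float = 5.0) -> int:
    """Number of hand velocity peaks (sub-movements) above a speed floor."""
    peaks, _ = find_peaks(hand_speed, height=min_peak_speed)
    return len(peaks)

# Toy speed profile with two sub-movements (e.g., curving around an obstacle)
t = np.linspace(0, 1, 1000)
speed = 20 * np.exp(-((t - 0.3) / 0.08) ** 2) + 12 * np.exp(-((t - 0.7) / 0.08) ** 2)
print(count_submovements(speed))  # -> 2
```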
Affiliation(s)
- Sumitash Jana
- Centre for Neuroscience, Indian Institute of Science, Bangalore, India
- Aditya Murthy
- Centre for Neuroscience, Indian Institute of Science, Bangalore, India
11
Independent selection of eye and hand targets suggests effector-specific attentional mechanisms. Sci Rep 2018; 8:9434. PMID: 29930389; PMCID: PMC6013452; DOI: 10.1038/s41598-018-27723-4.
Abstract
Both eye and hand movements bind visual attention to their target locations during movement preparation. However, it remains contentious whether eye and hand targets are selected jointly by a single selection system, or individually by independent systems. To unravel the controversy, we investigated the deployment of visual attention – a proxy of motor target selection – in coordinated eye-hand movements. Results show that attention builds up in parallel both at the eye and the hand target. Importantly, the allocation of attention to one effector’s motor target was not affected by the concurrent preparation of the other effector’s movement at any time during movement preparation. This demonstrates that eye and hand targets are represented in separate, effector-specific maps of action-relevant locations. The eye-hand synchronisation that is frequently observed on the behavioral level must emerge from mutual influences of the two effector systems at later, post-attentional processing stages.
12
Bakker RS, Selen LPJ, Medendorp WP. Reference frames in the decisions of hand choice. J Neurophysiol 2018; 119:1809-1817. DOI: 10.1152/jn.00738.2017.
Abstract
For the brain to decide on a reaching movement, it needs to select which hand to use. A number of body-centered factors affect this decision, such as the anticipated movement costs of each arm, recent choice success, handedness, and task demands. While the position of each hand relative to the target is also known to be an important spatial factor, it is unclear which reference frames coordinate the spatial aspects in the decisions of hand choice. Here we tested the role of gaze- and head-centered reference frames in a hand selection task. With their head and gaze oriented in different directions, we measured hand choice of 19 right-handed subjects instructed to make unimanual reaching movements to targets at various directions relative to their body. Using an adaptive procedure, we determined the target angle that led to equiprobable right/left hand choices. When gaze remained fixed relative to the body, this balanced target angle shifted systematically with head orientation, and when head orientation remained fixed, this choice measure shifted with gaze. These results suggest that a mixture of head- and gaze-centered reference frames is involved in the spatially guided decisions of hand choice, perhaps to flexibly bind this process to the mechanisms of target selection.

New & Noteworthy: Decisions of target and hand choice are fundamental aspects of human reaching movements. While the reference frames involved in target choice have been identified, it is unclear which reference frames are involved in hand selection. We tested the role of gaze- and head-centered reference frames in a hand selection task. Findings emphasize the role of both spatial reference frames in the decisions of hand choice, in addition to known body-centered computations such as anticipated movement costs and handedness.
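The adaptive procedure can be pictured with a simple staircase. The sketch below, in Python, simulates a 1-up/1-down rule converging on the angle that yields equiprobable hand choices; the step sizes, trial count, and simulated observer are assumptions, not the authors' exact procedure.

```python
# Hedged sketch: a 1-up/1-down staircase estimating the "balanced" target
# angle at which right- and left-hand choices are equally likely.
import math
import random

def staircase(choose_right, start_deg=0.0, step_deg=4.0, n_trials=60):
    """Move the target left after a right-hand choice and right after a
    left-hand choice; halve the step on reversals. Converges on ~50%."""
    angle, last_choice, estimates = start_deg, None, []
    for _ in range(n_trials):
        right = choose_right(angle)
        if last_choice is not None and right != last_choice:
            step_deg = max(step_deg / 2.0, 0.5)   # reversal -> finer steps
        angle += -step_deg if right else step_deg
        last_choice = right
        estimates.append(angle)
    return sum(estimates[-20:]) / 20.0            # average the late trials

def observer(angle_deg):
    # Simulated subject whose equiprobable point sits at +8 degrees
    p_right = 1.0 / (1.0 + math.exp(-(angle_deg - 8.0) / 3.0))
    return random.random() < p_right

print(f"estimated balanced angle: {staircase(observer):.1f} deg")
```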
Affiliation(s)
- Romy S. Bakker
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Luc P. J. Selen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- W. Pieter Medendorp
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
13
Falciati L, Maioli C. Dynamic changes in upper-limb corticospinal excitability during a 'pro-/anti-saccade' double-choice task. Front Hum Neurosci 2018; 11:624. PMID: 29326576; PMCID: PMC5741690; DOI: 10.3389/fnhum.2017.00624.
Abstract
Under natural behavioral conditions, visually guided eye movements are linked to direction-specific modulations of cortico-spinal system (CSS) excitability in upper-limb muscles, even in the absence of a manual response. These excitability changes have been shown to be compatible with a covert motor program encoding a manual movement toward the same target as the eyes. The aim of this study is to investigate whether this implicit oculo-manual coupling is enforced following every saccade execution or depends on the behavioral context. Twenty-two healthy young adults participated in the study. Single-pulse transcranial magnetic stimulation was applied to the motor cortex at nine different time epochs during a double-choice eye task, in which the decision to execute a prosaccade or an antisaccade was made on the color of a peripheral visual cue. By analyzing the amplitude of the motor evoked potentials (MEPs) in three distal muscles of the resting upper limb, a facilitation peak of CSS excitability was found in two of them at 120 ms before the eyes began to move. Furthermore, a long-lasting, generalized reduction in corticomotor excitability developed following the eye response. Finally, a quite large modulation of MEP amplitude, depending on the direction of the saccade, was observed only in the first dorsal interosseous muscle, in a narrow time window at about 150 ms before the eye movement, irrespective of the type of ocular response (pro-/anti-saccade). This change in CSS excitability is not tied to the timing of the occurrence of the visual cue but, instead, appears to be tightly time-locked to saccade onset. The observed excitability changes differ in many respects from those previously reported with different behavioral paradigms. A main finding of our study is that the implicit coupling between eye and hand motor systems is contingent upon the particular motor set determined by the cognitive aspects of the performed oculomotor task. In particular, the direction-specific modulation in CSS excitability described in this study appears to be related to perceptual and decision-making processes rather than representing an implicit upper-limb motor program coupled to saccade execution.
Affiliation(s)
- Luca Falciati
- Dipartimento di Scienze Cliniche e Sperimentali, Università degli Studi di Brescia, Brescia, Italy
- Claudio Maioli
- Dipartimento di Scienze Cliniche e Sperimentali, Università degli Studi di Brescia, Brescia, Italy
14
Rand MK, Rentsch S. Eye-hand coordination during visuomotor adaptation: effects of hemispace and joint coordination. Exp Brain Res 2017; 235:3645-3661. PMID: 28900673; DOI: 10.1007/s00221-017-5088-z.
Abstract
We previously examined adaptive changes of eye-hand coordination during learning of a visuomotor rotation. Gazes during reaching movements were initially directed to a feedback cursor in early practice, but were gradually shifted toward the target with more practice, indicating an emerging gaze anchoring behavior. This adaptive pattern reflected a functional change of gaze control from exploring the cursor-hand relation to guiding the hand to the task goal. The present study further examined the effects of hemispace and joint coordination associated with target directions on this behavior. Young adults performed center-out reaching movements to four targets with their right hand on a horizontal digitizer, while looking at a rotated visual feedback cursor on a computer monitor. To examine the effect of hemispace related to visual stimuli, two of the four targets were located in the ipsilateral workspace relative to the hand used, the other two in the contralateral workspace. To examine the effect of hemispace related to manual actions, two of the four targets involved reaches made in the ipsilateral workspace, the other two reaches made in the contralateral workspace. Furthermore, to examine the effect of the complexity of joint coordination, two of the four targets involved a direct path from the start to the target using elbow movements alone (simple), whereas the other two involved both shoulder and elbow movements (complex). The results showed that the gaze anchoring behavior gradually emerged during practice for reaches made in all target directions. The speed of this change was affected mainly by the hemispace related to manual actions, whereas the other two effects were minimal. The gaze anchoring occurred faster for ipsilateral reaches than for contralateral reaches; gazes prior to the gaze anchoring were also directed less at the cursor vicinity and more at the mid-area between the starting point and the target. These results suggest that ipsilateral reaches afford better predictability of the cursor-hand relation under the visuomotor rotation, thereby prompting an earlier functional change of gaze control through practice from reactive to predictive control.
Affiliation(s)
- Miya K Rand
- Leibniz Research Centre for Working Environment and Human Factors, Ardeystraße 67, 44139 Dortmund, Germany
- Sebastian Rentsch
- Leibniz Research Centre for Working Environment and Human Factors, Ardeystraße 67, 44139 Dortmund, Germany
- Department of Sport and Sport Science, Technical University of Dortmund, Otto-Hahn-Straße 3, 44227 Dortmund, Germany
15
Gamble CM, Song JH. Dynamic modulation of illusory and physical target size on separate and coordinated eye and hand movements. J Vis 2017; 17:23. PMID: 28362898; PMCID: PMC5381334; DOI: 10.1167/17.3.23.
Abstract
In everyday behavior, two of the most common visually guided actions, eye and hand movements, can be performed independently but are often synergistically coupled. In this study, we examine whether the same visual representation is used for different stages of saccades and pointing, namely movement preparation and execution, and whether this usage is consistent between independent and naturalistic coordinated eye and hand movements. To address these questions, we used the Ponzo illusion to dissociate the perceived and physical sizes of visual targets and measured the effects on movement preparation and execution for independent and coordinated saccades and pointing. During independent movements, we demonstrated that both physically and perceptually larger targets produced faster preparation for both effectors. Furthermore, participants who showed a greater influence of the illusion on saccade preparation also showed a greater influence on pointing preparation, suggesting that a shared mechanism involved in preparation across effectors is influenced by illusions. However, only physical, but not perceptual, target sizes influenced saccade and pointing execution. When pointing was coordinated with saccades, we observed different dynamics: pointing no longer showed modulation from illusory size, while saccades showed illusion modulation for both preparation and execution. Interestingly, in independent and coordinated movements, the illusion modulated saccade preparation more than pointing preparation, with this effect more pronounced during coordination. These results suggest that a shared mechanism, dominated by the eyes, may underlie visually guided action preparation across effectors. Furthermore, the influence of illusions on action may operate within such a mechanism, leading to dynamic interactions between action modalities based on task demands.
Affiliation(s)
- Christine M Gamble
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
- Joo-Hyun Song
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
- Brown Institute for Brain Science, Brown University, Providence, RI, USA (research.clps.brown.edu/songlab/)
16
Coherent neuronal ensembles are rapidly recruited when making a look-reach decision. Nat Neurosci 2016; 19:327-34. PMID: 26752158; PMCID: PMC4731255; DOI: 10.1038/nn.4210.
Abstract
Selecting and planning actions recruits neurons across many areas of the brain but how ensembles of neurons work together to make decisions is unknown. Temporally-coherent neural activity may provide a mechanism by which neurons coordinate their activity in order to make decisions. If so, neurons that are part of coherent ensembles may predict movement choices before other ensembles of neurons. We recorded neuronal activity in the lateral and medial banks of the intraparietal sulcus (IPS) of the posterior parietal cortex, while monkeys made choices about where to look and reach and decoded the activity to predict the choices. Ensembles of neurons that displayed coherent patterns of spiking activity extending across the IPS, “dual coherent” ensembles, predicted movement choices substantially earlier than other neuronal ensembles. We propose that dual-coherent spike timing reflects interactions between groups of neurons that play an important role in how we make decisions.
17
Maioli C, Falciati L. Covert preparation of a manual response in a 'go'/'no-go' saccadic task is driven by execution of the eye movement and not by visual stimulus occurrence. Front Hum Neurosci 2015; 9:556. PMID: 26483664; PMCID: PMC4591432; DOI: 10.3389/fnhum.2015.00556.
Abstract
It has been recently demonstrated that visually guided saccades are linked to changes in muscle excitability in the relaxed upper limb, which are compatible with a covert motor plan encoding a hand movement toward the gaze target. In this study we investigated whether these excitability changes are time locked to the visual stimulus, as predicted by influential attention models, or are strictly dependent on saccade execution. Single-pulse transcranial magnetic stimulation was applied to the motor cortex at eight different time delays during a 'go'/'no-go' task, which involved overt or covert orienting of attention. By analyzing the time course of excitability in three hand muscles, synchronized with the onset of either the attentional cue or the eye movement, we demonstrated that side- and muscle-specific excitability changes were strictly time locked to the saccadic response and were not correlated to the onset of the visual attentive stimulus. Furthermore, muscle excitability changes were absent following a covert shift of attention. We conclude that a sub-threshold manual motor plan is automatically activated by the saccade decision-making process, as part of a covert eye-hand coordination program. We found no evidence for a representation of spatial attention within the upper limb motor map.
Affiliation(s)
- Claudio Maioli
- Department of Clinical and Experimental Sciences and National Institute of Neuroscience, University of Brescia, Brescia, Italy
18
Action and perception are temporally coupled by a common mechanism that leads to a timing misperception. J Neurosci 2015; 35:1493-504. PMID: 25632126; DOI: 10.1523/JNEUROSCI.2054-14.2015.
Abstract
We move our eyes to explore the world, but visual areas determining where to look next (action) are different from those determining what we are seeing (perception). Whether, or how, action and perception are temporally coordinated is not known. The preparation time course of an action (e.g., a saccade) has been widely studied with the gap/overlap paradigm with temporal asynchronies (TA) between peripheral target onset and fixation point offset (gap, synchronous, or overlap). However, whether the subjects perceive the gap or overlap, and when they perceive it, has not been studied. We adapted the gap/overlap paradigm to study the temporal coupling of action and perception. Human subjects made saccades to targets with different TAs with respect to fixation point offset and reported whether they perceived the stimuli as separated by a gap or overlapped in time. Both saccadic and perceptual report reaction times changed in the same way as a function of TA. The TA dependencies of the time change for action and perception were very similar, suggesting a common neural substrate. Unexpectedly, in the perceptual task, subjects misperceived lights overlapping by less than ∼100 ms as separated in time (overlap seen as gap). We present an attention-perception model with a map of prominence in the superior colliculus that modulates the stimulus signal's effectiveness in the action and perception pathways. This common source of modulation determines how competition between stimuli is resolved, causes the TA dependence of action and perception to be the same, and causes the misperception.
19
Corbetta D, Thurman SL, Wiener RF, Guan Y, Williams JL. Mapping the feel of the arm with the sight of the object: on the embodied origins of infant reaching. Front Psychol 2014; 5:576. PMID: 24966847; PMCID: PMC4052117; DOI: 10.3389/fpsyg.2014.00576.
Abstract
For decades, the emergence and progression of infant reaching was assumed to be largely under the control of vision. More recently, however, the guiding role of vision in the emergence of reaching has been downplayed. Studies found that young infants can reach in the dark without seeing their hand and that corrections in infants' initial hand trajectories are not the result of visual guidance of the hand, but rather the product of poor movement speed calibration to the goal. As a result, it has been proposed that learning to reach is an embodied process requiring infants to explore proprioceptively different movement solutions, before they can accurately map their actions onto the intended goal. Such an account, however, could still assume a preponderant (or prospective) role of vision, where the movement is being monitored with the scope of approximating a future goal-location defined visually. At reach onset, it is unknown if infants map their action onto their vision, vision onto their action, or both. To examine how infants learn to map the feel of their hand with the sight of the object, we tracked the object-directed looking behavior (via eye-tracking) of three infants followed weekly over an 11-week period throughout the transition to reaching. We also examined where they contacted the object. We find that with some objects, infants do not learn to align their reach to where they look, but rather learn to align their look to where they reach. We propose that the emergence of reaching is the product of a deeply embodied process, in which infants first learn how to direct their movement in space using proprioceptive and haptic feedback from self-produced movement contingencies with the environment. As they do so, they learn to map visual attention onto these bodily centered experiences, not the reverse. We suggest that this early visuo-motor mapping is critical for the formation of visually-elicited, prospective movement control.
Affiliation(s)
- Daniela Corbetta
- Director, Infant Perception-Action Laboratory, Department of Psychology, The University of Tennessee, Knoxville, TN, USA
- Rebecca F. Wiener
- Department of Psychology, The University of Tennessee, Knoxville, TN, USA
- Yu Guan
- Department of Psychology, The University of Tennessee, Knoxville, TN, USA
20
Mooshagian E, Wang C, Ferdoash A, Snyder LH. Movement order and saccade direction affect a common measure of eye-hand coordination in bimanual reaching. J Neurophysiol 2014; 112:730-9. PMID: 24848462; DOI: 10.1152/jn.00234.2014.
Abstract
Studies of visually guided unimanual reaching have established that a saccade usually precedes each reach and that the reaction times (RTs) for the saccade and reach are highly correlated. The correlation of eye and hand RT is commonly taken as a measure of eye-hand coordination and is thought to assist visuospatial guidance of the hand. We asked what happens during a bimanual reach task. As with a unimanual reach, a saccade was executed first. Although latencies were fastest on unimanual trials, eye and hand RT correlation was identical whether just one or both hands reached to a single target. The average correlation was significantly reduced, however, when each hand reached simultaneously to a different target. We considered three factors that might explain the drop. We found that correlation strength depended on which hand reached first and on which hand reached to the same target as the saccade. Surprisingly, these two factors were largely independent, and the identity of the hand, left or right, had little effect. Eye-hand correlation was similar to that seen with unimanual reaching only when the hand that moved to the same target as the saccade was also the first hand to move. Thus both timing as well as spatial pattern are important in determining eye-hand coordination.
Affiliation(s)
- Eric Mooshagian
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri
- Cunguo Wang
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri
- Afreen Ferdoash
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri
- Lawrence H Snyder
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri
21
Yu C, Smith LB. Joint attention without gaze following: human infants and their parents coordinate visual attention to objects through eye-hand coordination. PLoS One 2013; 8:e79659. PMID: 24236151; PMCID: PMC3827436; DOI: 10.1371/journal.pone.0079659.
Abstract
The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners: gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze, but they do so only in highly constrained spatial contexts, because gaze direction is not a spatially precise cue to the visual target and is not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts; rather, infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner.
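One way to picture the "looking at the same object at the same time" measure is as a frame-by-frame comparison of two coded gaze streams. Below is a minimal Python sketch under that assumption; the coding scheme and example data are hypothetical, not the study's actual pipeline.

```python
# Hedged sketch: joint visual attention from two gaze streams coded
# frame-by-frame as object IDs (0 = looking elsewhere).
import numpy as np

def joint_attention_rate(gaze_infant: np.ndarray, gaze_parent: np.ndarray) -> float:
    """Fraction of frames on which both partners look at the same object."""
    same_object = (gaze_infant == gaze_parent) & (gaze_infant != 0)
    return same_object.mean()

# 10 frames, objects coded 1-3; both partners on object 2 for frames 3-6
infant = np.array([1, 1, 2, 2, 2, 2, 2, 3, 0, 0])
parent = np.array([0, 2, 2, 2, 2, 2, 1, 1, 1, 0])
print(f"joint attention on {joint_attention_rate(infant, parent):.0%} of frames")
```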
Affiliation(s)
- Chen Yu
- Department of Psychological and Brain Sciences, Cognitive Science Program, Indiana University Bloomington, Bloomington, Indiana, United States of America
- Linda B. Smith
- Department of Psychological and Brain Sciences, Cognitive Science Program, Indiana University Bloomington, Bloomington, Indiana, United States of America
22
Yttri EA, Wang C, Liu Y, Snyder LH. The parietal reach region is limb specific and not involved in eye-hand coordination. J Neurophysiol 2013; 111:520-32. PMID: 24198328; DOI: 10.1152/jn.00058.2013.
Abstract
Primates frequently reach toward visual targets. Neurons in early visual areas respond to stimuli in the contralateral visual hemifield and without regard to which limb will be used to reach toward that target. In contrast, neurons in motor areas typically respond when reaches are performed using the contralateral limb and with minimal regard to the visuospatial location of the target. The parietal reach region (PRR) is located early in the visuomotor processing hierarchy. PRR neurons are significantly modulated when targets for either limb or eye movement appear, similar to early sensory areas; however, they respond to targets in either visual field, similar to motor areas. The activity could reflect the subject's attentional locus, movement of a specific effector, or a related function, such as coordinating eye-arm movements. To examine the role of PRR in the visuomotor pathway, we reversibly inactivated PRR. Inactivation effects were specific to contralateral limb movements, leaving ipsilateral limb and saccadic movements intact. Neither visual hemifield bias nor visual attention deficits were observed. Thus our results are consistent with a motoric rather than visual organization in PRR, despite its early location in the visuomotor pathway. We found no effects on the temporal coupling of coordinated saccades and reaches, suggesting that this mechanism lies downstream of PRR. In sum, this study clarifies the role of PRR in the visuomotor hierarchy: despite its early position, it is a limb-specific area influencing reach planning and is positioned upstream from an active eye-hand coordination-coupling mechanism.
Affiliation(s)
- Eric A Yttri
- Department of Anatomy and Neurobiology, Washington University School of Medicine, St. Louis, Missouri
23
Falciati L, Gianesini T, Maioli C. Covert oculo-manual coupling induced by visually guided saccades. Front Hum Neurosci 2013; 7:664. PMID: 24133442; PMCID: PMC3794306; DOI: 10.3389/fnhum.2013.00664.
Abstract
Hand pointing to objects under visual guidance is one of the most common motor behaviors in everyday life. In natural conditions, gaze and arm movements are commonly aimed at the same target, and the accuracy of both systems is considerably enhanced if eye and hand move together. Evidence supports the viewpoint that gaze and limb control systems are not independent but at least partially share a common neural controller. The aim of the present study was to verify whether saccade execution induces excitability changes in the upper-limb corticospinal system (CSS), even in the absence of a manual response. This effect would provide evidence for the existence of a common drive for ocular and arm motor systems during fast aiming movements. Single-pulse TMS was applied to the left motor cortex of 19 subjects during a task involving visually guided saccades, and motor evoked potentials (MEPs) induced in hand and wrist muscles of the contralateral relaxed arm were recorded. Subjects had to make visually guided saccades to one of 6 positions along the horizontal meridian (±5°, ±10°, or ±15°). During each trial, TMS was randomly delivered at one of 3 different time delays: shortly after the end of the saccade, or 300 or 540 ms after saccade onset. Fast eye movements toward a peripheral target were accompanied by changes in upper-limb CSS excitability. MEP amplitude was highest immediately after the end of the saccade and gradually decreased at longer TMS delays. In addition to the change in overall CSS excitability, MEPs were specifically modulated in different muscles, depending on the target position and the TMS delay. By applying a simple model of a manual pointing movement, we demonstrated that the observed changes in CSS excitability are compatible with the facilitation of an arm motor program for a movement aimed at the same target as the gaze. These results provide evidence in favor of the existence of a common drive for both eye and arm motor systems.
Affiliation(s)
- Luca Falciati
- Department of Clinical and Experimental Sciences and National Institute of Neuroscience, University of Brescia, Brescia, Italy
24
Carey DP, Liddle J. Hemifield or hemispace: what accounts for the ipsilateral advantages in visually guided aiming? Exp Brain Res 2013; 230:323-31. PMID: 23955102; DOI: 10.1007/s00221-013-3657-3.
Abstract
Aiming movements to targets presented on the same side as the reaching limb are faster and more accurate than movements made across the body. These advantages are typically attributed to within-hemisphere sensorimotor control. However, contrary to the within- versus between-hemisphere model, we have shown that some of these advantages tend to go with the side of the movement, rather than the side of the target (Carey et al. Exp Brain Res 112:496-504, 1996; Carey and Otto-de Haart Neuropsychologia 39:894, 2001). Barthélémy and Boulinguez (Exp Brain Res 147:305-312, 2002) acknowledge that our biomechanical account fits data for post-onset movement parameters such as peak velocity and duration, yet they report evidence for some within- versus between-hemisphere contributions to reaction time (RT) advantages. To examine a possible difference between early and late movement kinematics fitting these alternative models, we dissociated field and space in a different way, which required arm movements with differential inertial consequences, as well as unpredictability of target location in terms of visual field. The data suggest that visual field may contribute some of the variance to hemispatial effects, but only for the right hand. In a second experiment, we used an antipointing task to examine hemispatial versus visual field effects on RTs and to revisit the possible hand difference identified in experiment 1. We found that hemispace accounted for all of the ipsilateral advantages, including RT, for both right and left hands. Results are discussed in terms of the computational requirements of eye-hand coordination in relatively unconstrained conditions.
Affiliation(s)
- David P Carey
- Perception, Action and Memory Research Group, School of Psychology, Bangor University, Bangor, LL57 2AS, UK
|
25
|
Eye-Hand Coordination in Children with High Functioning Autism and Asperger’s Disorder Using a Gap-Overlap Paradigm. J Autism Dev Disord 2012; 43:841-50. [DOI: 10.1007/s10803-012-1623-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
26
|
Abstract
This paper presents two methods that we applied in our research to record infant gaze in the context of goal-oriented actions using different eye-tracking devices: head-mounted and remote eye tracking. For each type of eye-tracking system, we discuss its advantages and disadvantages, describe the particular experimental setups we used to study infant looking and reaching, and explain how we used and synchronized these systems with other sources of data collection (video recordings and motion capture) in order to analyze gaze and movements directed toward 3D objects within a common time frame. Finally, for each method, we briefly present some results from our studies to illustrate the different levels of analysis that may be carried out with these different types of eye-tracking devices. These examples aim to highlight some of the novel questions that may be addressed using eye-tracking in the context of goal-directed actions.
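The synchronization step described here is essentially a timestamp-alignment problem. Below is a minimal, hypothetical sketch (not the authors' pipeline) of aligning eye-tracker samples to motion-capture frames via a shared sync event; the sampling rates and clock offset are assumptions:

```python
import numpy as np

# Hypothetical streams: a 60 Hz eye tracker and a 120 Hz motion-capture
# system, each timestamped on its own clock, with a shared sync event
# (e.g., an LED flash) visible to both.
gaze_t = np.arange(0, 10, 1 / 60) + 0.137  # eye-tracker clock, offset in s
gaze_x = np.sin(gaze_t)                    # fake horizontal gaze signal
mocap_t = np.arange(0, 10, 1 / 120)        # mocap clock is the reference
sync_offset = 0.137                        # offset measured at the sync event

# Bring gaze onto the mocap timeline, then resample at mocap frame times.
gaze_t_aligned = gaze_t - sync_offset
gaze_at_mocap = np.interp(mocap_t, gaze_t_aligned, gaze_x)
print(gaze_at_mocap[:5])
```

Resampling the slower stream at the faster stream's frame times keeps every motion-capture sample paired with an interpolated gaze estimate, which is one reasonable design choice when gaze and movement must share a common time frame.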
|
27
|
|
28
|
Reyes-Puerta V, Philipp R, Lindner W, Hoffmann KP. Neuronal activity in the superior colliculus related to saccade initiation during coordinated gaze-reach movements. Eur J Neurosci 2011; 34:1966-82. [DOI: 10.1111/j.1460-9568.2011.07911.x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
29
|
|
30
|
The role of saccades in multitasking: towards an output-related view of eye movements. PSYCHOLOGICAL RESEARCH 2011; 75:452-65. [DOI: 10.1007/s00426-011-0352-5] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2010] [Accepted: 06/02/2011] [Indexed: 10/18/2022]
|
31
|
Aasa U, Jensen BR, Sandfeld J, Richter H, Lyskov E, Crenshaw AG. The impact of object size and precision demands on fatigue during computer mouse use. ACTA ACUST UNITED AC 2011. [DOI: 10.3109/14038196.2011.583269] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
|
32
|
Continuous visual control of interception. Hum Mov Sci 2011; 30:475-94. [PMID: 21353717 DOI: 10.1016/j.humov.2010.12.007] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2010] [Revised: 11/29/2010] [Accepted: 12/18/2010] [Indexed: 11/21/2022]
Abstract
People generally try to keep their eyes on a moving target that they intend to catch or hit. In the present study we first examined how important it is to do so. We did this by designing two interception tasks that promote different eye movements. In both tasks it was important to be accurate relative to both the moving target and the static environment. We found that performance was more variable in relation to the structure that was not fixated. This suggests that the resolution of visual information that is gathered during the movement is important for continuously improving predictions about critical aspects of the task, such as anticipating where the target will be at some time in the future. If so, variability in performance should increase if the target briefly disappears from view just before being hit, even if the target moves completely predictably. We demonstrate that it does, indicating that new visual information is used to improve precision throughout the movement.
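A toy numerical illustration of the occlusion argument (noise magnitudes are invented, not the study's data): if endpoint precision depends on visual updates throughout the movement, removing vision near the end should inflate endpoint variability:

```python
import numpy as np

# If interception relies on continuously updated visual estimates, hiding the
# target for the final part of the movement forces extrapolation and should
# inflate endpoint variability. All noise magnitudes are illustrative.
rng = np.random.default_rng(2)

def endpoint_error(occluded, n=2000):
    visual_noise = rng.normal(0, 2.0, n)   # mm, with vision to the end
    extrapolation = rng.normal(0, 5.0, n)  # mm, error accrued while occluded
    return visual_noise + (extrapolation if occluded else 0.0)

for occluded in (False, True):
    print(f"occluded={occluded}: endpoint SD = {endpoint_error(occluded).std():.1f} mm")
```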
|
33
|
Jonikaitis D, Deubel H. Independent Allocation of Attention to Eye and Hand Targets in Coordinated Eye-Hand Movements. Psychol Sci 2011; 22:339-47. [DOI: 10.1177/0956797610397666] [Citation(s) in RCA: 80] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
When reaching for objects, people frequently look where they reach. This raises the question of whether the targets for the eye and hand in concurrent eye and hand movements are selected by a unitary attentional system or by independent mechanisms. We used the deployment of visual attention as an index of the selection of movement targets and asked observers to reach and look to either the same location or separate locations. Results show that during the preparation of coordinated movements, attention is allocated in parallel to the targets of a saccade and a reaching movement. Attentional allocations for the two movements interact synergistically when both are directed to a common goal. Delaying the eye movement delays the attentional shift to the saccade target while leaving attentional deployment to the reach target unaffected. Our findings demonstrate that attentional resources are allocated independently to the targets of eye and hand movements and suggest that the goals for these effectors are selected by separate attentional mechanisms.
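The attentional index used in studies of this kind can be summarized as a discrimination-accuracy benefit at each probed location; the sketch below uses invented accuracies arranged to mirror the reported pattern:

```python
# Discrimination accuracy probed at the saccade goal, the reach goal, a shared
# goal, and neutral locations indexes where attention is allocated. Values are
# invented to mirror the abstract's pattern, including the synergistic boost.
accuracy = {
    "saccade goal": 0.82,
    "reach goal": 0.80,
    "shared goal": 0.90,  # boost when both movements share a target
    "neutral": 0.55,
}
baseline = accuracy["neutral"]
for loc, acc in accuracy.items():
    print(f"{loc:>12}: attention benefit = {acc - baseline:+.2f}")
```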
|
34
|
Reyes-Puerta V, Philipp R, Lindner W, Hoffmann KP. Role of the Rostral Superior Colliculus in Gaze Anchoring During Reach Movements. J Neurophysiol 2010; 103:3153-66. [DOI: 10.1152/jn.00989.2009] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
When reaching for an object, primates usually look at their target before touching it with the hand. This gaze movement prior to the arm movement allows target fixation, which is usually prolonged until the target is reached. In this manner, a stable image of the object is provided on the fovea during the reach, which is crucial for guiding the final part of the hand trajectory by visual feedback. Here we investigated a neural substrate possibly responsible for this behavior. In particular we tested the influence of reaching movements on neurons recorded at the rostral pole of the superior colliculus (rSC), an area classically related to fixation. Most rSC neurons showed a significant increase in their activity during reaching. Moreover, this increase was particularly high when the reaching movements were preceded by corresponding saccades to the targets to be reached, probably revealing a stronger coupling of the oculo-manual neural system during such a natural task. However, none of the parameters tested—including movement kinematics and target location—was found to be closely related to the observed increase in neural activity. Thus the increase in activity during reaching was found to be rather nonspecific except for its dependence on whether the reach was produced in isolation or in combination with a gaze movement. These results identify the rSC as a neural substrate sufficient for gaze anchoring during natural reaching movements, placing its activity at the core of the neural system dedicated to eye-hand coordination.
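The core single-neuron comparison reads as reach-alone versus reach-plus-saccade firing rates; here is a hypothetical sketch of that contrast with a permutation test, using simulated spike rates rather than the study's recordings:

```python
import numpy as np

# Simulated rSC firing rates (spikes/s) for reaches made in isolation versus
# reaches preceded by a saccade to the same target. Rates are invented.
rng = np.random.default_rng(3)
reach_alone = rng.poisson(lam=25, size=40)
reach_with_saccade = rng.poisson(lam=38, size=40)

observed = reach_with_saccade.mean() - reach_alone.mean()

# Permutation test on the mean difference.
pooled = np.concatenate([reach_alone, reach_with_saccade])
perm_diffs = []
for _ in range(2000):
    shuffled = rng.permutation(pooled)
    perm_diffs.append(shuffled[40:].mean() - shuffled[:40].mean())
p = np.mean(np.abs(perm_diffs) >= abs(observed))
print(f"rate increase with saccade: {observed:.1f} spikes/s, permutation p = {p:.3f}")
```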
Affiliation(s)
- Vicente Reyes-Puerta
- Faculty of Biology and Biotechnology and
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
- Klaus-Peter Hoffmann
- Faculty of Biology and Biotechnology and
- International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany
|
35
|
Dancause N, Schieber MH. The impact of head direction on lateralized choices of target and hand. Exp Brain Res 2010; 201:821-35. [PMID: 20012538 PMCID: PMC2840061 DOI: 10.1007/s00221-009-2097-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2009] [Accepted: 11/13/2009] [Indexed: 10/20/2022]
Abstract
We examined choices made by monkeys performing a task in which two food-well targets were positioned on either side of the monkey, and LEDs provided instructions on hand use and food target availability. We have previously reported that when gaze and head direction were unrestricted, lateralized choices were biased primarily by hand preference and secondarily by a preference to retrieve a target ipsilateral to the preferred hand. Here, we used a similar behavioral paradigm, but now during trial instructions the monkeys were required to maintain head direction aimed toward a left, a center, or a right fixation LED. When a lateralized head direction was required during presentation of the instructional cues, monkeys were more likely to choose the hand and target ipsilateral to the head direction. Lateralized head direction more strongly biased the monkeys' choice of hand than their choice of target, but hand preference produced even stronger bias on target choices than did head direction. Although target cues were presented before hand cues, our data indicate that target and hand choices were made interactively. We also found that the monkeys' choices were better correlated with their success rate for particular combinations of hand and target than with movement times.
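One way to express the reported choice bias is as a shift in the log-odds of choosing a given hand across head-direction conditions; the sketch below uses invented choice proportions arranged to match the direction of the reported effect:

```python
import numpy as np

# Probability of choosing the right hand as a function of required head
# direction. Proportions are invented to mirror the reported pattern
# (ipsilateral head direction biases hand choice).
head_dirs = ["left", "center", "right"]
p_right_hand = {"left": 0.35, "center": 0.60, "right": 0.85}  # fake data

# Express the bias as log-odds relative to the center condition.
center_logit = np.log(p_right_hand["center"] / (1 - p_right_hand["center"]))
for hd in head_dirs:
    p = p_right_hand[hd]
    logit = np.log(p / (1 - p))
    print(f"head {hd:>6}: P(right hand) = {p:.2f}, bias = {logit - center_logit:+.2f} log-odds")
```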
Affiliation(s)
- Numa Dancause
- Département de Physiologie, Université de Montréal, C.P. 6128, Succursale Centre-ville, Montreal, QC, H3C 3J7, Canada.
|
36
|
Song JH, McPeek RM. Eye-hand coordination during target selection in a pop-out visual search. J Neurophysiol 2009; 102:2681-92. [PMID: 19726722 DOI: 10.1152/jn.91352.2008] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We examined the coordination of saccades and reaches in a visual search task in which monkeys were rewarded for reaching to an odd-colored target among distractors. Eye movements were unconstrained, and monkeys typically made one or more saccades before initiating a reach. Target selection for reaching and saccades was highly correlated with the hand and eyes landing near the same final stimulus both for correct reaches to the target and for incorrect reaches to a distractor. Incorrect reaches showed a bias in target selection: they were directed to the distractor in the same hemifield as the target more often than to other distractors. A similar bias was seen in target selection for the initial saccade in correct reaching trials with multiple saccades. We also examined the temporal coupling of saccades and reaches. In trials with a single saccade, a reaching movement was made after a fairly stereotyped delay. In multiple-saccade trials, a reach to the target could be initiated near or even before the onset of the final target-directed saccade. In these trials, the initial trajectory of the reach was often directed toward the fixated distractor before veering toward the target around the time of the final saccade. In virtually all cases, the eyes arrived at the target before the hand, and remained fixated until reach completion. Overall, these results are consistent with flexible temporal coupling of saccade and reach initiation, but fairly tight coupling of target selection for the two types of action.
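The key concordance measure, whether gaze and hand select the same stimulus, can be sketched by classifying each endpoint to its nearest stimulus. The stimulus layout and endpoints below are simulated, not the study's data:

```python
import numpy as np

# Four stimulus positions (deg) and simulated trial endpoints.
rng = np.random.default_rng(4)
stimuli = np.array([[10, 0], [0, 10], [-10, 0], [0, -10]])

def nearest_stimulus(points):
    # Classify each endpoint to the closest stimulus by Euclidean distance.
    d = np.linalg.norm(points[:, None, :] - stimuli[None, :, :], axis=2)
    return d.argmin(axis=1)

n = 200
chosen = rng.integers(0, 4, n)                        # stimulus actually selected
gaze = stimuli[chosen] + rng.normal(0, 1.0, (n, 2))   # gaze lands near it
reach = stimuli[chosen] + rng.normal(0, 1.5, (n, 2))  # reach lands near it too

same = nearest_stimulus(gaze) == nearest_stimulus(reach)
print(f"gaze and hand selected the same stimulus on {same.mean():.0%} of trials")
```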
Affiliation(s)
- Joo-Hyun Song
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore St, San Francisco, CA 94115, USA.
|
37
|
Glazebrook C, Gonzalez D, Hansen S, Elliott D. The role of vision for online control of manual aiming movements in persons with autism spectrum disorders. AUTISM : THE INTERNATIONAL JOURNAL OF RESEARCH AND PRACTICE 2009; 13:411-33. [PMID: 19535469 DOI: 10.1177/1362361309105659] [Citation(s) in RCA: 107] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Recent studies suggest that motor skills are not entirely spared in individuals with an autism spectrum disorder (ASD). Previous reports demonstrated that young adults with ASD were able to land accurately on a target despite increased temporal and spatial variability during their movement. This study explored how a group of adolescents and young adults with an ASD used vision and proprioception to land successfully on one of two targets. Participants performed eye movements and/or manual reaching movements, either with or without vision. Although eye movements were executed in a similar timeframe in both groups, participants with ASD took longer to plan and execute manual reaching movements. They also exhibited significantly greater variability during eye and hand movements, but were able to land on the target regardless of the vision condition. In general, individuals with autism used both vision and proprioception. However, they took considerably more time to perform movements that required greater visual-proprioceptive integration.
|
38
|
Conditions that alter saccadic eye movement latencies and affect target choice to visual stimuli and to electrical stimulation of area V1 in the monkey. Vis Neurosci 2008; 25:661-73. [PMID: 19079822 DOI: 10.1017/s0952523808080863] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In this study, we examined procedures that alter saccadic latencies and target selection to visual stimuli and electrical stimulation of area V1 in the monkey. It has been shown that saccadic eye movement latencies to singly presented visual targets form a bimodal distribution when the fixation spot is turned off a number of milliseconds prior to the appearance of the target (the gap period); the first mode has been termed express saccades and the second regular saccades. When the termination of the fixation spot is coincident with the appearance of the target (0 ms gap), express saccades are rarely generated. We show here that a bimodal distribution of saccadic latencies can also be obtained when an array of visual stimuli is presented prior to the appearance of the visual target, provided the elements of the array overlap spatially with the visual target. The overall latency of the saccadic eye movements elicited by electrical stimulation of area V1 is significantly shortened both when a gap is introduced between the termination of the fixation spot and the stimulation and when an array is presented. However, under these conditions, the distribution of saccadic latencies is unimodal. When two visual targets are presented after the fixation spot, introducing a gap has no effect on which target is chosen. By contrast, when electrical stimulation is paired with a visual target, introducing a gap greatly increases the frequency with which the electrical stimulation site is chosen.
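Bimodality of saccade latencies of the kind described here is often quantified with a two-component mixture model; the following sketch fits one to simulated express/regular latencies (the component means, spreads, and counts are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated latencies: roughly 100 ms express and 180 ms regular saccades.
rng = np.random.default_rng(5)
latencies = np.concatenate([rng.normal(100, 10, 150),   # express (ms)
                            rng.normal(180, 20, 250)])  # regular (ms)

# Fit a two-component Gaussian mixture and read off the two modes.
gmm = GaussianMixture(n_components=2, random_state=0).fit(latencies.reshape(-1, 1))
means = np.sort(gmm.means_.ravel())
print(f"estimated modes: express = {means[0]:.0f} ms, regular = {means[1]:.0f} ms")
```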
|
39
|
Song JH, Takahashi N, McPeek RM. Target selection for visually guided reaching in macaque. J Neurophysiol 2007; 99:14-24. [PMID: 17989239 DOI: 10.1152/jn.01106.2007] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We examined target selection for visually guided reaching in monkeys using a visual search task in which an odd-colored target was presented with distractors. The colors of the target and distractors were randomly switched in each trial between red and green, and the number of distractors was varied. Previous studies of saccades and attention have shown that target selection in this task is easier when a greater number of homogeneous distractors is present. We found that monkeys made fewer reaches to distractors and that reaches to the target were completed more quickly when a greater number of homogeneous distractors was present. When the target was presented in a sparse array of distractors, reaches had longer movement durations and greater trajectory curvature. Reaching errors were directed more often to a distractor adjacent to the target, suggesting a spatially coarse-to-fine progression during target selection. Reaches were also influenced by the properties of trials in the recent past. When the colors of the target and distractors remained the same from trial to trial rather than switching, reaches were completed more quickly and accurately, indicating that color priming across trials facilitates target selection. Moreover, when difficult search trials were randomly intermixed with easier trials without distractors, reach latencies were influenced by the difficulty of previous trials, indicating that motor initiation strategies are gradually adjusted based on accumulated experience. Overall, these results are consistent with reaching results in humans, indicating that the monkey provides a sound model for understanding the neural underpinnings of reach target selection.
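The color-priming effect amounts to a repeat-versus-switch latency contrast across consecutive trials; the sketch below simulates such a trial sequence with an invented repeat benefit:

```python
import numpy as np

# Simulate a red/green trial sequence and split by whether the target color
# repeated from the previous trial. The 25 ms repeat benefit is invented.
rng = np.random.default_rng(6)
n = 500
color = rng.choice(["red", "green"], n)
repeat = np.r_[False, color[1:] == color[:-1]]
latency = 420 - 25 * repeat + rng.normal(0, 30, n)  # reach latency in ms

print(f"repeat trials: {latency[repeat].mean():.0f} ms")
print(f"switch trials: {latency[~repeat].mean():.0f} ms")
```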
Affiliation(s)
- Joo-Hyun Song
- Smith-Kettlewell Eye Research Institute, 2318 Fillmore St., San Francisco, CA 94115, USA.
|