1
Lenglart L, Cartaud A, Quesque F, Sampaio A, Coello Y. Object coding in peripersonal space depends on object ownership. Q J Exp Psychol (Hove) 2023;76:1925-1939. PMID: 36113191. DOI: 10.1177/17470218221128306.
Abstract
Previous studies have shown that objects located in the peripersonal space (PPS) receive enhanced attention compared with objects in the extrapersonal space (EPS). However, most objects in the environment belong to someone in particular, and how object ownership influences object coding in relation to PPS representation is still unclear. In the present study, after having chosen their own mug, participants performed a reachability judgement task on self-owned and other-owned mugs presented at different distances while facing a virtual character. This task was followed, on each trial, by a localisation task in which participants had to indicate where the mug, removed from view, was previously located. The two tasks were separated by a 900-ms visual mask during which the virtual character was unnoticeably shifted by 3° to evaluate the spatial frame of reference used. The results showed that self-owned mugs were processed faster than other-owned mugs, but only when located in the PPS. Furthermore, reachability judgements were biased for self-owned mugs, leading to an extension of the PPS representation, especially for participants with a high score on the fantasy scale of the Interpersonal Reactivity Index (IRI). Finally, the virtual character shift altered localisation performance, but only for the distant mugs, suggesting a progressive shift from an egocentric to an allocentric frame of reference when moving from the PPS to the EPS, irrespective of object ownership. Overall, our data reveal that the representations of ownership and PPS interact to facilitate the processing of manipulable objects, to an extent that depends on individual sensitivity to the social presence of others.
Affiliation(s)
- Lucie Lenglart
- CNRS, UMR 9193, SCALab-Sciences Cognitives et Sciences Affectives, University of Lille, Villeneuve d'Ascq Cedex, France
- Alice Cartaud
- CNRS, UMR 9193, SCALab-Sciences Cognitives et Sciences Affectives, University of Lille, Villeneuve d'Ascq Cedex, France
- François Quesque
- Inserm, CHU Lille, U1172, LilNCog-Lille Neuroscience & Cognition, University of Lille, Villeneuve d'Ascq Cedex, France
- Adriana Sampaio
- Psychological Neuroscience Lab, Psychology Research Centre (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Yann Coello
- CNRS, UMR 9193, SCALab-Sciences Cognitives et Sciences Affectives, University of Lille, Villeneuve d'Ascq Cedex, France
2
Gigliotti MF, Bartolo A, Coello Y. Paying attention to the outcome of others' actions has dissociated effects on observer's peripersonal space representation and exploitation. Sci Rep 2023;13:10178. PMID: 37349516. PMCID: PMC10287734. DOI: 10.1038/s41598-023-37189-8.
Abstract
The representation of peripersonal space (PPS representation) and the selection of motor actions within it (PPS exploitation) are influenced by action outcomes and reward prospects. The present study tested whether observing the outcome of others' actions altered the observer's PPS representation and exploitation. Participants (observers) performed a reachability-judgement task (assessing PPS representation) before and after having observed a confederate (the actor) performing a stimuli-selection task on a touch-screen table. In the stimuli-selection task, the stimuli selected could either yield a reward or not, but the probability of selecting a reward-yielding stimulus was spatially biased, being either 50%, 25%, or 75% in the actor's proximal or distal space. After the observation phase, participants performed the stimuli-selection task themselves (assessing PPS exploitation), but with no spatial bias in the distribution of reward-yielding stimuli. Results revealed an effect of the actors' action outcomes on the observers' PPS representation, which changed according to the distribution of reward-yielding stimuli in the actors' proximal and distal spaces. No significant effect of the actors' action outcomes was found on the observers' PPS exploitation. As a whole, the results suggest dissociated effects of observing the outcome of others' actions on PPS representation and exploitation.
Affiliation(s)
- Maria Francesca Gigliotti
- CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, University of Lille-SHS, Villeneuve d'Ascq, 59000, Lille, France
- Angela Bartolo
- CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, University of Lille-SHS, Villeneuve d'Ascq, 59000, Lille, France
- Yann Coello
- CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, University of Lille-SHS, Villeneuve d'Ascq, 59000, Lille, France.
3
Geers L, Coello Y. The influence of face mask on social spaces depends on the behavioral immune system. Front Neurosci 2022;16:991578. PMID: 36440271. PMCID: PMC9691846. DOI: 10.3389/fnins.2022.991578.
Abstract
Interacting with objects and people requires specifying localized spaces where these interactions can take place. Previous studies suggest that the space for interacting with objects (i.e., the peripersonal space) contributes to defining the space for interacting with people (i.e., personal and interpersonal spaces). Furthermore, situational factors, such as wearing a face mask, have been shown to influence social spaces, but how such factors affect the relation between action and social spaces, and how they are modulated by individual factors, is still not well understood. In this context, the present study investigated the relationship between action peripersonal space and social personal and interpersonal spaces in participants approached by male and female virtual characters who were either wearing or not wearing a face mask. We also measured individual factors related to the behavioral immune system, namely willingness to take risks, perceived infectability, and germ aversion. The results showed that, compared with peripersonal space, personal space was smaller and interpersonal space was larger, but the three spaces were positively correlated. All spaces were modulated by gender, being shorter when participants faced female characters. Personal and interpersonal spaces were reduced with virtual characters wearing a face mask, especially in participants highly averse to risks and germs. Altogether, these findings suggest that the regulation of social spaces depends on the representation of action peripersonal space, but with an extra margin that is modulated by situational and personal factors related to the behavioral immune system.
Affiliation(s)
- Yann Coello
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Lille, France
4
Butz MV. Resourceful Event-Predictive Inference: The Nature of Cognitive Effort. Front Psychol 2022;13:867328. PMID: 35846607. PMCID: PMC9280204. DOI: 10.3389/fpsyg.2022.867328.
Abstract
Pursuing a precise, focused train of thought requires cognitive effort. Even more effort is necessary when more alternatives need to be considered or when the imagined situation becomes more complex. The cognitive resources available to us limit the cognitive effort we can spend. In line with previous work, an information-theoretic, Bayesian brain approach to cognitive effort is pursued: to solve tasks in our environment, our brain needs to invest information, that is, negative entropy, to impose structure, or focus, away from a uniform structure or other task-incompatible, latent structures. To arrive at a more complete formalization of cognitive effort, a resourceful event-predictive inference model (REPI) is introduced, which offers computational and algorithmic explanations of the latent structure of our generative models, the active inference dynamics that unfold within them, and the cognitive effort required to steer those dynamics, for example, to purposefully process sensory signals, decide on responses, and invoke their execution. REPI suggests that we invest cognitive resources to infer preparatory priors, activate responses, and anticipate action consequences. Due to our limited resources, though, the inference dynamics are prone to task-irrelevant distractions. For example, the task-irrelevant side of the imperative stimulus causes the Simon effect and, for similar reasons, we fail to switch optimally between tasks. An actual model implementation simulates such task interactions and offers first estimates of the cognitive effort involved. The approach may be further studied and promises to offer deeper explanations of why we quickly become exhausted from multitasking, how we are influenced by irrelevant stimulus modalities, why we exhibit magnitude interference, and, during social interactions, why we often fail to take the perspective of others into account.
Affiliation(s)
- Martin V. Butz
- Neuro-Cognitive Modeling Group, Department of Computer Science, University of Tübingen, Tübingen, Germany
- Department of Psychology, Faculty of Science, University of Tübingen, Tübingen, Germany
5
Gumbsch C, Adam M, Elsner B, Butz MV. Emergent Goal-Anticipatory Gaze in Infants via Event-Predictive Learning and Inference. Cogn Sci 2021;45:e13016. PMID: 34379329. DOI: 10.1111/cogs.13016.
Abstract
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including the experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative event-predictive, and event boundary predictive models, and, meanwhile, (ii) choose actions with the objective to minimize predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active inference principle chooses actions by aiming at minimizing expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while learning about object manipulations: the model starts fixating the inferred goal already at the start of an observed event after having sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) that is performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
Affiliation(s)
- Christian Gumbsch
- Neuro-Cognitive Modeling Group, Department of Computer Science, University of Tübingen
- Autonomous Learning Group, Max Planck Institute for Intelligent Systems
- Martin V Butz
- Neuro-Cognitive Modeling Group, Department of Computer Science, University of Tübingen
6
Bogdanova OV, Bogdanov VB, Dureux A, Farnè A, Hadj-Bouziane F. The Peripersonal Space in a social world. Cortex 2021;142:28-46. PMID: 34174722. DOI: 10.1016/j.cortex.2021.05.005.
Abstract
The PeriPersonal Space (PPS) has been defined as the space surrounding the body, where physical interactions with elements of the environment take place. As our world is social in nature, recent evidence has revealed complex modulation of PPS representation by social factors. In light of the growing interest in the field, in this review we take a close look at the experimental approaches undertaken to assess the impact of social factors on PPS representation. Our social world also influences the personal space (PS), a concept stemming from social psychology, defined as the space we keep between ourselves and others to avoid discomfort. Here we analytically compare PPS and PS with the aim of understanding if and how they relate to each other. At the behavioral level, the multiplicity of experimental methodologies, whether well-established or novel, leads to somewhat divergent results and interpretations. Beyond behavior, we review physiological and neural signatures of PPS representation to discuss how interoceptive signals could contribute to PPS representation, as well as how these internal signals could shape the neural responses of PPS representation. In particular, by merging exteroceptive information from the environment and internal signals that come from the body, PPS may promote an integrated representation of the self, as distinct from the environment and from others. We put forward that integrating internal and external signals in the brain for the perception of proximal environmental stimuli may also provide us with a better understanding of the processes at play during social interactions. Adopting such an integrative stance may offer novel insights about PPS representation in a social world. Finally, we discuss possible links between PPS research and social cognition, a link that may contribute to the understanding of the intentions and feelings of others around us and promote appropriate social interactions.
Affiliation(s)
- Olena V Bogdanova
- Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France; INCIA, UMR 5287, CNRS, Université de Bordeaux, France.
- Volodymyr B Bogdanov
- Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France; Ecole Nationale des Travaux Publics de l'Etat, Laboratoire Génie Civil et Bâtiment, Vaulx-en-Velin, France
- Audrey Dureux
- Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France
- Alessandro Farnè
- Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France; Hospices Civils de Lyon, Neuro-Immersion Platform, Lyon, France; Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Fadila Hadj-Bouziane
- Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France.
7
Kuperberg GR. Tea With Milk? A Hierarchical Generative Framework of Sequential Event Comprehension. Top Cogn Sci 2021;13:256-298. PMID: 33025701. PMCID: PMC7897219. DOI: 10.1111/tops.12518.
Abstract
To make sense of the world around us, we must be able to segment a continual stream of sensory inputs into discrete events. In this review, I propose that in order to comprehend events, we engage hierarchical generative models that "reverse engineer" the intentions of other agents as they produce sequential action in real time. By generating probabilistic predictions for upcoming events, generative models ensure that we are able to keep up with the rapid pace at which perceptual inputs unfold. By tracking our certainty about other agents' goals and the magnitude of prediction errors at multiple temporal scales, generative models enable us to detect event boundaries by inferring when a goal has changed. Moreover, by adapting flexibly to the broader dynamics of the environment and our own comprehension goals, generative models allow us to optimally allocate limited resources. Finally, I argue that we use generative models not only to comprehend events but also to produce events (carry out goal-relevant sequential action) and to continually learn about new events from our surroundings. Taken together, this hierarchical generative framework provides new insights into how the human brain processes events so effortlessly while highlighting the fundamental links between event comprehension, production, and learning.
Affiliation(s)
- Gina R. Kuperberg
- Department of Psychology and Center for Cognitive Science, Tufts University
- Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
8
Baldwin DA, Kosie JE. How Does the Mind Render Streaming Experience as Events? Top Cogn Sci 2020;13:79-105. DOI: 10.1111/tops.12502.
9
Patané I, Cardinali L, Salemme R, Pavani F, Farnè A, Brozzoli C. Action Planning Modulates Peripersonal Space. J Cogn Neurosci 2019;31:1141-1154. DOI: 10.1162/jocn_a_01349.
Abstract
Peripersonal space is a multisensory representation relying on the processing of tactile and visual stimuli presented on and close to different body parts. The most studied peripersonal space representation is perihand space (PHS), a highly plastic representation modulated following tool use and by the rapid approach of visual objects. Given these properties, PHS may serve different sensorimotor functions, including guidance of voluntary actions such as object grasping. Strong support for this hypothesis would derive from evidence that PHS plastic changes occur before the upcoming movement rather than after its initiation, yet to date, such evidence is scant. Here, we tested whether action-dependent modulation of PHS, behaviorally assessed via visuotactile perception, may occur before an overt movement as early as the action planning phase. To do so, we probed tactile and visuotactile perception at different time points before and during the grasping action. Results showed that visuotactile perception was more strongly affected during the planning phase (250 msec after vision of the target) than during a similarly static but earlier phase (50 msec after vision of the target). Visuotactile interaction was also enhanced at the onset of hand movement, and it further increased during subsequent phases of hand movement. Such a visuotactile interaction featured interference effects during all phases from action planning onward as well as a facilitation effect at the movement onset. These findings reveal that planning to grab an object strengthens the multisensory interaction of visual information from the target and somatosensory information from the hand. Such early updating of the visuotactile interaction reflects multisensory processes supporting motor planning of actions.
Affiliation(s)
- Ivan Patané
- INSERM U1028, CNRS U5292, Lyon, France
- University of Bologna
- University of Lyon 1
- Hospices Civils de Lyon
- Romeo Salemme
- INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1
- Hospices Civils de Lyon
- Alessandro Farnè
- INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1
- Hospices Civils de Lyon
- University of Trento
- Claudio Brozzoli
- INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1
- Hospices Civils de Lyon
- Karolinska Institutet
| |
10
Senna I, Cardinali L, Farnè A, Brozzoli C. Aim and Plausibility of Action Chains Remap Peripersonal Space. Front Psychol 2019;10:1681. PMID: 31379692. PMCID: PMC6652232. DOI: 10.3389/fpsyg.2019.01681.
Abstract
Successful interaction with objects in the peripersonal space requires that the information relative to current and upcoming positions of our body is continuously monitored and updated with respect to the location of target objects. Voluntary actions, for example, are known to induce an anticipatory remapping of the peri-hand space (PHS, i.e., the space near the acting hand) during the very early stages of the action chain: planning and initiating an object grasp increase the interference exerted by visual stimuli coming from the object on touches delivered to the grasping hand, thus allowing for hand-object position monitoring and guidance. Voluntarily grasping an object, though, is rarely performed in isolation. Grasping a candy, for example, is most typically followed by concatenated secondary action steps (bringing the candy to the mouth and swallowing it) that represent the agent’s ultimate intention (to eat the candy). However, whether and when complex action chains remap the PHS remains unknown, just as whether remapping is conditional to goal achievability (e.g., candy-mouth fit). Here we asked these questions by assessing changes in visuo-tactile interference on the acting hand while participants had to grasp an object serving as a support for an elongated candy, and bring it toward their mouth. Depending on its orientation, the candy could potentially enter the participants’ mouth (plausible goal), or not (implausible goal). We observed increased visuo-tactile interference at relatively late stages of the action chain, after the object had been grasped, and only when the action goal was plausible. These findings suggest that multisensory interactions during action execution depend upon the final aim and plausibility of complex goal-directed actions, and extend our knowledge about the role of peripersonal space in guiding goal-directed voluntary actions.
Affiliation(s)
- Irene Senna
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- Department of Applied Cognitive Psychology, Ulm University, Ulm, Germany
- Lucilla Cardinali
- Cognition, Motion and Neuroscience Unit, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Alessandro Farnè
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- Hospices Civils de Lyon, Mouvement et Handicap & Neuro-Immersion, Lyon, France
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Claudio Brozzoli
- Integrative Multisensory Perception Action and Cognition Team (ImpAct), Lyon Neuroscience Research Center, INSERM U1028, CNRS U5292, Lyon, France
- University of Lyon 1, Lyon, France
- Hospices Civils de Lyon, Mouvement et Handicap & Neuro-Immersion, Lyon, France
- Institutionen för Neurobiologi, Vårdvetenskap och Samhälle, Aging Research Center, Karolinska Institutet, Stockholm, Sweden
11
Lohmann J, Belardinelli A, Butz MV. Hands Ahead in Mind and Motion: Active Inference in Peripersonal Hand Space. Vision (Basel) 2019;3:vision3020015. PMID: 31735816. PMCID: PMC6802774. DOI: 10.3390/vision3020015.
Abstract
According to theories of anticipatory behavior control, actions are initiated by predicting their sensory outcomes. From the perspective of event-predictive cognition and active inference, predictive processes activate currently desired events and event boundaries, as well as the expected sensorimotor mappings necessary to realize them, dependent on the predicted uncertainties involved, before actual motor control unfolds. Accordingly, we asked whether peripersonal hand space is remapped in an uncertainty-anticipating manner while grasping and placing bottles in a virtual reality (VR) setup. To investigate, we combined the crossmodal congruency paradigm with virtual object interactions in two experiments. As expected, an anticipatory crossmodal congruency effect (aCCE) at the future finger position on the bottle was detected. Moreover, a manipulation of the visuo-motor mapping of the participants' virtual hand while approaching the bottle selectively reduced the aCCE at movement onset. Our results support theories of event-predictive, anticipatory behavior control and active inference, showing that expected uncertainties in movement control indeed influence anticipatory stimulus processing.
Affiliation(s)
- Johannes Lohmann
- Cognitive Modeling, Department of Computer Science, Faculty of Science, University of Tübingen, 72076 Tübingen, Germany
- Anna Belardinelli
- Cognitive Modeling, Department of Computer Science, Faculty of Science, University of Tübingen, 72076 Tübingen, Germany
- Martin V Butz
- Cognitive Modeling, Department of Computer Science, Faculty of Science, University of Tübingen, 72076 Tübingen, Germany
12
Berger M, Neumann P, Gail A. Peri-hand space expands beyond reach in the context of walk-and-reach movements. Sci Rep 2019;9:3013. PMID: 30816205. PMCID: PMC6395760. DOI: 10.1038/s41598-019-39520-8.
Abstract
The brain incorporates sensory information across modalities to enable us to interact with our environment. The peripersonal space (PPS), defined by a high level of crossmodal interaction, is centered on the relevant body part, e.g., the hand, but can spatially expand to encompass tools or reach targets during goal-directed behavior. Previous studies considered expansion of the PPS towards goals within immediate or tool-mediated reach, but not the translocation of the body as during walking. Here, we used the crossmodal congruency effect (CCE) to quantify the extension of the PPS and to test whether the PPS can also expand to include far-located walk-and-reach targets accessible only by translocation of the body. We also tested for orientation specificity of the hand-centered reference frame, asking whether the CCE inverts with inversion of the hand orientation during reach. We show a high CCE with onset of the movement not only towards reach targets but also towards walk-and-reach targets. When participants must change hand orientation, the CCE decreases, if not vanishes, and does not simply invert. We conclude that the PPS can expand to the action space beyond immediate or tool-mediated reaching distance but is not purely hand-centered with respect to orientation.
Affiliation(s)
- Michael Berger
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany.
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany.
- Peter Neumann
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Alexander Gail
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany