1. Giannopulu I, Lee K, Abdi E, Noori-Hoshyar A, Brotto G, Van Velsen M, Lin T, Gauchan P, Gorman J, Indelicato G. Predicting neural activity of whole body cast shadow through object cast shadow in dynamic environments. Front Psychol 2024; 15:1149750. PMID: 38646121; PMCID: PMC11027993; DOI: 10.3389/fpsyg.2024.1149750.
Abstract
Shadows, like all other objects that surround us, are incorporated into the body and extend it, mediating perceptual information. The current study investigates the hypothesis that the perception of object shadows predicts the perception of body shadows. Thirty-eight participants (19 males and 19 females), aged 23 years on average, were immersed in a virtual reality environment and instructed to perceive and indicate the coincidence or non-coincidence between the movement of a ball's shadow and the ball's movement on the one hand, and between their body shadow and their body position in space on the other. Their brain activity was recorded via a 32-channel EEG system, and beta (13.5-30 Hz) oscillations were analyzed. A series of multiple regression analyses (MRA) revealed that the dynamic beta oscillation patterns of the bilateral occipito-parieto-frontal pathway associated with the perception of the ball shadow were a significant predictor of the increase in beta oscillations across frontal areas related to body shadow perception, and of the decrease in beta oscillations across frontal areas connected to decision making about the body shadow. Taken together, the findings suggest that inferential thinking about the body shadow can be reliably predicted from object shadows, and that the bilateral beta oscillatory modulations are indicative of the formation of predictive frontal neural assemblies that encode and infer a neural representation of the body shadow, that is, a substitution of the physical body.
Affiliation(s)
- Irini Giannopulu
- Creative Robotics Lab, UNSW, Sydney, NSW, Australia
- Clinical Research and Technological Innovation Centre, RCIT, Paris, France
- Khai Lee
- Department of Mechanical, Aerospace and Mechatronics Engineering, Monash University Australia, Melbourne, VIC, Australia
- Elahe Abdi
- Department of Mechanical, Aerospace and Mechatronics Engineering, Monash University Australia, Melbourne, VIC, Australia
- Azadeh Noori-Hoshyar
- School of Engineering, Information Technology and Physical Sciences, Federation University, Ballarat, VIC, Australia
- Gaelle Brotto
- Interdisciplinary Centre for the Artificial Mind (iCAM), Gold Coast, QLD, Australia
- Mathew Van Velsen
- Interdisciplinary Centre for the Artificial Mind (iCAM), Gold Coast, QLD, Australia
- Tiffany Lin
- Interdisciplinary Centre for the Artificial Mind (iCAM), Gold Coast, QLD, Australia
- Priya Gauchan
- Interdisciplinary Centre for the Artificial Mind (iCAM), Gold Coast, QLD, Australia
- Jazmin Gorman
- Interdisciplinary Centre for the Artificial Mind (iCAM), Gold Coast, QLD, Australia
- Giuseppa Indelicato
- Interdisciplinary Centre for the Artificial Mind (iCAM), Gold Coast, QLD, Australia
2. Reed CL, Garza JP, Bush WS, Parikh N, Nagar N, Vecera SP. Does hand position affect orienting when no action is required? An electrophysiological study. Front Neurosci 2023; 16:982005. PMID: 36685236; PMCID: PMC9853295; DOI: 10.3389/fnins.2022.982005.
Abstract
Previous research has shown that attention can be biased toward targets appearing near the hand that require action responses, suggesting that attention to the hand facilitates upcoming action. It is unclear whether attention also orients to non-targets near the hand that require no response. Using electroencephalography/event-related potentials (EEG/ERP), this study investigated whether hand position affected visual orienting to non-targets under conditions that manipulated the distribution of attention. We modified an attention paradigm in which stimuli were presented briefly and rapidly on either side of fixation; participants responded to infrequent targets (15%) but not to standard non-targets, and either a hand or a block was placed next to one stimulus location. In Experiment 1, attention was distributed across left and right stimulus locations to determine whether P1 or N1 ERP amplitudes to non-target standards were differentially influenced by hand location. In Experiment 2, attention was narrowed to only one stimulus location to determine whether attentional focus affected orienting to non-target locations near the hand. When attention was distributed across both stimulus locations, the hand increased overall N1 amplitudes relative to the block, but not selectively for stimuli appearing near the hand. However, when attention was focused on one location, amplitudes were affected by the location of attentional focus and of the stimulus, but not by hand or block location. Thus, hand position appears to contribute a non-location-specific input to standards during visual orienting, and only when attention is distributed across stimulus locations.
Affiliation(s)
- Catherine L. Reed
- Department of Psychological Science, Claremont McKenna College, Claremont, CA, United States
- John P. Garza
- BUILDing SCHOLARS Center, The University of Texas, El Paso, TX, United States
- William S. Bush
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, United States
- Natasha Parikh
- Department of Psychological Science, Claremont McKenna College, Claremont, CA, United States
- Niti Nagar
- Department of Psychological Science, Claremont McKenna College, Claremont, CA, United States
- Shaun P. Vecera
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City, IA, United States
3. Pathak A, Jovanov K, Nitsche M, Mazalek A, Welsh TN. Do Changes in the Body-Part Compatibility Effect Index Tool-Embodiment? J Mot Behav 2023; 55:135-151. PMID: 36642420; DOI: 10.1080/00222895.2022.2132201.
Abstract
Tool-embodiment is said to occur when the representation of the body extends to incorporate the representation of a tool following goal-directed tool-use. The present study was designed to determine whether a tool-embodiment-like phenomenon emerges following different interventions. Participants completed a body-part compatibility task in which they responded with foot or hand presses to colored targets presented on the foot or hand of a model, or on a rake held by the model. This response time (RT) task was performed before and after one of four interventions. In the Virtual-Tangible and Virtual-Keyboard interventions, participants used customized controllers or keyboards, respectively, to move a virtual rake and ball around a course. Participants in the Tool-Perception intervention manually pointed to targets presented on static images of the virtual tool-use task. Participants in the Tool-Absent group completed math problems and were not exposed to a tool task. Results revealed that all four interventions led to a pattern of pre-/post-intervention changes in RT thought to indicate the emergence of tool-embodiment. Overall, the study indicated that tool-embodiment can occur through repeated exposure to the body-part compatibility paradigm in the absence of any active tool-use, and that the paradigm may tap into more than just the body schema.
Affiliation(s)
- Aarohi Pathak
- Centre for Motor Control, Faculty of Kinesiology & Physical Education, University of Toronto, Toronto, ON, Canada
- Kimberley Jovanov
- Centre for Motor Control, Faculty of Kinesiology & Physical Education, University of Toronto, Toronto, ON, Canada
- Michael Nitsche
- School of Literature, Media, and Communication, Georgia Tech, Atlanta, GA, USA
- Ali Mazalek
- Synaesthetic Media Lab, Ryerson University, Toronto, ON, Canada
- Timothy N Welsh
- Centre for Motor Control, Faculty of Kinesiology & Physical Education, University of Toronto, Toronto, ON, Canada
4. Cornelio P, Haggard P, Hornbaek K, Georgiou O, Bergström J, Subramanian S, Obrist M. The sense of agency in emerging technologies for human–computer integration: A review. Front Neurosci 2022; 16:949138. PMID: 36172040; PMCID: PMC9511170; DOI: 10.3389/fnins.2022.949138.
Abstract
Human–computer integration is an emerging area in which the boundary between humans and technology is blurred as users and computers work collaboratively and share agency to execute tasks. The sense of agency (SoA) is an experience that arises from the combination of a voluntary motor action and sensory evidence that the corresponding body movements have influenced the course of external events. The SoA is a key part not only of our experiences in daily life but also of our interaction with technology, as it gives us the feeling of “I did that” as opposed to “the system did that,” thus supporting a feeling of being in control. This feeling becomes critical with human–computer integration, wherein emerging technology directly influences people’s bodies, their actions, and the resulting outcomes. In this review, we analyse and classify current integration technologies based on what we currently know about agency in the literature, and propose a distinction between body augmentation, action augmentation, and outcome augmentation. For each category, we describe agency considerations and markers of differentiation that illustrate a relationship between assistance level (low, high), agency delegation (human, technology), and integration type (fusion, symbiosis). We conclude with a reflection on the opportunities and challenges of integrating humans with computers, and finalise with an expanded definition of human–computer integration that includes the agency aspects we consider particularly relevant. The aim of this review is to provide researchers and practitioners with guidelines to situate their work within the integration research agenda and to consider the implications of any technology for SoA, and thus for overall user experience, when designing future technology.
Affiliation(s)
- Patricia Cornelio
- Ultraleap Ltd., Bristol, United Kingdom
- Department of Computer Science, University College London, London, United Kingdom
- Patrick Haggard
- Department of Computer Science, University College London, London, United Kingdom
- Kasper Hornbaek
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Joanna Bergström
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Sriram Subramanian
- Department of Computer Science, University College London, London, United Kingdom
- Marianna Obrist
- Department of Computer Science, University College London, London, United Kingdom
5. Khan HR, Turri J. Phenomenological Origins of Psychological Ownership. Review of General Psychology 2022. DOI: 10.1177/10892680221085506.
Abstract
Motivated by a set of converging empirical findings and theoretical suggestions pertaining to the construct of ownership, we survey literature from multiple disciplines and present an extensive theoretical account linking the inception of a foundational naïve theory of ownership to principles governing the sense of (body) ownership. The first part of the account examines the emergence of the non-conceptual sense of ownership in terms of the minimal self and the body schema—a dynamic mental model of the body that functions as an instrument of directed action. A remarkable feature of the body schema is that it expands to incorporate objects that are objectively controlled by the person. Moreover, this embodiment of extracorporeal objects is accompanied by the phenomenological feeling of ownership towards the embodied objects. In fact, we argue that the sense of agency and ownership are inextricably linked, and that predictable control over an object can engender the sense of ownership. This relation between objective agency and the sense of ownership is moderated by gestalt-like principles. In the second part, we posit that these early emerging principles and experiences lead to the formation of a naïve theory of ownership rooted in notions of agential involvement.
Affiliation(s)
- Haider Riaz Khan
- Department of Philosophy, University of Waterloo, Waterloo, ON, Canada
- John Turri
- Philosophy Department and Cognitive Science Program, University of Waterloo, Waterloo, ON, Canada
6. Interplay of tactile and motor information in constructing spatial self-perception. Curr Biol 2022; 32:1301-1309.e3. DOI: 10.1016/j.cub.2022.01.047.
7. Colbourne JAD, Auersperg AMI, Lambert ML, Huber L, Völter CJ. Extending the Reach of Tooling Theory: A Neurocognitive and Phylogenetic Perspective. Top Cogn Sci 2021; 13:548-572. PMID: 34165917; DOI: 10.1111/tops.12554.
Abstract
Tool use research has suffered from a lack of consistent theoretical frameworks. There is a plethora of tool use definitions, and the most widespread ones are so inclusive that the behaviors that fall under them arguably do not have much in common. The situation is aggravated by the prevalence of anecdotes, which have played an undue role in the literature. To provide a more rigorous foundation for research and to advance our understanding of the interrelation between tool use and cognition, we suggest the adoption of Fragaszy and Mangalam's (2018) tooling framework, which is characterized by the creation of a body-plus-object system that manages a mechanical interface between tool and surface. Tooling is limited to a narrower suite of behaviors than tool use, which might facilitate its neurocognitive investigation. Indeed, evidence in the literature indicates that tooling has distinct neurocognitive underpinnings not shared by other activities typically classified as tool use, at least in primates. To understand the extent of tooling incidences in previous research, we systematically surveyed the comprehensive tool use catalog by Shumaker et al. (2011). We identified 201 tool use submodes, of which only 81 could be classified as tooling, and the majority of the tool use examples across species were poorly supported by evidence. Furthermore, tooling appears to be phylogenetically less widespread than tool use, with the greatest variability found in the primate order. However, to confirm these findings and to understand the evolution and neurocognitive mechanisms of tooling, more systematic research will be required, particularly with currently underrepresented taxa.
Affiliation(s)
- Jennifer A D Colbourne
- Comparative Cognition Unit, Messerli Research Institute, University of Veterinary Medicine Vienna, University of Vienna, Medical University of Vienna
- Alice M I Auersperg
- Comparative Cognition Unit, Messerli Research Institute, University of Veterinary Medicine Vienna, University of Vienna, Medical University of Vienna
- Megan L Lambert
- Comparative Cognition Unit, Messerli Research Institute, University of Veterinary Medicine Vienna, University of Vienna, Medical University of Vienna
- Ludwig Huber
- Comparative Cognition Unit, Messerli Research Institute, University of Veterinary Medicine Vienna, University of Vienna, Medical University of Vienna
- Christoph J Völter
- Comparative Cognition Unit, Messerli Research Institute, University of Veterinary Medicine Vienna, University of Vienna, Medical University of Vienna
8.
Abstract
Two experiments were conducted to determine, first, whether food items influence participants' estimates of the size of their subjective peripersonal space. Of particular interest was whether this representation is influenced by satiated versus hungry states and is differentially affected by the valence and calorie content of the depicted stimuli. Second, event-related brain potentials (ERPs) were used to obtain information about the time course of the observed effects and how they depend on the spatial location of the food pictures. For that purpose, participants had to decide whether food items, shown at various distances along a horizontal plane in front of them, were reachable or not. In Experiment 1, when participants were hungry, they perceived an increase in their peripersonal space modulated by high-calorie items, which were experienced as more reachable than low-calorie items. In Experiment 2, the reachability findings were replicated, and early and late ERP components showed an attentional enhancement in far space for food items when participants were hungry. These findings suggest that participants' subjective peripersonal space increased while they were hungry, especially for high-calorie contents. Attention also seems to be oriented more strongly to far-space items because of their expected incentive-related salience, expanding the subjective representation of peripersonal space.
9. Bretas R, Taoka M, Hihara S, Cleeremans A, Iriki A. Neural Evidence of Mirror Self-Recognition in the Secondary Somatosensory Cortex of Macaque: Observations from a Single-Cell Recording Experiment and Implications for Consciousness. Brain Sci 2021; 11(2):157. PMID: 33503993; PMCID: PMC7911187; DOI: 10.3390/brainsci11020157.
Abstract
Despite mirror self-recognition being regarded as a classical indication of self-awareness, little is known about its neural underpinnings. An increasing body of evidence pointing to a role for multimodal somatosensory neurons in self-recognition guided our investigation toward the secondary somatosensory cortex (SII): we recorded single-neuron activity from a macaque monkey sitting in front of a mirror. The monkey had previously been habituated to the mirror, successfully acquiring the ability of mirror self-recognition. While the monkey underwent visual and somatosensory stimulation, multimodal visual and somatosensory activity was detected in the SII, with neurons found to respond to stimuli seen through the mirror. Responses were also modulated by self-related or non-self-related stimuli. These observations corroborate that vision is an important aspect of SII activity, providing electrophysiological evidence of mirror self-recognition at the neuronal level even when such an ability is not innate. We also show that the SII may be involved in distinguishing self from non-self. Together, these results point to the involvement of the SII in the establishment of bodily self-consciousness.
Affiliation(s)
- Rafael Bretas
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan
- Miki Taoka
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan
- Sayaka Hihara
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan
- Axel Cleeremans
- Program in Brain, Mind & Consciousness, Canadian Institute for Advanced Research, Toronto, ON M5G 1M1, Canada
- Consciousness, Cognition, and Computation Group (CO3), Centre for Research in Cognition and Neurosciences (CRCN), ULB Neuroscience Institute (UNI), Université Libre de Bruxelles (ULB), B-1050 Brussels, Belgium
- Atsushi Iriki
- Laboratory for Symbolic Cognitive Development, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan
- Program in Brain, Mind & Consciousness, Canadian Institute for Advanced Research, Toronto, ON M5G 1M1, Canada
10. Pugach G, Pitti A, Tolochko O, Gaussier P. Brain-Inspired Coding of Robot Body Schema Through Visuo-Motor Integration of Touched Events. Front Neurorobot 2019; 13:5. PMID: 30899217; PMCID: PMC6416207; DOI: 10.3389/fnbot.2019.00005.
Abstract
Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be eye-, arm-, or target-centered. In the brain, gain-field (GF) neurons in the parietal cortex are involved in computing the spatial transformations necessary for aligning tactile, visual, and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interaction for binding simultaneously touched events from the hand with visual and proprioceptive information. By doing so, they can infer new reference frames to dynamically represent the location of body parts in visual space (i.e., the body schema) and of nearby targets (i.e., the peripersonal space). In this line, we propose a neural model based on GF neurons for integrating tactile events with arm postures and visual locations to construct hand- and target-centered receptive fields in visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behaviors of parietal neurons (1) by dynamically encoding the body schema of our robotic arm without any visual tags on it, and (2) by estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals in order to construct the body space.
Affiliation(s)
- Ganna Pugach
- ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Alexandre Pitti
- ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Olga Tolochko
- Faculty of Electric Power Engineering and Automation, National Technical University of Ukraine Kyiv Polytechnic Institute, Kyiv, Ukraine
- Philippe Gaussier
- ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
11. Berger M, Neumann P, Gail A. Peri-hand space expands beyond reach in the context of walk-and-reach movements. Sci Rep 2019; 9:3013. PMID: 30816205; PMCID: PMC6395760; DOI: 10.1038/s41598-019-39520-8.
Abstract
The brain incorporates sensory information across modalities to enable us to interact with our environment. The peripersonal space (PPS), defined by a high level of crossmodal interaction, is centered on the relevant body part, e.g. the hand, but can spatially expand to encompass tools or reach targets during goal-directed behavior. Previous studies considered expansion of the PPS toward goals within immediate or tool-mediated reach, but not translocation of the body, as during walking. Here, we used the crossmodal congruency effect (CCE) to quantify the extension of the PPS and to test whether the PPS can also expand to include far walk-and-reach targets accessible only by translocation of the body. We also tested for orientation specificity of the hand-centered reference frame, asking whether the CCE inverts with inversion of hand orientation during the reach. We show a high CCE with onset of the movement not only toward reach targets but also toward walk-and-reach targets. When participants must change hand orientation, the CCE decreases, if it does not vanish, and does not simply invert. We conclude that the PPS can expand to the action space beyond immediate or tool-mediated reaching distance, but is not purely hand-centered with respect to orientation.
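The CCE this abstract relies on is a simple reaction-time difference: mean RT on incongruent trials (visual distractor at a different elevation than the tactile target) minus mean RT on congruent trials, with a larger difference indexing stronger visuo-tactile interaction at that location. A minimal sketch with invented trial data:

```python
# Sketch of a crossmodal congruency effect (CCE) computation.
# Trials and RTs below are invented for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def cce(trials):
    """trials: list of (congruent: bool, rt_ms: float). Returns CCE in ms."""
    congruent = [rt for c, rt in trials if c]
    incongruent = [rt for c, rt in trials if not c]
    return mean(incongruent) - mean(congruent)

# Hypothetical trials with the probed location near vs. far from the target
near_target = [(True, 420.0), (True, 430.0), (False, 505.0), (False, 515.0)]
far_target = [(True, 440.0), (True, 450.0), (False, 465.0), (False, 475.0)]

cce_near = cce(near_target)
cce_far = cce(far_target)
```

In a PPS study like this one, a higher CCE at the walk-and-reach target than at a control location would be the signature of the peri-hand space expanding toward that target.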
Affiliation(s)
- Michael Berger
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Peter Neumann
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Alexander Gail
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz-Institute for Primate Research, Goettingen, Germany
- Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany
- Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany
- Bernstein Center for Computational Neuroscience, Goettingen, Germany
12. Miura S, Kawamura K, Kobayashi Y, Fujie MG. Using Brain Activation to Evaluate Arrangements Aiding Hand-Eye Coordination in Surgical Robot Systems. IEEE Trans Biomed Eng 2018; 66:2352-2361. PMID: 30582521; DOI: 10.1109/tbme.2018.2889316.
Abstract
Goal: To realize intuitive, minimally invasive surgery, surgical robots are often controlled using master-slave systems. However, the surgical robot's structure often differs from that of the human body, so the arrangement between the monitor and master must reflect this physical difference. In this study, we validate the feasibility of an embodiment evaluation method that determines the arrangement between the monitor and master. In our cognitive model, the brain's intraparietal sulcus activates significantly when somatic and visual feedback match. Using this model, we validate a cognitively appropriate arrangement between the monitor and master.
Methods: In experiments, we measure participants' brain activation using an imaging device as they control a virtual surgical simulator. Two experiments are carried out that vary the monitor and hand positions.
Conclusion: There are two common arrangements of the monitor and master at the peak of brain activation: one places the monitor behind the master, so that the user feels the system is an extension of their arms into the monitor; the other places the monitor in front of the master, so that the user feels the correspondence between their own arm and the virtual arm in the monitor.
Significance: From these results, we conclude that the arrangement between the monitor and master affects embodiment, enabling the participant to feel that postures match in master-slave surgical robot systems.
13. The sense of agency shapes body schema and peripersonal space. Sci Rep 2018; 8:13847. PMID: 30218103; PMCID: PMC6138644; DOI: 10.1038/s41598-018-32238-z.
Abstract
Body schema, a sensorimotor representation of the body used for planning and executing movements, is plastic: it extends when a tool is used to reach far objects. Modifications of peripersonal space, i.e., a functional representation of reach space, usually co-occur with body schema changes. Here, we hypothesized that such plastic changes depend on the experience of controlling the course of events in space through one's own actions, i.e., the sense of agency. In two experiments, body schema and peripersonal space were assessed before and after the participants' sense of agency over a virtual hand was manipulated. Body schema and peripersonal space enlarged or contracted depending on whether the virtual hand was presented in far space or closer to the participants' body than the real hand. These findings suggest that body schema and peripersonal space are affected by the dynamic mapping between intentional body movements and expected consequences in space.
14. Development of a quantitative evaluation system for visuo-motor control in three-dimensional virtual reality space. Sci Rep 2018; 8:13439. PMID: 30194427; PMCID: PMC6128926; DOI: 10.1038/s41598-018-31758-y.
Abstract
Humans learn movements and motor control mechanisms by watching and mimicking the motions of others, a process grounded in visuo-motor control in three-dimensional space. However, previous studies of visuo-motor control in three-dimensional space have focused on analyzing tracking tasks along one-dimensional lines or on two-dimensional planes using single- or multi-joint movements. In this study, we therefore developed a new system to quantitatively evaluate visuo-motor control in three-dimensional space based on a virtual reality (VR) environment. Our proposed system is designed to analyze circular tracking movements on frontal and sagittal planes in VR space with millimeter-level accuracy. In particular, we compared circular tracking movements under monocular and binocular vision conditions. The results showed that the accuracy of circular tracking movements is approximately 4.5 times lower under monocular vision than under binocular vision on both the frontal and sagittal planes. We also found that a significant difference between the frontal and sagittal planes was observed only for accuracy along the X-axis, in both monocular and binocular vision.
15. Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. PMID: 29876922; DOI: 10.1111/nyas.13867.
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body and, as illustrated by numerous illusions, scaffolds subjective experience of the world and the self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have proposed multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the understanding of multisensory processes generally, promises to advance scientific comprehension of one of the most mysterious questions puzzling humankind: how our brain creates the experience of a self in interaction with the environment.
Affiliation(s)
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke
- Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland
- Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino
- MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
16
Carole P. Pictorial Competence in Primates: A Cognitive Correlate of Mirror Self-Recognition? Primates 2018. [DOI: 10.5772/intechopen.75568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
17

18
Davoli CC, Bloesch EK, Abrams RA. The power of the imagination to affect peripersonal space representations. VISUAL COGNITION 2017. [DOI: 10.1080/13506285.2017.1405135] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Emily K. Bloesch
- Department of Psychology, Central Michigan University, Mt. Pleasant, MI, USA
- Richard A. Abrams
- Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
19
Vazquez Y, Federici L, Pesaran B. Multiple spatial representations interact to increase reach accuracy when coordinating a saccade with a reach. J Neurophysiol 2017; 118:2328-2343. [PMID: 28768742 DOI: 10.1152/jn.00408.2017] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2017] [Revised: 07/11/2017] [Accepted: 07/25/2017] [Indexed: 11/22/2022] Open
Abstract
Reaching is an essential behavior that allows primates to interact with the environment. Precise reaching to visual targets depends on our ability to localize and foveate the target. Despite this, how the saccade system contributes to improvements in reach accuracy remains poorly understood. To assess spatial contributions of eye movements to reach accuracy, we performed a series of behavioral psychophysics experiments in nonhuman primates (Macaca mulatta). We found that a coordinated saccade with a reach to a remembered target location increases reach accuracy without target foveation. The improvement in reach accuracy was similar to that obtained when the subject had visual information about the location of the current target in the visual periphery and executed the reach while maintaining central fixation. Moreover, we found that the increase in reach accuracy elicited by a coordinated movement involved a spatial coupling mechanism between the saccade and reach movements. We observed significant correlations between the saccade and reach errors for coordinated movements. In contrast, when the eye and arm movements were made to targets in different spatial locations, the magnitude of the error and the degree of correlation between the saccade and reach direction were determined by the spatial location of the eye and the hand targets. Hence, we propose that coordinated movements improve reach accuracy without target foveation due to spatial coupling between the reach and saccade systems. Spatial coupling could arise from a neural mechanism for coordinated visual behavior that involves interacting spatial representations. NEW & NOTEWORTHY How visual spatial representations guiding reach movements involve coordinated saccadic eye movements is unknown. Temporal coupling between the reach and saccade system during coordinated movements improves reach performance. However, the role of spatial coupling is unclear. Using behavioral psychophysics, we found that spatial coupling increases reach accuracy in addition to temporal coupling and visual acuity. These results suggest that a spatial mechanism to couple the reach and saccade systems increases the accuracy of coordinated movements.
Affiliation(s)
- Yuriria Vazquez
- Center for Neural Science, New York University, New York, New York; and
- Laura Federici
- Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Bijan Pesaran
- Center for Neural Science, New York University, New York, New York; and
20

21
Pitti A, Pugach G, Gaussier P, Shimada S. Spatio-Temporal Tolerance of Visuo-Tactile Illusions in Artificial Skin by Recurrent Neural Network with Spike-Timing-Dependent Plasticity. Sci Rep 2017; 7:41056. [PMID: 28106139 PMCID: PMC5247701 DOI: 10.1038/srep41056] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2016] [Accepted: 12/16/2016] [Indexed: 12/15/2022] Open
Abstract
Perceptual illusions across multiple modalities, such as the rubber-hand illusion, show how dynamic the brain is at adapting its body image and at determining what is part of it (the self) and what is not (others). Several research studies showed that redundancy and contingency among sensory signals are essential for perception of the illusion and that a lag of 200-300 ms is the critical limit of the brain to represent one's own body. In an experimental setup with an artificial skin, we replicate the visuo-tactile illusion within artificial neural networks. Our model is composed of an associative map and a recurrent map of spiking neurons that learn to predict the contingent activity across the visuo-tactile signals. Depending on the temporal delay incidentally added between the visuo-tactile signals or the spatial distance between two distinct stimuli, the two maps detect contingency differently. Spiking neurons organized into complex networks, together with synchrony detection at different temporal intervals, can well explain multisensory integration regarding the self-body.
Affiliation(s)
- Alexandre Pitti
- ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Ganna Pugach
- ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France; Energy and Metallurgy Department, Donetsk National Technical University, Krasnoarmeysk, Ukraine
- Philippe Gaussier
- ETIS Laboratory, UMR CNRS 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Sotaro Shimada
- Dept. of Electronics and Bioinformatics, School of Science and Technology, Meiji University, Kawasaki, Japan
22
Incorporation of prosthetic limbs into the body representation of amputees: Evidence from the crossed hands temporal order illusion. PROGRESS IN BRAIN RESEARCH 2017. [DOI: 10.1016/bs.pbr.2017.08.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register]
23
Tramacere A, Pievani T, Ferrari PF. Mirror neurons in the tree of life: mosaic evolution, plasticity and exaptation of sensorimotor matching responses. Biol Rev Camb Philos Soc 2016; 92:1819-1841. [PMID: 27862868 DOI: 10.1111/brv.12310] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Revised: 10/05/2016] [Accepted: 10/10/2016] [Indexed: 12/31/2022]
Abstract
Considering the properties of mirror neurons (MNs) in terms of development and phylogeny, we offer a novel, unifying, and testable account of their evolution according to the available data and try to unify apparently discordant research, including the plasticity of MNs during development, their adaptive value and their phylogenetic relationships and continuity. We hypothesize that the MN system reflects a set of interrelated traits, each with an independent natural history due to unique selective pressures, and propose that there are at least three evolutionarily significant trends that gave rise to three subtypes: hand visuomotor, mouth visuomotor, and audio-vocal. Specifically, we put forward a mosaic evolution hypothesis, which posits that different types of MNs may have evolved at different rates within and among species. This evolutionary hypothesis represents an alternative to both adaptationist and associative models. Finally, the review offers a strong heuristic potential in predicting the circumstances under which specific variations and properties of MNs are expected. Such predictive value is critical to test new hypotheses about MN activity and its plastic changes, depending on the species, the neuroanatomical substrates, and the ecological niche.
Affiliation(s)
- Antonella Tramacere
- Department of Neuroscience, University of Parma, Parma, 43100, Italy; Deutsche Primaten Zentrum - Lichtenberg-Kolleg, Institute for Advanced Study, 37083, Göttingen, Germany
- Telmo Pievani
- Department of Biology, University of Padua, Padua, 35131, Italy
- Pier F Ferrari
- Department of Neuroscience, University of Parma, Parma, 43100, Italy; Institut des Sciences Cognitives 'Marc Jeannerod', CNRS/Université Claude Bernard Lyon, 69675, Bron Cedex, France
24
Whiteley L, Kennett S, Taylor-Clarke M, Haggard P. Facilitated Processing of Visual Stimuli Associated with the Body. Perception 2016; 33:307-14. [PMID: 15176615 DOI: 10.1068/p5053] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Recent work on tactile perception has revealed enhanced tactile acuity and speeded spatial-choice reaction times (RTs) when viewing the stimulated body site as opposed to viewing a neutral object. Here we examine whether this body-view enhancement effect extends to visual targets. Participants performed a speeded spatial discrimination between two lights attached either to their own left index finger or to a wooden finger-shaped object, making a simple distal–proximal decision. We filmed either the finger-mounted or the object-mounted lights in separate experimental blocks and the live scene was projected onto a screen in front of the participants. Thus, participants responded to identical visual targets varying only in their context: on the body or not. Results revealed a large performance advantage for the finger-mounted stimuli: reaction times were substantially reduced, while discrimination accuracy was unaffected. With this finding we address concerns associated with previous work on the processing of stimuli attributed to the self and extend the finding of a performance advantage for such stimuli to vision.
Affiliation(s)
- Louise Whiteley
- Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, UK
25
He X, Stefan M, Terranova K, Steinglass J, Marsh R. Altered White Matter Microstructure in Adolescents and Adults with Bulimia Nervosa. Neuropsychopharmacology 2016; 41:1841-8. [PMID: 26647975 PMCID: PMC4869053 DOI: 10.1038/npp.2015.354] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/07/2015] [Revised: 11/12/2015] [Accepted: 12/04/2015] [Indexed: 12/29/2022]
Abstract
Previous data suggest structural and functional deficits in frontal control circuits in adolescents and adults with bulimia nervosa (BN), but less is known about the microstructure of white matter in these circuits early in the course of the disorder. Diffusion tensor imaging (DTI) data were acquired from 28 female adolescents and adults with BN and 28 age- and BMI-matched healthy female participants. Tract-based spatial statistics (TBSS) was used to detect group differences in white matter microstructure and explore the differential effects of age on white matter microstructure across groups. Significant reductions in fractional anisotropy (FA) were detected in the BN group compared with the healthy control group in multiple tracts including forceps minor and major, superior longitudinal, inferior fronto-occipital, and uncinate fasciculi, anterior thalamic radiation, cingulum, and corticospinal tract. FA reductions in forceps and frontotemporal tracts correlated inversely with symptom severity and Stroop interference in the BN group. These findings suggest that white matter microstructure is abnormal in BN in tracts extending through frontal and temporoparietal cortices, especially in those with the most severe symptoms. Age-related differences in both FA and radial diffusivity (RD) in these tracts in BN compared with healthy individuals may represent an abnormal trajectory of white matter development that contributes to the persistence of functional impairments in self-regulation in BN.
Affiliation(s)
- Xiaofu He
- Division of Child and Adolescent Psychiatry, Department of Psychiatry, New York State Psychiatric Institute and College of Physicians & Surgeons, Columbia University, New York, NY, USA
- Mihaela Stefan
- Division of Child and Adolescent Psychiatry, Department of Psychiatry, New York State Psychiatric Institute and College of Physicians & Surgeons, Columbia University, New York, NY, USA
- Kate Terranova
- Division of Child and Adolescent Psychiatry, Department of Psychiatry, New York State Psychiatric Institute and College of Physicians & Surgeons, Columbia University, New York, NY, USA
- Joanna Steinglass
- Eating Disorders Research Unit, Department of Psychiatry, New York State Psychiatric Institute and College of Physicians & Surgeons, Columbia University, New York, NY, USA
- Rachel Marsh
- Division of Child and Adolescent Psychiatry, Department of Psychiatry, New York State Psychiatric Institute and College of Physicians & Surgeons, Columbia University, New York, NY, USA; Eating Disorders Research Unit, Department of Psychiatry, New York State Psychiatric Institute and College of Physicians & Surgeons, Columbia University, New York, NY, USA; Division of Child and Adolescent Psychiatry in the Department of Psychiatry, Columbia University and New York State Psychiatric Institute, 1051 Riverside Drive, Unit 74, New York, NY 10032, USA, Tel: +1 646 774 5774, Fax: +1 212 543 0522, E-mail:
26
Declerck G. How we remember what we can do. SOCIOAFFECTIVE NEUROSCIENCE & PSYCHOLOGY 2015; 5:24807. [PMID: 26507953 PMCID: PMC4623285 DOI: 10.3402/snp.v5.24807] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 04/16/2015] [Revised: 09/14/2015] [Accepted: 09/28/2015] [Indexed: 11/14/2022]
Abstract
According to the motor simulation theory, the knowledge we possess of what we can do is based on simulation mechanisms triggered by an off-line activation of the brain areas involved in motor control. Action capabilities memory does not work by storing some content, but consists in the capacity, rooted in sensory-motor systems, to reenact off-line action sequences exhibiting the range of our powers. In this paper, I present several arguments from cognitive neuropsychology, but also first-person analysis of experience, against this hypothesis. The claim that perceptual access to affordances is mediated by motor simulation processes rests on a misunderstanding of what affordances are, and comes up against a computational reality principle. Motor simulation cannot provide access to affordances because (i) the affordances we are aware of at each moment are too many for their realization to be simulated by the brain and (ii) affordances are not equivalent to currently or personally feasible actions. The explanatory significance of the simulation theory must then be revised downwards compared to what is claimed by most of its advocates. One additional challenge is to determine the prerequisite, in terms of cognitive processing, for the motor simulation mechanisms to work. To overcome the limitations of the simulation theory, I propose a new approach: the direct content specification hypothesis. This hypothesis states that, at least for the most basic actions of our behavioral repertoire, the action possibilities we are aware of through perception are directly specified by perceptual variables characterizing the content of our experience. The cognitive system responsible for the perception of action possibilities is consequently far more direct, in terms of cognitive processing, than what is stated by the simulation theory. 
To support this hypothesis I review evidence from current neuropsychological research, in particular data suggesting a phenomenon of ‘fossilization’ of affordances. Fossilization can be defined as a gap between the capacities that are treated as available by the cognitive system and the capacities this system really has at its disposal. These considerations do not mean that motor simulation cannot contribute to explain how we gain perceptual knowledge of what we can do based on the memory of our past performances. However, when precisely motor simulation plays a role and what it is for exactly currently remain largely unknown.
Affiliation(s)
- Gunnar Declerck
- Sorbonne universités, Université de technologie de Compiègne, EA 2223 Costech (Connaissance, Organisation et Systèmes Techniques), Centre Pierre Guillaumat - CS 60 319 - 60 203 Compiègne cedex, France;
27
Blanke O, Slater M, Serino A. Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness. Neuron 2015; 88:145-66. [PMID: 26447578 DOI: 10.1016/j.neuron.2015.09.029] [Citation(s) in RCA: 386] [Impact Index Per Article: 42.9] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Affiliation(s)
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), 9 Chemin des Mines, 1202 Geneva, Switzerland; Department of Neurology, University of Geneva, 24 rue Micheli-du-Crest, 1211 Geneva, Switzerland.
- Mel Slater
- ICREA-University of Barcelona, Campus de Mundet, 08035 Barcelona, Spain; Department of Computer Science, University College London, Malet Place Engineering Building, Gower Street, London, WC1E 6BT, UK
- Andrea Serino
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), 9 Chemin des Mines, 1202 Geneva, Switzerland.
28
Miura S, Kobayashi Y, Kawamura K, Nakashima Y, Fujie MG. Brain activation in parietal area during manipulation with a surgical robot simulator. Int J Comput Assist Radiol Surg 2015; 10:783-90. [PMID: 25847665 PMCID: PMC4449951 DOI: 10.1007/s11548-015-1178-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2015] [Accepted: 03/13/2015] [Indexed: 11/28/2022]
Abstract
PURPOSE: We present an evaluation method to qualify the embodiment caused by the physical difference between master-slave surgical robots by measuring the activation of the intraparietal sulcus in the user's brain activity during surgical robot manipulation. We show the change of embodiment based on the change of the optical axis-to-target view angle in the surgical simulator to change the manipulator's appearance in the monitor in terms of hand-eye coordination. The objective is to explore the change of brain activation according to the change of the optical axis-to-target view angle. METHODS: In the experiments, we used a functional near-infrared spectroscopic topography (f-NIRS) brain imaging device to measure the brain activity of the seven subjects while they moved the hand controller to insert a curved needle into a target using the manipulator in a surgical simulator. The experiment was carried out several times with a variety of optical axis-to-target view angles. RESULTS: Some participants showed a significant peak (P value = 0.037, F-number = 2.841) when the optical axis-to-target view angle was 75°. CONCLUSIONS: The positional relationship between the manipulators and endoscope at 75° would be the closest to the human physical relationship between the hands and eyes.
Affiliation(s)
- Satoshi Miura
- Department of Modern Mechanical Engineering, Waseda University, Room 309, Bld. 59, 3-4-1 Okubo, Shinjuku, Tokyo, 169-8555, Japan,
29
Cléry J, Guipponi O, Wardak C, Ben Hamed S. Neuronal bases of peripersonal and extrapersonal spaces, their plasticity and their dynamics: Knowns and unknowns. Neuropsychologia 2015; 70:313-26. [PMID: 25447371 DOI: 10.1016/j.neuropsychologia.2014.10.022] [Citation(s) in RCA: 148] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2014] [Revised: 10/09/2014] [Accepted: 10/14/2014] [Indexed: 11/19/2022]
Affiliation(s)
- Justine Cléry
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Olivier Guipponi
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Claire Wardak
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Suliann Ben Hamed
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France.
30
Chang L, Fang Q, Zhang S, Poo MM, Gong N. Mirror-induced self-directed behaviors in rhesus monkeys after visual-somatosensory training. Curr Biol 2015; 25:212-217. [PMID: 25578908 DOI: 10.1016/j.cub.2014.11.016] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2013] [Revised: 09/16/2014] [Accepted: 11/06/2014] [Indexed: 11/25/2022]
Abstract
Mirror self-recognition is a hallmark of higher intelligence in humans. Most children recognize themselves in the mirror by 2 years of age. In contrast to humans and some great apes, monkeys have consistently failed the standard mark test for mirror self-recognition in all previous studies. Here, we show that rhesus monkeys could acquire mirror-induced self-directed behaviors resembling mirror self-recognition following training with visual-somatosensory association. Monkeys were trained on a monkey chair in front of a mirror to touch a light spot on their faces produced by a laser light that elicited an irritant sensation. After 2-5 weeks of training, monkeys had learned to touch a face area marked by a non-irritant light spot or odorless dye in front of a mirror and by a virtual face mark on the mirroring video image on a video screen. Furthermore, in the home cage, five out of seven trained monkeys showed typical mirror-induced self-directed behaviors, such as touching the mark on the face or ear and then looking at and/or smelling their fingers, as well as spontaneously using the mirror to explore normally unseen body parts. Four control monkeys of a similar age that went through mirror habituation but had no training of visual-somatosensory association did not pass any mark tests and did not exhibit mirror-induced self-directed behaviors. These results shed light on the origin of mirror self-recognition and suggest a new approach to studying its neural mechanism.
Affiliation(s)
- Liangtang Chang
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Qin Fang
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Shikun Zhang
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Mu-Ming Poo
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China
- Neng Gong
- Institute of Neuroscience and Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai 200031, China.
31
Revechkis B, Aflalo TNS, Kellis S, Pouratian N, Andersen RA. Parietal neural prosthetic control of a computer cursor in a graphical-user-interface task. J Neural Eng 2014; 11:066014. [PMID: 25394419 DOI: 10.1088/1741-2560/11/6/066014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
OBJECTIVE To date, the majority of Brain-Machine Interfaces (BMIs) have been used to perform simple tasks with sequences of individual targets in otherwise blank environments. In this study we developed a more practical and clinically relevant task that approximated modern computers and graphical user interfaces (GUIs). This task could be problematic given the known sensitivity of areas typically used for BMIs to visual stimuli, eye movements, decision-making, and attentional control. Consequently, we sought to assess the effect of a complex, GUI-like task on the quality of neural decoding. APPROACH A male rhesus macaque monkey was implanted with two 96-channel electrode arrays in area 5d of the superior parietal lobule. The animal was trained to perform a GUI-like 'Face in a Crowd' task on a computer screen that required selecting one cued, icon-like, face image from a group of alternatives (the 'Crowd') using a neurally controlled cursor. We assessed whether the crowd affected decodes of intended cursor movements by comparing it to a 'Crowd Off' condition in which only the matching target appeared without alternatives. We also examined if training a neural decoder with the Crowd On rather than Off had any effect on subsequent decode quality. MAIN RESULTS Despite the additional demands of working with the Crowd On, the animal was able to robustly perform the task under Brain Control. The presence of the crowd did not itself affect decode quality. Training the decoder with the Crowd On relative to Off had no negative influence on subsequent decoding performance. Additionally, the subject was able to gaze around freely without influencing cursor position. SIGNIFICANCE Our results demonstrate that area 5d recordings can be used for decoding in a complex, GUI-like task with free gaze. Thus, this area is a promising source of signals for neural prosthetics that utilize computing devices with GUI interfaces, e.g. personal computers, mobile devices, and tablet computers.
Affiliation(s)
- Boris Revechkis
- Division of Biology and Biological Engineering, MC 216-76, 1200 E California Blvd, Pasadena, CA 91125, USA
32
Wesslein AK, Spence C, Frings C. Vision of embodied rubber hands enhances tactile distractor processing. Exp Brain Res 2014; 233:477-86. [PMID: 25354970 DOI: 10.1007/s00221-014-4129-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Accepted: 10/11/2014] [Indexed: 10/24/2022]
Abstract
Previous research has demonstrated that viewing one's hand can induce tactile response compatibility effects at the hands. Here, we investigated the question of whether vision of one's own hand is actually necessary. The Eriksen flanker task was combined with the rubber hand illusion in order to determine whether tactile distractors presented to the hand would be processed up to the level of response selection when a pair of rubber hands was seen (while one's own hands were not). Our results demonstrate that only if the rubber hands are perceived as belonging to one's own body, is enhanced distractor processing (up to the level of response selection) observed at the hands. In conclusion, vision of a pair of fake hands enhances tactile distractor processing at the hands if, and only if, it happens to be incorporated into the body representation.
Affiliation(s)
- Ann-Katrin Wesslein
- Department of Psychology, Cognitive Psychology, University of Trier, 54286, Trier, Germany,
33
Ward J, Wright T. Sensory substitution as an artificially acquired synaesthesia. Neurosci Biobehav Rev 2014; 41:26-35. [DOI: 10.1016/j.neubiorev.2012.07.007] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2012] [Revised: 07/18/2012] [Accepted: 07/26/2012] [Indexed: 10/28/2022]
34
Within-hemifield posture changes affect tactile-visual exogenous spatial cueing without spatial precision, especially in the dark. Atten Percept Psychophys 2014; 76:1121-35. [PMID: 24470256 PMCID: PMC4174290 DOI: 10.3758/s13414-013-0484-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We investigated the effects of seen and unseen within-hemifield posture changes on crossmodal visual-tactile links in covert spatial attention. In all experiments, a spatially nonpredictive tactile cue was presented to the left or the right hand, with the two hands placed symmetrically across the midline. Shortly after a tactile cue, a visual target appeared at one of two eccentricities within either of the hemifields. For half of the trial blocks, the hands were aligned with the inner visual target locations, and for the remainder, the hands were aligned with the outer target locations. In Experiments 1 and 2, the inner and outer eccentricities were 17.5º and 52.5º, respectively. In Experiment 1, the arms were completely covered, and visual up-down judgments were better when on the same side as the preceding tactile cue. Cueing effects were not significantly affected by hand or target alignment. In Experiment 2, the arms were in view, and now some target responses were affected by cue alignment: Cueing for outer targets was only significant when the hands were aligned with them. In Experiment 3, we tested whether any unseen posture changes could alter the cueing effects, by widely separating the inner and outer target eccentricities (now 10º and 86º). In this case, hand alignment did affect some of the cueing effects: Cueing for outer targets was now only significant when the hands were in the outer position. Although these results confirm that proprioception can, in some cases, influence tactile-visual links in exogenous spatial attention, they also show that spatial precision is severely limited, especially when posture is unseen.
|
35
|
Saegusa R, Metta G, Sandini G, Natale L. Developmental perception of the self and action. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2014; 25:183-202. [PMID: 24806653 DOI: 10.1109/tnnls.2013.2271793] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper describes a developmental framework for action-driven perception in anthropomorphic robots. The key idea of the framework is that action generation develops the agent's perception of its own body and actions. Action-driven development is critical for identifying changing body parts and understanding the effects of actions in unknown or nonstationary environments. We embedded minimal knowledge into the robot's cognitive system in the form of motor synergies and actions to allow motor exploration. The robot voluntarily generates actions and develops the ability to perceive its own body and the effect that it generates on the environment. In addition, the robot can compose these learned primitives to perform complex actions and characterize them in terms of their sensory effects. After learning, the robot can recognize manipulative human behaviors, using cross-modal anticipation to recover an unavailable sensory modality, and can reproduce the recognized actions afterward. We evaluated the proposed framework in experiments with a real robot, achieving autonomous body identification; learning of fixation, reaching, and grasping actions; and developmental recognition of human actions as well as their reproduction.
|
36
|
Abstract
An input to a sensory modality (e.g., an airplane takeoff sound) can suppress the percept of another input of the same modality (e.g., the talking voices of neighbors). This perceptual suppression effect is evidence that neural responses to different inputs closely interact with each other in the brain. While recent studies suggest that close interactions also occur across sensory modalities, a crossmodal perceptual suppression effect has not yet been reported. Here, we demonstrate that tactile stimulation can suppress the percept of visual stimuli: Visual orientation discrimination performance was degraded when a tactile vibration was applied to the observer's index finger. We also demonstrated that this tactile suppression effect on visual perception occurred primarily when the tactile and visual information were spatially and temporally consistent. The current findings indicate that neural signals can interact closely and directly with each other, sufficiently to induce a perceptual suppression effect, even across sensory modalities.
|
37
|
Abstract
Tool use is a vital component of the human behavioural repertoire. The benefits of tool use have often been assumed to be self-evident: by extending control over our environment, we have increased energetic returns and buffered ourselves from potentially harmful influences. In recent decades, however, the study of tool use in both humans and non-human animals has expanded the way we think about the role of tools in the natural world. This Theme Issue is aimed at bringing together this developing body of knowledge, gathered across multiple species and from multiple research perspectives, to chart the wider evolutionary context of this phylogenetically rare behaviour.
Affiliation(s)
- Dora Biro
- Department of Zoology, University of Oxford, Oxford, UK
|
38
|
|
39
|
Mice move smoothly: irrelevant object variation affects perception, but not computer mouse actions. Exp Brain Res 2013; 231:97-106. [PMID: 23955104 DOI: 10.1007/s00221-013-3671-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2013] [Accepted: 07/31/2013] [Indexed: 10/26/2022]
Abstract
Human-Computer Interactions pose special demands on the motor system, especially regarding the virtual tool transformations underlying typical mouse movements. We investigated whether such virtual tool-transformed movements are similarly resistant to irrelevant variation of a target object as skilled natural movements are. Results show that such irrelevant information deteriorates performance in perceptual tasks, whereas movement parameters remain unaffected, suggesting that the control of virtual tools draws on the same mechanisms as natural actions do. The results are discussed in terms of their practical utility and recent findings investigating unskilled and transformed movements in the framework of the action/perception model and the integration of tools into the body schema.
|
40
|
Visual presentation of hand image modulates visuo–tactile temporal order judgment. Exp Brain Res 2013; 228:43-50. [DOI: 10.1007/s00221-013-3535-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2013] [Accepted: 04/17/2013] [Indexed: 10/26/2022]
|
41
|
Sengül A, van Elk M, Rognini G, Aspell JE, Bleuler H, Blanke O. Extending the body to virtual tools using a robotic surgical interface: evidence from the crossmodal congruency task. PLoS One 2012; 7:e49473. [PMID: 23227142 PMCID: PMC3515602 DOI: 10.1371/journal.pone.0049473] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2012] [Accepted: 10/09/2012] [Indexed: 11/18/2022] Open
Abstract
The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected to the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was observed not only when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality.
We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience.
Affiliation(s)
- Ali Sengül
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
|
42
|
Abstract
The blink reflex elicited by the electrical stimulation of the median nerve at the wrist [hand blink reflex (HBR)] is a subcortical, defensive response that is enhanced when the stimulated hand is inside the peripersonal space of the face. Such enhancement results from a tonic, top-down modulation of the excitability of the brainstem interneurons mediating the HBR. Here we aim to (1) characterize the somatotopical specificity of this top-down modulation and investigate its dependence on (2) cognitive expectations and (3) the presence of objects protecting the face, in healthy humans. Experiment 1 showed that the somatotopical specificity of the HBR enhancement is partially homosegmental, i.e., it is greater for the HBR elicited by the stimulation of the hand near the face compared with the other hand, always kept far from the face. Experiment 2 showed that the HBR is enhanced only when participants expect to receive stimuli on the hand close to the face and is thus strongly dependent on cognitive expectations. Experiment 3 showed that the HBR enhancement by hand-face proximity is suppressed when a thin wooden screen is placed between the participants' face and their hand. Thus, the screen reduces the extension of the defensive peripersonal space, so that the hand is never inside the peripersonal space of the face, even in the "near" condition. Together, these findings indicate a fine somatotopical and cognitive tuning of the excitability of brainstem circuits subserving the HBR, whose strength is adjusted depending on the context in a purposeful manner.
|
43
|
A satisficing and bricoleur approach to sensorimotor cognition. Biosystems 2012; 110:65-73. [PMID: 23063599 DOI: 10.1016/j.biosystems.2012.09.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2012] [Revised: 09/18/2012] [Accepted: 09/28/2012] [Indexed: 01/06/2023]
Abstract
In this manuscript I present a set of neural processing principles and evolutionary constraints that should be taken into account in the characterization of sensorimotor cognition. I review evidence supporting the choice of the set of principles, and then I assess how such principles apply to two cases: object perception-action and peripersonal space. The aim is to emphasize the importance of focusing cognitive models on how evolution shapes functional paths to adaptations, as well as to adopt fitness-maximization analyses of cognitive functions. Such an approach contrasts with the widespread reverse-engineering assumption that the neural system comprises a set of specialized circuits designed to comply with its assumed functions. The evidence presented in the manuscript points to the fact that neural systems should be seen not as a seat of optimal processes and circuits addressing particular problems in sensorimotor cognition, but as a set of satisficing and tinkered components, mostly not directly addressing the problems that they are supposed to solve, but solving them as secondary effects of the engaged processes. I conclude with a corollary of the challenges lying ahead for the proposed approach.
|
44
|
|
45
|
Rybarczyk YP, Mestre D. Effect of temporal organization of the visuo-locomotor coupling on the predictive steering. Front Psychol 2012; 3:239. [PMID: 22798955 PMCID: PMC3394438 DOI: 10.3389/fpsyg.2012.00239] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2012] [Accepted: 06/22/2012] [Indexed: 11/16/2022] Open
Abstract
Studies of a driver's gaze direction while taking a bend show that the driver looks toward the tangent point of the inside curve. Mathematically, the direction of this point relative to the car enables the driver to predict the curvature of the road. In the same way, when a person walking in the street turns a corner, his/her gaze anticipates the rotation of the body. A current explanation for this visuo-motor anticipation of locomotion is that the brain, engaged in steering behavior, runs an internal model of the trajectory that anticipates the completion of the path, rather than the reverse. This paper tests this hypothesis by studying the effect of an artificial manipulation of the visuo-locomotor coupling on trajectory prediction. In this experiment, subjects remotely controlled a mobile robot with a pan-tilt camera. This experimental paradigm was chosen to manipulate the temporal organization of the visuo-locomotor coupling in an easy and precise way. The results show that only a visuo-locomotor coupling organized from the visual sensor to the locomotor organs enables (i) significant smoothness of the trajectory and (ii) a velocity-curvature relationship that follows the “2/3 Power Law.” These findings are consistent with the theory of an anticipatory construction of an internal model of the trajectory. This mental representation, used by the brain as a forward prediction of the formation of the path, seems conditioned by the motor program. The overall results are discussed in terms of the sensorimotor bases of predictive coding.
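The “2/3 Power Law” invoked in the abstract relates angular velocity A(t) to curvature C(t) as A(t) = K·C(t)^(2/3), with K approximately constant within a movement segment. A minimal numerical sketch (not from the paper; the function name and parameters are illustrative) shows a standard textbook fact: an ellipse traced at constant parameter speed obeys the law exactly, with gain K = (ab)^(1/3).

```python
import numpy as np

def power_law_gain(a=2.0, b=1.0, n=1000):
    """Ratio A(t) / C(t)**(2/3) along an ellipse x = a*cos(t), y = b*sin(t)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    dx, dy = -a * np.sin(t), b * np.cos(t)        # velocity components
    ddx, ddy = -a * np.cos(t), -b * np.sin(t)     # acceleration components
    speed = np.hypot(dx, dy)
    curvature = np.abs(dx * ddy - dy * ddx) / speed**3
    ang_vel = speed * curvature                   # A = v * C
    return ang_vel / curvature**(2.0 / 3.0)       # constant iff the law holds

K = power_law_gain(a=2.0, b=1.0)
# K is constant along the whole ellipse and equals (a*b)**(1/3)
```

Deviations of K from a constant are the usual way such velocity-curvature data are scored against the law.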
|
46
|
Hosoda K, Sekimoto S, Nishigori Y, Takamuku S, Ikemoto S. Anthropomorphic Muscular–Skeletal Robotic Upper Limb for Understanding Embodied Intelligence. Adv Robot 2012. [DOI: 10.1163/156855312x625371] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Affiliation(s)
- Koh Hosoda, Shunsuke Sekimoto, Yoichi Nishigori, Shinya Takamuku and Shuhei Ikemoto
- Department of Multimedia Engineering, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
|
47
|
Nabeshima C, Kuniyoshi Y. A Method for Sustaining Consistent Sensory–Motor Coordination under Body Property Changes Including Tool Grasp/Release. Adv Robot 2012. [DOI: 10.1163/016918610x493543] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Affiliation(s)
- Cota Nabeshima
- University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo, Japan; CYBERDYNE Inc., Gakuen Minami D25-1, Tsukuba-shi, Ibaraki, Japan
|
48
|
Iriki A, Taoka M. Triadic (ecological, neural, cognitive) niche construction: a scenario of human brain evolution extrapolating tool use and language from the control of reaching actions. Philos Trans R Soc Lond B Biol Sci 2012; 367:10-23. [PMID: 22106423 PMCID: PMC3223791 DOI: 10.1098/rstb.2011.0190] [Citation(s) in RCA: 69] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Hominin evolution has involved a continuous process of addition of new kinds of cognitive capacity, including those relating to manufacture and use of tools and to the establishment of linguistic faculties. The dramatic expansion of the brain that accompanied additions of new functional areas would have supported such continuous evolution. Extended brain functions would have driven rapid and drastic changes in the hominin ecological niche, which in turn demanded further brain resources to adapt to it. In this way, humans have constructed a novel niche in each of the ecological, cognitive and neural domains, whose interactions accelerated their individual evolution through a process of triadic niche construction. Human higher cognitive activity can therefore be viewed holistically as one component in a terrestrial ecosystem. The brain's functional characteristics seem to play a key role in this triadic interaction. We advance a speculative argument about the origins of its neurobiological mechanisms, as an extension (with wider scope) of the evolutionary principles of adaptive function in the animal nervous system. The brain mechanisms that subserve tool use may bridge the gap between gesture and language—the site of such integration seems to be the parietal and extending opercular cortices.
Affiliation(s)
- Atsushi Iriki
- Laboratory for Symbolic Cognitive Development, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan.
|
49
|
Kaneko T, Tomonaga M. The perception of self-agency in chimpanzees (Pan troglodytes). Proc Biol Sci 2011; 278:3694-702. [PMID: 21543355 PMCID: PMC3203506 DOI: 10.1098/rspb.2011.0611] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2011] [Accepted: 04/12/2011] [Indexed: 11/12/2022] Open
Abstract
The ability to distinguish actions and effects caused by oneself from events occurring in the external environment is a fundamental aspect of human cognition. Underlying such distinctions, self-monitoring processes are often assumed, in which predicted events accompanied by one's own volitional action are compared with actual events observed in the external environment. Although many studies have examined the absence or presence of a certain type of self-recognition (i.e. mirror self-recognition) in non-human animals, the underlying cognitive mechanisms remain unclear. Here, we provide, to our knowledge, the first behavioural evidence that chimpanzees can perform self/other distinction for external events on the basis of self-monitoring processes. Three chimpanzees were presented with two cursors on a computer display. One cursor was manipulated by a chimpanzee using a trackball, while the other displayed motion that had been produced previously by the same chimpanzee. Chimpanzees successfully identified which cursor they were able to control. A follow-up experiment revealed that their performance could not be explained by simple associative responses. A further experiment with one chimpanzee showed that the monitoring process occurred in both temporal and spatial dimensions. These findings indicate that chimpanzees and humans share the fundamental cognitive processes underlying the sense of being an independent agent.
Affiliation(s)
- Takaaki Kaneko
- Primate Research Institute, Kyoto University, Inuyama, Japan.
|
50
|
Fuke S, Ogino M, Asada M. Body image constructed from motor and tactile images with visual information. Int J Hum Robot 2011. [DOI: 10.1142/s0219843607001096] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
This paper proposes a learning model that enables a robot to acquire a body image for parts of its body that are invisible to itself. The model associates spatial perception based on motor experience and motor image with perception based on the activations of touch sensors and tactile image, both of which are supported by visual information. The tactile image can be acquired with the help of the motor image, which is thought to be the basis for spatial perception, because all spatial perceptions originate in motor experiences. Based on the proposed model, a robot estimates invisible hand positions using the Jacobian between the displacement of the joint angles and the optical flow of the hand. When the hand touches one of the invisible tactile sensor units on the face, the robot associates this sensor unit with the estimated hand position. The simulation results show that the spatial arrangement of tactile sensors is successfully acquired by the proposed model.
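The Jacobian-based estimate described above amounts to dead reckoning: the unseen hand position is updated by J(θ)·Δθ, the displacement that the hand's optical flow would register for a joint increment Δθ. A minimal sketch under assumed conditions (a planar two-link arm with made-up link lengths, not the authors' robot or learning model):

```python
import numpy as np

L1, L2 = 1.0, 0.8  # assumed link lengths (illustrative only)

def fk(theta):
    """Forward kinematics: hand position of a planar 2-link arm."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t12),
                     L1 * np.sin(t1) + L2 * np.sin(t12)])

def jacobian(theta):
    """Analytic Jacobian mapping joint velocities to hand velocities."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t12), -L2 * np.sin(t12)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t12),  L2 * np.cos(t12)]])

def track_unseen_hand(theta0, dthetas):
    """Dead-reckon the hand from joint increments alone (no vision)."""
    theta, p = np.array(theta0, float), fk(theta0)
    for dth in dthetas:
        p = p + jacobian(theta) @ dth   # Jacobian-predicted flow step
        theta = theta + dth
    return theta, p

theta0 = np.array([0.2, 0.4])
steps = [np.array([0.001, -0.0005])] * 500
theta_f, p_est = track_unseen_hand(theta0, steps)
# p_est stays close to the true forward-kinematics position fk(theta_f)
```

When the estimated position coincides with a touched facial sensor unit, the association in the abstract can be recorded; the integration error stays small as long as the joint increments are small.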
Affiliation(s)
- Sawa Fuke
- Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Asada Synergistic Intelligence Project, ERATO, JST, FRC1, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Masaki Ogino
- Asada Synergistic Intelligence Project, ERATO, JST, FRC1, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Minoru Asada
- Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
- Asada Synergistic Intelligence Project, ERATO, JST, FRC1, Graduate School of Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka 565-0871, Japan
|