1
Hapuarachchi H, Hagiwara T, Ganesh G, Kitazaki M. Effect of connection induced upper body movements on embodiment towards a limb controlled by another during virtual co-embodiment. PLoS One 2023; 18:e0278022. [PMID: 36602991] [DOI: 10.1371/journal.pone.0278022]
Abstract
Even when we cannot control them, and when they provide no direct tactile or proprioceptive feedback, limbs attached to our bodies can still deliver indirect proprioceptive and haptic stimulation to the body parts they are attached to, simply through the physical connection. In this study we investigated whether such indirect movement and haptic feedback from a limb contributes to a feeling of embodiment towards it. To investigate this issue, we developed a 'Joint Avatar' setup in which two individuals were each given full control over the limbs on one side (left or right) of an avatar during a reaching task. The backs of the two individuals were connected with a pair of solid braces through which they could exchange forces and match upper body postures with one another. Coupled with the first-person view, this simulated the experience of the upper body being synchronously dragged by the partner-controlled virtual arm when it moved. We observed that this passive synchronized upper-body movement significantly reduced the feeling that the partner-controlled limb was owned or controlled by another. In summary, our results suggest that even in the total absence of control, connection-induced upper body movements synchronized with visible limb movements can positively affect the sense of embodiment towards partner-controlled or autonomous limbs.
Affiliation(s)
- Harin Hapuarachchi
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
- Gowrishankar Ganesh
- Laboratoire d'Informatique de Robotique et de Microelectronique de Montpellier (LIRMM), Univ. Montpellier, CNRS, Rue Ada, Montpellier, France
- Michiteru Kitazaki
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
2
Jung M, Kim J, Han K, Kim K. Social Telecommunication Experience with Full-Body Ownership Humanoid Robot. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00922-w]
3
Iwasaki Y, Navarro B, Iwata H, Ganesh G. Embodiment modifies attention allotment for the benefit of dual task performance. Commun Biol 2022; 5:701. [PMID: 35835983] [PMCID: PMC9283402] [DOI: 10.1038/s42003-022-03603-6]
Abstract
Many everyday tasks, like walking down a street, require us to dual task, for example to avoid collisions of our swinging arms with other pedestrians. This collision avoidance is possible with ease because we attend to all of our (embodied) limbs. But how does the level of embodiment affect attention distribution, and consequently task performance, in dual tasks? Here we examined this question with a dual task that required participants to perform a cued button-press (main task) with their right hand, while reacting to possible collisions with a moving object using a left 'robot' hand (secondary task). We observed that participants consistently improved main task performance when they perceived the robot hand to be embodied, compared to when they did not, while secondary task performance was maintained in both cases. Our results suggest that embodiment of a limb modifies attention allotment for the benefit of dual motor task performance using limbs.
Affiliation(s)
- Yukiko Iwasaki
- Graduate School of Creative Science and Engineering, Waseda University, Shinjuku, Tokyo, 1628480, Japan.
- Benjamin Navarro
- Laboratoire d'Informatique, de Robotique et de Microelectronique de Montpellier (LIRMM), University Montpellier, CNRS, Montpellier, 34095, France
- Hiroyasu Iwata
- Graduate School of Creative Science and Engineering, Waseda University, Shinjuku, Tokyo, 1628480, Japan
- Gowrishankar Ganesh
- Laboratoire d'Informatique, de Robotique et de Microelectronique de Montpellier (LIRMM), University Montpellier, CNRS, Montpellier, 34095, France.
4
Miura R, Kasahara S, Kitazaki M, Verhulst A, Inami M, Sugimoto M. MultiSoma: Motor and Gaze Analysis on Distributed Embodiment With Synchronized Behavior and Perception. Front Comput Sci 2022. [DOI: 10.3389/fcomp.2022.788014]
Abstract
Human behavior and perception are optimized for a single body. Yet the human brain has plasticity, which allows us to extend our body schema. Using technologies such as robotics or virtual reality (VR), we can modify our body parts or even add a new body to our own while retaining control over these parts. However, how body cognition is updated when controlling multiple bodies has not been well examined. In this study, we explore task performance and body cognition when humans have multiple full bodies as an extended embodiment. Our experimental system allows a participant to control up to four bodies at the same time and perceive sensory information from them, experiencing synchronized behavior and visual perception in a virtual environment. We set up three tasks for multiple bodies and evaluated the cognition of these bodies through gaze information, task performance, and subjective ratings. We found that humans can have a sense of body ownership and agency for each body when controlling multiple bodies simultaneously. Furthermore, we observed that people manipulate multiple bodies by actively switching their attention in a static environment and passively switching their attention in a dynamic environment. Distributed embodiment has the potential to extend human behavior in cooperative work, parallel work, group behavior, and more.
5
Bodily ownership of an independent supernumerary limb: an exploratory study. Sci Rep 2022; 12:2339. [PMID: 35165309] [PMCID: PMC8844351] [DOI: 10.1038/s41598-022-06040-x]
Abstract
Can our brain perceive a sense of ownership towards an independent supernumerary limb, one that can be moved independently of any other limb and provides its own independent movement feedback? Following the rubber-hand illusion experiment, a plethora of studies have shown that the human representation of “self” is very plastic. But previous studies have almost exclusively investigated ownership towards “substitute” artificial limbs, which are controlled by the movements of a real limb and/or from which non-visual sensory feedback is provided on an existing limb. Here, to investigate whether the human brain can own an independent artificial limb, we first developed a novel independent robotic “sixth finger.” We allowed participants to train with the finger and examined whether it induced changes in body representation using behavioral as well as cognitive measures. Our results suggest that, unlike for a substitute artificial limb (as in the rubber hand experiment), it is more difficult for humans to perceive a sense of ownership towards an independent limb. However, ownership does seem possible, as we observed clear tendencies of changes in body representation that correlated with cognitive reports of the sense of ownership. Our results provide the first evidence that an independent supernumerary limb can be embodied by humans.
6
Farizon D, Dominey PF, Ventre-Dominey J. Insights on embodiment induced by visuo-tactile stimulation during robotic telepresence. Sci Rep 2021; 11:22718. [PMID: 34811420] [PMCID: PMC8609005] [DOI: 10.1038/s41598-021-02091-8]
Abstract
Using a simple neuroscience-inspired procedure to beam human subjects into robots, we previously demonstrated through visuo-motor manipulations that embodiment into a robot can enhance the acceptability of, and closeness felt towards, the robot. In that study, feelings of likeability and closeness toward the robot were significantly related to the sense of agency, independently of the sensations of enfacement and location. Here, using the same paradigm, we investigated the effect of a purely sensory manipulation on the sense of robotic embodiment associated with social cognition. Wearing a head-mounted display, participants saw the visual scene captured from the robot's eyes. By positioning a mirror in front of the robot, subjects saw themselves as a robot. Tactile stimulation was provided by stroking the same location on the subject's and the robot's faces with a paintbrush, either synchronously or asynchronously. In contrast to the previous motor induction of embodiment, which particularly affected agency, tactile induction yielded more generalized effects on the perception of ownership, location, and agency. Interestingly, the links between positive social feelings towards the robot and the strength of the embodiment sensations were not observed. We conclude that embodiment into a robot is not in itself sufficient to induce changes in social cognition.
Affiliation(s)
- D Farizon
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, 21000, Dijon, France
- P F Dominey
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, 21000, Dijon, France
- J Ventre-Dominey
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, 21000, Dijon, France.
7
Windt JM. How deep is the rift between conscious states in sleep and wakefulness? Spontaneous experience over the sleep-wake cycle. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190696. [PMID: 33308071] [PMCID: PMC7741079] [DOI: 10.1098/rstb.2019.0696]
Abstract
Whether we are awake or asleep is believed to mark a sharp divide between the types of conscious states we undergo in either behavioural state. Consciousness in sleep is often equated with dreaming and thought to be characteristically different from waking consciousness. Conversely, recent research shows that we spend a substantial amount of our waking lives mind wandering, or lost in spontaneous thoughts. Dreaming has been described as intensified mind wandering, suggesting that there is a continuum of spontaneous experience that reaches from waking into sleep. This challenges how we conceive of the behavioural states of sleep and wakefulness in relation to conscious states. I propose a conceptual framework that distinguishes different subtypes of spontaneous thoughts and experiences independently of their occurrence in sleep or waking. I apply this framework to selected findings from dream and mind-wandering research. I argue that to assess the relationship between spontaneous thoughts and experiences and the behavioural states of sleep and wakefulness, we need to look beyond dreams to consider kinds of sleep-related experience that qualify as dreamless. I conclude that if we consider the entire range of spontaneous thoughts and experiences, there appears to be variation in subtypes both within as well as across behavioural states. Whether we are sleeping or waking does not appear to strongly constrain which subtypes of spontaneous thoughts and experiences we undergo in those states. This challenges the conventional and coarse-grained distinction between sleep and waking and their putative relation to conscious states. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Affiliation(s)
- Jennifer M. Windt
- Department of Philosophy, Monash University, Clayton, Victoria 3800, Australia
8
Ianì F. Embodied cognition: So flexible as to be "disembodied"? Conscious Cogn 2021; 88:103075. [PMID: 33493962] [DOI: 10.1016/j.concog.2021.103075]
Abstract
This review aims to explore what I call the "Embodiment Cost Hypothesis" (ECH), according to which, when humans "embody" a part of the world other than their own bodies, a measurable cost is detectable on their real bodies. The review analyzes experimental evidence in favor of the ECH by examining studies from different research fields, including action observation, tool use, the rubber hand illusion, and full-body illusions. In light of this literature, the review argues that embodiment effects can profitably be seen as phenomena associated with both benefits (resulting from the embodiment of external objects/bodies) and costs (resulting from the disembodiment, at various levels, of the subject's own body). Implications are discussed in relation to the ongoing debate on the embodied cognition (EC) approach.
Affiliation(s)
- Francesco Ianì
- Università di Torino, Dipartimento di Psicologia, Via Verdi, 10, 10123 Turin, Italy.
9
Hagiwara T, Ganesh G, Sugimoto M, Inami M, Kitazaki M. Individuals Prioritize the Reach Straightness and Hand Jerk of a Shared Avatar over Their Own. iScience 2020; 23:101732. [PMID: 33376966] [PMCID: PMC7756142] [DOI: 10.1016/j.isci.2020.101732]
Abstract
Cyber space enables us to "share" bodies whose movements are a consequence of movements by several individuals. But whether and how our motor behavior is affected during body sharing remains unclear. Here we examined this issue in arm reaching performed by a shared avatar whose movement was generated by averaging the movements of two participants. We observed that participants exhibited faster reaction times with a shared avatar than alone. Moreover, the reach trajectory of the shared avatar was straighter than that of either participant and correlated with their subjective embodiment of the avatar. Finally, the jerk of the avatar's hand was lower than that of either participant's own hand, both when they reached alone and in the shared body. Movement straightness and hand jerk are well-known characteristics of human reach behavior, and our results suggest that during body sharing, humans prioritize these movement characteristics of the shared body over their own.
Affiliation(s)
- Takayoshi Hagiwara
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
- Gowrishankar Ganesh
- UM-CNRS Laboratoire d'Informatique de Robotique et de Microelectronique de Montpellier (LIRMM), 161, Rue Ada, Montpellier, France
- Maki Sugimoto
- Department of Information and Computer Science, Keio University, Yokohama, Kanagawa, Japan
- Masahiko Inami
- Research Center for Advanced Science and Technology, The University of Tokyo, Bunkyo-ku, Tokyo, Japan
- Michiteru Kitazaki
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
10
Guterstam A, Larsson DEO, Szczotka J, Ehrsson HH. Duplication of the bodily self: a perceptual illusion of dual full-body ownership and dual self-location. R Soc Open Sci 2020; 7:201911. [PMID: 33489299] [PMCID: PMC7813251] [DOI: 10.1098/rsos.201911]
Abstract
Previous research has shown that it is possible to use multisensory stimulation to induce the perceptual illusion of owning supernumerary limbs, such as two right arms. However, it remains unclear whether the coherent feeling of owning a full body may be duplicated in the same manner and whether such a dual full-body illusion could be used to split the unitary sense of self-location into two. Here, we examined whether healthy human participants can experience simultaneous ownership of two full bodies, located either close in parallel or in two separate spatial locations. A previously described full-body illusion, based on visuo-tactile stimulation of an artificial body viewed from the first-person perspective (1PP) via head-mounted displays, was adapted to a dual-body setting and quantified in five experiments using questionnaires, a behavioural self-location task, and threat-evoked skin conductance responses. The results of experiments 1-3 showed that synchronous visuo-tactile stimulation of two bodies viewed from the 1PP, lying in parallel next to each other, induced a significant illusion of dual full-body ownership. In experiment 4, we failed to find support for our working hypothesis that splitting the visual scene into two, so that each of the two illusory bodies was placed in a distinct spatial environment, would lead to dual self-location. In a final exploratory experiment (experiment 5), we found preliminary support for an illusion of dual self-location and dual body ownership by using dynamic changes between the 1PPs of the two artificial bodies and/or a common third-person perspective in the ceiling of the testing room. These findings suggest that healthy people, under certain conditions of multisensory perceptual ambiguity, may experience dual body ownership and dual self-location, and that the coherent sense of the bodily self located at a single place in space is the result of an active and dynamic perceptual integration process.
Affiliation(s)
- Arvid Guterstam
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Joanna Szczotka
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- H. Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
11
Skewes J, Amodio DM, Seibt J. Social robotics and the modulation of social perception and bias. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180037. [PMID: 30853001] [DOI: 10.1098/rstb.2018.0037]
Abstract
The field of social robotics offers an unprecedented opportunity to probe the process of impression formation and the effects of identity-based stereotypes (e.g. about gender or race) on social judgements and interactions. We present the concept of fair proxy communication-a form of robot-mediated communication that proceeds in the absence of potentially biasing identity cues-and describe how this application of social robotics may be used to illuminate implicit bias in social cognition and inform novel interventions to reduce bias. We discuss key questions and challenges for the use of robots in research on the social cognition of bias and offer some practical recommendations. We conclude by discussing boundary conditions of this new form of interaction and by raising some ethical concerns about the inclusion of social robots in psychological research and interventions. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Joshua Skewes
- Department for Linguistics, Cognitive Science and Semiotics, and Interacting Minds Center, Aarhus University, Denmark
- David M Amodio
- Department of Psychology and Neural Science, New York University, New York, NY, USA
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
| | - Johanna Seibt
- Research Unit for Robophilosophy, School of Culture and Society, Aarhus University, Denmark
12
Abstract
Recent studies have shown how embodiment induced by multisensory bodily interactions between individuals can positively change social attitudes (closeness, empathy, racial biases). Here we use a simple neuroscience-inspired procedure to beam human subjects into one of two distinct robots and demonstrate how this can readily increase acceptability of, and social closeness to, that robot. Participants wore a head-mounted display that tracked their head movements and displayed the 3D visual scene taken from the eyes of a robot positioned in front of a mirror and piloted by the subjects' head movements. As a result, participants saw themselves as a robot. When the participants' and the robot's head movements were correlated, participants felt that they were incorporated into the robot, with a sense of agency. Critically, the robot they embodied was judged more likeable and socially closer. Remarkably, we found that the beaming experience with correlated head movements, and the corresponding sensation of embodiment and social proximity, was independent of the robot's humanoid appearance. These findings not only reveal the ease of body-swapping, via visuo-motor synchrony, into robots that bear no clear human resemblance, but may also pave a new way to make our future robotic helpers socially acceptable.
13
Aymerich-Franch L, Kishore S, Slater M. When Your Robot Avatar Misbehaves You Are Likely to Apologize: An Exploration of Guilt During Robot Embodiment. Int J Soc Robot 2019. [DOI: 10.1007/s12369-019-00556-5]
14
Beckerle P, Castellini C, Lenggenhager B. Robotic interfaces for cognitive psychology and embodiment research: A research roadmap. Wiley Interdiscip Rev Cogn Sci 2018; 10:e1486. [PMID: 30485732] [DOI: 10.1002/wcs.1486]
Abstract
Advanced human-machine interfaces render robotic devices applicable to the study and enhancement of human cognition. This turns robots into formidable neuroscientific tools for studying processes such as the adaptation between a human operator and the operated robotic device, and how this adaptation modulates human embodiment and embodied cognition. We analyze bidirectional human-machine interface (bHMI) technologies for transparent information transfer between a human and a robot via efferent and afferent channels. While such interfaces can have a tremendous positive impact on feedback loops and embodiment, advanced bHMIs face immense technological challenges. We critically discuss existing technical approaches, focusing mainly on haptics, and suggest extensions that include other aspects of touch. Moreover, we point out other potential constraints, such as limited functionality, semi-autonomy, intent detection, and feedback methods. From this, we develop a research roadmap to guide the understanding and development of bidirectional human-machine interfaces that enable robotic experiments to empirically study the human mind and embodiment. We conclude that the integration of dexterous control and multisensory feedback is a promising roadmap towards future robotic interfaces, especially regarding applications in the cognitive sciences. This article is categorized under: Computer Science > Robotics; Psychology > Motor Skill and Performance; Neuroscience > Plasticity.
Affiliation(s)
- Philipp Beckerle
- Elastic Lightweight Robotics Group, Robotics Research Institute, Technische Universität Dortmund, Dortmund, Germany
- Institute for Mechatronic Systems in Mechanical Engineering, Technische Universität Darmstadt, Darmstadt, Germany
- Claudio Castellini
- Institute of Robotics and Mechatronics, DLR German Aerospace Center, Oberpfaffenhofen, Germany
- Bigna Lenggenhager
- Cognitive Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
15
Newen A. The Embodied Self, the Pattern Theory of Self, and the Predictive Mind. Front Psychol 2018; 9:2270. [PMID: 30532721] [PMCID: PMC6265368] [DOI: 10.3389/fpsyg.2018.02270]
Abstract
Do we have to presuppose a self to account for human self-consciousness? If so, how should we characterize the self? These questions are discussed in the context of two alternatives, i.e., the no-self position held by Metzinger (2003, 2009) and the claim that the only self we have to presuppose is a narrative self (Dennett, 1992; Schechtman, 2007; Hardcastle, 2008) which is primarily an abstract entity. In contrast to these theories, I argue that we have to presuppose an embodied self, although this is not a metaphysical substance, nor an entity for which stable necessary and jointly sufficient conditions can be given. Self-consciousness results from an integration of an embodied, basic affective flow with an intentional object (the self as agent or as center of imagination or thought), where this integration remains anchored in an embodied self. This embodied self is a flexible and variable entity, which we can account for only with a pattern theory of the self (in line with Gallagher, 2013). Furthermore, I outline how this pattern theory of the self fits into the predictive coding framework, which also answers the open question whether self-representation is prior to world-representation or the other way around. The principal organization of a mechanism of building up a self-model is such that both types of representations are always activated and developed in parallel. Modeling oneself is a process which is always activated when one interacts with the world - much as a shadow is present when a person walks in the sun.
Affiliation(s)
- Albert Newen
- Institut für Philosophie II, Ruhr-Universität Bochum, Bochum, Germany
16
17
Aymerich-Franch L, Petit D, Ganesh G, Kheddar A. Non-human Looking Robot Arms Induce Illusion of Embodiment. Int J Soc Robot 2017. [DOI: 10.1007/s12369-017-0397-8]