1. Senna I, Piller S, Martolini C, Cocchi E, Gori M, Ernst MO. Multisensory training improves the development of spatial cognition after sight restoration from congenital cataracts. iScience 2024; 27:109167. PMID: 38414862; PMCID: PMC10897914; DOI: 10.1016/j.isci.2024.109167.
Abstract
Spatial cognition and mobility are typically impaired in congenitally blind individuals, as vision usually calibrates space perception by providing the most accurate distal spatial cues. We have previously shown that sight restoration from congenital bilateral cataracts guides the development of more accurate space perception, even when cataract removal occurs years after birth. However, late cataract-treated individuals do not usually reach the performance levels of the typically sighted population. Here, we developed a brief multisensory training that associated audiovisual feedback with body movements. Late cataract-treated participants quickly improved their space representation and mobility, performing as well as typically sighted controls in most tasks. Their improvement was comparable with that of a group of blind participants, who underwent training coupling their movements with auditory feedback alone. These findings suggest that spatial cognition can be enhanced by a training program that strengthens the association between bodily movements and their sensory feedback (either auditory or audiovisual).
Affiliation(s)
- Irene Senna: Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany; Department of Psychology, Liverpool Hope University, Liverpool L16 9JD, UK
- Sophia Piller: Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
- Chiara Martolini: Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Elena Cocchi: Istituto David Chiossone per Ciechi ed Ipovedenti ONLUS, 16145 Genova, Italy
- Monica Gori: Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Marc O. Ernst: Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
2. Orioli G, Parisi I, van Velzen JL, Bremner AJ. Visual objects approaching the body modulate subsequent somatosensory processing at 4 months of age. Sci Rep 2023; 13:19300. PMID: 37989781; PMCID: PMC10663495; DOI: 10.1038/s41598-023-45897-4.
Abstract
We asked whether, in the first year of life, the infant brain can support the dynamic crossmodal interactions between vision and somatosensation that are required to represent peripersonal space. Infants aged 4 (n = 20, 9 female) and 8 (n = 20, 10 female) months were presented with a visual object that moved towards their body or receded away from it. This was presented in the bottom half of the screen and not fixated upon by the infants, who were instead focusing on an attention getter at the top of the screen. The visual moving object then disappeared and was followed by a vibrotactile stimulus occurring later in time and in a different location in space (on their hands). The 4-month-olds' somatosensory evoked potentials (SEPs) were enhanced when tactile stimuli were preceded by unattended approaching visual motion, demonstrating that the dynamic visual-somatosensory cortical interactions underpinning representations of the body and peripersonal space begin early in the first year of life. Within the 8-month-olds' sample, SEPs were increasingly enhanced by (unexpected) tactile stimuli following receding visual motion as age in days increased, demonstrating changes in the neural underpinnings of the representations of peripersonal space across the first year of life.
Affiliation(s)
- Giulia Orioli: Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK; Department of Psychology, Goldsmiths, University of London, London, UK
- Irene Parisi: Department of Psychology, Goldsmiths, University of London, London, UK; Department of Psychology, Sapienza University of Rome, Rome, Italy
- José L van Velzen: Department of Psychology, Goldsmiths, University of London, London, UK
- Andrew J Bremner: Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK; Department of Psychology, Goldsmiths, University of London, London, UK
3. Meredith Weiss S, Marshall PJ. Anticipation across modalities in children and adults: Relating anticipatory alpha rhythm lateralization, reaction time, and executive function. Dev Sci 2023; 26:e13277. PMID: 35616474; PMCID: PMC10078525; DOI: 10.1111/desc.13277.
Abstract
The development of the ability to anticipate, as manifested by preparatory actions and neural activation related to the expectation of an upcoming stimulus, may play a key role in the ontogeny of cognitive skills more broadly. This preregistered study examined anticipatory brain potentials and behavioral responses (reaction time; RT) to anticipated target stimuli in relation to individual differences in the ability to use goals to direct action (as indexed by measures of executive function; EF). A cross-sectional investigation was conducted in 40 adults (aged 18-25 years) and 40 children (aged 6-8 years) to examine the association of changes in the amplitude of modality-specific alpha-range rhythms in the electroencephalogram (EEG) during anticipation of lateralized visual, tactile, or auditory stimuli with inter- and intraindividual variation in RT and EF. Children and adults exhibited contralateral anticipatory reductions in the mu rhythm and the visual alpha rhythm for tactile and visual anticipation, respectively, indicating modality-specific and spatially specific attention allocation. Variability in within-subject anticipatory alpha lateralization (the difference between contralateral and ipsilateral alpha power) was related to single-trial RT. This relation was more prominent in adults than in children, and was not apparent for auditory stimuli. Multilevel models indicated that interindividual differences in anticipatory mu rhythm lateralization contributed to the significant association with variability in EF, but this was not the case for visual or auditory alpha rhythms. Exploratory microstate analyses were undertaken to cluster global field power (GFP) into a distribution-free temporal analysis examining developmental differences across samples and in relation to RT and EF. Anticipation is suggested as a developmental bridge construct connecting neuroscience, behavior, and cognition, with anticipatory EEG oscillations being discussed as quantifiable and potentially malleable indicators of stimulus prediction.
Affiliation(s)
- Staci Meredith Weiss: Department of Psychology, Temple University, Philadelphia, Pennsylvania, USA; Department of Psychology, University of Cambridge, Cambridge, UK
- Peter J Marshall: Department of Psychology, Temple University, Philadelphia, Pennsylvania, USA
4. Tinelli F, Gori M, Beani E, Sgandurra G, Martolini C, Maselli M, Petri S, Purpura G. Feasibility of audio-motor training with the multisensory device ABBI: Implementation in a child with hemiplegia and hemianopia. Neuropsychologia 2022; 174:108319. PMID: 35820452; DOI: 10.1016/j.neuropsychologia.2022.108319.
Abstract
Spatial representation is crucial for everyday interaction with the environment. Different factors influence spatial perception, such as body movements and vision. Accordingly, training strategies that exploit the plasticity of the human brain should be adopted early. In the current study we developed and tested a new training protocol, based on the reinforcement of audio-motor associations, to support spatial development in a hemiplegic child with a severe visual field defect (hemianopia) on the same side as the hemiplegic limb. We focused on investigating whether a better representation of space based on sound can also improve the involvement of the hemiplegic upper limb in daily-life activity. The experimental training consisted of two weeks of intensive but entertaining rehabilitation, during which the child performed ad-hoc developed audio-motor-spatial exercises with the Audio Bracelet for Blind Interaction (ABBI) for 2 h/day. A battery of tests administered before and after the training indicated that the child significantly improved in both spatial aspects and the involvement of the hemiplegic limb in bimanual tasks. During the assessment, an ActiGraph GT3X+ was used to measure asymmetry in the use of the two upper limbs during administration of a standardized clinical tool, the Assisting Hand Assessment (AHA), pre- and post-training, and additionally to record spontaneous daily-life activity for at least 2 h/day. These results confirm that perceptual development in motor and visual disorders can be enhanced using auditory feedback naturally associated with body movements.
Affiliation(s)
- Francesca Tinelli: Department of Developmental Neuroscience, Laboratory of Vision, IRCCS Fondazione Stella Maris, Pisa, Italy
- Monica Gori: U-VIP: Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, Genova, Italy
- Elena Beani: Department of Developmental Neuroscience, Laboratory of Vision, IRCCS Fondazione Stella Maris, Pisa, Italy
- Giuseppina Sgandurra: Department of Developmental Neuroscience, Laboratory of Vision, IRCCS Fondazione Stella Maris, Pisa, Italy; Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Chiara Martolini: U-VIP: Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, Genova, Italy
- Martina Maselli: The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, Italy; Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Stefania Petri: Department of Developmental Neuroscience, Laboratory of Vision, IRCCS Fondazione Stella Maris, Pisa, Italy
- Giulia Purpura: Department of Developmental Neuroscience, Laboratory of Vision, IRCCS Fondazione Stella Maris, Pisa, Italy; School of Medicine and Surgery, University of Milano Bicocca, Monza, Italy
5. Do infants have agency? – The importance of control for the study of early agency. Developmental Review 2022. DOI: 10.1016/j.dr.2022.101022.
6. Ahmad H, Tonelli A, Campus C, Capris E, Facchini V, Sandini G, Gori M. An audio-visual motor training improves audio spatial localization skills in individuals with scotomas due to retinal degenerative diseases. Acta Psychol (Amst) 2021; 219:103384. PMID: 34365274; DOI: 10.1016/j.actpsy.2021.103384.
Abstract
Several studies have shown that impairments in one sensory modality can induce perceptual deficits in tasks involving the remaining senses. For example, people with retinal degenerative diseases such as Macular Degeneration (MD), who have a central scotoma, show auditory localization biased towards the scotoma area of the visual field. This result indicates a cross-modal spatial reorganization of auditory processing when visual information is impaired. Recent work has shown that multisensory training can improve spatial perception. In line with this idea, here we hypothesize that audio-visual and motor training could improve the spatial skills of people with retinal degenerative diseases. In the present study, we tested this hypothesis in two groups of scotoma patients performing an auditory and a visual localization task before and after either training or rest: the training group was tested before and after multisensory training, while the control group performed the two tasks twice, separated by a 10-min break. The training was done with a portable device positioned on the finger, providing spatially and temporally congruent audio and visual feedback during arm movement. Our findings show improved audio and visual localization for the training group but not for the control group. These results suggest that integrating multiple spatial sensory cues can improve the spatial perception of scotoma patients. This finding motivates further research and applications for people with central scotoma, for whom rehabilitation has classically focused on training the visual modality only.
Affiliation(s)
- Hafsah Ahmad: Robotics, Brain and Cognitive Sciences (RBCS), Genova, Italy; Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy; University of Genova, Genova, Italy; Sino-Pakistan Centre for Artificial Intelligence (SPCAI), Pak-Austria Fachhochschule: Institute of Applied Sciences and Technology (PAF-IAST), Haripur, Pakistan
- Alessia Tonelli: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
- Claudio Campus: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
- Giulio Sandini: Robotics, Brain and Cognitive Sciences (RBCS), Genova, Italy
- Monica Gori: Unit for Visually Impaired People (U-VIP), Italian Institute of Technology (IIT), Genova, Italy
7. Spille JL, Grunwald M, Martin S, Mueller SM. Stop touching your face! A systematic review of triggers, characteristics, regulatory functions and neuro-physiology of facial self-touch. Neurosci Biobehav Rev 2021; 128:102-116. PMID: 34126163; DOI: 10.1016/j.neubiorev.2021.05.030.
Abstract
Spontaneous face touching (sFST) is a ubiquitous behavior that occurs in people of all ages and sexes, up to 800 times a day. Despite its high frequency, it has rarely been considered as an independent phenomenon. Recently, sFST has sparked scientific interest since it contributes to self-infection with pathogens. This raises questions about the trigger mechanisms and functions of sFST and whether it can be prevented. This systematic review compiles relevant evidence on these issues. Facial self-touches seem to increase in frequency and duration in socially, emotionally, and cognitively challenging situations. They have been associated with attention focus, working-memory processes, and emotion-regulating functions, as well as with the development and maintenance of a sense of self and body. The dominance of touch to the face over other body parts is discussed in light of the proximity of hand and face cortical representations and the peculiarities of facial innervation. The results show that the underlying psychological and neuro-physiological mechanisms of sFST are still poorly understood and that various basic questions remain unanswered.
Affiliation(s)
- Jente L Spille: University of Leipzig, Paul-Flechsig-Institute for Brain Research, Haptic Research Lab, 04103 Leipzig, Germany
- Martin Grunwald: University of Leipzig, Paul-Flechsig-Institute for Brain Research, Haptic Research Lab, 04103 Leipzig, Germany
- Sven Martin: University of Leipzig, Paul-Flechsig-Institute for Brain Research, Haptic Research Lab, 04103 Leipzig, Germany
- Stephanie M Mueller: University of Leipzig, Paul-Flechsig-Institute for Brain Research, Haptic Research Lab, 04103 Leipzig, Germany
8. Bogdanova OV, Bogdanov VB, Dureux A, Farnè A, Hadj-Bouziane F. The Peripersonal Space in a social world. Cortex 2021; 142:28-46. PMID: 34174722; DOI: 10.1016/j.cortex.2021.05.005.
Abstract
The PeriPersonal Space (PPS) has been defined as the space surrounding the body, where physical interactions with elements of the environment take place. As our world is social in nature, recent evidence has revealed complex modulations of PPS representation by social factors. In light of the growing interest in the field, in this review we take a close look at the experimental approaches undertaken to assess the impact of social factors on PPS representation. Our social world also influences the personal space (PS), a concept stemming from social psychology, defined as the space we keep between ourselves and others to avoid discomfort. Here we analytically compare PPS and PS with the aim of understanding if and how they relate to each other. At the behavioral level, the multiplicity of experimental methodologies, whether well-established or novel, leads to somewhat divergent results and interpretations. Beyond behavior, we review physiological and neural signatures of PPS representation to discuss how interoceptive signals could contribute to PPS representation, as well as how these internal signals could shape the neural responses underlying it. In particular, by merging exteroceptive information from the environment and internal signals that come from the body, PPS may promote an integrated representation of the self, as distinct from the environment and from others. We put forward that integrating internal and external signals in the brain for the perception of proximal environmental stimuli may also provide us with a better understanding of the processes at play during social interactions. Adopting such an integrative stance may offer novel insights about PPS representation in a social world. Finally, we discuss possible links between PPS research and social cognition, a link that may contribute to the understanding of the intentions and feelings of others around us and promote appropriate social interactions.
Affiliation(s)
- Olena V Bogdanova: Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France; INCIA, UMR 5287, CNRS, Université de Bordeaux, France
- Volodymyr B Bogdanov: Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France; Ecole Nationale des Travaux Publics de l'Etat, Laboratoire Génie Civil et Bâtiment, Vaulx-en-Velin, France
- Audrey Dureux: Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France
- Alessandro Farnè: Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France; Hospices Civils de Lyon, Neuro-Immersion Platform, Lyon, France; Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Fadila Hadj-Bouziane: Integrative Multisensory Perception Action & Cognition Team (Impact), INSERM U1028, CNRS UMR5292, Lyon Neuroscience Research Center (CRNL), Lyon, France; University of Lyon 1, France
9. Measuring the sensitivity of tactile temporal order judgments in sighted and blind participants using the adaptive psi method. Atten Percept Psychophys 2021; 83:2995-3007. PMID: 34036536; DOI: 10.3758/s13414-021-02301-5.
Abstract
Spatial locations of somatosensory stimuli are coded according to somatotopic (anatomical distribution of the sensory receptors on the skin surface) and spatiotopic (position of the body parts in external space) reference frames. This has mostly been evidenced by means of temporal order judgment (TOJ) tasks, in which participants discriminate the temporal order of two tactile stimuli, one applied to each hand. Because crossing the hands generates a conflict between anatomical and spatial responses, TOJ performance decreases in this posture, except in congenitally blind people, suggesting a role of visual experience in somatosensory perception. In previous TOJ studies, stimuli were generally presented using the method of constant stimuli, that is, the repetition of a predefined sample of stimulus-onset asynchronies (SOAs) separating the two stimuli. This method has the disadvantage that a large number of trials is needed to obtain reliable data when aiming to dissociate the performance of groups with different cognitive abilities, because each SOA among a large variety of SOAs must be presented the same number of times irrespective of the participant's performance. This study aimed to replicate previous tactile TOJ data in sighted and blind participants with the adaptive psi method, in order to validate a method that adapts the presented SOA according to the participant's performance. This allows the temporal sensitivity of each participant to be estimated precisely while the presented stimuli are adapted to the participant's individual discrimination threshold. We successfully replicated previous findings in both sighted and blind participants, corroborating previous data using a more suitable psychophysical tool.
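The adaptive psi method mentioned in this entry is a Bayesian procedure: it maintains a posterior over the parameters of the psychometric function (e.g., the point of subjective simultaneity, PSE, and the slope) and, on each trial, presents the SOA expected to minimize the entropy of the posterior. The sketch below is a minimal illustrative version, not the study's actual implementation; the parameter grids, the cumulative-Gaussian psychometric function, and the simulated observer are all hypothetical choices for demonstration.

```python
import math
import random

def pf(soa, pse, slope):
    """Cumulative-Gaussian psychometric function: probability of one
    response (e.g., 'right hand first') at a given SOA (ms)."""
    return 0.5 * (1 + math.erf((soa - pse) / (slope * math.sqrt(2))))

# Hypothetical parameter grids (ms)
soas = [15 * i for i in range(-10, 11)]   # candidate SOAs to present
pses = [10 * i for i in range(-5, 6)]     # candidate PSE values
slopes = [10, 20, 40, 80, 160]            # candidate slope values

# Flat prior over (PSE, slope)
prior = {(p, s): 1.0 / (len(pses) * len(slopes)) for p in pses for s in slopes}

def entropy(dist):
    return -sum(v * math.log(v) for v in dist.values() if v > 0)

def update(belief, soa, resp):
    """Bayes update of the belief after observing response `resp` at `soa`."""
    post = {}
    for (p, s), pr in belief.items():
        lik = pf(soa, p, s) if resp else 1.0 - pf(soa, p, s)
        post[(p, s)] = pr * lik
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

def next_soa(belief):
    """Pick the SOA whose expected posterior entropy is smallest."""
    best_soa, best_h = None, float("inf")
    for soa in soas:
        p_resp = sum(pr * pf(soa, p, s) for (p, s), pr in belief.items())
        h = (p_resp * entropy(update(belief, soa, True))
             + (1 - p_resp) * entropy(update(belief, soa, False)))
        if h < best_h:
            best_soa, best_h = soa, h
    return best_soa

# Simulated observer: true PSE = 0 ms, true slope = 40 ms
random.seed(1)
belief = prior
for _ in range(40):
    soa = next_soa(belief)
    resp = random.random() < pf(soa, 0, 40)
    belief = update(belief, soa, resp)

# Posterior-mean estimates of the observer's parameters
pse_hat = sum(p * pr for (p, s), pr in belief.items())
slope_hat = sum(s * pr for (p, s), pr in belief.items())
```

With a correctly specified model, the posterior mean typically converges near the simulated observer's true parameters within a few dozen trials, which is why adaptive procedures need far fewer trials than the method of constant stimuli described above.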
10.
Abstract
Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people, and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided with a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper we first review the developmental processes underlying these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and of the self, and we compare these models with their human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
11. Do infants represent human actions cross-modally? An ERP visual-auditory priming study. Biol Psychol 2021; 160:108047. PMID: 33596461; DOI: 10.1016/j.biopsycho.2021.108047.
Abstract
Recent findings indicate that 7-month-old infants perceive and represent the sounds inherent to moving human bodies. However, it is not known whether infants integrate auditory and visual information in representations of specific human actions. To address this issue, we used ERPs to investigate infants' neural sensitivity to the correspondence between sounds and images of human actions. In a cross-modal priming paradigm, 7-month-olds were presented with the sounds generated by two types of human body movement, walking and handclapping, after watching the kinematics of those actions in either a congruent or an incongruent manner. ERPs recorded from frontal, central, and parietal electrodes in response to action sounds indicate that 7-month-old infants perceptually link the visual and auditory cues of human actions. However, at this age these percepts do not seem to be integrated into cognitive multimodal representations of human actions.
12. Virtual Body Ownership Illusions for Mental Health: A Narrative Review. J Clin Med 2021; 10:139. PMID: 33401596; PMCID: PMC7796179; DOI: 10.3390/jcm10010139.
Abstract
Over the last 20 years, virtual reality (VR) has been widely used to promote mental health in populations presenting different clinical conditions. Mental health refers not only to the absence of psychiatric disorders but also to the absence of a wide range of clinical conditions that influence people's general and social well-being, such as chronic pain, neurological disorders that lead to motor or perceptual impairments, psychological disorders that alter behaviour and social cognition, and physical conditions like eating disorders or those present in amputees. It is known that accurate perception of oneself and of the surrounding environment are both key elements for mental health and well-being, and that both can be distorted in patients suffering from the clinical conditions mentioned above. In the past few years, multiple studies have shown the effectiveness of VR in modulating such perceptual distortions of oneself and of the surrounding environment through virtual body ownership illusions. This narrative review aims to survey clinical studies that have explored the manipulation of embodied virtual bodies in VR for improving mental health, and to discuss the current state of the art and the challenges for future research in the context of clinical care.
13. Sorrentino G, Franza M, Zuber C, Blanke O, Serino A, Bassolino M. How ageing shapes body and space representations: A comparison study between healthy young and older adults. Cortex 2020; 136:56-76. PMID: 33460913; DOI: 10.1016/j.cortex.2020.11.021.
Abstract
To efficiently interact with the external world, the brain needs to represent the size of the involved body parts (body representations, BR) and the space around the body in which interactions with the environment take place (peripersonal space representation, PPS). BR and PPS are both highly flexible, being updated by the continuous flow of sensorimotor signals between the brain and the body, as observed, for example, after tool use or immobilization. The progressive decline of sensorimotor abilities typically described in ageing could thus influence BR and PPS in older adults. To explore this hypothesis, we compared BR and PPS in healthy young and older participants. Focusing on the upper limb, we adapted tasks previously used to evaluate BR and PPS plasticity, i.e., the body-landmarks localization task and the audio-tactile interaction task, together with a new task targeting explicit BR (avatar adjustment task, AAT). Results show significantly greater distortions in the perceived metric characteristics of the upper limbs in older than in young participants. We found significant modifications in the implicit BR of the global shape (length and width) of both upper limbs, together with an underestimation of arm length. Similar effects were also observed in the AAT. Finally, both young and older adults showed equivalent multisensory facilitation in the space close to the hand, suggesting an intact PPS representation. Together, these findings demonstrate significant alterations of implicit and explicit BR in older participants, probably associated with a less efficient contribution of bodily information typically subject to age-related decline, whereas the comparable PPS representation in both groups could be supported by preserved multisensory abilities in older participants. These results provide novel empirical insight into how multiple representations of the body in space, subserving action and perception, are shaped by the normal course of life.
Affiliation(s)
- Giuliana Sorrentino: Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus SUVA, Sion, Switzerland
- Matteo Franza: Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus SUVA, Sion, Switzerland
- Charlène Zuber: Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus SUVA, Sion, Switzerland; Master of Science, University of Applied Sciences of Western Switzerland
- Olaf Blanke: Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus SUVA, Sion, Switzerland; Department of Neurology, University Hospital Geneva, Switzerland
- Andrea Serino: Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), Switzerland
- Michela Bassolino: Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus Biotech, Geneva, Switzerland; Center for Neuroprosthetics, School of Life Science, Swiss Federal Institute of Technology (Ecole Polytechnique Fédérale de Lausanne), Campus SUVA, Sion, Switzerland; School of Health Sciences, HES-SO Valais-Wallis, Sion, Switzerland
14
Noel JP, Failla MD, Quinde-Zlibut JM, Williams ZJ, Gerdes M, Tracy JM, Zoltowski AR, Foss-Feig JH, Nichols H, Armstrong K, Heckers SH, Blake RR, Wallace MT, Park S, Cascio CJ. Visual-Tactile Spatial Multisensory Interaction in Adults With Autism and Schizophrenia. Front Psychiatry 2020; 11:578401. [PMID: 33192716; PMCID: PMC7644602; DOI: 10.3389/fpsyt.2020.578401]
Abstract
Background: Individuals with autism spectrum disorder (ASD) and schizophrenia (SZ) exhibit multisensory processing difficulties and social impairments, with growing evidence that the former contributes to the latter. However, this work has largely reported on separate cohorts, introducing method variance as a barrier to drawing broad conclusions across studies. Further, very few studies have addressed touch, resulting in sparse knowledge about how these two clinical groups may integrate somatic information with other senses. Methods: In this study, we compared adults with ASD (n = 29), SZ (n = 24), and typical developmental histories (TD, n = 37) on two tasks requiring visual-tactile spatial multisensory processing. In the first task (crossmodal congruency), participants judged the location of a tactile stimulus in the presence or absence of simultaneous visual input that was either spatially congruent or incongruent, with poorer performance under incongruence serving as an index of spatial multisensory interaction. In the second task, participants reacted to touch in the presence or absence of dynamic visual stimuli that appeared to approach or recede from the body. Within a certain radius around the body, defined as peripersonal space (PPS), an approaching visual or auditory stimulus reliably speeds reaction times (RT) to touch; outside of this radius, in extrapersonal space (EPS), there is no multisensory effect. PPS can be defined both by its size (radius) and its slope (the sharpness of the PPS-EPS boundary). Clinical measures were administered to explore relations with visual-tactile processing. Results: Neither clinical group differed from controls on the crossmodal congruency task. The ASD group had significantly smaller and more sharply defined PPSs compared with the other two groups. Small PPS size was related to social symptom severity across groups, but this relation was largely driven by the TD group, without significant effects in either clinical group.
Conclusions: These results suggest that: (1) spatially static visual-tactile facilitation is intact in adults with ASD and SZ, (2) spatially dynamic visual-tactile facilitation impacting perception of the body boundary is affected in ASD but not SZ, and (3) body boundary perception is related to social-emotional function, but not in a way that maps on to clinical status.
Affiliation(s)
- Jean-Paul Noel: Center for Neural Science, New York University, New York, NY, United States
- Michelle D. Failla: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
- Zachary J. Williams: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Medical Scientist Training Program, Vanderbilt University School of Medicine, Nashville, TN, United States
- Madison Gerdes: School of Criminology and Justice Policy, Northeastern University, Boston, MA, United States
- Alisa R. Zoltowski: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States
- Jennifer H. Foss-Feig: Department of Psychiatry and Seaver Center for Autism Research, Mount Sinai Hospital, New York, NY, United States
- Heathman Nichols: Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Kristan Armstrong: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States
- Stephan H. Heckers: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States
- Randolph R. Blake: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States; Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Mark T. Wallace: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Department of Psychology, Vanderbilt University, Nashville, TN, United States; Vanderbilt Frist Center for Autism and Innovation, Nashville, TN, United States
- Sohee Park: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States; Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Carissa J. Cascio: Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, United States; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, United States; Vanderbilt Frist Center for Autism and Innovation, Nashville, TN, United States
15
Begum Ali J, Thomas RL, Mullen Raymond S, Bremner AJ. Sensitivity to Visual-Tactile Colocation on the Body Prior to Skilled Reaching in Early Infancy. Child Dev 2020; 92:21-34. [PMID: 32920852; DOI: 10.1111/cdev.13428]
Abstract
Two experiments examined perceptual colocation of visual and tactile stimuli in young infants. Experiment 1 compared 4- (n = 15) and 6-month-old (n = 12) infants' visual preferences for visual-tactile stimulus pairs presented across the same or different feet. The 4- and 6-month-olds showed, respectively, preferences for the colocated and noncolocated conditions, demonstrating sensitivity to visual-tactile colocation on their feet. This extends previous findings of visual-tactile perceptual colocation on the hands in older infants. Control conditions excluded the possibility that the 6-month-olds (Experiment 1) and the 4-month-olds (Experiment 2, n = 12) perceived colocation on the basis of an undifferentiated supramodal coding of the spatial distance between stimuli. Bimodal perception of visual-tactile colocation is thus available by 4 months of age, that is, prior to the development of skilled reaching.
16
Gottwald JM, Bird LA, Keenaghan S, Diamond C, Zampieri E, Tosodduk H, Bremner AJ, Cowie D. The Developing Bodily Self: How Posture Constrains Body Representation in Childhood. Child Dev 2020; 92:351-366. [PMID: 32767576; DOI: 10.1111/cdev.13425]
Abstract
Adults' body representation is constrained by multisensory information and by knowledge of the body, such as its possible postures. This study (N = 180) tested for similar constraints in children. Using the rubber hand illusion with adults and 6- to 7-year-olds, we measured proprioceptive drift (an index of hand localization) and ratings of felt hand ownership. The fake hand was either congruent or incongruent with the participant's own. Across ages, both postural congruency and visual-tactile congruency yielded greater drift toward the fake hand. Ownership ratings were higher with congruent visual-tactile information, but were unaffected by posture. Posture thus constrains body representation similarly in children and adults, suggesting that children have sensitive, robust mechanisms for maintaining a sense of bodily self.
17
Martolini C, Cappagli G, Luparia A, Signorini S, Gori M. The Impact of Vision Loss on Allocentric Spatial Coding. Front Neurosci 2020; 14:565. [PMID: 32612500; PMCID: PMC7308590; DOI: 10.3389/fnins.2020.00565]
Abstract
Several works have demonstrated that visual experience plays a critical role in the development of allocentric spatial coding. Indeed, while children with typical development start to code space by relying on allocentric landmarks from the first year of life, blind children remain anchored to an egocentric perspective until late adolescence. Nonetheless, little is known about when and how visually impaired children acquire the ability to switch from an egocentric to an allocentric frame of reference across childhood. This work aims to investigate whether visual experience is necessary to shift from bodily to external frames of reference. Children with visual impairment and normally sighted controls between 4 and 9 years of age were asked to solve a visual switching-perspective task requiring them to assume an egocentric or an allocentric perspective depending on the task condition. We hypothesized that, if visual experience is necessary for allocentric spatial coding, then visually impaired children would be impaired in switching from egocentric to allocentric perspectives. Results support this hypothesis, confirming a developmental delay in the ability to update spatial coordinates in visually impaired children. This suggests a pivotal role of vision in shaping allocentric spatial coding across development.
Affiliation(s)
- Chiara Martolini: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Giulia Cappagli: Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Antonella Luparia: Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Sabrina Signorini: Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Monica Gori: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
18
Kritikos A, Lister J, Sparks S, Sofronoff K, Bayliss A, Slaughter V. To have and to hold: embodied ownership is established in early childhood. Exp Brain Res 2020; 238:355-367. [PMID: 31925477; DOI: 10.1007/s00221-020-05726-w]
Abstract
We investigated whether embodied ownership is evident in early childhood. To do so, we gifted a drinking bottle to children (aged 24-48 months) to use for 2 weeks. They returned to perform reach-grasp-lift-replace actions with their own or the experimenter's bottle while we recorded their movements using motion capture. There were differences in motor interactions with self- vs. experimenter-owned bottles, such that children positioned self-owned bottles significantly closer to themselves than the experimenter's bottle. Age did not modulate the positioning of the self-owned bottle relative to the experimenter-owned bottle. In contrast, this pattern was not evident in children who selected one of the two bottles to keep only after the task was completed, and thus did not 'own' it during the task (Experiment 2). These results extend similar findings in adults, confirming the importance of ownership in determining self-other differences, and provide novel evidence that object ownership influences sensorimotor processes from as early as 2 years of age.
Affiliation(s)
- Ada Kritikos: School of Psychology, University of Queensland, St Lucia, 4072, Australia
- Jessica Lister: School of Psychology, University of Queensland, St Lucia, 4072, Australia
- Samuel Sparks: School of Psychology, University of Queensland, St Lucia, 4072, Australia
- Kate Sofronoff: School of Psychology, University of Queensland, St Lucia, 4072, Australia
- Andrew Bayliss: School of Psychology, University of East Anglia, Norwich, NR4 7TJ, UK
- Virginia Slaughter: School of Psychology, University of Queensland, St Lucia, 4072, Australia
19
Abstract
Cortical body size representations are distorted in the adult, from low-level motor and sensory maps to higher-level multisensory and cognitive representations. Little is known about how such representations are built and how they evolve during infancy and childhood. Here we investigated how hand size is represented in typically developing children aged 6 to 10. Participants were asked to estimate their hand size using two different sensory modalities (visual or haptic). We found a distortion (underestimation) already present in the youngest children. Crucially, this distortion increases with age, regardless of the sensory modality used to access the representation. Finally, the underestimation is specific to the body, as no bias was found for object estimation. This study suggests that the brain does not keep up with natural body growth. However, since neither motor behavior nor perception was impaired, the distortion seems to be functional and/or compensated for, allowing proper interaction with the external environment.
20
Hense M, Badde S, Köhne S, Dziobek I, Röder B. Visual and Proprioceptive Influences on Tactile Spatial Processing in Adults with Autism Spectrum Disorders. Autism Res 2019; 12:1745-1757. [PMID: 31507084; DOI: 10.1002/aur.2202]
Abstract
Children with autism spectrum disorders (ASDs) often exhibit altered representations of the external world. Consistently, when localizing touch, children with ASDs were less influenced than their peers by changes of the stimulated limb's location in external space [Wada et al., Scientific Reports 2015, 4(1), 5985]. However, given the protracted development of an external-spatial dominance in tactile processing in typically developing children, this difference might reflect a developmental delay rather than a set suppression of external space in ASDs. Here, adults with ASDs and matched control participants completed (a) the tactile temporal order judgment (TOJ) task previously used to test external-spatial representation of touch in children with ASDs and (b) a tactile-visual cross-modal congruency (CC) task which assesses benefits of task-irrelevant visual stimuli on tactile localization in external space. In both experiments, participants localized tactile stimuli to the fingers of each hand while holding their hands either crossed or uncrossed. Performance differences between hand postures reflect the influence of external-spatial codes. In both groups, tactile TOJ performance markedly decreased when participants crossed their hands, and CC effects were especially large if the visual stimulus was presented on the same side of external space as the task-relevant touch. The absence of group differences was statistically confirmed using Bayesian statistical modeling: adults with ASDs weighted external-spatial codes comparably to typically developed adults during tactile and visual-tactile spatio-temporal tasks. Thus, atypicalities in the spatial coding of touch in children with ASDs appear to reflect a developmental delay rather than a stable characteristic of ASD.
LAY SUMMARY: A touched limb's location can be described twofold, with respect to the body (right hand) or the external world (right side). Children and adolescents with autism spectrum disorder (ASD) reportedly rely less than their peers on the external world. Here, adults with and without ASDs completed two tactile localization tasks. Both groups relied to the same degree on external world locations. This opens the possibility that the tendency to relate touch to the external world is typical in individuals with ASDs but emerges with a delay.
Affiliation(s)
- Marlene Hense: Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Stephanie Badde: Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Department of Psychology, New York University, New York, New York
- Svenja Köhne: Berlin School of Mind and Brain, Department of Psychology, Humboldt University Berlin, Berlin, Germany
- Isabel Dziobek: Berlin School of Mind and Brain, Department of Psychology, Humboldt University Berlin, Berlin, Germany
- Brigitte Röder: Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
21
Serino A. Peripersonal space (PPS) as a multisensory interface between the individual and the environment, defining the space of the self. Neurosci Biobehav Rev 2019; 99:138-159. [DOI: 10.1016/j.neubiorev.2019.01.016]
22
Pugach G, Pitti A, Tolochko O, Gaussier P. Brain-Inspired Coding of Robot Body Schema Through Visuo-Motor Integration of Touched Events. Front Neurorobot 2019; 13:5. [PMID: 30899217; PMCID: PMC6416207; DOI: 10.3389/fnbot.2019.00005]
Abstract
Representing objects in space is difficult because sensorimotor events are anchored in different reference frames, which can be either eye-, arm-, or target-centered. In the brain, Gain-Field (GF) neurons in the parietal cortex are involved in computing the necessary spatial transformations for aligning the tactile, visual and proprioceptive signals. In reaching tasks, these GF neurons exploit a mechanism based on multiplicative interaction for binding simultaneously touched events from the hand with visual and proprioceptive information. By doing so, they can infer new reference frames to dynamically represent the location of the body parts in the visual space (i.e., the body schema) and of nearby targets (i.e., its peripersonal space). Along these lines, we propose a neural model based on GF neurons for integrating tactile events with arm postures and visual locations to construct hand- and target-centered receptive fields in the visual space. In robotic experiments using an artificial skin, we show how our neural architecture reproduces the behaviors of parietal neurons (1) for dynamically encoding the body schema of our robotic arm without any visual tags on it and (2) for estimating the relative orientation and distance of targets to it. We demonstrate how tactile information facilitates the integration of visual and proprioceptive signals in order to construct the body space.
Affiliation(s)
- Ganna Pugach: ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Alexandre Pitti: ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
- Olga Tolochko: Faculty of Electric Power Engineering and Automation, National Technical University of Ukraine Kyiv Polytechnic Institute, Kyiv, Ukraine
- Philippe Gaussier: ETIS Laboratory, University Paris-Seine, CNRS UMR 8051, University of Cergy-Pontoise, ENSEA, Cergy-Pontoise, France
23
Juett J, Kuipers B. Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping. Front Neurorobot 2019; 13:4. [PMID: 30853907; PMCID: PMC6396706; DOI: 10.3389/fnbot.2019.00004]
Abstract
The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by a form of intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move a specified object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, bumping an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasping reliable is more complex than for reaching, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
Affiliation(s)
- Jonathan Juett: Computer Science and Engineering, University of Michigan, Ann Arbor, MI, United States
- Benjamin Kuipers: Computer Science and Engineering, University of Michigan, Ann Arbor, MI, United States
24
Cappagli G, Finocchietti S, Cocchi E, Giammari G, Zumiani R, Cuppone AV, Baud-Bovy G, Gori M. Audio motor training improves mobility and spatial cognition in visually impaired children. Sci Rep 2019; 9:3303. [PMID: 30824830; PMCID: PMC6397231; DOI: 10.1038/s41598-019-39981-x]
Abstract
Since it has been demonstrated that spatial cognition can be affected in visually impaired children, training strategies that exploit the plasticity of the human brain should be adopted early. Here we developed and tested a new training protocol based on the reinforcement of audio-motor associations, thus supporting spatial development in visually impaired children. The study involved forty-four visually impaired children aged 6–17 years, assigned to either an experimental (ABBI training) or a control (classical training) rehabilitation condition. The experimental training group followed an intensive but entertaining rehabilitation program for twelve weeks, during which they performed ad hoc developed audio-spatial exercises with the Audio Bracelet for Blind Interaction (ABBI). A battery of spatial tests administered before and after the training indicated that these children significantly improved in almost all the spatial aspects considered, while the control group did not show any improvement. These results confirm that perceptual development in the case of blindness can be enhanced with auditory feedback naturally associated with body movements. Therefore, the early introduction of a tailored audio-motor training could potentially prevent spatial developmental delays in visually impaired children.
Affiliation(s)
- Giulia Cappagli: Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Sara Finocchietti: Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Elena Cocchi: Istituto David Chiossone per Ciechi ed ipovedenti ONLUS, Genova, Italy
- Giuseppina Giammari: Centro regionale per l'ipovisione in età evolutiva, IRCCS Scientific Institute "E. Medea", Bosisio Parini, Lecco, Italy
- Anna Vera Cuppone: Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
- Gabriel Baud-Bovy: RBCS Robotics, Brain and Cognitive Science department, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy; Vita-Salute San Raffaele University & Unit of Experimental Psychology, Division of Neuroscience, San Raffaele Scientific Institute, Milan, Italy
- Monica Gori: Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
25
Tamè L, Azañón E, Longo MR. A Conceptual Model of Tactile Processing across Body Features of Size, Shape, Side, and Spatial Location. Front Psychol 2019; 10:291. [PMID: 30863333; PMCID: PMC6399380; DOI: 10.3389/fpsyg.2019.00291]
Abstract
The processing of touch depends on multiple factors, such as the properties of the skin and the type of receptors stimulated, as well as features related to the actual configuration and shape of the body itself. A large body of research has focused on the effect that the nature of the stimuli has on tactile processing. Less research, however, has focused on features beyond the nature of the touch. In this review, we focus on some features related to the body that have been investigated for less time and in a more fragmented way. These include the symmetrical quality of the two sides of the body, the postural configuration of the body, as well as the size and shape of different body parts. We will describe what we consider three key aspects: (1) how and at which stages tactile information is integrated between different parts and sides of the body; (2) how tactile signals are integrated with online and stored postural configurations of the body, regarded as priors; and (3) how tactile signals are integrated with representations of body size and shape. We describe how these different body dimensions affect the integration of tactile information and guide motor behavior, combining them into a single model of tactile processing. We review a wide range of neuropsychological, neuroimaging, and neurophysiological data and suggest a revised model of tactile integration on the basis of the one proposed previously by Longo et al.
Affiliation(s)
- Luigi Tamè: Department of Psychological Sciences, Birkbeck University of London, London, United Kingdom; School of Psychology, University of Kent, Canterbury, United Kingdom
- Elena Azañón: Institute of Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Matthew R Longo: Department of Psychological Sciences, Birkbeck University of London, London, United Kingdom
26
My true face: Unmasking one's own face representation. Acta Psychol (Amst) 2018; 191:63-68. [PMID: 30219412; DOI: 10.1016/j.actpsy.2018.08.014]
Abstract
Face recognition has been the focus of multiple studies, but little is known about how we represent the structure of one's own face. Most studies have focused on visual and haptic face recognition, whereas the metric representation of the different features of one's own face remains relatively unexplored. We investigated the metric representation of the face in young adults by developing a proprioceptive pointing task to locate face landmarks from the first-person perspective. Our data revealed a large overestimation of width for all face features, which resembles, in part, their size in the somatosensory cortical representation. In contrast, face length was compartmentalized into two different regions, upper (underestimated) and bottom (overestimated), indicating size differences possibly due to functionality. We also identified shifts in the location judgments, with all face areas perceived closer to the body than they really were, due to a potential influence of the self-frame of reference. More importantly, the representation of the face appeared asymmetrical, with an overrepresentation of the right side of the face, due to the influence of lateralization biases in strong right-handers. We suggest that these effects may reflect functional influences and experience that affect the construction of the structural representation of the face, going beyond the parallel with the somatosensory homunculus.
27
Shen G, Weiss SM, Meltzoff AN, Marshall PJ. The somatosensory mismatch negativity as a window into body representations in infancy. Int J Psychophysiol 2018; 134:144-150. [PMID: 30385369; DOI: 10.1016/j.ijpsycho.2018.10.013]
Abstract
How the body is represented in the developing brain is a topic of growing interest. The current study takes a novel approach to investigating neural body representations in infants by recording somatosensory mismatch negativity (sMMN) responses elicited by tactile stimulation of different body locations. Recent research in adults has suggested that sMMN amplitude may be influenced by the relative distance between representations of the stimulated body parts in somatosensory cortex. The current study uses a similar paradigm to explore whether the sMMN can be elicited in infants, and to test whether the infant sMMN response is sensitive to the somatotopic organization of somatosensory cortex. Participants were healthy infants (n = 31) aged 6 and 7 months. The protocol leveraged a discontinuity in cortical somatotopic organization, whereby the representations of the neck and the face are separated by representations of the arms, the hands and the shoulder. In a double-deviant oddball protocol, stimulation of the hand (100 trials, 10% probability) and neck (100 trials, 10% probability) was interspersed among repeated stimulation of the face (800 trials, 80% probability). Waveforms showed evidence of an infant sMMN response that was significantly larger for the face/neck contrast than for the face/hand contrast. These results suggest that, for certain combinations of body parts, early pre-attentive tactile discrimination in infants may be influenced by distance between the corresponding cortical representations. The results provide the first evidence that the sMMN can be elicited in infants, and pave the way for further applications of the sMMN in studying body representations in preverbal infants.
Affiliation(s)
- Guannan Shen, Department of Psychology, Temple University, 1701 N. 13th Street, Philadelphia, PA 19122, USA
- Staci M Weiss, Department of Psychology, Temple University, 1701 N. 13th Street, Philadelphia, PA 19122, USA
- Andrew N Meltzoff, Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Peter J Marshall, Department of Psychology, Temple University, 1701 N. 13th Street, Philadelphia, PA 19122, USA
28
29
Tanaka Y, Kanakogi Y, Kawasaki M, Myowa M. The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy. Dev Cogn Neurosci 2018; 30:31-40. [PMID: 29253738 PMCID: PMC6969118 DOI: 10.1016/j.dcn.2017.12.001]
Abstract
Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate them. However, few empirical infant studies have investigated how multimodal social interaction involving physical contact facilitates multimodal integration, especially for audio-tactile (A-T) information. Using electroencephalography (EEG) and event-related potentials (ERPs), the present study investigated how the neural processing involved in A-T integration is modulated by tactile interaction. Seven- to eight-month-old infants heard one pseudoword both whilst being tickled (multimodal 'A-T' condition) and whilst not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, the differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy.
Affiliation(s)
- Yukari Tanaka, Graduate School of Education, Kyoto University, Kyoto, Japan
- Yasuhiro Kanakogi, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 2-4 Hikaridai, Seika-cho, Souraku-gun, Kyoto 619-0237, Japan; Japan Society for the Promotion of Science, Kojimachi Business Center Building, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo 102-0083, Japan
- Masahiro Kawasaki, Rhythm-based Brain Information Processing Unit, RIKEN BSI-TOYOTA Collaboration Center, Saitama, Japan; Department of Intelligent Interaction Technology, Graduate School of Systems and Information Engineering, University of Tsukuba, Ibaraki, Japan
- Masako Myowa, Graduate School of Education, Kyoto University, Kyoto, Japan

30
Cappagli G, Finocchietti S, Baud-Bovy G, Cocchi E, Gori M. Multisensory Rehabilitation Training Improves Spatial Perception in Totally but Not Partially Visually Deprived Children. Front Integr Neurosci 2017; 11:29. [PMID: 29097987 PMCID: PMC5654347 DOI: 10.3389/fnint.2017.00029]
Abstract
Since it has been shown that spatial development can be delayed in blind children, focused sensorimotor training that associates auditory and motor information might be used from an early age to prevent the risk of spatial developmental delays or impairments. With this aim, we proposed a new technological device based on the implicit link between action and perception: ABBI (Audio Bracelet for Blind Interaction), an audio bracelet that produces a sound whenever a movement occurs, allowing the visuo-motor association to be substituted with a new audio-motor association. In this study, we assessed the effects of an extensive but entertaining sensorimotor training with ABBI on the development of spatial hearing in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no light perception) or low vision (n = 5; visual acuity range 1.1-1.7 LogMAR). The training required the participants to play several spatial games, individually and/or together with the psychomotor therapist, for 1 h per week over 3 months; the spatial games consisted of exercises meant to train their ability to associate auditory and motor-related signals from their body, in order to foster the development of multisensory processes. We measured spatial performance by asking participants to indicate the position of a single fixed (static condition) or moving (dynamic condition) sound source on a vertical sensorized surface. We found that the spatial performance of congenitally blind, but not low vision, children improved after the training, indicating that early interventions with science-driven devices based on multisensory capabilities can provide consistent advances in therapeutic interventions, improving the quality of life of children with visual disability.
Affiliation(s)
- Giulia Cappagli, Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Sara Finocchietti, Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Gabriel Baud-Bovy, Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori, Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Genoa, Italy

31
Riva G. The neuroscience of body memory: From the self through the space to the others. Cortex 2017; 104:241-260. [PMID: 28826604 DOI: 10.1016/j.cortex.2017.07.013]
Abstract
Our experience of the body is not direct; rather, it is mediated by perceptual information, influenced by internal information, and recalibrated through stored implicit and explicit body representations (body memory). This paper presents an overview of current investigations of body memory, bringing together recent studies from neuropsychology, neuroscience, and evolutionary and cognitive psychology. To do so, I explore the origin of representations of the human body to elucidate their developmental process and, in particular, their relationship with more explicit concepts of self. First, it is suggested that our bodily experience is constructed from early development through the continuous integration of sensory and cultural data from six different representations of the body: the Sentient Body (Minimal Selfhood), the Spatial Body (Self Location), the Active Body (Agency), the Personal Body (Whole Body Ownership - Me), the Objectified Body (Objectified Self - Mine), and the Social Body (Body Satisfaction - Ideal Me). Then, it is suggested that these six representations can be combined into a coherent supramodal representation, the "body matrix", through predictive, multisensory processing activated by central, top-down, attentional processes. From an evolutionary perspective, the main goal of the body matrix is to allow the self to protect and extend its boundaries at both the homeostatic and psychological levels. From one perspective, the self extends its boundaries (peripersonal space) through the enactment and recognition of motor schemas. From another perspective, the body matrix, by defining the boundaries of the body, also defines where the self is present: in the body that the body matrix processes as the most likely to be its own, and in the space surrounding it.
I also introduce and discuss the concept of "embodied medicine": the use of advanced technology to alter the body matrix with the goal of improving our health and well-being.
Affiliation(s)
- Giuseppe Riva, Centro Studi e Ricerche di Psicologia della Comunicazione, Università Cattolica del Sacro Cuore, Milan, Italy; Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano, Milan, Italy

32
Azañón E, Camacho K, Morales M, Longo MR. The Sensitive Period for Tactile Remapping Does Not Include Early Infancy. Child Dev 2017; 89:1394-1404. [PMID: 28452406 DOI: 10.1111/cdev.12813]
Abstract
Visual input during development seems crucial for tactile spatial perception, given that late, but not congenitally, blind people are impaired when skin-based and external tactile representations are in conflict (when crossing the limbs). To test whether there is a sensitive period during which visual input is necessary, 14 children (age = 7.95) and a teenager (LM; age = 17.38), all deprived of early vision by cataracts and whose sight was restored during the first 5 months of life and at age 7, respectively, were tested. Tactile localization with arms crossed and uncrossed was measured. The children showed a crossing effect indistinguishable from that of a control group (Ns = 28, age = 8.24), whereas LM showed no crossing effect (Ns controls = 14, age = 20.78). This demonstrates a sensitive period which, critically, does not include early infancy.
33
Cappagli G, Finocchietti S, Cocchi E, Gori M. The Impact of Early Visual Deprivation on Spatial Hearing: A Comparison between Totally and Partially Visually Deprived Children. Front Psychol 2017; 8:467. [PMID: 28443040 PMCID: PMC5385626 DOI: 10.3389/fpsyg.2017.00467]
Abstract
The specific role of early visual deprivation in spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and of recruiting young children blind from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization were assessed in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no light perception) or low vision (n = 5; visual acuity range 1.1-1.7 LogMAR), with the main aim of understanding whether visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: first, totally blind children performed more poorly overall than sighted and low vision children in all the spatial tasks performed; second, low vision children performed as well as or better than sighted children in the same auditory spatial tasks; third, higher residual visual acuity was positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that the more residual vision, the better the spatial performance. These results suggest that early visual experience plays an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in low vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and of early intervention to prevent the risk of isolation and social exclusion.
Affiliation(s)
- Giulia Cappagli, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Sara Finocchietti, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Elena Cocchi, Istituto David Chiossone per Ciechi ed Ipovedenti, Genova, Italy
- Monica Gori, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy

34
Bremner AJ. Multisensory Development: Calibrating a Coherent Sensory Milieu in Early Life. Curr Biol 2017; 27:R305-R307. [DOI: 10.1016/j.cub.2017.02.055]
35
Fisher-Thompson D. Contributions of Look Duration and Gaze Shift Patterns to Infants' Novelty Preferences. Infancy 2017; 22:190-222. [PMID: 33158341 DOI: 10.1111/infa.12154]
Abstract
Data from 72 infants, tested using a serial paired-comparison paradigm, were analyzed to better understand infant novelty preferences. Infants between the ages of 15 and 26 weeks were tested in three studies with familiar stimuli displayed adjacent to novel stimuli on each trial. Differences in look duration, look number, and gaze shifts directed at novel versus familiar stimuli were assessed to measure their contributions to group and individual novelty preferences. Infants produced longer looks for novel stimuli in all three studies, and stimulus differences in look duration accounted for more than 50% of the variability in individual novelty preferences. Infants that produced more looks to novel rather than familiar stimuli did not produce overall novelty preferences unless they also looked longer at novel stimuli. Gaze shift patterns did not predict individual novelty preferences, and novel stimuli did not determine where infants looked. The infants' visual exploration was constrained by memories for the direction of the previous look as well as by the attention-holding features of novel stimuli.
36
Tactile localization performance in children with developmental coordination disorder (DCD) corresponds to their motor skill and not their cognitive ability. Hum Mov Sci 2017; 53:72-83. [PMID: 28109545 DOI: 10.1016/j.humov.2016.12.008]
Abstract
When localizing touches to the hands, typically developing children and adults show a "crossed hands effect" whereby identifying which hand received a tactile stimulus is less accurate when the hands are crossed than uncrossed. This demonstrates the use of an external frame of reference for locating touches to one's own body. Given that studies indicate that developmental vision plays a role in the emergence of external representations of touch, and that reliance on vision for representing the body during action is atypical in developmental coordination disorder (DCD), we investigated external spatial representations of touch in children with DCD using the "crossed hands effect". Nineteen children with DCD aged 7-11 years completed a tactile localization task in which posture (uncrossed, crossed) and view (hands seen, unseen) were varied systematically. Their performance was compared to that of 35 typically developing controls (19 of a similar age and cognitive ability, and 16 of a younger age but similar fine motor ability). Like controls, the DCD group exhibited a crossed hands effect, whilst their overall tactile localization performance was weaker than that of their peers of similar age and cognitive ability, but in line with younger controls of similar motor ability. For children with movement difficulties, these findings indicate tactile localization impairments relative to age expectations, but apparently typical use of an external reference frame for localizing touch.
37
Integration of anatomical and external response mappings explains crossing effects in tactile localization: A probabilistic modeling approach. Psychon Bull Rev 2016; 23:387-404. [PMID: 26350763 DOI: 10.3758/s13423-015-0918-0]
Abstract
To act upon a tactile stimulus, its original skin-based, anatomical spatial code has to be transformed into an external, posture-dependent reference frame, a process known as tactile remapping. When the limbs are crossed, anatomical and external location codes are in conflict, leading to a decline in tactile localization accuracy. It is unknown whether this impairment originates from the integration of the resulting external localization response with the original, anatomical one, or from a failure of tactile remapping in crossed postures. We fitted probabilistic models based on these diverging accounts to the data from three tactile localization experiments. Hand crossing disturbed tactile left-right location choices in all experiments. Furthermore, the size of these crossing effects was modulated by stimulus configuration and task instructions. The best model accounted for these results by integration of the external response mapping with the original, anatomical one, while applying identical integration weights for uncrossed and crossed postures. Thus, the model explained the data without assuming failures of remapping. Moreover, performance differences across tasks were accounted for by non-individual parameter adjustments, indicating that individual participants' task adaptation results from one common functional mechanism. These results suggest that remapping is an automatic and accurate process, and that the observed localization impairments in touch result from a cognitively controlled integration process that combines anatomically and externally coded responses.
38
Badde S, Heed T. Towards explaining spatial touch perception: Weighted integration of multiple location codes. Cogn Neuropsychol 2016; 33:26-47. [PMID: 27327353 PMCID: PMC4975087 DOI: 10.1080/02643294.2016.1168791]
Abstract
Touch is bound to the skin – that is, to the boundaries of the body. Yet the activity of neurons in primary somatosensory cortex merely mirrors the spatial distribution of the sensors across the skin. To determine the location of a tactile stimulus on the body, the body's spatial layout must be considered. Moreover, to relate touch to the external world, body posture has to be evaluated. In this review, we argue that posture is incorporated, by default, for any tactile stimulus. However, the relevance of the external location and, thus, its expression in behaviour, depends on various sensory and cognitive factors. Together, these factors imply that an external representation of touch dominates over the skin-based, anatomical one when our focus is on the world rather than on our own body. We conclude that touch localization is a reconstructive process that is adjusted to the context while maintaining all available spatial information.
Affiliation(s)
- Stephanie Badde, Department of Psychology, New York University, New York, NY, USA
- Tobias Heed, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany

39
Cappagli G, Gori M. Auditory spatial localization: Developmental delay in children with visual impairments. Res Dev Disabil 2016; 53-54:391-398. [PMID: 27002960 DOI: 10.1016/j.ridd.2016.02.019]
Abstract
For individuals with visual impairments, auditory spatial localization is one of the most important abilities for navigating the environment. Many works suggest that blind adults show similar or even enhanced performance in localizing auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in visually impaired children, and that this capacity recovers over time.
Affiliation(s)
- Giulia Cappagli, Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, via Morego 30, 16163 Genoa, Italy
- Monica Gori, Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, via Morego 30, 16163 Genoa, Italy

40
Bremner AJ. Developing body representations in early life: combining somatosensation and vision to perceive the interface between the body and the world. Dev Med Child Neurol 2016; 58 Suppl 4:12-6. [PMID: 27027602 DOI: 10.1111/dmcn.13041]
Abstract
This article lays out the computational challenges involved in constructing multisensory representations of the body and the interface between the body and the external world. It then reviews the most pertinent empirical literature on the ontogeny of such representational abilities in early life, focussing especially on the ability to make spatiotemporal links between bodily events transduced by vision and somatosensation (cutaneous touch and proprioception), and on the ability to use multisensory bodily cues to locate tactile stimuli. Findings from infants, children, and blind adults point towards a trajectory of development in early life in which infants and children, as a result of sensory experience, learn new ways of combining cues concerning the body arising from vision and somatosensation, in order to best represent the layout of their limbs and sensory events occurring on their limbs in relation to the external environment.
Affiliation(s)
- Andrew J Bremner, Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, UK

41
Cowie D, Sterling S, Bremner AJ. The development of multisensory body representation and awareness continues to 10 years of age: Evidence from the rubber hand illusion. J Exp Child Psychol 2016; 142:230-8. [DOI: 10.1016/j.jecp.2015.10.003]
42
Cappagli G, Cocchi E, Gori M. Auditory and proprioceptive spatial impairments in blind children and adults. Dev Sci 2015; 20. [PMID: 26613827 DOI: 10.1111/desc.12374]
Abstract
It is not clear what role visual information plays in the development of space perception. It has previously been shown that, in the absence of vision, both the ability to judge orientation in the haptic modality and the ability to bisect intervals in the auditory modality are severely compromised (Gori, Sandini, Martinoli & Burr, 2010; Gori, Sandini, Martinoli & Burr, 2014). Here we also report, for the first time, a strong deficit in proprioceptive reproduction and auditory distance evaluation in early blind children and adults. Interestingly, the deficit is not present in a small group of adults with acquired visual disability. Our results support the idea that, in the absence of vision, the audio and proprioceptive spatial representations may be delayed or drastically weakened due to the lack of visual calibration of the auditory and haptic modalities during the critical period of development.
Affiliation(s)
- Giulia Cappagli, Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori, Robotics, Brain and Cognitive Sciences Department, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy

43
Crollen V, Noël MP. Spatial and numerical processing in children with high and low visuospatial abilities. J Exp Child Psychol 2015; 132:84-98. [DOI: 10.1016/j.jecp.2014.12.006]
44
Sclafani V, Simpson EA, Suomi SJ, Ferrari PF. Development of space perception in relation to the maturation of the motor system in infant rhesus macaques (Macaca mulatta). Neuropsychologia 2015; 70:429-41. [PMID: 25486636 PMCID: PMC5100747 DOI: 10.1016/j.neuropsychologia.2014.12.002]
Abstract
To act on the environment, organisms must perceive object locations in relation to their body. Several neuroscientific studies provide evidence of neural circuits that selectively represent space within reach (i.e., peripersonal) and space outside of reach (i.e., extrapersonal). However, the developmental emergence of these space representations remains largely unexplored. We investigated the development of space coding in infant macaques and found that they exhibit different motor strategies and hand configurations depending on the objects' size and location. Reaching-grasping improved from 2 to 4 weeks of age, suggesting a broadly defined perceptual body schema at birth, modified by the acquisition and refinement of motor skills through early sensorimotor experience, enabling the development of a mature capacity for coding space.
Affiliation(s)
- Valentina Sclafani, Dipartimento di Neuroscienze, Università di Parma, Via Volturno 39, 43100 Parma, Italy
- Elizabeth A Simpson, Dipartimento di Neuroscienze, Università di Parma, Via Volturno 39, 43100 Parma, Italy; Eunice Kennedy Shriver National Institute of Child Health and Human Development, Laboratory of Comparative Ethology, Poolesville, MD, USA
- Stephen J Suomi, Eunice Kennedy Shriver National Institute of Child Health and Human Development, Laboratory of Comparative Ethology, Poolesville, MD, USA

45
Dynamic Tuning of Tactile Localization to Body Posture. Curr Biol 2015; 25:512-7. [DOI: 10.1016/j.cub.2014.12.038]
46
Exploiting the gain-modulation mechanism in parieto-motor neurons: Application to visuomotor transformations and embodied simulation. Neural Netw 2015; 62:102-11. [DOI: 10.1016/j.neunet.2014.08.009]
47
48
Begum Ali J, Cowie D, Bremner AJ. Effects of posture on tactile localization by 4 years of age are modulated by sight of the hands: evidence for an early acquired external spatial frame of reference for touch. Dev Sci 2014; 17:935-43. [DOI: 10.1111/desc.12184]
Affiliation(s)
- Jannath Begum Ali, Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK
- Andrew J. Bremner, Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK

49
Rigato S, Begum Ali J, van Velzen J, Bremner AJ. The neural basis of somatosensory remapping develops in human infancy. Curr Biol 2014; 24:1222-6. [PMID: 24856214 DOI: 10.1016/j.cub.2014.04.004]
Abstract
When we sense a touch, our brains take account of our current limb position to determine the location of that touch in external space [1, 2]. Here we show that changes in the way the brain processes somatosensory information in the first year of life underlie the origins of this ability [3]. In three experiments we recorded somatosensory evoked potentials (SEPs) from 6.5-, 8-, and 10-month-old infants while presenting vibrotactile stimuli to their hands across uncrossed- and crossed-hands postures. At all ages we observed SEPs over central regions contralateral to the stimulated hand. Somatosensory processing was influenced by arm posture from 8 months onward. At 8 months, posture influenced mid-latency SEP components, but by 10 months effects were observed at early components associated with feed-forward stages of somatosensory processing. Furthermore, sight of the hands was a necessary prerequisite for somatosensory remapping at 10 months. Thus, the cortical networks [4] underlying the ability to dynamically update the location of a perceived touch across limb movements become functional during the first year of life. Up until at least 6.5 months of age, it seems that human infants' perception of tactile stimuli does not yet take account of limb position: touch remains anchored to the body surface rather than remapped onto locations in the external environment.
Collapse
Affiliation(s)
- Silvia Rigato
- Department of Psychology, University of Essex, Colchester CO4 3SQ, UK; Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, UK
| | - Jannath Begum Ali
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, UK
| | - José van Velzen
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, UK
| | - Andrew J Bremner
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, UK.
| |
Collapse
|
50
|
Law J, Shaw P, Earland K, Sheldon M, Lee M. A psychology based approach for longitudinal development in cognitive robotics. Front Neurorobot 2014; 8:1. [PMID: 24478693 PMCID: PMC3902213 DOI: 10.3389/fnbot.2014.00001] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2013] [Accepted: 01/03/2014] [Indexed: 11/17/2022] Open
Abstract
A major challenge in robotics is the ability to learn, from novel experiences, new behavior that is useful for achieving new goals and skills. Autonomous systems must be able to learn solely through the environment, thus ruling out a priori task knowledge, tuning, extensive training, or other forms of pre-programming. Learning must also be cumulative and incremental, as complex skills are built on top of primitive skills. Additionally, it must be driven by intrinsic motivation because formative experience is gained through autonomous activity, even in the absence of extrinsic goals or tasks. This paper presents an approach to these issues through robotic implementations inspired by the learning behavior of human infants. We describe an approach to developmental learning and present results from a demonstration of longitudinal development on an iCub humanoid robot. The results cover the rapid emergence of staged behavior, the role of constraints in development, the effect of bootstrapping between stages, and the use of a schema memory of experiential fragments in learning new skills. The context is a longitudinal experiment in which the robot advanced from uncontrolled motor babbling to skilled hand/eye integrated reaching and basic manipulation of objects. This approach offers promise for fast and effective sensory-motor learning techniques in robotics.
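The staged, constraint-lifted developmental learning this abstract describes can be sketched as a toy loop: the learner babbles within the currently unconstrained degrees of freedom, stores each experience as a schema fragment, and lifts the next constraint once the current stage is sufficiently explored. All names, thresholds, and the scalar "outcome" below are illustrative assumptions, not the authors' iCub implementation:

```python
import random

class DevelopmentalLearner:
    """Toy sketch: staged learning in which motor constraints are lifted
    once enough experiential fragments (schemas) have accumulated."""

    def __init__(self, stages):
        self.stages = stages      # ordered (stage_name, usable_dof) pairs
        self.stage = 0            # start fully constrained
        self.schemas = []         # memory of (stage, action, outcome) fragments

    def babble(self, rng):
        """Generate a random action within the current constraint and store it."""
        name, dof = self.stages[self.stage]
        action = [rng.uniform(-1.0, 1.0) for _ in range(dof)]
        outcome = sum(a * a for a in action)  # stand-in for sensory feedback
        self.schemas.append((name, action, outcome))
        return outcome

    def maybe_advance(self, threshold):
        """Lift the next constraint once the current stage is well explored."""
        name = self.stages[self.stage][0]
        explored = sum(1 for s in self.schemas if s[0] == name)
        if explored >= threshold and self.stage < len(self.stages) - 1:
            self.stage += 1
            return True
        return False

# Usage: ten babbles carry the learner through all three hypothetical stages.
rng = random.Random(0)
learner = DevelopmentalLearner(
    [("babbling", 2), ("reaching", 4), ("manipulation", 6)])
for _ in range(10):
    learner.babble(rng)
    learner.maybe_advance(threshold=5)
```

The point of the sketch is the bootstrapping structure the abstract highlights: later stages reuse the schema memory built under earlier, more constrained exploration, rather than learning each skill from scratch.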
Collapse
Affiliation(s)
- J Law
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
| | - P Shaw
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
| | - K Earland
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
| | - M Sheldon
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
| | - M Lee
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
| |
Collapse
|