1
Okada NS, McNeely-White KL, Cleary AM, Carlaw BN, Drane DL, Parsons TD, McMahan T, Neisser J, Pedersen NP. A virtual reality paradigm with dynamic scene stimuli for use in memory research. Behav Res Methods 2024; 56:6440-6463. PMID: 37845424; PMCID: PMC11018716; DOI: 10.3758/s13428-023-02243-w.
Abstract
Episodic memory may essentially be memory for one's place within a temporally unfolding scene from a first-person perspective. Given this, pervasively used static stimuli may only capture one small part of episodic memory. A promising approach for advancing the study of episodic memory is immersing participants within varying scenes from a first-person perspective. We present a pool of distinct scene stimuli for use in virtual environments and a paradigm that is implementable across varying levels of immersion on multiple virtual reality (VR) platforms and adaptable to studying various aspects of scene and episodic memory. In our task, participants are placed within a series of virtual environments from a first-person perspective and guided through a virtual tour of scenes during a study phase and a test phase. In the test phase, some scenes share a spatial layout with studied scenes; others are completely novel. In three experiments with varying degrees of immersion, we measure scene recall, scene familiarity-detection during recall failure, the subjective experience of déjà vu, the ability to predict the next turn on a tour, the subjective sense of being able to predict the next turn on a tour, and the factors that influence memory search and the inclination to generate candidate recollective information. The level of first-person immersion mattered to multiple facets of episodic memory. The paradigm presents a useful means of advancing mechanistic understanding of how memory operates in realistic dynamic scene environments, including in combination with cognitive neuroscience methods such as functional magnetic resonance imaging and electrophysiology.
Affiliation(s)
- Noah S Okada
- Department of Neurology, Emory University, Atlanta, GA, 30322, USA
- Anne M Cleary
- Department of Psychology, Colorado State University, Fort Collins, CO, 80523, USA
- Brooke N Carlaw
- Department of Psychology, Colorado State University, Fort Collins, CO, 80523, USA
- Daniel L Drane
- Department of Neurology, Emory University, Atlanta, GA, 30322, USA
- Department of Pediatrics, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Department of Neurology, University of Washington School of Medicine, Seattle, WA, 98105, USA
- Thomas D Parsons
- Grace Center, Arizona State University, Tempe, AZ, 85281, USA
- Computational Neuropsychology & Simulation (CNS) Laboratory, Arizona State University, Tempe, AZ, 85281, USA
- Timothy McMahan
- Department of Learning Technologies, University of North Texas, Denton, TX, 76203, USA
- Joseph Neisser
- Department of Philosophy, Grinnell College, Grinnell, IA, 50112, USA
- Nigel P Pedersen
- Department of Neurology, Emory University, Atlanta, GA, 30322, USA
- Department of Neurology, University of California Davis, Sacramento, CA, 95816, USA
2
Szubielska M, Szewczyk M, Augustynowicz P, Kędziora W, Möhring W. Adults' spatial scaling of tactile maps: Insights from studying sighted, early and late blind individuals. PLoS One 2024; 19:e0304008. PMID: 38814897; PMCID: PMC11139347; DOI: 10.1371/journal.pone.0304008.
Abstract
The current study investigated spatial scaling of tactile maps among blind adults and blindfolded sighted controls. We were specifically interested in identifying spatial scaling strategies as well as effects of different scaling directions (up versus down) on participants' performance. To this aim, we asked late blind participants (with visual memory, Experiment 1) and early blind participants (without visual memory, Experiment 2), as well as sighted blindfolded controls, to encode a map including a target and to place a response disc at the same spot on an empty, constant-sized referent space. Maps had five different sizes, resulting in five scaling factors (1:3, 1:2, 1:1, 2:1, 3:1), allowing us to investigate both scaling directions (up and down) in a single, comprehensive design. Accuracy and speed of learning the target location, as well as of responding, served as dependent variables. We hypothesized that participants who can use visual mental representations (i.e., late blind and blindfolded sighted participants) may adopt mental transformation scaling strategies. However, our results did not support this hypothesis. At the same time, we predicted the usage of relative distance scaling strategies in early blind participants, which was supported by our findings. Moreover, our results suggested that tactile maps can be scaled as accurately, and even faster, by blind participants as by sighted participants. Furthermore, participants in all groups, irrespective of visual status, gravitated their responses towards the center of the space. Overall, it seems that a lack of visual imagery does not impair early blind adults' spatial scaling ability but causes them to use a different strategy than sighted and late blind individuals.
Affiliation(s)
- Magdalena Szubielska
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Marta Szewczyk
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Paweł Augustynowicz
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Wenke Möhring
- Faculty of Psychology, University of Basel, Basel, Switzerland
- Department of Educational and Health Psychology, University of Education Schwäbisch Gmünd, Germany
3
Szubielska M, Kędziora W, Augustynowicz P, Picard D. Drawing as a tool for investigating the nature of imagery representations of blind people: The case of the canonical size phenomenon. Mem Cognit 2023. PMID: 37985536; DOI: 10.3758/s13421-023-01491-7.
Abstract
Several studies have shown that blind people, including those with congenital blindness, can use raised-line drawings, both for "reading" tactile graphics and for drawing unassisted. However, research on drawings produced by blind people has mainly been qualitative. The current experimental study was designed to investigate the under-researched issue of the size of drawings created by people with blindness. Participants (N = 59) varied in their visual status. Adventitiously blind people had previous visual experience and might use visual representations (e.g., when visualising objects in imagery/working memory). Congenitally blind people did not have any visual experience. The participant's task was to draw from memory common objects that vary in size in the real world. The findings revealed that both groups of participants produced larger drawings of objects that have larger actual sizes. This means that the size of familiar objects is a property of blind people's mental representations, regardless of their visual status. Our research also sheds light on the nature of the phenomenon of canonical size. Since we have found the canonical size effect in a group of people who are blind from birth, the assumption of the visual nature of this phenomenon - caused by the ocular-centric biases present in studies on drawing performance - should be revised.
Affiliation(s)
- Magdalena Szubielska
- Institute of Psychology, The John Paul II Catholic University of Lublin, Al. Racławickie 14, 20-950, Lublin, Poland
- Paweł Augustynowicz
- Institute of Psychology, The John Paul II Catholic University of Lublin, Al. Racławickie 14, 20-950, Lublin, Poland
4
Martolini C, Amadeo MB, Campus C, Cappagli G, Gori M. Effects of audio-motor training on spatial representations in long-term late blindness. Neuropsychologia 2022; 176:108391. DOI: 10.1016/j.neuropsychologia.2022.108391.
5
Leo F, Gori M, Sciutti A. Early blindness modulates haptic object recognition. Front Hum Neurosci 2022; 16:941593. PMID: 36158621; PMCID: PMC9498977; DOI: 10.3389/fnhum.2022.941593.
Abstract
Haptic object recognition is usually an efficient process, although slower and less accurate than its visual counterpart. The early loss of vision imposes a greater reliance on haptic perception for recognition compared to the sighted. We might therefore expect congenitally blind persons to recognize objects through touch more quickly and accurately than late blind or sighted people. However, the literature has provided mixed results. Furthermore, most studies on haptic object recognition have focused on performance, devoting little attention to the exploration procedures that led to that performance. In this study, we used iCube, an instrumented cube that records its orientation in space as well as the location of the points of contact on its faces. Three groups of congenitally blind, late blind, and age- and gender-matched blindfolded sighted participants were asked to explore the cube faces, on which small pins were positioned in varying numbers. Participants were required to explore the cube twice, reporting whether the cube was the same or differed in pin arrangement. Results showed that recognition accuracy was not modulated by the level of visual ability. However, congenitally blind participants touched more cells simultaneously while exploring the faces, and changed the pattern of touched cells from one recording sample to the next more often, than late blind and sighted participants. Furthermore, the number of simultaneously touched cells negatively correlated with exploration duration. These findings indicate that early blindness shapes the haptic exploration of objects that can be held in the hands.
Affiliation(s)
- Fabrizio Leo
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
- Correspondence: Fabrizio Leo
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
6
Shen G, Wang R, Yang M, Xie J. Chinese Children with Congenital and Acquired Blindness Represent Concrete Concepts in Vertical Space through Tactile Perception. Int J Environ Res Public Health 2022; 19:11055. PMID: 36078767; PMCID: PMC9518128; DOI: 10.3390/ijerph191711055.
Abstract
Many studies have tested perceptual symbols in conceptual processing and found that perceptual symbols contain experiences from multisensory channels. However, whether the disability of one sensory channel affects the processing of perceptual symbols, and thus conceptual processing, is still unknown. This line of research would extend perceptual symbol theory and has implications for language rehabilitation and mental health in people with disabilities. Therefore, the present study filled this gap and tested whether Chinese children with congenital and acquired blindness have difficulty recruiting perceptual symbols in the processing of concrete concepts. Experiment 1 used a word-pair-matching paradigm to test whether blind children used vertical spatial information in understanding concrete word pairs. Experiment 2 used a word-card-pairing paradigm to test the role of tactile experience in the processing of concrete concepts by blind children. Results showed that blind children automatically activated the spatial information of referents in the processing of concepts through the tactile sensory channel even when the visual sensory channel was disabled. This finding supports the compensatory role of other sensory channels in conceptual representation. In addition, the difference between blind children in elementary school and those in middle school in judging the spatial position of concrete words also indicated the vital influence of perceptual experience on perceptual symbols in conceptual representation. Interestingly, there were no significant differences between children with congenital and acquired blindness, which may suggest that compensation by other sensory channels is not restricted to a sensitive period. This study not only provides new evidence for perceptual symbol theory but also shows that perceptual symbols can be developed through a compensatory mechanism. This compensatory mechanism could inform rehabilitation programs for improving language learning in blind children. Improved language ability in blind children may in turn alleviate mental health problems caused by difficulties in social interaction (e.g., social anxiety).
Affiliation(s)
- Guangyin Shen
- Shenzhen Yuanping Special Education School, Shenzhen 518112, China
- Ruiming Wang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Mengru Yang
- School of Psychology, Nanjing Normal University, Nanjing 210097, China
- Jiushu Xie
- School of Psychology, Nanjing Normal University, Nanjing 210097, China
7
Ottink L, Buimer H, van Raalte B, Doeller CF, van der Geest TM, van Wezel RJA. Cognitive map formation supported by auditory, haptic, and multimodal information in persons with blindness. Neurosci Biobehav Rev 2022; 140:104797. PMID: 35902045; DOI: 10.1016/j.neubiorev.2022.104797.
Abstract
For efficient navigation, the brain needs to adequately represent the environment in a cognitive map. In this review, we give an overview of the literature on cognitive map formation based on non-visual modalities in persons with blindness (PWBs) and sighted persons. The review focuses on the auditory and haptic modalities, including research that combines multiple modalities and real-world navigation. Furthermore, we address the implications of route and survey representations. Taken together, PWBs as well as sighted persons can build up cognitive maps based on non-visual modalities, although accuracy sometimes differs between the two groups. We provide some speculations on how to deploy information from different modalities to support cognitive map formation. Furthermore, both PWBs and sighted persons seem able to construct route as well as survey representations. PWBs can experience difficulties building up a survey representation, but this is not always the case, and research suggests that they can acquire this ability with sufficient spatial information or training. We discuss possible explanations of these inconsistencies.
Affiliation(s)
- Loes Ottink
- Donders Institute, Radboud University, Nijmegen, the Netherlands
- Hendrik Buimer
- Donders Institute, Radboud University, Nijmegen, the Netherlands
- Bram van Raalte
- Donders Institute, Radboud University, Nijmegen, the Netherlands
- Christian F Doeller
- Psychology Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
- Thea M van der Geest
- Lectorate Media Design, HAN University of Applied Sciences, Arnhem, the Netherlands
- Richard J A van Wezel
- Donders Institute, Radboud University, Nijmegen, the Netherlands; Techmed Centre, Biomedical Signals and Systems, University of Twente, Enschede, the Netherlands
8
Setti W, Cuturi LF, Cocchi E, Gori M. Spatial Memory and Blindness: The Role of Visual Loss on the Exploration and Memorization of Spatialized Sounds. Front Psychol 2022; 13:784188. PMID: 35686077; PMCID: PMC9171105; DOI: 10.3389/fpsyg.2022.784188.
Abstract
Spatial memory relies on the encoding, storage, and retrieval of knowledge about objects' positions in the surrounding environment. Blind people have to rely on sensory modalities other than vision to memorize items that are spatially displaced; however, to date, very little is known about the influence of early visual deprivation on a person's ability to remember and process sound locations. To fill this gap, we tested sighted and congenitally blind adults and adolescents in an audio-spatial memory task inspired by the classic card game "Memory." In this task, subjects (blind, n = 12; sighted, n = 12) had to find pairs among sounds (i.e., animal calls) presented on an audio-tactile device composed of loudspeakers covered by tactile sensors. To accomplish this, participants had to remember the positions of the spatialized sounds and develop a proper mental spatial representation of their locations. The test was divided into two experimental conditions of increasing difficulty depending on the number of sounds to be remembered (8 vs. 24). Results showed that sighted participants outperformed blind participants in both conditions. The findings are discussed in light of the crucial role of visual experience in properly manipulating auditory spatial representations, particularly in relation to the ability to explore complex acoustic configurations.
Affiliation(s)
- Walter Setti
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Correspondence: Walter Setti
- Luigi F. Cuturi
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
9
Blindfolded adults use mental transformation strategies for spatial scaling of tactile maps. Sci Rep 2022; 12:6275. PMID: 35428813; PMCID: PMC9012851; DOI: 10.1038/s41598-022-10401-x.
Abstract
The current study tested strategies of spatial scaling in the haptic domain. Blindfolded adults (N = 31, aged 20–24 years) were presented with an embossed graphic including a target and asked to encode the target location on this map, imagine the map at a given scale, and localize a target at the same spot on an empty referent space. Maps varied in three different sizes whereas the referent space had a constant size, resulting in three different scaling factors (1:1, 1:2, 1:4). Participants' response times and localization errors were measured. Analyses indicated that both response times and errors increased with higher scaling factors, suggesting the usage of mental transformation strategies for spatial scaling. Overall, the present study provides a suitable, novel methodology for assessing spatial scaling in the haptic domain.
10
Sacco K, Ronga I, Perna P, Cicerale A, Del Fante E, Sarasso P, Geminiani GC. A Virtual Navigation Training Promotes the Remapping of Space in Allocentric Coordinates: Evidence From Behavioral and Neuroimaging Data. Front Hum Neurosci 2022; 16:693968. PMID: 35479185; PMCID: PMC9037151; DOI: 10.3389/fnhum.2022.693968.
Abstract
Allocentric space representations have been demonstrated to be crucial for improving visuo-spatial skills, which are pivotal in everyday activities and for the development and maintenance of other cognitive abilities, such as memory and reasoning. Here, we present a series of three experiments: Experiment 1, discovery sample (23 young male participants); Experiment 2, neuroimaging and replication sample (23 young male participants); and Experiment 3 (14 young male participants). In these experiments, we investigated whether virtual navigation stimulates the ability to form allocentric spatial representations. To this aim, we used a novel 3D videogame (MindTheCity!) focused on the navigation of a virtual town. We verified whether playing MindTheCity! enhanced performance on spatial representation tasks (pointing to a specific location in space) and on a spatial memory test (asking participants to remember the locations of specific objects). Furthermore, to uncover the neural mechanisms underlying the observed effects, we performed a preliminary fMRI investigation before and after training with MindTheCity!. Results show that our virtual training enhances the ability to form allocentric representations and spatial memory (Experiment 1). Experiments 2 and 3 confirmed the behavioral results of Experiment 1. Furthermore, our preliminary neuroimaging and behavioral results suggest that the training activates brain circuits involved in higher-order mechanisms of information encoding, triggering broader cognitive processes and reducing the working load on memory circuits (Experiments 2 and 3).
11
Job XE, Kirsch LP, Auvray M. Spatial perspective-taking: insights from sensory impairments. Exp Brain Res 2022; 240:27-37. PMID: 34716457; PMCID: PMC8803716; DOI: 10.1007/s00221-021-06221-6.
Abstract
Information can be perceived from a multiplicity of spatial perspectives, which is central to effectively understanding and interacting with our environment and other people. Sensory impairments such as blindness are known to impact spatial representations and perspective-taking is often thought of as a visual process. However, disturbed functioning of other sensory systems (e.g., vestibular, proprioceptive and auditory) can also influence spatial perspective-taking. These lines of research remain largely separate, yet together they may shed new light on the role that each sensory modality plays in this core cognitive ability. The findings to date reveal that spatial cognitive processes may be differently affected by various types of sensory loss. The visual system may be crucial for the development of efficient allocentric (object-to-object) representation; however, the role of vision in adopting another's spatial perspective remains unclear. On the other hand, the vestibular and the proprioceptive systems likely play an important role in anchoring the perceived self to the physical body, thus facilitating imagined self-rotations required to adopt another's spatial perspective. Findings regarding the influence of disturbed auditory functioning on perspective-taking are so far inconclusive and thus await further data. This review highlights that spatial perspective-taking is a highly plastic cognitive ability, as the brain is often able to compensate in the face of different sensory loss.
Affiliation(s)
- Xavier E Job
- Department of Neuroscience, Karolinska Institutet, Solnavägen 9, 17165, Stockholm, Sweden
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
- Louise P Kirsch
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
- Integrative Neuroscience and Cognition Center (INCC), Université de Paris, Paris, France
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
12
Ruggiero G, Ruotolo F, Iachini T. How ageing and blindness affect egocentric and allocentric spatial memory. Q J Exp Psychol (Hove) 2021; 75:1628-1642. PMID: 34670454; DOI: 10.1177/17470218211056772.
Abstract
Egocentric (subject-to-object) and allocentric (object-to-object) spatial reference frames are fundamental for representing the position of objects or places around us. The literature on spatial cognition in blind people has shown that lack of vision may limit the ability to represent spatial information in an allocentric rather than egocentric way. Furthermore, much research with sighted individuals has reported that ageing has a negative impact on spatial memory. However, as far as we know, no study has assessed how ageing may affect the processing of spatial reference frames in individuals with different degrees of visual experience. To fill this gap, here we report data from a cross-sectional study in which a large sample of young and elderly participants (160 participants in total) who were congenitally blind (long-term visual deprivation), adventitiously blind (late onset of blindness), blindfolded sighted (short-term visual deprivation) and sighted (full visual availability) performed a spatial memory task that required egocentric/allocentric distance judgements with regard to memorised stimuli. The results showed that egocentric judgements were better than allocentric ones and, above all, that the ability to process allocentric information was influenced by both age and visual status. Specifically, the allocentric judgements of congenitally blind elderly participants were worse than those of all other groups. These findings suggest that ageing and congenital blindness can contribute to the worsening of the ability to represent spatial relationships between external, non-body-centred anchor points.
Affiliation(s)
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli," Caserta, Italy
- Francesco Ruotolo
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli," Caserta, Italy
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli," Caserta, Italy
13
Bollini A, Campus C, Gori M. The development of allocentric spatial frame in the auditory system. J Exp Child Psychol 2021; 211:105228. PMID: 34242896; DOI: 10.1016/j.jecp.2021.105228.
Abstract
The ability to encode space is a crucial aspect of interacting with the external world, and it appears to be fundamental for the correct development of the capacity to integrate different spatial reference frames. Spatial reference frames seem to be present in all sensory modalities. However, it has been demonstrated that different sensory modalities follow different developmental courses. Nevertheless, to date these courses have been investigated only in people with sensory impairments, where there is a possible bias due to compensatory strategies and it is complicated to assess the exact age at which these skills emerge. For these reasons, we investigated the development of the allocentric frame in the auditory domain in a group of typically developing children aged 6-10 years. To do so, we used an auditory Simon task, a paradigm that involves implicit spatial processing, and asked children to perform the task in both uncrossed and crossed hands postures. We demonstrated that the crossed hands posture affected performance only in younger children (6-7 years), whereas 10-year-olds performed as adults did and were not affected by the posture. Moreover, we found that performance on this task correlated with age and with developmental differences in spatial abilities. Our results support the hypothesis that the developmental course of auditory spatial cognition is similar to that of the visual modality, as reported in the literature.
Affiliation(s)
- Alice Bollini
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Claudio Campus
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Monica Gori
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16163 Genova, Italy
14
Heimler B, Behor T, Dehaene S, Izard V, Amedi A. Core knowledge of geometry can develop independently of visual experience. Cognition 2021; 212:104716. PMID: 33895652; DOI: 10.1016/j.cognition.2021.104716.
Abstract
Geometrical intuitions spontaneously drive visuo-spatial reasoning in human adults, children and animals. Is their emergence intrinsically linked to visual experience, or does it reflect a core property of cognition shared across sensory modalities? To address this question, we tested the sensitivity of blind-from-birth adults to geometrical invariants using a haptic deviant-figure detection task. Blind participants spontaneously used many geometric concepts such as parallelism, right angles and geometrical shapes to detect intruders in haptic displays, but experienced difficulties with symmetry and complex spatial transformations. Across items, their performance was highly correlated with that of sighted adults performing the same task in touch (blindfolded) and in vision, as well as with the performance of uneducated preschoolers and Amazonian adults. Our results support the existence of an amodal core system of geometry that arises independently of visual experience. However, performance at selecting geometric intruders was generally higher in the visual than in the haptic modality, suggesting that sensory-specific spatial experience may play a role in refining the properties of this core system of geometry.
Affiliation(s)
- Benedetta Heimler
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Tel Hashomer, Israel.
- Tomer Behor
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France
- Véronique Izard
- Integrative Neuroscience and Cognition Center, Université de Paris, 45 rue des Saints-Pères, 75006 Paris, France; CNRS UMR 8002, 45 rue des Saints-Pères, 75006 Paris, France
- Amir Amedi
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
15
Blindness and the Reliability of Downwards Sensors to Avoid Obstacles: A Study with the EyeCane. Sensors 2021; 21:s21082700. [PMID: 33921202] [PMCID: PMC8070041] [DOI: 10.3390/s21082700]
Abstract
Vision loss has dramatic repercussions on the quality of life of affected people, particularly with respect to their orientation and mobility. Many devices are available to help blind people navigate their environment. The EyeCane is a recently developed electronic travel aid (ETA) that is inexpensive and easy to use, allowing for the detection of obstacles lying ahead within a 2 m range. The goal of this study was to investigate the potential of the EyeCane as a primary aid for spatial navigation. Three groups of participants were recruited: early blind, late blind, and sighted. They were first trained with the EyeCane and then tested in a life-size obstacle course with four obstacle types: cube, door, post, and step. Subjects were asked to cross the corridor while detecting, identifying, and avoiding the obstacles. Each participant performed 12 runs with 12 different obstacle configurations. All participants were able to learn quickly to use the EyeCane and successfully complete all trials. Amongst the various obstacles, the step proved the hardest to detect and resulted in more collisions. Although the EyeCane was effective for detecting obstacles lying ahead, its downward sensor did not reliably detect those on the ground, rendering downward obstacles more hazardous for navigation.
16
Martolini C, Cappagli G, Signorini S, Gori M. Effects of Increasing Stimulated Area in Spatiotemporally Congruent Unisensory and Multisensory Conditions. Brain Sci 2021; 11:brainsci11030343. [PMID: 33803142] [PMCID: PMC7999573] [DOI: 10.3390/brainsci11030343]
Abstract
Research has shown that the ability to integrate complementary sensory inputs into a unique and coherent percept based on spatiotemporal coincidence, namely multisensory integration, can improve perceptual precision. Despite the extensive research on multisensory integration, very little is known about the principal mechanisms responsible for the spatial interaction of multiple sensory stimuli. Furthermore, it is not clear whether the size of the stimulated area can affect unisensory and multisensory perception. The present study aims to unravel whether increasing the stimulated area has a detrimental or beneficial effect on sensory thresholds. Sixteen typical adults were asked to discriminate unimodal (visual, auditory, tactile), bimodal (audio-visual, audio-tactile, visuo-tactile) and trimodal (audio-visual-tactile) stimulation produced by one, two, three or four devices positioned on the forearm. Results for the unisensory conditions indicate that increasing the stimulated area has a detrimental effect on auditory and tactile accuracy and on visual reaction times, suggesting that the size of the stimulated area affects these percepts. Concerning multisensory stimulation, our findings indicate that integrating auditory and tactile information improves sensory precision only when the stimulation area is augmented to four devices, suggesting that multisensory interaction occurs for expanded spatial areas.
Affiliation(s)
- Chiara Martolini
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, via Enrico Melen 83, 16152 Genoa, Italy
- Giulia Cappagli
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, via Enrico Melen 83, 16152 Genoa, Italy
- Sabrina Signorini
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, via Mondino 2, 27100 Pavia, Italy
- Monica Gori
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, via Enrico Melen 83, 16152 Genoa, Italy
17
Chebat DR, Schneider FC, Ptito M. Spatial Competence and Brain Plasticity in Congenital Blindness via Sensory Substitution Devices. Front Neurosci 2020; 14:815. [PMID: 32848575] [PMCID: PMC7406645] [DOI: 10.3389/fnins.2020.00815]
Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment. However, this spatial competence takes longer to achieve but is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. The comparison of performance between CB and sighted people using several different sensory substitution devices in perceptual and sensory-motor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information, even in adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their accessibility to both real and virtual environments.
Affiliation(s)
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel, Israel
- Fabien C. Schneider
- Department of Radiology, University of Lyon, Saint-Etienne, France
- Neuroradiology Unit, University Hospital of Saint-Etienne, Saint-Etienne, France
- Maurice Ptito
- BRAIN Lab, Department of Neuroscience and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Chaire de Recherche Harland Sanders en Sciences de la Vision, École d’Optométrie, Université de Montréal, Montréal, QC, Canada
18
Jicol C, Lloyd-Esenkaya T, Proulx MJ, Lange-Smith S, Scheller M, O'Neill E, Petrini K. Efficiency of Sensory Substitution Devices Alone and in Combination With Self-Motion for Spatial Navigation in Sighted and Visually Impaired. Front Psychol 2020; 11:1443. [PMID: 32754082] [PMCID: PMC7381305] [DOI: 10.3389/fpsyg.2020.01443]
Abstract
Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments and visual impairments), sensory substitution devices (SSDs) such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision when integrated together or with self-motion. In Experiment 1, we compared and assessed The vOICe and BrainPort together in an aerial-maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both experiments, 3D motion-tracking data were used to determine the level of precision with which participants performed two different tasks (an egocentric and an allocentric task) under three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the devices together. In Experiment 2, sighted performance with The vOICe was almost as good as that with self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared to either in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe with increased trials, only the VI group showed an increased level of accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to helping blind and VI persons, especially given their difficulty in attaining allocentric information.
Affiliation(s)
- Crescent Jicol
- Department of Psychology, University of Bath, Bath, United Kingdom
- Michael J Proulx
- Department of Psychology, University of Bath, Bath, United Kingdom
- Simon Lange-Smith
- School of Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
- Meike Scheller
- Department of Psychology, University of Bath, Bath, United Kingdom
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, United Kingdom
- Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom
19
Martolini C, Cappagli G, Luparia A, Signorini S, Gori M. The Impact of Vision Loss on Allocentric Spatial Coding. Front Neurosci 2020; 14:565. [PMID: 32612500] [PMCID: PMC7308590] [DOI: 10.3389/fnins.2020.00565]
Abstract
Several works have demonstrated that visual experience plays a critical role in the development of allocentric spatial coding. Indeed, while children with a typical development start to code space by relying on allocentric landmarks from the first year of life, blind children remain anchored to an egocentric perspective until late adolescence. Nonetheless, little is known about when and how visually impaired children acquire the ability to switch from an egocentric to an allocentric frame of reference across childhood. This work aims to investigate whether visual experience is necessary to shift from bodily to external frames of reference. Children with visual impairment and normally sighted controls between 4 and 9 years of age were asked to solve a visual switching-perspective task requiring them to assume an egocentric or an allocentric perspective depending on the task condition. We hypothesized that, if visual experience is necessary for allocentric spatial coding, then visually impaired children would be impaired in switching from an egocentric to an allocentric perspective. Results support this hypothesis, confirming a developmental delay in the ability to update spatial coordinates in visually impaired children. This suggests a pivotal role of vision in shaping allocentric spatial coding across development.
Affiliation(s)
- Chiara Martolini
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Giulia Cappagli
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Antonella Luparia
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Sabrina Signorini
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
20
Auvray M. Multisensory and spatial processes in sensory substitution. Restor Neurol Neurosci 2019; 37:609-619. [DOI: 10.3233/rnn-190950]
Affiliation(s)
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Sorbonne Université, Paris, France
21
Szubielska M, Möhring W. Adults' spatial scaling: evidence from the haptic domain. Cogn Process 2019; 20:431-440. [PMID: 31054026] [PMCID: PMC6841643] [DOI: 10.1007/s10339-019-00920-3]
Abstract
The current study investigated adults' spatial-scaling abilities using a haptic localization task. As a first aim, we examined the strategies used to solve this haptic task. Secondly, we explored whether irrelevant visual information influenced adults' spatial-scaling performance. Thirty-two adults were asked to locate targets as presented in maps on a larger or same-sized referent space. Maps varied in size in accordance with different scaling factors (1:4, 1:2, 1:1), whereas the referent space was constant in size throughout the experimental session. The availability of irrelevant, non-informative vision was manipulated by blindfolding half of the participants prior to the experiment (condition without non-informative vision), whereas the other half were able to see their surroundings with the stimuli being hidden behind a curtain (condition with non-informative vision). Analyses with absolute errors (after correcting for reversal errors) as the dependent variable revealed a significant interaction of the scaling factor and non-informative vision condition. Adults in the blindfolded condition showed constant errors and response times irrespective of scaling factor. Such a response pattern indicates the usage of relative strategies. Adults in the curtain condition showed a linear increase in errors with higher scaling factors, whereas their response times remained constant. This pattern of results supports the usage of absolute strategies or mental transformation strategies. Overall, our results indicate different scaling strategies depending on the availability of non-informative vision, highlighting the strong influence of (even irrelevant) vision on adults' haptic processing.
Affiliation(s)
- Magdalena Szubielska
- Institute of Psychology, The John Paul II Catholic University of Lublin, Al. Racławickie 14, 20-950, Lublin, Poland.
- Wenke Möhring
- Faculty of Psychology, University of Basel, Missionsstrasse 60/62, 4055, Basel, Switzerland
22
Richardson M, Thar J, Alvarez J, Borchers J, Ward J, Hamilton-Fletcher G. How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches. Perception 2019; 48:1079-1103. [PMID: 31547778] [DOI: 10.1177/0301006619873194]
Abstract
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
Affiliation(s)
- Jan Thar
- Media Computing Group, RWTH Aachen University, Germany
- James Alvarez
- Department of Psychology, University of Sussex, Brighton, UK
- Jan Borchers
- Media Computing Group, RWTH Aachen University, Germany
- Jamie Ward
- Department of Psychology, University of Sussex, Brighton, UK; Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- Giles Hamilton-Fletcher
- Department of Psychology, University of Sussex, Brighton, UK; Neuroimaging and Visual Science Laboratory, New York University Langone Health, NY, USA
23
Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019; 9:7892. [PMID: 31133688] [PMCID: PMC6536515] [DOI: 10.1038/s41598-019-44267-3]
Abstract
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (pointing to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
24
Enhanced audio-tactile multisensory interaction in a peripersonal task after echolocation. Exp Brain Res 2019; 237:855-864. [PMID: 30617745] [PMCID: PMC6394550] [DOI: 10.1007/s00221-019-05469-3]
Abstract
Peripersonal space (PPS) is created by multisensory interaction between different sensory modalities and can be modified by experience. In this article, we investigated whether auditory training inside the peripersonal space area can modify the PPS around the head in sighted participants. The auditory training was based on echolocation. We measured participants' reaction times to a tactile stimulation on the neck while task-irrelevant looming auditory stimuli were presented. Sounds more strongly affect tactile processing when located within a limited distance from the body. We measured spatially dependent audio-tactile interaction as a proxy of PPS representation before and after an echolocation training. We found a significant speeding effect on tactile RTs after echolocation, specifically when sounds were around the location where the echolocation task was performed. This effect could not be attributed either to a task-repetition effect or to a shift of spatial attention, as no changes in PPS were found in two control groups of participants, who performed the PPS task after either a break or a temporal auditory task (with stimuli located at the same position as in the echolocation task). These findings show that echolocation affects multisensory processing inside the PPS representation, likely to better represent the space where external stimuli have to be localized.
25
Setti W, Cuturi LF, Cocchi E, Gori M. A novel paradigm to study spatial memory skills in blind individuals through the auditory modality. Sci Rep 2018; 8:13393. [PMID: 30190584] [PMCID: PMC6127324] [DOI: 10.1038/s41598-018-31588-y]
Abstract
Spatial memory is a multimodal representation of the environment, which can be mediated by different sensory signals. Here we investigate how the auditory modality influences memorization, contributing to the mental representation of a scene. We designed an audio test for blind individuals inspired by a validated spatial memory test, the Corsi Block test. The test was carried out in two different conditions, with non-semantic and semantic stimuli, presented in different sessions on an audio-tactile device. Furthermore, the semantic sounds were spatially arranged in order to reproduce an audio scene, explored by participants during the test. We thus verified whether semantic rather than non-semantic sounds are better recalled and whether exposure to an auditory scene can enhance memorization skills. Our results show that sighted subjects performed better than blind participants after the exploration of the semantic scene. This suggests that blind participants focus on the perceived sound positions and do not use the items' locations learned during the exploration. We discuss these results in terms of the role of visual experience in spatial memorization skills and the ability to take advantage of semantic information stored in memory.
Affiliation(s)
- Walter Setti
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Robotics, Brain and Cognitive Science (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy; DIBRIS Department, University of Genoa, Genoa, Italy
- Luigi F Cuturi
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy.
26
Hu Q, Yang Y, Huang Z, Shao Y. Children and Adults Prefer the Egocentric Representation to the Allocentric Representation. Front Psychol 2018; 9:1522. [PMID: 30174639] [PMCID: PMC6107712] [DOI: 10.3389/fpsyg.2018.01522]
Abstract
We studied the strategy preference for using the egocentric or the allocentric representation in individuals who have acquired the ability to use both representations. Fifty-seven children aged 5–7 years and 53 adults retrieved toys hidden in one of four identical containers in a square room. We varied the type of spatial representation available across four conditions: (1) only self-motion information (egocentric representation); (2) only external landmark cues (allocentric representation); (3) both self-motion and landmark cues (dual representation); (4) self-motion and landmark cues in conflict (conflict trial). We found that, compared with the allocentric representation, the egocentric representation approached maturity earlier in development and was exploited better in the early years. More importantly, in the conflict trials, while both children and adults relied more on the egocentric representation, a small proportion of participants still chose the allocentric representation, especially in the adult group. These results provide evidence that the egocentric representation is generally preferred by both young children and adults.
Affiliation(s)
- Qingfen Hu
- Institute of Developmental Psychology, Beijing Normal University, Beijing, China
- Ying Yang
- Institute of Developmental Psychology, Beijing Normal University, Beijing, China
- Zhenzhen Huang
- Institute of Developmental Psychology, Beijing Normal University, Beijing, China
- Yi Shao
- Department of Psychology, Oklahoma City University, Oklahoma City, OK, United States
27
Renault AG, Auvray M, Parseihian G, Miall RC, Cole J, Sarlegna FR. Does Proprioception Influence Human Spatial Cognition? A Study on Individuals With Massive Deafferentation. Front Psychol 2018; 9:1322. [PMID: 30131736] [PMCID: PMC6090482] [DOI: 10.3389/fpsyg.2018.01322]
Abstract
When navigating in a spatial environment or when hearing its description, we can develop a mental model, which may be represented in the central nervous system in different coordinate systems such as an egocentric or allocentric reference frame. The way in which sensory experience influences the preferred reference frame has been studied with particular interest in the role of vision. The present study investigated the influence of proprioception on human spatial cognition. To do so, we compared the abilities to form spatial models of two rare participants chronically deprived of proprioception (GL and IW) and healthy control participants. Participants listened to verbal descriptions of a spatial environment, and their ability to form and use a mental model was assessed with a distance-comparison task and a free-recall task. Given that the loss of proprioception has been suggested to specifically impair the egocentric reference frame, the deafferented individuals were expected to perform worse than controls when the spatial environment was described in an egocentric reference frame. Results revealed that in both tasks, one deafferented individual (GL) made more errors than controls while the other (IW) made fewer errors. On average, both GL and IW were slower to respond than controls, and reaction time was more variable for IW. Additionally, we found that GL, but not IW, was impaired compared to controls in visuo-spatial imagery, which was assessed with the Minnesota Paper Form Board Test. Overall, the main finding of this study is that proprioception can influence the time necessary to use spatial representations, while other factors such as visuo-spatial abilities can influence the capacity to form accurate spatial representations.
Affiliation(s)
- Malika Auvray
- Sorbonne Université, UPMC, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), Paris, France
- R. Chris Miall
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Jonathan Cole
- Clinical Neurophysiology, Poole Hospital, and Centre of Postgraduate Research and Education, University of Bournemouth, Poole, United Kingdom
28
Temporal Cues Influence Space Estimations in Visually Impaired Individuals. iScience 2018; 6:319-326. [PMID: 30240622] [PMCID: PMC6137691] [DOI: 10.1016/j.isci.2018.07.003]
Abstract
Many works have highlighted enhanced auditory processing in blind individuals, suggesting that they compensate for the lack of vision with greater sensitivity of the other senses. A few years ago, we demonstrated severely impaired auditory precision in congenitally blind individuals performing an auditory spatial metric task: their thresholds for bisecting three consecutive spatially distributed sounds were seriously compromised, ranging from three times typical thresholds to total randomness. Here, we show that the deficit disappears if blind individuals are presented with coherent temporal and spatial cues. More interestingly, when the audio information is presented in conflict for space and time, sighted individuals are unaffected by the perturbation, whereas blind individuals are strongly attracted by the temporal cue. These results highlight that temporal cues influence space estimations in blind participants, suggesting for the first time that blind individuals use temporal information to infer spatial environmental coordinates. Highlights: blind individuals are not able to perform auditory spatial metric tasks; their deficit disappears when coherent temporal and spatial cues are presented; in some cases, blind people use temporal cues to infer spatial coordinates.
29
Pasqualotto A, Furlan M, Proulx MJ, Sereno MI. Visual loss alters multisensory face maps in humans. Brain Struct Funct 2018; 223:3731-3738. [PMID: 30043118] [DOI: 10.1007/s00429-018-1713-2]
Abstract
Topographically organised responses to visual and tactile stimulation are aligned in the ventral intraparietal cortex. The critical biological importance of this region, which is thought to mediate visually guided defensive movements of the head and upper body, suggests that these maps might be hardwired from birth. Here, we investigated whether visual experience is necessary for the creation and positioning of these maps by assessing the representation of tactile stimulation in congenitally and totally blind participants, who had no visual experience, and late and totally blind participants. We used a single-subject approach to the analysis to focus on the potential individual differences in the functional neuroanatomy that might arise from different causes, durations and sensory experiences of visual impairment among participants. The overall results did not show any significant difference between congenitally and late blind participants; however, single-subject trends suggested that visual experience is not necessary to develop topographically organised maps in the intraparietal cortex, whilst losing vision disrupted the integrity and organisation of these topographic maps. These results are discussed in terms of brain plasticity and sensitive periods.
Affiliation(s)
- Achille Pasqualotto
- School of Biological and Chemical Sciences, Queen Mary University of London, London, UK; Department of Psychology, University of Bath, Bath, UK; Faculty of Arts and Social Sciences, Sabanci University, 34956, Tuzla, Istanbul, Turkey
- Michele Furlan
- SISSA (Scuola Internazionale Superiore di Studi Avanzati), Trieste, Italy
- Michael J Proulx
- School of Biological and Chemical Sciences, Queen Mary University of London, London, UK; Department of Psychology, University of Bath, Bath, UK
30
Brayda L, Leo F, Baccelliere C, Ferrari E, Vigini C. Updated Tactile Feedback with a Pin Array Matrix Helps Blind People to Reduce Self-Location Errors. Micromachines 2018; 9:E351. [PMID: 30424284] [PMCID: PMC6082250] [DOI: 10.3390/mi9070351]
Abstract
Autonomous navigation in novel environments still represents a challenge for people with visual impairment (VI). Pin array matrices (PAM) are an effective way to display spatial information to VI people in educative/rehabilitative contexts, as they provide high flexibility and versatility. Here, we tested the effectiveness of a PAM in VI participants in an orientation and mobility task. They haptically explored a map showing a scaled representation of a real room on the PAM. The map further included a symbol indicating a virtual target position. Then, participants entered the room and attempted to reach the target three times. While a control group only reviewed the same, unchanged map on the PAM between trials, an experimental group also received an updated map additionally representing the position they had previously reached in the room. The experimental group improved significantly across trials, showing both reduced self-location errors and reduced completion times, unlike the control group. We found that learning spatial layouts through updated tactile feedback on programmable displays outperforms conventional procedures on static tactile maps. This could represent a powerful tool for navigation, both in rehabilitation and everyday life contexts, improving spatial abilities and promoting independent living for VI people.
Affiliation(s)
- Luca Brayda
- Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
- Fabrizio Leo
- Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
- Caterina Baccelliere
- Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
- Elisabetta Ferrari
- Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
31
Massiceti D, Hicks SL, van Rheede JJ. Stereosonic vision: Exploring visual-to-auditory sensory substitution mappings in an immersive virtual reality navigation paradigm. PLoS One 2018; 13:e0199389. [PMID: 29975734] [PMCID: PMC6033394] [DOI: 10.1371/journal.pone.0199389]
Abstract
Sighted people predominantly use vision to navigate spaces, and sight loss has negative consequences for independent navigation and mobility. The recent proliferation of devices that can extract 3D spatial information from visual scenes opens up the possibility of using such mobility-relevant information to assist blind and visually impaired people by presenting this information through modalities other than vision. In this work, we present two new methods for encoding visual scenes using spatial audio: simulated echolocation and distance-dependent hum volume modulation. We implemented both methods in a virtual reality (VR) environment and tested them using a 3D motion-tracking device. This allowed participants to physically walk through virtual mobility scenarios, generating data on real locomotion behaviour. Blindfolded sighted participants completed two tasks: maze navigation and obstacle avoidance. Results were measured against a visual baseline in which participants performed the same two tasks without blindfolds. Task completion time, speed and number of collisions were used as indicators of successful navigation, with additional metrics exploring detailed dynamics of performance. In both tasks, participants were able to navigate using only audio information after minimal instruction. While participants were 65% slower using audio compared to the visual baseline, they reduced their audio navigation time by an average 21% over just 6 trials. Hum volume modulation proved over 20% faster than simulated echolocation in both mobility scenarios, and participants also showed the greatest improvement with this sonification method. Nevertheless, we speculate that simulated echolocation remains worth exploring, as it provides more spatial detail and could therefore be more useful in more complex environments.
The fact that participants were intuitively able to successfully navigate space with two new visual-to-audio mappings for conveying spatial information motivates the further exploration of these and other mappings with the goal of assisting blind and visually impaired individuals with independent mobility.
Affiliation(s)
- Daniela Massiceti
- Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Stephen Lloyd Hicks
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Joram Jacob van Rheede
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
32
Aggius-Vella E, Campus C, Gori M. Different audio spatial metric representation around the body. Sci Rep 2018; 8:9383. [PMID: 29925849] [PMCID: PMC6010478] [DOI: 10.1038/s41598-018-27370-9]
Abstract
Vision seems to have a pivotal role in developing spatial cognition. A recent approach, based on sensory calibration, has highlighted the role of vision in calibrating hearing in spatial tasks. It was shown that blind individuals have specific impairments during audio spatial bisection tasks. Vision is available only in the frontal space, leading to a "natural" blindness in the back. If vision is important for audio space calibration, then the auditory frontal space should be better represented than the back auditory space. In this study, we investigated this point by comparing frontal and back audio spatial metric representations. We measured precision in the spatial bisection task, for which vision seems to be fundamental to calibrate audition, in twenty-three sighted subjects. Two control tasks, a minimum audible angle (MAA) task and a temporal bisection task, were employed in order to evaluate auditory precision in the different regions considered. While no differences were observed between frontal and back space in the MAA and temporal bisection tasks, a significant difference was found in the spatial bisection task, where subjects performed better in the frontal space. Our results are in agreement with the idea that vision is important in developing auditory spatial metric representation in sighted individuals.
Affiliation(s)
- Elena Aggius-Vella
- U-VIP: Unit for Visually Impaired people, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Campus
- U-VIP: Unit for Visually Impaired people, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- U-VIP: Unit for Visually Impaired people, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
33
Hamilton-Fletcher G, Pisanski K, Reby D, Stefańczyk M, Ward J, Sorokowska A. The role of visual experience in the emergence of cross-modal correspondences. Cognition 2018; 175:114-121. [DOI: 10.1016/j.cognition.2018.02.023]
34
Nelson JS, Kuling IA. Spatial Representation of the Workspace in Blind, Low Vision, and Sighted Human Participants. Iperception 2018; 9:2041669518781877. [PMID: 29977492] [PMCID: PMC6024533] [DOI: 10.1177/2041669518781877]
Abstract
It has been proposed that haptic spatial perception depends on one's visual abilities. We tested spatial perception in the workspace using a combination of haptic matching and line drawing tasks. There were 132 participants with varying degrees of visual ability ranging from congenitally blind to normally sighted. Each participant was blindfolded and asked to match a haptic target position felt under a table with their nondominant hand using a pen in their dominant hand. Once the pen was in position on the tabletop, they had to draw a line of equal length to a previously felt reference object by moving the pen laterally. We used targets at three different locations to evaluate whether different starting positions relative to the body give rise to different matching errors, drawn line lengths, or drawn line angles. We found no influence of visual ability on matching error, drawn line length, or line angle, but we found that early-blind participants are slightly less consistent in their matching errors across space. We conclude that the elementary haptic abilities tested in these tasks do not depend on visual experience.
Affiliation(s)
- Jacob S. Nelson
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Irene A. Kuling
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
35
Tinti C, Chiesa S, Cavaglià R, Dalmasso S, Pia L, Schmidt S. On my right or on your left? Spontaneous spatial perspective taking in blind people. Conscious Cogn 2018; 62:1-8. [PMID: 29689492] [DOI: 10.1016/j.concog.2018.03.016]
Abstract
Spatial perspective taking is a human ability that permits one to assume another person's spatial viewpoint. Data show that spatial perspective taking may arise spontaneously from the mere presence of another person in the environment. We investigated whether this phenomenon is also observable in blind people. Blind and blindfolded sighted participants explored a three-dimensional tactile map and memorized the locations of different landmarks. Then, after the presentation of sounds coming from three landmarks, positioned on the right, on the left, and in front, participants had to indicate the reciprocal position of the two lateral landmarks. Results showed that when the sound coming from the frontal landmark suggested the presence of a speaking (voice) or moving (footsteps) person, several blind and sighted people adopted this person's perspective. These findings suggest that auditory stimuli can trigger spontaneous spatial perspective taking in sighted as well as in blind people.
Affiliation(s)
- Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
- Silvia Chiesa
- Department of Psychology, University of Turin, Turin, Italy
- Lorenzo Pia
- SAMBA (SpAtial, Motor and Bodily Awareness) Research Group, Department of Psychology, University of Turin, Turin, Italy; NIT (Neuroscience Institute of Turin), Turin, Italy
36
Congenital blindness limits allocentric to egocentric switching ability. Exp Brain Res 2018; 236:813-820. [PMID: 29340716] [DOI: 10.1007/s00221-018-5176-8]
Abstract
Many everyday spatial activities require the cooperation or switching between egocentric (subject-to-object) and allocentric (object-to-object) spatial representations. The literature on blind people has reported that the lack of vision (congenital blindness) may limit the capacity to represent allocentric spatial information. However, research has mainly focused on the selective involvement of egocentric or allocentric representations, not the switching between them. Here we investigated the effect of visual deprivation on the ability to switch between spatial frames of reference. To this aim, congenitally blind (long-term visual deprivation), blindfolded sighted (temporary visual deprivation) and sighted (full visual availability) participants were compared on the Ego-Allo switching task. This task assessed the capacity to verbally judge the relative distances between memorized stimuli in switching (from egocentric-to-allocentric: Ego-Allo; from allocentric-to-egocentric: Allo-Ego) and non-switching (only-egocentric: Ego-Ego; only-allocentric: Allo-Allo) conditions. Results showed a difficulty in congenitally blind participants when switching from allocentric to egocentric representations, but not when the first anchor point was egocentric. In line with previous results, a deficit in processing allocentric representations in non-switching conditions also emerged. These findings suggest that the allocentric deficit in congenital blindness may impair the capacity to simultaneously maintain and combine different spatial representations. This deficit alters the capacity to switch between reference frames specifically when the first anchor point is external rather than body-centered.
37
Kuehn E, Chen X, Geise P, Oltmer J, Wolbers T. Social targets improve body-based and environment-based strategies during spatial navigation. Exp Brain Res 2018; 236:755-764. [PMID: 29327266] [DOI: 10.1007/s00221-018-5169-7]
Abstract
Encoding the position of another person in space is vital for everyday life. Nevertheless, little is known about the specific navigational strategies associated with encoding the position of another person in the wider spatial environment. We asked two groups of participants to learn the location of a target (person or object) during active navigation, while optic flow information, a landmark, or both optic flow information and a landmark were available in a virtual environment. Whereas optic flow information is used for body-based encoding, such as the simulation of motor movements, landmarks are used to form an abstract, disembodied representation of the environment. During testing, we passively moved participants through virtual space, and compared their abilities to correctly decide whether the non-visible target was before or behind them. Using psychometric functions and Bayes' theorem, we show that both groups assigned similar weights to body-based and environment-based cues in the condition where both cue types were available. However, the group provided with a person as the target showed generally reduced position errors compared to the group provided with an object as the target. We replicated this effect in a second study with novel participants. This indicates a social advantage in spatial encoding, with facilitated processing of both body-based and environment-based cues during spatial navigation when the position of a person is encoded. This may underlie our critical ability to make accurate distance judgments during social interactions, for example, during fight or flight responses.
Affiliation(s)
- Esther Kuehn
- Aging and Cognition Research Group, DZNE, 39120, Magdeburg, Germany; Center for Behavioral Brain Sciences Magdeburg, 39106, Magdeburg, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany
- Xiaoli Chen
- Aging and Cognition Research Group, DZNE, 39120, Magdeburg, Germany
- Pia Geise
- Aging and Cognition Research Group, DZNE, 39120, Magdeburg, Germany
- Jan Oltmer
- Aging and Cognition Research Group, DZNE, 39120, Magdeburg, Germany
- Thomas Wolbers
- Aging and Cognition Research Group, DZNE, 39120, Magdeburg, Germany; Center for Behavioral Brain Sciences Magdeburg, 39106, Magdeburg, Germany
38
Ricciardi E, Menicagli D, Leo A, Costantini M, Pietrini P, Sinigaglia C. Peripersonal space representation develops independently from visual experience. Sci Rep 2017; 7:17673. [PMID: 29247162] [PMCID: PMC5732274] [DOI: 10.1038/s41598-017-17896-9]
Abstract
Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g. shape, orientation, etc.) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, in which reaction times typically decrease when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with the one afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects’ reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping occurred also when objects were presented outside subjects’ reach, but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of both one’s own and others’ peripersonal space representation.
Affiliation(s)
- Dario Menicagli
- MOMILab, IMT School for Advanced Studies Lucca, I-55100, Lucca, Italy
- Andrea Leo
- MOMILab, IMT School for Advanced Studies Lucca, I-55100, Lucca, Italy; Research Center "E. Piaggio", University of Pisa, Pisa, I-56100, Italy
- Marcello Costantini
- Department of Neuroscience and Imaging and Clinical Science, University G. d'Annunzio, Chieti, I-66100, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, I-66100, Italy; Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK
- Pietro Pietrini
- MOMILab, IMT School for Advanced Studies Lucca, I-55100, Lucca, Italy
- Corrado Sinigaglia
- Department of Philosophy, University of Milan, via Festa del Perdono 7, I-20122, Milano, Italy; CSSA, Centre for the Study of Social Action, University of Milan, Milan, I-20122, Italy
39
Differences between blind people's cognitive maps after proximity and distant exploration of virtual environments. Computers in Human Behavior 2017. [DOI: 10.1016/j.chb.2017.09.007]
40
Thaler L, Foresteire D. Visual sensory stimulation interferes with people's ability to echolocate object size. Sci Rep 2017; 7:13069. [PMID: 29026115] [PMCID: PMC5638915] [DOI: 10.1038/s41598-017-12967-3]
Abstract
Echolocation is the ability to use sound echoes to infer spatial information about the environment. People can echolocate, for example, by making mouth clicks. Previous research suggests that echolocation in blind people activates brain areas that process light in sighted people. Research has also shown that echolocation in blind people may replace vision for calibration of external space. In the current study we investigated whether echolocation may also draw on ‘visual’ resources in the sighted brain. To this end, we paired a sensory interference paradigm with an echolocation task. We found that exposure to an uninformative visual stimulus (i.e. white light) while simultaneously echolocating significantly reduced participants’ ability to accurately judge object size. In contrast, a tactile stimulus (i.e. vibration on the skin) did not lead to a significant change in performance (neither in sighted, nor in blind echo expert participants). Furthermore, we found that the same visual stimulus did not affect performance in auditory control tasks that required detection of changes in sound intensity, sound frequency or sound location. The results suggest that processing of visual and echo-acoustic information draws on common neural resources.
Affiliation(s)
- L Thaler
- Department of Psychology, Durham University, Durham, United Kingdom
- D Foresteire
- Department of Psychology, Durham University, Durham, United Kingdom
41
Chiesa S, Schmidt S, Tinti C, Cornoldi C. Allocentric and contra-aligned spatial representations of a town environment in blind people. Acta Psychol (Amst) 2017; 180:8-15. [PMID: 28806576] [DOI: 10.1016/j.actpsy.2017.08.001]
Abstract
Evidence concerning the representation of space by blind individuals is still unclear: sometimes blind people behave as sighted people do, while at other times they show difficulties. A better understanding of blind people's difficulties, especially with reference to the strategies used to form the representation of the environment, may help to enhance knowledge of the consequences of the absence of vision. The present study examined the representation of the locations of landmarks of a real town by using pointing tasks that entailed either allocentric points of reference with mental rotations of different degrees, or contra-aligned representations. Results showed that, in general, people had difficulty when they had to point from a different perspective to aligned landmarks, or from the original perspective to contra-aligned landmarks, and this difficulty was particularly evident for the blind. The examination of the strategies adopted to perform the tasks showed that only a small group of blind participants used a survey strategy, and that this group performed better than participants who adopted route or verbal strategies. Implications for the comprehension of the consequences of the absence of visual experience on spatial cognition are discussed, focusing in particular on conceivable interventions.
Affiliation(s)
- Silvia Chiesa
- University of Turin, via Verdi 10, 10124 Turin, Italy
- Carla Tinti
- University of Turin, via Verdi 10, 10124 Turin, Italy
42
Occelli V, Lacey S, Stephens C, Merabet LB, Sathian K. Enhanced verbal abilities in the congenitally blind. Exp Brain Res 2017; 235:1709-1718. [PMID: 28280879] [DOI: 10.1007/s00221-017-4931-6]
Abstract
Numerous studies have found that congenitally blind individuals have better verbal memory than their normally sighted counterparts. However, it is not known whether this reflects superiority of verbal or memory abilities. In order to distinguish between these possibilities, we tested congenitally blind participants and normally sighted control participants, matched for age and education, on a range of verbal and spatial tasks. Congenitally blind participants were significantly better than sighted controls on all the verbal tasks but the groups did not differ significantly on the spatial tasks. Thus, the congenitally blind appear to have superior verbal, but not spatial, abilities. This may reflect greater reliance on verbal information and the involvement of visual cortex in language processing in the congenitally blind.
Affiliation(s)
- Valeria Occelli
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Careese Stephens
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, 30322, USA; Rehabilitation R&D Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, GA, USA
- Lotfi B Merabet
- The Laboratory for Visual Neuroplasticity, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, 30322, USA; Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, GA, USA
43
Ma G, Yang D, Qin W, Liu Y, Jiang T, Yu C. Enhanced Functional Coupling of Hippocampal Sub-regions in Congenitally and Late Blind Subjects. Front Neurosci 2017; 10:612. [PMID: 28119560] [PMCID: PMC5222804] [DOI: 10.3389/fnins.2016.00612]
Abstract
The hippocampus has exhibited navigation-related changes in volume and activity after visual deprivation; however, the resting-state functional connectivity (rsFC) changes of the hippocampus in the blind remain unknown. In this study, we focused on sub-region-specific rsFC changes of the hippocampus and their association with the onset age of blindness. The rsFC patterns of the hippocampal sub-regions (head, body and tail) were compared among 20 congenitally blind (CB), 42 late blind (LB), and 50 sighted controls (SC). Compared with the SC, both the CB and the LB showed increased hippocampal rsFCs with the posterior cingulate cortex, angular gyrus, parieto-occipital sulcus, middle occipito-temporal junction, inferior temporal gyrus, orbital frontal cortex, and middle frontal gyrus. In the blind subjects, the hippocampal tail had more extensive rsFC changes than the anterior hippocampus (body and head). The CB and the LB had similar changes in hippocampal rsFC. These altered rsFCs of the hippocampal sub-regions were correlated neither with the onset age of blindness in the LB nor with the duration of blindness in the CB or LB subjects. The increased coupling of the hippocampal intrinsic functional network may reflect enhanced loading of the hippocampal-related networks for non-visual memory processing. Furthermore, the similar changes of hippocampal rsFCs between the CB and the LB suggest experience-dependent rather than development-dependent plasticity of the hippocampal intrinsic functional network.
Affiliation(s)
- Guangyang Ma
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin, China; Key Laboratory of Hormones and Development (Ministry of Health), Tianjin Key Laboratory of Metabolic Diseases, Tianjin Metabolic Diseases Hospital & Tianjin Institute of Endocrinology, Tianjin Medical University, Tianjin, China
- Dan Yang
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin, China; Tianjin Central Hospital of Gynecology Obstetrics, Tianjin, China
- Wen Qin
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin, China
- Yong Liu
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Tianzi Jiang
- Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Chunshui Yu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin, China
| |
Collapse
44
Arnold G, Pesnot-Lerousseau J, Auvray M. Individual Differences in Sensory Substitution. Multisens Res 2017; 30:579-600. [DOI: 10.1163/22134808-00002561]
Abstract
Sensory substitution devices were developed in the context of perceptual rehabilitation. They aim to compensate for one or several functions of a deficient sensory modality by converting stimuli normally accessed through that modality into stimuli accessible to another sensory modality; for instance, they can convert visual information into sounds or tactile stimuli. In this article, we review the studies that have investigated individual differences at the behavioural, neural, and phenomenological levels when using a sensory substitution device. We highlight how taking individual differences into account has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging from each user's pre-existing sensory and cognitive capacities.
Affiliation(s)
- Gabriel Arnold
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
- Jacques Pesnot-Lerousseau
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
45
Abstract
Valuable insights into the role played by visual experience in shaping spatial representations can be gained by studying the effects of visual deprivation on the remaining sensory modalities. For instance, it has long been debated how spatial hearing evolves in the absence of visual input. While several anecdotal accounts tend to associate complete blindness with exceptional hearing abilities, experimental evidence supporting such claims is, however, matched by nearly equal amounts of evidence documenting spatial hearing deficits. The purpose of this review is to summarize the key findings which support either enhancements or deficits in spatial hearing observed following visual loss and to provide a conceptual framework that isolates the specific conditions under which they occur. Available evidence will be examined in terms of spatial dimensions (horizontal, vertical, and depth perception) and in terms of frames of reference (egocentric and allocentric). Evidence suggests that while early blind individuals show superior spatial hearing in the horizontal plane, they also show significant deficits in the vertical plane. Potential explanations underlying these contrasting findings will be discussed. Early blind individuals also show spatial hearing impairments when performing tasks that require the use of an allocentric frame of reference. Results obtained with late-onset blind individuals suggest that early visual experience plays a key role in the development of both spatial hearing enhancements and deficits.
Affiliation(s)
- Patrice Voss
- Cognitive Neuroscience Unit, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
46
Cecchetti L, Kupers R, Ptito M, Pietrini P, Ricciardi E. Are Supramodality and Cross-Modal Plasticity the Yin and Yang of Brain Development? From Blindness to Rehabilitation. Front Syst Neurosci 2016; 10:89. [PMID: 27877116 PMCID: PMC5099160 DOI: 10.3389/fnsys.2016.00089]
Abstract
Research in blind individuals has long focused primarily on the plastic reorganization that occurs in early visual areas. Only more recently have scientists developed innovative strategies to understand to what extent vision is truly a mandatory prerequisite for the brain's fine morphological architecture to develop and function. As a whole, the studies conducted to date in sighted and congenitally blind individuals have provided ample evidence that several "visual" cortical areas develop independently of visual experience and process information regardless of the sensory modality through which a particular stimulus is conveyed: a property named supramodality. At the same time, lack of vision leads to structural and functional reorganization within "visual" brain areas, a phenomenon known as cross-modal plasticity. Cross-modal recruitment of the occipital cortex in visually deprived individuals represents an adaptive compensatory mechanism that mediates the processing of non-visual inputs. Supramodality and cross-modal plasticity appear to be the "yin and yang" of brain development: supramodal is what takes place despite the lack of vision, whereas cross-modal is what happens because of the lack of vision. Here we provide a critical overview of the research in this field and discuss the implications of these novel findings for the development of educational/rehabilitation approaches and sensory substitution devices (SSDs) in sensory-impaired individuals.
Affiliation(s)
- Luca Cecchetti
- Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; Clinical Psychology Branch, Pisa University Hospital, Pisa, Italy
- Ron Kupers
- BRAINlab, Department of Neuroscience and Pharmacology, Panum Institute, University of Copenhagen, Copenhagen, Denmark; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Maurice Ptito
- Laboratory of Neuropsychiatry, Psychiatric Centre Copenhagen, Copenhagen, Denmark; School of Optometry, Université de Montréal, Montréal, QC, Canada
- Emiliano Ricciardi
- Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; MOMILab, IMT School for Advanced Studies Lucca, Lucca, Italy
47
Proulx MJ, Gwinnutt J, Dell'Erba S, Levy-Tzedek S, de Sousa AA, Brown DJ. Other ways of seeing: From behavior to neural mechanisms in the online "visual" control of action with sensory substitution. Restor Neurol Neurosci 2016; 34:29-44. [PMID: 26599473 PMCID: PMC4927905 DOI: 10.3233/rnn-150541]
Abstract
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now use the remaining intact senses to replace missing visual information through sensory substitution. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so that the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping, and navigating, and adapting successfully to a sensory substitution device requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing online control of action in the blind or blindfolded. The use of sensory substitution devices appears to engage the neural visual system, suggesting the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. We review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action.
Affiliation(s)
- Michael J Proulx
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- James Gwinnutt
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- Sara Dell'Erba
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- Shelly Levy-Tzedek
- Cognition, Aging and Rehabilitation Lab, Recanati School for Community Health Professions, Department of Physical Therapy & Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Alexandra A de Sousa
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK; Department of Science, Bath Spa University, Bath, UK
- David J Brown
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
48
Pasqualotto A, Esenkaya T. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes. Front Behav Neurosci 2016; 10:79. [PMID: 27148000 PMCID: PMC4838627 DOI: 10.3389/fnbeh.2016.00079]
Abstract
Visual-to-auditory sensory substitution conveys visual information through audition. Initially created to compensate for blindness, it consists of software that converts the visual images captured by a video camera into equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial positions of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, although it has been widely used to investigate object recognition. Additionally, sensory substitution allowed us to study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the positions of six images by using sensory substitution, and a judgment of relative direction (JRD) task was then used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, we surprisingly found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, supporting the notion that different sensory modalities produce equivalent spatial representations. Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSDs).
Affiliation(s)
- Tayfun Esenkaya
- Faculty of Arts and Social Sciences, Sabanci University, Istanbul, Turkey; Department of Psychology, University of Bath, Bath, UK
49
Proulx MJ, Todorov OS, Taylor Aiken A, de Sousa AA. Where am I? Who am I? The Relation Between Spatial Cognition, Social Cognition and Individual Differences in the Built Environment. Front Psychol 2016; 7:64. [PMID: 26903893 PMCID: PMC4749931 DOI: 10.3389/fpsyg.2016.00064]
Abstract
Knowing who we are and where we are are two fundamental aspects of our physical and mental experience. Although the domains of spatial and social cognition are often studied independently, a few recent areas of scholarship have explored the interactions of place and self. This fits with increasing evidence for embodied theories of cognition, in which mental processes are grounded in action and perception. Who we are might be integrated with where we are, and might shape how we move through space. Individuals vary in personality, navigational strategies, and numerous cognitive and social competencies. Here we review the relation between the social and spatial spheres of existence in the realms of philosophical considerations, neural and psychological representations, and evolutionary context, and how we might use the built environment to suit who we are, or how it creates who we are. In particular, we investigate how two spatial reference frames, egocentric and allocentric, might extend into the social realm. We then speculate on how environments may interact with spatial cognition. Finally, we suggest how a framework encompassing spatial and social cognition might be taken into consideration by architects and urban planners.
Affiliation(s)
- Michael J Proulx
- Crossmodal Cognition Laboratory, Department of Psychology, University of Bath, Bath, UK
- Orlin S Todorov
- European Network for Brain Evolution Research, The Hague, Netherlands
50
Schinazi VR, Thrash T, Chebat DR. Spatial navigation by congenitally blind individuals. Wiley Interdiscip Rev Cogn Sci 2015; 7:37-58. [PMID: 26683114 PMCID: PMC4737291 DOI: 10.1002/wcs.1375]
Abstract
Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have advanced our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people, consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population.
Affiliation(s)
- Victor R Schinazi
- Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland
- Tyler Thrash
- Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland