1. Senna I, Piller S, Martolini C, Cocchi E, Gori M, Ernst MO. Multisensory training improves the development of spatial cognition after sight restoration from congenital cataracts. iScience 2024; 27:109167. PMID: 38414862; PMCID: PMC10897914; DOI: 10.1016/j.isci.2024.109167.
Abstract
Spatial cognition and mobility are typically impaired in congenitally blind individuals, as vision usually calibrates space perception by providing the most accurate distal spatial cues. We have previously shown that sight restoration from congenital bilateral cataracts guides the development of more accurate space perception, even when cataract removal occurs years after birth. However, late cataract-treated individuals do not usually reach the performance levels of the typically sighted population. Here, we developed a brief multisensory training that associated audiovisual feedback with body movements. Late cataract-treated participants quickly improved their space representation and mobility, performing as well as typically sighted controls in most tasks. Their improvement was comparable with that of a group of blind participants, who underwent training coupling their movements with auditory feedback alone. These findings suggest that spatial cognition can be enhanced by a training program that strengthens the association between bodily movements and their sensory feedback (either auditory or audiovisual).
Affiliation(s)
- Irene Senna
  - Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
  - Department of Psychology, Liverpool Hope University, Liverpool L16 9JD, UK
- Sophia Piller
  - Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
- Chiara Martolini
  - Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Elena Cocchi
  - Istituto David Chiossone per Ciechi ed Ipovedenti ONLUS, 16145 Genova, Italy
- Monica Gori
  - Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Marc O. Ernst
  - Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, 89069 Ulm, Germany
2. Bleau M, van Acker C, Martiniello N, Nemargut JP, Ptito M. Cognitive map formation in the blind is enhanced by three-dimensional tactile information. Sci Rep 2023; 13:9736. PMID: 37322150; PMCID: PMC10272191; DOI: 10.1038/s41598-023-36578-3.
Abstract
For blind individuals, tactile maps are useful tools to form cognitive maps through touch. However, they still experience challenges in cognitive map formation and independent navigation. Three-dimensional (3D) tactile information is thus increasingly being considered to convey enriched spatial information, but it remains unclear if it can facilitate cognitive map formation compared to traditional two-dimensional (2D) tactile information. Consequently, the present study investigated the impact of the type of sensory input (tactile 2D vs. tactile 3D vs. a visual control condition) on cognitive map formation. To do so, early blind (EB, n = 13), late blind (LB, n = 12), and sighted control (SC, n = 14) participants were tasked to learn the layouts of mazes produced with different sensory information (tactile 2D vs. tactile 3D vs. visual control) and to infer routes from memory. Results show that EB manifested stronger cognitive map formation with 3D mazes, LB performed equally well with 2D and 3D tactile mazes, and SC manifested equivalent cognitive map formation with visual and 3D tactile mazes but were negatively impacted by 2D tactile mazes. 3D tactile maps therefore have the potential to improve spatial learning for EB and newly blind individuals through a reduction of cognitive overload. Installation of 3D tactile maps in public spaces should be considered to promote universal accessibility and reduce blind individuals' wayfinding deficits related to the inaccessibility of spatial information through non-visual means.
Affiliation(s)
- Maxime Bleau
  - School of Optometry, University of Montreal, Montreal, QC, Canada
- Camille van Acker
  - School of Optometry, University of Montreal, Montreal, QC, Canada
  - Institut Royal Pour Sourds et Aveugles, Brussels, Belgium
- Maurice Ptito
  - School of Optometry, University of Montreal, Montreal, QC, Canada
  - Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
  - Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
3. Maimon A, Wald IY, Ben Oz M, Codron S, Netzer O, Heimler B, Amedi A. The Topo-Speech sensory substitution system as a method of conveying spatial information to the blind and vision impaired. Front Hum Neurosci 2023; 16:1058093. PMID: 36776219; PMCID: PMC9909096; DOI: 10.3389/fnhum.2022.1058093.
Abstract
Humans, like most animals, integrate sensory input in the brain from different sensory modalities. Yet humans are distinct in their ability to grasp symbolic input, which is interpreted into a cognitive mental representation of the world. This representation merges with external sensory input, providing modality integration of a different sort. This study evaluates the Topo-Speech algorithm in the blind and visually impaired. The system provides spatial information about the external world by applying sensory substitution alongside symbolic representations, in a manner that corresponds with the unique way our brains acquire and process information. This is done by conveying spatial information, customarily acquired through vision, through the auditory channel, using a combination of sensory (auditory) features and symbolic language (named/spoken) features. The Topo-Speech system sweeps the visual scene or image and represents each object's identity by naming it in a spoken word while simultaneously conveying its location: the x-axis of the visual scene or image is mapped to the time at which the name is announced, and the y-axis to the pitch of the voice. This proof-of-concept study primarily explores the practical applicability of this approach in 22 visually impaired and blind individuals. The findings showed that individuals from both populations could effectively interpret and use the algorithm after a single training session. The blind showed an accuracy of 74.45%, while the visually impaired had an average accuracy of 72.74%. These results are comparable to those of the sighted, as shown in previous research, with all participants above chance level. As such, we demonstrate practically how aspects of spatial information can be transmitted through non-visual channels.
To complement the findings, we weigh in on debates concerning models of spatial knowledge (the persistent, cumulative, or convergent models) and the capacity for spatial representation in the blind. We suggest the present study's findings support the convergence model and the scenario that posits the blind are capable of some aspects of spatial representation as depicted by the algorithm comparable to those of the sighted. Finally, we present possible future developments, implementations, and use cases for the system as an aid for the blind and visually impaired.
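The x-to-time and y-to-pitch mapping described in this abstract can be sketched in a few lines. This is an illustrative reconstruction based only on the abstract, not the authors' published implementation; the function name, the sweep duration, the pitch range, and the coordinate conventions are all assumptions.

```python
def topo_speech_schedule(objects, image_width, image_height,
                         sweep_duration=2.0, pitch_lo=100.0, pitch_hi=400.0):
    """Return (onset_seconds, pitch_hz, name) for each detected object.

    objects: list of (name, x, y) tuples in pixel coordinates,
             with y = 0 at the top of the image.
    Assumed parameters (not from the paper): a 2-second left-to-right
    sweep and a 100-400 Hz voice pitch range.
    """
    events = []
    for name, x, y in objects:
        # x-axis -> time: horizontal position sets when the name is spoken
        onset = (x / image_width) * sweep_duration
        # y-axis -> pitch: higher in the image means higher voice pitch
        pitch = pitch_lo + (1 - y / image_height) * (pitch_hi - pitch_lo)
        events.append((onset, pitch, name))
    return sorted(events)  # announce objects in left-to-right order

# Two hypothetical objects in a 640x480 scene:
events = topo_speech_schedule([("cup", 160, 120), ("key", 480, 360)],
                              image_width=640, image_height=480)
# cup is announced at 0.5 s with a 325 Hz pitch, key at 1.5 s with 175 Hz
```

Under these assumptions, an object in the upper-left corner would be named early in the sweep at a high pitch, and one in the lower-right corner late at a low pitch, which matches the mapping the abstract describes.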
Affiliation(s)
- Amber Maimon
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Meshi Ben Oz
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Sophie Codron
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
  - Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Benedetta Heimler
  - Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
4. Senna I, Piller S, Gori M, Ernst M. The power of vision: calibration of auditory space after sight restoration from congenital cataracts. Proc Biol Sci 2022; 289:20220768. PMID: 36196538; PMCID: PMC9532985; DOI: 10.1098/rspb.2022.0768.
Abstract
Early visual deprivation typically results in spatial impairments in other sensory modalities. It has been suggested that, since vision provides the most accurate spatial information, it is used for calibrating space in the other senses. Here we investigated whether sight restoration after prolonged early-onset visual impairment can lead to the development of more accurate auditory space perception. We tested participants who were surgically treated for congenital dense bilateral cataracts several years after birth. In Experiment 1 we assessed participants' ability to understand spatial relationships among sounds, by asking them to spatially bisect three consecutive, laterally separated sounds. Participants tested after surgery performed better than those tested before it. However, they still performed worse than sighted controls. In Experiment 2, we demonstrated that single sound localization in the two-dimensional frontal plane improves quickly after surgery, approaching performance levels of sighted controls. Such recovery seems to be mediated by visual acuity, as participants gaining higher post-surgical visual acuity performed better in both experiments. These findings provide strong support for the hypothesis that vision calibrates auditory space perception. Importantly, this also demonstrates that this process can occur even when vision is restored after years of visual deprivation.
Affiliation(s)
- Irene Senna
  - Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Sophia Piller
  - Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
- Monica Gori
  - Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Marc Ernst
  - Applied Cognitive Psychology, Faculty for Computer Science, Engineering, and Psychology, Ulm University, Ulm, Germany
5. Ottink L, Buimer H, van Raalte B, Doeller CF, van der Geest TM, van Wezel RJA. Cognitive map formation supported by auditory, haptic, and multimodal information in persons with blindness. Neurosci Biobehav Rev 2022; 140:104797. PMID: 35902045; DOI: 10.1016/j.neubiorev.2022.104797.
Abstract
For efficient navigation, the brain needs to adequately represent the environment in a cognitive map. In this review, we give an overview of the literature on cognitive map formation based on non-visual modalities in persons with blindness (PWBs) and sighted persons. The review focuses on the auditory and haptic modalities, including research that combines multiple modalities and real-world navigation. Furthermore, we address the implications of route and survey representations. Taken together, PWBs as well as sighted persons can build up cognitive maps based on non-visual modalities, although accuracy sometimes differs somewhat between the two groups. We provide some speculations on how to deploy information from different modalities to support cognitive map formation. Furthermore, both PWBs and sighted persons seem able to construct route as well as survey representations. PWBs can experience difficulties building up a survey representation, but this is not always the case, and research suggests that they can acquire this ability with sufficient spatial information or training. We discuss possible explanations of these inconsistencies.
Affiliation(s)
- Loes Ottink
  - Donders Institute, Radboud University, Nijmegen, the Netherlands
- Hendrik Buimer
  - Donders Institute, Radboud University, Nijmegen, the Netherlands
- Bram van Raalte
  - Donders Institute, Radboud University, Nijmegen, the Netherlands
- Christian F Doeller
  - Psychology Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
  - Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
- Thea M van der Geest
  - Lectorate Media Design, HAN University of Applied Sciences, Arnhem, the Netherlands
- Richard J A van Wezel
  - Donders Institute, Radboud University, Nijmegen, the Netherlands
  - Techmed Centre, Biomedical Signals and System, University of Twente, Enschede, the Netherlands
6. Cognitive map formation through tactile map navigation in visually impaired and sighted persons. Sci Rep 2022; 12:11567. PMID: 35798929; PMCID: PMC9262941; DOI: 10.1038/s41598-022-15858-4.
Abstract
The human brain can form cognitive maps of a spatial environment, which can support wayfinding. In this study, we investigated cognitive map formation of an environment presented in the tactile modality, in visually impaired and sighted persons. In addition, we assessed the acquisition of route and survey knowledge. Ten persons with a visual impairment (PVIs) and ten sighted control participants learned a tactile map of a city-like environment. The map included five marked locations associated with different items. Participants subsequently estimated distances between item pairs, performed a direction pointing task, reproduced routes between items and recalled item locations. In addition, we conducted questionnaires to assess general navigational abilities and the use of route or survey strategies. Overall, participants in both groups performed well on the spatial tasks. Our results did not show differences in performance between PVIs and sighted persons, indicating that both groups formed an equally accurate cognitive map. Furthermore, we found that the groups generally used similar navigational strategies, which correlated with performance on some of the tasks, and acquired similar and accurate route and survey knowledge. We therefore suggest that PVIs are able to employ a route as well as survey strategy if they have the opportunity to access route-like as well as map-like information such as on a tactile map.
7. Spatial knowledge via auditory information for blind individuals: spatial cognition studies and the use of audio-VR. Sensors 2022; 22:s22134794. PMID: 35808291; PMCID: PMC9268803; DOI: 10.3390/s22134794.
Abstract
Spatial cognition is a daily-life ability, developed in order to understand and interact with our environment. Although all the senses are involved in elaborating a mental representation of space, the lack of vision makes this elaboration more difficult, especially because of the importance of peripheral information in updating the relative positions of surrounding landmarks as one moves. Spatial audio technology has long been used in studies of human perception, particularly in the area of auditory source localisation. The ability to reproduce individual sounds at desired positions, or complex spatial audio scenes, without the need to manipulate physical devices has provided researchers with many benefits. We present a review of several studies employing spatial audio virtual reality for research on spatial cognition in blind individuals. These include studies investigating simple spatial configurations, architectural navigation, reaching to sounds, and sound design for improved acceptability. Prospects for future research, including work currently underway, are also discussed.
8. Ottink L, Hoogendonk M, Doeller CF, Van der Geest TM, Van Wezel RJA. Cognitive map formation through haptic and visual exploration of tactile city-like maps. Sci Rep 2021; 11:15254. PMID: 34315940; PMCID: PMC8316501; DOI: 10.1038/s41598-021-94778-1.
Abstract
In this study, we compared cognitive map formation of small-scale models of city-like environments presented in visual or tactile/haptic modalities. Previous research often addresses only a limited number of cognitive map aspects. We wanted to combine several of these aspects to elucidate a more complete view. Therefore, we assessed different types of spatial information, and considered egocentric as well as allocentric perspectives. Furthermore, we compared haptic map learning with visual map learning. In total, 18 sighted participants (9 in a haptic condition, 9 in a visuo-haptic condition) learned three tactile maps of city-like environments. The maps differed in complexity, and had five marked locations associated with unique items. After learning each map, participants estimated distances between item pairs, rebuilt the map, recalled locations, and navigated two routes. All participants overall performed well on the spatial tasks. Interestingly, only on the complex maps did participants perform worse in the haptic condition than in the visuo-haptic condition, suggesting no distinct advantage of vision on the simple map. These results support ideas of modality-independent representations of space. Although it is less clear on the more complex maps, our findings indicate that participants using only haptic information, or a combination of haptic and visual information, both form a quite accurate cognitive map of a simple tactile city-like map.
Affiliation(s)
- Loes Ottink
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marit Hoogendonk
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Christian F Doeller
  - Psychology Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
  - Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
- Thea M Van der Geest
  - Lectorate Media Design, HAN University of Applied Sciences, Arnhem, The Netherlands
- Richard J A Van Wezel
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
  - Techmed Centre, Biomedical Signals and System, University of Twente, Enschede, The Netherlands
9. Valente D, Bara F, Afonso-Jaco A, Baltenneck N, Gentaz É. La perception tactile des propriétés spatiales des objets chez les personnes aveugles [Tactile perception of the spatial properties of objects in blind people]. Enfance 2021. DOI: 10.3917/enf2.211.0069.
10. Ecological validity of immersive virtual reality (IVR) techniques for the perception of urban sound environments. Acoustics 2020. DOI: 10.3390/acoustics3010003.
Abstract
Immersive Virtual Reality (IVR) is a simulation technology used to deliver multisensory information to people under different environmental conditions. Applied in urban planning and soundscape research, IVR offers attractive possibilities for assessing urban sound environments with greater immersion for human participants. In virtual sound environments, various topics and measures are designed to collect subjective responses from participants under simulated laboratory conditions. Soundscape or noise assessment studies during virtual experiences adopt an evaluation approach similar to in situ methods. This paper reviews the approaches used to assess the ecological validity of IVR for the perception of urban sound environments, and the technologies necessary during audio-visual reproduction to establish a dynamic IVR experience that ensures ecological validity. The review shows that, through laboratory tests including subjective response surveys, cognitive performance tests and physiological responses, the ecological validity of IVR for the perception of urban sound environments can be assessed. A reproduction system with head-tracking functions synchronizing spatial audio and visual stimuli (e.g., head-mounted displays (HMDs) with first-order Ambisonics (FOA)-tracked binaural playback) represents the prevailing trend for achieving high ecological validity. These studies potentially contribute to a normalized evaluation framework for subjective soundscape and noise assessments in virtual environments.
11. Bertonati G, Tonelli A, Cuturi LF, Setti W, Gori M. Assessment of spatial reasoning in blind individuals using a haptic version of the Kohs Block Design Test. Curr Res Behav Sci 2020. DOI: 10.1016/j.crbeha.2020.100004.
12. Grison E, Jaco AA. Is the construction of spatial models multimodal? New evidences towards sensory-motor information involvement from temporary blindness study. Psychol Res 2020; 85:2636-2653. PMID: 33033895; DOI: 10.1007/s00426-020-01427-9.
Abstract
Using new developments of the interference paradigm, this paper addresses the question of the involvement of sensory-motor information in the construction of elaborate spatial models (Johnson-Laird, in Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness, Cambridge University Press, Cambridge, 1983). In two experiments, 112 participants had to explore and memorize the spatial arrangement of 12 objects placed on 3 tables. Participants were either sighted or blindfolded, leading to a visual or a more sensory-motor-based exploration of the room. During exploration, participants were required to perform a classical verbal dual task, a visuo-spatial dual task, or none. In the second, more exploratory experiment, we drew on the interference-paradigm literature and its recent developments in the embodied-cognition field to design two original dual tasks meant to interfere directly with the acquisition of sensory-motor information (haptic and action). After this learning phase, five tasks addressing the spatial memory and reasoning used in the construction of spatial models were performed. Results showed classical effects of both the verbal and the visuo-spatial task for sighted participants, but not for blindfolded ones, suggesting that temporary visual deprivation led participants to use other means to build their spatial models. Our second experiment confirmed this point by showing an effect of both sensory-motor dual tasks, especially for blindfolded participants. Taken together, our results support a multimodal view of spatial models, in which the exploration modality influences the information used to construct them. Moreover, this challenges Baddeley's dualist view of working memory as a reference for theorizing the construction of spatial models, and provides new experimental evidence towards an embodied view of spatial models.
Affiliation(s)
- Elise Grison
  - IFSTTAR, Laboratoire de Psychologie des Comportements et des Mobilités, 78000, Versailles, France
- Amandine Afonso Jaco
  - Université de Paris, Laboratoire Mémoire, Cerveau et Cognition, 92100, Boulogne Billancourt, France
  - Université Lumière Lyon 2, Laboratoire Développement, Individu, Processus, Handicap, Éducation, 69676, Bron Cedex, France
13. Lhuillier S, Gyselinck V, Piolino P, Nicolas S. "Walk this way": specific contributions of active walking to the encoding of metric properties during spatial learning. Psychol Res 2020; 85:2502-2517. PMID: 32918143; DOI: 10.1007/s00426-020-01415-z.
Abstract
The effect of body-based information on spatial memory has been traditionally described as a facilitating factor for large-scale spatial learning in the field of active learning research (Chrastil & Warren, Psychonomic Bulletin and Review, 19(1):1-23; 2012). The specific contribution of body-based information to spatial representation properties is however not yet well defined and the mechanisms through which body-based information contributes to spatial learning are not clear enough. To disambiguate the effect of active spatial learning on the quality of spatial representations from the beneficial effect of physiological arousal, we compared four experimental conditions (walking on a unidirectional treadmill during learning, retrieval, both phases or no walking). Results showed no effect of the walking condition for a route perspective task, but a significant effect on a survey perspective task (landmark positioning on a map): participants who walked during encoding (encoding group and encoding + retrieval group) obtained better results than those who did not walk or walked only during retrieval. Geometrical analysis of spatial positions on maps revealed that the activity of walking during encoding improves the correlation between participants' coordinates and actual coordinates through better distance estimations and angular accuracy, even though the optic flow was not matched with individual walking speed. Control group variance in all measures was higher than that of the walking groups (regardless of the moment of walking). Taken together, these results provide arguments for the multimodal nature of spatial representations, where body-related information derived from walking is involved in metric properties accuracy and perspective switching.
Affiliation(s)
- Simon Lhuillier
  - LAPEA, Université Gustave Eiffel, IFSTTAR, 78000, Versailles, France
  - LAPEA, Université de Paris, 92000, Boulogne-Billancourt, France
  - MC2, Université de Paris, 92000, Boulogne-Billancourt, France
- Valérie Gyselinck
  - LAPEA, Université Gustave Eiffel, IFSTTAR, 78000, Versailles, France
  - LAPEA, Université de Paris, 92000, Boulogne-Billancourt, France
- Pascale Piolino
  - MC2, Université de Paris, 92000, Boulogne-Billancourt, France
- Serge Nicolas
  - MC2, Université de Paris, 92000, Boulogne-Billancourt, France
  - Institut Universitaire de France (IUF), Paris, France
14. Hersh M. Mental maps and the use of sensory information by blind and partially sighted people. ACM Trans Access Comput 2020. DOI: 10.1145/3375279.
Abstract
This article aims to fill an important gap in the literature by reporting on blind and partially sighted people's use of spatial representations (mental maps) from their perspective and when travelling on real routes. The results presented here were obtained from semi-structured interviews with 100 blind and partially sighted people in five different countries. They are intended to answer three questions about the representation of space by blind and partially sighted people, how these representations are used to support travel, and the implications for the design of travel aids and orientation and mobility training. They show that blind and partially sighted people do have spatial representations and that a number of them explicitly use the term mental map. This article discusses the variety of approaches to spatial representations, including the sensory modalities used, the use of global or local representations, and the applications to support travel. The conclusions summarize the answers to the three questions and include a two-level preliminary classification of the spatial representations of blind and partially sighted people.
Affiliation(s)
- Marion Hersh
  - Biomedical Engineering, University of Glasgow, Glasgow, Scotland
15. Santoro I, Murgia M, Sors F, Agostini T. The influence of the encoding modality on spatial navigation for sighted and late-blind people. Multisens Res 2020; 33:505-520. PMID: 31648190; DOI: 10.1163/22134808-20191431.
Abstract
People usually rely on sight to encode spatial information, becoming aware of other sensory cues when deprived of vision. In the absence of vision, it has been demonstrated that physical movements and spatial descriptions can effectively provide the spatial information that is necessary for the construction of an adequate spatial mental model. However, no study has previously compared the influence of these encoding modalities on complex movements such as human spatial navigation within real room-size environments. Thus, we investigated whether the encoding of a spatial layout through verbal cues - that is, spatial description - and motor cues - that is, physical exploration of the environment - differently affect spatial navigation within a real room-size environment, by testing blindfolded sighted (Experiment 1) and late-blind (Experiment 2) participants. Our results reveal that encoding the environment through physical movement is more effective than through verbal descriptions in supporting active navigation. Thus, our findings are in line with the studies claiming that the physical exploration of an environment enhances the development of a global spatial representation and improves spatial updating. From an applied perspective, the present results suggest that it might be possible to improve the experience for visually impaired people within a new environment by allowing them to explore it.
Affiliation(s)
- Ilaria Santoro
- Department of Life Sciences, University of Trieste, Trieste, Italy
- Mauro Murgia
- Department of Life Sciences, University of Trieste, Trieste, Italy
- Fabrizio Sors
- Department of Life Sciences, University of Trieste, Trieste, Italy
- Tiziano Agostini
- Department of Life Sciences, University of Trieste, Trieste, Italy
16
Stronger responses in the visual cortex of sighted compared to blind individuals during auditory space representation. Sci Rep 2019; 9:1935. [PMID: 30760758] [PMCID: PMC6374481] [DOI: 10.1038/s41598-018-37821-y]
Abstract
It has been previously shown that the interaction between vision and audition involves early sensory cortices. However, the functional role of these interactions and their modulation due to sensory impairment is not yet understood. To shed light on the impact of vision on auditory spatial processing, we recorded ERPs and collected psychophysical responses during space and time bisection tasks in sighted and blind participants. They listened to three consecutive sounds and judged whether the second sound was either spatially or temporally further from the first or the third sound. We demonstrate that spatial metric representation of sounds elicits an early response of the visual cortex (P70) which differs between sighted and visually deprived individuals. Indeed, P70 is strongly selective for the spatial position of sounds only in sighted, and not in blind, people, mimicking many aspects of the visual-evoked C1. These results suggest that early auditory processing associated with the construction of spatial maps is mediated by visual experience. The lack of vision might impair the projection of multisensory maps onto the retinotopic maps used by the visual cortex.
17
Amadeo MB, Campus C, Gori M. Impact of years of blindness on neural circuits underlying auditory spatial representation. Neuroimage 2019; 191:140-149. [PMID: 30710679] [DOI: 10.1016/j.neuroimage.2019.01.073]
Abstract
Early visual deprivation impacts negatively on spatial bisection abilities. Recently, an early (50-90 ms) ERP response, selective for sound position in space, was observed in the visual cortex of sighted individuals during the spatial but not the temporal bisection task. Here, we clarify the role of vision in spatial bisection abilities and their neural correlates by studying late blind individuals. Results highlight that a shorter period of blindness is linked to a stronger contralateral activation in the visual cortex and a better performance during the spatial bisection task. In contrast, non-lateralized visual activation and lower performance are observed in individuals with a longer period of blindness. To conclude, the amount of time spent without vision may gradually impact the neural circuits underlying the construction of spatial representations in late blind participants. These findings suggest a key relationship between visual deprivation and auditory spatial abilities in humans.
Affiliation(s)
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83 - 16152, Genova, Italy; Università degli studi di Genova, Department of Informatics, Bioengineering, Robotics and Systems Engineering, Via all'Opera Pia, 13 - 16145, Genova, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83 - 16152, Genova, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83 - 16152, Genova, Italy
18
The Spatial Musical Association of Response Codes does not depend on a normal visual experience: A study with early blind individuals. Atten Percept Psychophys 2018; 80:813-821. [DOI: 10.3758/s13414-018-1495-x]
19
Gandemer L, Parseihian G, Kronland-Martinet R, Bourdin C. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map? Front Neurosci 2017; 11:357. [PMID: 28694770] [PMCID: PMC5483472] [DOI: 10.3389/fnins.2017.00357]
Abstract
It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize.
Affiliation(s)
- Lennie Gandemer
- Aix Marseille Univ, CNRS, Perception, Representations, Image, Sound, Music (PRISM), Marseille, France
- Gaetan Parseihian
- Aix Marseille Univ, CNRS, Perception, Representations, Image, Sound, Music (PRISM), Marseille, France
20
Bălan O, Moldoveanu A, Moldoveanu F, Nagy H, Wersényi G, Unnórsson R. Improving the Audio Game-Playing Performances of People with Visual Impairments through Multimodal Training. J Vis Impair Blind 2017. [DOI: 10.1177/0145482x1711100206]
Abstract
Introduction: As the number of people with visual impairments (that is, those who are blind or have low vision) is continuously increasing, rehabilitation and engineering researchers have identified the need to design sensory-substitution devices that would offer assistance and guidance to these people for performing navigational tasks. Auditory and haptic cues have been shown to be an effective approach to creating a rich spatial representation of the environment, so they are considered for inclusion in the development of assistive tools that would enable people with visual impairments to acquire knowledge of the surrounding space in a way close to the visually based perception of sighted individuals. However, achieving efficiency through a sensory-substitution device requires extensive training for visually impaired users to learn how to process the artificial auditory cues and convert them into spatial information. Methods: Considering the potential advantages game-based learning can provide, we propose a new method for training the sound localization and virtual navigation skills of visually impaired people in a 3D audio game with hierarchical levels of difficulty. The training procedure follows a multimodal (auditory and haptic) learning approach in which the subjects listen to 3D sounds while simultaneously perceiving a series of vibrations on a haptic headband that corresponds to the direction of the sound source in space. Results: The results obtained in a sound-localization experiment with 10 visually impaired people showed that the proposed training strategy produced significant improvements in the subjects' auditory performance and navigation skills, thus ensuring behavioral gains in the spatial perception of the environment.
Affiliation(s)
- Oana Bălan
- University Politehnica of Bucharest, Splaiul Independentei, 313, Bucharest, Romania
- Hunor Nagy
- Széchenyi István University, Egyetem tér 1., Hungary
- György Wersényi
- Széchenyi István University, Győr, Egyetem tér 1., 9026 Hungary
- Rúnar Unnórsson
- University of Iceland, School of Engineering and Natural Sciences—Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, VR-2/V02-237, Reykjavik, Iceland
21
Tao Q, Chan CCH, Luo YJ, Li JJ, Ting KH, Lu ZL, Whitfield-Gabrieli S, Wang J, Lee TMC. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals. Brain Topogr 2017; 30:364-379. [PMID: 28161728] [PMCID: PMC5408050] [DOI: 10.1007/s10548-017-0549-z]
Abstract
Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals modulates neural processes associated with learning of sound localization. Learning was realized through standardized training on sound localization, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory gained through prior visual experience enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, compared to the retrieval of previously bound information from long-term memory in EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that such experience develops.
Affiliation(s)
- Qian Tao
- Psychology Department, School of Medicine, Jinan University, Guangzhou, China
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong
- Chetwyn C H Chan
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong
- Yue-Jia Luo
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Jian-Jun Li
- China Rehabilitation Research Center, Beijing, China
- Kin-Hung Ting
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong
- Zhong-Lin Lu
- Center for Cognitive and Behavioral Brain Imaging, Department of Psychology, The Ohio State University, Columbus, OH 43210, USA
- Jun Wang
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Tatia M C Lee
- Laboratory of Neuropsychology, Department of Psychology, The University of Hong Kong, Hong Kong
- Laboratory of Cognitive Affective Neuroscience, The University of Hong Kong, Hong Kong
- State Key Laboratory of Brain and Cognitive Science, The University of Hong Kong, Hong Kong
22
Simon LSR, Zacharov N, Katz BFG. Perceptual attributes for the comparison of head-related transfer functions. J Acoust Soc Am 2016; 140:3623. [PMID: 27908072] [DOI: 10.1121/1.4966115]
Abstract
The benefit of using individual head-related transfer functions (HRTFs) in binaural audio is well documented with regard to improving localization precision. However, with the increased use of binaural audio in more complex scene renderings, cognitive studies, and virtual and augmented reality simulations, the perceptual impact of HRTF selection may go beyond simple localization. In this study, the authors develop a list of attributes that qualify the perceived differences between HRTFs, providing a qualitative understanding of the perceptual variance of non-individual binaural renderings. The list of attributes was designed using a Consensus Vocabulary Protocol elicitation method. Participants followed an Individual Vocabulary Protocol elicitation procedure, describing the perceived differences between binaural stimuli based on binauralized extracts of multichannel productions. This was followed by an automated lexical reduction and a series of consensus group meetings during which participants agreed on a list of relevant attributes. Finally, the proposed list of attributes was evaluated through a listening test, leading to eight valid perceptual attributes for describing the perceptual dimensions affected by HRTF set variations.
Affiliation(s)
- Laurent S R Simon
- Audio Acoustics Group, LIMSI, CNRS, Université Paris-Saclay, 91405 Orsay, France
- Brian F G Katz
- Audio Acoustics Group, LIMSI, CNRS, Université Paris-Saclay, 91405 Orsay, France
23
Forner-Cordero A, Garcia VD, Rodrigues ST, Duysens J. Obstacle Crossing Differences Between Blind and Blindfolded Subjects After Haptic Exploration. J Mot Behav 2016; 48:468-478. [PMID: 27253608] [DOI: 10.1080/00222895.2015.1134434]
Abstract
Little is known about the ability of blind people to cross obstacles after haptically exploring their size and position. Long-term absence of vision may affect spatial cognition in the blind, while their extensive experience with the use of haptic information for guidance may lead to compensation strategies. Seven blind and seven sighted participants (with vision available and blindfolded) walked along a flat pathway and crossed an obstacle after a haptic exploration. Blind and blindfolded subjects used different strategies to cross the obstacle. After the first 20 trials, the blindfolded subjects reduced the distance between the foot and the obstacle at the toe-off instant, while the blind behaved like the subjects with full vision. Blind and blindfolded participants showed larger foot clearance than participants with vision. At foot landing, the hip was further behind the foot in the blindfolded condition, while there were no differences between the blind and the vision conditions. For several parameters of the obstacle crossing task, blind people were more similar to subjects with full vision, indicating that the blind subjects were able to compensate for the lack of vision.
Affiliation(s)
- Arturo Forner-Cordero
- Biomechatronics Lab, Mechatronics Department, Escola Politécnica da Universidade de São Paulo, São Paulo, Brazil
- Valéria D Garcia
- Neuroscience and Behavior, Institute of Psychology, University of São Paulo, São Paulo, Brazil
- Sérgio T Rodrigues
- Laboratory of Information, Vision, and Action (LIVIA), UNESP-State University of São Paulo, Bauru, Brazil
24
Cogné M, Taillade M, N'Kaoua B, Tarruella A, Klinger E, Larrue F, Sauzéon H, Joseph PA, Sorita E. The contribution of virtual reality to the diagnosis of spatial navigation disorders and to the study of the role of navigational aids: A systematic literature review. Ann Phys Rehabil Med 2016; 60:164-176. [PMID: 27017533] [DOI: 10.1016/j.rehab.2015.12.004]
Abstract
INTRODUCTION: Spatial navigation, which involves higher cognitive functions, is frequently implemented in daily activities and is critical to the participation of human beings in mainstream environments. Virtual reality is an expanding tool, which enables on the one hand the assessment of the cognitive functions involved in spatial navigation, and on the other the rehabilitation of patients with spatial navigation difficulties. Topographical disorientation is a frequent deficit among patients suffering from neurological diseases. The use of virtual environments enables the information incorporated into the virtual environment to be manipulated empirically. However, the impact of these manipulations seems to differ according to their nature (quantity, occurrence, and characteristics of the stimuli) and the target population. METHODS: We performed a systematic review of research on virtual spatial navigation covering the period from 2005 to 2015. We focused first on the contribution of virtual spatial navigation for patients with brain injury or schizophrenia, or in the context of ageing and dementia, and then on the impact of visual or auditory stimuli on virtual spatial navigation. RESULTS: On the basis of 6521 abstracts identified in 2 databases (Pubmed and Scopus) with the keywords "navigation" and "virtual", 1103 abstracts were selected by adding the keywords "ageing", "dementia", "brain injury", "stroke", "schizophrenia", "aid", "help", "stimulus" and "cue". Among these, 63 articles were included in the present qualitative analysis. CONCLUSION: Unlike pencil-and-paper tests, virtual reality is useful for assessing large-scale navigation strategies in patients with brain injury or schizophrenia, or in the context of ageing and dementia. Better knowledge about both the impact of the different aids and the cognitive processes involved is essential for the use of aids in neurorehabilitation.
Affiliation(s)
- M Cogné
- EA4136 handicap et système nerveux, université de Bordeaux, 33076 Bordeaux, France; Service de médecine physique et de réadaptation, centre hospitalier universitaire, 33076 Bordeaux, France.
- M Taillade
- EA4136 handicap et système nerveux, université de Bordeaux, 33076 Bordeaux, France
- B N'Kaoua
- EA4136 handicap et système nerveux, université de Bordeaux, 33076 Bordeaux, France; Institut national de recherche en informatique et automatique (INRIA), 33405 Talence cedex, France
- A Tarruella
- EA4136 handicap et système nerveux, université de Bordeaux, 33076 Bordeaux, France; Institut de formation en ergothérapie, centre hospitalier universitaire, 33076 Bordeaux, France
- E Klinger
- Laboratoire interactions numériques santé handicap, ESIEA, 53000 Laval, France
- F Larrue
- Laboratoire Bordelais de recherche en informatique (LaBRI), université de Bordeaux, 33045 Bordeaux, France
- H Sauzéon
- EA4136 handicap et système nerveux, université de Bordeaux, 33076 Bordeaux, France; Institut national de recherche en informatique et automatique (INRIA), 33405 Talence cedex, France
- P-A Joseph
- EA4136 handicap et système nerveux, université de Bordeaux, 33076 Bordeaux, France; Service de médecine physique et de réadaptation, centre hospitalier universitaire, 33076 Bordeaux, France
- E Sorita
- EA4136 handicap et système nerveux, université de Bordeaux, 33076 Bordeaux, France; Institut de formation en ergothérapie, centre hospitalier universitaire, 33076 Bordeaux, France
25
Occelli V, Lacey S, Stephens C, John T, Sathian K. Haptic Object Recognition is View-Independent in Early Blind but not Sighted People. Perception 2015; 45:337-345. [PMID: 26562881] [DOI: 10.1177/0301006615614489]
Abstract
Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition; that is, recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared with the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar three-dimensional objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- Careese Stephens
- Department of Neurology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
- Thomas John
- Department of Neurology, Emory University, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
26
Park CH, Ryu ES, Howard AM. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments. IEEE Trans Haptics 2015; 8:327-338. [PMID: 26219098] [DOI: 10.1109/toh.2015.2460253]
Abstract
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.
27
Buckmann M, Gaschler R, Höfer S, Loeben D, Frensch PA, Brock O. Learning to explore the structure of kinematic objects in a virtual environment. Front Psychol 2015; 6:374. [PMID: 25904878] [PMCID: PMC4387864] [DOI: 10.3389/fpsyg.2015.00374]
Abstract
The current study tested the quantity and quality of human exploration learning in a virtual environment. Given the everyday experience of humans with physical object exploration, we document substantial practice gains in the time, force, and number of actions needed to classify the structure of virtual chains, marking the joints as revolute, prismatic, or rigid. In line with current work on skill acquisition, participants could generalize the new and efficient psychomotor patterns of object exploration to novel objects. On the one hand, practice gains in exploration performance could be captured by a negative exponential practice function. On the other hand, they could be linked to strategies and strategy change. After quantifying how much was learned in object exploration and identifying the time course of practice-related gains in exploration efficiency (speed), we identified what was learned. First, we identified strategy components that were associated with efficient (fast) exploration performance: sequential processing, simultaneous use of both hands, low use of pulling rather than pushing, and low use of force. Only the latter was beneficial irrespective of the characteristics of the other strategy components. Second, we therefore characterized efficient exploration behavior by strategies that simultaneously take into account the abovementioned strategy components. We observed that participants maintained a high level of flexibility, sampling from a pool of exploration strategies that trade the level of psychomotor challenge against exploration speed. We discuss the findings with the aim of advancing intelligent object exploration by combining analytic (object exploration in humans) and synthetic (object exploration in robots) work in the same virtual environment.
Affiliation(s)
- Marcus Buckmann
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany; Center for Adaptive Behavior and Cognition, Max Planck Institute for Human Development, Berlin, Germany
- Robert Gaschler
- Department of Psychology, Universität Koblenz-Landau, Landau, Germany; Department of Psychology, FernUniversität in Hagen, Hagen, Germany
- Sebastian Höfer
- Robotics and Biology Laboratory, Technische Universität Berlin, Berlin, Germany
- Dennis Loeben
- Robotics and Biology Laboratory, Technische Universität Berlin, Berlin, Germany
- Peter A Frensch
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Oliver Brock
- Robotics and Biology Laboratory, Technische Universität Berlin, Berlin, Germany
28
Parseihian G, Jouffrais C, Katz BFG. Reaching nearby sources: comparison between real and virtual sound and visual targets. Front Neurosci 2014; 8:269. [PMID: 25228855] [PMCID: PMC4151089] [DOI: 10.3389/fnins.2014.00269]
Abstract
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy for source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation compares auditory and visual stimuli to examine any bias in the reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant difference in distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed, with several proposals for improving distance perception in peripersonal virtual environments.
Affiliation(s)
- Gaëtan Parseihian
- Laboratoire de Mécanique et d'Informatique pour les Sciences de l'Ingénieur, LIMSI - CNRS, Université Paris Sud, Orsay, France
- Brian F G Katz
- Laboratoire de Mécanique et d'Informatique pour les Sciences de l'Ingénieur, LIMSI - CNRS, Université Paris Sud, Orsay, France
29
Viaud-Delmon I, Warusfel O. From ear to body: the auditory-motor loop in spatial cognition. Front Neurosci 2014; 8:283. [PMID: 25249933] [PMCID: PMC4155796] [DOI: 10.3389/fnins.2014.00283]
Abstract
Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded into an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was triggered only when walking on a precise location of the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorize the location of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths showed how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.
Affiliation(s)
- Isabelle Viaud-Delmon
- CNRS, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Institut de Recherche et Coordination Acoustique/Musique, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Sorbonne Universités, Université Pierre et Marie Curie, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France
- Olivier Warusfel
- CNRS, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Institut de Recherche et Coordination Acoustique/Musique, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Sorbonne Universités, Université Pierre et Marie Curie, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France
30
Connors EC, Chrastil ER, Sánchez J, Merabet LB. Virtual environments for the transfer of navigation skills in the blind: a comparison of directed instruction vs. video game based learning approaches. Front Hum Neurosci 2014; 8:223. [PMID: 24822044] [PMCID: PMC4013463] [DOI: 10.3389/fnhum.2014.00223]
Abstract
For profoundly blind individuals, navigating in an unfamiliar building can represent a significant challenge. We investigated the use of an audio-based virtual environment called the Audio-based Environment Simulator (AbES) that can be explored for the purposes of learning the layout of an unfamiliar, complex indoor environment. Furthermore, we compared two modes of interaction with AbES. In one group, blind participants implicitly learned the layout of a target environment while playing an exploratory, goal-directed video game. By comparison, a second group was explicitly taught the same layout following a standard route and instructions provided by a sighted facilitator. As a control, a third group interacted with AbES while playing an exploratory, goal-directed video game; however, the explored environment did not correspond to the target layout. Following interaction with AbES, a series of route navigation tasks was carried out in the virtual and physical building represented in the training environment to assess the transfer of acquired spatial information. We found that participants from both modes of interaction were able to transfer the spatial knowledge gained, as indexed by their successful route navigation performance. This transfer was not apparent in the control participants. Most notably, the game-based learning strategy was also associated with enhanced performance when participants were required to find alternate routes and shortcuts within the target building, suggesting that a ludic-based training approach may provide a more flexible mental representation of the environment. Furthermore, outcome comparisons between early and late blind individuals suggested that greater prior visual experience did not have a significant effect on overall navigation performance following training. Finally, performance did not appear to be associated with other factors of interest such as age, gender, and verbal memory recall. We conclude that the highly interactive and immersive exploration of the virtual environment greatly engages a blind user to develop skills akin to positive near transfer of learning. Learning through a game-play strategy appears to confer certain behavioral advantages with respect to how spatial information is acquired and ultimately manipulated for navigation.
Affiliation(s)
- Erin C Connors
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School Boston, MA, USA
- Elizabeth R Chrastil
- Department of Psychology, Center for Memory and Brain, Boston University Boston, MA, USA
- Jaime Sánchez
- Department of Computer Science, Center for Advanced Research in Education, University of Chile Santiago, Chile
- Lotfi B Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School Boston, MA, USA
31
Connors EC, Chrastil ER, Sánchez J, Merabet LB. Action video game play and transfer of navigation and spatial cognition skills in adolescents who are blind. Front Hum Neurosci 2014; 8:133. [PMID: 24653690] [PMCID: PMC3949101] [DOI: 10.3389/fnhum.2014.00133]
Abstract
For individuals who are blind, navigating independently in an unfamiliar environment represents a considerable challenge. Inspired by the rising popularity of video games, we have developed a novel approach to train navigation and spatial cognition skills in adolescents who are blind. Audio-based Environment Simulator (AbES) is a software application that allows for the virtual exploration of an existing building set in an action video game metaphor. Using this ludic-based approach to learning, we investigated the ability and efficacy of adolescents with early onset blindness to acquire spatial information gained from the exploration of a target virtual indoor environment. Following game play, participants were assessed on their ability to transfer and mentally manipulate acquired spatial information on a set of navigation tasks carried out in the real environment. Success in the transfer of navigation skill performance was markedly high, suggesting that interacting with AbES leads to the generation of an accurate spatial mental representation. Furthermore, there was a positive correlation between success in game play and navigation task performance. The role of virtual environments and gaming in the development of mental spatial representations is also discussed. We conclude that this game-based learning approach can facilitate the transfer of spatial knowledge and, further, can be used by individuals who are blind for the purposes of navigation in real-world environments.
Affiliation(s)
- Erin C Connors
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School Boston, MA, USA
- Elizabeth R Chrastil
- Department of Psychological and Brain Sciences, Center for Memory and Brain, Boston University Boston, MA, USA
- Jaime Sánchez
- Department of Computer Science, Center for Advanced Research in Education, University of Chile Santiago, Chile
- Lotfi B Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School Boston, MA, USA
32
The effect of vertical and horizontal symmetry on memory for tactile patterns in late blind individuals. Atten Percept Psychophys 2013; 75:375-82. [PMID: 23150215] [DOI: 10.3758/s13414-012-0393-x]
Abstract
Visual stimuli that exhibit vertical symmetry are easier to remember than stimuli symmetric along other axes, an advantage that extends to the haptic modality as well. Critically, the vertical symmetry memory advantage has not been found in early blind individuals, despite their overall superior memory, as compared with sighted individuals, and the presence of an overall advantage for identifying symmetric over asymmetric patterns. The absence of the vertical axis memory advantage in the early blind may depend on their total lack of visual experience or on the effect of prolonged visual deprivation. To disentangle this issue, in this study, we measured the ability of late blind individuals to remember tactile spatial patterns that were either vertically or horizontally symmetric or asymmetric. Late blind participants showed better memory performance for symmetric patterns. An additional advantage for the vertical axis of symmetry over the horizontal one was reported, but only for patterns presented in the frontal plane. In the horizontal plane, no difference was observed between vertical and horizontal symmetric patterns, due to the latter being recalled particularly well. These results are discussed in terms of the influence of the spatial reference frame adopted during exploration. Overall, our data suggest that prior visual experience is sufficient to drive the vertical symmetry memory advantage, at least when an external reference frame based on geocentric cues (i.e., gravity) is adopted.
33
Gandhi TK, Ganesh S, Sinha P. Improvement in spatial imagery following sight onset late in childhood. Psychol Sci 2014; 25:693-701. [PMID: 24406396] [DOI: 10.1177/0956797613513906]
Abstract
The factors contributing to the development of spatial imagery skills are not well understood. Here, we consider whether visual experience shapes these skills. Although differences in spatial imagery between sighted and blind individuals have been reported, it is unclear whether these differences are truly due to visual deprivation or instead are due to extraneous factors, such as reduced opportunities for the blind to interact with their environment. A direct way of assessing vision's contribution to the development of spatial imagery is to determine whether spatial imagery skills change soon after the onset of sight in congenitally blind individuals. We tested 10 children who gained sight after several years of congenital blindness and found significant improvements in their spatial imagery skills following sight-restoring surgeries. These results provide evidence of vision's contribution to spatial imagery and also have implications for the nature of internal spatial representations.
Affiliation(s)
- Tapan K Gandhi
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
34
Abstract
It seems intuitively obvious that active exploration of a new environment will lead to better spatial learning than will passive exposure. However, the literature on this issue is decidedly mixed, in part because the concept itself is not well defined. We identify five potential components of active spatial learning and review the evidence regarding their role in the acquisition of landmark, route, and survey knowledge. We find that (1) idiothetic information in walking contributes to metric survey knowledge, (2) there is little evidence as yet that decision making during exploration contributes to route or survey knowledge, (3) attention to place-action associations and relevant spatial relations contributes to route and survey knowledge, although landmarks and boundaries appear to be learned without effort, (4) route and survey information are differentially encoded in subunits of working memory, and (5) there is preliminary evidence that mental manipulation of such properties facilitates spatial learning. Idiothetic information appears to be necessary to reveal the influence of attention and, possibly, decision making in survey learning, which may explain the mixed results in desktop virtual reality. Thus, there is indeed an active advantage in spatial learning, which manifests itself in the task-dependent acquisition of route and survey knowledge.
35
Kammoun S, Parseihian G, Gutierrez O, Brilhault A, Serpa A, Raynal M, Oriola B, Macé MM, Auvray M, Denis M, Thorpe S, Truillet P, Katz B, Jouffrais C. Navigation and space perception assistance for the visually impaired: The NAVIG project. Ing Rech Biomed 2012. [DOI: 10.1016/j.irbm.2012.01.009]
36
Abstract
A theory of how concept formation begins is presented that accounts for conceptual activity in the first year of life, shows how increasing conceptual complexity comes about, and predicts the order in which new types of information accrue to the conceptual system. In a compromise between nativist and empiricist views, it offers a single domain-general mechanism that redescribes attended spatiotemporal information into an iconic form. The outputs of this mechanism consist of types of spatial information that we know infants attend to in the first months of life. These primitives form the initial basis of concept formation, allow explicit preverbal thought, such as recall, inferences, and simple mental problem solving, and support early language learning. The theory details how spatial concepts become associated with bodily feelings of force and trying. It also explains why concepts of emotions, sensory concepts such as color, and theory of mind concepts are necessarily later acquisitions because they lack contact with spatial descriptions to interpret unstructured internal experiences. Finally, commonalities between the concepts of preverbal infants and nonhuman primates are discussed.
Affiliation(s)
- Jean M Mandler
- Department of Cognitive Science, University of California San Diego, La Jolla, CA 92093-0515, USA.
37
Ruotolo F, Ruggiero G, Vinciguerra M, Iachini T. Sequential vs simultaneous encoding of spatial information: a comparison between the blind and the sighted. Acta Psychol (Amst) 2012; 139:382-9. [PMID: 22192440] [DOI: 10.1016/j.actpsy.2011.11.011]
Abstract
The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is the visual impairment per se or the processing style imposed by the dominant perceptual modality used to acquire spatial information, i.e., simultaneous (vision) vs. sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions along the path. The crucial manipulation concerned the sequential sighted group: their visual exploration was made sequential by placing visual obstacles within the pathway in such a way that they could not see the positions along the pathway simultaneously. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially the congenitally blind. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5 m). This threshold effect may reveal processing limitations due to the need to integrate and update spatial information. Overall, the results suggest that the characteristics of the processing style, rather than the visual impairment per se, affect blind people's spatial mental images.
38
Mental rotation in blind and sighted adolescents: The effects of haptic strategies. Eur Rev Appl Psychol 2011. [DOI: 10.1016/j.erap.2011.05.001]
39
Blind individuals show pseudoneglect in bisecting numerical intervals. Atten Percept Psychophys 2011; 73:1021-8. [DOI: 10.3758/s13414-011-0094-x]