51. MEG sensor patterns reflect perceptual but not categorical similarity of animate and inanimate objects. Neuroimage 2019;193:167-177. DOI: 10.1016/j.neuroimage.2019.03.028

52. Rączy K, Urbańczyk A, Korczyk M, Szewczyk JM, Sumera E, Szwed M. Orthographic Priming in Braille Reading as Evidence for Task-specific Reorganization in the Ventral Visual Cortex of the Congenitally Blind. J Cogn Neurosci 2019;31:1065-1078. PMID: 30938589. DOI: 10.1162/jocn_a_01407
Abstract
The task-specific principle asserts that, following deafness or blindness, the deprived cortex is reorganized in such a way that the task of a given area is preserved even though its input modality has been switched. Accordingly, tactile reading engages the ventral occipitotemporal cortex (vOT) in the blind in a similar way to regular reading in the sighted. Other studies, however, show that the vOT of the blind processes spoken sentence structure, which suggests that the task-specific principle might not apply to the vOT. The strongest evidence for the vOT's engagement in sighted reading comes from orthographic repetition-suppression studies. Here, congenitally blind adults were tested in an fMRI repetition-suppression paradigm. Results reveal a double dissociation, with tactile orthographic priming in the vOT and auditory priming in general language areas. Reconciling our finding with other evidence, we propose that the vOT in the blind serves multiple functions, one of which, orthographic processing, overlaps with its function in the sighted.
Affiliation(s)
- Ewa Sumera, Institute for the Blind and Partially Sighted Children, Krakow, Poland

53. Recruitment of the occipital cortex by arithmetic processing follows computational bias in the congenitally blind. Neuroimage 2019;186:549-556. DOI: 10.1016/j.neuroimage.2018.11.034

54. Buchs G, Heimler B, Amedi A. The Effect of Irrelevant Environmental Noise on the Performance of Visual-to-Auditory Sensory Substitution Devices Used by Blind Adults. Multisens Res 2019;32:87-109. DOI: 10.1163/22134808-20181327
Abstract
Visual-to-auditory Sensory Substitution Devices (SSDs) are a family of non-invasive devices for visual rehabilitation that aim to convey whole-scene visual information through the intact auditory modality. Although proven effective in lab environments, the use of SSDs has yet to be systematically tested in real-life situations. To start filling this gap, in the present work we tested the ability of expert SSD users to filter out irrelevant background noise while focusing on the relevant audio information. Specifically, nine blind expert users of the EyeMusic visual-to-auditory SSD performed a series of identification tasks via SSDs (i.e., shape, color, and conjunction of the two features). Their performance was compared in two separate conditions: a silent baseline, and with irrelevant background sounds from real-life situations, using the same stimuli in a pseudo-random balanced design. Although the participants described the background noise as disturbing, no significant performance differences emerged between the two conditions (i.e., noisy; silent) for any of the tasks. In the conjunction task (shape and color) we found a non-significant trend towards a disturbing effect of the background noise on performance. These findings suggest that visual-to-auditory SSDs can indeed be successfully used in noisy environments and that users can still focus on relevant auditory information while inhibiting irrelevant sounds. Our findings take a step towards the actual use of SSDs in real-life situations while potentially impacting rehabilitation of sensory-deprived individuals.
Affiliation(s)
- Galit Buchs, Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Benedetta Heimler, The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Amir Amedi, Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France

55. Plasticity based on compensatory effector use in the association but not primary sensorimotor cortex of people born without hands. Proc Natl Acad Sci U S A 2018;115:7801-7806. PMID: 29997174. PMCID: PMC6065047. DOI: 10.1073/pnas.1803926115
Abstract
What forces direct brain organization and its plasticity? When brain regions are deprived of their input, which regions reorganize based on compensation for the disability and experience, and which regions show topographically constrained plasticity? People born without hands activate their primary sensorimotor hand region while moving body parts used to compensate for this disability (e.g., their feet). This was taken to suggest a neural organization based on functions, such as performing manual-like dexterous actions, rather than on body parts, in primary sensorimotor cortex. We tested the selectivity for the compensatory body parts in the primary and association sensorimotor cortex of people born without hands (dysplasic individuals). Despite clear compensatory foot use, the primary sensorimotor hand area in the dysplasic subjects showed preference for adjacent body parts that are not compensatorily used as effectors. This suggests that function-based organization, proposed for congenital blindness and deafness, does not apply to the primary sensorimotor cortex deprivation in dysplasia. These findings stress the roles of neuroanatomical constraints like topographical proximity and connectivity in determining the functional development of primary cortex even in extreme, congenital deprivation. In contrast, increased and selective foot movement preference was found in dysplasics' association cortex in the inferior parietal lobule. This suggests that the typical motor selectivity of this region for manual actions may correspond to high-level action representations that are effector-invariant. These findings reveal limitations to compensatory plasticity and experience in modifying brain organization of early topographical cortex compared with association cortices driven by function-based organization.

56. Massiceti D, Hicks SL, van Rheede JJ. Stereosonic vision: Exploring visual-to-auditory sensory substitution mappings in an immersive virtual reality navigation paradigm. PLoS One 2018;13:e0199389. PMID: 29975734. PMCID: PMC6033394. DOI: 10.1371/journal.pone.0199389
Abstract
Sighted people predominantly use vision to navigate spaces, and sight loss has negative consequences for independent navigation and mobility. The recent proliferation of devices that can extract 3D spatial information from visual scenes opens up the possibility of using such mobility-relevant information to assist blind and visually impaired people by presenting it through modalities other than vision. In this work, we present two new methods for encoding visual scenes using spatial audio: simulated echolocation and distance-dependent hum volume modulation. We implemented both methods in a virtual reality (VR) environment and tested them using a 3D motion-tracking device. This allowed participants to physically walk through virtual mobility scenarios, generating data on real locomotion behaviour. Blindfolded sighted participants completed two tasks: maze navigation and obstacle avoidance. Results were measured against a visual baseline in which participants performed the same two tasks without blindfolds. Task completion time, speed, and number of collisions were used as indicators of successful navigation, with additional metrics exploring detailed dynamics of performance. In both tasks, participants were able to navigate using only audio information after minimal instruction. While participants were 65% slower using audio compared to the visual baseline, they reduced their audio navigation time by an average of 21% over just six trials. Hum volume modulation proved over 20% faster than simulated echolocation in both mobility scenarios, and participants also showed the greatest improvement with this sonification method. Nevertheless, we speculate that simulated echolocation remains worth exploring, as it provides more spatial detail and could therefore be more useful in more complex environments. The fact that participants were intuitively able to successfully navigate space with two new visual-to-audio mappings for conveying spatial information motivates the further exploration of these and other mappings, with the goal of assisting blind and visually impaired individuals with independent mobility.
Affiliation(s)
- Daniela Massiceti, Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Stephen Lloyd Hicks, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Joram Jacob van Rheede, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom

57. Graven T, Desebrock C. Bouba or kiki with and without vision: Shape-audio regularities and mental images. Acta Psychol (Amst) 2018;188:200-212. PMID: 29982038. DOI: 10.1016/j.actpsy.2018.05.011
Abstract
Ninety-five percent of the world's population associate a rounded visual shape with the spoken word 'bouba', and an angular visual shape with the spoken word 'kiki', a phenomenon known as the bouba/kiki-effect. The bouba/kiki-effect occurs irrespective of familiarity with either the shape or the word. This study investigated the bouba/kiki-effect when using haptic touch instead of vision, including the role of visual imagery. It also investigated whether the bouba/kiki shape-audio regularities are noticed at all, that is, whether they affect the bouba/kiki-effect itself and/or the recognition of individual bouba/kiki shapes, and finally what mental images they produce. Three experiments were conducted, with three groups of participants: blind, blindfolded, and fully sighted. In Experiment 1, the participants were asked to pick out the tactile/visual shape that they associated with the auditory bouba/kiki. Experiment 1 found that the participants who were blind did not show an instant bouba/kiki-effect (in Trial 1), whereas the blindfolded and the fully sighted did. It also found that the bouba/kiki shape-audio regularities affected the bouba/kiki-effect when using haptic touch: those who were blind did show the bouba/kiki-effect from Trial 4, and those who were blindfolded no longer did. In Experiment 2, the participants were asked to name one tactile/visual shape and a segment of audio together as either 'bouba' or 'kiki'. Experiment 2 found that corresponding shape and audio improved the accuracy of both the blindfolded and the fully sighted, but not of those who were blind: they ignored the audio. Finally, in Experiment 3, the participants were asked to draw the shape that they associated with the auditory bouba/kiki. Experiment 3 found that their mental images, as depicted in their drawings, were not affected by whether they had experienced the bouba/kiki shapes by haptic touch or by vision. Regardless of their prior shape experience, that is, tactile or visual, their mental images included the most characteristic shape feature of bouba and kiki (curve and angle, respectively) and typically not the global shape. Taken together, these experiments suggest that the sensory regularities and mental images concerning bouba and kiki do not have to be based on, or even include, visual information.

58. Vercillo T, Tonelli A, Gori M. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization. Cognition 2018;170:263-269. DOI: 10.1016/j.cognition.2017.10.013

59. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers. J Neurosci 2017;37:11495-11504. PMID: 29061700. DOI: 10.1523/jneurosci.0997-17.2017
Abstract
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10; 9 female, 1 male) and sighted control (n = 15; 9 female, 6 male) participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted, participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex.

SIGNIFICANCE STATEMENT: The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible.

60. Heimler B, Baruffaldi F, Bonmassar C, Venturini M, Pavani F. Multisensory Interference in Early Deaf Adults. J Deaf Stud Deaf Educ 2017;22:422-433. PMID: 28961871. DOI: 10.1093/deafed/enx025
Abstract
Multisensory interactions in deaf cognition are largely unexplored. Unisensory studies suggest that behavioral/neural changes may be more prominent for visual compared to tactile processing in early deaf adults. Here we test whether such an asymmetry results in increased saliency of vision over touch during visuo-tactile interactions. Twenty-three early deaf and 25 hearing adults performed two consecutive visuo-tactile spatial interference tasks. Participants responded either to the elevation of the tactile target while ignoring a concurrent visual distractor at central or peripheral locations (respond to touch/ignore vision), or they performed the opposite task (respond to vision/ignore touch). Multisensory spatial interference emerged in both tasks for both groups. Crucially, deaf participants showed increased interference compared to hearing adults when they attempted to respond to tactile targets and ignore visual distractors, with enhanced difficulties with ipsilateral visual distractors. Analyses of task order revealed that in deaf adults, interference of visual distractors on tactile targets was much stronger when this task followed the one in which vision was behaviorally relevant (respond to vision/ignore touch). These novel results suggest that behavioral/neural changes related to early deafness determine enhanced visual dominance during visuo-tactile multisensory conflict.
Affiliation(s)
- Benedetta Heimler, Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Hadassah Ein-Kerem, Building 3, 5th Floor, Jerusalem 91120, Israel; The Edmond and Lily Safra Center for Brain Research, The Hebrew University of Jerusalem, Hadassah Ein-Kerem, Building 3, 5th Floor, Jerusalem 91120, Israel
- Claudia Bonmassar, Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31, Rovereto TN 38068, Italy
- Marta Venturini, Department of Psychology and Cognitive Sciences, University of Trento, Corso Bettini, 31, Rovereto TN 38068, Italy
- Francesco Pavani, Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31, Rovereto TN 38068, Italy; Department of Psychology and Cognitive Sciences, University of Trento, Corso Bettini, 31, Rovereto TN 38068, Italy

61. Whether the hearing brain hears it or the deaf brain sees it, it’s just the same. Proc Natl Acad Sci U S A 2017;114:8135-8137. DOI: 10.1073/pnas.1710492114