1. Nardini M, Scheller M, Ramsay M, Kristiansen O, Allen C. Towards Human Sensory Augmentation: A Cognitive Neuroscience Framework for Evaluating Integration of New Signals within Perception, Brain Representations, and Subjective Experience. Augmented Human Research 2024; 10:1. PMID: 39497728; PMCID: PMC11533871; DOI: 10.1007/s41133-024-00075-7.
Abstract
New wearable devices and technologies provide unprecedented scope to augment or substitute human perceptual abilities. However, the flexibility to reorganize brain processing to use novel sensory signals during early sensitive periods in infancy is much less evident at later ages, making integration of new signals into adults' perception a significant challenge. We believe that an approach informed by cognitive neuroscience is crucial for maximizing the true potential of new sensory technologies. Here, we present a framework for measuring and evaluating the extent to which new signals are integrated within existing structures of perception and experience. As our testbed, we use laboratory tasks in which healthy volunteers learn new, augmented perceptual-motor skills. We describe a suite of measures of (i) perceptual function (psychophysics), (ii) neural representations (fMRI/decoding), and (iii) subjective experience (qualitative interview/micro-phenomenology) targeted at testing hypotheses about how newly learned signals become integrated within perception and experience. As proof of concept, we provide example data showing how this approach allows us to measure changes in perception, neural processing, and subjective experience. We argue that this framework, in concert with targeted approaches to optimizing training and learning, provides the tools needed to develop and optimize new approaches to human sensory augmentation and substitution.
Affiliation(s)
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
- Chris Allen
- Department of Psychology, Durham University, Durham, UK
2. Maimon A, Wald IY, Snir A, Ben Oz M, Amedi A. Perceiving depth beyond sight: Evaluating intrinsic and learned cues via a proof of concept sensory substitution method in the visually impaired and sighted. PLoS One 2024; 19:e0310033. PMID: 39321152; PMCID: PMC11423994; DOI: 10.1371/journal.pone.0310033.
Abstract
This study explores spatial perception of depth by employing a novel proof-of-concept sensory substitution algorithm. The algorithm taps into existing cognitive scaffolds such as language and cross-modal correspondences by naming objects in the scene while representing their elevation and depth through manipulation of the auditory properties of each axis. While the representation of verticality used a previously tested correspondence with pitch, the representation of depth employed an ecologically inspired manipulation based on the loss of gain and the attenuation of higher-frequency sounds over distance. The study, involving 40 participants, seven of whom were blind (5) or visually impaired (2), investigates how intrinsic this ecologically inspired mapping of auditory cues for depth is by comparing it to an interchanged condition in which the mappings of the two axes are swapped. All participants successfully learned to use the algorithm following a very brief period of training, with the blind and visually impaired participants showing levels of success similar to those of their sighted counterparts. A significant difference was found at baseline between the two conditions, indicating the intuitiveness of the original ecologically inspired mapping. Despite this, participants were able to achieve similar success rates following the training in both conditions. The findings indicate that both intrinsic and learned cues come into play in depth perception. Moreover, they suggest that, through perceptual learning, novel sensory mappings can be trained in adulthood. Regarding the blind and visually impaired, the results also support the convergence view, which claims that with training their spatial abilities can converge with those of the sighted. Finally, we discuss how the algorithm can open new avenues for accessibility technologies, virtual reality, and other practical applications.
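The depth-and-elevation mapping described in this abstract (pitch for elevation; loss of gain and high-frequency attenuation for depth) can be illustrated with a minimal sketch. The sample rate, frequency range, and filter constants below are illustrative assumptions, not the parameters of the published algorithm.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz; assumed value

def sonify_object(elevation, depth, duration=0.5):
    """Illustrative mapping: elevation -> pitch, depth -> gain loss + low-pass filtering.

    elevation: 0 (bottom of scene) to 1 (top of scene)
    depth:     distance to the object in metres
    Returns a mono audio buffer. All constants are assumptions for illustration.
    """
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE

    # Elevation -> pitch: map [0, 1] onto a frequency range (e.g., 200-1200 Hz).
    f0 = 200.0 + elevation * 1000.0
    tone = np.sin(2 * np.pi * f0 * t)

    # Depth -> gain: sound level falls off with distance (inverse-distance law).
    gain = 1.0 / max(depth, 1.0)

    # Depth -> spectral filtering: farther objects lose high-frequency energy.
    # A simple one-pole low-pass whose cutoff decreases with distance.
    cutoff = 8000.0 / max(depth, 1.0)          # Hz
    alpha = np.clip(2 * np.pi * cutoff / SAMPLE_RATE, 0.0, 1.0)
    filtered = np.empty_like(tone)
    y = 0.0
    for i, x in enumerate(tone):
        y = y + alpha * (x - y)                # y[i] = y[i-1] + alpha * (x[i] - y[i-1])
        filtered[i] = y

    return gain * filtered

# Example: the same object (elevation 0.8) rendered at 4 m is softer and duller than at 1 m.
near = sonify_object(elevation=0.8, depth=1.0)
far = sonify_object(elevation=0.8, depth=4.0)
```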
Affiliation(s)
- Amber Maimon
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Digital Media Lab, University of Bremen, Bremen, Germany
- Adi Snir
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Meshi Ben Oz
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Amir Amedi
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
3. Wenger M, Maimon A, Yizhar O, Snir A, Sasson Y, Amedi A. Hearing temperatures: employing machine learning for elucidating the cross-modal perception of thermal properties through audition. Front Psychol 2024; 15:1353490. PMID: 39156805; PMCID: PMC11327021; DOI: 10.3389/fpsyg.2024.1353490.
Abstract
People can use their sense of hearing to discern thermal properties, though they are for the most part unaware that they can do so. Although people unequivocally claim that they cannot perceive the temperature of pouring water from the sound of it being poured, our research further strengthens the evidence that they can. This multimodal ability is implicitly acquired in humans, likely through perceptual learning over a lifetime of exposure to differences in the physical attributes of pouring water. In this study, we explore people's perception of this intriguing cross-modal correspondence and investigate the psychophysical foundations of this complex ecological mapping by employing machine learning. Our results show that not only can humans in practice classify the auditory properties of pouring water, but the physical characteristics underlying this phenomenon can also be classified by a pre-trained deep neural network.
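As a rough illustration of the classification idea in this abstract, a simple spectral summary plus a linear classifier can, in principle, separate labelled recordings of hot and cold pouring water. This is a hedged sketch under assumed data, not the authors' pipeline, which used a pre-trained deep neural network.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def spectral_features(clip, sample_rate=44100):
    """Crude spectral summary of an audio clip (1-D float array).

    Returns the spectral centroid and the high-band energy ratio; hot and cold
    water differ in high-frequency content, which these features capture coarsely.
    """
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    high_ratio = spectrum[freqs > 4000].sum() / spectrum.sum()
    return np.array([centroid, high_ratio])

def classify_pouring_sounds(clips, labels):
    """clips: list of 1-D audio arrays; labels: 0 = cold, 1 = hot (hypothetical dataset)."""
    X = np.vstack([spectral_features(c) for c in clips])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=5).mean()
```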
Affiliation(s)
- Mohr Wenger
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amber Maimon
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Computational Psychiatry and Neurotechnology Lab, Department of Brain and Cognitive Sciences, Ben Gurion University, Be'er Sheva, Israel
- Or Yizhar
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Adi Snir
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Yonatan Sasson
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Amir Amedi
- Baruch Ivcher Institute for Brain Cognition and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
4. Goral O, Wald IY, Maimon A, Snir A, Golland Y, Goral A, Amedi A. Enhancing interoceptive sensibility through exteroceptive-interoceptive sensory substitution. Sci Rep 2024; 14:14855. PMID: 38937475; PMCID: PMC11211335; DOI: 10.1038/s41598-024-63231-4.
Abstract
Exploring a novel approach to mental health technology, this study illuminates the intricate interplay between exteroception (the perception of the external world) and interoception (the perception of the internal world). Drawing on principles of sensory substitution, we investigated how interoceptive signals, particularly respiration, could be conveyed through exteroceptive modalities, namely vision and hearing. To this end, we developed a unique, immersive multisensory environment that translates respiratory signals in real time into dynamic visual and auditory stimuli. The system was evaluated with a battery of psychological assessments, with the findings indicating a significant increase in participants' interoceptive sensibility and an enhancement of the state of flow, signifying immersive and positive engagement with the experience. Furthermore, a correlation between these two variables emerged, revealing a bidirectional enhancement between the state of flow and interoceptive sensibility. Our research is the first to present a sensory substitution approach between interoceptive and exteroceptive senses, and specifically as a transformative method for mental health interventions, paving the way for future research.
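A minimal sketch of the kind of real-time mapping the abstract describes, in which a respiration trace drives visual and auditory stimulus parameters. The smoothing window and output ranges are assumptions for illustration, not the study's actual system.

```python
import numpy as np

def respiration_to_stimuli(breath_signal, fs=50.0, smooth_s=0.5):
    """Map a respiration trace (chest-expansion samples) to stimulus parameters.

    Returns, per sample, a visual brightness in [0, 1] and an audio pitch in Hz.
    The exact mapping used in the study is not specified here; this is illustrative.
    """
    # Smooth the raw belt signal with a moving average.
    win = max(int(smooth_s * fs), 1)
    kernel = np.ones(win) / win
    smoothed = np.convolve(breath_signal, kernel, mode="same")

    # Normalise to [0, 1] over the observed range.
    lo, hi = smoothed.min(), smoothed.max()
    norm = (smoothed - lo) / (hi - lo + 1e-9)

    brightness = norm                      # inhale -> brighter visual field
    pitch_hz = 220.0 + norm * 220.0        # inhale -> rising tone (220-440 Hz)
    return brightness, pitch_hz
```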
Affiliation(s)
- Oran Goral
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Iddo Yehoshua Wald
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Digital Media Lab, Bremen University, Bremen, Germany
- Amber Maimon
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Computational Psychiatry and Neurotechnology Lab, Ben Gurion University, Be'er Sheva, Israel
- Adi Snir
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Yulia Golland
- Sagol Center for Brain and Mind, Reichman University, Herzliya, Israel
- Aviva Goral
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- Amir Amedi
- Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
5. Norman LJ, Hartley T, Thaler L. Changes in primary visual and auditory cortex of blind and sighted adults following 10 weeks of click-based echolocation training. Cereb Cortex 2024; 34:bhae239. PMID: 38897817; PMCID: PMC11186672; DOI: 10.1093/cercor/bhae239.
Abstract
Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups, suggesting that a more nuanced view may be required.
Affiliation(s)
- Liam J Norman
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
- Tom Hartley
- Department of Psychology and York Biomedical Research Institute, University of York, Heslington, YO10 5DD, UK
- Lore Thaler
- Department of Psychology, Durham University, Durham, DH1 3LE, UK
6. Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708; PMCID: PMC10899073; DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What this 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent, and the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
7. Lettieri G, Handjaras G, Cappello EM, Setti F, Bottari D, Bruno V, Diano M, Leo A, Tinti C, Garbarini F, Pietrini P, Ricciardi E, Cecchetti L. Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain. Sci Adv 2024; 10:eadk6840. PMID: 38457501; PMCID: PMC10923499; DOI: 10.1126/sciadv.adk6840.
Abstract
Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically developed, congenitally blind and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience more than modality affects how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states where sensory inputs during development shape its functioning.
Affiliation(s)
- Giada Lettieri
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Giacomo Handjaras
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Elisa M. Cappello
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Francesca Setti
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Bottari
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Matteo Diano
- Department of Psychology, University of Turin, Turin, Italy
- Andrea Leo
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
- Pietro Pietrini
- Forensic Neuroscience and Psychiatry Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Emiliano Ricciardi
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Luca Cecchetti
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
8. Negen J, Slater H, Nardini M. Sensory augmentation for a rapid motor task in a multisensory environment. Restor Neurol Neurosci 2024; 42:113-120. PMID: 37302045; PMCID: PMC11492005; DOI: 10.3233/rnn-221279.
Abstract
Background: Sensory substitution and augmentation systems (SSASy) seek to either replace or enhance existing sensory skills by providing a new route to access information about the world. Tests of such systems have largely been limited to untimed, unisensory tasks. Objective: To test the use of a SSASy for rapid, ballistic motor actions in a multisensory environment. Methods: Participants played a stripped-down version of air hockey in virtual reality with motion controls (Oculus Touch). They were trained to use a simple SSASy (a novel audio cue) signalling the puck's location. They were then tested on their ability to strike an oncoming puck using the SSASy, degraded vision, or both. Results: Participants coordinated vision and the SSASy to strike the target with their hand more consistently than with the best single cue alone, t(13) = 9.16, p < .001, Cohen's d = 2.448. Conclusions: People can adapt flexibly to using a SSASy in tasks that require tightly timed, precise, and rapid body movements. SSASys can augment and coordinate with existing sensorimotor skills rather than being limited to replacement use cases - in particular, there is potential scope for treating moderate vision loss. These findings point to the potential for augmenting human abilities, not only for static perceptual judgments, but in rapid and demanding perceptual-motor tasks.
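The comparison reported here (striking more consistently with vision plus the SSASy than with the best single cue) is naturally benchmarked against reliability-weighted cue combination, in which the combined estimate is never noisier than the better cue. A minimal sketch follows; the variances are hypothetical.

```python
import numpy as np

def combined_variance(var_vision, var_ssasy):
    """Predicted variance of an optimally weighted combination of two independent cues.

    Weights are inversely proportional to each cue's variance, so the combined
    estimate is never worse than the better single cue.
    """
    return (var_vision * var_ssasy) / (var_vision + var_ssasy)

# Hypothetical numbers: degraded vision sigma = 6 cm, audio cue sigma = 8 cm.
var_v, var_a = 6.0 ** 2, 8.0 ** 2
print(np.sqrt(combined_variance(var_v, var_a)))  # ~4.8 cm, better than either cue alone
```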
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University, Liverpool, UK
- Marko Nardini
- Psychology Department, Durham University, Durham, UK
9. Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. PMID: 37545304; PMCID: PMC10404931; DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
10. Dȩbska A, Wójcik M, Chyl K, Dziȩgiel-Fivet G, Jednoróg K. Beyond the Visual Word Form Area - a cognitive characterization of the left ventral occipitotemporal cortex. Front Hum Neurosci 2023; 17:1199366. PMID: 37576470; PMCID: PMC10416454; DOI: 10.3389/fnhum.2023.1199366.
Abstract
The left ventral occipitotemporal cortex has traditionally been viewed as a pathway for visual object recognition, including written letters and words. Its crucial role in reading was strengthened by studies on the functionally localized "Visual Word Form Area" responsible for processing word-like information. However, in the past 20 years, empirical studies have challenged the assumption that this brain region processes exclusively visual, or even orthographic, stimuli. In this review, we aimed to trace how understanding of the left ventral occipitotemporal cortex has developed, from a visually based letter area to a modality-independent, symbolic, language-related region. We discuss theoretical and empirical research that includes orthographic, phonological, and semantic properties of language. Existing results show that involvement of the left ventral occipitotemporal cortex is not limited to unimodal activity but also includes multimodal processes. The idea of the integrative nature of this region is supported by its broad functional and structural connectivity with language-related and attentional brain networks. We conclude that although the function of the area is not yet fully understood in human cognition, its role goes beyond visual word form processing. The left ventral occipitotemporal cortex seems to be crucial for combining higher-level language information with abstract forms that convey meaning independently of modality.
Affiliation(s)
- Agnieszka Dȩbska
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Marta Wójcik
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Chyl
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- The Educational Research Institute, Warsaw, Poland
- Gabriela Dziȩgiel-Fivet
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
11. Pang W, Zhou W, Ruan Y, Zhang L, Shu H, Zhang Y, Zhang Y. Visual Deprivation Alters Functional Connectivity of Neural Networks for Voice Recognition: A Resting-State fMRI Study. Brain Sci 2023; 13:brainsci13040636. PMID: 37190601; DOI: 10.3390/brainsci13040636.
Abstract
Humans recognize one another by identifying their voices and faces. For sighted people, the integration of voice and face signals in corresponding brain networks plays an important role in facilitating the process. However, individuals with vision loss primarily resort to voice cues to recognize a person's identity. It remains unclear how the neural systems for voice recognition reorganize in the blind. In the present study, we collected behavioral and resting-state fMRI data from 20 early blind (5 females; mean age = 22.6 years) and 22 sighted control (7 females; mean age = 23.7 years) individuals. We aimed to investigate the alterations in the resting-state functional connectivity (FC) among the voice- and face-sensitive areas in blind subjects in comparison with controls. We found that the intranetwork connections among voice-sensitive areas, including amygdala-posterior "temporal voice areas" (TVAp), amygdala-anterior "temporal voice areas" (TVAa), and amygdala-inferior frontal gyrus (IFG) were enhanced in the early blind. The blind group also showed increased FCs of "fusiform face area" (FFA)-IFG and "occipital face area" (OFA)-IFG but decreased FCs between the face-sensitive areas (i.e., FFA and OFA) and TVAa. Moreover, the voice-recognition accuracy was positively related to the strength of TVAp-FFA in the sighted, and the strength of amygdala-FFA in the blind. These findings indicate that visual deprivation shapes functional connectivity by increasing the intranetwork connections among voice-sensitive areas while decreasing the internetwork connections between the voice- and face-sensitive areas. Moreover, the face-sensitive areas are still involved in the voice-recognition process in blind individuals through pathways such as the subcortical-occipital or occipitofrontal connections, which may benefit the visually impaired greatly during voice processing.
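A minimal sketch of the resting-state functional connectivity measure referenced in this abstract: the Pearson correlation between two ROI time courses, Fisher z-transformed before group comparison. The ROI names and data below are placeholders, not the study's data.

```python
import numpy as np

def roi_functional_connectivity(ts_a, ts_b):
    """FC between two ROI time courses (1-D arrays of equal length).

    Returns the Fisher z-transformed Pearson correlation, the usual quantity
    entered into group-level tests (e.g., blind vs. sighted).
    """
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return np.arctanh(r)

# Hypothetical data: 200 resting-state volumes for two seed regions.
rng = np.random.default_rng(0)
tva_p = rng.standard_normal(200)
amygdala = 0.5 * tva_p + rng.standard_normal(200)
print(roi_functional_connectivity(tva_p, amygdala))
```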
Affiliation(s)
- Wenbin Pang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Wei Zhou
- Beijing Key Lab of Learning and Cognition, School of Psychology, Capital Normal University, Beijing 100048, China
- Yufang Ruan
- School of Communication Sciences and Disorders, Faculty of Medicine and Health Sciences, McGill University, Montréal, QC H3A 1G1, Canada
- Centre for Research on Brain, Language and Music, Montréal, QC H3A 1G1, Canada
- Linjun Zhang
- School of Chinese as a Second Language, Peking University, Beijing 100871, China
- Hua Shu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, The University of Minnesota, Minneapolis, MN 55455, USA
- Yumei Zhang
- China National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Department of Rehabilitation, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
12. Gaglianese A, Fracasso A, Fernandes FG, Harvey B, Dumoulin SO, Petridou N. Mechanisms of speed encoding in the human middle temporal cortex measured by 7T fMRI. Hum Brain Mapp 2023; 44:2050-2061. PMID: 36637226; PMCID: PMC9980888; DOI: 10.1002/hbm.26193.
Abstract
Perception of dynamic scenes in our environment results from the evaluation of visual features such as the fundamental spatial and temporal frequency components of a moving object. The ratio between these two components represents the object's speed of motion. The human middle temporal cortex (hMT+) has a crucial biological role in the direct encoding of object speed. However, the link between hMT+ speed encoding and the spatiotemporal frequency components of a moving object is still underexplored. Here, we recorded high-resolution 7T blood oxygen level-dependent (BOLD) responses to different visual motion stimuli as a function of their fundamental spatial and temporal frequency components. We fitted each hMT+ BOLD response with a 2D Gaussian model allowing for two different speed encoding mechanisms: (1) distinct and independent selectivity for the spatial and temporal frequencies of the visual motion stimuli; (2) pure tuning for the speed of motion. We show that both mechanisms occur but in different neuronal groups within hMT+, with the largest subregion of the complex showing separable tuning for the spatial and temporal frequency of the visual stimuli. Both mechanisms were highly reproducible within participants, reconciling single-cell recordings from MT in animals that have shown both encoding mechanisms. Our findings confirm that a more complex process is involved in the perception of speed than initially thought and suggest that hMT+ plays a primary role in the evaluation of the spatial features of the moving visual input.
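The two mechanisms contrasted in this abstract can be expressed as constraints on a 2D Gaussian fit over log spatial and temporal frequency: separable spatiotemporal tuning corresponds to an unrotated Gaussian, whereas speed tuning corresponds to a Gaussian aligned with iso-speed (diagonal) lines. The sketch below illustrates such a fit on simulated responses; stimulus values and parameters are assumptions, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(xy, amp, sf0, tf0, sigma_sf, sigma_tf, theta):
    """2D Gaussian over (log spatial frequency, log temporal frequency).

    theta = 0 gives separable SF/TF tuning; theta near pi/4 aligns the Gaussian
    with iso-speed lines (TF/SF = const), i.e., speed tuning.
    """
    log_sf, log_tf = xy
    x = log_sf - sf0
    y = log_tf - tf0
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return amp * np.exp(-0.5 * ((xr / sigma_sf) ** 2 + (yr / sigma_tf) ** 2))

# Hypothetical BOLD amplitudes for a grid of stimulus SFs (cyc/deg) and TFs (Hz).
rng = np.random.default_rng(0)
sfs = np.log2([0.5, 1, 2, 4])
tfs = np.log2([1, 2, 4, 8])
log_sf, log_tf = np.meshgrid(sfs, tfs)
xy = np.vstack([log_sf.ravel(), log_tf.ravel()])
bold = gaussian2d(xy, 1.0, 1.0, 1.5, 1.0, 1.0, np.pi / 4) + 0.05 * rng.standard_normal(xy.shape[1])

params, _ = curve_fit(gaussian2d, xy, bold, p0=[1.0, 1.0, 1.0, 1.0, 1.0, 0.0])
print("fitted orientation (rad):", params[-1])  # near pi/4 would indicate a speed-tuned response
```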
Affiliation(s)
- Anna Gaglianese
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center, Utrecht, Netherlands
- Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
- Alessio Fracasso
- Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Spinoza Center for Neuroimaging, Amsterdam, Netherlands
- Francisco G. Fernandes
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center, Utrecht, Netherlands
- Ben Harvey
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Serge O. Dumoulin
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
- Natalia Petridou
- Department of Radiology, Center for Image Sciences, University Medical Center, Utrecht, Netherlands
13. Setti F, Handjaras G, Bottari D, Leo A, Diano M, Bruno V, Tinti C, Cecchetti L, Garbarini F, Pietrini P, Ricciardi E. A modality-independent proto-organization of human multisensory areas. Nat Hum Behav 2023; 7:397-410. PMID: 36646839; PMCID: PMC10038796; DOI: 10.1038/s41562-022-01507-3.
Abstract
The processing of multisensory information is based upon the capacity of brain regions, such as the superior temporal cortex, to combine information across modalities. However, it is still unclear whether the representation of coherent auditory and visual events requires any prior audiovisual experience to develop and function. Here we measured brain synchronization during the presentation of an audiovisual, audio-only or video-only version of the same narrative in distinct groups of sensory-deprived (congenitally blind and deaf) and typically developed individuals. Intersubject correlation analysis revealed that the superior temporal cortex was synchronized across auditory and visual conditions, even in sensory-deprived individuals who lack any audiovisual experience. This synchronization was primarily mediated by low-level perceptual features, and relied on a similar modality-independent topographical organization of slow temporal dynamics. The human superior temporal cortex is naturally endowed with a functional scaffolding to yield a common representation across multisensory events.
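A minimal sketch of the intersubject correlation analysis named in this abstract: each subject's regional time course is correlated with the mean time course of the remaining subjects, and the resulting synchronization can be compared across presentation conditions or groups. The array shapes and data are placeholders.

```python
import numpy as np

def intersubject_correlation(timecourses):
    """Leave-one-out ISC for one region.

    timecourses: array of shape (n_subjects, n_timepoints), e.g., superior
    temporal cortex responses to the audio-only or video-only narrative.
    Returns one correlation per subject.
    """
    n_subj = timecourses.shape[0]
    isc = np.empty(n_subj)
    for s in range(n_subj):
        others = np.delete(timecourses, s, axis=0).mean(axis=0)
        isc[s] = np.corrcoef(timecourses[s], others)[0, 1]
    return isc

# Hypothetical example: 20 subjects, 300 timepoints with a shared component.
rng = np.random.default_rng(1)
shared = rng.standard_normal(300)
data = shared + rng.standard_normal((20, 300))
print(intersubject_correlation(data).mean())
```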
Affiliation(s)
- Francesca Setti
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Bottari
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Andrea Leo
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Matteo Diano
- Department of Psychology, University of Turin, Turin, Italy
- Valentina Bruno
- Manibus Lab, Department of Psychology, University of Turin, Turin, Italy
- Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
- Luca Cecchetti
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Pietro Pietrini
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
14. Zhu H, Tang X, Chen T, Yang J, Wang A, Zhang M. Audiovisual illusion training improves multisensory temporal integration. Conscious Cogn 2023; 109:103478. PMID: 36753896; DOI: 10.1016/j.concog.2023.103478.
Abstract
When we perceive external physical stimuli from the environment, the brain must remain somewhat flexible to unaligned stimuli within a specific range, as multisensory signals are subject to different transmission and processing delays. Recent studies have shown that the width of the 'temporal binding window' (TBW) can be reduced by perceptual learning. However, to date, the vast majority of studies examining the mechanisms of perceptual learning have focused on experience-dependent effects, failing to reach a consensus on its relationship with the underlying perception influenced by audiovisual illusion. Training with the sound-induced flash illusion (SiFI) is a reliable means of improving perceptual sensitivity. The present study utilized the classic auditory-dominated SiFI paradigm with feedback training to investigate the effect of 5-day SiFI training on multisensory temporal integration, as evaluated by a simultaneity judgment (SJ) task and a temporal order judgment (TOJ) task. We demonstrate that audiovisual illusion training enhances the precision of multisensory temporal integration in the form of (i) a shift of the point of subjective simultaneity (PSS) toward physical simultaneity (0 ms) and (ii) a narrowing TBW. The results are consistent with a Bayesian model of causal inference, suggesting that perceptual learning reduces susceptibility to the SiFI while improving the precision of audiovisual temporal estimation.
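The two outcomes reported here can be read off a simultaneity-judgment curve: fitting a Gaussian to the proportion of 'simultaneous' responses as a function of audiovisual asynchrony yields the point of subjective simultaneity (the peak location) and a temporal binding window estimate (the width). The sketch below uses invented data purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_gaussian(soa, amp, pss, sigma):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return amp * np.exp(-0.5 * ((soa - pss) / sigma) ** 2)

# Hypothetical pre-training data: SOAs (audio-leading negative) and response rates.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_simultaneous = np.array([0.10, 0.25, 0.60, 0.85, 0.95, 0.90, 0.70, 0.35, 0.15])

params, _ = curve_fit(sj_gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0])
amp, pss, sigma = params
print(f"PSS = {pss:.1f} ms; TBW estimate (+/- 1 SD) = {2 * sigma:.1f} ms")
# Training effects would appear as the PSS moving toward 0 ms and sigma shrinking.
```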
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Tingji Chen
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
15. Aggius-Vella E, Chebat DR, Maidenbaum S, Amedi A. Activation of human visual area V6 during egocentric navigation with and without visual experience. Curr Biol 2023; 33:1211-1219.e5. PMID: 36863342; DOI: 10.1016/j.cub.2023.02.025.
Abstract
V6 is a retinotopic area located in the dorsal visual stream that integrates eye movements with retinal and visuo-motor signals. Despite the known role of V6 in visual motion, it is unknown whether it is involved in navigation and how sensory experiences shape its functional properties. We explored the involvement of V6 in egocentric navigation in sighted and in congenitally blind (CB) participants navigating via an in-house distance-to-sound sensory substitution device (SSD), the EyeCane. We performed two fMRI experiments on two independent datasets. In the first experiment, CB and sighted participants navigated the same mazes. The sighted performed the mazes via vision, while the CB performed them via audition. The CB performed the mazes before and after a training session, using the EyeCane SSD. In the second experiment, a group of sighted participants performed a motor topography task. Our results show that right V6 (rhV6) is selectively involved in egocentric navigation independently of the sensory modality used. Indeed, after training, rhV6 of CB is selectively recruited for auditory navigation, similarly to rhV6 in the sighted. Moreover, we found activation for body movement in area V6, which can putatively contribute to its involvement in egocentric navigation. Taken together, our findings suggest that area rhV6 is a unique hub that transforms spatially relevant sensory information into an egocentric representation for navigation. While vision is clearly the dominant modality, rhV6 is in fact a supramodal area that can develop its selectivity for navigation in the absence of visual experience.
Affiliation(s)
- Elena Aggius-Vella
- The Baruch Ivcher Institute for Brain, Cognition & Technology, Reichman University, 4610101 Herzliya, Israel
- Daniel-Robert Chebat
- Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, 4076414 Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel University, 4076414 Ariel, Israel
- Shachar Maidenbaum
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, 8410501 Beersheba, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, 8410501 Beersheba, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, Reichman University, 4610101 Herzliya, Israel
16. Maimon A, Netzer O, Heimler B, Amedi A. Testing geometry and 3D perception in children following vision restoring cataract-removal surgery. Front Neurosci 2023; 16:962817. PMID: 36711132; PMCID: PMC9879291; DOI: 10.3389/fnins.2022.962817.
Abstract
As neuroscience and rehabilitative techniques advance, age-old questions concerning the visual experience of those who gain sight after blindness, once thought to be philosophical alone, take center stage and become the target of scientific inquiry. In this study, we employ a battery of visual perception tasks to study the unique experience of a small group of children who have undergone vision-restoring cataract removal surgery as part of the Himalayan Cataract Project. We tested their abilities to perceive in three dimensions (3D) using a binocular rivalry task and the Brock string task, to perceive visual illusions, to use cross-modal mappings between touch and vision, and to group spatially based on geometric cues. Some of the children in this study gained a sense of sight for the first time in their lives, having been born with bilateral congenital cataracts, while others suffered late-onset blindness in one eye alone. This study simultaneously supports yet raises further questions concerning Hubel and Wiesel's critical periods theory, and provides additional insight into Molyneux's problem, the question of whether vision can quickly be correlated with touch. We suggest that our findings present a relatively unexplored intermediate stage of 3D vision development. Importantly, we spotlight some essential geometrical perception abilities that strengthen the idea that spontaneous geometry intuitions arise independently of visual experience (and education), thus replicating and extending previous studies. We introduce a previously unexplored model of testing children who have undergone congenital cataract removal surgery and perform the tasks via vision, whereas previous work has explored these abilities in the congenitally blind via touch. Taken together, our findings provide insight into the development of what is commonly known as the visual system in the visually deprived and highlight the need to further empirically explore an amodal, task-based interpretation of specializations in the development and structure of the brain. Moreover, we propose a novel objective method, based on a simple binocular rivalry task and the Brock string task, for determining congenital (early) vs. late blindness where medical history and records are partial or lacking (e.g., as is often the case in cataract removal cases).
Affiliation(s)
- Amber Maimon
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Ophir Netzer
- Gonda Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Benedetta Heimler
- Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
17. Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. PMID: 36936618; PMCID: PMC10017858; DOI: 10.3389/fnhum.2023.1058617.
Abstract
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic for the blind areas of sighted individuals, testing, in an initial proof-of-concept study, the ability of sighted subjects to combine visual information with surrounding auditory sonification of visual information. Participants were tasked with recognizing and correctly placing stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants performed well above chance after a brief 1-h-long online training session and one on-site training session averaging 20 min. In some cases they could even draw a 2D representation of the image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
Affiliation(s)
- Shira Shvadron
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Adi Snir
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Amber Maimon
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
- Sapir Harel
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Keinan Poradosu
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Weizmann Institute of Science, Rehovot, Israel
- Amir Amedi
- Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
- The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
18. Negen J, Slater H, Bird LA, Nardini M. Internal biases are linked to disrupted cue combination in children and adults. J Vis 2022; 22:14. DOI: 10.1167/jov.22.12.14.
Affiliation(s)
- James Negen
- School of Psychology, Liverpool John Moores University, Liverpool, UK
- Laura-Ashleigh Bird
- Department of Psychology, Durham University, Durham, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
19. Korczyk M, Zimmermann M, Bola Ł, Szwed M. Superior visual rhythm discrimination in expert musicians is most likely not related to cross-modal recruitment of the auditory cortex. Front Psychol 2022; 13:1036669. PMID: 36337485; PMCID: PMC9632485; DOI: 10.3389/fpsyg.2022.1036669.
Abstract
Training can influence behavioral performance and lead to brain reorganization. In particular, training in one modality, for example, auditory, can improve performance in another modality, for example, visual. Previous research suggests that one of the mechanisms behind this phenomenon could be the cross-modal recruitment of the sensory areas, for example, the auditory cortex. Studying expert musicians offers a chance to explore this process. Rhythm is an aspect of music that can be presented in various modalities. We designed an fMRI experiment in which professional pianists and non-musicians discriminated between two sequences of rhythms presented auditorily (series of sounds) or visually (series of flashes). Behavioral results showed that musicians performed in both visual and auditory rhythmic tasks better than non-musicians. We found no significant between-group differences in fMRI activations within the auditory cortex. However, we observed that musicians had increased activation in the right Inferior Parietal Lobe when compared to non-musicians. We conclude that the musicians’ superior visual rhythm discrimination is not related to cross-modal recruitment of the auditory cortex; instead, it could be related to activation in higher-level, multimodal areas in the cortex.
Affiliation(s)
- Łukasz Bola
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Institute of Psychology, Polish Academy of Sciences, Warszawa, Poland
- Marcin Szwed
- Institute of Psychology, Jagiellonian University, Kraków, Poland
20. Arbel R, Heimler B, Amedi A. Face shape processing via visual-to-auditory sensory substitution activates regions within the face processing networks in the absence of visual experience. Front Neurosci 2022; 16:921321. PMID: 36263367; PMCID: PMC9576157; DOI: 10.3389/fnins.2022.921321.
Abstract
Previous evidence suggests that visual experience is crucial for the emergence and tuning of the typical neural system for face recognition. To challenge this conclusion, we trained congenitally blind adults to recognize faces via visual-to-auditory sensory substitution (SSD). Our results showed a preference for trained faces over other SSD-conveyed visual categories in the fusiform gyrus and in other known face-responsive regions of the deprived ventral visual stream. We also observed a parametric modulation in the same cortical regions for face orientation (upright vs. inverted) and face novelty (trained vs. untrained). Our results strengthen the conclusion that there is a predisposition for sensory-independent and computation-specific processing in specific cortical regions that can be retained through life-long sensory deprivation, independently of previous perceptual experience. They also highlight that, if the right training is provided, such cortical preference maintains its tuning to what were considered visual-specific face features.
Affiliation(s)
- Roni Arbel
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Pediatrics, Hadassah University Hospital-Mount Scopus, Jerusalem, Israel
- Benedetta Heimler
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation, Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi
- Department of Medical Neurobiology, Hadassah Ein-Kerem, Hebrew University of Jerusalem, Jerusalem, Israel
- Ivcher School of Psychology, The Institute for Brain, Mind, and Technology, Reichman University, Herzeliya, Israel
21. Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022; 44:656-667. PMID: 36169038; PMCID: PMC9842891; DOI: 10.1002/hbm.26090.
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain-specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirmed the domain specificity of visual and auditory cortices and revealed that this aspect also selectively modulates cortical activity in response to multisensory stimuli.
Collapse
Affiliation(s)
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
| | - Giorgia Bertonati
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
| | - Claudio Campus
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
| | - Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
| |
Collapse
|
22
|
Investigation of Neural Substrates of Erroneous Behavior in a Delayed-Response Task. eNeuro 2022; 9:ENEURO.0490-21.2022. [PMID: 35365501 PMCID: PMC9007410 DOI: 10.1523/eneuro.0490-21.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 03/08/2022] [Accepted: 03/24/2022] [Indexed: 11/29/2022] Open
Abstract
Motor cortical neurons exhibit persistent selective activities (selectivity) during motor planning. Experimental perturbation of selectivity results in the failure of short-term memory retention and consequent behavioral biases, demonstrating selectivity as a neural characteristic of encoding previous sensory input or future action. However, even without experimental manipulation, animals occasionally fail to maintain short-term memory, leading to erroneous choices. Here, we investigated neural substrates that lead to the incorrect formation of selectivity during short-term memory. We analyzed neuronal activities in the anterior lateral motor cortex (ALM) of mice, a region known to be engaged in motor planning, while the mice performed a tactile delayed-response task. We found that highly selective neurons lost their selectivity while originally nonselective neurons showed selectivity during error trials in which mice licked in the incorrect direction. We assumed that those alternations would reflect changes in intrinsic properties of population activity. Thus, we estimated an intrinsic manifold shared by the neuronal population (shared space) using factor analysis (FA), and measured the association of individual neurons with the shared space by communality, the variance of neuronal activity accounted for by the shared space. We found a positive correlation between selectivity and communality over ALM neurons, which disappeared in erroneous behavior. Notably, neurons showing selectivity alternations between correct and incorrect licking also underwent proportional changes in communality. Our results demonstrate that the extent to which an ALM neuron is associated with the intrinsic manifolds of population activity may elucidate its selectivity, and that disruption of this association may alter selectivity, likely leading to erroneous behavior.
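A minimal sketch of the communality computation described above, assuming simulated trial-by-neuron activity and a scikit-learn factor analysis; the variable names, the number of factors, and the selectivity proxy are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_trials, n_neurons, n_factors = 200, 50, 3

# Simulated activity: trial type (e.g., lick-left vs lick-right) modulates one shared factor
labels = rng.integers(0, 2, size=n_trials)
latent = rng.normal(size=(n_trials, n_factors))
latent[:, 0] += 1.5 * (labels - 0.5)
loadings = rng.normal(size=(n_factors, n_neurons))
activity = latent @ loadings + rng.normal(scale=2.0, size=(n_trials, n_neurons))

# Estimate the shared space with factor analysis
fa = FactorAnalysis(n_components=n_factors).fit(activity)

# Communality: fraction of each neuron's variance accounted for by the shared factors
shared_var = np.sum(fa.components_ ** 2, axis=0)
communality = shared_var / (shared_var + fa.noise_variance_)

# Selectivity proxy: absolute difference in mean activity between the two trial types
selectivity = np.abs(activity[labels == 0].mean(axis=0) - activity[labels == 1].mean(axis=0))

# Correlation between selectivity and communality across neurons
print(np.corrcoef(selectivity, communality)[0, 1])
```

In this toy setup the printed correlation tends to be positive, because the trial-type signal enters the population through a shared factor, which is the qualitative pattern the abstract reports for correct trials.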
Collapse
|
23
|
Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022; 12:3206. [PMID: 35217676 PMCID: PMC8881456 DOI: 10.1038/s41598-022-06855-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Accepted: 01/28/2022] [Indexed: 11/09/2022] Open
Abstract
Understanding speech in background noise is challenging. Wearing face masks, as imposed by the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, we found a comparable mean group improvement of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions, i.e., when the participants were asked to repeat sentences from hearing alone and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete both types of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), which indicates a potential facilitating effect of the added vibrations. In addition, both before and after training, most of the participants (70-80%) showed better performance (by a mean of 4-6 dB) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, i.e., when participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable better use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as in healthy individuals in suboptimal acoustic situations.
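The loudness rule of thumb used above (10 dB corresponding to a doubling of perceived loudness) can be turned into a quick back-of-the-envelope check; the function below is an illustrative approximation based on that rule, not part of the study's analysis:

```python
# Approximate perceived-loudness ratio implied by a level change, using the
# abstract's rule of thumb that +10 dB doubles perceived loudness.
def loudness_ratio(delta_db: float) -> float:
    return 2.0 ** (delta_db / 10.0)

for delta_db in (10, 14, 16):
    print(f"{delta_db} dB improvement ~ {loudness_ratio(delta_db):.1f}x perceived loudness")
# 10 dB ~ 2.0x, 14 dB ~ 2.6x, 16 dB ~ 3.0x  (the range of the reported SRT gains)
```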
Collapse
Affiliation(s)
- K Cieśla
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
| | - T Wolak
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
| | - A Lorens
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
| | - M Mentzel
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
| | - H Skarżyński
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
| | - A Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
| |
Collapse
|
24
|
de Sousa AA, Todorov OS, Proulx MJ. A natural history of vertebrate vision loss: Insight from mammalian vision for human visual function. Neurosci Biobehav Rev 2022; 134:104550. [PMID: 35074313 DOI: 10.1016/j.neubiorev.2022.104550] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2020] [Revised: 10/08/2021] [Accepted: 01/20/2022] [Indexed: 11/28/2022]
Abstract
Research on the origin of vision and vision loss in naturally "blind" animal species can reveal the tasks that vision fulfills and the brain's role in visual experience. Models that incorporate evolutionary history, natural variation in visual ability, and experimental manipulations can help disentangle visual ability at a superficial level from behaviors linked to vision but not solely reliant upon it, and could assist the translation of ophthalmological research in animal models to human treatments. To unravel the similarities between blind individuals and blind species, we review concepts of 'blindness' and its behavioral correlates across a range of species. We explore the ancestral emergence of vision in vertebrates, and the loss of vision in blind species with reference to an evolution-based classification scheme. We applied phylogenetic comparative methods to a mammalian tree to explore the evolution of visual acuity using ancestral state estimations. Future research into the natural history of vision loss could help elucidate the function of vision and inspire innovations in how to address vision loss in humans.
Collapse
Affiliation(s)
- Alexandra A de Sousa
- Centre for Health and Cognition, Bath Spa University, Bath, United Kingdom; UKRI Centre for Accessible, Responsible & Transparent Artificial Intelligence (ART:AI), University of Bath, United Kingdom.
| | - Orlin S Todorov
- School of Biological Sciences, The University of Queensland, St Lucia, Queensland, Australia
| | - Michael J Proulx
- UKRI Centre for Accessible, Responsible & Transparent Artificial Intelligence (ART:AI), University of Bath, United Kingdom; Department of Psychology, REVEAL Research Centre, University of Bath, Bath, United Kingdom
| |
Collapse
|
25
|
Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:127-143. [PMID: 35964967 DOI: 10.1016/b978-0-12-823493-8.00026-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, yet remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in their nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and expression of selective cross-modal plasticity in the superior temporal cortex.
Collapse
Affiliation(s)
- Stefania Benetti
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
| | - Olivier Collignon
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium.
| |
Collapse
|
26
|
Zimmermann M, Mostowski P, Rutkowski P, Tomaszewski P, Krzysztofiak P, Jednoróg K, Marchewka A, Szwed M. The Extent of Task Specificity for Visual and Tactile Sequences in the Auditory Cortex of the Deaf and Hard of Hearing. J Neurosci 2021; 41:9720-9731. [PMID: 34663627 PMCID: PMC8612642 DOI: 10.1523/jneurosci.2527-20.2021] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 09/23/2021] [Accepted: 09/27/2021] [Indexed: 11/21/2022] Open
Abstract
It has been proposed that the auditory cortex in deaf humans might undergo task-specific reorganization. However, evidence remains scarce, as previous experiments used only two very specific tasks (temporal processing and face perception) in the visual modality. Here, congenitally deaf/hard of hearing and hearing women and men were enrolled in an fMRI experiment as we sought to fill this evidence gap in two ways. First, we compared activation evoked by a temporal processing task performed in two different modalities, visual and tactile. Second, we contrasted this task with a perceptually similar task that focuses on the spatial dimension. Additional control conditions consisted of passive stimulus observation. In line with the task specificity hypothesis, the auditory cortex in the deaf was activated by temporal processing in both visual and tactile modalities. This effect was selective for temporal processing relative to spatial discrimination. However, spatial processing also led to significant auditory cortex recruitment which, unlike temporal processing, occurred even during passive stimulus observation. We conclude that auditory cortex recruitment in the deaf and hard of hearing might involve interplay between task-selective and pluripotential mechanisms of cross-modal reorganization. Our results open several avenues for the investigation of the full complexity of the cross-modal plasticity phenomenon. SIGNIFICANCE STATEMENT: Previous studies suggested that the auditory cortex in the deaf may change input modality (sound to vision) while keeping its function (e.g., rhythm processing). We investigated this hypothesis by asking deaf or hard of hearing and hearing adults to discriminate between temporally and spatially complex sequences in visual and tactile modalities. The results show that such function-specific brain reorganization, as has previously been demonstrated in the visual modality, also occurs for tactile processing. On the other hand, they also show that for some stimuli (spatial) the auditory cortex activates automatically, which is suggestive of a take-over by a different kind of cognitive function. The observed differences in processing of sequences might thus result from an interplay of task-specific and pluripotent plasticity.
Collapse
Affiliation(s)
- M Zimmermann
- Institute of Psychology, Jagiellonian University, 30-060 Krakow, Poland
| | - P Mostowski
- Section for Sign Linguistics, University of Warsaw, 00-927 Warsaw, Poland
| | - P Rutkowski
- Section for Sign Linguistics, University of Warsaw, 00-927 Warsaw, Poland
| | - P Tomaszewski
- Polish Sign Language and Deaf Communication Research Laboratory, Faculty of Psychology, University of Warsaw, 00-183 Warsaw, Poland
| | - P Krzysztofiak
- Faculty of Psychology, University of Social Sciences and Humanities, 03-815 Warsaw, Poland
| | - K Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute for Experimental Biology, 02-093 Warsaw, Poland
| | - A Marchewka
- Laboratory of Brain Imaging, Nencki Institute for Experimental Biology, 02-093 Warsaw, Poland
| | - M Szwed
- Institute of Psychology, Jagiellonian University, 30-060 Krakow, Poland
| |
Collapse
|
27
|
Late development of audio-visual integration in the vertical plane. CURRENT RESEARCH IN BEHAVIORAL SCIENCES 2021. [DOI: 10.1016/j.crbeha.2021.100043] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
|
28
|
Cortical Visual Impairment in Childhood: 'Blindsight' and the Sprague Effect Revisited. Brain Sci 2021; 11:brainsci11101279. [PMID: 34679344 PMCID: PMC8533908 DOI: 10.3390/brainsci11101279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 09/14/2021] [Accepted: 09/24/2021] [Indexed: 11/29/2022] Open
Abstract
The paper discusses and provides support for diverse processes of brain plasticity in visual function after damage in infancy and childhood, in comparison with injury that occurs in the adult brain. We provide support for and description of neuroplastic mechanisms in childhood that seemingly do not exist in the same way in the adult brain. Examples include the ability to foster the development of thalamocortical connectivities that can circumvent the lesion and reach their cortical destination in the occipital cortex, as the developing brain is more efficient in building new connections. Supporting this claim, in those with central visual field defects the extrastriatal visual connectivities are greater when a lesion occurs earlier in life as opposed to in the neurologically mature adult. The result is a significantly more optimized system of visual and spatial exploration within the ‘blind’ field of view. The discussion is provided within the context of “blindsight” and the “Sprague Effect”.
Collapse
|
29
|
Abstract
Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants' behaviour with the predictions of alternative information processing models. This lets us see when and how, during development and with experience, the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
Collapse
Affiliation(s)
- Marko Nardini
- Department of Psychology, Durham University, Science Site, Durham, DH1 3LE, UK.
| |
Collapse
|
30
|
Pesnot Lerousseau J, Arnold G, Auvray M. Training-induced plasticity enables visualizing sounds with a visual-to-auditory conversion device. Sci Rep 2021; 11:14762. [PMID: 34285265 PMCID: PMC8292401 DOI: 10.1038/s41598-021-94133-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 06/28/2021] [Indexed: 12/04/2022] Open
Abstract
Sensory substitution devices aim to restore visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, while an initial debate focused on the visual versus auditory or tactile nature of sensory substitution, the idea that it reflects a mixture of both has emerged over the past decade. In order to investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device which translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, when asked to identify sounds, processes shared with vision were involved, as participants' performance in sound identification was influenced by the simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
Collapse
Affiliation(s)
| | | | - Malika Auvray
- Sorbonne Université, CNRS UMR 7222, Institut des Systèmes Intelligents et de Robotique (ISIR), 75005, Paris, France.
| |
Collapse
|
31
|
Hofstetter S, Zuiderbaan W, Heimler B, Dumoulin SO, Amedi A. Topographic maps and neural tuning for sensory substitution dimensions learned in adulthood in a congenital blind subject. Neuroimage 2021; 235:118029. [PMID: 33836269 DOI: 10.1016/j.neuroimage.2021.118029] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 01/18/2021] [Accepted: 03/30/2021] [Indexed: 01/28/2023] Open
Abstract
Topographic maps, a key principle of brain organization, emerge during development. It remains unclear, however, whether topographic maps can represent a new sensory experience learned in adulthood. MaMe, a congenitally blind individual, has been extensively trained in adulthood for perception of a 2D auditory-space (soundscape) where the y- and x-axes are represented by pitch and time, respectively. Using population receptive field mapping we found neural populations tuned topographically to pitch, not only in the auditory cortices but also in the parietal and occipito-temporal cortices. Topographic neural tuning to time was revealed in the parietal and occipito-temporal cortices. Some of these maps were found to represent both axes concurrently, enabling MaMe to represent unique locations in the soundscape space. This case study provides proof of concept for the existence of topographic maps tuned to the newly learned soundscape dimensions. These results suggest that topographic maps can be adapted or recycled in adulthood to represent novel sensory experiences.
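As a rough illustration of the population receptive field (pRF) logic used above, the sketch below fits a one-dimensional Gaussian tuning curve along a single soundscape dimension (e.g., pitch) to a simulated voxel; the stimulus design, the noise level, and the omission of haemodynamic convolution are all simplifying assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def prf_response(stim_pos, center, width, amplitude, baseline):
    """Gaussian pRF model: response to a stimulus at position stim_pos."""
    return amplitude * np.exp(-(stim_pos - center) ** 2 / (2 * width ** 2)) + baseline

rng = np.random.default_rng(1)
stim_pos = np.tile(np.linspace(0.0, 1.0, 20), 5)      # repeated sweeps along the pitch axis
voxel = prf_response(stim_pos, 0.6, 0.15, 1.0, 0.1)   # "true" tuning of a simulated voxel
voxel += rng.normal(scale=0.1, size=voxel.shape)      # measurement noise

# Estimate the voxel's preferred position (pRF center) and tuning width
params, _ = curve_fit(prf_response, stim_pos, voxel, p0=[0.5, 0.2, 1.0, 0.0])
print(f"estimated center = {params[0]:.2f}, width = {params[1]:.2f}")
```

Mapping the estimated centers across neighbouring voxels is what reveals a topographic progression of the kind described in the abstract.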
Collapse
Affiliation(s)
- Shir Hofstetter
- Spinoza Centre for Neuroimaging, Meibergdreef 75, Amsterdam, BK 1105 Netherlands.
| | - Wietske Zuiderbaan
- Spinoza Centre for Neuroimaging, Meibergdreef 75, Amsterdam, BK 1105 Netherlands
| | - Benedetta Heimler
- The Baruch Ivcher Institute for Brain, Mind & Technology, School of Psychology, Interdisciplinary Center (IDC) Herzliya, P.O. Box 167, Herzliya 46150, Israel; Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
| | - Serge O Dumoulin
- Spinoza Centre for Neuroimaging, Meibergdreef 75, Amsterdam, BK 1105 Netherlands; Department of Experimental and Applied Psychology, VU University Amsterdam, Amsterdam, BT 1181, Netherlands; Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, CS 3584, Netherlands.
| | - Amir Amedi
- The Baruch Ivcher Institute for Brain, Mind & Technology, School of Psychology, Interdisciplinary Center (IDC) Herzliya, P.O. Box 167, Herzliya 46150, Israel.
| |
Collapse
|
32
|
Morland AB, Brown HDH, Baseler HA. Cortical Reorganization: Reallocated Responses without Rewiring. Curr Biol 2021; 31:R76-R78. [PMID: 33497635 DOI: 10.1016/j.cub.2020.11.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Is the brain able to reorganise following loss of sensory input? New work on individuals with sight loss shows that, while brain areas normally allocated to vision respond to other sensory stimuli, those responses are unlikely to mean the brain has rewired.
Collapse
Affiliation(s)
- Antony B Morland
- Department of Psychology, University of York, York, UK; York Biomedical Research Institute, University of York, York, UK.
| | - Holly D H Brown
- Department of Psychology, University of York, York, UK; York Biomedical Research Institute, University of York, York, UK
| | - Heidi A Baseler
- York Biomedical Research Institute, University of York, York, UK; Hull-York Medical School, York, UK
| |
Collapse
|
33
|
Matuszewski J, Kossowski B, Bola Ł, Banaszkiewicz A, Paplińska M, Gyger L, Kherif F, Szwed M, Frackowiak RS, Jednoróg K, Draganski B, Marchewka A. Brain plasticity dynamics during tactile Braille learning in sighted subjects: Multi-contrast MRI approach. Neuroimage 2020; 227:117613. [PMID: 33307223 DOI: 10.1016/j.neuroimage.2020.117613] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 11/20/2020] [Accepted: 11/29/2020] [Indexed: 01/11/2023] Open
Abstract
A growing body of empirical evidence supports the notion of diverse neurobiological processes underlying learning-induced plasticity changes in the human brain. There are still open questions about how brain plasticity depends on cognitive task complexity, how it supports interactions between brain systems, and with what temporal and spatial trajectory it unfolds. We investigated brain and behavioural changes in sighted adults during 8 months of tactile Braille reading training whilst monitoring brain structure and function at 5 different time points. We adopted a novel multivariate approach that includes behavioural data and specific MRI protocols sensitive to tissue properties to assess local functional, structural, and myelin changes over time. Our results show that while the reading network, located in the ventral occipitotemporal cortex, rapidly adapts to tactile input, sensory areas show changes in grey matter volume and intra-cortical myelin at different times. This approach has allowed us to examine and describe neuroplastic mechanisms underlying complex cognitive systems and their (sensory) inputs and (motor) outputs differentially, at a mesoscopic level.
Collapse
Affiliation(s)
- Jacek Matuszewski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
| | - Bartosz Kossowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Łukasz Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland; Institute of Psychology, Jagiellonian University, Krakow, Poland
| | - Anna Banaszkiewicz
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | | | - Lucien Gyger
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland
| | - Ferath Kherif
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland
| | - Marcin Szwed
- Institute of Psychology, Jagiellonian University, Krakow, Poland
| | | | - Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
| | - Bogdan Draganski
- LREN, Department for Clinical Neurosciences, CHUV, University of Lausanne, Lausanne, Switzerland; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland.
| |
Collapse
|
34
|
Abstract
Making sense of the world requires perceptual constancy: the stable perception of an object across changes in one's sensation of it. To investigate whether constancy is intrinsic to perception, we tested whether humans can learn a form of constancy that is unique to a novel sensory skill (here, the perception of objects through click-based echolocation). Participants judged whether two echoes were different either because: (a) the clicks were different, or (b) the objects were different. For differences carried through spectral changes (but not level changes), blind expert echolocators spontaneously showed a high constancy ability (mean d′ = 1.91) compared to sighted and blind people new to echolocation (mean d′ = 0.69). Crucially, sighted controls improved rapidly in this ability through training, suggesting that constancy emerges in a domain with which the perceiver has no prior experience. This provides strong evidence that constancy is intrinsic to human perception. This study shows that people who learn a new skill to sense their environment (here, listening to sound echoes) can correctly represent the physical properties of objects. This result has implications for effectively rehabilitating people with sensory loss.
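A minimal sketch of the sensitivity measure (d′) reported above, computed from hit and false-alarm rates; the example rates are made-up values chosen only to land near the reported group means:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(round(d_prime(0.85, 0.20), 2))  # ~1.88, in the range of the expert echolocators
print(round(d_prime(0.60, 0.35), 2))  # ~0.64, closer to the novice groups
```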
Collapse
|
35
|
Hirst RJ, McGovern DP, Setti A, Shams L, Newell FN. What you see is what you hear: Twenty years of research using the Sound-Induced Flash Illusion. Neurosci Biobehav Rev 2020; 118:759-774. [DOI: 10.1016/j.neubiorev.2020.09.006] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 07/06/2020] [Accepted: 09/03/2020] [Indexed: 01/17/2023]
|
36
|
Ratan Murty NA, Teng S, Beeler D, Mynick A, Oliva A, Kanwisher N. Visual experience is not necessary for the development of face-selectivity in the lateral fusiform gyrus. Proc Natl Acad Sci U S A 2020; 117:23011-23020. [PMID: 32839334 PMCID: PMC7502773 DOI: 10.1073/pnas.2004607117] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.
Collapse
Affiliation(s)
- N Apurva Ratan Murty
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Santani Teng
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA 94115
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - David Beeler
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Anna Mynick
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139
| | - Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA 02139
| |
Collapse
|
37
|
Gaglianese A, Branco MP, Groen IIA, Benson NC, Vansteensel MJ, Murray MM, Petridou N, Ramsey NF. Electrocorticography Evidence of Tactile Responses in Visual Cortices. Brain Topogr 2020; 33:559-570. [PMID: 32661933 PMCID: PMC7429547 DOI: 10.1007/s10548-020-00783-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2019] [Accepted: 06/28/2020] [Indexed: 01/30/2023]
Abstract
There is ongoing debate regarding the extent to which human cortices are specialized for processing a given sensory input versus a given type of information, independently of the sensory source. Many neuroimaging and electrophysiological studies have reported that primary and extrastriate visual cortices respond to tactile and auditory stimulation, in addition to visual inputs, suggesting these cortices are intrinsically multisensory. For tactile responses in particular, few studies have demonstrated neuronal processing in the human visual cortex. Here, we assessed tactile responses in both low-level and extrastriate visual cortices using electrocorticography recordings in a human participant. Specifically, we observed significant increases in spectral power in the high-frequency band (30-100 Hz), reportedly associated with spiking neuronal activity, in response to tactile stimuli in both low-level visual cortex (i.e., V2) and the anterior part of the lateral occipital-temporal cortex. These sites were both involved in processing tactile information and responsive to visual stimulation. More generally, the present results add to a mounting literature in support of task-sensitive and sensory-independent mechanisms underlying functions such as spatial, motion, and self-processing in the brain, extending from higher-level down to low-level cortices.
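A small sketch of the kind of band-limited power measure referred to above (30-100 Hz), computed here with a Welch periodogram on a synthetic one-channel signal; the sampling rate, signal content, and window length are arbitrary illustrative choices:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 1000                                              # sampling rate (Hz)
rng = np.random.default_rng(3)
t = np.arange(0, 2.0, 1 / fs)
signal = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 60 * t)  # noise + 60 Hz component

# Power spectral density and integrated power in the 30-100 Hz band
freqs, psd = welch(signal, fs=fs, nperseg=512)
band = (freqs >= 30) & (freqs <= 100)
band_power = trapezoid(psd[band], freqs[band])
print(f"30-100 Hz band power: {band_power:.4f}")
```

Contrasting such band power between stimulation and baseline epochs is the usual way to test for a task-related high-frequency increase.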
Collapse
Affiliation(s)
- Anna Gaglianese
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center, University of Lausanne, Rue Centrale 7, Lausanne, 1003, Switzerland.
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands.
- Department of Radiology, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands.
| | - Mariana P Branco
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
| | - Iris I A Groen
- Department of Psychology, New York University, Washington Place 6, New York, 10003, NY, USA
| | - Noah C Benson
- Department of Psychology, New York University, Washington Place 6, New York, 10003, NY, USA
- eScience Institute, University of Washington, 15th Ave NE, Seattle, 98195, WA, USA
| | - Mariska J Vansteensel
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, University Hospital Center, University of Lausanne, Rue Centrale 7, Lausanne, 1003, Switzerland
- Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), Station 6, Lausanne, 1015, Switzerland
- Ophthalmology Service, Fondation Asile des aveugles and University of Lausanne, Avenue de France 15, Lausanne, 1004, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, 21st Avenue South 1215, Nashville, 37232, TN, USA
| | - Natalia Petridou
- Department of Radiology, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
| | - Nick F Ramsey
- Department of Neurosurgery and Neurology, UMC Utrecht Brain Center, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
| |
Collapse
|
38
|
Heimler B, Amedi A. Are critical periods reversible in the adult brain? Insights on cortical specializations based on sensory deprivation studies. Neurosci Biobehav Rev 2020; 116:494-507. [DOI: 10.1016/j.neubiorev.2020.06.034] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 06/07/2020] [Accepted: 06/25/2020] [Indexed: 02/06/2023]
|
39
|
Morelli F, Aprile G, Cappagli G, Luparia A, Decortes F, Gori M, Signorini S. A Multidimensional, Multisensory and Comprehensive Rehabilitation Intervention to Improve Spatial Functioning in the Visually Impaired Child: A Community Case Study. Front Neurosci 2020; 14:768. [PMID: 32792904 PMCID: PMC7393219 DOI: 10.3389/fnins.2020.00768] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2020] [Accepted: 06/30/2020] [Indexed: 12/12/2022] Open
Abstract
Congenital visual impairment may have a negative impact on spatial abilities and result in severe delays in perceptual, social, motor, and cognitive skills across the life span. Although several studies have highlighted the need for an early introduction of re-habilitation interventions, such interventions are rarely adapted to children's visual capabilities, and very few studies have been conducted to assess their long-term efficacy. In this work, we present a case study of a visually impaired child enrolled in a newly developed re-habilitation intervention aimed at improving overall development through the diversification of re-habilitation activities based on visual potential and developmental profile, with a focus on spatial functioning. We argue that intervention for visually impaired children should be (a) adapted to their visual capabilities, in order to increase re-habilitation outcomes, (b) multi-interdisciplinary and multidimensional, to improve adaptive abilities across development, and (c) multisensory, to promote the integration of different perceptual information coming from the environment.
Collapse
Affiliation(s)
- Federica Morelli
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
| | - Giorgia Aprile
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
| | - Giulia Cappagli
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
| | - Antonella Luparia
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
| | - Francesco Decortes
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
| | - Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
| | - Sabrina Signorini
- Center of Child Neuro-Ophthalmology, IRCCS, Mondino Foundation, Pavia, Italy
| |
Collapse
|
40
|
Longo E, Nishiyori R, Cruz T, Alter K, Damiano DL. Obstetric Brachial Plexus Palsy: Can a Unilateral Birth Onset Peripheral Injury Significantly Affect Brain Development? Dev Neurorehabil 2020; 23:375-382. [PMID: 31906763 PMCID: PMC7550966 DOI: 10.1080/17518423.2019.1689437] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Accepted: 11/02/2019] [Indexed: 01/15/2023]
Abstract
Purpose: To examine brain structure and function in OBPP and relate them to clinical outcomes, to better understand the effects of decreased motor activity on early brain development. Methods: Nine children with OBPP and 7 controls underwent structural MRI scans. The OBPP group completed evaluations of upper-limb function and functional near-infrared spectroscopy (fNIRS) during motor tasks. Results: Mean primary motor area volume was lower in both OBPP hemispheres. No volume differences across sides were seen within groups; however, the asymmetry ratio in the supplementary motor area differed between groups. Greater asymmetry in the primary somatosensory area correlated with lower ABILHAND-Kids scores. fNIRS revealed more cortical activity in both hemispheres during affected-arm reach. Conclusion: Cortical volume differences or asymmetry were found in motor and sensory regions in OBPP that related to clinical outcomes. Widespread cortical activity in fNIRS during affected-arm reach suggests reorganization in both hemispheres and is relevant to rehabilitation of those with developmental peripheral and brain injuries.
Collapse
Affiliation(s)
- Egmar Longo
- Federal University of Rio Grande do Norte/Faculty of Health Sciences of Trairi - UFRN/FACISA, Health of Children, Santa Cruz, Brazil
| | - Ryota Nishiyori
- Functional and Applied Biomechanics Section, Rehabilitation Medicine Department, Clinical Center, National Institutes of Health, Bethesda, Maryland, US
| | - Theresa Cruz
- National Center for Medical Rehabilitation Research, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, Bethesda, Maryland, US
| | - Katharine Alter
- Functional and Applied Biomechanics Section, Rehabilitation Medicine Department, Clinical Center, National Institutes of Health, Bethesda, Maryland, US
| | - Diane L. Damiano
- Functional and Applied Biomechanics Section, Rehabilitation Medicine Department, Clinical Center, National Institutes of Health, Bethesda, Maryland, US
| |
Collapse
|
41
|
What and where in the auditory systems of sighted and early blind individuals: Evidence from representational similarity analysis. J Neurol Sci 2020; 413:116805. [PMID: 32259708 DOI: 10.1016/j.jns.2020.116805] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2019] [Revised: 03/14/2020] [Accepted: 03/24/2020] [Indexed: 11/24/2022]
Abstract
Separate ventral and dorsal streams in the auditory system have been proposed to process sound identification and localization, respectively. Despite the popularity of the dual-pathway model, it remains controversial how much independence the two neural pathways enjoy and whether visual experience can influence this distinct cortical organizational scheme. In this study, representational similarity analysis (RSA) was used to explore the functional roles of distinct cortical regions that lie within either the ventral or dorsal auditory streams of sighted and early blind (EB) participants. We found functionally segregated auditory networks in both sighted and EB groups, where the anterior superior temporal gyrus (aSTG) and inferior frontal junction (IFJ) were more related to sound identification, while the posterior superior temporal gyrus (pSTG) and inferior parietal lobe (IPL) preferred sound localization. The findings indicated that visual experience may not influence this functional dissociation and that the cortex of the human brain may be organized according to task-specific and modality-independent strategies. Meanwhile, partial overlap of spatial and non-spatial auditory information processing was observed, illustrating the existence of interaction between the two auditory streams. Furthermore, we investigated the effect of visual experience on the neural bases of auditory perception and observed cortical reorganization in EB participants, in whom the middle occipital gyrus was recruited to process auditory information. Our findings delineated the distinct cortical networks that abstractly encode sound identification and localization, and confirmed the existence of interaction between them from a multivariate perspective. Furthermore, the results suggested that visual experience might not impact the functional specialization of auditory regions.
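A compact sketch of the representational similarity analysis (RSA) logic mentioned above: build a neural representational dissimilarity matrix (RDM) from condition-wise activity patterns and compare it to a model RDM with a rank correlation. The patterns and the model features below are simulated stand-ins, not the study's data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_conditions, n_voxels = 12, 100

patterns = rng.normal(size=(n_conditions, n_voxels))    # one activity pattern per sound
model_features = rng.normal(size=(n_conditions, 3))     # hypothesized identity or location features

neural_rdm = pdist(patterns, metric="correlation")      # condensed neural RDM
model_rdm = pdist(model_features, metric="euclidean")   # condensed model RDM

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```

Comparing such model fits across regions (e.g., aSTG vs. pSTG) and across groups is what supports the identification/localization dissociation described above.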
Collapse
|
42
|
Tivadar RI, Chappaz C, Anaflous F, Roche J, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects by the Visually-Impaired. Front Neurosci 2020; 14:197. [PMID: 32265628 PMCID: PMC7099598 DOI: 10.3389/fnins.2020.00197] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2019] [Accepted: 02/24/2020] [Indexed: 11/18/2022] Open
Abstract
In the event of visual impairment or blindness, information from other intact senses can be used as substitutes to retrain (and in extremis replace) visual functions. Abilities including reading, mental representation of objects and spatial navigation can be performed using tactile information. Current technologies can convey a restricted library of stimuli, either because they depend on real objects or renderings with low resolution layouts. Digital haptic technologies can overcome such limitations. The applicability of this technology was previously demonstrated in sighted participants. Here, we reasoned that visually-impaired and blind participants can create mental representations of letters presented haptically in normal and mirror-reversed form without the use of any visual information, and mentally manipulate such representations. Visually-impaired and blind volunteers were blindfolded and trained on the haptic tablet with two letters (either L and P or F and G). During testing, they haptically explored on any trial one of the four letters presented at 0°, 90°, 180°, or 270° rotation from upright and indicated if the letter was either in a normal or mirror-reversed form. Rotation angle impacted performance; greater deviation from 0° resulted in greater impairment for trained and untrained normal letters, consistent with mental rotation of these haptically-rendered objects. Performance was also generally less accurate with mirror-reversed stimuli, which was not affected by rotation angle. Our findings demonstrate, for the first time, the suitability of a digital haptic technology in the blind and visually-impaired. Classic devices remain limited in their accessibility and in the flexibility of their applications. We show that mental representations can be generated and manipulated using digital haptic technology. This technology may thus offer an innovative solution to the mitigation of impairments in the visually-impaired, and to the training of skills dependent on mental representations and their spatial manipulation.
Collapse
Affiliation(s)
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
| | | | - Fatima Anaflous
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
| | - Jean Roche
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
| | - Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland; Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
43
|
Ricciardi E, Bottari D, Ptito M, Röder B, Pietrini P. The sensory-deprived brain as a unique tool to understand brain development and function. Neurosci Biobehav Rev 2020; 108:78-82. [DOI: 10.1016/j.neubiorev.2019.10.017] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
44
|
Chen L. Education and visual neuroscience: A mini-review. Psych J 2019; 9:524-532. [PMID: 31884725 DOI: 10.1002/pchj.335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2019] [Revised: 10/04/2019] [Accepted: 11/26/2019] [Indexed: 11/06/2022]
Abstract
Neuroscience, especially visual neuroscience, is a burgeoning field that has greatly shaped the format and efficacy of education. Moreover, findings from visual neuroscience are an ongoing source of great progress in pedagogy. In this mini-review, I review existing evidence and areas of active research to describe the fundamental questions and general applications for visual neuroscience as it applies to education. First, I categorize the research questions and future directions for the role of visual neuroscience in education. Second, I juxtapose opposing views on the roles of neuroscience in education and reveal the "neuromyths" propagated under the guise of educational neuroscience. Third, I summarize the policies and practices applied in different countries and for different age ranges. Fourth, I address and discuss the merits of visual neuroscience in art education and of visual perception theories (e.g., those concerned with perceptual organization with respect to space and time) in reading education. I consider how vision-deprived students could benefit from current knowledge of brain plasticity and visual rehabilitation methods involving compensation from other sensory systems. I also consider the potential educational value of instructional methods based on statistical learning in the visual domain. Finally, I outline the accepted translational framework for applying findings from educational neuroscience to pedagogical theory.
Collapse
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
| |
Collapse
|
45
|
Auvray M. Multisensory and spatial processes in sensory substitution. Restor Neurol Neurosci 2019; 37:609-619. [DOI: 10.3233/rnn-190950] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Sorbonne Université, Paris, France
| |
Collapse
|
46
|
The Cross-Modal Effects of Sensory Deprivation on Spatial and Temporal Processes in Vision and Audition: A Systematic Review on Behavioral and Neuroimaging Research since 2000. Neural Plast 2019; 2019:9603469. [PMID: 31885540 PMCID: PMC6914961 DOI: 10.1155/2019/9603469] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Revised: 07/06/2019] [Accepted: 10/31/2019] [Indexed: 01/12/2023] Open
Abstract
One of the most significant effects of neural plasticity manifests in the case of sensory deprivation when cortical areas that were originally specialized for the functions of the deprived sense take over the processing of another modality. Vision and audition represent two important senses needed to navigate through space and time. Therefore, the current systematic review discusses the cross-modal behavioral and neural consequences of deafness and blindness by focusing on spatial and temporal processing abilities, respectively. In addition, movement processing is evaluated as compiling both spatial and temporal information. We examine whether the sense that is not primarily affected changes in its own properties or in the properties of the deprived modality (i.e., temporal processing as the main specialization of audition and spatial processing as the main specialization of vision). References to the metamodal organization, supramodal functioning, and the revised neural recycling theory are made to address global brain organization and plasticity principles. Generally, according to the reviewed studies, behavioral performance is enhanced in those aspects for which both the deprived and the overtaking senses provide adequate processing resources. Furthermore, the behavioral enhancements observed in the overtaking sense (i.e., vision in the case of deafness and audition in the case of blindness) are clearly limited by the processing resources of the overtaking modality. Thus, the brain regions that were previously recruited during the behavioral performance of the deprived sense now support a similar behavioral performance for the overtaking sense. This finding suggests a more input-unspecific and processing principle-based organization of the brain. Finally, we highlight the importance of controlling for and stating factors that might impact neural plasticity and the need for further research into visual temporal processing in deaf subjects.
Collapse
|
47
|
Norman LJ, Thaler L. Retinotopic-like maps of spatial sound in primary 'visual' cortex of blind human echolocators. Proc Biol Sci 2019; 286:20191910. [PMID: 31575359 PMCID: PMC6790759 DOI: 10.1098/rspb.2019.1910] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2019] [Accepted: 09/12/2019] [Indexed: 01/30/2023] Open
Abstract
The functional specializations of cortical sensory areas were traditionally viewed as being tied to specific modalities. A radically different emerging view is that the brain is organized by task rather than sensory modality, but it has not yet been shown that this applies to primary sensory cortices. Here, we report such evidence by showing that primary 'visual' cortex can be adapted to map spatial locations of sound in blind humans who regularly perceive space through sound echoes. Specifically, we objectively quantify the similarity between measured stimulus maps for sound eccentricity and predicted stimulus maps for visual eccentricity in primary 'visual' cortex (using a probabilistic atlas based on cortical anatomy) to find that stimulus maps for sound in expert echolocators are directly comparable to those for vision in sighted people. Furthermore, the degree of this similarity is positively related with echolocation ability. We also rule out explanations based on top-down modulation of brain activity (e.g., through imagery). This result is clear evidence that task-specific organization can extend even to primary sensory cortices, and in this way is pivotal in our reinterpretation of the functional organization of the human brain.
Collapse
Affiliation(s)
| | - Lore Thaler
- Department of Psychology, Durham University, Durham DH1 3LE, UK
| |
Collapse
|
48
|
Thorat S, Proklova D, Peelen MV. The nature of the animacy organization in human ventral temporal cortex. eLife 2019; 8:e47142. [PMID: 31496518 PMCID: PMC6733573 DOI: 10.7554/elife.47142] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2019] [Accepted: 07/17/2019] [Indexed: 12/14/2022] Open
Abstract
The principles underlying the animacy organization of the ventral temporal cortex (VTC) remain hotly debated, with recent evidence pointing to an animacy continuum rather than a dichotomy. What drives this continuum? According to the visual categorization hypothesis, the continuum reflects the degree to which animals contain animal-diagnostic features. By contrast, the agency hypothesis posits that the continuum reflects the degree to which animals are perceived as (social) agents. Here, we tested both hypotheses with a stimulus set in which visual categorizability and agency were dissociated based on representations in convolutional neural networks and behavioral experiments. Using fMRI, we found that visual categorizability and agency explained independent components of the animacy continuum in VTC. Modeled together, they fully explained the animacy continuum. Finally, clusters explained by visual categorizability were localized posterior to clusters explained by agency. These results show that multiple organizing principles, including agency, underlie the animacy continuum in VTC.
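To make the variance-partitioning logic concrete, here is a minimal sketch, with simulated placeholder data rather than the study's stimuli or fMRI responses, that estimates the unique and shared contributions of visual categorizability and agency to an animacy-continuum response profile.

```python
# Hypothetical sketch: partition variance in a simulated VTC "animacy
# continuum" response between two predictors, visual categorizability and
# perceived agency. The numbers are placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli = 24

categorizability = rng.normal(size=n_stimuli)   # e.g., animal-diagnostic feature score from a CNN
agency = rng.normal(size=n_stimuli)              # e.g., behavioural agency rating
vtc_animacy = 0.5 * categorizability + 0.5 * agency + rng.normal(0, 0.3, n_stimuli)

def r_squared(X, y):
    """Ordinary least squares R^2 for predictors X (intercept added here)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(np.column_stack([categorizability, agency]), vtc_animacy)
r2_cat = r_squared(categorizability[:, None], vtc_animacy)
r2_agency = r_squared(agency[:, None], vtc_animacy)

print(f"unique to agency:           {r2_full - r2_cat:.3f}")
print(f"unique to categorizability: {r2_full - r2_agency:.3f}")
print(f"full model R^2:             {r2_full:.3f}")
```

The unique contribution of each predictor is read off as the drop in R^2 when it is removed from the full model; if the two predictors together account for essentially all the reliable variance, the continuum is "fully explained" in the sense used above.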
Collapse
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Daria Proklova
- Brain and Mind Institute, University of Western Ontario, London, Canada
| | - Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| |
Collapse
|
49
|
Bola Ł, Matuszewski J, Szczepanik M, Droździel D, Sliwinska MW, Paplińska M, Jednoróg K, Szwed M, Marchewka A. Functional hierarchy for tactile processing in the visual cortex of sighted adults. Neuroimage 2019; 202:116084. [PMID: 31400530 DOI: 10.1016/j.neuroimage.2019.116084] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2019] [Revised: 07/07/2019] [Accepted: 08/06/2019] [Indexed: 12/15/2022] Open
Abstract
Perception via different sensory modalities was traditionally believed to be supported by largely separate brain systems. However, a growing number of studies demonstrate that the visual cortices of typical, sighted adults are involved in tactile and auditory perceptual processing. Here, we investigated the spatiotemporal dynamics of the visual cortex's involvement in a complex tactile task: Braille letter recognition. Sighted subjects underwent Braille training and then participated in a transcranial magnetic stimulation (TMS) study in which they tactually identified single Braille letters. During this task, TMS was applied to their left early visual cortex, visual word form area (VWFA), and left early somatosensory cortex at five time windows from 20 to 520 ms following the onset of Braille letter presentation. The subjects' response accuracy decreased when TMS was applied to the early visual cortex at the 120-220 ms time window and when TMS was applied to the VWFA at the 320-420 ms time window. Stimulation of the early somatosensory cortex did not have a time-specific effect on the accuracy of the subjects' Braille letter recognition, but rather caused a general slowdown during this task. Our results indicate that the involvement of sighted people's visual cortices in tactile perception respects the canonical visual hierarchy: the early tactile processing stages involve the early visual cortex, whereas more advanced tactile computations involve high-level visual areas. Our findings are compatible with the metamodal account of brain organization and suggest that the whole visual cortex may support spatial perception in a task-specific, sensory-independent manner.
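The chronometric logic of this TMS design can be illustrated with a minimal sketch. The data below are simulated, and the per-window contrast is a deliberate simplification of the repeated-measures analysis a study like this would actually use.

```python
# Hypothetical sketch: for one stimulation site, test whether Braille-letter
# recognition accuracy in a given TMS time window drops below the subject's
# mean accuracy across the remaining windows. Data are simulated placeholders.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
windows = ["20-120", "120-220", "220-320", "320-420", "420-520"]  # ms after letter onset
n_subjects = 20

# accuracy[site][subject, window]: proportion correct, simulated with a dip
# at 120-220 ms for early visual cortex (mirroring the effect reported above).
accuracy = {"early_visual": rng.uniform(0.7, 0.9, (n_subjects, 5))}
accuracy["early_visual"][:, 1] -= 0.1   # simulated TMS-induced drop

for w, label in enumerate(windows):
    other = np.delete(accuracy["early_visual"], w, axis=1).mean(axis=1)
    t, p = ttest_rel(accuracy["early_visual"][:, w], other)
    print(f"early visual cortex, {label} ms: t = {t:.2f}, p = {p:.3f}")
```

A time-specific disruption shows up as a reliable accuracy drop confined to one window, whereas a general slowdown (as reported for early somatosensory cortex) would not produce such a window-selective pattern.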
Collapse
Affiliation(s)
- Łukasz Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland; Institute of Psychology, Jagiellonian University, 6 Ingardena Street, 30-060, Krakow, Poland.
| | - Jacek Matuszewski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
| | - Michał Szczepanik
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
| | - Dawid Droździel
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
| | | | - Małgorzata Paplińska
- The Maria Grzegorzewska University, 40 Szczęśliwicka Street, 02-353, Warsaw, Poland
| | - Katarzyna Jednoróg
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
| | - Marcin Szwed
- Institute of Psychology, Jagiellonian University, 6 Ingardena Street, 30-060, Krakow, Poland.
| | - Artur Marchewka
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland.
| |
Collapse
|
50
|
Op de Beeck HP, Pillet I, Ritchie JB. Factors Determining Where Category-Selective Areas Emerge in Visual Cortex. Trends Cogn Sci 2019; 23:784-797. [PMID: 31327671 DOI: 10.1016/j.tics.2019.06.006] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2019] [Revised: 06/21/2019] [Accepted: 06/21/2019] [Indexed: 11/26/2022]
Abstract
A hallmark of functional localization in the human brain is the presence of areas in visual cortex specialized for representing particular categories such as faces and words. Why do these areas appear where they do during development? Recent findings highlight several general factors to consider when answering this question. Experience-driven category selectivity arises in regions that (i) have pre-existing selectivity for properties of the stimulus, (ii) are appropriately placed in the computational hierarchy of the visual system, and (iii) exhibit domain-specific patterns of connectivity to nonvisual regions. In other words, the cortical location of category selectivity is constrained by what category will be represented, how it will be represented, and why the representation will be used.
Collapse
Affiliation(s)
- Hans P Op de Beeck
- Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium.
| | - Ineke Pillet
- Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
| | - J Brendan Ritchie
- Department of Brain and Cognition and Leuven Brain Institute, KU Leuven, Belgium
| |
Collapse
|