1
Damera SR, Malone PS, Stevens BW, Klein R, Eberhardt SP, Auer ET, Bernstein LE, Riesenhuber M. Metamodal Coupling of Vibrotactile and Auditory Speech Processing Systems through Matched Stimulus Representations. J Neurosci 2023;43:4984-4996. PMID: 37197979; PMCID: PMC10324991; DOI: 10.1523/jneurosci.1710-22.2023.
Abstract
It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech, while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain. SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks. This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
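The abstract describes the "vocoded" transformation only at the level of its goal (preserving the encoding scheme of auditory speech); its actual parameters are not given here. As a minimal sketch of the general idea, assuming a channel-vocoder-style mapping with made-up band edges and carrier frequencies rather than the authors' settings:

```python
# Hypothetical sketch of a channel-vocoder-style auditory-to-vibrotactile
# transform: extract each auditory band's slow amplitude envelope and
# re-modulate it onto a low-frequency carrier a tactile actuator can render.
# Band edges and carrier frequencies are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode_to_vibrotactile(audio, fs, band_edges=(100, 500, 1500, 4000),
                           carriers_hz=(50, 120, 250)):
    out = np.zeros(len(audio), dtype=float)
    t = np.arange(len(audio)) / fs
    for (lo, hi), fc in zip(zip(band_edges[:-1], band_edges[1:]), carriers_hz):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        envelope = np.abs(hilbert(band))              # slow amplitude envelope
        out += envelope * np.sin(2 * np.pi * fc * t)  # re-modulate onto carrier
    return out / max(np.max(np.abs(out)), 1e-12)      # normalize for the actuator
```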
Affiliation(s)
- Srikanth R Damera: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Patrick S Malone: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Benson W Stevens: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Richard Klein: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20007
- Silvio P Eberhardt: Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Edward T Auer: Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
- Lynne E Bernstein: Department of Speech Language & Hearing Sciences, George Washington University, Washington, DC 20052
2
Gilday OD, Mizrahi A. Learning-Induced Odor Modulation of Neuronal Activity in Auditory Cortex. J Neurosci 2023;43:1375-1386. PMID: 36650061; PMCID: PMC9987573; DOI: 10.1523/jneurosci.1398-22.2022.
Abstract
Sensory cortices, even primary regions, are not purely unisensory. Rather, cortical neurons in sensory cortex show various forms of multisensory interactions. While some multisensory combinations naturally co-occur, others come to co-occur only through experience. In real life, learning and experience bring seemingly disparate sensory information into conjunction, and such conjunctions can ultimately become behaviorally relevant, impacting perception, cognition, and action. Here we describe a novel auditory discrimination task in mice, designed to manipulate the expectation of upcoming trials using olfactory cues. We show that, after learning, female mice display a transient period of several days during which they exploit odor-mediated expectations for making correct decisions. Using two-photon calcium imaging of single neurons in auditory cortex (ACx) during behavior, we found that the behavioral effects of odor-mediated expectations are accompanied by an odor-induced modulation of neuronal activity. Further, we find that these effects are manifested differentially, based on the response preference of individual cells. A significant portion of the effects, but not all, are consistent with a predictive coding framework. Our data show that learning novel odor-sound associations evokes changes in ACx. We suggest that behaviorally relevant multisensory environments mediate contextual effects as early as ACx. SIGNIFICANCE STATEMENT: Natural environments are composed of multisensory objects. It remains unclear whether and how animals learn the regularities of congruent multisensory associations and how these may impact behavior and neural activity. We tested how learned odor-sound associations affected single-neuron responses in auditory cortex. We introduce a novel auditory discrimination task for mice in which odors set different contexts of expectation for upcoming trials. We show that, although the task can be solved purely by sounds, odor-mediated expectation impacts performance. We further show that odors cause a modulation of neuronal activity in auditory cortex, which is correlated with behavior. These results suggest that learning prompts an interaction of odor and sound information as early as sensory cortex.
Affiliation(s)
- Omri David Gilday: The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Adi Mizrahi: The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel; Department of Neurobiology, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
3
Lankinen K, Ahlfors SP, Mamashli F, Blazejewska AI, Raij T, Turpin T, Polimeni JR, Ahveninen J. Cortical depth profiles of auditory and visual 7 T functional MRI responses in human superior temporal areas. Hum Brain Mapp 2023;44:362-372. PMID: 35980015; PMCID: PMC9842898; DOI: 10.1002/hbm.26046.
Abstract
Invasive neurophysiological studies in nonhuman primates have shown different laminar activation profiles to auditory vs. visual stimuli in auditory cortices and adjacent polymodal areas. Means to examine the underlying feedforward vs. feedback type influences noninvasively have been limited in humans. Here, using 1-mm isotropic resolution 3D echo-planar imaging at 7 T, we studied the intracortical depth profiles of functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) signals to brief auditory (noise bursts) and visual (checkerboard) stimuli. BOLD percent-signal-changes were estimated at 11 equally spaced intracortical depths, within regions-of-interest encompassing auditory (Heschl's gyrus, Heschl's sulcus, planum temporale, and posterior superior temporal gyrus) and polymodal (middle and posterior superior temporal sulcus) areas. Effects of differing BOLD signal strengths for auditory and visual stimuli were controlled via normalization and statistical modeling. The BOLD depth profile shapes, modeled with quadratic regression, were significantly different for auditory vs. visual stimuli in auditory cortices, but not in polymodal areas. The different depth profiles could reflect sensory-specific feedforward versus cross-sensory feedback influences, previously shown in laminar recordings in nonhuman primates. The results suggest that intracortical BOLD profiles can help distinguish between feedforward and feedback type influences in the human brain. Further experimental studies are still needed to clarify how underlying signal strength influences BOLD depth profiles under different stimulus conditions.
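As an illustration of the depth-profile comparison just described, here is a minimal sketch assuming 11 equally spaced depth samples and amplitude-normalized profiles; the data and variable names are random stand-ins, not the study's:

```python
# Fit a quadratic to a normalized BOLD depth profile and compare shape
# coefficients between conditions (illustrative sketch, simulated data).
import numpy as np

depths = np.linspace(0.0, 1.0, 11)       # 0 = white matter, 1 = pial surface
bold_auditory = np.random.rand(11)       # stand-in % signal change per depth
bold_visual = np.random.rand(11)

def quad_coeffs(profile, depths):
    # Normalize so the profile *shape*, not overall amplitude, drives the fit
    profile = profile / np.linalg.norm(profile)
    return np.polyfit(depths, profile, deg=2)  # [quadratic, linear, intercept]

a_aud, *_ = quad_coeffs(bold_auditory, depths)
a_vis, *_ = quad_coeffs(bold_visual, depths)
print(f"quadratic term, auditory: {a_aud:.3f}, visual: {a_vis:.3f}")
```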
Affiliation(s)
- Kaisu Lankinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Seppo P. Ahlfors: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Fahimeh Mamashli: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Anna I. Blazejewska: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tommi Raij: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tori Turpin: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Jonathan R. Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
4
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022;44:656-667. PMID: 36169038; PMCID: PMC9842891; DOI: 10.1002/hbm.26090.
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants while they performed multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was more substantial in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this specificity also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo: Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
5
Ren Q, Marshall AC, Kaiser J, Schütz-Bosbach S. Multisensory Integration of Anticipated Cardiac Signals with Visual Targets Affects Their Detection among Multiple Visual Stimuli. Neuroimage 2022;262:119549. DOI: 10.1016/j.neuroimage.2022.119549.
6
Bigelow J, Morrill RJ, Olsen T, Hasenstaub AR. Visual modulation of firing and spectrotemporal receptive fields in mouse auditory cortex. Curr Res Neurobiol 2022;3:100040. PMID: 36518337; PMCID: PMC9743056; DOI: 10.1016/j.crneur.2022.100040.
Abstract
Recent studies have established significant anatomical and functional connections between visual areas and primary auditory cortex (A1), which may be important for cognitive processes such as communication and spatial perception. These studies have raised two important questions: First, which cell populations in A1 respond to visual input and/or are influenced by visual context? Second, which aspects of sound encoding are affected by visual context? To address these questions, we recorded single-unit activity across cortical layers in awake mice during exposure to auditory and visual stimuli. Neurons responsive to visual stimuli were most prevalent in the deep cortical layers and included both excitatory and inhibitory cells. The overwhelming majority of these neurons also responded to sound, indicating unimodal visual neurons are rare in A1. Other neurons for which sound-evoked responses were modulated by visual context were similarly excitatory or inhibitory but more evenly distributed across cortical layers. These modulatory influences almost exclusively affected sustained sound-evoked firing rate (FR) responses or spectrotemporal receptive fields (STRFs); transient FR changes at stimulus onset were rarely modified by visual context. Neuron populations with visually modulated STRFs and sustained FR responses were mostly non-overlapping, suggesting spectrotemporal feature selectivity and overall excitability may be differentially sensitive to visual context. The effects of visual modulation were heterogeneous, increasing and decreasing STRF gain in roughly equal proportions of neurons. Our results indicate visual influences are surprisingly common and diversely expressed throughout layers and cell types in A1, affecting nearly one in five neurons overall.
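For readers unfamiliar with STRFs, here is a minimal sketch of estimating one by spike-triggered averaging of a stimulus spectrogram. This is a generic textbook method applied to simulated data, not necessarily the exact estimator used in the study:

```python
# Spike-triggered average STRF: average the spectrogram history preceding
# each spike (simulated stimulus and spike train; illustrative only).
import numpy as np

n_freq, n_time, n_lags = 32, 5000, 20
spectrogram = np.random.randn(n_freq, n_time)   # stimulus (freq x time bins)
spikes = np.random.rand(n_time) < 0.05          # binary spike train

spike_bins = np.nonzero(spikes)[0]
spike_bins = spike_bins[spike_bins >= n_lags]   # need a full history window
strf = np.zeros((n_freq, n_lags))
for t in spike_bins:
    strf += spectrogram[:, t - n_lags:t]        # accumulate pre-spike stimulus
strf /= len(spike_bins)                         # spike-triggered average
# A visual-context effect of the kind reported would appear as a
# multiplicative change in STRF gain: strf_visual ~ g * strf_audio_only.
```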
Affiliation(s)
- James Bigelow: Coleman Memorial Laboratory, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
- Ryan J. Morrill: Coleman Memorial Laboratory, University of California, San Francisco, USA; Neuroscience Graduate Program, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
- Timothy Olsen: Coleman Memorial Laboratory, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
- Andrea R. Hasenstaub: Coleman Memorial Laboratory, University of California, San Francisco, USA; Neuroscience Graduate Program, University of California, San Francisco, USA; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, 94143, USA
7
Merrikhi Y, Kok MA, Carrasco A, Meredith MA, Lomber SG. Multisensory responses in a belt region of the dorsal auditory cortical pathway. Eur J Neurosci 2021;55:589-610. PMID: 34927294; DOI: 10.1111/ejn.15573.
Abstract
A basic function of the cerebral cortex is to receive and integrate information from different sensory modalities into a comprehensive percept of the environment. Neurons that demonstrate multisensory convergence occur across the neocortex but are especially prevalent in higher-order, association areas. However, a recent study of a cat higher-order auditory area, the dorsal zone (DZ) of auditory cortex, did not observe any multisensory features. Therefore, the goal of the present investigation was to address this conflict using recording and testing methodologies that are established for exposing and studying multisensory neuronal processing. Among the 482 neurons studied, we found that 76.6% were influenced by non-auditory stimuli. Of these neurons, 99% were affected by visual stimulation, but only 11% by somatosensory stimulation. Furthermore, a large proportion of the multisensory neurons showed integrated responses to multisensory stimulation, constituted a majority of both the excitatory and inhibitory neurons encountered (as identified by the duration of their waveshape), and exhibited a distinct spatial distribution within DZ. These findings demonstrate that the dorsal zone of auditory cortex robustly exhibits multisensory properties and that the proportions of multisensory neurons encountered are consistent with those identified in other higher-order cortices.
Affiliation(s)
- Yaser Merrikhi: Department of Physiology, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Melanie A Kok: Graduate Program in Neuroscience, University of Western Ontario, London, Ontario, Canada
- Andres Carrasco: Graduate Program in Neuroscience, University of Western Ontario, London, Ontario, Canada
- M Alex Meredith: Department of Anatomy and Neurobiology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia, USA
- Stephen G Lomber: Department of Physiology, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
8
Kulkarni A, Kegler M, Reichenbach T. Effect of visual input on syllable parsing in a computational model of a neural microcircuit for speech processing. J Neural Eng 2021;18. PMID: 34547737; DOI: 10.1088/1741-2552/ac28d3.
Abstract
Objective. Seeing a person talking can help us understand them, particularly in a noisy environment. However, how the brain integrates the visual information with the auditory signal to enhance speech comprehension remains poorly understood. Approach. Here we address this question in a computational model of a cortical microcircuit for speech processing. The model consists of an excitatory and an inhibitory neural population that together create oscillations in the theta frequency range. When stimulated with speech, the theta rhythm becomes entrained to the onsets of syllables, such that the onsets can be inferred from the network activity. We investigate how well the obtained syllable parsing performs when different types of visual stimuli are added. In particular, we consider currents related to the rate of syllables as well as currents related to the mouth-opening area of the talking faces. Main results. We find that currents that target the excitatory neuronal population can influence speech comprehension, either boosting or impeding it, depending on the temporal delay and on whether the currents are excitatory or inhibitory. In contrast, currents that act on the inhibitory neurons do not impact speech comprehension significantly. Significance. Our results suggest neural mechanisms for the integration of visual information with the acoustic information in speech and make experimentally testable predictions.
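A minimal sketch of the kind of excitatory-inhibitory rate model described above, with illustrative parameters rather than the paper's; the external current to the excitatory population stands in for the visual-related inputs:

```python
# Two-population rate model: an E-I loop that can oscillate, with an
# optional external current to the excitatory population. All time
# constants and weights are assumptions for illustration.
import numpy as np

def simulate(T=2.0, dt=1e-3, I_ext=None):
    n = int(T / dt)
    I_ext = np.zeros(n) if I_ext is None else I_ext
    E, I = np.zeros(n), np.zeros(n)
    tau_e, tau_i = 0.02, 0.04            # population time constants (s)
    w_ee, w_ei, w_ie = 2.0, 2.5, 2.5     # coupling weights
    f = lambda x: 1.0 / (1.0 + np.exp(-x))  # sigmoidal rate function
    for k in range(n - 1):
        E[k+1] = E[k] + dt/tau_e * (-E[k] + f(w_ee*E[k] - w_ei*I[k] + I_ext[k]))
        I[k+1] = I[k] + dt/tau_i * (-I[k] + f(w_ie*E[k]))
    return E

rates = simulate()  # baseline activity; pulses in I_ext at the syllable
                    # rate would shift the rhythm's phase and hence the
                    # syllable onsets inferred from the network.
```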
Affiliation(s)
- Anirudh Kulkarni: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Mikolaj Kegler: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Tobias Reichenbach: Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom; Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Konrad-Zuse-Strasse 3/5, 91056 Erlangen, Germany
9
Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol 2021;22:365-386. PMID: 34014416; PMCID: PMC8329114; DOI: 10.1007/s10162-021-00789-0.
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that arise from these combinations of information and shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the current state of understanding on this topic. Following a general introduction, the review is divided into five sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence on audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
10
Basso MA, Frey S, Guerriero KA, Jarraya B, Kastner S, Koyano KW, Leopold DA, Murphy K, Poirier C, Pope W, Silva AC, Tansey G, Uhrig L. Using non-invasive neuroimaging to enhance the care, well-being and experimental outcomes of laboratory non-human primates (monkeys). Neuroimage 2021;228:117667. PMID: 33359353; PMCID: PMC8005297; DOI: 10.1016/j.neuroimage.2020.117667.
Abstract
Over the past 10-20 years, neuroscience has witnessed an explosion in the use of non-invasive imaging methods, particularly magnetic resonance imaging (MRI), to study brain structure and function. At the same time, with access to MRI in many research institutions, MRI has become an indispensable tool for researchers and veterinarians: it guides improvements in surgical procedures and implants, and thus experimental as well as clinical outcomes, and it allows for improved diagnosis and monitoring of brain disease. As part of the PRIMatE Data Exchange, we gathered expert scientists, veterinarians, and clinicians who treat humans, to provide an overview of the use of non-invasive imaging tools, primarily MRI, to enhance experimental and welfare outcomes for laboratory non-human primates engaged in neuroscientific experiments. We aimed to provide guidance for other researchers, scientists, and veterinarians in the use of this powerful imaging technology, as well as to foster a larger conversation and community of scientists and veterinarians with a shared goal of improving the well-being and experimental outcomes for laboratory animals.
Affiliation(s)
- M A Basso: Fuster Laboratory of Cognitive Neuroscience, Department of Psychiatry and Biobehavioral Sciences, UCLA, Los Angeles, CA 90095, USA
- S Frey: Rogue Research, Inc., Montreal, QC, Canada
- K A Guerriero: Washington National Primate Research Center, University of Washington, Seattle, WA, USA
- B Jarraya: Cognitive Neuroimaging Unit, INSERM, CEA, NeuroSpin center, 91191 Gif/Yvette, France; Université Paris-Saclay, UVSQ, Foch hospital, Paris, France
- S Kastner: Princeton Neuroscience Institute & Department of Psychology, Princeton University, Princeton, NJ, USA
- K W Koyano: National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
- D A Leopold: National Institute of Mental Health, NIH, Bethesda, MD 20892, USA
- K Murphy: Biosciences Institute and Centre for Behaviour and Evolution, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom
- C Poirier: Biosciences Institute and Centre for Behaviour and Evolution, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom
- W Pope: Department of Radiology, UCLA, Los Angeles, CA 90095, USA
- A C Silva: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15261, USA
- G Tansey: National Eye Institute, NIH, Bethesda, MD 20892, USA
- L Uhrig: Cognitive Neuroimaging Unit, INSERM, CEA, NeuroSpin center, 91191 Gif/Yvette, France
11
Knotts JD, Michel M, Odegaard B. Defending subjective inflation: an inference to the best explanation. Neurosci Conscious 2020;2020:niaa025. PMID: 33343930; PMCID: PMC7734437; DOI: 10.1093/nc/niaa025.
Abstract
In a recent opinion piece, Abid (2019) criticizes the hypothesis that subjective inflation may partly account for apparent phenomenological richness across the visual field and outside the focus of attention. In response, we address three main issues. First, we maintain that inflation should be interpreted as an intraperceptual, and not post-perceptual, phenomenon. Second, we describe how inflation may differ from filling-in. Finally, we contend that, in general, there is sufficient evidence to tip the scales toward intraperceptual interpretations of visibility and confidence judgments.
Affiliation(s)
- J D Knotts: Department of Psychology, University of California, Los Angeles, 502 Portola Plaza, Los Angeles, CA 90095, USA
- Matthias Michel: Centre for Philosophy of Natural and Social Science, London School of Economics and Political Science, Houghton Street, London WC2A 2AE, UK; Consciousness, Cognition & Computation Group, Centre for Research in Cognition & Neurosciences, Université Libre de Bruxelles (ULB), 50 avenue F.D. Roosevelt CP191, B-1050 Bruxelles, Belgium
- Brian Odegaard: Department of Psychology, University of Florida, 945 Center Dr., P.O. Box 112250, Gainesville, FL 32603, USA
12
Meredith MA, Keniston LP, Prickett EH, Bajwa M, Cojanu A, Clemo HR, Allman BL. What is a multisensory cortex? A laminar, connectional, and functional study of a ferret temporal cortical multisensory area. J Comp Neurol 2020;528:1864-1882. PMID: 31955427; DOI: 10.1002/cne.24859.
Abstract
Now that examples of multisensory neurons have been observed across the neocortex, some confusion has arisen about the features that actually designate a region as "multisensory." While the documentation of multisensory effects within many different cortical areas is clear, little information is often available about their proportions or net functional effects. To assess the compositional and functional features that contribute to the multisensory nature of a region, the present investigation used multichannel neuronal recording and tract tracing methods to examine a ferret temporal region: the lateral rostral suprasylvian sulcal area. Here, auditory-tactile multisensory neurons were predominant, constituted the majority of neurons across all cortical layers, and their responses dominated the net spiking activity of the area. These results were then compared with a literature review of cortical multisensory data and were found to closely resemble the multisensory features of other higher-order sensory areas. Collectively, these observations argue that multisensory processing presents itself in hierarchical and area-specific ways, from regions that exhibit few multisensory features to those whose composition and processes are dominated by multisensory activity. It seems logical that the former exhibit some multisensory features (among many others), while the latter are legitimately designated as "multisensory."
Affiliation(s)
- M Alex Meredith: Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Leslie P Keniston: Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Elizabeth H Prickett: Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Moazzum Bajwa: Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Alexandru Cojanu: Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- H Ruth Clemo: Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
- Brian L Allman: Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, Richmond, Virginia
13
Multisensory Enhancement of Odor Object Processing in Primary Olfactory Cortex. Neuroscience 2019;418:254-265. DOI: 10.1016/j.neuroscience.2019.08.040.
14
Macharadze T, Budinger E, Brosch M, Scheich H, Ohl FW, Henschke JU. Early Sensory Loss Alters the Dendritic Branching and Spine Density of Supragranular Pyramidal Neurons in Rodent Primary Sensory Cortices. Front Neural Circuits 2019;13:61. PMID: 31611778; PMCID: PMC6773815; DOI: 10.3389/fncir.2019.00061.
Abstract
Multisensory integration in primary auditory (A1), visual (V1), and somatosensory cortex (S1) is substantially mediated by their direct interconnections and by thalamic inputs across the sensory modalities. We have previously shown in rodents (Mongolian gerbils) that during postnatal development, the anatomical and functional strengths of these crossmodal and also of sensory matched connections are determined by early auditory, somatosensory, and visual experience. Because supragranular layer III pyramidal neurons are major targets of corticocortical and thalamocortical connections, we investigated in this follow-up study how the loss of early sensory experience changes their dendritic morphology. Gerbils were sensory deprived early in development by either bilateral sciatic nerve transection at postnatal day (P) 5, ototoxic inner hair cell damage at P10, or eye enucleation at P10. Sholl and branch order analyses of Golgi-stained layer III pyramidal neurons at P28, which demarcates the end of the sensory critical period in this species, revealed that visual and somatosensory deprivation leads to a general increase of apical and basal dendritic branching in A1, V1, and S1. In contrast, dendritic branching, particularly of apical dendrites, decreased in all three areas following auditory deprivation. Generally, the number of spines, and consequently spine density, along the apical and basal dendrites decreased in both sensory deprived and non-deprived cortical areas. Therefore, we conclude that the loss of early sensory experience induces a refinement of corticocortical crossmodal and other cortical and thalamic connections by pruning of dendritic spines at the end of the critical period. Based on present and previous own results and on findings from the literature, we propose a scenario for multisensory development following early sensory loss.
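For concreteness, here is a minimal sketch of the intersection-counting step of a Sholl analysis like the one used above, assuming hypothetical 2D dendrite segments; real reconstructions are 3D and traced from the Golgi-stained material:

```python
# Sholl analysis core: count dendritic segments crossing concentric
# circles around the soma (illustrative, hypothetical segment data).
import numpy as np

def sholl_counts(segments, soma, radii):
    """segments: iterable of (x1, y1, x2, y2); soma: (x, y).
    A segment crosses a circle when its endpoints lie on opposite
    sides of the circle boundary."""
    counts = []
    for r in radii:
        n = 0
        for x1, y1, x2, y2 in segments:
            d1 = np.hypot(x1 - soma[0], y1 - soma[1])
            d2 = np.hypot(x2 - soma[0], y2 - soma[1])
            if (d1 - r) * (d2 - r) < 0:   # one endpoint inside, one outside
                n += 1
        counts.append(n)
    return counts
```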
Affiliation(s)
- Tamar Macharadze: Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Clinic for Anesthesiology and Intensive Care Medicine, Otto von Guericke University Hospital, Magdeburg, Germany
- Eike Budinger: Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Michael Brosch: Center for Behavioral Brain Sciences, Magdeburg, Germany; Special Lab Primate Neurobiology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Henning Scheich: Center for Behavioral Brain Sciences, Magdeburg, Germany; Emeritus Group Lifelong Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Frank W Ohl: Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Institute for Biology, Otto von Guericke University, Magdeburg, Germany
- Julia U Henschke: Institute of Cognitive Neurology and Dementia Research (IKND), Otto von Guericke University, Magdeburg, Germany
15
Rutherford HJ, Xu J, Worhunsky PD, Zhang R, Yip SW, Morie KP, Calhoun VD, Kim S, Strathearn L, Mayes LC, Potenza MN. Gradient theories of brain activation: A novel application to studying the parental brain. Curr Behav Neurosci Rep 2019;6:119-125. PMID: 32154064; PMCID: PMC7062306; DOI: 10.1007/s40473-019-00182-5.
Abstract
PURPOSE OF REVIEW: Parental brain research primarily employs general-linear-model-based (GLM-based) analyses to assess blood-oxygenation-level-dependent responses to infant auditory and visual cues, reporting common responses in shared cortical and subcortical structures. However, this approach does not reveal intermixed neural substrates related to different sensory modalities. We consider this notion in studying the parental brain.
RECENT FINDINGS: Spatial independent component analysis (sICA) has been used to separate mixed source signals from overlapping functional networks. We explore relative differences between GLM-based analysis and sICA as applied to an fMRI dataset acquired from women while they listened to infant cries or viewed infant sad faces.
SUMMARY: There is growing appreciation for the value of moving beyond GLM-based analyses to consider brain functional organization as continuous, distributive, and overlapping gradients of neural substrates related to different sensory modalities. Preliminary findings suggest sICA can be applied to the study of the parental brain.
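A conceptual sketch of the sICA decomposition contrasted with the GLM above, using scikit-learn's FastICA on random stand-in data; a real pipeline would add preprocessing, dimensionality reduction, and component selection:

```python
# Spatial ICA on a (time x voxel) fMRI matrix: unlike a GLM's fixed
# regressors, the estimated sources are spatial maps that may overlap.
# Data here are random placeholders, not the study's dataset.
import numpy as np
from sklearn.decomposition import FastICA

n_timepoints, n_voxels, n_components = 200, 5000, 20
fmri = np.random.randn(n_timepoints, n_voxels)   # stand-in BOLD data

ica = FastICA(n_components=n_components, random_state=0)
timecourses = ica.fit_transform(fmri)            # (time x components)
spatial_maps = ica.components_                   # (components x voxels)
# Each row of spatial_maps is a distributed, possibly overlapping network;
# the timecourses can then be related to the cry vs. sad-face conditions.
```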
Affiliation(s)
- Helena J.V. Rutherford: Child Study Center, Yale University School of Medicine, New Haven, CT 06510, USA
- Jiansong Xu: Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA
- Patrick D. Worhunsky: Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA
- Rubin Zhang: Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA
- Sarah W. Yip: Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA
- Kristen P. Morie: Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA
- Vince D. Calhoun: Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA; The Mind Research Network, Albuquerque, NM 87131, USA; Dept of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, NM 87131, USA
- Sohye Kim: Department of Obstetrics and Gynecology, Baylor College of Medicine; Department of Pediatrics and Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine; Center for Reproductive Psychiatry, Pavilion for Women, Texas Children's Hospital
- Lane Strathearn: Department of Pediatrics and Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine; Stead Family Department of Pediatrics, University of Iowa Carver College of Medicine
- Linda C. Mayes: Child Study Center, Yale University School of Medicine, New Haven, CT 06510, USA; Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA
- Marc N. Potenza: Child Study Center, Yale University School of Medicine, New Haven, CT 06510, USA; Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06510, USA; Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, USA; The Connecticut Council on Problem Gambling, Wethersfield, CT 06109, USA; The Connecticut Mental Health Center, New Haven, CT 06519, USA
16
Noel JP, Faivre N, Magosso E, Blanke O, Alais D, Wallace M. Multisensory perceptual awareness: Categorical or graded? Cortex 2019;120:169-180. PMID: 31323457; DOI: 10.1016/j.cortex.2019.05.018.
Abstract
Neural evidence suggests that mechanisms associated with conscious access (i.e., the ability to report on a conscious state) are "all-or-none": upon crossing some threshold, neural signals are globally broadcast throughout the brain and allow conscious reports. However, whether subjective experience (phenomenal consciousness) is categorical (i.e., transitioning abruptly from unconscious to conscious states) or graded (i.e., characterized by multiple intermediate states) remains an open question. To address this issue, we built a series of artificial neural networks containing distinct feedback connectivity from "multisensory" to "unisensory" cortices. In line with consciousness theories, we operationalized perceptual consciousness as the presence of feedback from higher-order nodes back to unisensory nodes, which allows "neural ignition": a rapid, non-linear boost in response putatively leading to phenomenal consciousness. When simulating how these networks responded to unisensory and multisensory inputs, we found the fastest responses for multisensory presentations associated with multisensory feedback, and the slowest responses for multisensory presentations without feedback. Most interestingly, despite being built in line with "all-or-none" models of consciousness, multisensory stimuli associated with unisensory feedback (i.e., auditory or visual), and hence consistent with unisensory phenomenology according to theories of consciousness, generated intermediate reaction times. To extend these models to human perception and performance, we conducted extensive psychophysical testing in 29 subjects who each completed 10 h of a multisensory cue-congruency task. Consistent with the modeling results, we found that reaction times to multisensory cues reported as unisensory were intermediate between those of fully aware and fully unaware cues. These results support the existence of graded forms of phenomenological consciousness that can be instantiated by simple neural networks built in line with "all-or-none" models of consciousness.
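A toy sketch in the spirit of these network simulations, with made-up time constants and weights: feedback from the higher-order node produces an earlier threshold crossing, i.e., a faster simulated "report":

```python
# Minimal rate network: unisensory nodes drive a "multisensory" node;
# optional feedback gives a supra-linear boost ("ignition"). Reaction
# time is the first threshold crossing. All parameters are illustrative.
import numpy as np

def reaction_time(feedback=True, dt=1e-3, T=1.0, thresh=0.8):
    a = v = m = 0.0                        # auditory, visual, multisensory nodes
    for step in range(int(T / dt)):
        drive = 1.0                        # constant bimodal input
        fb = 2.0 * m if feedback else 0.0  # feedback from higher-order node
        a += dt / 0.05 * (-a + drive + fb)
        v += dt / 0.05 * (-v + drive + fb)
        m += dt / 0.10 * (-m + np.tanh(a + v))  # non-linear integration
        if m > thresh:
            return step * dt               # "ignition" reached: report
    return np.inf                          # no report within the trial

print(reaction_time(feedback=True), reaction_time(feedback=False))
```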
Affiliation(s)
- Jean-Paul Noel: Vanderbilt Brain Institute, Vanderbilt University, Nashville, USA; Center for Neural Science, New York University, New York City, NY, USA
- Nathan Faivre: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Laboratoire de Psychologie et Neurocognition, LPNC CNRS 5105, Université Grenoble Alpes, France
- Elisa Magosso: Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena, Italy
- Olaf Blanke: Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland; Department of Neurology, University of Geneva, Geneva, Switzerland
- David Alais: School of Psychology, The University of Sydney, Sydney, Australia
- Mark Wallace: Vanderbilt Brain Institute, Vanderbilt University, Nashville, USA; Department of Hearing and Speech, Vanderbilt University, Nashville, USA; Department of Psychology, Vanderbilt University, Nashville, USA
17
Hämäläinen JA, Parviainen T, Hsu YF, Salmelin R. Dynamics of brain activation during learning of syllable-symbol paired associations. Neuropsychologia 2019;129:93-103. PMID: 30930303; DOI: 10.1016/j.neuropsychologia.2019.03.016.
Abstract
Initial stages of reading acquisition require the learning of letter and speech sound combinations. While the long-term effects of audio-visual learning are rather well studied, relatively little is known about the short-term learning effects at the brain level. Here we examined the cortical dynamics of short-term learning using magnetoencephalography (MEG) and electroencephalography (EEG) in two experiments that respectively addressed active and passive learning of the association between shown symbols and heard syllables. In experiment 1, learning was based on feedback provided after each trial. The learning of the audio-visual associations was contrasted with items for which the feedback was meaningless. In experiment 2, learning was based on statistical learning through passive exposure to audio-visual stimuli that were consistently presented with each other and contrasted with audio-visual stimuli that were randomly paired with each other. After 5-10 min of training and exposure, learning-related changes emerged in neural activation around 200 and 350 ms in the two experiments. The MEG results showed activity changes at 350 ms in caudal middle frontal cortex and posterior superior temporal sulcus, and at 500 ms in temporo-occipital cortex. Changes in brain activity coincided with a decrease in reaction times and an increase in accuracy scores. Changes in EEG activity were observed starting at the auditory P2 response followed by later changes after 300 ms. The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
Affiliation(s)
- Jarmo A Hämäläinen: Centre for Interdisciplinary Brain Research, Department of Psychology, P.O. Box 35, 40014 University of Jyväskylä, Finland
- Tiina Parviainen: Centre for Interdisciplinary Brain Research, Department of Psychology, P.O. Box 35, 40014 University of Jyväskylä, Finland
- Yi-Fang Hsu: Department of Educational Psychology and Counseling, National Taiwan Normal University, 10610 Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610 Taipei, Taiwan
- Riitta Salmelin: Department of Neuroscience and Biomedical Engineering, 00076 Aalto University, Finland; Aalto NeuroImaging, 00076 Aalto University, Finland
18
Majka P, Rosa MGP, Bai S, Chan JM, Huo BX, Jermakow N, Lin MK, Takahashi YS, Wolkowicz IH, Worthy KH, Rajan R, Reser DH, Wójcik DK, Okano H, Mitra PP. Unidirectional monosynaptic connections from auditory areas to the primary visual cortex in the marmoset monkey. Brain Struct Funct 2018;224:111-131. PMID: 30288557; PMCID: PMC6373361; DOI: 10.1007/s00429-018-1764-4.
Abstract
Until the late twentieth century, it was believed that different sensory modalities were processed by largely independent pathways in the primate cortex, with cross-modal integration only occurring in specialized polysensory areas. This model was challenged by the finding that the peripheral representation of the primary visual cortex (V1) receives monosynaptic connections from areas of the auditory cortex in the macaque. However, auditory projections to V1 have not been reported in other primates. We investigated the existence of direct interconnections between V1 and auditory areas in the marmoset, a New World monkey. Labelled neurons in auditory cortex were observed following 4 out of 10 retrograde tracer injections involving V1. These projections to V1 originated in the caudal subdivisions of auditory cortex (primary auditory cortex, caudal belt and parabelt areas), and targeted parts of V1 that represent parafoveal and peripheral vision. Injections near the representation of the vertical meridian of the visual field labelled few or no cells in auditory cortex. We also placed 8 retrograde tracer injections involving core, belt and parabelt auditory areas, none of which revealed direct projections from V1. These results confirm the existence of a direct, nonreciprocal projection from auditory areas to V1 in a different primate species, which has evolved separately from the macaque for over 30 million years. The essential similarity of these observations between marmoset and macaque indicates that early-stage audiovisual integration is a shared characteristic of primate sensory processing.
Affiliation(s)
- Piotr Majka: Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland; Monash University Node, Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC 3800, Australia
- Marcello G P Rosa: Monash University Node, Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC 3800, Australia; Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Shi Bai: Monash University Node, Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC 3800, Australia; Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Jonathan M Chan: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Bing-Xing Huo: Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama 351-0106, Japan; Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
- Natalia Jermakow: Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Meng K Lin: Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama 351-0106, Japan
- Yeonsook S Takahashi: Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama 351-0106, Japan
- Ianina H Wolkowicz: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Katrina H Worthy: Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- Ramesh Rajan: Monash University Node, Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC 3800, Australia; Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC 3800, Australia
- David H Reser: School of Rural Health, Monash University, Churchill, VIC 3842, Australia
- Daniel K Wójcik: Laboratory of Neuroinformatics, Nencki Institute of Experimental Biology of Polish Academy of Sciences, 02-093 Warsaw, Poland
- Hideyuki Okano: Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama 351-0106, Japan; Department of Physiology, Keio University School of Medicine, Tokyo 160-8582, Japan
- Partha P Mitra: Monash University Node, Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC 3800, Australia; Laboratory for Marmoset Neural Architecture, RIKEN Center for Brain Science, Saitama 351-0106, Japan; Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
19
Atilgan H, Town SM, Wood KC, Jones GP, Maddox RK, Lee AKC, Bizley JK. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding. Neuron 2018;97:640-655.e4. PMID: 29395914; PMCID: PMC5814679; DOI: 10.1016/j.neuron.2017.12.034.
Abstract
How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis.
Highlights:
- Visual stimuli can shape how auditory cortical neurons respond to sound mixtures.
- Temporal coherence between senses enhances sound features of a bound multisensory object.
- Visual stimuli elicit changes in the phase of the local field potential in auditory cortex.
- Vision-induced phase effects are lost when visual cortex is reversibly silenced.
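A minimal sketch of the temporal-coherence manipulation central to this study: yoking a luminance signal to the amplitude envelope of one sound in a two-sound mixture. Stimuli and parameters here are synthetic placeholders, not the study's:

```python
# Build two amplitude-modulated noises, mix them, and make a "visual"
# luminance trace coherent with the target sound's envelope.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs, dur = 44100, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)

def am_noise(rate_hz):
    """Noise carrier with a slow, random amplitude envelope."""
    sos = butter(2, rate_hz, fs=fs, output="sos")        # low-pass for envelope
    env = sosfiltfilt(sos, rng.standard_normal(t.size))
    env = (env - env.min()) / (env.max() - env.min())    # scale to 0..1
    return env * rng.standard_normal(t.size), env

target, target_env = am_noise(7.0)   # sound the visual stream is yoked to
masker, _ = am_noise(7.0)            # competing sound, independent envelope
mixture = target + masker

luminance = target_env               # coherent condition: luminance tracks
                                     # the target envelope frame by frame;
                                     # an incoherent control would use an
                                     # independently drawn envelope instead.
```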
Affiliation(s)
- Huriye Atilgan
- The Ear Institute, University College London, London, UK
| | - Stephen M Town
- The Ear Institute, University College London, London, UK
| | | | - Gareth P Jones
- The Ear Institute, University College London, London, UK
| | - Ross K Maddox
- Department of Biomedical Engineering and Department of Neuroscience, Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA; Institute for Learning and Brain Sciences and Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Adrian K C Lee
- Institute for Learning and Brain Sciences and Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
20
Takacs JD, Forrest TJ, Basura GJ. Noise exposure alters long-term neural firing rates and synchrony in primary auditory and rostral belt cortices following bimodal stimulation. Hear Res 2017; 356:1-15. [DOI: 10.1016/j.heares.2017.07.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/17/2017] [Revised: 06/04/2017] [Accepted: 07/10/2017] [Indexed: 11/16/2022]
21
Starke J, Ball F, Heinze HJ, Noesselt T. The spatio-temporal profile of multisensory integration. Eur J Neurosci 2017; 51:1210-1223. [PMID: 29057531 DOI: 10.1111/ejn.13753] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2017] [Revised: 10/13/2017] [Accepted: 10/16/2017] [Indexed: 12/29/2022]
Abstract
Task-irrelevant visual stimuli can enhance auditory perception. However, while there is some neurophysiological evidence for mechanisms that underlie the phenomenon, the neural basis of visually induced effects on auditory perception remains unknown. Combining fMRI and EEG with psychophysical measurements in two independent studies, we identified the neural underpinnings and temporal dynamics of visually induced auditory enhancement. Lower- and higher-intensity sounds were paired with a non-informative visual stimulus, while participants performed an auditory detection task. Behaviourally, visual co-stimulation enhanced auditory sensitivity. Using fMRI, enhanced BOLD signals were observed in primary auditory cortex for low-intensity audiovisual stimuli which scaled with subject-specific enhancement in perceptual sensitivity. Concordantly, a modulation of event-related potentials could already be observed over frontal electrodes at an early latency (30-80 ms), which again scaled with subject-specific behavioural benefits. Later modulations starting around 280 ms, that is in the time range of the P3, did not fit this pattern of brain-behaviour correspondence. Hence, the latency of the corresponding fMRI-EEG brain-behaviour modulation points at an early interplay of visual and auditory signals in low-level auditory cortex, potentially mediated by crosstalk at the level of the thalamus. However, fMRI signals in primary auditory cortex, auditory thalamus and the P50 for higher-intensity auditory stimuli were also elevated by visual co-stimulation (in the absence of any behavioural effect) suggesting a general, intensity-independent integration mechanism. We propose that this automatic interaction occurs at the level of the thalamus and might signify a first step of audiovisual interplay necessary for visually induced perceptual enhancement of auditory perception.
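The behavioural claim, that visual co-stimulation enhanced auditory sensitivity, is conventionally indexed by d′ from signal detection theory. A minimal sketch follows; the trial counts are invented for illustration and scipy is assumed.

```python
# Sketch: detection sensitivity (d') for auditory-only vs. audiovisual trials.
# Trial counts below are invented, not the study's data.
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false-alarm rate); a log-linear correction keeps
    rates of exactly 0 or 1 from producing infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print("auditory only:", round(d_prime(hits=62, misses=38, fas=20, crs=80), 2))
print("audiovisual  :", round(d_prime(hits=78, misses=22, fas=21, crs=79), 2))
```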
Affiliation(s)
- Johanna Starke
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Hans-Jochen Heinze
- Department of Neurology, Faculty of Medicine, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany; Center for Behavioural Brain Sciences, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
22
Intracortical depth analyses of frequency-sensitive regions of human auditory cortex using 7T fMRI. Neuroimage 2016; 143:116-127. [PMID: 27608603 DOI: 10.1016/j.neuroimage.2016.09.010] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2016] [Revised: 08/15/2016] [Accepted: 09/04/2016] [Indexed: 11/23/2022] Open
Abstract
Despite recent advances in auditory neuroscience, the exact functional organization of human auditory cortex (AC) has been difficult to investigate. Here, using reversals of tonotopic gradients as the test case, we examined whether human ACs can be more precisely mapped by avoiding signals caused by large draining vessels near the pial surface, which bias blood-oxygen level dependent (BOLD) signals away from the actual sites of neuronal activity. Using ultra-high field (7T) fMRI and cortical depth analysis techniques previously applied in visual cortices, we sampled 1 mm isotropic voxels from different depths of AC during narrow-band sound stimulation with biologically relevant temporal patterns. At the group level, analyses that considered voxels from all cortical depths, but excluded those intersecting the pial surface, showed (a) the greatest statistical sensitivity in contrasts between activations to high vs. low frequency sounds and (b) the highest inter-subject consistency of phase-encoded continuous tonotopy mapping. Analyses based solely on voxels intersecting the pial surface produced the least consistent group results, even when compared to analyses based solely on voxels intersecting the white-matter surface where both signal strength and within-subject statistical power are weakest. However, no evidence was found for reduced within-subject reliability in analyses considering the pial voxels only. Our group results could, thus, reflect improved inter-subject correspondence of high and low frequency gradients after the signals from voxels near the pial surface are excluded. Using tonotopy analyses as the test case, our results demonstrate that when the major physiological and anatomical biases imparted by the vasculature are controlled, functional mapping of human ACs becomes more consistent from subject to subject than previously thought.
23
Montagne C, Zhou Y. Visual capture of a stereo sound: Interactions between cue reliability, sound localization variability, and cross-modal bias. J Acoust Soc Am 2016; 140:471. [PMID: 27475171 DOI: 10.1121/1.4955314] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Multisensory interactions involve coordination and sometimes competition between multiple senses. Vision usually dominates audition in spatial judgments when light and sound stimuli are presented from two different physical locations. This study investigated the influence of vision on the perceived location of a phantom sound source placed in a stereo sound field using a pair of loudspeakers emitting identical signals that were delayed or attenuated relative to each other. Results show that although a similar horizontal range (±45°) was reported for timing-modulated and level-modulated signals, listeners' localization performance showed greater variability for the timing signals. When visual stimuli were presented simultaneously with the auditory stimuli, listeners showed stronger visual bias for timing-modulated signals than level-modulated and single-speaker control signals. Trial-to-trial errors remained relatively stable over time, suggesting that sound localization uncertainty has an immediate and long-lasting effect on the across-modal bias. Binaural signal analyses further reveal that interaural differences of time and intensity, the two primary cues for sound localization in the azimuthal plane, are inherently more ambiguous for signals placed using timing. These results suggest that binaural ambiguity is intrinsically linked with localization variability and the strength of cross-modal bias in sound localization.
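The two stimulus manipulations, inter-speaker delay and inter-speaker attenuation, are easy to make concrete. This is a hedged sketch with arbitrary values, not the stimulus code used in the study.

```python
# Sketch: place a phantom source in a stereo field by delaying or attenuating
# one loudspeaker channel relative to the other. Values are illustrative.
import numpy as np

fs = 44100
burst = np.random.randn(fs // 2)             # 0.5 s noise burst

def pan_by_timing(x, delay_ms, fs):
    """Identical signals; delaying the right channel shifts the image left."""
    d = int(round(delay_ms * 1e-3 * fs))
    return np.stack([np.pad(x, (0, d)), np.pad(x, (d, 0))], axis=1)

def pan_by_level(x, atten_db):
    """Identical timing; attenuating the right channel shifts the image left."""
    gain = 10 ** (-atten_db / 20)
    return np.stack([x, gain * x], axis=1)

timing_stimulus = pan_by_timing(burst, delay_ms=0.5, fs=fs)
level_stimulus = pan_by_level(burst, atten_db=6.0)
```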
Affiliation(s)
- Christopher Montagne
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA
- Yi Zhou
- Laboratory of Auditory Computation & Neurophysiology, Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA
24
Poliva O. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language. Front Neurosci 2016; 10:307. [PMID: 27445676 PMCID: PMC4928493 DOI: 10.3389/fnins.2016.00307] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2016] [Accepted: 06/17/2016] [Indexed: 11/24/2022] Open
Abstract
The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences).
25
Geissler DB, Schmidt HS, Ehret G. Knowledge About Sounds-Context-Specific Meaning Differently Activates Cortical Hemispheres, Auditory Cortical Fields, and Layers in House Mice. Front Neurosci 2016; 10:98. [PMID: 27013959 PMCID: PMC4789409 DOI: 10.3389/fnins.2016.00098] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2015] [Accepted: 02/26/2016] [Indexed: 11/13/2022] Open
Abstract
Activation of the auditory cortex (AC) by a given sound pattern is plastic, depending, in largely unknown ways, on the physiological state and the behavioral context of the receiving animal and on the receiver's experience with the sounds. Such plasticity can be inferred when house mouse mothers respond maternally to pup ultrasounds right after parturition and naïve females have to learn to respond. Here we use c-FOS immunocytochemistry to quantify highly activated neurons in the AC fields and layers of seven groups of mothers and naïve females who have different knowledge about and are differently motivated to respond to acoustic models of pup ultrasounds of different behavioral significance. Profiles of FOS-positive cells in the AC primary fields (AI, AAF), the ultrasonic field (UF), the secondary field (AII), and the dorsoposterior field (DP) suggest that activation reflects in AI, AAF, and UF the integration of sound properties with animal state-dependent factors, in the higher-order field AII the news value of a given sound in the behavioral context, and in the higher-order field DP the level of maternal motivation and, by left-hemisphere activation advantage, the recognition of the meaning of sounds in the given context. Anesthesia reduced activation in all fields, especially in cortical layers 2/3. Thus, plasticity in the AC is field-specific preparing different output of AC fields in the process of perception, recognition and responding to communication sounds. Further, the activation profiles of the auditory cortical fields suggest the differentiation between brains hormonally primed to know (mothers) and brains which acquired knowledge via implicit learning (naïve females). In this way, auditory cortical activation discriminates between instinctive (mothers) and learned (naïve females) cognition.
Affiliation(s)
- Günter Ehret
- Institute of Neurobiology, University of Ulm Ulm, Germany
26
Meredith MA, Allman BL. Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets. Eur J Neurosci 2015; 41:686-98. [PMID: 25728185 DOI: 10.1111/ejn.12828] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2014] [Revised: 11/07/2014] [Accepted: 12/10/2014] [Indexed: 11/29/2022]
Abstract
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions.
Affiliation(s)
- M Alex Meredith
- Department of Anatomy and Neurobiology, Virginia Commonwealth University School of Medicine, 1101 E. Marshall Street, Sanger Hall Rm-12-067, Richmond, VA, 23298-0709, USA
27
Chen YC, Xia W, Luo B, Muthaiah VPK, Xiong Z, Zhang J, Wang J, Salvi R, Teng GJ. Frequency-specific alternations in the amplitude of low-frequency fluctuations in chronic tinnitus. Front Neural Circuits 2015; 9:67. [PMID: 26578894 PMCID: PMC4624866 DOI: 10.3389/fncir.2015.00067] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2015] [Accepted: 10/15/2015] [Indexed: 12/13/2022] Open
Abstract
Tinnitus, a phantom ringing, buzzing, or hissing sensation with potentially debilitating consequences, is thought to arise from aberrant spontaneous neural activity at one or more sites within the central nervous system; however, the location and specific features of these oscillations are poorly understood with respect to specific tinnitus features. Recent resting-state functional magnetic resonance imaging (fMRI) studies suggest that aberrant fluctuations in spontaneous low-frequency oscillations (LFO) of the blood oxygen level-dependent (BOLD) signal may be an important factor in chronic tinnitus; however, the role that frequency-specific components of LFO play in subjective tinnitus remains unclear. A total of 39 chronic tinnitus patients and 41 well-matched healthy controls participated in the resting-state fMRI scans. The LFO amplitudes were investigated using the amplitude of low-frequency fluctuation (ALFF) and fractional ALFF (fALFF) in two different frequency bands (slow-4: 0.027–0.073 Hz and slow-5: 0.01–0.027 Hz). We observed significant differences between tinnitus patients and normal controls in ALFF/fALFF in the two bands (slow-4 and slow-5) in several brain regions including the superior frontal gyrus (SFG), inferior frontal gyrus, middle temporal gyrus, angular gyrus, supramarginal gyrus, and middle occipital gyrus. Across the entire subject pool, significant differences in ALFF/fALFF between the two bands were found in the midbrain, basal ganglia, hippocampus and cerebellum (slow-4 > slow-5), and in the middle frontal gyrus, supramarginal gyrus, posterior cingulate cortex, and precuneus (slow-5 > slow-4). We also observed significant interaction between frequency bands and patient groups in the orbitofrontal gyrus. Furthermore, tinnitus distress was positively correlated with the magnitude of ALFF in right SFG and the magnitude of fALFF slow-4 band in left SFG, whereas tinnitus duration was positively correlated with the magnitude of ALFF in right SFG and the magnitude of fALFF slow-5 band in left SFG. Resting-state fMRI provides an unbiased method for identifying aberrant spontaneous LFO occurring throughout the central nervous system. Chronic tinnitus patients have widespread abnormalities in the ALFF and fALFF slow-4 and slow-5 bands which are correlated with tinnitus distress and duration. These results provide new insights on the neuropathophysiology of chronic tinnitus; therapies capable of reversing these aberrant patterns may reduce tinnitus distress.
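For readers unfamiliar with the two measures: ALFF sums the amplitude spectrum inside a low-frequency band, and fALFF expresses that sum as a fraction of the amplitude over the whole detectable range. A minimal sketch, with a random stand-in time series and an assumed TR; this is not the authors' pipeline.

```python
# Sketch: ALFF and fALFF for one voxel's time series in the slow-4
# (0.027-0.073 Hz) and slow-5 (0.01-0.027 Hz) bands named above.
import numpy as np
from scipy.signal import welch

tr = 2.0                                   # repetition time (s), assumed
ts = np.random.randn(240)                  # 240 volumes, detrended voxel signal

freqs, psd = welch(ts, fs=1.0 / tr, nperseg=128)
amp = np.sqrt(psd)                          # amplitude spectrum

def alff_falff(lo, hi):
    band = (freqs >= lo) & (freqs <= hi)
    alff = amp[band].sum()
    return alff, alff / amp[freqs > 0].sum()   # fALFF: fraction of total amplitude

for name, (lo, hi) in [("slow-5", (0.01, 0.027)), ("slow-4", (0.027, 0.073))]:
    a, f = alff_falff(lo, hi)
    print(f"{name}: ALFF = {a:.2f}, fALFF = {f:.2f}")
```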
Affiliation(s)
- Yu-Chen Chen
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China; Center for Hearing and Deafness, State University of New York at Buffalo, Buffalo, NY, USA
- Wenqing Xia
- Medical School, Southeast University, Nanjing, China
- Bin Luo
- Center for Hearing and Deafness, State University of New York at Buffalo, Buffalo, NY, USA
- Vijaya P K Muthaiah
- Center for Hearing and Deafness, State University of New York at Buffalo, Buffalo, NY, USA
- Zhenyu Xiong
- Toshiba Stroke and Vascular Research Center, State University of New York at Buffalo, Buffalo, NY, USA
- Jian Zhang
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China
- Jian Wang
- Department of Physiology, Southeast University, Nanjing, China; School of Human Communication Disorders, Dalhousie University, Halifax, NS, Canada
- Richard Salvi
- Center for Hearing and Deafness, State University of New York at Buffalo, Buffalo, NY, USA
- Gao-Jun Teng
- Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China
28
Rhone AE, Nourski KV, Oya H, Kawasaki H, Howard MA, McMurray B. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex. Lang Cogn Neurosci 2015; 31:284-302. [PMID: 27182530 PMCID: PMC4865257 DOI: 10.1080/23273798.2015.1101145] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
29
Murray MM, Thelen A, Thut G, Romei V, Martuzzi R, Matusz PJ. The multisensory function of the human primary visual cortex. Neuropsychologia 2015; 83:161-169. [PMID: 26275965 DOI: 10.1016/j.neuropsychologia.2015.08.011] [Citation(s) in RCA: 107] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2015] [Revised: 08/08/2015] [Accepted: 08/10/2015] [Indexed: 01/20/2023]
Abstract
It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient hard evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex.
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Antonia Thelen
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Gregor Thut
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, United Kingdom
- Vincenzo Romei
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, United Kingdom
- Roberto Martuzzi
- Laboratory of Cognitive Neuroscience, Brain-Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Switzerland
- Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, United Kingdom.
30
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 03/03/2015] [Indexed: 03/28/2024] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Perception of contact calls occurs by the ADS detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) for searching the caller, and via a series of frontal lobe-brainstem connections, a contact call is produced in return. Because the human ADS processes also speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and strengthening of direct frontal-brainstem connections, the ADS converted auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mother. Vocal control could then enable question-answer conversations, by offspring emitting a low-level distress call for inquiring about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.
31
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/21/2017] [Indexed: 12/28/2022] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
32
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004.2 DOI: 10.12688/f1000research.6175.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/12/2016] [Indexed: 03/28/2024] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
33
Stevenson RA, Nelms CE, Baum SH, Zurkovsky L, Barense MD, Newhouse PA, Wallace MT. Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition. Neurobiol Aging 2015; 36:283-91. [PMID: 25282337 PMCID: PMC4268368 DOI: 10.1016/j.neurobiolaging.2014.08.003] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2014] [Revised: 07/22/2014] [Accepted: 08/02/2014] [Indexed: 01/20/2023]
Abstract
Over the next 2 decades, a dramatic shift in the demographics of society will take place, with a rapid growth in the population of older adults. One of the most common complaints with healthy aging is a decreased ability to successfully perceive speech, particularly in noisy environments. In such noisy environments, the presence of visual speech cues (i.e., lip movements) provide striking benefits for speech perception and comprehension, but previous research suggests that older adults gain less from such audiovisual integration than their younger peers. To determine at what processing level these behavioral differences arise in healthy-aging populations, we administered a speech-in-noise task to younger and older adults. We compared the perceptual benefits of having speech information available in both the auditory and visual modalities and examined both phoneme and whole-word recognition across varying levels of signal-to-noise ratio. For whole-word recognition, older adults relative to younger adults showed greater multisensory gains at intermediate SNRs but reduced benefit at low SNRs. By contrast, at the phoneme level both younger and older adults showed approximately equivalent increases in multisensory gain as signal-to-noise ratio decreased. Collectively, the results provide important insights into both the similarities and differences in how older and younger adults integrate auditory and visual speech cues in noisy environments and help explain some of the conflicting findings in previous studies of multisensory speech perception in healthy aging. These novel findings suggest that audiovisual processing is intact at more elementary levels of speech perception in healthy-aging populations and that deficits begin to emerge only at the more complex word-recognition level of speech signals.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Nashville, TN, USA; Vanderbilt Kennedy Center, Nashville, TN, USA.
- Caitlin E Nelms
- Department of Psychology, Austin Peay State University, Clarksville, TN, USA; Department of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Sarah H Baum
- Vanderbilt Brain Institute, Nashville, TN, USA; Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, TX, USA
- Lilia Zurkovsky
- Center for Cognitive Medicine, Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Toronto, Ontario, Canada
- Paul A Newhouse
- Center for Cognitive Medicine, Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Nashville, TN, USA; Vanderbilt Kennedy Center, Nashville, TN, USA; Center for Cognitive Medicine, Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA
34
Li B, Gong L, Wu R, Li A, Xu F. Complex relationship between BOLD-fMRI and electrophysiological signals in different olfactory bulb layers. Neuroimage 2014; 95:29-38. [DOI: 10.1016/j.neuroimage.2014.03.052] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2013] [Revised: 03/06/2014] [Accepted: 03/17/2014] [Indexed: 01/09/2023] Open
35
Man K, Kaplan J, Damasio H, Damasio A. Neural convergence and divergence in the mammalian cerebral cortex: from experimental neuroanatomy to functional neuroimaging. J Comp Neurol 2014; 521:4097-111. [PMID: 23840023 DOI: 10.1002/cne.23408] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2013] [Revised: 04/30/2013] [Accepted: 06/28/2013] [Indexed: 11/08/2022]
Abstract
A development essential for understanding the neural basis of complex behavior and cognition is the description, during the last quarter of the twentieth century, of detailed patterns of neuronal circuitry in the mammalian cerebral cortex. This effort established that sensory pathways exhibit successive levels of convergence, from the early sensory cortices to sensory-specific and multisensory association cortices, culminating in maximally integrative regions. It was also established that this convergence is reciprocated by successive levels of divergence, from the maximally integrative areas all the way back to the early sensory cortices. This article first provides a brief historical review of these neuroanatomical findings, which were relevant to the study of brain and mind-behavior relationships and to the proposal of heuristic anatomofunctional frameworks. In a second part, the article reviews new evidence that has accumulated from studies of functional neuroimaging, employing both univariate and multivariate analyses, as well as electrophysiology, in humans and other mammals, that the integration of information across the auditory, visual, and somatosensory-motor modalities proceeds in a content-rich manner. Behaviorally and cognitively relevant information is extracted from and conserved across the different modalities, both in higher order association cortices and in early sensory cortices. Such stimulus-specific information is plausibly relayed along the neuroanatomical pathways alluded to above. The evidence reviewed here suggests the need for further in-depth exploration of the intricate connectivity of the mammalian cerebral cortex in experimental neuroanatomical studies.
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
36
Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 2014; 27:707-30. [PMID: 24722880 DOI: 10.1007/s10548-014-0365-7] [Citation(s) in RCA: 133] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 03/26/2014] [Indexed: 12/19/2022]
Abstract
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods that are now used to measure neural, behavioral, and perceptual responses. Many of the measures that have been developed to quantify multisensory integration (and which have been derived from single unit analyses), have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means with which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we will discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
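Two of the single-unit metrics this tutorial literature discusses are the enhancement index relative to the best unisensory response and the additive (AV versus A+V) criterion. A sketch with invented spike counts, purely to make the arithmetic concrete:

```python
# Sketch: two common single-unit metrics of multisensory integration.
# Spike counts are invented for illustration.
import numpy as np

a = np.array([12, 15, 11, 14, 13])       # auditory-only trials
v = np.array([8, 9, 7, 10, 8])           # visual-only trials
av = np.array([24, 27, 22, 26, 25])      # audiovisual trials

best_unisensory = max(a.mean(), v.mean())
enhancement = 100 * (av.mean() - best_unisensory) / best_unisensory
additivity = av.mean() - (a.mean() + v.mean())   # > 0 suggests superadditivity

print(f"enhancement index: {enhancement:.0f}% over best unisensory response")
print(f"AV - (A + V): {additivity:.1f} spikes")
```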
37
Abstract
Neurophysiological findings suggested that auditory and visual motion information is integrated at an early stage of auditory cortical processing, already starting in primary auditory cortex. Here, the effect of visual motion on processing of auditory motion was investigated by employing electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis in the dimensions of both the time and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found already around 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: Incongruent visual motion enhanced the auditory motion onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
38
Massoudi R, Van Wanrooij MM, Van Wetter SMCI, Versnel H, Van Opstal AJ. Task-related preparatory modulations multiply with acoustic processing in monkey auditory cortex. Eur J Neurosci 2014; 39:1538-50. [PMID: 24649904 DOI: 10.1111/ejn.12532] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2014] [Accepted: 01/28/2014] [Indexed: 11/30/2022]
Abstract
We characterised task-related top-down signals in monkey auditory cortex cells by comparing single-unit activity during passive sound exposure with neuronal activity during a predictable and unpredictable reaction-time task for a variety of spectral-temporally modulated broadband sounds. Although animals were not trained to attend to particular spectral or temporal sound modulations, their reaction times demonstrated clear acoustic spectral-temporal sensitivity for unpredictable modulation onsets. Interestingly, this sensitivity was absent for predictable trials with fast manual responses, but re-emerged for the slower reactions in these trials. Our analysis of neural activity patterns revealed a task-related dynamic modulation of auditory cortex neurons that was locked to the animal's reaction time, but invariant to the spectral and temporal acoustic modulations. This finding suggests dissociation between acoustic and behavioral signals at the single-unit level. We further demonstrated that single-unit activity during task execution can be described by a multiplicative gain modulation of acoustic-evoked activity and a task-related top-down signal, rather than by linear summation of these signals.
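The contrast the authors draw can be written compactly. In the notation below (symbols are mine, not the paper's), r_ac(t) is the acoustic-evoked response and the task-related signal either multiplies it or adds to it:

```latex
\underbrace{r(t) = g_{\text{task}}(t)\, r_{\text{ac}}(t)}_{\text{multiplicative gain (favored)}}
\qquad \text{vs.} \qquad
\underbrace{r(t) = r_{\text{ac}}(t) + s_{\text{task}}(t)}_{\text{linear summation (rejected)}}
```

Here g_task(t) is the reaction-time-locked, stimulus-invariant top-down signal described in the abstract.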
Affiliation(s)
- Roohollah Massoudi
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ, Nijmegen, The Netherlands
39
Henschke JU, Noesselt T, Scheich H, Budinger E. Possible anatomical pathways for short-latency multisensory integration processes in primary sensory cortices. Brain Struct Funct 2014; 220:955-77. [DOI: 10.1007/s00429-013-0694-4] [Citation(s) in RCA: 61] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2013] [Accepted: 12/17/2013] [Indexed: 01/25/2023]
40
Lanz F, Moret V, Rouiller EM, Loquet G. Multisensory Integration in Non-Human Primates during a Sensory-Motor Task. Front Hum Neurosci 2013; 7:799. [PMID: 24319421 PMCID: PMC3837444 DOI: 10.3389/fnhum.2013.00799] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2013] [Accepted: 11/03/2013] [Indexed: 12/12/2022] Open
Abstract
Daily, our central nervous system receives inputs via several sensory modalities, processes them and integrates information in order to produce a suitable behavior. Remarkably, such multisensory integration brings all information into a unified percept. An approach to start investigating this property is to show that perception is better and faster when multimodal stimuli are used as compared to unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task where visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus and onset of arm movement, success and error percentages, as well as the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms, as compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage through the redundant signal effect, which decreases perceptual ambiguity, increases speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings derived from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and specific type proportions are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration possibly through a decrease of inhibition. Nevertheless, the neural processing in PM, a polysensory association cortical area, that leads to faster motor responses remains unclear.
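A standard follow-up to such redundant-signal RT gains is Miller's race-model inequality, which asks whether the audiovisual RT distribution is faster than any race between independent unisensory channels could produce. The sketch below uses simulated RTs, not the monkeys' data.

```python
# Sketch: Miller's race-model inequality, F_AV(t) <= F_A(t) + F_V(t).
# A positive violation suggests coactivation rather than a race. Simulated RTs.
import numpy as np

rng = np.random.default_rng(0)
rt_a = rng.normal(340, 40, 500)          # auditory-only RTs (ms)
rt_v = rng.normal(360, 45, 500)          # visual-only RTs (ms)
rt_av = rng.normal(315, 35, 500)         # audiovisual RTs (ms)

t_grid = np.linspace(200, 500, 31)
cdf = lambda rt: (rt[:, None] <= t_grid).mean(axis=0)   # empirical CDF on grid

bound = np.minimum(cdf(rt_a) + cdf(rt_v), 1.0)
violation = cdf(rt_av) - bound
print("maximum violation:", round(violation.max(), 3))
```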
Affiliation(s)
- Florian Lanz
- Domain of Physiology, Department of Medicine, Fribourg Cognition Center, University of Fribourg, Fribourg, Switzerland
41
A neural network model can explain ventriloquism aftereffect and its generalization across sound frequencies. Biomed Res Int 2013; 2013:475427. [PMID: 24228250 PMCID: PMC3818813 DOI: 10.1155/2013/475427] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2013] [Revised: 08/28/2013] [Accepted: 08/28/2013] [Indexed: 11/17/2022]
Abstract
Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report opposing results as to aftereffect generalization across sound frequencies varying from aftereffect being confined to the frequency used during adaptation to aftereffect generalizing across some octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect and can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding both for the spatial and spectral features of the auditory stimuli and mimicking properties of biological auditory neurons. The model suggests that different extent of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus that induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
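The abstract does not give the learning rule, so the following is only one common form a Hebbian update takes in such models (notation mine): the weight w_ij from presynaptic unit j (activity x_j) to postsynaptic unit i (activity y_i) grows with correlated activity and is kept bounded by a decay term,

```latex
\Delta w_{ij} \;=\; \eta \, y_i \left( x_j - w_{ij} \right)
```

Under a rule of this kind, repeated pairing of spatially disparate auditory and visual inputs gradually pulls the auditory spatial representation toward the visually driven response, and the shift persists after the visual stimulus is removed, which is the aftereffect the model reproduces.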
42
Hu S, Tseng YC, Winkler AD, Li CSR. Neural bases of individual variation in decision time. Hum Brain Mapp 2013; 35:2531-42. [PMID: 24027122 DOI: 10.1002/hbm.22347] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2012] [Revised: 04/25/2013] [Accepted: 05/28/2013] [Indexed: 11/12/2022] Open
Abstract
People make decisions by evaluating existing evidence against a threshold or level of confidence. Individuals vary widely in response times even when they perform a simple task in the laboratory. We examine the neural bases of this individual variation by combining computational modeling and brain imaging of 64 healthy adults performing a stop signal task. Behavioral performance was modeled by an accumulator model that describes the process of information growth to reach a threshold to respond. In this model, go trial reaction time (goRT) is jointly determined by the information growth rate, threshold, and movement time (MT). In a linear regression of activations in successful go and all stop (Go+Stop) trials against goRT across participants, the insula, supplementary motor area (SMA), pre-SMA, thalamus including the subthalamic nucleus (STN), and caudate head respond to increasing goRT. Among these areas, the insula, SMA, and thalamus including the STN respond to a slower growth rate, the caudate head responds to an elevated threshold, and the pre-SMA responds to a longer MT. In the regression of Go+Stop trials against the stop signal reaction time (SSRT), the pre-SMA shows a negative correlation with SSRT. These results characterize the component processes of decision making and elucidate the neural bases of a critical aspect of inter-subject variation in human behavior. These findings also suggest that the pre-SMA may play a broader role in response selection and cognitive control rather than simply response inhibition in the stop signal task.
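The decomposition used here is easiest to see as a formula. In the accumulator framework described above, evidence grows at rate r toward threshold θ and a movement time is appended (the symbols are a paraphrase, not the paper's exact notation):

```latex
\text{goRT} \;=\; \underbrace{\frac{\theta}{r}}_{\text{decision time}} \;+\; \text{MT}
```

A slower growth rate (insula, SMA, thalamus/STN), an elevated threshold (caudate head), or a longer MT (pre-SMA) therefore each lengthen goRT through a distinct term.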
Affiliation(s)
- Sien Hu
- Department of Psychiatry, Yale University, New Haven, Connecticut
43
Budd TW, Timora JR. Steady state responses to temporally congruent and incongruent auditory and vibrotactile amplitude modulated stimulation. Int J Psychophysiol 2013; 89:419-32. [DOI: 10.1016/j.ijpsycho.2013.06.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2013] [Revised: 05/26/2013] [Accepted: 06/04/2013] [Indexed: 11/16/2022]
44
Differences between primary auditory cortex and auditory belt related to encoding and choice for AM sounds. J Neurosci 2013; 33:8378-95. [PMID: 23658177 DOI: 10.1523/jneurosci.2672-12.2013] [Citation(s) in RCA: 51] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
We recorded from middle-lateral (ML) and primary (A1) auditory cortex while macaques discriminated amplitude-modulated (AM) noise from unmodulated noise. Compared with A1, ML had a higher proportion of neurons that encoded increasing AM depth by decreasing their firing rates ("decreasing" neurons), particularly with responses that were not synchronized to the modulation. Choice probability (CP) analysis revealed that A1 and ML activity were different during the first half of the test stimulus. In A1, significant CP began before the test stimulus, remained relatively constant (or increased slightly) during the stimulus, and increased greatly within 200 ms of lever release. Neurons in ML behaved similarly, except that significant CP disappeared during the first half of the stimulus and reappeared during the second half and prerelease periods. CP differences between A1 and ML depend on neural response type. In ML (but not A1), when activity was lower during the first half of the stimulus in nonsynchronized, decreasing neurons, the monkey was more likely to report AM. Neurons that both increased firing rate with increasing modulation depth ("increasing" neurons) and synchronized their responses to AM had similar choice-related activity dynamics in ML and A1. These results suggest that, when ascending the auditory system, there is a transformation in coding AM from primarily synchronized increasing responses in A1 to nonsynchronized and dual (increasing/decreasing) coding in ML. This sensory transformation is accompanied by changes in the timing of activity related to choice, suggesting functional differences between A1 and ML related to attention and/or behavior.
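Choice probability, the statistic driving these comparisons, is the area under the ROC curve separating a neuron's responses on trials grouped by the animal's report; it is equivalently computable from the Mann-Whitney U statistic. A sketch with invented firing rates:

```python
# Sketch: choice probability (CP) as ROC area between firing-rate
# distributions conditioned on the animal's choice. Rates are invented.
import numpy as np
from scipy.stats import mannwhitneyu

rates_report_am = np.array([22.0, 18.0, 25.0, 20.0, 24.0, 19.0])   # spikes/s
rates_report_noam = np.array([15.0, 17.0, 14.0, 16.0, 18.0, 13.0])

u_stat, _ = mannwhitneyu(rates_report_am, rates_report_noam,
                         alternative="two-sided")
cp = u_stat / (rates_report_am.size * rates_report_noam.size)
print(f"CP = {cp:.2f}")   # 0.5 = no choice-related modulation
```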
45
Bernstein LE, Auer ET, Eberhardt SP, Jiang J. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training. Front Neurosci 2013; 7:34. [PMID: 23515520 PMCID: PMC3600826 DOI: 10.3389/fnins.2013.00034] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2012] [Accepted: 02/28/2013] [Indexed: 11/13/2022] Open
Abstract
Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called “reverse hierarchy theory” of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
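Since vocoded speech is central to these experiments, a minimal noise vocoder helps make the degradation concrete: band-pass analysis, envelope extraction, and envelope-modulated noise carriers. Channel count, filter orders, and cutoffs below are assumptions, not the stimulus parameters of the experiments.

```python
# Sketch: a minimal noise vocoder -- band-pass analysis, envelope extraction,
# and envelope-modulated noise carriers. All parameters are assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(speech, fs, n_channels=4, f_lo=100.0, f_hi=4000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        envelope = np.abs(hilbert(band))               # per-band envelope
        carrier = sosfiltfilt(sos, np.random.randn(speech.size))
        out += envelope * carrier                      # noise carries the envelope
    return out

fs = 16000
speech = np.random.randn(fs)      # stand-in for a 1 s recorded word
degraded = vocode(speech, fs)
```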
Affiliation(s)
- Lynne E Bernstein
- Communication Neuroscience Laboratory, Department of Speech and Hearing Science, George Washington University, Washington, DC, USA
46
Pain and analgesia: the value of salience circuits. Prog Neurobiol 2013; 104:93-105. [PMID: 23499729 DOI: 10.1016/j.pneurobio.2013.02.003] [Citation(s) in RCA: 147] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2012] [Revised: 02/04/2013] [Accepted: 02/06/2013] [Indexed: 02/07/2023]
Abstract
Evaluating external and internal stimuli is critical to survival. Potentially tissue-damaging conditions generate sensory experiences that the organism must respond to in an appropriate, adaptive manner (e.g., withdrawal from the noxious stimulus, if possible, or seeking relief from pain and discomfort). The importance we assign to a signal generated by a noxious state, its salience, reflects our belief about how likely the underlying situation is to affect our chances of survival. Importantly, it has been hypothesized that aberrant functioning of the brain circuits that assign salience values to stimuli may contribute to chronic pain. We describe examples of this phenomenon, including 'feeling pain' in the absence of a painful stimulus, reporting minimal pain in the setting of major trauma, having an 'analgesic' response in the absence of an active treatment, and reporting no pain relief after administration of a potent analgesic medication. These examples may provide critical insights into the role that salience circuits play in numerous conditions characterized by persistent pain. Collectively, a refined understanding of abnormal activity or connectivity of elements within the salience network may allow us to more effectively target interventions to relevant components of this network in patients with chronic pain.
47
Abstract
There is a strong interaction between multisensory processing and the neuroplasticity of the human brain. On the one hand, recent research demonstrates that experience and training in various domains modify how information from the different senses is integrated; on the other hand, multisensory training paradigms seem to be particularly effective in driving functional and structural plasticity. Multisensory training affects early sensory processing within separate sensory domains, as well as the functional and structural connectivity between uni- and multisensory brain regions. In this review, we discuss the evidence for interactions of multisensory processes and brain plasticity and give an outlook on promising clinical applications and open questions.
48
King AJ, Walker KMM. Integrating information from different senses in the auditory cortex. Biol Cybern 2012; 106:617-25. [PMID: 22798035 PMCID: PMC4340563 DOI: 10.1007/s00422-012-0502-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/24/2012] [Accepted: 06/21/2012] [Indexed: 05/09/2023]
Abstract
Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK.
49
Humes LE, Dubno JR, Gordon-Salant S, Lister JJ, Cacace AT, Cruickshanks KJ, Gates GA, Wilson RH, Wingfield A. Central presbycusis: a review and evaluation of the evidence. J Am Acad Audiol 2012; 23:635-66. [PMID: 22967738 DOI: 10.3766/jaaa.23.8.5] [Citation(s) in RCA: 239] [Impact Index Per Article: 19.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND The authors reviewed the evidence regarding the existence of age-related declines in central auditory processes and the consequences of any such declines for everyday communication. PURPOSE This report summarizes the review process and presents its findings. DATA COLLECTION AND ANALYSIS The authors reviewed 165 articles germane to central presbycusis. Of the 165 articles, 132 articles with a focus on human behavioral measures for either speech or nonspeech stimuli were selected for further analysis. RESULTS For 76 smaller-scale studies of speech understanding in older adults reviewed, the following findings emerged: (1) the three most commonly studied behavioral measures were speech in competition, temporally distorted speech, and binaural speech perception (especially dichotic listening); (2) for speech in competition and temporally degraded speech, hearing loss proved to have a significant negative effect on performance in most of the laboratory studies; (3) significant negative effects of age, unconfounded by hearing loss, were observed in most of the studies of speech in competing speech, time-compressed speech, and binaural speech perception; and (4) the influence of cognitive processing on speech understanding has been examined much less frequently, but when included, significant positive associations with speech understanding were observed. For 36 smaller-scale studies of the perception of nonspeech stimuli by older adults reviewed, the following findings emerged: (1) the three most frequently studied behavioral measures were gap detection, temporal discrimination, and temporal-order discrimination or identification; (2) hearing loss was seldom a significant factor; and (3) negative effects of age were almost always observed. For 18 studies reviewed that made use of test batteries and medium-to-large sample sizes, the following findings emerged: (1) all studies included speech-based measures of auditory processing; (2) 4 of the 18 studies included nonspeech stimuli; (3) for the speech-based measures, monaural speech in a competing-speech background, dichotic speech, and monaural time-compressed speech were investigated most frequently; (4) the most frequently used tests were the Synthetic Sentence Identification (SSI) test with Ipsilateral Competing Message (ICM), the Dichotic Sentence Identification (DSI) test, and time-compressed speech; (5) many of these studies using speech-based measures reported significant effects of age, but most of these studies were confounded by declines in hearing, cognition, or both; (6) for nonspeech auditory-processing measures, the focus was on measures of temporal processing in all four studies; (7) effects of cognition on nonspeech measures of auditory processing have been studied less frequently, with mixed results, whereas the effects of hearing loss on performance were minimal due to judicious selection of stimuli; and (8) there is a paucity of observational studies using test batteries and longitudinal designs. CONCLUSIONS Based on this review of the scientific literature, there is insufficient evidence to confirm the existence of central presbycusis as an isolated entity. On the other hand, recent evidence has been accumulating in support of the existence of central presbycusis as a multifactorial condition that involves age- and/or disease-related changes in the auditory system and in the brain. Moreover, there is a clear need for additional research in this area.
Affiliation(s)
- Larry E Humes
- Department of Speech and Hearing Sciences, Indiana University, Bloomington, IN, USA.
50
Bulkin DA, Groh JM. Distribution of visual and saccade related information in the monkey inferior colliculus. Front Neural Circuits 2012; 6:61. [PMID: 22973196 PMCID: PMC3433683 DOI: 10.3389/fncir.2012.00061] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2012] [Accepted: 08/18/2012] [Indexed: 11/29/2022] Open
Abstract
The inferior colliculus (IC) is an essential stop early in the ascending auditory pathway. Though the IC is normally thought of as a predominantly auditory structure, recent work has uncovered a variety of non-auditory influences on firing rate in the IC. Here, we map the location within the IC of neurons that respond to the onset of a fixation-guiding visual stimulus. Visual/visuomotor-associated activity was found throughout the IC (overall, at 84 of 199 sites tested, or 42%), but with far lower prevalence and strength along recording penetrations passing through the tonotopically organized region of the IC, putatively the central nucleus (11 of 42 sites tested, or 26%). These results suggest that visual information has only a weak effect on early auditory processing in core regions, but more strongly targets the modulatory shell regions of the IC.
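The prevalence contrast quoted above (11 of 42 sites in the tonotopic region vs. the remaining 73 of 157 sites elsewhere, given 84 of 199 overall) can be illustrated with a 2x2 exact test. The grouping of counts below is our reading of the numbers in the abstract, not the authors' analysis.

```python
# Quick 2x2 exact test of the prevalence difference quoted above.
# Illustrative only; this grouping is our assumption, not the authors' test.
from scipy.stats import fisher_exact

table = [[11, 42 - 11],     # tonotopic region: responsive, unresponsive
         [73, 157 - 73]]    # rest of the IC:   responsive, unresponsive
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```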
Affiliation(s)
- David A Bulkin
- Department of Psychology, Cornell University, Ithaca, NY, USA