1
|
Mackey CA, O'Connell MN, Hackett TA, Schroeder CE, Kajikawa Y. Laminar organization of visual responses in core and parabelt auditory cortex. Cereb Cortex 2024; 34:bhae373. [PMID: 39300609] [DOI: 10.1093/cercor/bhae373]
Abstract
Audiovisual (AV) interaction has been shown in many studies of auditory cortex. However, the underlying processes and circuits are unclear because few studies have used methods that delineate the timing and laminar distribution of net excitatory and inhibitory processes within areas, much less across cortical levels. This study examined laminar profiles of neuronal activity in auditory core (AC) and parabelt (PB) cortices recorded from macaques during active discrimination of conspecific faces and vocalizations. We found modulation of multi-unit activity (MUA) in response to isolated visual stimulation, characterized by a brief deep MUA spike, putatively in white matter, followed by mid-layer MUA suppression in core auditory cortex; the later suppressive event had clear current source density concomitants, while the earlier MUA spike did not. We observed a similar facilitation-suppression sequence in the PB, with later onset latency. In combined AV stimulation, there was moderate reduction of responses to sound during the visual-evoked MUA suppression interval in both AC and PB. These data suggest a common sequence of afferent spikes, followed by synaptic inhibition; however, differences in timing and laminar location may reflect distinct visual projections to AC and PB.
Affiliation(s)
- Chase A Mackey
  - Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
- Monica N O'Connell
  - Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
  - Department of Psychiatry, New York University School of Medicine, 145 E 32nd St., New York, NY 10016, United States
- Troy A Hackett
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1211 Medical Center Dr., Nashville, TN 37212, United States
- Charles E Schroeder
  - Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
  - Departments of Psychiatry and Neurology, Columbia University College of Physicians, 630 W 168th St, New York, NY 10032, United States
- Yoshinao Kajikawa
  - Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Rd, Orangeburg, NY 10962, United States
  - Department of Psychiatry, New York University School of Medicine, 145 E 32nd St., New York, NY 10016, United States
|
2
|
Elmer S, Schmitt R, Giroud N, Meyer M. The neuroanatomical hallmarks of chronic tinnitus in comorbidity with pure-tone hearing loss. Brain Struct Funct 2023; 228:1511-1534. [PMID: 37349539] [PMCID: PMC10335971] [DOI: 10.1007/s00429-023-02669-0]
Abstract
Tinnitus is one of the main hearing impairments often associated with pure-tone hearing loss, and typically manifested in the perception of phantom sounds. Nevertheless, tinnitus has traditionally been studied in isolation without necessarily considering phantom sound perception and hearing loss as part of the same syndrome. Hence, in the present neuroanatomical study, we attempted to pave the way toward a better understanding of the tinnitus syndrome, and compared two groups of almost perfectly matched individuals with (TIHL) and without (NTHL) tinnitus, both characterized by pure-tone hearing loss. The two groups were homogenized in terms of sample size, age, gender, handedness, education, and hearing loss. Furthermore, since the assessment of pure-tone hearing thresholds alone is not sufficient to describe the full spectrum of hearing abilities, the two groups were also harmonized for supra-threshold hearing estimates, which were collected using temporal compression, frequency selectivity, and speech-in-noise tasks. Regions-of-interest (ROI) analyses based on key brain structures identified in previous neuroimaging studies showed that the TIHL group exhibited increased cortical volume (CV) and surface area (CSA) of the right supramarginal gyrus and posterior planum temporale (PT) as well as CSA of the left middle-anterior part of the superior temporal sulcus (STS). The TIHL group also demonstrated larger volumes of the left amygdala and of the left head and body of the hippocampus. Notably, vertex-wise multiple linear regression analyses additionally brought to light that CSA of a specific cluster, which was located in the left middle-anterior part of the STS and overlapped with the one found to be significant in the between-group analyses, was positively associated with tinnitus distress level. Furthermore, distress also positively correlated with CSA of gray matter vertices in the right dorsal prefrontal cortex and the right posterior STS, whereas tinnitus duration was positively associated with CSA and CV of the right angular gyrus (AG) and posterior part of the STS. These results provide new insights into the critical gray matter architecture of the tinnitus syndrome matrix responsible for the emergence, maintenance and distress of auditory phantom sensations.
Affiliation(s)
- Stefan Elmer
  - Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland
  - Competence Center Language & Medicine, University of Zurich, Zurich, Switzerland
- Raffael Schmitt
  - Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland
- Nathalie Giroud
  - Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland
  - Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland
  - Competence Center Language & Medicine, University of Zurich, Zurich, Switzerland
- Martin Meyer
  - Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
  - Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
  - Cognitive Psychology Unit, Alpen-Adria University, Klagenfurt, Austria
|
3
|
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. [PMID: 33718874] [PMCID: PMC7941256] [DOI: 10.1093/texcom/tgab002]
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
Affiliation(s)
- Matt Csonka
  - Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen
  - Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster
  - Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
  - Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum
  - Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis
  - Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
|
4
|
Retter TL, Webster MA, Jiang F. Directional Visual Motion Is Represented in the Auditory and Association Cortices of Early Deaf Individuals. J Cogn Neurosci 2019; 31:1126-1140. [PMID: 30726181] [PMCID: PMC6599583] [DOI: 10.1162/jocn_a_01378]
Abstract
Individuals who are deaf since early life may show enhanced performance at some visual tasks, including discrimination of directional motion. The neural substrates of such behavioral enhancements remain difficult to identify in humans, although neural plasticity has been shown for early deaf people in the auditory and association cortices, including the primary auditory cortex (PAC) and STS region, respectively. Here, we investigated whether neural responses in auditory and association cortices of early deaf individuals are reorganized to be sensitive to directional visual motion. To capture direction-selective responses, we recorded fMRI responses frequency-tagged to the 0.1-Hz presentation of central directional (100% coherent random dot) motion persisting for 2 sec contrasted with nondirectional (0% coherent) motion for 8 sec. We found direction-selective responses in the STS region in both deaf and hearing participants, but the extent of activation in the right STS region was 5.5 times larger for deaf participants. Minimal but significant direction-selective responses were also found in the PAC of deaf participants, both at the group level and in five of six individuals. In response to stimuli presented separately in the right and left visual fields, the relative activation across the right and left hemispheres was similar in both the PAC and STS region of deaf participants. Notably, the enhanced right-hemisphere activation could support the right visual field advantage reported previously in behavioral studies. Taken together, these results show that the reorganized auditory cortices of early deaf individuals are sensitive to directional motion. Speculatively, these results suggest that auditory and association regions can be remapped to support enhanced visual performance.
|
5
|
Aytemür A, Almeida N, Lee KH. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS). Neuropsychologia 2017; 96:122-128. [PMID: 28089696] [DOI: 10.1016/j.neuropsychologia.2017.01.012]
Abstract
Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the different nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals.
Affiliation(s)
- Ali Aytemür
  - Departments of Neuroscience and Psychology, University of Sheffield, Sheffield, UK
- Nathalia Almeida
  - Departments of Neuroscience and Psychology, University of Sheffield, Sheffield, UK
- Kwang-Hyuk Lee
  - Departments of Neuroscience and Psychology, University of Sheffield, Sheffield, UK
|
6
|
Brunel L, Carvalho PF, Goldstone RL. It does belong together: cross-modal correspondences influence cross-modal integration during perceptual learning. Front Psychol 2015; 6:358. [PMID: 25914653] [PMCID: PMC4390988] [DOI: 10.3389/fpsyg.2015.00358]
Abstract
Experiencing a stimulus in one sensory modality is often associated with an experience in another sensory modality. For instance, seeing a lemon might produce a sensation of sourness. This might indicate some kind of cross-modal correspondence between vision and gustation. The aim of the current study was to explore whether such cross-modal correspondences influence cross-modal integration during perceptual learning. To that end, we conducted two experiments. Using a speeded classification task, Experiment 1 established a cross-modal correspondence between visual lightness and the frequency of an auditory tone. Using a short-term priming procedure, Experiment 2 showed that manipulation of such cross-modal correspondences led to the creation of a cross-modal unit regardless of the nature of the correspondence (i.e., congruent, Experiment 2a, or incongruent, Experiment 2b). However, a comparison of priming effect sizes suggested that cross-modal correspondences modulate cross-modal integration during learning, leading to new learned units that have different stability over time. We discuss the implications of our results for the relation between cross-modal correspondence and perceptual learning in the context of a Bayesian explanation of cross-modal correspondences.
Affiliation(s)
- Lionel Brunel
  - Laboratoire Epsylon, Department of Psychology, Université Paul-Valéry Montpellier III, Montpellier, France
- Paulo F Carvalho
  - Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Robert L Goldstone
  - Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
|
7
|
Yokum S, Gearhardt AN, Harris JL, Brownell KD, Stice E. Individual differences in striatum activity to food commercials predict weight gain in adolescents. Obesity (Silver Spring) 2014; 22:2544-51. [PMID: 25155745] [PMCID: PMC4236252] [DOI: 10.1002/oby.20882]
Abstract
OBJECTIVE: Adolescents view thousands of food commercials annually, but little is known about how individual differences in neural response to food commercials relate to weight gain. To add to our understanding of individual risk factors for unhealthy weight gain and environmental contributions to the obesity epidemic, we tested the associations between reward region (striatum and orbitofrontal cortex [OFC]) responsivity to food commercials and future change in body mass index (BMI).
METHODS: Adolescents (N = 30) underwent a scan session at baseline while watching a television show edited to include 20 food commercials and 20 nonfood commercials. BMI was measured at baseline and 1-year follow-up.
RESULTS: Activation in the striatum, but not OFC, in response to food commercials relative to nonfood commercials and in response to food commercials relative to the television show was positively associated with change in BMI over 1-year follow-up. Baseline BMI did not moderate these effects.
CONCLUSIONS: The results suggest that there are individual differences in neural susceptibility to food advertising. These findings highlight a potential mechanism for the impact of food marketing on adolescent obesity.
Affiliation(s)
- Sonja Yokum
  - Oregon Research Institute, Eugene, Oregon, USA
|
8
|
Rey AE, Riou B, Cherdieu M, Versace R. When memory components act as perceptual components: Facilitatory and interference effects in a visual categorisation task. J Cogn Psychol 2013. [DOI: 10.1080/20445911.2013.865629]
|
9
|
Brunel L, Goldstone RL, Vallet G, Riou B, Versace R. When Seeing a Dog Activates the Bark. Exp Psychol 2013; 60:100-12. [DOI: 10.1027/1618-3169/a000176]
Abstract
The goal of the present study was to find evidence for a multisensory generalization effect (i.e., generalization from one sensory modality to another sensory modality). The authors used an innovative paradigm (adapted from Brunel, Labeye, Lesourd, & Versace, 2009) involving three phases: a learning phase, consisting of the categorization of geometrical shapes, which manipulated the rules of association between shapes and a sound feature, and two test phases. The first of these was designed to examine the priming effect of the geometrical shapes seen in the learning phase on target tones (i.e., priming task), while the aim of the second was to examine the probability of recognizing the previously learned geometrical shapes (i.e., recognition task). When a shape category was mostly presented with a sound during learning, all of the primes (including those not presented with a sound in the learning phase) enhanced target processing compared to a condition in which the primes were mostly seen without a sound during learning. A pattern of results consistent with this initial finding was also observed during recognition, with the participants being unable to pick out the shape seen without a sound during the learning phase. Experiment 1 revealed a multisensory generalization effect across the members of a category when the objects belonging to the same category share the same value on the shape dimension. However, a distinctiveness effect was observed when a salient feature distinguished the objects within the category (Experiment 2a vs. 2b).
Affiliation(s)
- Lionel Brunel
  - Laboratoire Epsylon, Université Paul-Valery, Montpellier 3, France
- Robert L. Goldstone
  - Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
- Guillaume Vallet
  - Laboratoire d’Etude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, France
- Benoit Riou
  - Laboratoire d’Etude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, France
- Rémy Versace
  - Laboratoire d’Etude des Mécanismes Cognitifs (EMC), Université Lumière Lyon 2, France
|
10
|
Encoding and retrieval of artificial visuoauditory memory traces in the auditory cortex requires the entorhinal cortex. J Neurosci 2013; 33:9963-74. [PMID: 23761892] [DOI: 10.1523/jneurosci.4078-12.2013]
Abstract
Damage to the medial temporal lobe impairs the encoding of new memories and the retrieval of memories acquired immediately before the damage in humans. In this study, we demonstrated that artificial visuoauditory memory traces can be established in the rat auditory cortex and that their encoding and retrieval depend on the entorhinal cortex of the medial temporal lobe. We trained rats to associate a visual stimulus with electrical stimulation of the auditory cortex using a classical conditioning protocol. After conditioning, we examined the associative memory traces electrophysiologically (i.e., visual stimulus-evoked responses of auditory cortical neurons) and behaviorally (i.e., visual stimulus-induced freezing and visual stimulus-guided reward retrieval). The establishment of a visuoauditory memory trace in the auditory cortex, which was detectable by electrophysiological recordings, was achieved over 20-30 conditioning trials and was blocked by unilateral, temporary inactivation of the entorhinal cortex. Retrieval of a previously established visuoauditory memory was also affected by unilateral entorhinal cortex inactivation. These findings suggest that the entorhinal cortex is necessary for the encoding and involved in the retrieval of artificial visuoauditory memory in the auditory cortex, at least during the early stages of memory consolidation.
|
11
|
Gearhardt AN, Yokum S, Stice E, Harris JL, Brownell KD. Relation of obesity to neural activation in response to food commercials. Soc Cogn Affect Neurosci 2013; 9:932-8. [PMID: 23576811] [DOI: 10.1093/scan/nst059]
Abstract
Adolescents view thousands of food commercials annually, but the neural response to food advertising and its association with obesity is largely unknown. This study is the first to examine how neural response to food commercials differs from other stimuli (e.g. non-food commercials and television show) and to explore how this response may differ by weight status. The blood oxygen level-dependent functional magnetic resonance imaging activation was measured in 30 adolescents ranging from lean to obese in response to food and non-food commercials embedded in a television show. Adolescents exhibited greater activation in regions implicated in visual processing (e.g. occipital gyrus), attention (e.g. parietal lobes), cognition (e.g. temporal gyrus and posterior cerebellar lobe), movement (e.g. anterior cerebellar cortex), somatosensory response (e.g. postcentral gyrus) and reward [e.g. orbitofrontal cortex and anterior cingulate cortex (ACC)] during food commercials. Obese participants exhibited less activation during food relative to non-food commercials in neural regions implicated in visual processing (e.g. cuneus), attention (e.g. posterior cerebellar lobe), reward (e.g. ventromedial prefrontal cortex and ACC) and salience detection (e.g. precuneus). Obese participants did exhibit greater activation in a region implicated in semantic control (e.g. medial temporal gyrus). These findings may inform current policy debates regarding the impact of food advertising to minors.
Affiliation(s)
- Ashley N Gearhardt
  - University of Michigan, 2268 East Hall, 530 Church Street, Ann Arbor, MI 48109; Oregon Research Institute, 1776 Millrace Dr, Eugene, OR 97403; and Yale University, 309 Edwards Street, New Haven, CT 06511
- Sonja Yokum
  - University of Michigan, 2268 East Hall, 530 Church Street, Ann Arbor, MI 48109; Oregon Research Institute, 1776 Millrace Dr, Eugene, OR 97403; and Yale University, 309 Edwards Street, New Haven, CT 06511
- Eric Stice
  - University of Michigan, 2268 East Hall, 530 Church Street, Ann Arbor, MI 48109; Oregon Research Institute, 1776 Millrace Dr, Eugene, OR 97403; and Yale University, 309 Edwards Street, New Haven, CT 06511
- Jennifer L Harris
  - University of Michigan, 2268 East Hall, 530 Church Street, Ann Arbor, MI 48109; Oregon Research Institute, 1776 Millrace Dr, Eugene, OR 97403; and Yale University, 309 Edwards Street, New Haven, CT 06511
- Kelly D Brownell
  - University of Michigan, 2268 East Hall, 530 Church Street, Ann Arbor, MI 48109; Oregon Research Institute, 1776 Millrace Dr, Eugene, OR 97403; and Yale University, 309 Edwards Street, New Haven, CT 06511
|
12
|
Sugano Y, Keetels M, Vroomen J. The Build-Up and Transfer of Sensorimotor Temporal Recalibration Measured via a Synchronization Task. Front Psychol 2012; 3:246. [PMID: 22807921] [PMCID: PMC3395050] [DOI: 10.3389/fpsyg.2012.00246]
Abstract
The timing relation between a motor action and the sensory consequences of that action can be adapted by exposing participants to artificially delayed feedback (temporal recalibration). Here, we demonstrate that a sensorimotor synchronization task (i.e., tapping the index finger in synchrony with a pacing signal) can be used as a measure of temporal recalibration. Participants were first exposed to a constant delay (~150 ms) between a voluntary action (a finger tap) and an external feedback stimulus of that action (a visual flash or auditory tone). A subjective "no-delay" condition (~50 ms) served as baseline. After a short exposure phase to delayed feedback participants performed the tapping task in which they tapped their finger in synchrony with a flash or tone. Temporal recalibration manifested itself in that taps were given ~20 ms earlier after exposure to 150 ms delays than in the case of 50 ms delays. This effect quickly built up (within 60 taps) and was bigger for auditory than visual adapters. In Experiment 2, we tested whether temporal recalibration would transfer across modalities by switching the modality of the adapter and pacing signal. Temporal recalibration transferred from visual adapter to auditory test, but not from auditory adapter to visual test. This asymmetric transfer suggests that sensory-specific effects are at play.
|
13
|
Cross-modal recruitment of primary visual cortex by auditory stimuli in the nonhuman primate brain: a molecular mapping study. Neural Plast 2012; 2012:197264. [PMID: 22792489] [PMCID: PMC3388421] [DOI: 10.1155/2012/197264]
Abstract
Recent studies suggest that exposure to only one component of audiovisual events can lead to cross-modal cortical activation. However, it is not certain whether such cross-modal recruitment can occur in the absence of explicit conditioning, semantic factors, or long-term associations. A recent study demonstrated that cross-modal cortical recruitment can occur even after a brief exposure to bimodal stimuli without semantic association. In addition, the authors showed that the primary visual cortex is under such cross-modal influence. In the present study, we used molecular activity mapping of the immediate early gene zif268. We found that animals, which had previously been exposed to a combination of auditory and visual stimuli, showed an increased number of active neurons in the primary visual cortex when presented with sounds alone. As previously implied, this cross-modal activation appears to be the result of implicit associations of the two stimuli, likely driven by their spatiotemporal characteristics; it was observed after a relatively short period of exposure (~45 min) and lasted for a relatively long period after the initial exposure (~1 day). These results suggest that the previously reported findings may be directly rooted in the increased activity of the neurons occupying the primary visual cortex.
|
14
|
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012; 62:816-47. [PMID: 22584224] [PMCID: PMC3398395] [DOI: 10.1016/j.neuroimage.2012.04.062]
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again, leading to some consistent and indisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.
15
Macaluso E. Spatial Constraints in Multisensory Attention. Front Neurosci 2011. [DOI: 10.1201/9781439812174-32] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
16
Macaluso E. Spatial Constraints in Multisensory Attention. Front Neurosci 2011. [DOI: 10.1201/b11092-32] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
17
Bertini C, Leo F, Avenanti A, Làdavas E. Independent mechanisms for ventriloquism and multisensory integration as revealed by theta-burst stimulation. Eur J Neurosci 2010; 31:1791-9. [PMID: 20584183 DOI: 10.1111/j.1460-9568.2010.07200.x] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The visual and auditory systems often concur to create a unified perceptual experience and to determine the localization of objects in the external world. Co-occurring auditory and visual stimuli in spatial coincidence are known to enhance performance of auditory localization due to the integration of stimuli from different sensory channels (i.e. multisensory integration). However, auditory localization of audiovisual stimuli presented at spatial disparity might also induce a mislocalization of the sound towards the visual stimulus (i.e. ventriloquism effect). Using repetitive transcranial magnetic stimulation we tested the role of right temporoparietal (rTPC), right occipital (rOC) and right posterior parietal (rPPC) cortex in an auditory localization task in which indices of ventriloquism and multisensory integration were computed. We found that suppression of rTPC excitability by means of continuous theta-burst stimulation (cTBS) reduced multisensory integration. No similar effect was found for cTBS over rOC. Moreover, inhibition of rOC, but not of rTPC, suppressed the visual bias in the contralateral hemifield. In contrast, cTBS over rPPC did not produce any modulation of ventriloquism or integrative effects. The double dissociation found in the present study suggests that ventriloquism and audiovisual multisensory integration are functionally independent phenomena and may be underpinned by partially different neural circuits.
Affiliation(s)
- Caterina Bertini
- Dipartimento di Psicologia, Università di Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy.
18
Brunel L, Lesourd M, Labeye E, Versace R. The sensory nature of knowledge: Sensory priming effects in semantic categorization. Q J Exp Psychol (Hove) 2010; 63:955-64. [DOI: 10.1080/17470210903134369] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
The aim of the present study was to show the perceptual nature of conceptual knowledge by using a priming paradigm that excluded an interpretation exclusively in terms of amodal representation. This paradigm was divided into two phases. The first phase consisted of learning a systematic association between a geometrical shape and a white noise. The second phase consisted of a short-term priming paradigm in which a primed shape (either associated or not with a sound in the first phase) preceded a picture of an object, which the participants had to categorize as representing a large or a small object. The objects were chosen in such a way that their principal function either was associated with the production of noise (“noisy” target) or was not typically associated with the production of noise (“silent” target). The stimulus onset asynchrony (SOA) between the prime and the target was 100 ms or 500 ms. The results revealed an interference effect with a 100-ms SOA and a facilitatory effect with a 500-ms SOA for the noisy targets only. We interpreted the interference effect obtained at the 100-ms SOA as the result of an overlap between the components reactivated by the sound prime and those activated by the processing of the noisy target. At an SOA of 500 ms, there was no temporal overlap. The observed facilitatory effect was explained by the preactivation of auditory areas by the sound prime, thus facilitating the categorization of the noisy targets only.
Affiliation(s)
- Lionel Brunel
- Laboratoire d'Etude des Mécanismes Cognitifs, Université Lumière Lyon, Lyon, France
- Mathieu Lesourd
- Laboratoire d'Etude des Mécanismes Cognitifs, Université Lumière Lyon, Lyon, France
- Elodie Labeye
- Laboratoire d'Etude des Mécanismes Cognitifs, Université Lumière Lyon, Lyon, France
- Rémy Versace
- Laboratoire d'Etude des Mécanismes Cognitifs, Université Lumière Lyon, Lyon, France
19
Grossman ED, Jardine NL, Pyles JA. fMR-Adaptation Reveals Invariant Coding of Biological Motion on the Human STS. Front Hum Neurosci 2010; 4:15. [PMID: 20431723 PMCID: PMC2861476 DOI: 10.3389/neuro.09.015.2010] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2009] [Accepted: 02/04/2010] [Indexed: 11/28/2022] Open
Abstract
Neuroimaging studies of biological motion perception have found a network of coordinated brain areas, the hub of which appears to be the human posterior superior temporal sulcus (STSp). Understanding the functional role of the STSp requires characterizing the response tuning of neuronal populations underlying the BOLD response. Thus far our understanding of these response properties comes from single-unit studies of the monkey anterior STS, which has individual neurons tuned to body actions, with a small population invariant to changes in viewpoint, position and size of the action being viewed. To test for homologous functional properties in the human STS, we used fMR-adaptation to investigate action, position and size invariance. Observers viewed pairs of point-light animations depicting human actions that were either identical, differed in the action depicted, locally scrambled, or differed in the viewing perspective, the position or the size. While extrastriate hMT+ had neural signals indicative of viewpoint specificity, the human STS adapted for all of these changes, as compared to viewing two different actions. Similar findings were observed in more posterior brain areas also implicated in action recognition. Our findings are evidence for viewpoint invariance in the human STS and related brain areas, with the implication that actions are abstracted into object-centered representations during visual analysis.
Affiliation(s)
- Emily D. Grossman
- Department of Cognitive Sciences, Center for Cognitive Neuroscience, University of California, Irvine, Irvine, CA, USA
- Nicole L. Jardine
- Department of Psychology, Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- John A. Pyles
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, USA
20
Zangenehpour S, Zatorre RJ. Crossmodal recruitment of primary visual cortex following brief exposure to bimodal audiovisual stimuli. Neuropsychologia 2009; 48:591-600. [PMID: 19883668 DOI: 10.1016/j.neuropsychologia.2009.10.022] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2009] [Revised: 10/14/2009] [Accepted: 10/22/2009] [Indexed: 10/20/2022]
Abstract
Several lines of evidence suggest that exposure to only one component of typically audiovisual events can lead to crossmodal cortical activation. These effects are likely explained by long-term associations formed between the auditory and visual components of such events. It is not certain whether such crossmodal recruitment can occur in the absence of explicit conditioning, semantic factors, or long-term association; nor is it clear whether primary sensory cortices can be recruited in such paradigms. In the present study we tested the hypothesis that crossmodal cortical recruitment would occur even after a brief exposure to bimodal stimuli without semantic association. We used positron emission tomography, and an apparatus allowing presentation of spatially and temporally congruous audiovisual stimuli (noise bursts and light flashes). When presented with only the auditory or visual components of the bimodal stimuli, naïve subjects showed only modality-specific cortical activation, as expected. However, subjects who had previously been exposed to the audiovisual stimuli showed increased cerebral blood flow in the primary visual cortex when presented with sounds alone. Functional connectivity analysis suggested that the auditory cortex was the source of visual cortex activity. This crossmodal activation appears to be the result of implicit associations of the two stimuli, likely driven by their spatiotemporal characteristics; it was observed after a relatively short period of exposure (approximately 45 min), and lasted for a relatively long period after the initial exposure (approximately 1 day). The findings indicate that auditory and visual cortices interact with one another to a larger degree than typically assumed.
21
Multisensory integration of sounds and vibrotactile stimuli in processing streams for "what" and "where". J Neurosci 2009; 29:10950-60. [PMID: 19726653 DOI: 10.1523/jneurosci.0910-09.2009] [Citation(s) in RCA: 88] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The segregation between cortical pathways for the identification and localization of objects is thought of as a general organizational principle in the brain. Yet, little is known about the unimodal versus multimodal nature of these processing streams. The main purpose of the present study was to test whether the auditory and tactile dual pathways converged into specialized multisensory brain areas. We used functional magnetic resonance imaging (fMRI) to compare directly in the same subjects the brain activation related to localization and identification of comparable auditory and vibrotactile stimuli. Results indicate that the right inferior frontal gyrus (IFG) and both left and right insula were more activated during identification conditions than during localization in both touch and audition. The reverse dissociation was found for the left and right inferior parietal lobules (IPL), the left superior parietal lobule (SPL) and the right precuneus-SPL, which were all more activated during localization conditions in the two modalities. We propose that specialized areas in the right IFG and the left and right insula are multisensory operators for the processing of stimulus identity whereas parts of the left and right IPL and SPL are specialized for the processing of spatial attributes independently of sensory modality.
22
Engel LR, Frum C, Puce A, Walker NA, Lewis JW. Different categories of living and non-living sound-sources activate distinct cortical networks. Neuroimage 2009; 47:1778-91. [PMID: 19465134 DOI: 10.1016/j.neuroimage.2009.05.041] [Citation(s) in RCA: 77] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2008] [Revised: 04/28/2009] [Accepted: 05/13/2009] [Indexed: 11/25/2022] Open
Abstract
With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places: categories typically defined by their characteristic visual features. Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left lateralized fronto-parietal regions, bilateral insular cortices, and sub-cortical regions previously implicated in observation-execution matching, consistent with "embodied" and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception.
Affiliation(s)
- Lauren R Engel
- Sensory Neuroscience Research Center, West Virginia University, Morgantown, WV 26506, USA
23
Cappe C, Morel A, Barone P, Rouiller EM. The thalamocortical projection systems in primate: an anatomical support for multisensory and sensorimotor interplay. Cereb Cortex 2009; 19:2025-37. [PMID: 19150924 PMCID: PMC2722423 DOI: 10.1093/cercor/bhn228] [Citation(s) in RCA: 167] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Multisensory and sensorimotor integrations are usually considered to occur in superior colliculus and cerebral cortex, but few studies proposed the thalamus as being involved in these integrative processes. We investigated whether the organization of the thalamocortical (TC) systems for different modalities partly overlap, representing an anatomical support for multisensory and sensorimotor interplay in thalamus. In 2 macaque monkeys, 6 neuroanatomical tracers were injected in the rostral and caudal auditory cortex, posterior parietal cortex (PE/PEa in area 5), and dorsal and ventral premotor cortical areas (PMd, PMv), demonstrating the existence of overlapping territories of thalamic projections to areas of different modalities (sensory and motor). TC projections, distinct from the ones arising from specific unimodal sensory nuclei, were observed from motor thalamus to PE/PEa or auditory cortex and from sensory thalamus to PMd/PMv. The central lateral nucleus and the mediodorsal nucleus project to all injected areas, but the most significant overlap across modalities was found in the medial pulvinar nucleus. The present results demonstrate the presence of thalamic territories integrating different sensory modalities with motor attributes. Based on the divergent/convergent pattern of TC and corticothalamic projections, 4 distinct mechanisms of multisensory and sensorimotor interplay are proposed.
Affiliation(s)
- Céline Cappe
- Unit of Physiology and Program in Neurosciences, Department of Medicine, Faculty of Sciences, University of Fribourg, Chemin du Musée 5, CH-1700 Fribourg, Switzerland
24
25
Baumann S, Koeneke S, Schmidt CF, Meyer M, Lutz K, Jancke L. A network for audio–motor coordination in skilled pianists and non-musicians. Brain Res 2007; 1161:65-78. [PMID: 17603027 DOI: 10.1016/j.brainres.2007.05.045] [Citation(s) in RCA: 156] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2006] [Revised: 05/27/2007] [Accepted: 05/30/2007] [Indexed: 10/23/2022]
Abstract
Playing a musical instrument requires efficient auditory and motor processing. Fast feed-forward and feedback connections that link the acoustic target to the corresponding motor programs need to be established during years of practice. The aim of our study is to provide a detailed description of cortical structures that participate in this audio-motor coordination network in professional pianists and non-musicians. In order to map these interacting areas using functional magnetic resonance imaging (fMRI), we considered cortical areas that are concurrently activated during silent piano performance and motionless listening to piano sound. Furthermore, we investigated to what extent interactions between the auditory and the motor modality happen involuntarily. We observed a network of predominantly secondary and higher-order areas belonging to the auditory and motor modality. The extent of activity was clearly increased by imagination of the absent modality. However, this network comprised neither primary auditory nor primary motor areas in any condition. Activity in the lateral dorsal premotor cortex (PMd) and the pre-supplementary motor cortex (preSMA) was significantly increased for pianists. Our data imply an intermodal transformation network of auditory and motor areas which is subject to a certain degree of plasticity by means of intensive training.
Affiliation(s)
- Simon Baumann
- Department of Neuropsychology, Institute for Psychology, University of Zurich, Switzerland.