1
Stadler J, Brechmann A, Angenstein N. Effect of age on lateralized auditory processing. Hear Res 2023; 434:108791. PMID: 37209509; DOI: 10.1016/j.heares.2023.108791.
Abstract
The lateralization of processing in the auditory cortex for different acoustic parameters differs depending on stimuli and tasks. Processing complex auditory stimuli therefore requires efficient hemispheric interaction. Anatomical connectivity decreases with aging and consequently affects both the functional interaction between the left and right auditory cortex and the lateralization of auditory processing. Here we used magnetic resonance imaging to study the effect of aging on the lateralization of processing and on hemispheric interaction during two tasks employing the contralateral noise procedure. Categorization of tones according to the direction of their frequency modulation (FM) is known to be processed mainly in the right auditory cortex. Sequential comparison of the same tones according to their FM direction additionally involves the left auditory cortex and therefore requires stronger hemispheric interaction than the categorization task. The results showed that older adults recruited the auditory cortex more strongly, especially during the comparison task, which demands stronger hemispheric interaction. This was the case although task difficulty was adapted to achieve performance similar to that of the younger adults. Additionally, functional connectivity from the auditory cortex to other brain areas was stronger in older than in younger adults, especially during the comparison task. Diffusion tensor imaging showed reduced fractional anisotropy and increased mean diffusivity in the corpus callosum of older adults compared to younger adults. These changes indicate a reduction of anatomical interhemispheric connections in older adults, which makes larger processing capacity necessary when tasks require functional hemispheric interaction.
Affiliation(s)
- Jörg Stadler
- Leibniz Institute for Neurobiology, Combinatorial NeuroImaging Core Facility, Brenneckestr. 6, 39118 Magdeburg, Germany
- André Brechmann
- Leibniz Institute for Neurobiology, Combinatorial NeuroImaging Core Facility, Brenneckestr. 6, 39118 Magdeburg, Germany
- Nicole Angenstein
- Leibniz Institute for Neurobiology, Combinatorial NeuroImaging Core Facility, Brenneckestr. 6, 39118 Magdeburg, Germany
2
González-Alvarez J, Cervera-Crespo T. Age of Acquisition and Spoken Words: Examining Hemispheric Differences in Lexical Processing. Lang Speech 2023; 66:68-78. PMID: 35000476; DOI: 10.1177/00238309211068402.
Abstract
The relationship between the age of acquisition (AoA) of words and their cerebral hemispheric representation is controversial because the experimental results have been contradictory. However, most lexical processing experiments were performed with stimuli consisting of written words. If we want to compare the processing of words learned very early in infancy, when children cannot yet read, with words learned later, it seems more logical to employ spoken words as experimental stimuli. This study, based on the auditory lexical decision task, used spoken words classified according to an objective criterion of AoA with widely separated means (2.88 vs. 9.28 years old). As revealed by the reaction times, both early and late words were processed more efficiently in the left hemisphere, with no AoA × Hemisphere interaction. The results are discussed from a theoretical point of view, considering that all the experiments were conducted with adult participants.
3
Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023. PMID: 36786655; DOI: 10.1093/cercor/bhad020.
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization in nonhuman primates. Previous studies mostly assessed spatial characteristics, whereas temporal aspects have so far received little attention. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions of interest within AC, namely in medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the location of the corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right compared with the left PT and ~15 ms earlier in the right compared with the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able to demonstrate for the first time a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, reflecting the prediction of serial processing from nonhuman studies.
Affiliation(s)
- Jan Benner
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt
- Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland; Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner
- Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth
- Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich
- Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany; Centre for Systematic Musicology, University of Graz, Graz, Austria; Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow
- Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
4
Drown L, Philip B, Francis AL, Theodore RM. Revisiting the left ear advantage for phonetic cues to talker identification. J Acoust Soc Am 2022; 152:3107. PMID: 36456295; PMCID: PMC9715276; DOI: 10.1121/10.0015093.
Abstract
Previous research suggests that learning to use a phonetic property [e.g., voice onset time (VOT)] for talker identity supports a left ear processing advantage. Specifically, listeners trained to identify two "talkers" who differed only in characteristic VOTs showed faster talker identification for stimuli presented to the left ear compared to stimuli presented to the right ear, which is interpreted as evidence of hemispheric lateralization consistent with task demands. Experiment 1 (n = 97) aimed to replicate this finding and identify predictors of performance; experiment 2 (n = 79) aimed to replicate this finding under conditions that better facilitate observation of laterality effects. Listeners completed a talker identification task during pretest, training, and posttest phases. Inhibition, category identification, and auditory acuity were also assessed in experiment 1. Listeners learned to use VOT for talker identity, and this learning was positively associated with auditory acuity. Talker identification was not influenced by ear of presentation, and Bayes factors indicated strong support for the null. These results suggest that talker-specific phonetic variation is not sufficient to induce a left ear advantage for talker identification; together with the extant literature, this instead suggests that hemispheric lateralization for talker-specific phonetic variation requires phonetic variation to be conditioned on talker differences in source characteristics.
Affiliation(s)
- Lee Drown
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut 06269-1085, USA
- Betsy Philip
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut 06269-1085, USA
- Alexander L Francis
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907-2122, USA
- Rachel M Theodore
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut 06269-1085, USA
5
Deliano M, Seidel P, Vorwerk U, Stadler B, Angenstein N. Effect of cochlear implant side on early speech processing in adults with single-sided deafness. Clin Neurophysiol 2022; 140:29-39. DOI: 10.1016/j.clinph.2022.05.008.
6
Brancucci A, Angenstein N. Editorial: Hemispheric Asymmetries in the Auditory Domain. Front Behav Neurosci 2022; 16:892786. PMID: 35464144; PMCID: PMC9019809; DOI: 10.3389/fnbeh.2022.892786.
Affiliation(s)
- Alfredo Brancucci
- Dipartimento di Scienze Motorie, Umane e della Salute, Università di Roma “Foro Italico”, Rome, Italy
- Correspondence: Alfredo Brancucci
- Nicole Angenstein
- Combinatorial NeuroImaging Core Facility, Leibniz Institute for Neurobiology, Magdeburg, Germany
7
Wendt B, Stadler J, Verhey JL, Hessel H, Angenstein N. Effect of Contralateral Noise on Speech Intelligibility. Neuroscience 2021; 459:59-69. PMID: 33548367; DOI: 10.1016/j.neuroscience.2021.01.034.
Abstract
In patients with strong asymmetric hearing loss, standard clinical practice involves testing speech intelligibility in the ear with the higher hearing threshold by simultaneously presenting noise to the other ear. However, psychoacoustic and functional magnetic resonance imaging (fMRI) studies indicate that this approach may be problematic as contralateral noise has a disruptive effect on task processing. Furthermore, fMRI studies have revealed that the effect of contralateral noise on brain activity depends on the lateralization of task processing. The effect of contralateral noise is stronger when task-relevant stimuli are presented ipsilaterally to the hemisphere that is processing the task. In the present study, we tested the effect of four different levels of contralateral noise on speech intelligibility using the Oldenburg sentence test (OLSA). Cortical lateralization of speech processing was assessed upfront by using a visual speech test with fMRI. Contralateral OLSA noise of 65 or 80 dB SPL significantly reduced word intelligibility irrespective of which ear the speech was presented to. In participants with left-lateralized speech processing, 50 dB SPL contralateral OLSA noise led to a significant reduction in speech intelligibility when speech was presented to the left ear, i.e. when speech was presented ipsilaterally to the hemisphere that is mainly processing speech. Thus, contralateral noise, as used in standard clinical practice, not only prevents listeners from using the information in the better-hearing ear but may also have the unintended effect of hampering central processing of speech.
Affiliation(s)
- Beate Wendt
- University Hospital of the Otto von Guericke University Magdeburg, Department of Otorhinolaryngology, Germany
- Jörg Stadler
- Leibniz Institute for Neurobiology, Magdeburg, Combinatorial NeuroImaging Core Facility, Germany
- Jesko L Verhey
- Otto von Guericke University Magdeburg, Department of Experimental Audiology, Germany
- Horst Hessel
- Cochlear Deutschland GmbH & Co. KG, Hannover, Germany
- Nicole Angenstein
- Leibniz Institute for Neurobiology, Magdeburg, Combinatorial NeuroImaging Core Facility, Germany
8
Brechmann A, Angenstein N. The impact of task difficulty on the lateralization of processing in the human auditory cortex. Hum Brain Mapp 2019; 40:5341-5353. PMID: 31460688; PMCID: PMC6865217; DOI: 10.1002/hbm.24776.
Abstract
Perception of complex auditory stimuli like speech requires the simultaneous processing of different fundamental acoustic parameters. The contribution of the left and right auditory cortex (AC) to the processing of these parameters differs. In addition, activity within the AC can vary positively or negatively with task performance depending on the type of task. This might affect the allocation of processing to the left and right AC. Here we studied with functional magnetic resonance imaging the impact of task difficulty on the degree of involvement of the left and right AC in two tasks that have previously been shown to differ in hemispheric involvement: categorization and sequential comparison of the direction of frequency modulations (FM). Task difficulty was manipulated by changing the speed of modulation and thereby the frequency range covered by the FM. To study the impact of task difficulty despite the covarying stimulus parameters, we utilized the contralateral noise procedure, which allows comparing AC activation unconfounded by bottom-up driven activity. The easiest conditions confirmed the known right AC involvement during the categorization task and the left AC involvement during the comparison task. The involvement of the right AC increased with increasing task difficulty for both tasks, presumably due to the common task component of categorizing FM direction. The involvement of the left AC varied with task difficulty depending on the task. Thus, task difficulty has a strong impact on lateralized processing in the AC. This connection must be taken into account when interpreting future results on lateralized processing in the AC.
Affiliation(s)
- André Brechmann
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Nicole Angenstein
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
9
Angenstein N, Brechmann A. Effect of sequential comparison on active processing of sound duration. Hum Brain Mapp 2017; 38:4459-4469. PMID: 28580585; DOI: 10.1002/hbm.23673.
Abstract
Previous studies on active duration processing of sounds showed opposing results regarding the predominant involvement of the left or right hemisphere. The duration of an acoustic event is normally judged relative to other sounds. This requires sequential comparison as auditory events unfold over time. We hypothesized that increasing the demand on sequential comparison in a task increases the involvement of the left auditory cortex. With the current fMRI study, we investigated the effect of sequential comparison on active duration discrimination by comparing a categorical with a comparative task. During the categorical task, the participants had to categorize the tones according to their duration (short vs. long). During the comparative task, they had to decide for each tone whether its duration matched that of the tone presented before. We used the contralateral noise procedure to reveal the degree of participation of the left and right auditory cortex during these tasks. We found that both tasks involve the left more strongly than the right auditory cortex. Furthermore, the left auditory cortex was more strongly involved during comparison than during categorization. Together with previous studies, this suggests that an additional demand for sequential comparison during the processing of different basic acoustic parameters leads to increased recruitment of the left auditory cortex. In addition, the comparison task more strongly involved several brain areas outside the auditory cortex, which may be related to the demand for additional cognitive resources as compared to the more efficient categorization of sounds.
Affiliation(s)
- Nicole Angenstein
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany
- André Brechmann
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany
10
Angenstein N, Stadler J, Brechmann A. Auditory intensity processing: Effect of MRI background noise. Hear Res 2016; 333:87-92. DOI: 10.1016/j.heares.2016.01.007.
11
Söderlund GBW, Jobs EN. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear When Exposed to 65 dB of Auditory Noise. Front Psychol 2016; 7:34. PMID: 26858679; PMCID: PMC4731512; DOI: 10.3389/fpsyg.2016.00034.
Abstract
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors, as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noted. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in prefrontal cortex activation. Less attention has been paid to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether this noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman's speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise (65 dB) conditions. Results showed that the inattentive group displayed a higher speech recognition threshold than typically developed children and that the difference in speech recognition threshold disappeared when exposed to noise at a supra-threshold level. From this we conclude that inattention can partly be explained by sensory perceptual limitations that can possibly be ameliorated through noise exposure.
Affiliation(s)
- Göran B W Söderlund
- Department of Teacher Education and Sports, Sogn og Fjordane University College, Sogndal, Norway
12
Auditory intensity processing: Categorization versus comparison. Neuroimage 2015; 119:362-70. DOI: 10.1016/j.neuroimage.2015.06.074.
13
Ludwig AA, Fuchs M, Kruse E, Uhlig B, Kotz SA, Rübsamen R. Auditory processing disorders with and without central auditory discrimination deficits. J Assoc Res Otolaryngol 2015; 15:441-64. PMID: 24658855; DOI: 10.1007/s10162-014-0450-3.
Abstract
Auditory processing disorder (APD) is defined as a processing deficit in the auditory modality and spans multiple processes. To date, APD diagnosis is mostly based on speech material. Adequate nonspeech tests that allow differentiation between an actual central hearing disorder and related disorders, such as specific language impairments, are still lacking. In the present study, 84 children between 6 and 17 years of age (clinical group), referred to three audiological centers for APD diagnosis, were evaluated with standard audiological tests and additional auditory discrimination tests. The latter tests assessed the processing of basic acoustic features at two different stages of the ascending central auditory system: (1) auditory brainstem processing was evaluated by quantifying interaural frequency, level, and signal duration discrimination (interaural tests); (2) diencephalic/telencephalic processing was assessed by varying the same acoustic parameters (plus signals with sinusoidal amplitude modulation), but presenting the test signals in conjunction with noise pulses to the contralateral ear (dichotic signal/noise tests). Data of children in the clinical group were referenced to normative data obtained from more than 300 normally developing healthy school children. The results of the audiological and the discrimination tests diverged widely. Of the 39 children diagnosed with APD in the audiological clinic, 30 had deficits in auditory performance. Even more alarming, of the 45 children with a negative APD diagnosis, 32 showed clear signs of a central hearing deficit. Based on these results, we suggest revising the current diagnostic procedure for evaluating APD in order to more clearly differentiate between central auditory processing deficits and higher-order (cognitive and/or language) processing deficits.
14
Effects of ipsilateral and bilateral auditory stimuli on audiovisual integration: a behavioral and event-related potential study. Neuroreport 2015; 25:668-75. PMID: 24780895; DOI: 10.1097/wnr.0000000000000155.
Abstract
We used event-related potential measures to compare the effects of ipsilateral and bilateral auditory stimuli on audiovisual (AV) integration. Behavioral results showed that responses to visual stimuli accompanied by either type of auditory stimulus were faster than responses to visual stimuli alone, and that perceptual sensitivity (d') for visual detection was enhanced for visual stimuli with ipsilateral auditory stimuli. Furthermore, event-related potential components related to AV integration were identified over the occipital areas at ∼180-200 ms during early-stage sensory processing for ipsilateral auditory stimuli, and over the frontocentral areas at ∼300-320 ms during late-stage cognitive processing for both ipsilateral and bilateral auditory stimuli. Our results confirmed that AV integration was also elicited by bilateral auditory stimuli, although only at later stages of cognitive processing in the visual detection task. Furthermore, integration during early-stage sensory processing was observed only for ipsilateral auditory stimuli, suggesting that the integration of AV information in the human brain might be particularly sensitive to ipsilaterally presented AV stimuli.
15
Gutschalk A, Steinmann I. Stimulus dependence of contralateral dominance in human auditory cortex. Hum Brain Mapp 2014; 36:883-96. PMID: 25346487; DOI: 10.1002/hbm.22673.
Abstract
The auditory system is often considered to show little contralateral dominance, but physiological reports on the contralateral dominance of activity evoked by monaural sound vary widely. Here, we show that part of this variation is stimulus dependent: blood oxygen level dependent (BOLD) responses to 32 s of monaurally presented unmodulated noise (UN) showed activation in contralateral auditory cortex (AC) and deactivation in ipsilateral AC compared to a nonstimulus baseline. Slow amplitude-modulated (AM) noise evoked strong contralateral activation and minimal ipsilateral activation. The contrast of AM-versus-UN was used to separate fMRI activity related to the slow amplitude modulation per se. This difference activation was bilateral, although still stronger in contralateral AC. In magnetoencephalography (MEG), the response was dominated by the steady-state activity phase-locked to the amplitude modulation. This MEG activity showed no consistent contralateral dominance across listeners. Subcortical BOLD activation was strongly contralateral subsequent to the superior olivary complex (SOC) and showed no significant difference between AM and UN. An acallosal participant showed fMRI activation similar to that of the group, making transcallosal transmission an unlikely source of ipsilateral enhancement or ipsilateral deactivation. These results suggest that ascending activity subsequent to the SOC is strongly dominant contralateral to the stimulated ear. In contrast, the part of the BOLD and MEG activity related to slow amplitude modulation is more bilateral and only observed in AC. Ipsilateral deactivation can potentially bias measures of contralateral BOLD dominance and should be considered in future studies.
16
Liégeois-Chauvel C, Bénar C, Krieg J, Delbé C, Chauvel P, Giusiano B, Bigand E. How functional coupling between the auditory cortex and the amygdala induces musical emotion: a single case study. Cortex 2014; 60:82-93. PMID: 25023618; DOI: 10.1016/j.cortex.2014.06.002.
Abstract
Music is a sound structure of remarkable acoustical and temporal complexity. Although it cannot denote specific meaning, it is one of the most potent and universal stimuli for inducing mood. How the auditory and limbic systems interact, and whether this interaction is lateralized when feeling emotions related to music, remains unclear. We studied the functional correlation between the auditory cortex (AC) and amygdala (AMY) through intracerebral recordings from both hemispheres in a single patient while she listened attentively to musical excerpts, which we compared to passive listening of a sequence of pure tones. While the left primary and secondary auditory cortices (PAC and SAC) showed larger increases in gamma-band responses than the right side, only the right side showed emotion-modulated gamma oscillatory activity. An intra- and inter-hemisphere correlation was observed between the auditory areas and AMY during the delivery of a sequence of pure tones. In contrast, a strikingly right-lateralized functional network between the AC and the AMY was observed to be related to the musical excerpts the patient experienced as happy, sad and peaceful. Interestingly, excerpts experienced as angry, which the patient disliked, were associated with widespread de-correlation between all the structures. These results suggest that the right auditory-limbic interactions result from the formation of oscillatory networks that bind the activities of the network nodes into coherence patterns, resulting in the emergence of a feeling.
Affiliation(s)
- Christian Bénar
- INS INSERM UMR 1106, Marseilles, France; Aix-Marseille Université, 13005 Marseilles, France
- Julien Krieg
- INS INSERM UMR 1106, Marseilles, France; Aix-Marseille Université, 13005 Marseilles, France
- Charles Delbé
- LEAD UMR 5022 CNRS, Université de Bourgogne, 21065 Dijon, France
- Patrick Chauvel
- INS INSERM UMR 1106, Marseilles, France; Aix-Marseille Université, 13005 Marseilles, France; Hôpitaux de Marseille, Hôpital de la Timone, 13005 Marseille, France
- Bernard Giusiano
- INS INSERM UMR 1106, Marseilles, France; Aix-Marseille Université, 13005 Marseilles, France; Hôpitaux de Marseille, Hôpital de la Timone, 13005 Marseille, France
- Emmanuel Bigand
- LEAD UMR 5022 CNRS, Université de Bourgogne, 21065 Dijon, France
17
Angenstein N, Brechmann A. Division of labor between left and right human auditory cortices during the processing of intensity and duration. Neuroimage 2013; 83:1-11. DOI: 10.1016/j.neuroimage.2013.06.071.
18
Altmann CF, Gaese BH. Representation of frequency-modulated sounds in the human brain. Hear Res 2013; 307:74-85. PMID: 23933098; DOI: 10.1016/j.heares.2013.07.018.
Abstract
Frequency-modulation is a ubiquitous sound feature present in communicative sounds of various animal species and humans. Functional imaging of the human auditory system has seen remarkable advances in the last two decades and studies pertaining to frequency-modulation have centered around two major questions: a) are there dedicated feature-detectors encoding frequency-modulation in the brain and b) is there concurrent representation with amplitude-modulation, another temporal sound feature? In this review, we first describe how these two questions are motivated by psychophysical studies and neurophysiology in animal models. We then review how human non-invasive neuroimaging studies have furthered our understanding of the representation of frequency-modulated sounds in the brain. Finally, we conclude with some suggestions on how human neuroimaging could be used in future studies to address currently still open questions on this fundamental sound feature. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Christian F Altmann
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, Kyoto 606-8507, Japan; Career-Path Promotion Unit for Young Life Scientists, Kyoto University, Kyoto 606-8501, Japan.
19
Angenstein N, Brechmann A. Left auditory cortex is involved in pairwise comparisons of the direction of frequency modulated tones. Front Neurosci 2013; 7:115. [PMID: 23847464] [PMCID: PMC3705175] [DOI: 10.3389/fnins.2013.00115]
Abstract
Evaluating series of complex sounds like those in speech and music requires sequential comparisons to extract task-relevant relations between subsequent sounds. With the present functional magnetic resonance imaging (fMRI) study, we investigated whether sequential comparison of a specific acoustic feature within pairs of tones leads to a change in lateralized processing in the auditory cortex (AC) of humans. For this we used the active categorization of the direction (up vs. down) of slow frequency modulated (FM) tones. Several studies suggest that this task is mainly processed in the right AC. These studies, however, tested only the categorization of the FM direction of each individual tone. In the present study we ask the question whether the right lateralized processing changes when, in addition, the FM direction is compared within pairs of successive tones. For this we use an experimental approach involving contralateral noise presentation in order to explore the contributions made by the left and right AC in the completion of the auditory task. This method has already been applied to confirm the right-lateralized processing of the FM direction of individual tones. In the present study, the subjects were required to perform, in addition, a sequential comparison of the FM direction in pairs of tones. The results suggest a division of labor between the two hemispheres such that the FM direction of each individual tone is mainly processed in the right AC whereas the sequential comparison of this feature between tones in a pair is probably performed in the left AC.
Affiliation(s)
- Nicole Angenstein
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology Magdeburg, Germany
20
Kohrs C, Angenstein N, Scheich H, Brechmann A. Human striatum is differentially activated by delayed, omitted, and immediate registering feedback. Front Hum Neurosci 2012; 6:243. [PMID: 22969713] [PMCID: PMC3430931] [DOI: 10.3389/fnhum.2012.00243]
Abstract
The temporal contingency of feedback during conversations is an essential requirement of a successful dialog. In the current study, we investigated the effects of delayed and omitted registering feedback on fMRI activation and compared both unexpected conditions to immediate feedback. In the majority of trials of an auditory task, participants received an immediate visual feedback which merely indicated that a button press was registered but not whether the response was correct or not. In a minority of trials, and thus unexpectedly, the feedback was omitted, or delayed by 500 ms. The results reveal a response hierarchy of activation strength in the dorsal striatum and the substantia nigra: the response to the delayed feedback was larger compared to immediate feedback and immediate feedback showed a larger activation compared to the omission of feedback. This suggests that brain regions typically involved in reward processing are also activated by non-rewarding, registering feedback. Furthermore, the comparison with immediate feedback revealed that both omitted and delayed feedback significantly modulated activity in a network of brain regions that reflects attentional demand and adjustments in cognitive and action control, i.e., the posterior medial frontal cortex (pMFC), right dorsolateral prefrontal cortex (dlPFC), bilateral anterior insula (aI), inferior frontal gyrus (Gfi), and inferior parietal lobe (Lpi). This finding emphasizes the importance of immediate feedback in human–computer interaction, as the effects of delayed feedback on brain activity in the described network seem to be similar to that of omitted feedback.
Affiliation(s)
- Christin Kohrs
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology Magdeburg, Germany
21
Hsieh IH, Fillmore P, Rong F, Hickok G, Saberi K. FM-selective networks in human auditory cortex revealed using fMRI and multivariate pattern classification. J Cogn Neurosci 2012; 24:1896-907. [PMID: 22640390] [DOI: 10.1162/jocn_a_00254]
Abstract
Frequency modulation (FM) is an acoustic feature of nearly all complex sounds. Directional FM sweeps are especially pervasive in speech, music, animal vocalizations, and other natural sounds. Although the existence of FM-selective cells in the auditory cortex of animals has been documented, evidence in humans remains equivocal. Here we used multivariate pattern analysis to identify cortical selectivity for direction of a multitone FM sweep. This method distinguishes one pattern of neural activity from another within the same ROI, even when overall level of activity is similar, allowing for direct identification of FM-specialized networks. Standard contrast analysis showed that despite robust activity in auditory cortex, no clusters of activity were associated with up versus down sweeps. Multivariate pattern analysis classification, however, identified two brain regions as selective for FM direction, the right primary auditory cortex on the supratemporal plane and the left anterior region of the superior temporal gyrus. These findings are the first to directly demonstrate existence of FM direction selectivity in the human auditory cortex.
Affiliation(s)
- I-Hui Hsieh
- National Central University, Jhongli City, Taiwan.
22
Interaction between bottom-up and top-down effects during the processing of pitch intervals in sequences of spoken and sung syllables. Neuroimage 2012; 61:715-22. [PMID: 22503936] [DOI: 10.1016/j.neuroimage.2012.03.086]
Abstract
The processing of pitch intervals may be differentially influenced when musical or speech stimuli carry the pitch information. Most insights into the neural basis of pitch interval processing come from studies on music perception. However, music, in contrast to speech, contains a stable set of pitch intervals. To converge the investigation of pitch interval processing in music and speech, we used sequences of the same spoken or sung syllables. The pitch of these syllables varied either by semitone steps like in music or by smaller intervals. Participants had to differentiate the sequences according to their different sizes of pitch intervals or to the direction of the last frequency step in the sequence. The results depended strongly on the specific task demands. Whereas the interval-size task itself recruited more regions in a right-lateralized fronto-parietal brain network, stronger activity on semitone than on non-semitone sequences was found in the left hemisphere (mainly in frontal cortex) during this task. These effects were also influenced by the speech mode (spoken or sung syllables). Our findings suggest that the processing of pitch intervals in sequences of syllables depends on an interaction between bottom-up (speech mode, pitch interval) and top-down effects (task).
23
Ross B, Miyazaki T, Fujioka T. Interference in dichotic listening: the effect of contralateral noise on oscillatory brain networks. Eur J Neurosci 2011; 35:106-18. [PMID: 22171970] [DOI: 10.1111/j.1460-9568.2011.07935.x]
Abstract
Coupling of thalamocortical networks through synchronous oscillations at gamma frequencies (30-80 Hz) has been suggested as a mechanism for binding of auditory sensory information into an object representation, which then becomes accessible for perception and cognition. This study investigated whether contralateral noise interferes with this step of central auditory processing. Neuromagnetic 40-Hz oscillations were examined in young healthy participants while they listened to amplitude-modulated sound in one ear and a multi-talker masking noise in the contralateral ear. Participants were engaged in a gap-detection task, for which their behavioural performance declined under masking. The amplitude modulation of the stimulus elicited steady 40-Hz oscillations with sources in bilateral auditory cortices. Analysis of the temporal dynamics of phase synchrony between source activity and the stimulus revealed two oscillatory components; the first was indicated by an instant onset in phase synchrony with the stimulus while the second showed a 200-ms time constant of gradual increase in phase synchrony after phase resetting by the gap. Masking abolished only the second component. This coincided with masking-related decrease of the P2 wave of the transient auditory-evoked responses whereas the N1 wave, reflecting early sensory processing, was unaffected. Given that the P2 response has been associated with object representation, we propose that the first 40-Hz component is related to representation of low-level sensory input whereas the second is related to internal auditory processing in thalamocortical networks. The observed modulation of oscillatory activity is discussed as reflecting a neural mechanism critical for speech understanding in noise.
Affiliation(s)
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada.
24
Heinemann LV, Rahm B, Kaiser J, Gaese BH, Altmann CF. Repetition enhancement for frequency-modulated but not unmodulated sounds: a human MEG study. PLoS One 2010; 5:e15548. [PMID: 21217825] [PMCID: PMC3013102] [DOI: 10.1371/journal.pone.0015548]
Abstract
Background: Decoding of frequency-modulated (FM) sounds is essential for phoneme identification. This study investigates selectivity to FM direction in the human auditory system.
Methodology/Principal Findings: Magnetoencephalography was recorded in 10 adults during a two-tone adaptation paradigm with a 200-ms interstimulus-interval. Stimuli were pairs of either same or different frequency modulation direction. To control that FM repetition effects cannot be accounted for by their on- and offset properties, we additionally assessed responses to pairs of unmodulated tones with either same or different frequency composition. For the FM sweeps, N1m event-related magnetic field components were found at 103 and 130 ms after onset of the first (S1) and second stimulus (S2), respectively. This was followed by a sustained component starting at about 200 ms after S2. The sustained response was significantly stronger for stimulation with the same compared to different FM direction. This effect was not observed for the non-modulated control stimuli.
Conclusions/Significance: Low-level processing of FM sounds was characterized by repetition enhancement to stimulus pairs with same versus different FM directions. This effect was FM-specific; it did not occur for unmodulated tones. The present findings may reflect specific interactions between frequency separation and temporal distance in the processing of consecutive FM sweeps.
Affiliation(s)
- Linda V Heinemann
- Institute of Medical Psychology, Goethe University, Frankfurt am Main, Germany.
25
Hemispheric differences in specificity effects in talker identification. Atten Percept Psychophys 2010; 72:2265-73. [PMID: 21097868] [DOI: 10.3758/bf03196700]
Abstract
In the visual domain, Marsolek and colleagues (1999, 2008) have found support for two dissociable and parallel neural subsystems underlying object and shape recognition: an abstract-category subsystem that operates more effectively in the left cerebral hemisphere (LH), and a specific-exemplar subsystem that operates more effectively in the right cerebral hemisphere (RH). Evidence of this asymmetry has been observed in priming specificity for linguistic (words, pseudoword forms) and nonlinguistic (objects) stimuli. In the auditory domain, the authors previously found hemispheric asymmetries in priming effects for linguistic (spoken words) and nonlinguistic (environmental sounds) stimuli. In the present study, the same asymmetrical pattern was observed in talker identification by means of two long-term repetition-priming experiments. Both experiments consisted of a familiarization phase and a final talker identification test phase, using sentences as stimuli. The results showed that specificity effects (an advantage for same-sentence priming, relative to different-sentence priming) emerged when the target stimuli were presented to the left ear (RH), but not when the target stimuli were presented to the right ear (LH). Taken together, this consistent asymmetrical pattern of data from both domains, visual and auditory, may be indicative of a more general property of the human perceptual processing system. Theoretical implications are discussed.
26
Deike S, Scheich H, Brechmann A. Active stream segregation specifically involves the left human auditory cortex. Hear Res 2010; 265:30-7. [PMID: 20233603] [DOI: 10.1016/j.heares.2010.03.005]
Abstract
An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition in which only one non-separable stream was presented the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: Whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, are only involved in stream segregation based on pitch.
Affiliation(s)
- Susann Deike
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany.
27
Abstract
PURPOSE OF REVIEW: This review summarizes recent advances in functional magnetic resonance imaging that reveal similarities in the organization of human auditory cortex (HAC) and auditory cortex of nonhuman primates.
RECENT FINDINGS: Functional magnetic resonance imaging studies have shown that HAC is a compact region that covers less than 8% of the total cortical surface. HAC is subdivided into more than a dozen distinct auditory cortical fields (ACFs) that surround Heschl's gyri on the superior temporal plane. Recent advances that permit the visualization of the results of functional magnetic imaging experiments directly on the cortical surface have provided new insights into the organization of human ACFs. Evidence suggests that medial regions of HAC are organized in a manner similar to the auditory cortex of other primate species with a set of tonotopically organized core ACFs surrounded by belt ACFs that often share tonotopic organization with the core. Although influenced by attention, responses in HAC core and belt fields are largely determined by the acoustic properties of stimuli, including their frequency, intensity, and location. In contrast, lateral regions of HAC contain parabelt fields that are little influenced by simple acoustic features but rather respond to behaviorally relevant complex sounds such as speech and are strongly modulated by attention.
SUMMARY: HAC conserves the basic structural and functional organization of auditory cortex as seen in old world primate species. A central challenge to future research is to understand how this basic primate plan has evolved to support uniquely human abilities such as music and language.
28
Should spikes be treated with equal weightings in the generation of spectro-temporal receptive fields? J Physiol Paris 2009; 104:215-22. [PMID: 19941954] [DOI: 10.1016/j.jphysparis.2009.11.026]
Abstract
Knowledge of the trigger features of central auditory neurons is important in the understanding of speech processing. Spectro-temporal receptive fields (STRFs) obtained using random stimuli and spike-triggered averaging allow visualization of trigger features which often appear blurry in the time-versus-frequency plot. For a clearer visualization we have previously developed a dejittering algorithm to sharpen trigger features in the STRF of FM-sensitive cells. Here we extended this algorithm to segregate spikes, based on their dejitter values, into two groups, normal and outlying, and to construct their STRFs separately. We found that while the STRF of the normal jitter group resembled the full trigger feature in the original STRF, those of the outlying jitter group resembled a different or partial trigger feature. This algorithm allowed the extraction of other, weaker trigger features. Due to the presence of different trigger features in a given cell, we proposed that in the generation of STRFs, the evoked spikes should not be treated indiscriminately with equal weightings.
29
Woods DL, Stecker GC, Rinne T, Herron TJ, Cate AD, Yund EW, Liao I, Kang X. Functional maps of human auditory cortex: effects of acoustic features and attention. PLoS One 2009; 4:e5183. [PMID: 19365552] [PMCID: PMC2664477] [DOI: 10.1371/journal.pone.0005183]
Abstract
Background: While human auditory cortex is known to contain tonotopically organized auditory cortical fields (ACFs), little is known about how processing in these fields is modulated by other acoustic features or by attention.
Methodology/Principal Findings: We used functional magnetic resonance imaging (fMRI) and population-based cortical surface analysis to characterize the tonotopic organization of human auditory cortex and analyze the influence of tone intensity, ear of delivery, scanner background noise, and intermodal selective attention on auditory cortex activations. Medial auditory cortex surrounding Heschl's gyrus showed large sensory (unattended) activations with two mirror-symmetric tonotopic fields similar to those observed in non-human primates. Sensory responses in medial regions had symmetrical distributions with respect to the left and right hemispheres, were enlarged for tones of increased intensity, and were enhanced when sparse image acquisition reduced scanner acoustic noise. Spatial distribution analysis suggested that changes in tone intensity shifted activation within isofrequency bands. Activations to monaural tones were enhanced over the hemisphere contralateral to stimulation, where they produced activations similar to those produced by binaural sounds. Lateral regions of auditory cortex showed small sensory responses that were larger in the right than left hemisphere, lacked tonotopic organization, and were uninfluenced by acoustic parameters. Sensory responses in both medial and lateral auditory cortex decreased in magnitude throughout stimulus blocks. Attention-related modulations (ARMs) were larger in lateral than medial regions of auditory cortex and appeared to arise primarily in belt and parabelt auditory fields. ARMs lacked tonotopic organization, were unaffected by acoustic parameters, and had distributions that were distinct from those of sensory responses. Unlike the gradual adaptation seen for sensory responses, ARMs increased in amplitude throughout stimulus blocks.
Conclusions/Significance: The results are consistent with the view that medial regions of human auditory cortex contain tonotopically organized core and belt fields that map the basic acoustic features of sounds while surrounding higher-order parabelt regions are tuned to more abstract stimulus attributes. Intermodal selective attention enhances processing in neuronal populations that are partially distinct from those activated by unattended stimuli.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, VANCHCS, Martinez, California, United States of America.
30
Foxton JM, Weisz N, Bauchet-Lecaignard F, Delpuech C, Bertrand O. The neural bases underlying pitch processing difficulties. Neuroimage 2009; 45:1305-13. [PMID: 19349242] [DOI: 10.1016/j.neuroimage.2008.10.068]
Abstract
Normal listeners are often surprisingly poor at processing pitch changes. The neural bases of this difficulty were explored using magnetoencephalography (MEG) by comparing participants who obtained poor thresholds on a pitch-direction task with those who obtained good thresholds. Source-space projected data revealed that during an active listening task, the poor threshold group displayed greater activity in the left auditory cortical region when determining the direction of small pitch glides, whereas there was no difference in the good threshold group. In a passive listening task, a mismatch response (MMNm) was identified for pitch-glide direction deviants, with a tendency to be smaller in the poor listeners. The results imply that the difficulties in pitch processing are already apparent during automatic sound processing, and furthermore suggest that left hemisphere auditory regions are used by these listeners to consciously determine the direction of a pitch change. This is in line with evidence that the left hemisphere has a poor frequency resolution, and implies that normal listeners may use the sub-optimal hemisphere to process pitch changes.
Affiliation(s)
- Jessica M Foxton
- INSERM U821, Lyon 1 University, Brain Dynamics and Cognition laboratory, Lyon, F-69500, France
31
Abstract
In experimental settings, feedback is mostly used to inform about the correctness of a participant's response. Such feedback, however, also provides the information that a response was registered, which is highly significant in any dialog situation. The present functional MRI study investigated the involvement of brain areas in the processing of neutral, verbal feedback. We used an auditory discrimination task with verbal feedback, which immediately informed the participants that their response was registered. We found an increased activation in the left dorsal striatum compared with temporally uncorrelated feedback and the no-feedback condition. Several studies using evaluative feedback suggest that this area participates in reward processing. The present result suggests that it may already be involved in more basic aspects of feedback processing.
32
Dos Santos Sequeira S, Specht K, Hämäläinen H, Hugdahl K. The effects of background noise on dichotic listening to consonant-vowel syllables. Brain Lang 2008; 107:11-15. [PMID: 18602155] [DOI: 10.1016/j.bandl.2008.06.001]
Abstract
Lateralization of verbal processing is frequently studied with the dichotic listening technique, yielding a so-called right ear advantage (REA) to consonant-vowel (CV) syllables. However, little is known about how background noise affects the REA. To address this issue, we presented CV-syllables either in silence or with traffic background noise vs. 'babble'. Both 'babble' and traffic noise resulted in a smaller REA compared to the silent condition. The traffic noise, moreover, had a significantly greater negative effect on the REA than the 'babble', caused both by a decreased right ear response as well as an increased left ear response. The results are discussed in terms of alertness and attentional factors.
Affiliation(s)
- Sarah Dos Santos Sequeira
- Department of Biological and Medical Psychology, University of Bergen, Jonas Lies vei 91, N-5009 Bergen, Norway.
33
Özdamar Ö, Bohórquez J. Suppression of the Pb (P1) component of the auditory middle latency response with contralateral masking. Clin Neurophysiol 2008; 119:1870-1880. [DOI: 10.1016/j.clinph.2008.03.023]
34
List A, Justus T. Auditory priming of frequency and temporal information: effects of lateralised presentation. Laterality 2007; 12:507-35. [PMID: 17852702] [PMCID: PMC2582062] [DOI: 10.1080/13576500701566727]
Abstract
Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralised. In vision, lateralised local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli were developed. In the high-low stimulus set we manipulate frequency information, and in the fast-slow stimulus set we manipulate temporal information. The fast-slow stimuli additionally mimic visual hierarchical stimulus structure, in which the arrangement of local patterns determines the global pattern. Unlike previous auditory stimuli, the current stimulus sets contain the experimental flexibility of visual local-global hierarchical stimuli allowing independent manipulation of structural levels. Previous findings of frequency and temporal range priming were replicated. Additionally, by presenting stimuli monaurally, we found that priming of frequency ranges (but not temporal ranges) was found to vary by ear, supporting the contention that the hemispheres asymmetrically retain traces of prior frequency processing. These results contribute to the extensive literature revealing cerebral asymmetries for the processing of frequency information, and extend those results to the realm of priming.
35
Stefanatos GA, Joe WQ, Aguirre GK, Detre JA, Wetmore G. Activation of human auditory cortex during speech perception: effects of monaural, binaural, and dichotic presentation. Neuropsychologia 2007; 46:301-15. [PMID: 18023460] [DOI: 10.1016/j.neuropsychologia.2007.07.008]
Abstract
We used a rapid event-related functional magnetic resonance imaging (fMRI) paradigm to compare cortical activation to speech tokens presented monaurally to each ear, binaurally, and dichotically. Two forms of dichotic conditions were examined: one presented consonant-vowel (CV) syllables simultaneously to each ear while the other paired a CV syllable with a non-speech stimulus (band-limited noise). Right-handed adults were asked to differentially respond to serially presented target and distractor CV syllables. Activations were localized with reference to anatomic segmentation algorithms that allowed us to distinguish between activity in primary (PAC) and non-primary auditory cortex (NPAC). Monaural CV syllables presented to the right ear (CVR) produced highly asymmetric activations in left PAC and NPAC. A similar but reduced left hemisphere (LH) bias was evident in binaural presentation, when monaural syllables were paired with contra-aural noise, and in dichotic CV-CV presentations. However, LH activation was two times larger to CVR than any other condition, while RH activation to CVR was insubstantial. By contrast, a small rightward asymmetry in PAC activation was observed from monaural left ear (CVL) presentation. In all conditions except CVL, magnitude of response favored left PAC and NPAC. CV processing across different listening conditions disclosed complex interactions in activation. Our results confirm the superiority of left NPAC in speech processing and suggest comparable left lateralization in PAC. The findings suggest that monaural CV presentation may be more useful than previously anticipated. The paradigm developed here may hold some promise in investigations where abnormal hemispheric balance of speech processing is suspected.
Affiliation(s)
- Gerry A Stefanatos
- Moss Rehabilitation Research Institute, Albert Einstein Medical Center, 1200 West Tabor Road, Philadelphia, PA 19141, USA
36. Hwang JH, Li CW, Wu CW, Chen JH, Liu TC. Aging effects on the activation of the auditory cortex during binaural speech listening in white noise: an fMRI study. Audiol Neurootol 2007; 12:285-94. [PMID: 17536197] [DOI: 10.1159/000103209]
Abstract
The functional significance of age-related pathology of the auditory cortex is not well established. The purpose of this study was to elucidate the activation pattern of the auditory cortex in aged subjects in response to speech signals. Functional magnetic resonance imaging was performed on 12 elderly subjects with normal hearing acuity during selective listening with both ears to speech sounds in quiet and in white noise. Twelve young, normal-hearing subjects served as controls. Our results showed that activation of the auditory cortex during selective listening to speech decreased in elderly subjects compared to young subjects, especially in noise. Reduced activation occurred in the anterior and posterior regions of the bilateral superior temporal gyrus (STG), but mainly in the posterior part of the left STG. In addition, background noise had a greater masking effect on speech perception in the elderly subjects than in the young ones. These findings suggest that early functional changes associated with central presbycusis occur mainly in the posterior part of the left STG.
Affiliation(s)
- Juen-Haur Hwang
- Graduate Institute of Clinical Medicine, National Taiwan University, Taipei, Taiwan
37. Brechmann A, Gaschler-Markefski B, Sohr M, Yoneda K, Kaulisch T, Scheich H. Working memory specific activity in auditory cortex: potential correlates of sequential processing and maintenance. Cereb Cortex 2007; 17:2544-52. [PMID: 17204817] [DOI: 10.1093/cercor/bhl160]
Abstract
Working memory (WM) tasks involve several interrelated processes during which past information must be transiently maintained, recalled, and compared with test items according to previously instructed rules. It is not clear whether the rule-specific comparisons of perceptual with memorized items are only performed in previously identified frontal and parietal WM areas or whether these areas orchestrate such comparisons by feedback to sensory cortex. We tested the latter hypothesis by focusing on auditory cortex (AC) areas with low-noise functional magnetic resonance imaging in a 2-back WM task involving frequency-modulated (FM) tones. The control condition was a 0-back task on the same stimuli. Analysis of the group data identified an area on right planum temporale equally activated by both tasks and an area on the left planum temporale specifically involved in the 2-back task. A region of interest analysis in each individual revealed that activation on the left planum temporale in the 2-back task positively correlated with the task performance of the subjects. This strongly suggests a prominent role of the AC in 2-back WM tasks. In conjunction with previous findings on FM processing, the left lateralized effect presumably reflects the complex sequential processing demand of the 2-back matching to sample task.
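The n-back comparison at the core of this paradigm can be sketched in a few lines. This is an illustrative sketch, not the authors' stimulus-delivery code; the trial labels are hypothetical placeholders for FM direction.

```python
def two_back_targets(sequence):
    """Return the indices of items that match the item two steps back,
    i.e., the targets a subject should respond to in a 2-back task."""
    return [i for i in range(2, len(sequence))
            if sequence[i] == sequence[i - 2]]

# Hypothetical trial sequence of FM directions (placeholders, not real stimuli)
trials = ["up", "down", "up", "up", "down", "up"]
print(two_back_targets(trials))  # → [2, 5]
```

A 0-back control reduces to comparing each item against one fixed target, which is why it places no comparable load on sequential maintenance.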
Affiliation(s)
- André Brechmann
- Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, D-39118, Magdeburg, Germany.
38. Behne N, Wendt B, Scheich H, Brechmann A. Contralateral white noise selectively changes left human auditory cortex activity in a lexical decision task. J Neurophysiol 2006; 95:2630-7. [PMID: 16436478] [DOI: 10.1152/jn.01201.2005]
Abstract
In a previous study, we hypothesized that the approach of presenting information-bearing stimuli to one ear and noise to the other ear may be a general strategy to determine hemispheric specialization in auditory cortex (AC). In that study, we confirmed the dominant role of the right AC in directional categorization of frequency modulations by showing that fMRI activation of right but not left AC was sharply emphasized when masking noise was presented to the contralateral ear. Here, we tested this hypothesis using a lexical decision task supposed to be mainly processed in the left hemisphere. Subjects had to distinguish between pseudowords and natural words presented monaurally to the left or right ear either with or without white noise to the other ear. According to our hypothesis, we expected a strong effect of contralateral noise on fMRI activity in left AC. For the control conditions without noise, we found that activation in both auditory cortices was stronger on contralateral than on ipsilateral word stimulation, consistent with a more influential contralateral than ipsilateral auditory pathway. Additional presentation of contralateral noise did not significantly change activation in right AC, whereas it led to a significant increase of activation in left AC compared with the condition without noise. This is consistent with a left hemispheric specialization for lexical decisions. Thus, our results support the hypothesis that activation by ipsilateral information-bearing stimuli is upregulated mainly in the hemisphere specialized for a given task when noise is presented to the more influential contralateral ear.
Affiliation(s)
- Nicole Behne
- Leibniz Institute for Neurobiology, Magdeburg, Germany.
39. Gaese BH, King I, Felsheim C, Ostwald J, von der Behrens W. Discrimination of direction in fast frequency-modulated tones by rats. J Assoc Res Otolaryngol 2006; 7:48-58. [PMID: 16411160] [PMCID: PMC2504587] [DOI: 10.1007/s10162-005-0022-7]
Abstract
Fast frequency modulations (FM) are an essential part of species-specific auditory signals in animals as well as in human speech. Major parameters characterizing non-periodic frequency modulations are the direction of frequency change in the FM sweep (upward/downward) and the sweep speed, i.e., the speed of frequency change. While it is well established that both parameters are represented in the mammalian central auditory pathway, their importance at the perceptual level in animals is unclear. We determined the ability of rats to discriminate between upward and downward modulated FM tones as a function of sweep speed in a two-alternative forced-choice paradigm. Directional discrimination of logarithmic FM sweeps declined with increasing sweep speed between 20 and 1,000 octaves/s, following a psychometric function. The average threshold sweep speed for FM directional discrimination was 96 octaves/s. This upper limit of perceptual FM discrimination matches the upper limit of preferred sweep speeds in auditory neurons and the upper limit of neuronal direction selectivity in the rat auditory cortex and midbrain reported in the literature. Influences of additional stimulus parameters on FM discrimination were determined using an adaptive testing procedure for efficient threshold estimation based on a maximum likelihood approach. Directional discrimination improved as the FM sweep range was extended from two to five octaves. Discrimination performance declined with an increasing lower frequency boundary of the FM sweeps, deteriorating especially strongly when the boundary was raised from 2 to 4 kHz. This deterioration corresponds to a frequency-dependent decline in the direction selectivity of FM-encoding neurons in the rat auditory cortex, as described in the literature. Taken together, by investigating directional discrimination of FM sweeps in the rat, we found characteristics at the perceptual level that can be related to several aspects of FM encoding in the central auditory pathway.
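Threshold estimation of this kind rests on fitting a psychometric function to proportion-correct data and reading off the speed at which performance drops to a criterion level. The sketch below fits a logistic psychometric function by maximum likelihood over a parameter grid; the data values, parameter ranges, and guess rate are invented for illustration and do not reproduce the authors' adaptive procedure.

```python
import math

def psychometric(log_speed, threshold, slope, guess=0.5):
    """Proportion correct as a function of log10 sweep speed: performance
    falls from near 1 toward the guess rate as speed increases."""
    return guess + (1 - guess) / (1 + math.exp(slope * (log_speed - threshold)))

def neg_log_likelihood(data, threshold, slope):
    """Binomial negative log-likelihood of the data under the model."""
    nll = 0.0
    for speed, n_correct, n_trials in data:
        p = psychometric(math.log10(speed), threshold, slope)
        p = min(max(p, 1e-9), 1 - 1e-9)  # guard against log(0)
        nll -= n_correct * math.log(p) + (n_trials - n_correct) * math.log(1 - p)
    return nll

def fit_threshold(data):
    """Grid-search maximum-likelihood estimate of (threshold, slope)."""
    best = None
    for t in (i / 100 for i in range(100, 301)):   # log10 threshold: 1.0 .. 3.0
        for s in (j / 10 for j in range(5, 51)):   # slope: 0.5 .. 5.0
            nll = neg_log_likelihood(data, t, s)
            if best is None or nll < best[0]:
                best = (nll, t, s)
    return best[1], best[2]

# Invented example data: (sweep speed in octaves/s, n correct, n trials)
data = [(20, 29, 30), (50, 27, 30), (100, 22, 30), (300, 18, 30), (1000, 16, 30)]
threshold, slope = fit_threshold(data)
print(f"estimated threshold: {10 ** threshold:.0f} octaves/s")
```

An adaptive procedure like the one described would place each next trial near the current threshold estimate rather than evaluating a fixed grid afterward, but the likelihood machinery is the same.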
Affiliation(s)
- Bernhard H Gaese
- Institut für Biologie II, RWTH Aachen, Kopernikusstr. 16, D-52074, Aachen, Germany.
40. Ohl FW, Scheich H. Learning-induced plasticity in animal and human auditory cortex. Curr Opin Neurobiol 2005; 15:470-7. [PMID: 16009546] [DOI: 10.1016/j.conb.2005.07.002]
Abstract
Recent data on learning-related changes in animal and human auditory cortex indicate functions beyond mere stimulus representation and simple recognition memory for stimuli. Rather, auditory cortex seems to process and represent stimuli in a task-dependent fashion. This implies plasticity in neural processing, which can be observed at the level of single neuron firing and the level of spatiotemporal activity patterns in cortical areas. Auditory cortex is a structure in which behaviorally relevant aspects of stimulus processing are highly developed because of the fugitive nature of auditory stimuli.
Affiliation(s)
- Frank W Ohl
- Leibniz Institute for Neurobiology, Brenneckestrasse 6, D-39118 Magdeburg, Germany.