1. Kadowaki S, Morimoto T, Okamoto H. Auditory steady state responses elicited by silent gaps embedded within a broadband noise. BMC Neurosci 2022; 23:27. [PMID: 35524192] [PMCID: PMC9074354] [DOI: 10.1186/s12868-022-00712-0]
Abstract
Background: Auditory temporal processing plays an important role in speech comprehension. Usually, behavioral tests that require subjects to detect silent gaps embedded within a continuous sound are used to assess auditory temporal processing ability in humans. To evaluate auditory temporal processing objectively, the present study measured the auditory steady state responses (ASSRs) elicited by silent gaps of different lengths embedded within a broadband noise. We presented a broadband noise containing silent gaps of 3.125, 6.25, or 12.5 ms, repeated at a rate of 40 Hz.
Results: The silent gaps of 3.125, 6.25, and 12.5 ms presented at 40 Hz all elicited clear ASSRs. Longer silent gaps elicited larger ASSR amplitudes, and ASSR phases differed significantly between conditions.
Conclusion: The 40-Hz gap-evoked ASSR contributes to our understanding of the neural mechanisms underlying auditory temporal processing and may lead to the development of objective measures of auditory temporal acuity in humans.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12868-022-00712-0.
Affiliation(s)
- Seiichi Kadowaki
- Department of Physiology, International University of Health and Welfare Faculty of Medicine Graduate School of Medicine, 4-3 Kozunomori, Narita, 286-8686, Japan
- Takashi Morimoto
- Department of Audiological Engineering, RION Co., Ltd., Tokyo, 185-8533, Japan
- Hidehiko Okamoto
- Department of Physiology, International University of Health and Welfare Faculty of Medicine Graduate School of Medicine, 4-3 Kozunomori, Narita, 286-8686, Japan
2. Herrmann B, Maess B, Johnsrude IS. A neural signature of regularity in sound is reduced in older adults. Neurobiol Aging 2021; 109:1-10. [PMID: 34634748] [DOI: 10.1016/j.neurobiolaging.2021.09.011]
Abstract
Sensitivity to repetitions in sound amplitude and frequency is crucial for sound perception. As with other aspects of sound processing, sensitivity to such patterns may change with age, and may help explain some age-related changes in hearing, such as difficulty segregating speech from background sound. We recorded magnetoencephalography to characterize differences in the processing of sound patterns between younger and older adults. We presented tone sequences that either contained a pattern (made of a repeated set of tones) or did not. We show that auditory cortex in older, compared to younger, adults is hyperresponsive to sound onsets, but that sustained neural activity in auditory cortex, indexing the processing of a sound pattern, is reduced. Hence, the sensitivity of neural populations in auditory cortex fundamentally differs between younger and older individuals, overresponding to sound onsets while underresponding to patterns in sounds. This may help to explain some age-related changes in hearing, such as increased sensitivity to distracting sounds and difficulty tracking speech in the presence of other sound.
Affiliation(s)
- Björn Herrmann
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; Rotman Research Institute, Baycrest, North York, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada.
- Burkhard Maess
- Brain Networks Unit, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON, Canada
3. Andermann M, Günther M, Patterson RD, Rupp A. Early cortical processing of pitch height and the role of adaptation and musicality. Neuroimage 2020; 225:117501. [PMID: 33169697] [DOI: 10.1016/j.neuroimage.2020.117501]
Abstract
Pitch is an important perceptual feature; however, it is poorly understood how its cortical correlates are shaped by absolute vs relative fundamental frequency (f0), and by neural adaptation. In this study, we assessed transient and sustained auditory evoked fields (AEFs) at the onset, progression, and offset of short pitch height sequences, taking into account the listener's musicality. We show that neuromagnetic activity reflects absolute f0 at pitch onset and offset, and relative f0 at transitions within pitch sequences; further, sequences with fixed f0 lead to larger response suppression than sequences with variable f0 contour, and to enhanced offset activity. Musical listeners exhibit stronger f0-related AEFs and larger differences between their responses to fixed vs variable sequences, both within sequences and at pitch offset. The results resemble prominent psychoacoustic phenomena in the perception of pitch contours; moreover, they suggest a strong influence of adaptive mechanisms on cortical pitch processing which, in turn, might be modulated by a listener's musical expertise.
Affiliation(s)
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
- Melanie Günther
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- Roy D Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, CB2 3EG, United Kingdom
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
4. Andermann M, Patterson RD, Rupp A. Transient and sustained processing of musical consonance in auditory cortex and the effect of musicality. J Neurophysiol 2020; 123:1320-1331. [DOI: 10.1152/jn.00876.2018]
Abstract
In recent years, electroencephalography and magnetoencephalography (MEG) have both been used to investigate the response in human auditory cortex to musical sounds that are perceived as consonant or dissonant. These studies have typically focused on the transient components of the physiological activity at sound onset, specifically, the N1 wave of the auditory evoked potential and the auditory evoked field, respectively. Unfortunately, the morphology of the N1 wave is confounded by the prominent neural response to energy onset at stimulus onset. It is also the case that the perception of pitch is not limited to sound onset; the perception lasts as long as the note producing it. This suggests that consonance studies should also consider the sustained activity that appears after the transient components die away. The current MEG study shows how energy-balanced sounds can focus the response waves on the consonance-dissonance distinction rather than energy changes and how source modeling techniques can be used to measure the sustained field associated with extended consonant and dissonant sounds. The study shows that musical dyads evoke distinct transient and sustained neuromagnetic responses in auditory cortex. The form of the response depends on both whether the dyads are consonant or dissonant and whether the listeners are musical or nonmusical. The results also show that auditory cortex requires more time for the early transient processing of dissonant dyads than it does for consonant dyads and that the continuous representation of temporal regularity in auditory cortex might be modulated by processes beyond auditory cortex.
NEW & NOTEWORTHY We report a magnetoencephalography (MEG) study on transient and sustained cortical consonance processing. Stimuli were long-duration, energy-balanced, musical dyads that were either consonant or dissonant. Spatiotemporal source analysis revealed specific transient and sustained neuromagnetic activity in response to the dyads; in particular, the morphology of the responses was shaped by the dyad's consonance and the listener's musicality. Our results also suggest that the sustained representation of stimulus regularity might be modulated by processes beyond auditory cortex.
Affiliation(s)
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Roy D. Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
5. Alho K, Żarnowiec K, Gorina-Careta N, Escera C. Phonological Task Enhances the Frequency-Following Response to Deviant Task-Irrelevant Speech Sounds. Front Hum Neurosci 2019; 13:245. [PMID: 31379540] [PMCID: PMC6646721] [DOI: 10.3389/fnhum.2019.00245]
Abstract
In electroencephalography (EEG) measurements, the processing of periodic sounds in the ascending auditory pathway generates the frequency-following response (FFR), phase-locked to a sound's fundamental frequency (F0) and its harmonics. We measured FFRs to the steady-state (vowel) part of the syllables /ba/ and /wa/ occurring in binaural rapid streams of speech sounds as frequently repeating standard syllables or as infrequent (p = 0.2) deviant syllables among standard /wa/ syllables. Our aim was to study whether concurrent active phonological processing affects the early processing of irrelevant speech sounds, as reflected by the FFRs to these sounds. To this end, during syllable delivery, our healthy adult participants performed tasks involving written letters delivered in a rapid stream on a computer screen. The stream consisted of vowel letters written in red, infrequently occurring consonant letters written in the same color, and infrequently occurring vowel letters written in blue. In the phonological task, the participants were instructed to press a response key to the consonant letters, which differed phonologically but not in color from the frequently occurring red vowels, whereas in the non-phonological task, they were instructed to respond to the vowel letters written in blue, which differed only in color from the frequently occurring red vowels. We observed that the phonological task enhanced responses to deviant /ba/ syllables but not responses to deviant /wa/ syllables. This suggests that active phonological task performance may enhance the processing of such small changes in irrelevant speech sounds as the 30-ms difference in the initial formant-transition time between the otherwise identical syllables /ba/ and /wa/ used in the present study.
Affiliation(s)
- Kimmo Alho
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Institute of Biomedicine, Paris Descartes University, Paris, France
- Katarzyna Żarnowiec
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Natàlia Gorina-Careta
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Carles Escera
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
6. Krishnan A, Suresh CH, Gandour JT. Tone language experience-dependent advantage in pitch representation in brainstem and auditory cortex is maintained under reverberation. Hear Res 2019; 377:61-71. [PMID: 30921642] [DOI: 10.1016/j.heares.2019.03.009]
Abstract
Long-term language and music experience enhances the neural representation of the temporal attributes of pitch in the brainstem and auditory cortex under favorable listening conditions. Herein we examine whether brainstem and cortical pitch mechanisms, shaped by long-term language experience, maintain this advantage in the presence of reverberation-induced degradation in pitch representation. Brainstem frequency following responses (FFR) and cortical pitch responses (CPR) were recorded concurrently from native speakers of Chinese and English, using a Mandarin word exhibiting a high rising pitch (/yi2/). Stimuli were presented diotically in quiet (Dry) and in Slight, Mild, and Moderate reverberation conditions. Regardless of language group, the amplitude of both brainstem FFR (F0) and cortical CPR (NaPb) responses decreased with increasing reverberation. Response amplitude for the Chinese group, however, was larger than for the English group in all reverberant conditions. The Chinese group also exhibited a robust rightward asymmetry at temporal electrode sites (T8 > T7) across stimulus conditions. Regardless of language group, direct comparison of brainstem and cortical responses revealed a similar magnitude of change in response amplitude with increasing reverberation. These findings suggest that experience-dependent brainstem and cortical pitch mechanisms provide an enhanced and stable neural representation of pitch-relevant information that is maintained even in the presence of reverberation. The relatively greater degradative effects of reverberation on brainstem (FFR) compared to cortical (NaPb) responses suggest relatively stronger top-down influences on CPRs.
Affiliation(s)
- Ananthanarayan Krishnan
- Purdue University, Department of Speech Language Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, USA.
- Chandan H Suresh
- Purdue University, Department of Speech Language Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, USA
- Jackson T Gandour
- Purdue University, Department of Speech Language Hearing Sciences, Lyles-Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2122, USA
7. Brainstem-cortical functional connectivity for speech is differentially challenged by noise and reverberation. Hear Res 2018; 367:149-160. [PMID: 29871826] [DOI: 10.1016/j.heares.2018.05.018]
Abstract
Everyday speech perception is challenged by external acoustic interferences that hinder verbal communication. Here, we directly compared how different levels of the auditory system (brainstem vs. cortex) encode speech and how their neural representations are affected by two acoustic stressors: noise and reverberation. We recorded multichannel (64 ch) brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) simultaneously in normal hearing individuals to speech sounds presented in mild and moderate levels of noise and reverberation. We matched signal-to-noise and direct-to-reverberant ratios to equate the severity between the two classes of interference. Electrode recordings were parsed into source waveforms to assess the relative contribution of region-specific brain areas [i.e., brainstem (BS), primary auditory cortex (A1), inferior frontal gyrus (IFG)]. Results showed that reverberation was less detrimental to (and in some cases facilitated) the neural encoding of speech compared to additive noise. Inter-regional correlations revealed associations between BS and A1 responses, suggesting that subcortical speech representations influence higher auditory-cortical areas. Functional connectivity analyses further showed that directed signaling toward A1 in both feedforward cortico-collicular (BS→A1) and feedback cortico-cortical (IFG→A1) pathways were strong predictors of degraded speech perception and differentiated "good" vs. "poor" perceivers. Our findings demonstrate a functional interplay within the brain's speech network that depends on the form and severity of acoustic interference. We infer that, in addition to the quality of neural representations within individual brain regions, listeners' success at the "cocktail party" is modulated based on how information is transferred among subcortical and cortical hubs of the auditory-linguistic network.
8. Cortical Representations of Speech in a Multitalker Auditory Scene. J Neurosci 2017; 37:9189-9196. [PMID: 28821680] [DOI: 10.1523/jneurosci.0938-17.2017]
Abstract
The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex.
SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects.
9. Sekiya K, Takahashi M, Murakami S, Kakigi R, Okamoto H. Broadened population-level frequency tuning in the auditory cortex of tinnitus patients. J Neurophysiol 2017; 117:1379-1384. [PMID: 28053240] [PMCID: PMC5350267] [DOI: 10.1152/jn.00385.2016]
Abstract
Tinnitus is a phantom auditory perception without an external sound source and is one of the most common public health concerns that impair the quality of life of many individuals. However, its neural mechanisms remain unclear. We herein examined population-level frequency tuning in the auditory cortex of unilateral tinnitus patients with similar hearing levels in both ears using magnetoencephalography. We compared auditory-evoked neural activities elicited by stimulation of the tinnitus and nontinnitus ears. Objective magnetoencephalographic data suggested that population-level frequency tuning corresponding to the tinnitus ear was significantly broader than that corresponding to the nontinnitus ear in the human auditory cortex. The results obtained support the hypothesis that pathological alterations in inhibitory neural networks play an important role in the perception of subjective tinnitus.
NEW & NOTEWORTHY Although subjective tinnitus is one of the most common public health concerns that impair the quality of life of many individuals, no standard treatment or objective diagnostic method currently exists. We herein revealed that population-level frequency tuning was significantly broader in the tinnitus ear than in the nontinnitus ear. The results of the present study provide an insight into the development of an objective diagnostic method for subjective tinnitus.
Affiliation(s)
- Kenichi Sekiya
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Otolaryngology, Head and Neck Surgery, Nagoya City University Graduate School of Medical Sciences and Medical School, Nagoya, Japan
- Mariko Takahashi
- Department of Otolaryngology, Head and Neck Surgery, Nagoya City University Graduate School of Medical Sciences and Medical School, Nagoya, Japan
- Shingo Murakami
- Department of Otolaryngology, Head and Neck Surgery, Nagoya City University Graduate School of Medical Sciences and Medical School, Nagoya, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; The Graduate University for Advanced Studies (SOKENDAI), Hayama, Japan
- Hidehiko Okamoto
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; The Graduate University for Advanced Studies (SOKENDAI), Hayama, Japan
10. Presacco A, Simon JZ, Anderson S. Evidence of degraded representation of speech in noise, in the aging midbrain and cortex. J Neurophysiol 2016; 116:2346-2355. [PMID: 27535374] [DOI: 10.1152/jn.00372.2016]
Abstract
Humans have a remarkable ability to track and understand speech in unfavorable conditions, such as in background noise, but speech understanding in noise does deteriorate with age. Results from several studies have shown that in younger adults, low-frequency auditory cortical activity reliably synchronizes to the speech envelope, even when the background noise is considerably louder than the speech signal. However, cortical speech processing may be limited by age-related decreases in the precision of neural synchronization in the midbrain. To understand better the neural mechanisms contributing to impaired speech perception in older adults, we investigated how aging affects midbrain and cortical encoding of speech when presented in quiet and in the presence of a single-competing talker. Our results suggest that central auditory temporal processing deficits in older adults manifest in both the midbrain and in the cortex. Specifically, midbrain frequency following responses to a speech syllable are more degraded in noise in older adults than in younger adults. This suggests a failure of the midbrain auditory mechanisms needed to compensate for the presence of a competing talker. Similarly, in cortical responses, older adults show larger reductions than younger adults in their ability to encode the speech envelope when a competing talker is added. Interestingly, older adults showed an exaggerated cortical representation of speech in both quiet and noise conditions, suggesting a possible imbalance between inhibitory and excitatory processes, or diminished network connectivity that may impair their ability to encode speech efficiently.
Affiliation(s)
- Alessandro Presacco
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland; .,Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
| | - Jonathan Z Simon
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland.,Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland.,Department of Biology, University of Maryland, College Park, Maryland; and.,Institute for Systems Research, University of Maryland, College Park, Maryland
| | - Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland.,Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
11. Tabas A, Siebert A, Supek S, Pressnitzer D, Balaguer-Ballester E, Rupp A. Insights on the Neuromagnetic Representation of Temporal Asymmetry in Human Auditory Cortex. PLoS One 2016; 11:e0153947. [PMID: 27096960] [PMCID: PMC4838253] [DOI: 10.1371/journal.pone.0153947]
Abstract
Communication sounds are typically asymmetric in time, and human listeners are highly sensitive to this short-term temporal asymmetry. Nevertheless, causal neurophysiological correlates of auditory perceptual asymmetry remain largely elusive to current analyses and models. Auditory modelling and animal electrophysiological recordings suggest that perceptual asymmetry results from the presence of multiple time scales of temporal integration, central to the auditory periphery. To test this hypothesis, we recorded auditory evoked fields (AEF) elicited by asymmetric sounds in humans. We found a strong correlation between the perceived tonal salience of ramped and damped sinusoids and the AEFs, as quantified by the amplitude of the N100m dynamics. The N100m amplitude increased with stimulus half-life time, showing a maximum difference between the ramped and damped stimuli at a modulation half-life time of 4 ms, a difference that was greatly reduced at 0.5 ms and 32 ms. This behaviour of the N100m closely parallels the psychophysical data in two ways: i) longer half-life times are associated with a stronger tonal percept, and ii) perceptual differences between damped and ramped sounds are maximal at a 4-ms half-life time. Interestingly, differences in evoked fields were significantly stronger in the right hemisphere, indicating some degree of hemispheric specialisation. Furthermore, the N100m magnitude was successfully explained by a pitch perception model using multiple scales of temporal integration of auditory nerve activity patterns. This striking correlation between AEFs, perception, and model predictions suggests that the physiological mechanisms involved in the processing of pitch evoked by temporally asymmetric sounds are reflected in the N100m.
Affiliation(s)
- Alejandro Tabas
- Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom
- Anita Siebert
- Institute of Pharmacology and Toxicology, University of Zurich, Zürich, Switzerland
- Selma Supek
- Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia
- Daniel Pressnitzer
- Département d’Études Cognitives, École Normale Supérieure, Paris, France
- Emili Balaguer-Ballester
- Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom
- The Bernstein Center for Computational Neuroscience Heidelberg-Mannheim, Mannheim, Baden-Württemberg, Germany
- André Rupp
- Department of Neurology, Heidelberg University, Heidelberg, Baden-Württemberg, Germany
12. Engell A, Junghöfer M, Stein A, Lau P, Wunderlich R, Wollbrink A, Pantev C. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex. PLoS One 2016; 11:e0149933. [PMID: 26901149] [PMCID: PMC4763022] [DOI: 10.1371/journal.pone.0149933]
Abstract
Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the present study investigated the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passed through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or on the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds, while on the other half of the trials, subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory than during visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention in the modulation of this mechanism at more evaluative processing stages.
Affiliation(s)
- Alva Engell, Institute for Biomagnetism and Biosignalanalysis, University Hospital Muenster, Muenster, Germany
- Markus Junghöfer, Institute for Biomagnetism and Biosignalanalysis, University Hospital Muenster, Muenster, Germany
- Alwina Stein, Institute for Medical Psychology and Systems Neuroscience, University of Muenster, Muenster, Germany
- Pia Lau, Institute for Biomagnetism and Biosignalanalysis, University Hospital Muenster, Muenster, Germany
- Robert Wunderlich, Institute for Physiological Psychology, University of Bielefeld, Bielefeld, Germany
- Andreas Wollbrink, Institute for Biomagnetism and Biosignalanalysis, University Hospital Muenster, Muenster, Germany
- Christo Pantev, Institute for Biomagnetism and Biosignalanalysis, University Hospital Muenster, Muenster, Germany
13
Han R, Takahashi T, Miyazaki A, Kadoya T, Kato S, Yokosawa K. Activity in the left auditory cortex is associated with individual impulsivity in time discounting. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2015:6646-9. [PMID: 26737817 DOI: 10.1109/embc.2015.7319917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Impulsivity dictates individual decision-making behavior. Therefore, it can reflect consumption behavior and risk of addiction, and thus underlies social activities as well. Neuroscience has been applied to explain social activities; however, the brain function controlling impulsivity has remained unclear. It is known that impulsivity is related to individual time perception: a person who perceives a certain physical time as being longer is more impulsive. Here we show that activity of the left auditory cortex is related to individual impulsivity. Individual impulsivity was evaluated by a self-administered questionnaire in twelve healthy right-handed adults, and activity of the auditory cortices of both hemispheres during listening to continuous tones was recorded by magnetoencephalography. Sustained activity of the left auditory cortex was significantly correlated with impulsivity; that is, larger sustained activity indicated stronger impulsivity. The results suggest that the left auditory cortex represents time perception, probably because the area is involved in speech perception, and that it thereby represents impulsivity indirectly.
14
Previous exposure to intact speech increases intelligibility of its digitally degraded counterpart as a function of stimulus complexity. Neuroimage 2016; 125:131-143. [DOI: 10.1016/j.neuroimage.2015.10.029] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2015] [Revised: 09/01/2015] [Accepted: 10/10/2015] [Indexed: 11/22/2022] Open
15
Detecting tones in complex auditory scenes. Neuroimage 2015; 122:203-13. [DOI: 10.1016/j.neuroimage.2015.07.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2014] [Revised: 03/12/2015] [Accepted: 07/01/2015] [Indexed: 11/18/2022] Open
16
Bidelman GM, Alain C. Hierarchical neurocomputations underlying concurrent sound segregation: Connecting periphery to percept. Neuropsychologia 2015; 68:38-50. [DOI: 10.1016/j.neuropsychologia.2014.12.020] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2014] [Revised: 12/18/2014] [Accepted: 12/22/2014] [Indexed: 10/24/2022]
17
Riecke L, Scharke W, Valente G, Gutschalk A. Sustained selective attention to competing amplitude-modulations in human auditory cortex. PLoS One 2014; 9:e108045. [PMID: 25259525 PMCID: PMC4178064 DOI: 10.1371/journal.pone.0108045] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2014] [Accepted: 08/23/2014] [Indexed: 11/18/2022] Open
Abstract
Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.
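The frequency-tagging readout described in this abstract — quantifying the response to each competing stream as spectral amplitude at its AM frequency — can be sketched in a few lines of numpy. All parameters here (sampling rate, carrier, the 4 Hz and 7 Hz tagging frequencies, the toy rectified "response") are illustrative assumptions, not the study's actual stimuli or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)   # 10 s of signal -> 0.1 Hz frequency resolution

# Hypothetical stimulus: one carrier tagged with two competing AM frequencies.
f_am1, f_am2 = 4.0, 7.0
envelope = 1 + 0.5 * np.sin(2 * np.pi * f_am1 * t) + 0.5 * np.sin(2 * np.pi * f_am2 * t)
stimulus = envelope * np.sin(2 * np.pi * 440 * t)

# Toy "neural response": half-wave rectification recovers the envelope,
# so energy reappears at the two tagging frequencies; add sensor noise.
response = np.maximum(stimulus, 0) + 0.1 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectrum = np.abs(np.fft.rfft(response)) / t.size

def tagged_amplitude(f):
    """Spectral amplitude at the FFT bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# The tagged frequencies stand out against a neighbouring untagged bin (5.5 Hz).
print(tagged_amplitude(4.0), tagged_amplitude(7.0), tagged_amplitude(5.5))
```

With long recordings, the tagged bins rise far above the noise floor, which is what makes this readout usable for comparing attended versus ignored AM frequencies.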
Affiliation(s)
- Lars Riecke, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Wolfgang Scharke, Department of Child and Adolescent Psychiatry, Psychotherapy and Psychosomatics, University Hospital, RWTH Aachen University, Aachen, Germany
- Giancarlo Valente, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Alexander Gutschalk, Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
18
Bidelman GM, Weiss MW, Moreno S, Alain C. Coordinated plasticity in brainstem and auditory cortex contributes to enhanced categorical speech perception in musicians. Eur J Neurosci 2014; 40:2662-73. [PMID: 24890664 DOI: 10.1111/ejn.12627] [Citation(s) in RCA: 107] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2013] [Revised: 04/16/2014] [Accepted: 04/18/2014] [Indexed: 11/28/2022]
Abstract
Musicianship is associated with neuroplastic changes in brainstem and cortical structures, as well as improved acuity for behaviorally relevant sounds including speech. However, further advance in the field depends on characterizing how neuroplastic changes in brainstem and cortical speech processing relate to one another and to speech-listening behaviors. Here, we show that subcortical and cortical neural plasticity interact to yield the linguistic advantages observed with musicianship. We compared brainstem and cortical neuroelectric responses elicited by a series of vowels that differed along a categorical speech continuum in amateur musicians and non-musicians. Musicians obtained steeper identification functions and classified speech sounds more rapidly than non-musicians. Behavioral advantages coincided with more robust and temporally coherent brainstem phase-locking to salient speech cues (voice pitch and formant information) coupled with increased amplitude in cortical-evoked responses, implying an overall enhancement in the nervous system's responsiveness to speech. Musicians' subcortical and cortical neural enhancements (but not behavioral measures) were correlated with their years of formal music training. Associations between multi-level neural responses were also stronger in musically trained listeners, and were better predictors of speech perception than in non-musicians. Results suggest that musicianship modulates speech representations at multiple tiers of the auditory pathway, and strengthens the correspondence of processing between subcortical and cortical areas to allow neural activity to carry more behaviorally relevant information. We infer that musicians have a refined hierarchy of internalized representations for auditory objects at both pre-attentive and attentive levels that supplies more faithful phonemic templates to decision mechanisms governing linguistic operations.
Affiliation(s)
- Gavin M Bidelman, Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, 807 Jefferson Ave., Memphis, TN 38105, USA
19
Amemiya K, Karino S, Ishizu T, Yumoto M, Yamasoba T. Distinct neural mechanisms of tonal processing between musicians and non-musicians. Clin Neurophysiol 2014; 125:738-747. [DOI: 10.1016/j.clinph.2013.09.027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2011] [Revised: 09/01/2013] [Accepted: 09/05/2013] [Indexed: 11/25/2022]
20
Auditory-cortex short-term plasticity induced by selective attention. Neural Plast 2014; 2014:216731. [PMID: 24551458 PMCID: PMC3914570 DOI: 10.1155/2014/216731] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2013] [Accepted: 12/15/2013] [Indexed: 11/23/2022] Open
Abstract
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even earlier in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance.
21
Okamoto H, Kakigi R. Neural adaptation to silence in the human auditory cortex: a magnetoencephalographic study. Brain Behav 2014; 4:858-66. [PMID: 25365810 PMCID: PMC4212114 DOI: 10.1002/brb3.290] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/23/2014] [Revised: 07/25/2014] [Accepted: 09/05/2014] [Indexed: 12/02/2022] Open
Abstract
INTRODUCTION Previous studies demonstrated that the decrement with sound repetition in the N1m response, a major deflection of the auditory evoked response, was mainly caused by bottom-up-driven neural refractory periods following brain activation by sound stimulation. However, it currently remains unknown whether this decrement occurs with a repetition of silences, which do not induce refractoriness. METHODS In the present study, we investigated decrements in N1m responses elicited by five repetitive silences embedded in a continuous pure tone and by five repetitive pure tones in silence using magnetoencephalography. RESULTS Repetitive stimulation affected the N1m decrement in a sound-type-dependent manner: while the N1m amplitude decreased from the 1st to the 2nd pure tone and remained constant from the 2nd to the 5th pure tone in silence, a gradual decrement in N1m amplitude was observed from the 1st to the 5th silence embedded in a continuous pure tone. CONCLUSIONS Our results suggest that neural refractoriness may mainly cause the decrements in N1m responses elicited by trains of pure tones in silence, while habituation, a form of implicit learning, may play an important role in the decrements of N1m source strength elicited by successive silences in a continuous pure tone.
Affiliation(s)
- Hidehiko Okamoto, Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, The Graduate University for Advanced Studies, Hayama, Japan
- Ryusuke Kakigi, Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan; Department of Physiological Sciences, The Graduate University for Advanced Studies, Hayama, Japan
22
Okamoto H, Teismann H, Keceli S, Pantev C, Kakigi R. Differential effects of temporal regularity on auditory-evoked response amplitude: a decrease in silence and increase in noise. Behav Brain Funct 2013; 9:44. [PMID: 24299193 PMCID: PMC4220810 DOI: 10.1186/1744-9081-9-44] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2013] [Accepted: 11/23/2013] [Indexed: 11/10/2022] Open
Abstract
Background In daily life, we are continuously exposed to temporally regular and irregular sounds. Previous studies have demonstrated that the temporal regularity of sound sequences influences neural activity. However, it remains unresolved how temporal regularity affects neural activity in noisy environments, when attention of the listener is not focused on the sound input. Methods In the present study, using magnetoencephalography we investigated the effects of temporal regularity in sound signal sequencing (regular vs. irregular) in silent versus noisy environments during distracted listening. Results The results demonstrated that temporal regularity differentially affected the auditory-evoked N1m response depending on the background acoustic environment: the N1m amplitudes elicited by the temporally regular sounds were smaller in silence and larger in noise than those elicited by the temporally irregular sounds. Conclusions Our results indicate that the human auditory system is able to involuntarily utilize temporal regularity in sound signals to modulate the neural activity in the auditory cortex in accordance with the surrounding acoustic environment.
Affiliation(s)
- Hidehiko Okamoto, Department of Integrative Physiology, National Institute for Physiological Sciences, 38 Nishigo-Naka, Myodaiji, Okazaki 444-8585, Japan
23
Bidelman GM, Moreno S, Alain C. Tracing the emergence of categorical speech perception in the human auditory system. Neuroimage 2013; 79:201-12. [DOI: 10.1016/j.neuroimage.2013.04.093] [Citation(s) in RCA: 134] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2012] [Revised: 04/16/2013] [Accepted: 04/21/2013] [Indexed: 11/26/2022] Open
24
Abstract
The challenge of understanding how the brain processes natural signals is compounded by the fact that such signals are often tied closely to specific natural behaviors and natural environments. This added complexity is especially true for auditory communication signals, which can carry information at multiple hierarchical levels and often occur in the context of other competing communication signals. Selective attention provides a mechanism to focus processing resources on specific components of auditory signals and simultaneously suppress responses to unwanted signals or noise. Although selective auditory attention has been well studied behaviorally, very little is known about how it shapes the processing of natural auditory signals, or about how the mechanisms of auditory attention are implemented in single neurons or neural circuits. Here we review the role of selective attention in modulating auditory responses to complex natural stimuli in humans. We then suggest how this current understanding can be applied to the study of selective auditory attention in the context of natural signal processing, at the level of single neurons and populations in animal models amenable to invasive neuroscience techniques. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
25
Hierarchical Neural Encoding of Temporal Regularity in the Human Auditory Cortex. Brain Topogr 2013; 28:459-70. [DOI: 10.1007/s10548-013-0300-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2012] [Accepted: 06/11/2013] [Indexed: 10/26/2022]
26
Adaptive temporal encoding leads to a background-insensitive cortical representation of speech. J Neurosci 2013; 33:5728-35. [PMID: 23536086 DOI: 10.1523/jneurosci.5297-12.2013] [Citation(s) in RCA: 206] [Impact Index Per Article: 18.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Speech recognition is remarkably robust to the listening background, even when the energy of background sounds strongly overlaps with that of speech. How the brain transforms the corrupted acoustic signal into a reliable neural representation suitable for speech recognition, however, remains elusive. Here, we hypothesize that this transformation is performed at the level of auditory cortex through adaptive neural encoding, and we test the hypothesis by recording, using MEG, the neural responses of human subjects listening to a narrated story. Spectrally matched stationary noise, which has maximal acoustic overlap with the speech, is mixed in at various intensity levels. Despite the severe acoustic interference caused by this noise, it is here demonstrated that low-frequency auditory cortical activity is reliably synchronized to the slow temporal modulations of speech, even when the noise is twice as strong as the speech. Such a reliable neural representation is maintained by intensity contrast gain control and by adaptive processing of temporal modulations at different time scales, corresponding to the neural δ and θ bands. Critically, the precision of this neural synchronization predicts how well a listener can recognize speech in noise, indicating that the precision of the auditory cortical representation limits the performance of speech recognition in noise. Together, these results suggest that, in a complex listening environment, auditory cortex can selectively encode a speech stream in a background insensitive manner, and this stable neural representation of speech provides a plausible basis for background-invariant recognition of speech.
27
Kuchenbuch A, Paraskevopoulos E, Herholz SC, Pantev C. Effects of musical training and event probabilities on encoding of complex tone patterns. BMC Neurosci 2013; 14:51. [PMID: 23617597 PMCID: PMC3639196 DOI: 10.1186/1471-2202-14-51] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2012] [Accepted: 04/20/2013] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The human auditory cortex automatically encodes acoustic input from the environment and differentiates regular sound patterns from deviant ones in order to identify important, irregular events. The Mismatch Negativity (MMN) response is a neuronal marker for the detection of sounds that are unexpected, based on the encoded regularities. It is also elicited by violations of more complex regularities and musical expertise has been shown to have an effect on the processing of complex regularities. Using magnetoencephalography (MEG), we investigated the MMN response to salient or less salient deviants by varying the standard probability (70%, 50% and 35%) of a pattern oddball paradigm. To study the effects of musical expertise in the encoding of the patterns, we compared the responses of a group of non-musicians to those of musicians. RESULTS We observed significant MMN in all conditions, including the least salient condition (35% standards), in response to violations of the predominant tone pattern for both groups. The amplitude of MMN from the right hemisphere was influenced by the standard probability. This effect was modulated by long-term musical training: standard probability changes influenced MMN amplitude in the group of non-musicians only. CONCLUSION This study indicates that pattern violations are detected automatically, even if they are of very low salience, both in non-musicians and musicians, with salience having a stronger impact on processing in the right hemisphere of non-musicians. Long-term musical training influences this encoding, in that non-musicians benefit to a greater extent from a good signal-to-noise ratio (i.e. high probability of the standard pattern), while musicians are less dependent on the salience of an acoustic environment.
Affiliation(s)
- Anja Kuchenbuch, Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
28
Nardo D, Santangelo V, Macaluso E. Spatial orienting in complex audiovisual environments. Hum Brain Mapp 2013; 35:1597-614. [PMID: 23616340 DOI: 10.1002/hbm.22276] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2012] [Revised: 01/22/2013] [Accepted: 02/07/2013] [Indexed: 11/11/2022] Open
Abstract
Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality.
Affiliation(s)
- Davide Nardo, Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy
29
Early visual and auditory processing rely on modality-specific attentional resources. Neuroimage 2013; 70:240-9. [DOI: 10.1016/j.neuroimage.2012.12.046] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2012] [Revised: 12/18/2012] [Accepted: 12/20/2012] [Indexed: 11/22/2022] Open
30
Emergence of neural encoding of auditory objects while listening to competing speakers. Proc Natl Acad Sci U S A 2012; 109:11854-9. [PMID: 22753470 DOI: 10.1073/pnas.1205381109] [Citation(s) in RCA: 457] [Impact Index Per Article: 38.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation.
31
Diesch E, Andermann M, Rupp A. Is the effect of tinnitus on auditory steady-state response amplitude mediated by attention? Front Syst Neurosci 2012; 6:38. [PMID: 22661932 PMCID: PMC3357113 DOI: 10.3389/fnsys.2012.00038] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2011] [Accepted: 05/03/2012] [Indexed: 12/31/2022] Open
Abstract
Objectives: Auditory steady-state response (ASSR) amplitude enhancement effects have been reported in tinnitus patients. As ASSR amplitude is also enhanced by attention, the effect of tinnitus on ASSR amplitude could be interpreted as an effect of attention mediated by tinnitus. Because N1 attention effects are significantly larger than those on the ASSR, if the effect of tinnitus on ASSR amplitude were due to attention, there should be similar amplitude enhancement effects in tinnitus for the N1 component of the auditory-evoked response. Methods: MEG recordings that had previously been examined for the ASSR (Diesch et al., 2010a) were analyzed with respect to the N1m component. Like the ASSR previously, the N1m was analyzed in the source domain (source space projection). Stimuli were amplitude-modulated (AM) tones with one of three carrier frequencies: the tinnitus frequency (or, in controls, a surrogate frequency 1½ octaves above the audiometric edge frequency), the audiometric edge frequency, and a frequency below the audiometric edge. Single AM tones were presented in a single condition, and superpositions of three AM tones differing in carrier and modulation frequency in a composite condition. Results: In the earlier ASSR study (Diesch et al., 2010a), the ASSR amplitude in tinnitus patients, but not in controls, was significantly larger in the (surrogate) tinnitus condition than in the edge condition. Patients showed less evidence than controls of reciprocal inhibition of component ASSR responses in the composite condition. In the present study, N1m amplitudes elicited by stimuli located at the audiometric edge and at the (surrogate) tinnitus frequency were smaller than N1m amplitudes elicited by sub-edge tones in both patients and controls. The relationship of the N1m response in the composite condition to that in the single condition indicated that reciprocal inhibition among component N1m responses was reduced in patients compared with controls.
Conclusions: In the present study, no evidence was found for an N1-amplitude enhancement effect in tinnitus. Compared to controls, reciprocal inhibition is reduced in tinnitus patients. Thus, as there is no effect on N1m that could potentially be attributed to attention, it seems unlikely that the enhancement effect of tinnitus on ASSR amplitude could be accounted for in terms of attention induced by tinnitus.
Affiliation(s)
- Eugen Diesch, Department of Clinical and Cognitive Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
32
Gutschalk A, Brandt T, Bartsch A, Jansen C. Comparison of auditory deficits associated with neglect and auditory cortex lesions. Neuropsychologia 2012; 50:926-38. [DOI: 10.1016/j.neuropsychologia.2012.01.032] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2011] [Revised: 01/23/2012] [Accepted: 01/27/2012] [Indexed: 10/14/2022]
33
Miettinen I, Alku P, Yrttiaho S, May PJ, Tiitinen H. Cortical processing of degraded speech sounds: Effects of distortion type and continuity. Neuroimage 2012; 60:1036-45. [DOI: 10.1016/j.neuroimage.2012.01.085] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2011] [Revised: 01/09/2012] [Accepted: 01/11/2012] [Indexed: 11/28/2022] Open
34
Steady-state responses in MEG demonstrate information integration within but not across the auditory and visual senses. Neuroimage 2012; 60:1478-89. [PMID: 22305992 DOI: 10.1016/j.neuroimage.2012.01.114] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2011] [Revised: 12/22/2011] [Accepted: 01/22/2012] [Indexed: 11/23/2022] Open
Abstract
To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses (SSRs) that are elicited time-locked to periodically modulated stimuli. Critically, in the frequency domain, interactions between sensory signals are indexed by crossmodulation terms (i.e. the sums and differences of the fundamental frequencies). The 3 × 2 factorial design manipulated (1) modality: auditory, visual or audiovisual; and (2) steady-state modulation: the auditory and visual signals were modulated either in one sensory feature (e.g. visual gratings modulated in luminance at 6 Hz) or in two features (e.g. tones modulated in frequency at 40 Hz and amplitude at 0.2 Hz). This design enabled us to investigate the crossmodulation frequencies that are elicited when two stimulus features are modulated concurrently (i) in one sensory modality or (ii) in the auditory and visual modalities. In support of within-modality integration, we reliably identified crossmodulation frequencies when two stimulus features in one sensory modality were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined across the auditory and visual modalities. The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low-level spatiotemporal coincidence detection that is prominent for stimulus transients but less relevant for sustained SSRs. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within, but not across, sensory modalities at the primary cortical level.
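The crossmodulation logic rests on a simple spectral fact: a multiplicative (nonlinear) interaction between two tagged signals creates energy at the sum and difference of their fundamental frequencies, which a purely linear superposition lacks. A minimal numpy sketch (the 40 Hz and 6 Hz tagging frequencies are borrowed from the abstract's examples; the interaction term and its weight are illustrative):

```python
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)   # 10 s -> both tags and their sums fall on exact FFT bins
f1, f2 = 40.0, 6.0             # tagging frequencies

s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)

# sin(a)sin(b) = 1/2 [cos(a-b) - cos(a+b)]: the product term puts energy
# at f1-f2 and f1+f2 (34 Hz and 46 Hz), where neither input has any.
linear_mix = s1 + s2                     # no interaction
interaction = s1 + s2 + 0.5 * s1 * s2    # with a multiplicative interaction

freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp(x, f):
    """Spectral amplitude of x at the FFT bin closest to frequency f."""
    spec = np.abs(np.fft.rfft(x)) / t.size
    return spec[np.argmin(np.abs(freqs - f))]

for f in (f1 - f2, f1 + f2):
    print(f, amp(linear_mix, f), amp(interaction, f))
```

This is exactly why crossmodulation bins can index integration: they are silent unless some stage combines the two signals nonlinearly.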
35
Abstract
Background Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern across this range of repetition rates, responses to the 5 Hz repetition rate were significantly larger than those to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained over a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
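A "repeating frozen noise" stimulus of the kind used here can be sketched by tiling a single fixed noise segment; the sampling rate, duration, and seed below are illustrative assumptions, and the sketch assumes the sampling rate is an integer multiple of the repetition rate.

```python
import numpy as np

fs = 44100                      # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)

def repeating_frozen_noise(rep_hz, dur_s):
    """Tile one 'frozen' noise segment so it repeats at rep_hz.

    Assumes fs / rep_hz is an integer, so the period is exact.
    """
    seg = rng.standard_normal(int(fs / rep_hz))
    n_reps = int(np.ceil(dur_s * rep_hz))
    return np.tile(seg, n_reps)[: int(dur_s * fs)]

noise_5hz = repeating_frozen_noise(5, 2.0)

# The signal is exactly periodic: every period equals the first.
period = int(fs / 5)
assert np.allclose(noise_5hz[:period], noise_5hz[period:2 * period])
```

Unlike ordinary white noise, such a stimulus has a flat short-term spectrum but a strictly periodic fine structure, which is what the sustained-field comparison in the study exploits.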
36
Teismann H, Okamoto H, Pantev C. Short and intense tailor-made notched music training against tinnitus: the tinnitus frequency matters. PLoS One 2011; 6:e24685. [PMID: 21935438 PMCID: PMC3174191 DOI: 10.1371/journal.pone.0024685] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2011] [Accepted: 08/16/2011] [Indexed: 11/18/2022] Open
Abstract
Tinnitus is one of the most common disorders in industrialized countries. Here, we developed and evaluated a short-term (5 consecutive days) and intensive (6 hours/day) tailor-made notched music training (TMNMT) for patients suffering from chronic, tonal tinnitus. We evaluated (i) TMNMT efficacy in terms of behavioral and magnetoencephalographic outcome measures in two matched patient groups with either low (≤8 kHz, N = 10) or high (>8 kHz, N = 10) tinnitus frequencies, and (ii) the persistence of the TMNMT effects over the course of a four-week post-training phase. The results indicated that the short-term intensive TMNMT took effect in patients with tinnitus frequencies ≤8 kHz: subjective tinnitus loudness, tinnitus-related distress, and tinnitus-related auditory cortex evoked activity were significantly reduced after TMNMT completion. However, in the patients with tinnitus frequencies >8 kHz, significant changes were not observed. Taken together, the results also indicated that the induced changes in auditory cortex evoked neuronal activity and tinnitus loudness were not persistent, encouraging the application of TMNMT as a longer-term training. These findings are essential for guiding the intended transfer of this neuroscientific treatment approach into routine clinical practice.
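The "notching" at the heart of TMNMT, removing the energy in a frequency band centered on the individual tinnitus frequency from music, can be sketched with a crude FFT-domain notch. The tinnitus frequency, the one-octave notch width, and the white-noise stand-in for music are all illustrative assumptions, not the study's actual signal processing.

```python
import numpy as np

fs = 44100
tinnitus_hz = 4000.0   # example individual tinnitus frequency (assumption)

rng = np.random.default_rng(1)
music = rng.standard_normal(fs)  # white-noise stand-in for one second of music

# Remove roughly one octave centered on the tinnitus frequency
# (tinnitus_hz / sqrt(2) .. tinnitus_hz * sqrt(2)) via an FFT notch.
spectrum = np.fft.rfft(music)
freqs = np.fft.rfftfreq(len(music), 1 / fs)
notch = (freqs >= tinnitus_hz / np.sqrt(2)) & (freqs <= tinnitus_hz * np.sqrt(2))
spectrum[notch] = 0.0
notched = np.fft.irfft(spectrum, n=len(music))
```

In practice a proper band-stop filter applied to real music recordings would be used; the point of the sketch is only the band geometry around the matched tinnitus frequency.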
Affiliation(s)
- Henning Teismann
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany.
37
Emotion-associated tones attract enhanced attention at early auditory processing: magnetoencephalographic correlates. J Neurosci 2011; 31:7801-10. [PMID: 21613493 DOI: 10.1523/jneurosci.6236-10.2011] [Citation(s) in RCA: 59] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Emotionally significant objects and events in our environment attract attention based on their motivational relevance for survival. This kind of emotional attention is thought to lead to affect-specific amplified processing that closely resembles the effects of directed attention. Although there has been extensive research on prioritized processing of visual emotional stimuli, the spatio-temporal dynamics of motivated attention mechanisms in auditory processing are less clearly understood. We investigated modulatory effects of emotional attention at early auditory processing stages using time-sensitive whole-head magnetoencephalography. A novel associative learning procedure involving multiple conditioned stimuli (CSs) per affective category was introduced to specifically test whether affect-specific modulation can proceed in a rapid and highly differentiating fashion in humans. Auditory evoked fields (AEFs) were recorded in response to 42 different ultrashort, click-like sounds before and after affective conditioning with pleasant, unpleasant, or neutral auditory scenes. As hypothesized, emotional attention affected neural click-tone processing in the time intervals of the P20-50m (20-50 ms) and the N1m (100-130 ms), two early AEF components sensitive to directed selective attention (Woldorff et al., 1993). Distributed source localization revealed amplified processing of tones associated with aversive or pleasant, compared with neutral, auditory scenes in auditory sensory, frontal, and parietal cortex regions. Behavioral tests did not indicate any awareness of the contingent CS-UCS (unconditioned stimulus) relationships in the participants, suggesting affective associative learning in the absence of contingency awareness. Our findings imply early and highly differentiating affect-specific modulation of auditory stimulus processing, supported by neural mechanisms and circuitry comparable to those reported for directed auditory attention.
38
Plasticity of human auditory-evoked fields induced by shock conditioning and contingency reversal. Proc Natl Acad Sci U S A 2011; 108:12545-50. [PMID: 21746922 DOI: 10.1073/pnas.1016124108] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
We used magnetoencephalography (MEG) to assess plasticity of human auditory cortex induced by classical conditioning and contingency reversal. Participants listened to random sequences of high or low tones. A first baseline phase presented these without further associations. In phase 2, one of the frequencies (CS(+)) was paired with shock on half its occurrences, whereas the other frequency (CS(-)) was not. In phase 3, the contingency assigning CS(+) and CS(-) was reversed. Conditioned pupil dilation was observed in phase 2 but extinguished in phase 3. MEG revealed that, during phase-2 initial conditioning, the P1m, N1m, and P2m auditory components, measured from sensors over auditory temporal cortex, came to distinguish between CS(+) and CS(-). After contingency reversal in phase 3, the later P2m component rapidly reversed its selectivity (unlike the pupil response) but the earlier P1m did not, whereas N1m showed some new learning but not reversal. These results confirm plasticity of human auditory responses due to classical conditioning, but go further in revealing distinct constraints on different levels of the auditory hierarchy. The later P2m component can reverse affiliation immediately in accord with an updated expectancy after contingency reversal, whereas the earlier auditory components cannot. These findings indicate distinct cognitive and emotional influences on auditory processing.
39
Weisz N, Lecaignard F, Müller N, Bertrand O. The modulatory influence of a predictive cue on the auditory steady-state response. Hum Brain Mapp 2011; 33:1417-30. [PMID: 21538704 DOI: 10.1002/hbm.21294] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2010] [Accepted: 02/02/2011] [Indexed: 11/12/2022] Open
Abstract
Whether attention already exerts its impact at primary sensory levels is still a matter of debate. Particularly in the auditory domain, the amount of empirical evidence is scarce. Recently, noninvasive and invasive studies have shown attentional modulations of the auditory steady-state response (aSSR). This evoked oscillatory brain response is of importance to the issue because its main generators have been shown to be located in primary auditory cortex. So far, whether the aSSR is sensitive to the predictive value of a cue preceding a target has not been investigated. Participants in the present study had to indicate to which ear the faster amplitude-modulated (AM) sound of a compound sound (42 and 19 Hz AM frequencies) was presented. A preceding auditory cue was either informative (75%) or uninformative (50%) with regard to the location of the target. Behaviorally, we confirmed that typical attentional modulations of performance were present when an informative cue preceded the target. With regard to the aSSR, we found differences between the informative and uninformative conditions only when the cue/target combination was presented to the right ear. Source analysis indicated that this difference was generated by a reduced 42 Hz aSSR in right primary auditory cortex. Our data and previous data from others show a default tendency for "40 Hz" AM sounds to be processed by the right auditory cortex. We interpret our results as active suppression of this automatic response pattern when attention needs to be allocated to right-ear input.
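A dichotic compound stimulus of this kind, one AM rate per ear so that each ear's input "tags" a distinct aSSR frequency, can be sketched as follows. The carrier frequency, duration, and modulation depth are illustrative assumptions; only the two AM rates (42 and 19 Hz) come from the study.

```python
import numpy as np

fs = 44100                      # audio sampling rate in Hz
dur = 1.0                       # stimulus duration in seconds (illustrative)
t = np.arange(0, dur, 1 / fs)

def am_tone(carrier_hz, am_hz, depth=1.0):
    """Sinusoidally amplitude-modulated pure tone, envelope scaled to [0, 1]."""
    envelope = (1 + depth * np.sin(2 * np.pi * am_hz * t)) / 2
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# Dichotic compound: the faster AM (42 Hz) to one ear, the slower (19 Hz)
# to the other; the listener reports which ear carries the faster AM.
left = am_tone(500.0, 42.0)
right = am_tone(500.0, 19.0)
stereo = np.column_stack([left, right])
```

Because each ear is tagged with its own modulation frequency, the 42 Hz and 19 Hz components of the MEG spectrum can be attributed to the corresponding ear's input, which is what makes the cue-related aSSR comparison possible.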
Affiliation(s)
- Nathan Weisz
- Department of Psychology, University of Konstanz, Konstanz, Germany.