1. Stephen EP, Li Y, Metzger S, Oganian Y, Chang EF. Latent neural dynamics encode temporal context in speech. Hear Res 2023; 437:108838. PMID: 37441880; PMCID: PMC11182421; DOI: 10.1016/j.heares.2023.108838
Abstract
Direct neural recordings from human auditory cortex have demonstrated encoding of acoustic-phonetic features of consonants and vowels. Neural responses also encode distinct acoustic amplitude cues related to timing, such as those that occur at the onset of a sentence after a silent period or at the onset of the vowel in each syllable. Here, we used a group reduced rank regression model to show that distributed cortical responses support a low-dimensional latent state representation of temporal context in speech. The timing cues each capture more unique variance than all other phonetic features and exhibit rotational or cyclical dynamics in latent space, arising from activity that is widespread over the superior temporal gyrus. We propose that these spatially distributed timing signals could provide temporal context for, and possibly bind across time, the concurrent processing of individual phonetic features, composing higher-order phonological (e.g., word-level) representations.
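The group reduced rank regression in this study is specific to the authors' pipeline; as a generic sketch of the underlying idea, a rank-constrained regression can be obtained by fitting ordinary least squares and projecting the fitted values onto their top principal subspace. All data and names below are synthetic and illustrative:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Fit Y ~ X @ B with B constrained to the given rank.

    Classical solution: ordinary least squares, then projection of
    the fitted values onto their top `rank` right singular vectors.
    """
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # SVD of the OLS predictions; the top rows of Vt span the latent space.
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T                 # low-dimensional "latent" axes
    B_rr = B_ols @ V @ V.T          # rank-limited coefficient matrix
    return B_rr, V

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
# Build responses whose dependence on X passes through a 2-D latent space.
W = rng.standard_normal((10, 2)) @ rng.standard_normal((2, 15))
Y = X @ W + 0.01 * rng.standard_normal((200, 15))
B, V = reduced_rank_regression(X, Y, rank=2)
print(np.linalg.matrix_rank(B))   # 2
```

The latent trajectories referenced in the abstract would correspond to projections of the responses onto the columns of `V`.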
Affiliation(s)
- Emily P Stephen
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; Department of Mathematics and Statistics, Boston University, Boston, MA 02215, United States
- Yuanning Li
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Sean Metzger
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States
- Yulia Oganian
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States; Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
- Edward F Chang
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States
2. Cucu MO, Kazanina N, Houghton C. Syllable-Initial Phonemes Affect Neural Entrainment to Consonant-Vowel Syllables. Front Neurosci 2022; 16:826105. PMID: 35774556; PMCID: PMC9237462; DOI: 10.3389/fnins.2022.826105
Abstract
Neural entrainment to speech appears to rely on syllabic features, especially those pertaining to the acoustic envelope of the stimuli. It has also been proposed that neural tracking of speech depends on phoneme features. In the present electroencephalography experiment, we examined data from 25 participants to investigate neural entrainment to near-isochronous stimuli comprising syllables beginning with different phonemes. We measured the inter-trial phase coherence of neural responses to these stimuli and assessed the relationship between this coherence and acoustic properties of the stimuli designed to quantify their "edginess." We found that entrainment differed across classes of syllable-initial phoneme and depended on the amount of "edge" in the sound envelope. In particular, the best edge marker and predictor of entrainment was the latency of the maximum derivative of each syllable.
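Inter-trial phase coherence, the measure used here, is the length of the mean unit-magnitude phasor of the response phase across trials: 1 indicates perfect phase locking, values near 0 indicate random phase. A minimal sketch with synthetic phases (not the study's data):

```python
import numpy as np

def inter_trial_phase_coherence(phases):
    """ITPC across trials: magnitude of the mean unit phasor.

    `phases` is an array of per-trial phase angles (radians) at one
    frequency and time point, e.g. from a wavelet or FFT decomposition.
    """
    return np.abs(np.mean(np.exp(1j * phases)))

locked = np.full(100, 0.3)                             # identical phase every trial
random = np.random.default_rng(1).uniform(0, 2 * np.pi, 100)
print(round(inter_trial_phase_coherence(locked), 2))   # 1.0
print(inter_trial_phase_coherence(random) < 0.3)       # True
```

With finite trial counts the random-phase baseline is not exactly zero (it scales roughly as one over the square root of the number of trials), which is why entrainment studies compare ITPC against such a baseline.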
Affiliation(s)
- M. Oana Cucu
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
- School of Psychological Sciences, University of Bristol, Bristol, United Kingdom
- Nina Kazanina
- School of Psychological Sciences, University of Bristol, Bristol, United Kingdom
- International Laboratory of Social Neurobiology, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, HSE University, Moscow, Russia
- Conor Houghton
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
3. Fan Y, Fang K, Sun R, Shen D, Yang J, Tang Y, Fang G. Hierarchical auditory perception for species discrimination and individual recognition in the music frog. Curr Zool 2021; 68:581-591. DOI: 10.1093/cz/zoab085
Abstract
The ability to discriminate species and recognize individuals is crucial for reproductive success and/or survival in most animals. However, the temporal order and neural localization of these decision-making processes have remained unclear. In this study, event-related potentials (ERPs) were measured in the telencephalon, diencephalon, and mesencephalon of the music frog Nidirana daunchina. These ERPs were elicited by calls from 1 group of heterospecifics (recorded from a sympatric anuran species) and 2 groups of conspecifics that differed in their fundamental frequencies. In terms of polarity and position within the ERP waveform, auditory ERPs generally consist of 4 main components linked to selective attention (N1), stimulus evaluation (P2), identification (N2), and classification (P3), occurring around 100, 200, 250, and 300 ms after stimulus onset, respectively. Our results show that the N1 amplitudes differed significantly between the heterospecific and conspecific calls, but not between the 2 groups of conspecific calls that differed in fundamental frequency. On the other hand, the N2 amplitudes differed significantly between the 2 groups of conspecific calls. This suggests that the music frogs discriminated the species first, followed by individual identification, since N1 and N2 relate to selective attention and stimulus identification, respectively. Moreover, the P2 amplitudes evoked in females were significantly greater than those in males, indicating sexual dimorphism in auditory discrimination. In addition, both the N1 amplitudes in the left diencephalon and the P2 amplitudes in the left telencephalon were greater than in other brain areas, suggesting left-hemispheric dominance in auditory perception. Taken together, our results support the hypothesis that species discrimination and identification of individual characteristics are accomplished sequentially, and that auditory perception differs between the sexes and shows spatial dominance.
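The component amplitudes compared in this study (N1, P2, N2, P3) are conventionally read out from trial-averaged epochs within component-specific latency windows. A minimal sketch with synthetic data; the window bounds and signal values are illustrative, not taken from the study:

```python
import numpy as np

def component_amplitude(epochs, times, window, sign=-1):
    """Peak amplitude of an ERP component in a latency window.

    epochs : (n_trials, n_samples) baseline-corrected EEG epochs
    times  : (n_samples,) time axis in seconds, relative to stimulus onset
    window : (start, end) latency window in seconds, e.g. (0.08, 0.12) for N1
    sign   : -1 for negative components (N1, N2), +1 for positive (P2, P3)
    """
    erp = epochs.mean(axis=0)                  # trial-averaged waveform
    mask = (times >= window[0]) & (times <= window[1])
    return sign * np.max(sign * erp[mask])     # signed peak within the window

fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic N1: a negative Gaussian deflection centred at 100 ms, plus noise.
n1 = -3.0 * np.exp(-((times - 0.10) ** 2) / (2 * 0.01 ** 2))
epochs = n1 + 0.5 * rng.standard_normal((60, times.size))
amp = component_amplitude(epochs, times, window=(0.08, 0.12), sign=-1)
print(amp < -2.0)   # True: a clear N1-like negativity
```

Group comparisons such as those reported above would then be run on these per-condition amplitudes.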
Affiliation(s)
- Yanzhu Fan
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Ke Fang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- School of Life Science, Anhui University, Hefei 230601, China
- Ruolei Sun
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- School of Life Science, Anhui University, Hefei 230601, China
- Di Shen
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Jing Yang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Yezhong Tang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
- Guangzhan Fang
- Chengdu Institute of Biology, Chinese Academy of Sciences, Chengdu 610041, China
- College of Life Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
4. Hajizadeh A, Matysiak A, Brechmann A, König R, May PJC. Why do humans have unique auditory event-related fields? Evidence from computational modeling and MEG experiments. Psychophysiology 2021; 58:e13769. PMID: 33475173; DOI: 10.1111/psyp.13769
Abstract
Auditory event-related fields (ERFs) measured with magnetoencephalography (MEG) are useful for studying the neuronal underpinnings of auditory cognition in human cortex. They have a highly subject-specific morphology, although certain characteristic deflections (e.g., P1m, N1m, and P2m) can be identified in most subjects. Here, we explore the reason for this subject-specificity through a combination of MEG measurements and computational modeling of auditory cortex. We test whether ERF subject-specificity can predominantly be explained in terms of each subject having an individual cortical gross anatomy, which modulates the MEG signal, or whether individual cortical dynamics are also at play. To our knowledge, this is the first time that tools to address this question are being presented. The effects of anatomical and dynamical variation on the MEG signal are simulated in a model describing the core-belt-parabelt structure of the auditory cortex, with the dynamics based on the leaky-integrator neuron model. The experimental and simulated ERFs are characterized in terms of the N1m amplitude, latency, and width. We also examine the waveform grand-averaged across subjects, and the standard deviation of this grand average. The results show that the intersubject variability of the ERF arises from both the anatomy and the dynamics of auditory cortex being specific to each subject. Moreover, our results suggest that the latency variation of the N1m is largely related to subject-specific dynamics. The findings are discussed in terms of how learning, plasticity, and sound detection are reflected in auditory ERFs. The notion of the grand-averaged ERF is critically evaluated.
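The model's dynamics are based on the leaky-integrator neuron. A minimal forward-Euler sketch of a single leaky integrator; the time constant and gain are illustrative values, not the parameters of the published model:

```python
import numpy as np

def leaky_integrator(stimulus, dt=1e-3, tau=0.02, gain=1.0):
    """Membrane state u(t) obeying tau * du/dt = -u + gain * input.

    Forward-Euler integration on a fixed time grid; in network models
    each unit's input would also include weighted activity of other units.
    """
    u = np.zeros_like(stimulus)
    for t in range(1, len(stimulus)):
        du = (-u[t - 1] + gain * stimulus[t - 1]) / tau
        u[t] = u[t - 1] + dt * du
    return u

step = np.ones(100)            # a 100-ms step input at dt = 1 ms
u = leaky_integrator(step)
print(round(u[-1], 2))         # 0.99: approaching the steady state of 1.0
```

The exponential approach to a gain-scaled steady state is what gives such units their low-pass, "integrating" character.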
Affiliation(s)
- Aida Hajizadeh
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
- Artur Matysiak
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
- André Brechmann
- Leibniz Institute for Neurobiology, Combinatorial NeuroImaging Core Facility, Magdeburg, Germany
- Reinhard König
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
- Patrick J C May
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany; Department of Psychology, Lancaster University, Lancaster, UK
5. Shen D, Fang K, Fan Y, Shen J, Yang J, Cui J, Tang Y, Fang G. Sex differences in vocalization are reflected by event-related potential components in the music frog. Anim Cogn 2020; 23:477-490. PMID: 32016618; DOI: 10.1007/s10071-020-01350-x
Abstract
Sex differences in vocalization have been commonly found in vocal animals. It remains unclear, however, how animals perceive and discriminate these differences. The amplitudes and latencies of event-related potential (ERP) components can reflect the efficiency and time course of auditory processing. We investigated the neural mechanisms of auditory processing in the Emei music frog (Nidirana daunchina) using an oddball paradigm with ERP. We recorded and analyzed electroencephalogram (EEG) signals from the forebrain and midbrain while the subjects listened to white noise (WN) and conspecific sex-specific vocalizations. We found that (1) both the amplitudes and latencies of some ERP components evoked by conspecific calls were significantly greater than those evoked by WN, suggesting that the music frogs can discriminate conspecific vocalizations from background noise; (2) the amplitudes and latencies of most ERP components evoked by female calls were significantly greater or longer than those evoked by male calls, implying that the ERP components can reflect sex differences in vocalization; and (3) there were significant differences in ERP amplitudes between male and female subjects, suggesting a sexual dimorphism in auditory perception. Together, the present results indicate that the music frog can discriminate conspecific calls from noise and male calls from female calls, and that sexual dimorphism of auditory perception exists in this species.
Affiliation(s)
- Di Shen
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
- Ke Fang
- Institute of Bio-Inspired Structure and Surface Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, People's Republic of China
- Yanzhu Fan
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
- Jiangyan Shen
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
- Jing Yang
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
- Jianguo Cui
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu 610041, Sichuan, People's Republic of China
- Yezhong Tang
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu 610041, Sichuan, People's Republic of China
- Guangzhan Fang
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu 610041, Sichuan, People's Republic of China
6. Andermann M, Patterson RD, Rupp A. Transient and sustained processing of musical consonance in auditory cortex and the effect of musicality. J Neurophysiol 2020; 123:1320-1331. DOI: 10.1152/jn.00876.2018
Abstract
In recent years, electroencephalography and magnetoencephalography (MEG) have both been used to investigate the response in human auditory cortex to musical sounds that are perceived as consonant or dissonant. These studies have typically focused on the transient components of the physiological activity at sound onset, specifically, the N1 wave of the auditory evoked potential and the auditory evoked field, respectively. Unfortunately, the morphology of the N1 wave is confounded by the prominent neural response to energy onset at stimulus onset. It is also the case that the perception of pitch is not limited to sound onset; the perception lasts as long as the note producing it. This suggests that consonance studies should also consider the sustained activity that appears after the transient components die away. The current MEG study shows how energy-balanced sounds can focus the response waves on the consonance-dissonance distinction rather than energy changes and how source modeling techniques can be used to measure the sustained field associated with extended consonant and dissonant sounds. The study shows that musical dyads evoke distinct transient and sustained neuromagnetic responses in auditory cortex. The form of the response depends on both whether the dyads are consonant or dissonant and whether the listeners are musical or nonmusical. The results also show that auditory cortex requires more time for the early transient processing of dissonant dyads than it does for consonant dyads and that the continuous representation of temporal regularity in auditory cortex might be modulated by processes beyond auditory cortex. NEW & NOTEWORTHY We report a magnetoencephalography (MEG) study on transient and sustained cortical consonance processing. Stimuli were long-duration, energy-balanced, musical dyads that were either consonant or dissonant. Spatiotemporal source analysis revealed specific transient and sustained neuromagnetic activity in response to the dyads; in particular, the morphology of the responses was shaped by the dyad’s consonance and the listener’s musicality. Our results also suggest that the sustained representation of stimulus regularity might be modulated by processes beyond auditory cortex.
Affiliation(s)
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
- Roy D. Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Heidelberg, Germany
7. Oganian Y, Chang EF. A speech envelope landmark for syllable encoding in human superior temporal gyrus. Sci Adv 2019; 5:eaay6279. PMID: 31976369; PMCID: PMC6957234; DOI: 10.1126/sciadv.aay6279
Abstract
The most salient acoustic features in speech are the modulations in its intensity, captured by the amplitude envelope. Perceptually, the envelope is necessary for speech comprehension. Yet, the neural computations that represent the envelope and their linguistic implications are heavily debated. We used high-density intracranial recordings, while participants listened to speech, to determine how the envelope is represented in human speech cortical areas on the superior temporal gyrus (STG). We found that a well-defined zone in middle STG detects acoustic onset edges (local maxima in the envelope rate of change). Acoustic analyses demonstrated that timing of acoustic onset edges cues syllabic nucleus onsets, while their slope cues syllabic stress. Synthesized amplitude-modulated tone stimuli showed that steeper slopes elicited greater responses, confirming cortical encoding of amplitude change, not absolute amplitude. Overall, STG encoding of the timing and magnitude of acoustic onset edges underlies the perception of speech temporal structure.
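The "acoustic onset edges" in this study are local maxima in the envelope's rate of change. A toy sketch of such edge detection; the rectified moving-average envelope below is only an illustrative stand-in for the Hilbert or low-pass envelopes such studies use, and all signals are synthetic:

```python
import numpy as np

def onset_edges(signal, fs, smooth_ms=10, rel_height=0.3):
    """Indices of local maxima in the rate of change of a crude envelope.

    Envelope: rectified signal smoothed by a moving average. Peaks of
    its first difference that exceed a relative threshold are reported.
    """
    win = max(1, int(fs * smooth_ms / 1000))
    env = np.convolve(np.abs(signal), np.ones(win) / win, mode="same")
    d = np.diff(env)
    thresh = rel_height * d.max()
    # A peak: larger than both neighbours and above the relative threshold.
    is_peak = (d[1:-1] > d[:-2]) & (d[1:-1] >= d[2:]) & (d[1:-1] > thresh)
    return np.where(is_peak)[0] + 1

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
# Two syllable-like bursts (near 0.25 s and 0.65 s) on a 100-Hz carrier.
bursts = np.exp(-((t - 0.25) ** 2) / 0.002) + np.exp(-((t - 0.65) ** 2) / 0.002)
x = bursts * np.sin(2 * np.pi * 100 * t)
edges = onset_edges(x, fs)
print(edges)   # indices on the rising slope of each burst
```

In the study's terms, the timing of such peaks cues syllabic nucleus onsets, while the peak height (the slope) cues stress.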
8. Ungan P, Yagcioglu S, Ayik E. Event-related potentials to single-cycle binaural beats of a pure tone, a click train, and a noise. Exp Brain Res 2019; 237:2811-2828. PMID: 31451833; DOI: 10.1007/s00221-019-05638-4
Abstract
There are only a few electrophysiological studies on the phenomenon called "binaural beats" (BBs), which is experienced when two tones with frequencies close to each other are dichotically presented to the ears. There is, moreover, no study in which the electrical responses of the brain to BBs of complex sounds are recorded and analyzed. Using a recent method based on single-cycle BB stimulation with sub-threshold temporary monaural frequency shifts, we could record the event-related potentials (ERPs) to BBs of a 250-Hz tone as well as those to the BBs of a 250/s click train and of a recurrent 4-ms Gaussian noise. Although the fundamental components of the click train and noise stimuli were lower in intensity than the tonal stimuli in our experiments, the N1 responses to the BBs of the former two wide-spectrum sounds were recorded with significantly larger amplitudes and shorter latencies than those to the BBs of a tone, suggesting an across-frequency integration of directional information. During a BB cycle of a complex sound, the interaural time differences (ITDs) of the spectral components are all equal to each other at any time, whereas their interaural phase differences (IPDs) all differ. The ITD rather than the IPD should therefore be the cue relied upon by the binaural mechanism coding the perceived lateral shifts of the sound caused by BBs. This is in line with across-frequency models of human auditory lateralization based on a common ITD fulfilling a straightness criterion.
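The ITD/IPD distinction drawn here follows from IPD = 2π·f·ITD: spectral components sharing one time delay necessarily carry different phase differences. A quick numeric check, using an illustrative 0.5-ms ITD and harmonics of a 250-Hz complex:

```python
import math

itd = 0.0005                      # a common 0.5-ms interaural delay (illustrative)
for f in (250, 500, 750):         # harmonics of a 250-Hz complex tone
    ipd = 2 * math.pi * f * itd   # phase difference grows linearly with frequency
    print(f, round(ipd, 3))       # 250 0.785 / 500 1.571 / 750 2.356
```

A binaural mechanism pooling components along a common-ITD ("straightness") axis therefore sees one consistent cue, whereas an IPD-based mechanism would see three conflicting ones.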
Affiliation(s)
- Pekcan Ungan
- Department of Biophysics, School of Medicine, Koc University, Istanbul, Turkey
- Suha Yagcioglu
- Department of Biophysics, Faculty of Medicine, Hacettepe University, Ankara, Turkey
- Ece Ayik
- Graduate School of Science and Engineering, Koc University, Istanbul, Turkey
9. Fan Y, Yue X, Yang J, Shen J, Shen D, Tang Y, Fang G. Preference of spectral features in auditory processing for advertisement calls in the music frogs. Front Zool 2019; 16:13. PMID: 31168310; PMCID: PMC6509768; DOI: 10.1186/s12983-019-0314-0
Abstract
BACKGROUND Animal vocal signals encode information that is critical for communication, and the relative importance of the temporal and spectral characteristics of vocalizations is typically asymmetrical and species-specific. However, it is still unknown how the auditory system represents these asymmetrical, species-specific patterns. In this study, auditory event-related potential (ERP) changes were evaluated in the Emei music frog (Babina daunchina) to assess differences in the neural responses elicited by temporal and spectral features in the telencephalon, diencephalon and mesencephalon, respectively. To do this, an acoustic playback experiment using an oddball paradigm was conducted, in which an original advertisement call (OC), a version preserving only its spectral features (SC), and a version preserving only its temporal features (TC) were used as deviant stimuli, with synthesized white noise as the standard stimulus. RESULTS The present results show that (1) compared with TC, more similar ERP components were evoked by OC and SC; and (2) the P3a amplitudes evoked by OC in the forebrain were significantly higher in males than in females. CONCLUSIONS Together, the results suggest that neural processing of conspecific vocalizations may favor spectral features in the music frog, prompting speculation that spectral features may play a more important role in auditory object perception and vocal communication in this species. In addition, neural processing for auditory perception is sexually dimorphic.
Affiliation(s)
- Yanzhu Fan
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Xizi Yue
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- Jing Yang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Jiangyan Shen
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Di Shen
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Yezhong Tang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- Guangzhan Fang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
10. Fan Y, Yue X, Xue F, Cui J, Brauth SE, Tang Y, Fang G. Auditory perception exhibits sexual dimorphism and left telencephalic dominance in Xenopus laevis. Biol Open 2018; 7(12):bio035956. PMID: 30509903; PMCID: PMC6310876; DOI: 10.1242/bio.035956
Abstract
Sex differences in both vocalization and auditory processing have been commonly found in vocal animals, although the underlying neural mechanisms associated with sexual dimorphism of auditory processing are not well understood. In this study we investigated whether auditory perception exhibits sexual dimorphism in Xenopus laevis. To do this we measured event-related potentials (ERPs) evoked by white noise (WN) and conspecific calls in the telencephalon, diencephalon and mesencephalon, respectively. Results showed that (1) the N1 amplitudes evoked in the right telencephalon and right diencephalon of males by WN are significantly different from those evoked in females; (2) in males the N1 amplitudes evoked by conspecific calls are significantly different from those evoked by WN; and (3) in females the N1 amplitude for the left mesencephalon was significantly lower than for other brain areas, while the P2 and P3 amplitudes for the right mesencephalon were the smallest; in males, by contrast, these amplitudes were smallest for the left mesencephalon. These results suggest that auditory perception is sexually dimorphic. Moreover, the amplitude of each ERP component (N1, P2 and P3) for the left telencephalon was the largest in females and/or males, suggesting that left telencephalic dominance exists for auditory perception in Xenopus. Summary: Investigation of auditory neural mechanisms in the South African clawed frog (Xenopus laevis) indicates that auditory perception exhibits sexual dimorphism and a left telencephalic advantage.
Affiliation(s)
- Yanzhu Fan
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
- Xizi Yue
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
- Fei Xue
- Sichuan Key Laboratory of Conservation Biology for Endangered Wildlife, Chengdu Research Base of Giant Panda Breeding, 26 Panda Road, Northern Suburb, Chengdu, Sichuan 610081, People's Republic of China
- Jianguo Cui
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
- Steven E Brauth
- Department of Psychology, University of Maryland, College Park, MD 20742, USA
- Yezhong Tang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
- Guangzhan Fang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No.9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
11. Starzynski C, Gutschalk A. Context-dependent role of selective attention for change detection in multi-speaker scenes. Hum Brain Mapp 2018; 39:4623-4632. PMID: 29999565; PMCID: PMC6866511; DOI: 10.1002/hbm.24310
Abstract
Disappearance of a voice or other sound source may often go unnoticed when the auditory scene is crowded. We explored the role of selective attention in this change deafness with magnetoencephalography in multi-speaker scenes. Each scene was presented two times in direct succession, and one target speaker was frequently omitted in Scene 2. When listeners were previously cued to the target speaker, activity in auditory cortex time-locked to the target speaker's sound envelope was selectively enhanced in Scene 1, as determined by a cross-correlation analysis. Moreover, the response was stronger for hit trials than for miss trials, confirming that selective attention played a role in subsequent change detection. If selective attention to the stream where the change occurred were generally required for successful change detection, neural enhancement of this stream would also be expected without a cue in hit compared to miss trials. However, when listeners were not previously cued to the target, no enhanced activity for the target speaker was observed for hit trials, and there was no significant difference between hit and miss trials. These results, first, confirm a role for attention in change detection in situations where the target source is known. Second, they suggest that the omission of a speaker, or more generally of an auditory stream, can alternatively be detected without selective attentional enhancement of the target stream. Several models and strategies could be envisaged for change detection in this case, including global comparison of the successive scenes.
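The cross-correlation analysis used here correlates cortical activity with the target speaker's sound envelope across time lags. A toy sketch with synthetic signals; the 100-sample neural delay is imposed for illustration, not taken from the study:

```python
import numpy as np

def envelope_tracking(neural, envelope, max_lag):
    """Normalized correlation between a neural signal and a sound envelope,
    evaluated at each lag (neural delayed relative to the envelope)."""
    n = (neural - neural.mean()) / neural.std()
    e = (envelope - envelope.mean()) / envelope.std()
    lags = np.arange(max_lag + 1)
    r = np.array([np.mean(n[lag:] * e[:len(e) - lag]) for lag in lags])
    return lags, r

rng = np.random.default_rng(2)
env = rng.standard_normal(2000)
# Synthetic "cortical" signal: the envelope delayed by 100 samples, plus noise.
neural = np.r_[np.zeros(100), env[:-100]] + 0.5 * rng.standard_normal(2000)
lags, r = envelope_tracking(neural, env, max_lag=200)
print(lags[np.argmax(r)])   # 100: the imposed delay is recovered
```

Attention effects like those reported above would appear as a larger peak correlation for the cued speaker's envelope than for uncued speakers.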
Affiliation(s)
- Alexander Gutschalk
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
12. Auditory sensitivity exhibits sexual dimorphism and seasonal plasticity in music frogs. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2018; 204:1029-1044. DOI: 10.1007/s00359-018-1301-1
13. The First Call Note Plays a Crucial Role in Frog Vocal Communication. Sci Rep 2017; 7:10128. PMID: 28860503; PMCID: PMC5579009; DOI: 10.1038/s41598-017-09870-2
Abstract
Vocal communication plays a crucial role in survival and reproductive success in most amphibian species. Although amphibian communication sounds are often complex, consisting of many temporal features, we know little about the biological significance of each temporal component. The present study examined the biological significance of the notes of the male advertisement call of the Emei music frog (Babina daunchina) using the optimized electroencephalogram (EEG) paradigm of mismatch negativity (MMN). Music frog calls generally contain four to six notes separated by intervals of approximately 150 milliseconds. A standard stimulus (white noise) and five deviant stimuli (five notes from one advertisement call) were played back to each subject while multi-channel EEG signals were recorded simultaneously. The results showed that the MMN amplitude for the first call note was significantly larger than that for the other notes. Moreover, the MMN amplitudes evoked from the left forebrain and midbrain were typically larger than those from their right counterparts. These results are consistent with the ideas that the first call note conveys more information than the others for auditory recognition and that there is left-hemisphere dominance for processing information derived from conspecific calls in frogs.
14
Integrating speech in time depends on temporal expectancies and attention. Cortex 2017; 93:28-40. [PMID: 28609683 DOI: 10.1016/j.cortex.2017.05.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2016] [Revised: 01/11/2017] [Accepted: 05/01/2017] [Indexed: 10/19/2022]
Abstract
Sensory information that unfolds in time, as in speech perception, relies on efficient chunking mechanisms to yield optimally sized units for further processing. Whether two successive acoustic events receive a one-unit or a two-unit interpretation seems to depend on the fit between their temporal extent and a stipulated temporal window of integration. However, there is ongoing debate on how flexible this temporal window of integration is, especially for the processing of speech sounds, and there is no direct evidence on whether attention modulates the temporal constraints on the integration window. For this reason, we examine here how different word durations, which lead to different temporal separations of sound onsets, interact with attention. In an electroencephalography (EEG) study, participants actively and passively listened to words in which word-final consonants were occasionally omitted. Words either had a natural duration or were artificially prolonged to increase the separation of speech sound onsets. Omission responses to incomplete speech input, originating in left temporal cortex, decreased when the critical speech sound was separated from previous sounds by more than 250 msec, i.e., when the separation exceeded the stipulated temporal window of integration (125-150 msec). Attention, on the other hand, increased omission responses only for stimuli with natural durations. We complemented the event-related potential (ERP) analyses with a frequency-domain analysis at the stimulus presentation rate. Notably, the power at the stimulation frequency showed the same duration and attention effects as the omission responses. We interpret these findings against the background of existing research on temporal integration windows and further suggest that they may be accounted for within the framework of predictive coding.
15
Manca AD, Grimaldi M. Vowels and Consonants in the Brain: Evidence from Magnetoencephalographic Studies on the N1m in Normal-Hearing Listeners. Front Psychol 2016; 7:1413. [PMID: 27713712 PMCID: PMC5031792 DOI: 10.3389/fpsyg.2016.01413] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Accepted: 09/05/2016] [Indexed: 01/07/2023] Open
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves a mapping from continuous acoustic waveforms onto the discrete phonological units used to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons involved in the N1m construct a sensory memory of the stimulus through spatially and temporally distributed activation patterns within the auditory cortex. Indeed, the localization of auditory field maps in animals and humans suggests two levels of sound coding: a tonotopic dimension for spectral properties and a tonochronic dimension for temporal properties of sounds. When the stimulus is a complex speech sound, tonotopy and tonochrony data may help assess whether speech sound parsing and decoding are generated by pure bottom-up reflection of acoustic differences or are additionally affected by top-down processes related to phonological categories. Hints supporting pure bottom-up processing coexist with hints supporting top-down abstract phoneme representations, and the available N1m data (amplitude, latency, source generators, and hemispheric distribution) are too limited to disentangle the issue. The nature of these limitations is discussed, and neurophysiological studies on animals and neuroimaging studies on humans are taken into consideration. We also compare the N1m findings with investigations of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that the N1 seems more sensitive than the N1m in capturing lateralization and hierarchical processes, although the data are very preliminary. Finally, we suggest that MEG data should be integrated with EEG data in the light of the neural-oscillations framework, and we raise some concerns that should be addressed by future investigations if we want to closely align language research with issues at the core of functional brain mechanisms.
Affiliation(s)
- Anna Dora Manca, Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
- Mirko Grimaldi, Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy

16
Perceptual Temporal Asymmetry Associated with Distinct ON and OFF Responses to Time-Varying Sounds with Rising versus Falling Intensity: A Magnetoencephalography Study. Brain Sci 2016; 6:brainsci6030027. [PMID: 27527227 PMCID: PMC5039456 DOI: 10.3390/brainsci6030027] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2016] [Revised: 07/26/2016] [Accepted: 07/29/2016] [Indexed: 11/29/2022] Open
Abstract
This magnetoencephalography (MEG) study investigated evoked ON and OFF responses to ramped and damped sounds in normal-hearing human adults. Two pairs of stimuli that differed in spectral complexity were used in a passive listening task; each pair contained identical acoustical properties except for the intensity envelope. Behavioral duration judgment was conducted in separate sessions, which replicated the perceptual bias in favour of the ramped sounds and the effect of spectral complexity on perceived duration asymmetry. MEG results showed similar cortical sites for the ON and OFF responses. There was a dominant ON response with stronger phase-locking factor (PLF) in the alpha (8–14 Hz) and theta (4–8 Hz) bands for the damped sounds. In contrast, the OFF response for sounds with rising intensity was associated with stronger PLF in the gamma band (30–70 Hz). Exploratory correlation analysis showed that the OFF response in the left auditory cortex was a good predictor of the perceived temporal asymmetry for the spectrally simpler pair. The results indicate distinct asymmetry in ON and OFF responses and neural oscillation patterns associated with the dynamic intensity changes, which provides important preliminary data for future studies to examine how the auditory system develops such an asymmetry as a function of age and learning experience and whether the absence of asymmetry or abnormal ON and OFF responses can be taken as a biomarker for certain neurological conditions associated with auditory processing deficits.
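The phase-locking factor (PLF) used in this abstract is conventionally computed as the magnitude of the mean unit phasor across trials: PLF = |mean(exp(i·φ))|, ranging from 0 (random phases) to 1 (perfect locking). A minimal sketch with simulated per-trial phases follows; the data are illustrative, not the study's recordings.

```python
import numpy as np

def phase_locking_factor(phases):
    """PLF across trials: magnitude of the mean unit phasor (0 = random, 1 = locked)."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(1)
locked_phases = rng.normal(0.0, 0.2, size=200)          # phases clustered near 0 rad
uniform_phases = rng.uniform(-np.pi, np.pi, size=200)   # phases uniform on the circle

print(phase_locking_factor(locked_phases))   # close to 1
print(phase_locking_factor(uniform_phases))  # close to 0
```

In practice the per-trial phases at a given frequency band and time point would be obtained from a wavelet or Hilbert transform of the single-trial MEG signals.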
17
Tabas A, Siebert A, Supek S, Pressnitzer D, Balaguer-Ballester E, Rupp A. Insights on the Neuromagnetic Representation of Temporal Asymmetry in Human Auditory Cortex. PLoS One 2016; 11:e0153947. [PMID: 27096960 PMCID: PMC4838253 DOI: 10.1371/journal.pone.0153947] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Accepted: 04/06/2016] [Indexed: 11/26/2022] Open
Abstract
Communication sounds are typically asymmetric in time, and human listeners are highly sensitive to this short-term temporal asymmetry. Nevertheless, causal neurophysiological correlates of auditory perceptual asymmetry remain largely elusive to current analyses and models. Auditory modelling and animal electrophysiological recordings suggest that perceptual asymmetry results from the presence of multiple time scales of temporal integration, central to the auditory periphery. To test this hypothesis, we recorded auditory evoked fields (AEF) elicited by asymmetric sounds in humans. We found a strong correlation between the perceived tonal salience of ramped and damped sinusoids and the AEFs, as quantified by the amplitude of the N100m dynamics. The N100m amplitude increased with stimulus half-life time, showing a maximum difference between the ramped and damped stimuli at a modulation half-life time of 4 ms that was greatly reduced at 0.5 ms and 32 ms. This behaviour of the N100m closely parallels psychophysical data, in that: (i) longer half-life times are associated with a stronger tonal percept, and (ii) perceptual differences between damped and ramped sounds are maximal at a 4 ms half-life time. Interestingly, differences in evoked fields were significantly stronger in the right hemisphere, indicating some degree of hemispheric specialisation. Furthermore, the N100m magnitude was successfully explained by a pitch perception model using multiple scales of temporal integration of auditory nerve activity patterns. This striking correlation between AEFs, perception, and model predictions suggests that the physiological mechanisms involved in the processing of pitch evoked by temporally asymmetric sounds are reflected in the N100m.
Affiliation(s)
- Alejandro Tabas, Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom
- Anita Siebert, Institute of Pharmacology and Toxicology, University of Zurich, Zürich, Switzerland
- Selma Supek, Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia
- Daniel Pressnitzer, Département d'Études Cognitives, École Normale Supérieure, Paris, France
- Emili Balaguer-Ballester, Faculty of Science and Technology, Bournemouth University, Bournemouth, England, United Kingdom; The Bernstein Center for Computational Neuroscience Heidelberg-Mannheim, Mannheim, Baden-Württemberg, Germany
- André Rupp, Department of Neurology, Heidelberg University, Heidelberg, Baden-Württemberg, Germany

18
Andermann M, Patterson RD, Geldhauser M, Sieroka N, Rupp A. Duifhuis pitch: neuromagnetic representation and auditory modeling. J Neurophysiol 2014; 112:2616-27. [DOI: 10.1152/jn.00898.2013] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
When a high harmonic is removed from a cosine-phase harmonic complex, we hear a sine tone pop out of the perception; the sine tone has the pitch of the high harmonic, while the tone complex has the pitch of its fundamental frequency, f0. This phenomenon is commonly referred to as Duifhuis Pitch (DP). This paper describes, for the first time, the cortical representation of DP observed with magnetoencephalography. In experiment 1, conditions that produce the perception of a DP were observed to elicit a classic onset response in auditory cortex (P1m, N1m, P2m), and an increment in the sustained field (SF) established in response to the tone complex. Experiment 2 examined the effect of the phase spectrum of the complex tone on the DP activity: Schroeder-phase negative waves elicited a transient DP complex with a similar shape to that observed with cosine-phase waves but with much longer latencies. Following the transient DP activity, the responses of the negative and positive Schroeder-phase waves converged, and the increment in the SF slowly died away. In the absence of DP, the two Schroeder-phase conditions with low peak factors both produced larger SFs than cosine-phase waves with large peak factors. A model of the auditory periphery that includes coupling between adjacent frequency channels is used to explain the early neuromagnetic activity observed in auditory cortex.
Affiliation(s)
- Martin Andermann, Section of Biomagnetism, Department of Neurology, University Hospital of Heidelberg, Heidelberg, Germany; Section of Experimental Psychopathology, Department of Psychiatry, University Hospital of Heidelberg, Heidelberg, Germany
- Roy D. Patterson, Centre for the Neural Basis of Hearing, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, United Kingdom
- Michael Geldhauser, Section of Biomagnetism, Department of Neurology, University Hospital of Heidelberg, Heidelberg, Germany
- Norman Sieroka, Swiss Federal Institute of Technology Zurich, Zurich, Switzerland
- André Rupp, Section of Biomagnetism, Department of Neurology, University Hospital of Heidelberg, Heidelberg, Germany

19
Altmann CF, Ono K, Callan A, Matsuhashi M, Mima T, Fukuyama H. Environmental reverberation affects processing of sound intensity in right temporal cortex. Eur J Neurosci 2013; 38:3210-20. [PMID: 23869792 DOI: 10.1111/ejn.12318] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2013] [Revised: 06/13/2013] [Accepted: 06/20/2013] [Indexed: 11/28/2022]
Abstract
Although sound reverberation is considered a nuisance variable in most studies investigating auditory processing, it can serve as a cue for loudness constancy, a phenomenon describing constant loudness perception in spite of changing sound source distance. In this study, we manipulated room reverberation characteristics to test their effect on psychophysical loudness constancy and we tested with magnetoencephalography on human subjects for neural responses reflecting loudness constancy. Psychophysically, we found that loudness constancy was present in strong, but not weak, reverberation conditions. In contrast, the dependence of sound distance judgment on actual distance was similar across conditions. We observed brain activity reflecting behavioral loudness constancy, i.e. inverse scaling of the evoked magnetic fields with distance for weak reverberation but constant responses across distance for strong reverberation from ~210 to 270 ms after stimulus onset. Distributed magnetoencephalography source reconstruction revealed underlying neural generators within the right middle temporal and right inferior anterior temporal lobe. Our data suggest a dissociation of loudness constancy and distance perception, implying a direct usage of reverberation cues for constructing constant loudness across distance. Furthermore, our magnetoencephalography data suggest involvement of auditory association areas in the right middle and right inferior anterior temporal cortex in this process.
Affiliation(s)
- Christian F Altmann, Graduate School of Medicine, Human Brain Research Center, Kyoto University, Kyoto, Japan; Career-Path Promotion Unit for Young Life Scientists, Kyoto University, Kyoto, Japan

20
Human decision making based on variations in internal noise: an EEG study. PLoS One 2013; 8:e68928. [PMID: 23840904 PMCID: PMC3698081 DOI: 10.1371/journal.pone.0068928] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2013] [Accepted: 06/03/2013] [Indexed: 11/25/2022] Open
Abstract
Perceptual decision making is prone to errors, especially near threshold. Physiological, behavioural and modeling studies suggest this is due to the intrinsic or ‘internal’ noise in neural systems, which derives from a mixture of bottom-up and top-down sources. We show here that internal noise can form the basis of perceptual decision making when the external signal lacks the required information for the decision. We recorded electroencephalographic (EEG) activity in listeners attempting to discriminate between identical tones. Since the acoustic signal was constant, bottom-up and top-down influences were under experimental control. We found that early cortical responses to the identical stimuli varied in global field power and topography according to the perceptual decision made, and activity preceding stimulus presentation could predict both later activity and behavioural decision. Our results suggest that activity variations induced by internal noise of both sensory and cognitive origin are sufficient to drive discrimination judgments.
21
Miyazaki T, Thompson J, Fujioka T, Ross B. Sound envelope encoding in the auditory cortex revealed by neuromagnetic responses in the theta to gamma frequency bands. Brain Res 2013; 1506:64-75. [PMID: 23399682 DOI: 10.1016/j.brainres.2013.01.047] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2012] [Revised: 01/22/2013] [Accepted: 01/27/2013] [Indexed: 11/24/2022]
Abstract
Amplitude fluctuations of natural sounds carry multiple types of information represented at different time scales, such as syllables and voice pitch in speech. However, it is not well understood how such amplitude fluctuations at different time scales are processed in the brain. In the present study, we investigated the effect of stimulus rate on cortical evoked responses using magnetoencephalography (MEG). We used a two-tone complex sound whose envelope fluctuated at the difference frequency and induced an acoustic beat sensation. When the beat rate was continuously swept between 3 Hz and 60 Hz, the auditory evoked response showed distinct transient waves at slow rates, while at fast rates continuous sinusoidal oscillations similar to the auditory steady-state response (ASSR) were observed. We further derived temporal modulation transfer functions (TMTF) from the amplitudes of the transient responses and from the ASSR. The results identified two critical rates, 12.5 Hz and 25 Hz, at which consecutive transient responses overlapped with each other. These stimulus rates roughly correspond to the rates at which the perceptual quality of the sound envelope is known to change: low rates (<10 Hz) are perceived as loudness fluctuation, medium rates as acoustic flutter, and rates above 25 Hz as roughness. We conclude that these results reflect cortical processes that integrate successive acoustic events at different time scales to extract complex features of natural sound.
Affiliation(s)
- Takahiro Miyazaki, Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada M6A 2E1

22
Wojtczak M, Beim JA, Micheyl C, Oxenham AJ. Effects of temporal stimulus properties on the perception of across-frequency asynchrony. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 133:982-997. [PMID: 23363115 PMCID: PMC3574076 DOI: 10.1121/1.4773350] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2012] [Revised: 11/24/2012] [Accepted: 12/11/2012] [Indexed: 06/01/2023]
Abstract
The role of temporal stimulus parameters in the perception of across-frequency synchrony and asynchrony was investigated using pairs of 500-ms tones consisting of a 250-Hz tone and a tone with a higher frequency of 1, 2, 4, or 6 kHz. Subjective judgments suggested veridical perception of across-frequency synchrony but with greater sensitivity to changes in asynchrony for pairs in which the lower-frequency tone was leading than for pairs in which it was lagging. Consistent with the subjective judgments, thresholds for the detection of asynchrony measured in a three-alternative forced-choice task were lower when the signal interval contained a pair with the low-frequency tone leading than a pair with a high-frequency tone leading. A similar asymmetry was observed for asynchrony discrimination when the standard asynchrony was relatively small (≤20 ms) but not for larger standard asynchronies. Independent manipulation of onset and offset ramp durations indicated a dominant role of onsets in the perception of across-frequency asynchrony. A physiologically inspired model, involving broadly tuned monaural coincidence detectors that receive inputs from frequency-selective onset detectors, was able to accurately reproduce the asymmetric distributions of synchrony judgments. The model provides testable predictions for future physiological investigations of responses to broadband stimuli with across-frequency delays.
Affiliation(s)
- Magdalena Wojtczak, Department of Psychology, University of Minnesota, 75 East River Road, Minneapolis, Minnesota 55455, USA

23
Francart T, Lenssen A, Wouters J. The effect of interaural differences in envelope shape on the perceived location of sounds (L). THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 132:611-4. [PMID: 22894182 DOI: 10.1121/1.4733557] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
Users of bilateral cochlear implants, and of a cochlear implant combined with a contralateral hearing aid, are sensitive to interaural time differences (ITDs). The way cochlear implant speech processors work, and differences between the two modalities, may result in interaural differences in the shape of the temporal envelope presented to the binaural system. The effect of interaural differences in envelope shape on ITD sensitivity was investigated in normal-hearing listeners using a 4 kHz pure tone modulated with a periodic envelope with a trapezoid shape in each cycle. In one ear, the onset segment of the trapezoid was transformed by a power function. An interaural difference in envelope shape had no effect on the just-noticeable difference in ITD, but the ITD required for a centered percept differed significantly across envelope-shape conditions.
Affiliation(s)
- Tom Francart, ExpORL, Department of Neurosciences, KU Leuven, O & N 2, Herestraat 49 bus 721, B-3000 Leuven, Belgium

24
Soeta Y, Nakagawa S. Auditory evoked responses in human auditory cortex to the variation of sound intensity in an ongoing tone. Hear Res 2012; 287:67-75. [DOI: 10.1016/j.heares.2012.03.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/17/2011] [Revised: 03/08/2012] [Accepted: 03/16/2012] [Indexed: 10/28/2022]
25
When and where of auditory spatial processing in cortex: a novel approach using electrotomography. PLoS One 2011; 6:e25146. [PMID: 21949873 PMCID: PMC3176323 DOI: 10.1371/journal.pone.0025146] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2011] [Accepted: 08/29/2011] [Indexed: 11/19/2022] Open
Abstract
The modulation of brain activity as a function of auditory location was investigated using electroencephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasting sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to cortical location and timing was found, indicating spatial processing (1) in the primary auditory cortex and the posterodorsal auditory cortical pathway at the time of the N1, and (2) in anteroventral pathway regions about 100 ms later, at the time of the P2. Thus, both auditory pathways appear to be involved in spatial analysis, but at different points in time. It is possible that the late processing in the anteroventral auditory network reflects the sharing of this region between the analysis of object-feature information and spectral localization cues, or even the integration of spatial and non-spatial sound features.
26
Andermann M, van Dinther R, Patterson RD, Rupp A. Neuromagnetic representation of musical register information in human auditory cortex. Neuroimage 2011; 57:1499-506. [DOI: 10.1016/j.neuroimage.2011.05.049] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2011] [Revised: 04/25/2011] [Accepted: 05/17/2011] [Indexed: 11/25/2022] Open
27
Hämäläinen JA, Fosker T, Szücs D, Goswami U. N1, P2 and T-complex of the auditory brain event-related potentials to tones with varying rise times in adults with and without dyslexia. Int J Psychophysiol 2011; 81:51-9. [DOI: 10.1016/j.ijpsycho.2011.04.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2010] [Revised: 04/13/2011] [Accepted: 04/21/2011] [Indexed: 10/18/2022]
28
Lütkenhöner B. Auditory signal detection appears to depend on temporal integration of subthreshold activity in auditory cortex. Brain Res 2011; 1385:206-16. [PMID: 21316353 DOI: 10.1016/j.brainres.2011.02.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2010] [Revised: 11/05/2010] [Accepted: 02/03/2011] [Indexed: 11/19/2022]
Abstract
The threshold of hearing decreases with increasing sound duration up to a limit of a few hundred milliseconds, whereas other auditory time constants are orders of magnitude shorter. A possible solution to this resolution-integration paradox is that temporal integration occurs more centrally than the computations that depend on high temporal resolution. But this would require information about subthreshold events in the periphery to reach higher centers. Here we show that this prerequisite is fulfilled. The auditory evoked response to a just-perceptible pulse series is essentially independent of whether single pulses are below or above behavioral threshold. The failure to find evidence of temporal integration up to response latencies of 30 ms suggests that the integrator is located more centrally than primary auditory cortex. By using noise to its advantage, the auditory system has apparently established a central integration mechanism that is about as efficient as the peripheral one in the visual system.
Affiliation(s)
- Bernd Lütkenhöner, Section of Experimental Audiology, ENT Clinic, Münster University Hospital, Münster, Germany

29
Zacharias N, Sielużycki C, Kordecki W, König R, Heil P. The M100 component of evoked magnetic fields differs by scaling factors: implications for signal averaging. Psychophysiology 2011; 48:1069-82. [PMID: 21342204 DOI: 10.1111/j.1469-8986.2011.01183.x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
MEG and EEG studies of event-related responses often involve comparisons of grand averages, which require homogeneity of variances. Here, we examine the possibility, implied by the nature of neural sources and the measuring principles involved, that the M100 component of auditory-evoked magnetic fields differs only by scaling factors across subjects, hemispheres, stimuli, and sensors. Such a multiplicative model predicts a linear increase of the standard deviation with the mean, and thus would have important implications for averaging and comparing such data. Our analyses, at the sensor and the source level, clearly show that the multiplicative model applies. We therefore propose geometric, rather than arithmetic, averaging of the M100 component across subjects and suggest a novel and superior normalization procedure. Our results question the justification of the common practice of subtracting arithmetic grand averages.
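Under the multiplicative model described in this abstract, each subject's response is a common waveform times a per-subject scale factor, so averaging in log space (the geometric mean) recovers the waveform shape up to a single scale, whereas the arithmetic mean is dominated by subjects with large scale factors. A minimal sketch with simulated data (waveform, scale distribution, and sample sizes are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
template = np.sin(np.linspace(0, np.pi, 100))          # common response waveform shape
scales = rng.lognormal(mean=0.0, sigma=0.8, size=20)   # per-subject scaling factors

# Each row is one subject's response; +1e-3 keeps values positive for the log
responses = np.outer(scales, template + 1e-3)

arithmetic = responses.mean(axis=0)
geometric = np.exp(np.log(responses).mean(axis=0))     # geometric mean across subjects

# By the AM-GM inequality the arithmetic average is inflated relative to the
# geometric one whenever the scale factors differ across subjects.
print(arithmetic.max() > geometric.max())   # → True
```

The geometric mean here equals the template times the geometric mean of the scale factors exactly, which is the sense in which it preserves the shared waveform shape.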
Affiliation(s)
- Norman Zacharias, Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany

30
Miettinen I, Alku P, Salminen N, May PJ, Tiitinen H. Responsiveness of the human auditory cortex to degraded speech sounds: Reduction of amplitude resolution vs. additive noise. Brain Res 2011; 1367:298-309. [DOI: 10.1016/j.brainres.2010.10.037] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2010] [Revised: 10/07/2010] [Accepted: 10/12/2010] [Indexed: 11/15/2022]
31
Prendergast G, Johnson SR, Green GGR. Temporal dynamics of sinusoidal and non-sinusoidal amplitude modulation. Eur J Neurosci 2010; 32:1599-607. [PMID: 21039961 DOI: 10.1111/j.1460-9568.2010.07423.x] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
Abstract
Previous behavioural studies in human subjects have demonstrated the importance of amplitude modulations to the process of intelligible speech perception. In functional neuroimaging studies of amplitude modulation processing, the inherent assumption is that all sounds are decomposed into simple building blocks, i.e. sinusoidal modulations. The encoding of complex and dynamic stimuli is often modelled to be the linear addition of a number of sinusoidal modulations and so, by investigating the response of the cortex to sinusoidal modulation, an experimenter can probe the same mechanisms used to encode speech. The experiment described in this paper used magnetoencephalography to measure the auditory steady-state response produced by six sounds, all modulated in amplitude at the same frequency but which formed a continuum from sinusoidal to pulsatile modulation. Analysis of the evoked response shows that the magnitude of the envelope-following response is highly non-linear, with sinusoidal amplitude modulation producing the weakest steady-state response. Conversely, the phase of the steady-state response was related to the shape of the modulation waveform, with the sinusoidal amplitude modulation producing the shortest latency relative to the other stimuli. It is shown that a point in auditory cortex produces a strong envelope following response to all stimuli on the continuum, but the timing of this response is related to the shape of the modulation waveform. The results suggest that steady-state response characteristics are determined by features of the waveform outside of the modulation domain and that the use of purely sinusoidal amplitude modulations may be misleading, especially in the context of speech encoding.
Affiliation(s)
- Garreth Prendergast
- York Neuroimaging Centre, University of York, The Biocentre, York Science Park, Heslington, York YO10 5DG, UK.
32
Auditory Sensitivity, Speech Perception, and Reading Development and Impairment. Educ Psychol Rev 2010. [DOI: 10.1007/s10648-010-9137-4]
33
Beal DS, Cheyne DO, Gracco VL, Quraan MA, Taylor MJ, De Nil LF. Auditory evoked fields to vocalization during passive listening and active generation in adults who stutter. Neuroimage 2010; 52:1645-53. [PMID: 20452437 DOI: 10.1016/j.neuroimage.2010.04.277]
Abstract
We used magnetoencephalography to investigate auditory evoked responses to speech vocalizations and non-speech tones in adults who do and do not stutter. Neuromagnetic field patterns were recorded as participants listened to a 1 kHz tone, playback of their own productions of the vowel /i/ and vowel-initial words, and actively generated the vowel /i/ and vowel-initial words. Activation of the auditory cortex at approximately 50 and 100 ms was observed during all tasks. A reduction in the peak amplitudes of the M50 and M100 components was observed during the active generation versus passive listening tasks dependent on the stimuli. Adults who stutter did not differ in the amount of speech-induced auditory suppression relative to fluent speakers. Adults who stutter had shorter M100 latencies for the actively generated speaking tasks in the right hemisphere relative to the left hemisphere but the fluent speakers showed similar latencies across hemispheres. During passive listening tasks, adults who stutter had longer M50 and M100 latencies than fluent speakers. The results suggest that there are timing, rather than amplitude, differences in auditory processing during speech in adults who stutter and are discussed in relation to hypotheses of auditory-motor integration breakdown in stuttering.
Affiliation(s)
- Deryk S Beal
- Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada.
34
Howard MF, Poeppel D. Hemispheric asymmetry in mid and long latency neuromagnetic responses to single clicks. Hear Res 2009; 257:41-52. [PMID: 19647788 DOI: 10.1016/j.heares.2009.07.010]
Abstract
We examine lateralization in the evoked magnetic field response to a click stimulus, observing that lateralization effects previously demonstrated for tones, noise, frequency modulated sweeps and certain syllables are also observed for (acoustically simpler) clicks. These effects include a difference in the peak latency of the M100 component of the evoked field waveform such that the peak consistently appears earlier in the right hemisphere, as well as rightward lateralization of field amplitude during the rise of the M100 component. Our review of previous findings on M100 lateralization, taken together with our data on the click-evoked response, leads to the hypothesis that these lateralization effects are elicited by stimuli containing a sharp sound energy onset or acoustic transition rather than specific types of stimuli. We argue that both the latency and the amplitude lateralization effects have a common origin, namely, hemispheric asymmetry in the amplitude of the magnetic field generated by one or more sources active during the M100 rise. While anatomical asymmetry cannot be excluded as the cause of the amplitude difference, we propose that the difference reflects a rightward asymmetry in the processing of sound energy onsets that potentially underlies the lateralization of several functions.
Affiliation(s)
- Mary F Howard
- Department of Linguistics, University of Maryland, College Park, MD 20742-7505, USA.
35
Tong Y, Melara RD, Rao A. P2 enhancement from auditory discrimination training is associated with improved reaction times. Brain Res 2009; 1297:80-8. [PMID: 19651109 DOI: 10.1016/j.brainres.2009.07.089]
Abstract
This study examined the effects of training in a pure tone discrimination task on relations between behavioral performance and the magnitude of auditory event-related potentials (ERPs). Participants performed both passive (listening) and active (detecting) oddball tasks in a pretest and two posttests (1 and 9 weeks after training). Training produced a long-term benefit in both perceptual sensitivity and reaction times (RT). Training enhanced the amplitude of the P2 ERP component to both standards and deviants at both early and delayed posttests. Importantly, P2 enhancement was strongly associated with discrimination RT, suggesting that experience facilitates rapid, preattentive access to perceptual representations. Training also elevated the mismatch negativity, possibly due to the strengthening of acoustic traces. Finally, training enhanced the amplitude of the P3 component to deviants across posttests, indicating a long-lasting effect of discrimination training on stimulus salience.
Affiliation(s)
- Yunxia Tong
- Gene, Cognition, and Psychosis Program, National Institute of Mental Health, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA
36
Michalewski HJ, Starr A, Zeng FG, Dimitrijevic A. N100 cortical potentials accompanying disrupted auditory nerve activity in auditory neuropathy (AN): effects of signal intensity and continuous noise. Clin Neurophysiol 2009; 120:1352-63. [PMID: 19535287 DOI: 10.1016/j.clinph.2009.05.013]
Abstract
OBJECTIVE Auditory temporal processing in quiet is impaired in auditory neuropathy (AN), resembling that of normal-hearing subjects tested in noise. N100 latencies were measured from AN subjects at several tone intensities in quiet and in noise for comparison with a group of normal-hearing individuals. METHODS Subjects were tested with brief 100 ms tones (1.0 kHz, 100-40 dB SPL) in quiet and in continuous noise (90 dB SPL). N100 latency and amplitude were analyzed as a function of signal intensity and audibility. RESULTS N100 latency in AN in quiet was delayed and amplitude was reduced compared to the normal group; the extent of latency delay was related to psychoacoustic measures of gap detection threshold and speech recognition scores, but not to audibility. Noise in normal-hearing subjects was accompanied by N100 latency delays and amplitude reductions paralleling those found in AN tested in quiet. Additional N100 latency delays and amplitude reductions occurred in AN with noise. CONCLUSIONS N100 latency to tones and performance on auditory temporal tasks were related in AN subjects. Noise masking in normal-hearing subjects affected N100 latency to resemble AN in quiet. SIGNIFICANCE N100 latency to tones may serve as an objective measure of the efficiency of auditory temporal processing.
Affiliation(s)
- Henry J Michalewski
- Department of Neurology, Med. Surge I, Room 150, University of California, Irvine, CA 92697-4290, USA.
37
Neubauer H, Heil P. A physiological model for the stimulus dependence of first-spike latency of auditory-nerve fibers. Brain Res 2008; 1220:208-23. [DOI: 10.1016/j.brainres.2007.08.081]
38
König R, Sieluzycki C, Simserides C, Heil P, Scheich H. Effects of the task of categorizing FM direction on auditory evoked magnetic fields in the human auditory cortex. Brain Res 2008; 1220:102-17. [PMID: 18420183 DOI: 10.1016/j.brainres.2008.02.086]
Abstract
We examined effects of the task of categorizing linear frequency-modulated (FM) sweeps into rising and falling on auditory evoked magnetic fields (AEFs) from the human auditory cortex, recorded by means of whole-head magnetoencephalography. AEFs in this task condition were compared with those in a passive condition where subjects had been asked to just passively listen to the same stimulus material. We found that the M100-peak latency was significantly shorter for the task condition than for the passive condition in the left but not in the right hemisphere. Furthermore, the M100-peak latency was significantly shorter in the right than in the left hemisphere for the passive and the task conditions. In contrast, the M100-peak amplitude did not differ significantly between conditions, nor between hemispheres. We also analyzed the activation strength derived from the integral of the absolute magnetic field over constant time windows between stimulus onset and 260 ms. We isolated an early, narrow time range between about 60 ms and 80 ms that showed larger values in the task condition, most prominently in the right hemisphere. These results add to other imaging and lesion studies which suggest a specific role of the right auditory cortex in identifying FM sweep direction and thus in categorizing FM sweeps into rising and falling.
Affiliation(s)
- Reinhard König
- Leibniz Institute for Neurobiology, Brenneckestrasse 6, 39118 Magdeburg, Germany
39
Nelken I, Bizley JK, Nodal FR, Ahmed B, King AJ, Schnupp JWH. Responses of auditory cortex to complex stimuli: functional organization revealed using intrinsic optical signals. J Neurophysiol 2008; 99:1928-41. [PMID: 18272880 DOI: 10.1152/jn.00469.2007]
Abstract
We used optical imaging of intrinsic signals to study the large-scale organization of ferret auditory cortex in response to complex sounds. Cortical responses were collected during continuous stimulation by sequences of sounds with varying frequency, period, or interaural level differences. We used a set of stimuli that differ in spectral structure, but have the same periodicity and therefore evoke the same pitch percept (click trains, sinusoidally amplitude modulated tones, and iterated ripple noise). These stimuli failed to reveal a consistent periodotopic map across the auditory fields imaged. Rather, gradients of period sensitivity differed for the different types of periodic stimuli. Binaural interactions were studied both with single contralateral, ipsilateral, and diotic broadband noise bursts and with sequences of broadband noise bursts with varying level presented contralaterally, ipsilaterally, or in opposite phase to both ears. Contralateral responses were generally largest and ipsilateral responses were smallest when using single noise bursts, but the extent of the activated area was large and comparable in all three aural configurations. Modulating the amplitude in counter phase to the two ears generally produced weaker modulation of the optical signals than the modulation produced by the monaural stimuli. These results suggest that binaural interactions seen in cortex are most likely predominantly due to subcortical processing. Thus our optical imaging data do not support the theory that the primary or nonprimary cortical fields imaged are topographically organized to form consistent maps of systematically varying sensitivity either to stimulus pitch or to simple binaural properties of the acoustic stimuli.
Affiliation(s)
- Israel Nelken
- Department of Neurobiology, Interdisciplinary Center for Neural Computation, The Hebrew University, Jerusalem, Israel.
40
Hämäläinen JA, Leppänen PHT, Guttorm TK, Lyytinen H. Event-related potentials to pitch and rise time change in children with reading disabilities and typically reading children. Clin Neurophysiol 2008; 119:100-15. [PMID: 18320604 DOI: 10.1016/j.clinph.2007.09.064]
Affiliation(s)
- J A Hämäläinen
- Department of Psychology, University of Jyväskylä, PO Box 35, Agora, 40014 Jyväskylä, Finland.
41
Hämäläinen JA, Leppänen PHT, Guttorm TK, Lyytinen H. N1 and P2 components of auditory event-related potentials in children with and without reading disabilities. Clin Neurophysiol 2007; 118:2263-75. [PMID: 17714985 DOI: 10.1016/j.clinph.2007.07.007]
Abstract
OBJECTIVE The effects of within-stimulus presentation rate and rise time on basic auditory processing were investigated in children with reading disabilities and typically reading children. METHODS Children with reading disabilities (RD; N=19) and control children (N=20) were studied using event-related potentials (ERPs). Paired stimuli were used with two different within-pair-intervals (WPI; 10 and 255 ms) and two different rise times (10 and 130 ms). Each stimulus was presented with equal probability and long between-pair inter-stimulus intervals (1-5 s). The study focused on the N1 and P2 components. RESULTS The P2 responses to the first tone in the pair showed differences between children with RD and control children. Also, children with RD had a larger N1 response than control children to stimuli with short WPI and long rise time. CONCLUSIONS These results provide evidence for basic auditory processing abnormalities in children with RD. This processing difference could be related to the extraction of stimulus features from sounds or to attentional mechanisms. SIGNIFICANCE Our results support behavioral findings that children with RD and control children process rise times differently. More than half of the children with RD showed atypical auditory processing.
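The rise-time manipulation is easy to make concrete (Python/NumPy sketch; the tone frequency, duration, and linear ramp shape are illustrative assumptions rather than the exact stimuli). The two envelopes below reach full amplitude after 10 ms and 130 ms respectively:

```python
import numpy as np

fs = 16_000  # sample rate (Hz), assumed

def tone_with_rise(rise_ms: float, dur_ms: float = 300.0, f: float = 1000.0) -> np.ndarray:
    """Tone whose amplitude envelope rises linearly over `rise_ms`, then stays flat."""
    n = int(fs * dur_ms / 1000)
    t = np.arange(n) / fs
    env = np.minimum(t / (rise_ms / 1000), 1.0)
    return env * np.sin(2 * np.pi * f * t)

short_rise = tone_with_rise(10)    # abrupt onset (10 ms ramp)
long_rise = tone_with_rise(130)    # gradual onset (130 ms ramp)
```

Within the first few milliseconds the short-rise tone is already near full amplitude while the long-rise tone is still quiet, which is the onset-energy difference the N1/P2 comparison targets.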
Affiliation(s)
- J A Hämäläinen
- Department of Psychology, University of Jyväskylä, PO Box 35, Agora, 40014 Jyväskylä, Finland.
42
Altmann CF, Nakata H, Noguchi Y, Inui K, Hoshiyama M, Kaneoke Y, Kakigi R. Temporal dynamics of adaptation to natural sounds in the human auditory cortex. Cereb Cortex 2007; 18:1350-60. [PMID: 17893422 DOI: 10.1093/cercor/bhm166]
Abstract
We aimed at testing the cortical representation of complex natural sounds within auditory cortex by conducting 2 human magnetoencephalography experiments. To this end, we employed an adaptation paradigm and presented subjects with pairs of complex stimuli, namely, animal vocalizations and spectrally matched noise. In Experiment 1, we presented stimulus pairs of same or different animal vocalizations and same or different noise. Our results suggest a 2-step process of adaptation effects: first, we observed a general item-unspecific reduction of the N1m peak amplitude at 100 ms, followed by an item-specific amplitude reduction of the P2m component at 200 ms after stimulus onset for both animal vocalizations and noise. Multiple dipole source modeling revealed the right lateral Heschl's gyrus and the bilateral superior temporal gyrus as sites of adaptation. In Experiment 2, we tested for cross-adaptation between animal vocalizations and spectrally matched noise sounds, by presenting pairs of an animal vocalization and its corresponding or a different noise sound. We observed cross-adaptation effects for the P2m component within bilateral superior temporal gyrus. Thus, our results suggest selectivity of the evoked magnetic field at 200 ms after stimulus onset in nonprimary auditory cortex for the spectral fine structure of complex sounds rather than their temporal dynamics.
Affiliation(s)
- Christian F Altmann
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki 444-8585, Japan.
43
Corriveau K, Pasquini E, Goswami U. Basic auditory processing skills and specific language impairment: a new look at an old hypothesis. J Speech Lang Hear Res 2007; 50:647-66. [PMID: 17538107 DOI: 10.1044/1092-4388(2007/046)]
Abstract
PURPOSE To explore the sensitivity of children with specific language impairment (SLI) to amplitude-modulated and durational cues that are important for perceiving suprasegmental speech rhythm and stress patterns. METHOD Sixty-three children between 7 and 11 years of age were tested, 21 of whom had a diagnosis of SLI, 21 of whom were matched for chronological age to the SLI sample, and 21 of whom were matched for language age to the SLI sample. All children received a battery of nonspeech auditory processing tasks along with standardized measures of phonology and language. RESULTS As many as 70%-80% of children diagnosed with SLI were found to perform below the 5th percentile of age-matched controls in auditory processing tasks measuring sensitivity to amplitude envelope rise time and sound duration. Furthermore, individual differences in sensitivity to these cues predicted unique variance in language and literacy attainment, even when age, nonverbal IQ, and task-related (attentional) factors were controlled. CONCLUSION Many children with SLI have auditory processing difficulties, but for most children, these are not specific to brief, rapidly successive acoustic cues. Instead, sensitivity to durational and amplitude envelope cues appears to predict language and literacy outcomes more strongly. This finding now requires replication and exploration in languages other than English.
Affiliation(s)
- Kathleen Corriveau
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, England
44
Lütkenhöner B, Klein JS. Auditory evoked field at threshold. Hear Res 2007; 228:188-200. [PMID: 17434696 DOI: 10.1016/j.heares.2007.02.011]
Abstract
Auditory evoked responses are widely used for estimating electrophysiological thresholds, but their relationship to psychophysical thresholds is not necessarily straightforward. Among the aspects that are not well understood is the near-threshold intensity dependence of the evoked response. Here, we investigated wave N100m of the auditory evoked field. The stimulus was a 1-kHz tone with an effective duration of about 110 ms. Up to 10 dB above the psychophysical threshold, the level was varied in steps of 2 dB; further measurements were done at 15, 20, 30, and 40 dB SL. Lower levels were presented with higher probability, to partially compensate for the expected reduction in signal-to-noise ratio with decreasing level. The latency of the N100m could be characterized as the sum of a transmission delay and an integration time. The level dependence of the latter was consistent with the assumption of an almost perfectly operating sound-pressure integrator. The N100m amplitude increased roughly linearly with the level in dB (thus, as a logarithmic function of intensity), showing signs of saturation at higher levels.
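The latency decomposition reported here can be mimicked by a toy model (Python; the transmission delay, threshold value, and constant-envelope assumption are hypothetical illustrations, not the paper's fitted values). A perfect sound-pressure integrator accumulates amplitude until a fixed criterion, so every 20 dB increase in level, a tenfold amplitude increase, shortens the integration time tenfold:

```python
def integration_time_ms(level_db: float, threshold: float = 1.0) -> float:
    """Time for a perfect pressure integrator to accumulate `threshold`
    units, given a constant-envelope tone of the stated level.

    The integral of a constant amplitude a over time t is a*t, so the
    criterion is reached at t = threshold / a.
    """
    amp = 10 ** (level_db / 20)        # amplitude re: an assumed 0 dB reference
    return 1000 * threshold / amp

# Predicted latency = fixed transmission delay + level-dependent integration.
TRANSMISSION_DELAY_MS = 30.0           # hypothetical constant
latencies = {db: TRANSMISSION_DELAY_MS + integration_time_ms(db)
             for db in (0, 10, 20, 30, 40)}
```

The model reproduces only the qualitative trend (latency shrinks toward the fixed delay as level rises); the paper's measured responses are what constrain the actual parameters.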
Affiliation(s)
- Bernd Lütkenhöner
- Section of Experimental Audiology, ENT Clinic, Münster University Hospital, Münster, Germany.
45
Kuriki S, Ohta K, Koyama S. Persistent responsiveness of long-latency auditory cortical activities in response to repeated stimuli of musical timbre and vowel sounds. Cereb Cortex 2007; 17:2725-32. [PMID: 17289776 DOI: 10.1093/cercor/bhl182]
Abstract
Long-latency auditory-evoked magnetic field and potential show strong attenuation of N1m/N1 responses when an identical stimulus is presented repeatedly due to adaptation of auditory cortical neurons. This adaptation is weak in subsequently occurring P2m/P2 responses, being weaker for piano chords than single piano notes. The adaptation of P2m is more suppressed in musicians having long-term musical training than in nonmusicians, whereas the amplitude of P2 is enhanced preferentially in musicians as the spectral complexity of musical tones increases. To address the key issues of whether such high responsiveness of P2m/P2 responses to complex sounds is intrinsic and common to nonmusical sounds, we conducted a magnetoencephalographic study on participants who had no experience of musical training, using consecutive trains of piano and vowel sounds. The dipole moment of the P2m sources located in the auditory cortex indicated significantly suppressed adaptation in the right hemisphere both to piano and vowel sounds. Thus, the persistent responsiveness of the P2m activity may be inherent, not induced by intensive training, and common to spectrally complex sounds. The right hemisphere dominance of the responsiveness to musical and speech sounds suggests analysis of acoustic features of object sounds to be a significant function of P2m activity.
Affiliation(s)
- Shinya Kuriki
- Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan
46
Seither-Preisler A, Patterson R, Krumbholz K, Seither S, Lütkenhöner B. Evidence of pitch processing in the N100m component of the auditory evoked field. Hear Res 2006; 213:88-98. [PMID: 16464550 DOI: 10.1016/j.heares.2006.01.003]
Abstract
The latency of the N100m component of the auditory evoked field (AEF) is sensitive to the period and spectrum of a sound. However, little attention has so far been paid to the wave shape at stimulus onset, which might have biased previous results. This problem was addressed in the present study by aligning the first major peaks in the acoustic waveforms. The stimuli were harmonic tones (spectral range: 800-5000 Hz) with periods corresponding to 100, 200, 400, and 800 Hz. The frequency components were in sine, alternating or random phase. Simulations with a computational model suggest that the auditory-nerve activity is strongly affected by both the period and the relative phase of the stimulus, whereas the output of the more central pitch processor only depends on the period. Our AEF data, recorded from the right hemisphere of seven subjects, are consistent with the latter prediction: the latency of the N100m depends on the period, but not on the relative phase of the stimulus components. This suggests that the N100m reflects temporal pitch extraction, not necessarily implying that the underlying generators are directly involved in this analysis.
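The phase manipulation described here can be sketched like so (Python/NumPy; the component amplitudes, duration, and exact alternating-phase convention are illustrative assumptions): harmonics of a 100 Hz fundamental restricted to the 800-5000 Hz band are summed in sine, alternating, or random phase. All three versions share the same waveform period, which is the point of the design: the period, and hence the pitch, is unchanged even though the temporal fine structure differs.

```python
import numpy as np

fs = 32_000                             # sample rate (Hz), assumed
f0 = 100                                # fundamental (Hz); waveform period 10 ms
t = np.arange(0, 0.5, 1 / fs)
harmonics = np.arange(800, 5001, f0)    # components within the 800-5000 Hz band
rng = np.random.default_rng(1)

def harmonic_complex(phase_mode: str) -> np.ndarray:
    """Sum of equal-amplitude harmonics of f0 with the chosen phase scheme."""
    tone = np.zeros_like(t)
    for i, f in enumerate(harmonics):
        if phase_mode == "sine":
            phi = 0.0
        elif phase_mode == "alternating":   # successive components 90 deg apart
            phi = (i % 2) * np.pi / 2
        else:                               # "random"
            phi = rng.uniform(0, 2 * np.pi)
        tone += np.sin(2 * np.pi * f * t + phi)
    return tone / len(harmonics)

stimuli = {m: harmonic_complex(m) for m in ("sine", "alternating", "random")}
```

Because every component is an integer multiple of f0, each waveform repeats exactly every 1/f0 seconds regardless of the phase scheme.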
Affiliation(s)
- Annemarie Seither-Preisler
- Department of Experimental Audiology, ENT Clinic, Münster University Hospital, Kardinal von Galen-Ring 10, D-48129 Münster, Germany.
47
Sussman E, Steinschneider M. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex. Brain Res 2006; 1075:165-74. [PMID: 16460703 PMCID: PMC2846765 DOI: 10.1016/j.brainres.2005.12.074]
Abstract
Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.
Affiliation(s)
- Elyse Sussman
- Department of Neuroscience, Albert Einstein College of Medicine, 1410 Pelham Parkway South, NY 10461, USA.
48
Shahin A, Roberts LE, Pantev C, Trainor LJ, Ross B. Modulation of P2 auditory-evoked responses by the spectral complexity of musical sounds. Neuroreport 2005; 16:1781-5. [PMID: 16237326 DOI: 10.1097/01.wnr.0000185017.29316.63]
Abstract
We investigated whether N1 and P2 auditory-evoked responses are modulated by the spectral complexity of musical sounds in pianists and non-musicians. Study participants were presented with three variants of a C4 piano tone equated for temporal envelope but differing in the number of harmonics contained in the stimulus. A fourth tone was a pure tone matched to the fundamental frequency of the piano tones. A simultaneous electroencephalographic/magnetoencephalographic recording was made. P2 amplitude was larger in musicians and increased with spectral complexity preferentially in this group, but N1 did not. The results suggest that P2 reflects the specific features of acoustic stimuli experienced during musical practice and point to functional differences in P2 and N1 that relate to their underlying mechanisms.
Affiliation(s)
- Antoine Shahin
- Department of Medical Physics and Applied Radiation Sciences, McMaster University, Hamilton, Ontario, Canada
49
Tiitinen H, Mäkelä AM, Mäkinen V, May PJC, Alku P. Disentangling the effects of phonation and articulation: hemispheric asymmetries in the auditory N1m response of the human brain. BMC Neurosci 2005; 6:62. [PMID: 16225699 PMCID: PMC1280927 DOI: 10.1186/1471-2202-6-62]
Abstract
Background The cortical activity underlying the perception of vowel identity has typically been addressed by manipulating the first and second formant frequency (F1 & F2) of the speech stimuli. These two values, originating from articulation, are already sufficient for the phonetic characterization of vowel category. In the present study, we investigated how the spectral cues caused by articulation are reflected in cortical speech processing when combined with phonation, the other major part of speech production manifested as the fundamental frequency (F0) and its harmonic integer multiples. To study the combined effects of articulation and phonation we presented vowels with either high (/a/) or low (/u/) formant frequencies which were driven by three different types of excitation: a natural periodic pulseform reflecting the vibration of the vocal folds, an aperiodic noise excitation, or a tonal waveform. The auditory N1m response was recorded with whole-head magnetoencephalography (MEG) from ten human subjects in order to resolve whether brain events reflecting articulation and phonation are specific to the left or right hemisphere of the human brain. Results The N1m responses for the six stimulus types displayed a considerable dynamic range of 115–135 ms, and were elicited faster (~10 ms) by the high-formant /a/ than by the low-formant /u/, indicating an effect of articulation. While excitation type had no effect on the latency of the right-hemispheric N1m, the left-hemispheric N1m elicited by the tonally excited /a/ was some 10 ms earlier than that elicited by the periodic and the aperiodic excitation. The amplitude of the N1m in both hemispheres was systematically stronger to stimulation with natural periodic excitation. Also, stimulus type had a marked (up to 7 mm) effect on the source location of the N1m, with periodic excitation resulting in more anterior sources than aperiodic and tonal excitation. 
Conclusion The auditory brain areas of the two hemispheres exhibit differential tuning to natural speech signals, observable already in the passive recording condition. The variations in the latency and strength of the auditory N1m response can be traced back to the spectral structure of the stimuli. More specifically, the combined effects of the harmonic comb structure, originating from the natural voice excitation caused by the fluctuating vocal folds, and of the location of the formant frequencies, originating from the vocal tract, lead to asymmetric behaviour of the left and right hemispheres.
Affiliation(s)
- Hannu Tiitinen
- Apperception & Cortical Dynamics (ACD), Department of Psychology, P.O.B. 9, FIN-00014 University of Helsinki, Finland
- BioMag Laboratory, Engineering Centre, Helsinki University Central Hospital, Finland
- Anna Mari Mäkelä
- Apperception & Cortical Dynamics (ACD), Department of Psychology, P.O.B. 9, FIN-00014 University of Helsinki, Finland
- BioMag Laboratory, Engineering Centre, Helsinki University Central Hospital, Finland
- Ville Mäkinen
- Apperception & Cortical Dynamics (ACD), Department of Psychology, P.O.B. 9, FIN-00014 University of Helsinki, Finland
- BioMag Laboratory, Engineering Centre, Helsinki University Central Hospital, Finland
- Patrick JC May
- Apperception & Cortical Dynamics (ACD), Department of Psychology, P.O.B. 9, FIN-00014 University of Helsinki, Finland
- BioMag Laboratory, Engineering Centre, Helsinki University Central Hospital, Finland
- Paavo Alku
- Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology, Espoo, Finland
50
Mäkelä AM, Alku P, May PJC, Mäkinen V, Tiitinen H. Left-hemispheric brain activity reflects formant transitions in speech sounds. Neuroreport 2005; 16:549-53. [PMID: 15812305 DOI: 10.1097/00001756-200504250-00006]
Abstract
Connected speech is characterized by formant transitions whereby formant frequencies change over time. Here, using magnetoencephalography, we investigated the cortical activity of 10 participants in response to constant-formant vowels and diphthongs with formant transitions. All the stimuli elicited prominent auditory N100m responses, but the formant transitions resulted in latency modulations specific to the left hemisphere. Following the elicitation of the N100m, cortical activity shifted some 10 mm towards anterior brain areas. This late activity resembled the N400m, typically obtained with more complex utterances such as words and/or sentences. Thus, the present study demonstrates how magnetoencephalography can be used to investigate the spatiotemporal evolution of cortical activity related to the various stages of speech processing.
Affiliation(s)
- Anna Mari Mäkelä
- Apperception & Cortical Dynamics, Department of Psychology, University of Helsinki, Helsinki, Finland.