1. Herrmann B, Maess B, Johnsrude IS. Sustained responses and neural synchronization to amplitude and frequency modulation in sound change with age. Hear Res 2023;428:108677. PMID: 36580732. DOI: 10.1016/j.heares.2022.108677.
Abstract
Perception of speech requires sensitivity to features, such as amplitude and frequency modulations, that are often temporally regular. Previous work suggests age-related changes in neural responses to temporally regular features, but little work has focused on age differences for different types of modulations. We recorded magnetoencephalography in younger (21-33 years) and older adults (53-73 years) to investigate age differences in neural responses to slow (2-6 Hz sinusoidal and non-sinusoidal) modulations in amplitude, frequency, or combined amplitude and frequency. Audiometric pure-tone average thresholds were elevated in older compared to younger adults, indicating subclinical hearing impairment in the recruited older-adult sample. Neural responses to sound onset (independent of temporal modulations) were increased in magnitude in older compared to younger adults, suggesting hyperresponsivity and a loss of inhibition in the aged auditory system. Analyses of neural activity to modulations revealed greater neural synchronization with amplitude, frequency, and combined amplitude-frequency modulations for older compared to younger adults. This potentiated response generalized across different degrees of temporal regularity (sinusoidal and non-sinusoidal), although neural synchronization was generally lower for non-sinusoidal modulation. Despite greater synchronization, sustained neural activity was reduced in older compared to younger adults for sounds modulated both sinusoidally and non-sinusoidally in frequency. Our results suggest age differences in the sensitivity of the auditory system to features present in speech and other natural sounds.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, North York, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON M5S 1A1, Canada; Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON N6A 3K7, Canada
- Burkhard Maess
- Max Planck Institute for Human Cognitive and Brain Sciences, Brain Networks Unit, Leipzig 04103, Germany
- Ingrid S Johnsrude
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON N6A 3K7, Canada; School of Communication Sciences & Disorders, The University of Western Ontario, London, ON N6A 5B7, Canada
2. Yin B, Shi Z, Wang Y, Meck WH. Oscillation/Coincidence-Detection Models of Reward-Related Timing in Corticostriatal Circuits. Timing Time Percept 2022. DOI: 10.1163/22134468-bja10057.
Abstract
The major tenets of beat-frequency/coincidence-detection models of reward-related timing are reviewed in light of recent behavioral and neurobiological findings. This includes the emphasis on a core timing network embedded in the motor system that is comprised of a corticothalamic-basal ganglia circuit. Therein, a central hub provides timing pulses (i.e., predictive signals) to the entire brain, including a set of distributed satellite regions in the cerebellum, cortex, amygdala, and hippocampus that are selectively engaged in timing in a manner that is more dependent upon the specific sensory, behavioral, and contextual requirements of the task. Oscillation/coincidence-detection models also emphasize the importance of a tuned ‘perception’ learning and memory system whereby target durations are detected by striatal networks of medium spiny neurons (MSNs) through the coincidental activation of different neural populations, typically utilizing patterns of oscillatory input from the cortex and thalamus or derivations thereof (e.g., population coding) as a time base. The measure of success of beat-frequency/coincidence-detection accounts, such as the Striatal Beat-Frequency model of reward-related timing (SBF), is their ability to accommodate new experimental findings while maintaining their original framework, thereby making testable experimental predictions concerning diagnosis and treatment of issues related to a variety of dopamine-dependent basal ganglia disorders, including Huntington’s and Parkinson’s disease.
Affiliation(s)
- Bin Yin
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA; School of Psychology, Fujian Normal University, Fuzhou 350117, Fujian, China
- Zhuanghua Shi
- Department of Psychology, Ludwig Maximilian University of Munich, 80802 Munich, Germany
- Yaxin Wang
- School of Psychology, Fujian Normal University, Fuzhou 350117, Fujian, China
- Warren H. Meck
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
3. Ten Oever S, Martin AE. An oscillating computational model can track pseudo-rhythmic speech by using linguistic predictions. eLife 2021;10:e68066. PMID: 34338196. PMCID: PMC8328513. DOI: 10.7554/eLife.68066.
Abstract
Neuronal oscillations putatively track speech in order to optimize sensory processing. However, it is unclear how isochronous brain oscillations can track pseudo-rhythmic speech input. Here we propose that oscillations can track pseudo-rhythmic speech when considering that speech time is dependent on content-based predictions flowing from internal language models. We show that the temporal dynamics of speech are dependent on the predictability of words in a sentence. A computational model including oscillations, feedback, and inhibition is able to track pseudo-rhythmic speech input. As the model processes the input, it generates temporal phase codes, which are a candidate mechanism for carrying information forward in time. The model is optimally sensitive to the natural temporal speech dynamics and can explain empirical data on temporal speech illusions. Our results suggest that speech tracking does not have to rely only on the acoustics but could also exploit ongoing interactions between oscillations and constraints flowing from internal language models.
Affiliation(s)
- Sanne Ten Oever
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Andrea E Martin
- Language and Computation in Neural Systems group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, Netherlands
4. Herrmann B, Butler BE. Hearing loss and brain plasticity: the hyperactivity phenomenon. Brain Struct Funct 2021;226:2019-2039. PMID: 34100151. DOI: 10.1007/s00429-021-02313-9.
Abstract
Many aging adults experience some form of hearing problems that may arise from auditory peripheral damage. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but also results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity, auditory perceptual difficulties that may result from hyperactivity, and outline open conceptual and methodological questions related to the study of hyperactivity. We suggest that hyperactivity alters all aspects of hearing, including spectral, temporal, and spatial hearing, and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating hyperactivity in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Blake E Butler
- Department of Psychology & The Brain and Mind Institute, University of Western Ontario, London, ON, Canada; National Centre for Audiology, University of Western Ontario, London, ON, Canada
5. Modulation Spectra Capture EEG Responses to Speech Signals and Drive Distinct Temporal Response Functions. eNeuro 2021;8:ENEURO.0399-20.2020. PMID: 33272971. PMCID: PMC7810259. DOI: 10.1523/ENEURO.0399-20.2020.
Abstract
Speech signals have a long-term modulation spectrum whose shape is distinct from that of environmental noise, music, and non-speech vocalizations. Does the human auditory system adapt to the speech long-term modulation spectrum and efficiently extract critical information from speech signals? To answer this question, we tested whether neural responses to speech signals can be captured by non-speech acoustic stimuli with specific modulation spectra. We generated amplitude-modulated (AM) noise with the speech modulation spectrum and with 1/f modulation spectra of different exponents to imitate the temporal dynamics of different natural sounds. We presented these AM stimuli and a 10-min piece of natural speech to 19 human participants undergoing electroencephalography (EEG) recording. We derived temporal response functions (TRFs) to the AM stimuli of different spectrum shapes and found distinct neural dynamics for each type of TRF. We then used the TRFs of the AM stimuli to predict neural responses to the speech signals and found that (1) the TRFs of AM stimuli with modulation-spectrum exponents of 1, 1.5, and 2 preferentially captured EEG responses to speech in the δ band and (2) the θ-band speech responses were best captured by AM stimuli with an exponent of 0.75. Our results suggest that the human auditory system shows specificity to the long-term modulation spectrum and is equipped with characteristic neural algorithms tailored to extract critical acoustic information from speech signals.
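The TRF approach used above can be illustrated with a minimal ridge-regression sketch on synthetic data; the sampling rate, lag range, kernel, and regularization below are illustrative choices, not the study's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100, 2000                 # illustrative sampling rate (Hz) and sample count

# Simulated stimulus envelope and "EEG": the EEG is the envelope convolved
# with a known kernel peaking at 80 ms, plus noise.
stim = rng.standard_normal(n)
true_trf = np.zeros(30)
true_trf[8] = 1.0                 # 8 samples = 80 ms at fs = 100 Hz
eeg = np.convolve(stim, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# TRF estimation: regress the EEG on time-lagged copies of the stimulus
# (lags 0-290 ms) with ridge regularization.
n_lags = 30
X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])
X[:n_lags, :] = 0                 # discard wrap-around rows introduced by np.roll
lam = 1.0                         # ridge parameter (in practice cross-validated)
trf = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

peak_lag_ms = int(np.argmax(trf)) * 1000 / fs
```

With a white-noise stimulus, the estimated TRF recovers the simulated 80 ms response peak; predicting held-out EEG from such TRFs is then a single matrix product `X @ trf`.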
6. A novel approach to investigate subcortical and cortical sensitivity to temporal structure simultaneously. Hear Res 2020;398:108080. PMID: 33038827. DOI: 10.1016/j.heares.2020.108080.
Abstract
Hearing loss is associated with changes at the peripheral, subcortical, and cortical auditory stages. Research often focuses on these stages in isolation, but peripheral damage has cascading effects on central processing, and different stages are interconnected through extensive feedforward and feedback projections. Accordingly, assessment of the entire auditory system is needed to understand auditory pathology. Using a novel stimulus paired with electroencephalography in young, normal-hearing adults, we assess neural function at multiple stages of the auditory pathway simultaneously. We employ click trains that repeatedly accelerate then decelerate (3.5 Hz click-rate-modulation) introducing varying inter-click-intervals (4 to 40 ms). We measured the amplitude of cortical potentials, and the latencies and amplitudes of Waves III and V of the auditory brainstem response (ABR), to clicks as a function of preceding inter-click-interval. This allowed us to assess cortical processing of click-rate-modulation, as well as adaptation and neural recovery time in subcortical structures (probably cochlear nuclei and inferior colliculi). Subcortical adaptation to inter-click intervals was reflected in longer latencies. Cortical responses to the 3.5 Hz modulation included phase-locking, probably originating from auditory cortex, and sustained activity likely originating from higher-level cortices. We did not observe any correlations between subcortical and cortical responses. By recording neural responses from different stages of the auditory system simultaneously, we can study functional relationships among levels of the auditory system, which may provide a new and helpful window on hearing and hearing impairment.
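A click train of this kind can be sketched as follows. The 4-40 ms interval range and 3.5 Hz click-rate modulation come from the abstract; the sinusoidal interval law and the audio parameters are assumptions made for illustration only:

```python
import numpy as np

fs = 48000        # audio sampling rate (Hz); illustrative
dur = 2.0         # stimulus duration (s); illustrative
mod_rate = 3.5    # click-rate modulation frequency (Hz), per the abstract

# Place clicks iteratively: the inter-click interval (ICI) swings
# sinusoidally between 4 and 40 ms at the 3.5 Hz modulation rate,
# so the train repeatedly accelerates and decelerates.
times, t = [], 0.0
while t < dur:
    times.append(t)
    ici = 0.022 + 0.018 * np.sin(2 * np.pi * mod_rate * t)  # 4-40 ms
    t += ici

train = np.zeros(int(dur * fs))
train[(np.array(times) * fs).astype(int)] = 1.0   # unit-amplitude clicks

icis_ms = np.diff(times) * 1000                   # realized ICIs in ms
```

Each click's ABR can then be analyzed as a function of the ICI that preceded it, which is what allows subcortical adaptation and cortical rate-modulation responses to be measured from the same recording.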
7. Li L, Ito S, Yotsumoto Y. Effect of change saliency and neural entrainment on flicker-induced time dilation. J Vis 2020;20(6):15. PMID: 32574359. PMCID: PMC7416891. DOI: 10.1167/jov.20.6.15.
Abstract
When a visual stimulus flickers periodically and rhythmically, its perceived duration tends to exceed its physical duration in the peri-second range. Although flicker-induced time dilation is a robust time illusion, its underlying neural mechanisms remain inconclusive. The neural entrainment account proposes that neural entrainment to the exogenous visual stimulus, marked by steady-state visual evoked potentials (SSVEPs) over the visual cortex, is the cause of time dilation. By contrast, the saliency account argues that conscious perception of the flicker changes is indispensable. In the current study, we examined these two accounts separately. The first two experiments manipulated the level of saliency around the critical flicker fusion threshold (CFF) in a duration discrimination task to probe the effect of change saliency. The amount of dilation correlated with the level of change saliency. The next two experiments investigated whether neural entrainment alone could also induce perceived dilation. To preclude change saliency, we used a combination of two high-frequency flickers above the CFF whose beat frequency could still, in theory, induce neural entrainment at a low frequency. Results revealed a moderate time dilation induced by the combined high-frequency flickers. Although behavioral results suggested engagement of neural entrainment, electroencephalography showed neither larger power nor inter-trial coherence (ITC) at the beat frequency. In summary, change saliency was the most critical factor determining the presence and strength of time dilation, whereas neural entrainment had a moderate influence. These results highlight the influence of higher-level visual processing on time perception.
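Inter-trial coherence, the measure used above to test for entrainment at the beat frequency, is the magnitude of the mean unit-length phase vector across trials: near 1 when trials are phase-locked, near 0 when phases are random. A minimal sketch on synthetic trials (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 250, 500              # sampling rate (Hz) and samples per trial (2 s)
f = 5.5                       # frequency of interest (Hz), e.g. a beat frequency
t = np.arange(n) / fs

# Synthetic data: 40 trials phase-locked at f, 40 trials with random phase.
locked = np.array([np.sin(2 * np.pi * f * t) + rng.standard_normal(n)
                   for _ in range(40)])
random_ = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                    + rng.standard_normal(n) for _ in range(40)])

def itc(trials, fs, f):
    """Inter-trial coherence at frequency f: |mean across trials of the
    unit-normalized Fourier coefficient at the nearest frequency bin|."""
    freqs = np.fft.rfftfreq(trials.shape[1], 1 / fs)
    k = int(np.argmin(np.abs(freqs - f)))
    spec = np.fft.rfft(trials, axis=1)[:, k]
    return np.abs(np.mean(spec / np.abs(spec)))
```

With these parameters the phase-locked trials yield an ITC close to 1 and the random-phase trials an ITC near the chance floor of roughly 1/sqrt(40).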
8.
Abstract
Natural sounds contain acoustic dynamics ranging from tens to hundreds of milliseconds. How does the human auditory system encode acoustic information over wide-ranging timescales to achieve sound recognition? Previous work (Teng et al. 2017) demonstrated a temporal coding preference for the theta and gamma ranges, but it remains unclear how acoustic dynamics between these two ranges are coded. Here, we generated artificial sounds with temporal structures over timescales from ~200 to ~30 ms and investigated temporal coding on different timescales. Participants discriminated sounds with temporal structures at different timescales while undergoing magnetoencephalography recording. Although considerable intertrial phase coherence can be induced by acoustic dynamics of all the timescales, classification analyses reveal that the acoustic information of all timescales is preferentially differentiated through the theta and gamma bands, but not through the alpha and beta bands; stimulus reconstruction shows that the acoustic dynamics in the theta and gamma ranges are preferentially coded. We demonstrate that the theta and gamma bands show the generality of temporal coding with comparable capacity. Our findings provide a novel perspective: acoustic information of all timescales is discretised into two temporal chunks for further perceptual analysis.
Affiliation(s)
- Xiangbin Teng
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt, Germany
- David Poeppel
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt, Germany; Department of Psychology, New York University, New York, NY 10003, USA
9. Auksztulewicz R, Myers NE, Schnupp JW, Nobre AC. Rhythmic Temporal Expectation Boosts Neural Activity by Increasing Neural Gain. J Neurosci 2019;39:9806-9817. PMID: 31662425. PMCID: PMC6891052. DOI: 10.1523/JNEUROSCI.0925-19.2019.
Abstract
Temporal orienting improves sensory processing, akin to other top-down biases. However, it is unknown whether these improvements reflect increased neural gain to any stimuli presented at expected time points, or specific tuning to task-relevant stimulus aspects. Furthermore, while other top-down biases are selective, the extent of trade-offs across time is less well characterized. Here, we tested whether gain and/or tuning of auditory frequency processing in humans is modulated by rhythmic temporal expectations, and whether these modulations are specific to time points relevant for task performance. Healthy participants (N = 23) of either sex performed an auditory discrimination task while their brain activity was measured using magnetoencephalography/electroencephalography (M/EEG). Acoustic stimulation consisted of sequences of brief distractors interspersed with targets, presented in a rhythmic or jittered way. Target rhythmicity not only improved behavioral discrimination accuracy and M/EEG-based decoding of targets, but also of irrelevant distractors preceding these targets. To explain this finding in terms of increased sensitivity and/or sharpened tuning to auditory frequency, we estimated tuning curves based on M/EEG decoding results, with separate parameters describing gain and sharpness. The effect of rhythmic expectation on distractor decoding was linked to a gain increase only, suggesting increased neural sensitivity to any stimuli presented at relevant time points. SIGNIFICANCE STATEMENT: Being able to predict when an event may happen can improve perception and action related to this event, likely due to the alignment of neural activity to the temporal structure of stimulus streams. However, it is unclear whether rhythmic increases in neural sensitivity are specific to task-relevant targets, and whether they competitively impair stimulus processing at unexpected time points. By combining magnetoencephalographic and electroencephalographic recordings, neural decoding of auditory stimulus features, and modeling, we found that rhythmic expectation improved neural decoding of both relevant targets and irrelevant distractors presented at expected time points, but did not competitively impair stimulus processing at unexpected time points. Using a quantitative model, we linked these results to nonspecific neural gain increases due to rhythmic expectation.
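The gain-versus-sharpness distinction drawn above can be made concrete with a toy Gaussian tuning-curve parametrization; this is a hypothetical illustration, not the authors' fitted model:

```python
import numpy as np

def tuning_curve(freqs, pref, gain, sharpness):
    """Toy Gaussian tuning curve: response to each probe frequency given a
    preferred frequency, a multiplicative gain, and a width parameter."""
    return gain * np.exp(-0.5 * ((freqs - pref) / sharpness) ** 2)

freqs = np.linspace(200.0, 3200.0, 301)   # probe frequencies (Hz), illustrative
baseline = tuning_curve(freqs, pref=1000.0, gain=1.0, sharpness=400.0)
boosted = tuning_curve(freqs, pref=1000.0, gain=1.5, sharpness=400.0)   # gain change only
sharper = tuning_curve(freqs, pref=1000.0, gain=1.0, sharpness=200.0)   # tuning change only

# A pure gain change scales the whole curve without narrowing it;
# a pure tuning change narrows the curve without raising its peak.
halfwidth = lambda r: np.ptp(freqs[r >= 0.5 * r.max()])
```

Fitting both parameters separately to decoding-derived tuning curves, as in the study, is what lets a rhythmic-expectation effect be attributed to gain rather than to sharpened tuning.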
Affiliation(s)
- Ryszard Auksztulewicz
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong Special Administrative Region of the People's Republic of China; Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany; Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
- Nicholas E Myers
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford OX3 7JX, United Kingdom
- Jan W Schnupp
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong Special Administrative Region of the People's Republic of China
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford OX3 7JX, United Kingdom
10. Herrmann B, Buckland C, Johnsrude IS. Neural signatures of temporal regularity processing in sounds differ between younger and older adults. Neurobiol Aging 2019;83:73-85. DOI: 10.1016/j.neurobiolaging.2019.08.028.
11. Teng X, Cogan GB, Poeppel D. Speech fine structure contains critical temporal cues to support speech segmentation. Neuroimage 2019;202:116152. PMID: 31484039. DOI: 10.1016/j.neuroimage.2019.116152.
Abstract
Segmenting the continuous speech stream into units for further perceptual and linguistic analyses is fundamental to speech recognition. The speech amplitude envelope (SE) has long been considered a fundamental temporal cue for segmenting speech. Does the temporal fine structure (TFS), a significant part of speech signals often considered to contain primarily spectral information, contribute to speech segmentation? Using magnetoencephalography, we show that the TFS entrains cortical responses between 3 and 6 Hz and demonstrate, using mutual information analysis, that (i) the temporal information in the TFS can be reconstructed from a measure of frame-to-frame spectral change and correlates with the SE and (ii) spectral resolution is key to the extraction of such temporal information. Furthermore, we show behavioural evidence that, when the SE is temporally distorted, the TFS provides cues for speech segmentation and aids speech recognition significantly. Our findings show that it is insufficient to investigate solely the SE to understand temporal speech segmentation, as the SE and the TFS derived from a band-filtering method convey comparable, if not inseparable, temporal information. We argue for a more synthetic view of speech segmentation: the auditory system groups speech signals coherently in both the temporal and spectral domains.
Affiliation(s)
- Xiangbin Teng
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt 60322, Germany
- Gregory B Cogan
- Department of Neurosurgery, Duke University, Durham, NC 27710, USA
- David Poeppel
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt 60322, Germany; Department of Psychology, New York University, New York, NY 10003, USA
12. Roberts BM, Clarke A, Addante RJ, Ranganath C. Entrainment enhances theta oscillations and improves episodic memory. Cogn Neurosci 2019;9:181-193. PMID: 30198823. DOI: 10.1080/17588928.2018.1521386.
Abstract
Neural oscillations in the theta band have been linked to episodic memory, but it is unclear whether activity patterns that give rise to theta play a causal role in episodic retrieval. Here, we used rhythmic auditory and visual stimulation to entrain neural oscillations to assess whether theta activity contributes to successful memory retrieval. In two separate experiments, human subjects studied words and were subsequently tested on memory for the words ('item recognition') and the context in which each had been previously studied ('source memory'). Between study and test, subjects in the entrainment groups were exposed to audiovisual stimuli designed to enhance activity at 5.5 Hz, whereas subjects in the control groups were exposed to white noise (Expt. 1) or 14 Hz entrainment (Expt. 2). Theta entrainment selectively increased source memory performance in both studies. Electroencephalography (EEG) data in Expt. 2 revealed that theta entrainment resulted in band-specific enhancement of theta power during the entrainment period and during post-entrainment memory retrieval. These results demonstrate a direct link between theta activity and episodic memory retrieval. Targeted manipulation of theta activity could be a promising new approach to enhance theta activity and memory performance in healthy individuals and in patients with memory disorders.
Affiliation(s)
- Brooke M Roberts
- Department of Psychology, University of California at Davis, Davis, CA, USA
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, UK; Department of Psychology, Anglia Ruskin University, Cambridge, UK
- Richard J Addante
- Department of Psychology, California State University, San Bernardino, CA, USA
- Charan Ranganath
- Department of Psychology, University of California at Davis, Davis, CA, USA; Center for Neuroscience, University of California at Davis, Davis, CA, USA
13. Teng X, Tian X, Doelling K, Poeppel D. Theta band oscillations reflect more than entrainment: behavioral and neural evidence demonstrates an active chunking process. Eur J Neurosci 2018;48:2770-2782. PMID: 29044763. PMCID: PMC5904023. DOI: 10.1111/ejn.13742.
Abstract
Parsing continuous acoustic streams into perceptual units is fundamental to auditory perception. Previous studies have uncovered a cortical entrainment mechanism in the delta and theta bands (~1-8 Hz) that correlates with the formation of perceptual units in speech, music, and other quasi-rhythmic stimuli. Whether cortical oscillations in the delta-theta bands are passively entrained by regular acoustic patterns or play an active role in parsing the acoustic stream is debated. Here, we investigate cortical oscillations using novel stimuli with 1/f modulation spectra. These 1/f signals have no rhythmic structure but contain information over many timescales because of their broadband modulation characteristics. We chose 1/f modulation spectra with varying exponents of f, which simulate the dynamics of environmental noise, speech, vocalizations, and music. While undergoing magnetoencephalography (MEG) recording, participants listened to 1/f stimuli and detected embedded target tones. Tone detection performance varied across stimuli of different exponents and can be explained by the local signal-to-noise ratio computed using a temporal window around 200 ms. Furthermore, theta band oscillations, surprisingly, were observed for all stimuli, but robust phase coherence was preferentially displayed by stimuli with exponents 1 and 1.5. We constructed an auditory processing model to quantify acoustic information on various timescales and correlated the model outputs with the neural results. We show that cortical oscillations reflect the chunking of segments longer than about 200 ms. These results suggest an active auditory segmentation mechanism, complementary to entrainment, operating on a timescale of ~200 ms to organize acoustic information.
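A minimal recipe for noise whose amplitude-envelope modulation spectrum falls off as 1/f to a chosen exponent, the stimulus class described above; the 30 Hz envelope cutoff, duration, and other parameters are illustrative assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur = 16000, 2.0          # audio sampling rate (Hz) and duration (s)
n = int(fs * dur)

def am_noise_one_over_f(exponent, fs, n, rng, fmax=30.0):
    """White-noise carrier amplitude-modulated by an envelope whose
    modulation power spectrum falls off as 1/f**exponent up to fmax."""
    freqs = np.fft.rfftfreq(n, 1 / fs)
    amp = np.zeros_like(freqs)
    band = (freqs > 0) & (freqs <= fmax)
    amp[band] = freqs[band] ** (-exponent / 2)   # power ~ amp**2 ~ 1/f**exponent
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    env = np.fft.irfft(amp * phases, n)          # random-phase envelope
    env = (env - env.min()) / (env.max() - env.min())  # scale to [0, 1]
    return env * rng.standard_normal(n), env

stim, env = am_noise_one_over_f(1.0, fs, n, rng)
```

Larger exponents concentrate envelope power at slow modulation rates (smoother, more speech-like dynamics); smaller exponents flatten the spectrum toward noise-like dynamics.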
Affiliation(s)
- Xing Tian
- New York University Shanghai, Shanghai 200122, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Keith Doelling
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- David Poeppel
- Max-Planck-Institute, 60322 Frankfurt, Germany; Department of Psychology, New York University, New York, NY 10003, USA
14. Meyer L, Gumbert M. Synchronization of Electrophysiological Responses with Speech Benefits Syntactic Information Processing. J Cogn Neurosci 2018;30:1066-1074. DOI: 10.1162/jocn_a_01236.
Abstract
In auditory neuroscience, electrophysiological synchronization to low-level acoustic and high-level linguistic features is well established—but its functional purpose for verbal information transmission is unclear. Based on prior evidence for a dependence of auditory task performance on delta-band oscillatory phase, we hypothesized that the synchronization of electrophysiological responses at delta-band frequency to the speech stimulus serves to implicitly align neural excitability with syntactic information. The experimental paradigm of our auditory EEG study uniformly distributed morphosyntactic violations across syntactic phrases of natural sentences, such that violations would occur at points differing in linguistic information content. In support of our hypothesis, we found behavioral responses to morphosyntactic violations to increase with decreasing syntactic information content—in significant correlation with delta-band phase, which had synchronized to our speech stimuli. Our findings indicate that rhythmic electrophysiological synchronization to the speech stream is a functional mechanism that may align neural excitability with linguistic information content, optimizing language comprehension.
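A correlation between behavioral responses and oscillatory phase, as reported above, is typically quantified with a circular-linear correlation. A sketch of that statistic on synthetic data (the coefficient follows Mardia's formula; the data and parameters are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(3)

def circ_linear_corr(phase, x):
    """Circular-linear correlation (Mardia): association between a circular
    variable (phase, radians) and a linear variable x, in [0, 1]."""
    rxc = np.corrcoef(x, np.cos(phase))[0, 1]
    rxs = np.corrcoef(x, np.sin(phase))[0, 1]
    rcs = np.corrcoef(np.cos(phase), np.sin(phase))[0, 1]
    return np.sqrt((rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2))

# Synthetic example: one behavioral measure depends on phase, one does not.
phase = rng.uniform(0, 2 * np.pi, 1000)
dependent = np.cos(phase) + 0.3 * rng.standard_normal(1000)
unrelated = rng.standard_normal(1000)

r_dep = circ_linear_corr(phase, dependent)
r_unrel = circ_linear_corr(phase, unrelated)
```

Because the statistic uses both the cosine and sine projections of phase, it detects a phase-behavior relationship regardless of which phase angle is optimal.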
Affiliation(s)
- Lars Meyer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Matthias Gumbert
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; University of Trento
15. Perceptual Oscillation of Audiovisual Time Simultaneity. eNeuro 2018;5:ENEURO.0047-18.2018. PMID: 29845106. PMCID: PMC5969321. DOI: 10.1523/ENEURO.0047-18.2018.
Abstract
Action and perception are tightly coupled systems requiring coordination and synchronization over time. How the brain achieves synchronization is still a matter of debate, but recent experiments suggest that brain oscillations may play an important role in this process. Brain oscillations have also been proposed to be fundamental in determining time perception. Here, we had subjects perform an audiovisual temporal order judgment task to investigate the fine dynamics of temporal bias and sensitivity before and after the execution of a voluntary hand movement (button press). The reported order of the audiovisual sequence was rhythmically biased as a function of delay from hand action execution. Importantly, we found that it oscillated at a theta range frequency, starting ∼500 ms before and persisting ∼250 ms after the button press, with consistent phase-locking across participants. Our results show that the perception of cross-sensory simultaneity oscillates rhythmically in synchrony with the programming phase of a voluntary action, demonstrating a link between action preparation and bias in temporal perceptual judgments.
16. Neural Signatures of the Processing of Temporal Patterns in Sound. J Neurosci 2018; 38:5466-5477. [PMID: 29773757] [DOI: 10.1523/jneurosci.0346-18.2018]
Abstract
The ability to detect regularities in sound (i.e., recurring structure) is critical for effective perception, enabling, for example, change detection and prediction. Two seemingly unconnected lines of research concern the neural operations involved in processing regularities: one investigates how neural activity synchronizes with temporal regularities (e.g., frequency modulation; FM) in sounds, whereas the other focuses on increases in sustained activity during stimulation with repeating tone-frequency patterns. In three electroencephalography studies with male and female human participants, we investigated whether neural synchronization and sustained neural activity are dissociable, or whether they are functionally interdependent. Experiment I demonstrated that neural activity synchronizes with temporal regularity (FM) in sounds, and that sustained activity increases concomitantly. In Experiment II, phase coherence of FM in sounds was parametrically varied. Although neural synchronization was more sensitive to changes in FM coherence, such changes led to a systematic modulation of both neural synchronization and sustained activity, with magnitude increasing as coherence increased. In Experiment III, participants either performed a duration categorization task on the sounds, or a visual object tracking task to distract attention. Neural synchronization was observed regardless of task, whereas the sustained response was observed only when attention was on the auditory task, not under (visual) distraction. The results suggest that neural synchronization and sustained activity levels are functionally linked: both are sensitive to regularities in sounds. However, neural synchronization might reflect a more sensory-driven response to regularity, compared with sustained activity, which may be influenced by attentional, contextual, or other experiential factors.

SIGNIFICANCE STATEMENT: Optimal perception requires that the auditory system detects regularities in sounds. Synchronized neural activity and increases in sustained neural activity both appear to index the detection of a regularity, but the functional interrelation of these two neural signatures is unknown. In three electroencephalography experiments, we measured both signatures concomitantly while listeners were presented with sounds containing frequency modulations that differed in their regularity. We observed that both neural signatures are sensitive to temporal regularity in sounds, although they functionally decouple when a listener is distracted by a demanding visual task. Our data suggest that neural synchronization reflects a more automatic response to regularity compared with sustained activity, which may be influenced by attentional, contextual, or other experiential factors.
17. Puvvada KC, Summerfelt A, Du X, Krishna N, Kochunov P, Rowland LM, Simon JZ, Hong LE. Delta vs Gamma Auditory Steady State Synchrony in Schizophrenia. Schizophr Bull 2018; 44:378-387. [PMID: 29036430] [PMCID: PMC5814801] [DOI: 10.1093/schbul/sbx078]
Abstract
Background: Delta band (1-4 Hz) neuronal responses support the precision and stability of auditory processing, and a deficit in delta band synchrony may be relevant to auditory domain symptoms in schizophrenia patients.
Methods: Delta band synchronization elicited by a 2.5 Hz auditory steady state response (ASSR) paradigm, along with those from theta (5 Hz), alpha (10 Hz), beta (20 Hz), gamma (40 Hz), and high gamma (80 Hz) frequency ASSR, were compared in 128 patients with schizophrenia, 108 healthy controls, and 55 first-degree relatives (FDR) of patients.
Results: Delta band synchronization was significantly impaired in patients compared with controls (F = 18.3, P < .001). There was a significant 2.5 Hz by 40 Hz ASSR interaction (P = .023), arising from a greater reduction of 2.5 Hz ASSR than of 40 Hz ASSR, in patients compared with controls. Greater deficit in delta ASSR was associated with auditory perceptual abnormality (P = .007) and reduced verbal working memory (P < .001). Gamma frequency ASSR impairment was also significant but more modest (F = 8.7, P = .004), and this deficit was also present in FDR (P = .022).
Conclusions: The ability to sustain delta band oscillation entrainment in the auditory pathway is significantly reduced in schizophrenia patients and appears to be clinically relevant.
Affiliation(s)
- Krishna C Puvvada
- Department of Electrical & Computer Engineering, University of Maryland, College Park, MD
- Ann Summerfelt
- Department of Psychiatry, Maryland Psychiatric Research Center, University of Maryland School of Medicine, Baltimore, MD
- Xiaoming Du
- Department of Psychiatry, Maryland Psychiatric Research Center, University of Maryland School of Medicine, Baltimore, MD
- Nithin Krishna
- Department of Psychiatry, Maryland Psychiatric Research Center, University of Maryland School of Medicine, Baltimore, MD
- Peter Kochunov
- Department of Psychiatry, Maryland Psychiatric Research Center, University of Maryland School of Medicine, Baltimore, MD
- Laura M Rowland
- Department of Psychiatry, Maryland Psychiatric Research Center, University of Maryland School of Medicine, Baltimore, MD
- Jonathan Z Simon
- Department of Electrical & Computer Engineering, University of Maryland, College Park, MD
- Department of Biology, University of Maryland, College Park, MD
- Institute for Systems Research, University of Maryland, College Park, MD
- L Elliot Hong
- Department of Psychiatry, Maryland Psychiatric Research Center, University of Maryland School of Medicine, Baltimore, MD
18. Teng X, Tian X, Rowland J, Poeppel D. Concurrent temporal channels for auditory processing: Oscillatory neural entrainment reveals segregation of function at different scales. PLoS Biol 2017; 15:e2000812. [PMID: 29095816] [PMCID: PMC5667736] [DOI: 10.1371/journal.pbio.2000812]
Abstract
Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at 2 timescales corresponding to the neurophysiological theta band (4-7 Hz) and gamma band ranges (31-45 Hz) but, contrary to expectation, not at the timescale corresponding to alpha (8-12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at 3 timescales (approximately 190-, approximately 100-, and approximately 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in 2 distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a 2-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
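Several entries here (e.g., #16 and #18) quantify synchronization as intertrial phase coherence: the consistency of spectral phase across trials at a given frequency. A minimal generic sketch, assuming trials are stored as a NumPy array of shape (n_trials, n_samples) and the frequency of interest falls near an FFT bin (this does not reproduce any of the authors' actual pipelines):

```python
import numpy as np

def intertrial_phase_coherence(trials, fs, freq):
    """Inter-trial phase coherence (ITPC) at one frequency.

    trials : array of shape (n_trials, n_samples)
    fs     : sampling rate in Hz
    freq   : frequency of interest in Hz (ideally on an FFT bin)
    """
    n_samples = trials.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))          # nearest FFT bin
    coeffs = np.fft.rfft(trials, axis=1)[:, k]   # one complex coefficient per trial
    phases = np.angle(coeffs)
    # Length of the mean resultant vector of the per-trial phases: 0..1
    return np.abs(np.mean(np.exp(1j * phases)))
```

ITPC ranges from 0 (phases uniformly random across trials) to 1 (identical phase on every trial); with random phases its expected value decays roughly as 1/sqrt(n_trials), which is why trial counts should be matched when comparing conditions.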
Affiliation(s)
- Xing Tian
- New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
- Jess Rowland
- School of Visual Arts, New York, New York, United States of America
- Department of Psychology, New York University, New York, New York, United States of America
- David Poeppel
- Max-Planck-Institute, Frankfurt, Germany
- Department of Psychology, New York University, New York, New York, United States of America
19. Henry MJ, Herrmann B, Grahn JA. What can we learn about beat perception by comparing brain signals and stimulus envelopes? PLoS One 2017; 12:e0172454. [PMID: 28225796] [PMCID: PMC5321456] [DOI: 10.1371/journal.pone.0172454]
Abstract
Entrainment of neural oscillations on multiple time scales is important for the perception of speech. Musical rhythms, and in particular the perception of a regular beat in musical rhythms, is also likely to rely on entrainment of neural oscillations. One recently proposed approach to studying beat perception in the context of neural entrainment and resonance (the "frequency-tagging" approach) has received an enthusiastic response from the scientific community. A specific version of the approach involves comparing frequency-domain representations of acoustic rhythm stimuli to the frequency-domain representations of neural responses to those rhythms (measured by electroencephalography, EEG). The relative amplitudes at specific EEG frequencies are compared to the relative amplitudes at the same stimulus frequencies, and enhancements at beat-related frequencies in the EEG signal are interpreted as reflecting an internal representation of the beat. Here, we show that frequency-domain representations of rhythms are sensitive to the acoustic features of the tones making up the rhythms (tone duration, onset/offset ramp duration); in fact, relative amplitudes at beat-related frequencies can be completely reversed by manipulating tone acoustics. Crucially, we show that changes to these acoustic tone features, and in turn changes to the frequency-domain representations of rhythms, do not affect beat perception. Instead, beat perception depends on the pattern of onsets (i.e., whether a rhythm has a simple or complex metrical structure). Moreover, we show that beat perception can differ for rhythms that have numerically identical frequency-domain representations. Thus, frequency-domain representations of rhythms are dissociable from beat perception. For this reason, we suggest caution in interpreting direct comparisons of rhythms and brain signals in the frequency domain. Instead, we suggest that combining EEG measurements of neural signals with creative behavioral paradigms is of more benefit to our understanding of beat perception.
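The frequency-tagging comparison this entry cautions about rests on reading a single-sided amplitude spectrum at beat-related frequencies. A minimal sketch of that spectral read-out, assuming the epoch length places the target frequency on (or near) an FFT bin; the function name is illustrative, not from the paper:

```python
import numpy as np

def amplitude_at(signal, fs, freq):
    """Single-sided amplitude-spectrum value at the FFT bin nearest `freq`."""
    n = len(signal)
    amps = 2.0 * np.abs(np.fft.rfft(signal)) / n   # scaled so a unit-amplitude sine reads ~1
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return amps[np.argmin(np.abs(freqs - freq))]
```

Comparing such values at beat-related versus beat-unrelated frequencies is exactly the read-out whose interpretation the entry questions, since tone acoustics alone (duration, ramps) can change these amplitudes without changing perceived beat.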
Affiliation(s)
- Molly J. Henry
- Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, ON, Canada
- Björn Herrmann
- Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, ON, Canada
- Jessica A. Grahn
- Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, ON, Canada
20. Herrmann B, Parthasarathy A, Bartlett EL. Ageing affects dual encoding of periodicity and envelope shape in rat inferior colliculus neurons. Eur J Neurosci 2017; 45:299-311. [PMID: 27813207] [PMCID: PMC5247336] [DOI: 10.1111/ejn.13463]
Abstract
Extracting temporal periodicities and envelope shapes of sounds is important for listening within complex auditory scenes but declines behaviorally with age. Here, we recorded local field potentials (LFPs) and spikes to investigate how ageing affects the neural representations of different modulation rates and envelope shapes in the inferior colliculus of rats. We specifically aimed to explore the input-output (LFP-spike) response transformations of inferior colliculus neurons. Our results show that envelope shapes up to 256-Hz modulation rates are represented in the neural synchronisation phase lags in younger and older animals. Critically, ageing was associated with (i) an enhanced gain in onset response magnitude from LFPs to spikes; (ii) an enhanced gain in neural synchronisation strength from LFPs to spikes for a low modulation rate (45 Hz); (iii) a decrease in LFP synchronisation strength for higher modulation rates (128 and 256 Hz) and (iv) changes in neural synchronisation strength to different envelope shapes. The current age-related changes are discussed in the context of an altered excitation-inhibition balance accompanying ageing.
Affiliation(s)
- Björn Herrmann
- Department of Psychology & Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Aravindakshan Parthasarathy
- Departments of Biological Sciences and Biomedical Engineering, Purdue University, West Lafayette, IN, 47906, USA
- Department of Otology and Laryngology, Harvard Medical School, and Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114
- Edward L. Bartlett
- Departments of Biological Sciences and Biomedical Engineering, Purdue University, West Lafayette, IN, 47906, USA
21. Perceived visual time depends on motor preparation and direction of hand movements. Sci Rep 2016; 6:27947. [PMID: 27283474] [PMCID: PMC4901279] [DOI: 10.1038/srep27947]
Abstract
Perceived time undergoes distortions when we prepare and perform movements, showing compression and/or expansion for visual, tactile and auditory stimuli. However, the actual motor system contribution to these time distortions is far from clear. In this study we investigated visual time perception during preparation of isometric contractions and real movements of the hand in two different directions (right/left). Comparable modulations of visual event-timing are found in the isometric and in the movement condition, excluding explanations based on movement-induced sensory masking or attenuation. Most importantly, and surprisingly, visual time depends on the movement direction, being expanded for hand movements pointing away from the body and compressed in the other direction. Furthermore, the effect of movement direction is not constant, but rather undergoes non-monotonic modulations in the brief moments preceding movement initiation. Our findings indicate that time distortions are strongly linked to the motor system, and they may be unavoidable consequences of the mechanisms subserving sensory-motor integration.
22. Kulashekhar S, Pekkola J, Palva JM, Palva S. The role of cortical beta oscillations in time estimation. Hum Brain Mapp 2016; 37:3262-81. [PMID: 27168123] [DOI: 10.1002/hbm.23239]
Abstract
Estimation of time is central to perception, action, and cognition. Human functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have revealed a positive correlation between the estimation of multi-second temporal durations and neuronal activity in a circuit of sensory and motor areas, prefrontal and temporal cortices, basal ganglia, and cerebellum. The systems-level mechanisms coordinating the collective neuronal activity in these areas have remained poorly understood. Synchronized oscillations regulate communication in neuronal networks and could hence serve such coordination, but their role in the estimation and maintenance of multi-second time intervals has remained largely unknown. We used source-reconstructed magnetoencephalography (MEG) to address the functional significance of local neuronal synchronization, as indexed by the amplitudes of cortical oscillations, in time estimation. MEG was acquired during a working memory (WM) task where the subjects first estimated and then memorized the durations, or in the contrast condition, the colors of dynamic visual stimuli. Time estimation was associated with stronger beta-band (β, 14-30 Hz) oscillations than color estimation in sensory regions and attentional cortical structures that have earlier been associated with time processing. In addition, the encoding of duration information was associated with strengthened gamma-band (γ, 30-120 Hz), and the retrieval and maintenance with alpha-band (α, 8-14 Hz), oscillations. These data suggest that β oscillations may provide a mechanism for estimating short temporal durations, while γ and α oscillations support their encoding, retrieval, and maintenance in memory.
Affiliation(s)
- Shrikanth Kulashekhar
- Neuroscience Center, University of Helsinki, Helsinki, Finland
- BioMag Laboratory, HUS Medical Imaging Center, Helsinki University Central Hospital, Helsinki, Finland
- Johanna Pekkola
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Central Hospital and University of Helsinki, Helsinki, Finland
- Satu Palva
- Neuroscience Center, University of Helsinki, Helsinki, Finland
23.

24.
Abstract
Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8-12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1-5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener's enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content.
25. Heimrath K, Fiene M, Rufener KS, Zaehle T. Modulating Human Auditory Processing by Transcranial Electrical Stimulation. Front Cell Neurosci 2016; 10:53. [PMID: 27013969] [PMCID: PMC4779894] [DOI: 10.3389/fncel.2016.00053]
Abstract
Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. While a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focusing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.
Affiliation(s)
- Tino Zaehle
- Department of Neurology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
26. Tang H, Crain S, Johnson BW. Dual temporal encoding mechanisms in human auditory cortex: Evidence from MEG and EEG. Neuroimage 2016; 128:32-43. [DOI: 10.1016/j.neuroimage.2015.12.053]
27. Temporal expectations and neural amplitude fluctuations in auditory cortex interactively influence perception. Neuroimage 2015; 124:487-497. [PMID: 26386347] [DOI: 10.1016/j.neuroimage.2015.09.019]
Abstract
Alignment of neural oscillations with temporally regular input allows listeners to generate temporal expectations. However, it remains unclear how behavior is governed in the context of temporal variability: What role do temporal expectations play, and how do they interact with the strength of neural oscillatory activity? Here, human participants detected near-threshold targets in temporally variable acoustic sequences. Temporal expectation strength was estimated using an oscillator model and pre-target neural amplitudes in auditory cortex were extracted from magnetoencephalography signals. Temporal expectations modulated target-detection performance, however, only when neural delta-band amplitudes were large. Thus, slow neural oscillations act to gate influences of temporal expectation on perception. Furthermore, slow amplitude fluctuations governed linear and quadratic influences of auditory alpha-band activity on performance. By fusing a model of temporal expectation with neural oscillatory dynamics, the current findings show that human perception in temporally variable contexts relies on complex interactions between multiple neural frequency bands.
28. Nourski KV, Steinschneider M, Rhone AE, Oya H, Kawasaki H, Howard MA, McMurray B. Sound identification in human auditory cortex: Differential contribution of local field potentials and high gamma power as revealed by direct intracranial recordings. Brain Lang 2015; 148:37-50. [PMID: 25819402] [PMCID: PMC4556541] [DOI: 10.1016/j.bandl.2015.03.003]
Abstract
High gamma power has become the principal means of assessing auditory cortical activation in human intracranial studies, albeit at the expense of low-frequency local field potentials (LFPs). It is unclear whether limiting analyses to high gamma impedes the ability to clarify auditory cortical organization. We compared the two measures obtained from posterolateral superior temporal gyrus (PLST) and evaluated their relative utility in sound categorization. Subjects were neurosurgical patients undergoing invasive monitoring for medically refractory epilepsy. Stimuli (consonant-vowel syllables varying in voicing and place of articulation, and control tones) elicited robust evoked potentials and high gamma activity on PLST. LFPs had greater across-subject variability, yet yielded higher classification accuracy relative to high gamma power. Classification was enhanced by including temporal detail of LFPs and by combining LFP and high gamma. We conclude that future studies should consider utilizing both LFP and high gamma when investigating the functional organization of human auditory cortex.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA.
- Mitchell Steinschneider
- Department of Neurology, Albert Einstein College of Medicine, New York, NY 10461, USA
- Department of Neuroscience, Albert Einstein College of Medicine, New York, NY 10461, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Hiroyuki Oya
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Bob McMurray
- Department of Psychology, The University of Iowa, Iowa City, IA 52242, USA
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA 52242, USA
- Department of Linguistics, The University of Iowa, Iowa City, IA 52242, USA
29. Wilsch A, Henry MJ, Herrmann B, Maess B, Obleser J. Slow-delta phase concentration marks improved temporal expectations based on the passage of time. Psychophysiology 2015; 52:910-8. [DOI: 10.1111/psyp.12413]
Affiliation(s)
- Anna Wilsch
- Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Molly J. Henry
- Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Björn Herrmann
- Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Burkhard Maess
- MEG and Cortical Networks, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
30. Nozaradan S. Exploring how musical rhythm entrains brain activity with electroencephalogram frequency-tagging. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130393. [PMID: 25385771] [PMCID: PMC4240960] [DOI: 10.1098/rstb.2013.0393]
Abstract
The ability to perceive a regular beat in music and synchronize to this beat is a widespread human skill. Fundamental to musical behaviour, beat and meter refer to the perception of periodicities while listening to musical rhythms and often involve spontaneous entrainment to move on these periodicities. Here, we present a novel experimental approach inspired by the frequency-tagging approach to understand the perception and production of rhythmic inputs. This approach is illustrated here by recording the human electroencephalogram responses at beat and meter frequencies elicited in various contexts: mental imagery of meter, spontaneous induction of a beat from rhythmic patterns, multisensory integration and sensorimotor synchronization. Collectively, our observations support the view that entrainment and resonance phenomena subtend the processing of musical rhythms in the human brain. More generally, they highlight the potential of this approach to help us understand the link between the phenomenology of musical beat and meter and the bias towards periodicities arising under certain circumstances in the nervous system. Entrainment to music provides a highly valuable framework to explore general entrainment mechanisms as embodied in the human brain.
Affiliation(s)
- Sylvie Nozaradan
- Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), 53, Avenue Mounier-UCL 53.75, Bruxelles 1200, Belgium
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada H3C 3J7
31. van Wassenhove V, Lecoutre L. Duration estimation entails predicting when. Neuroimage 2014; 106:272-83. [PMID: 25462792] [DOI: 10.1016/j.neuroimage.2014.11.005]
Abstract
The estimation of duration can be affected by context and surprise. Using magnetoencephalography (MEG), we tested whether increased neural activity during surprise and following neural suppression in two different contexts supported subjective time dilation (Eagleman and Pariyadath, 2009; Pariyadath and Eagleman, 2012). Sequences of three 300-ms frequency-modulated (FM, control) or pure tones (test) were presented and followed by a fourth FM tone varying in duration. In the test condition, the last FM tone was perceived as significantly longer than its veridical duration (Tse et al., 2004) but did not differ from the perceived duration in the control condition. Several novel and distinct neural signatures were observed in duration estimation: first, neural suppression of standard stimuli was observed for the onset but not for the offset auditory evoked responses. Second, ramping activity increased with veridical duration in the control condition, whereas at the same latency in the test condition, the amplitude of the midlatency response increased with the distance of deviant durations. Third, in both conditions, the amplitude of the offset auditory evoked responses accounted well for participants' performance: the longer the perceived duration, the larger the offset response. Fourth, neural duration demarcated by the peak latencies of the onset and ramping evoked activities indexed a systematic time compression that reliably predicted subjective time perception. Our findings suggest that interval timing undergoes time compression by capitalizing on the predicted offset of an auditory event.
Affiliation(s)
- Virginie van Wassenhove, CEA, DSV/I(2)BM, NeuroSpin, INSERM, U992, Cognitive Neuroimaging Unit, Univ Paris-Sud, F-91191 Gif/Yvette, France.
- Lucille Lecoutre, CEA, DSV/I(2)BM, NeuroSpin, INSERM, U992, Cognitive Neuroimaging Unit, Univ Paris-Sud, F-91191 Gif/Yvette, France.
32
Herrmann B, Henry MJ, Scharinger M, Obleser J. Supplementary motor area activations predict individual differences in temporal-change sensitivity and its illusory distortions. Neuroimage 2014; 101:370-9. [DOI: 10.1016/j.neuroimage.2014.07.026]
33
Entrained neural oscillations in multiple frequency bands comodulate behavior. Proc Natl Acad Sci U S A 2014; 111:14935-40. [PMID: 25267634] [DOI: 10.1073/pnas.1408741111]
Abstract
Our sensory environment is teeming with complex rhythmic structure, to which neural oscillations can become synchronized. Neural synchronization to environmental rhythms (entrainment) is hypothesized to shape human perception, as rhythmic structure acts to temporally organize cortical excitability. In the current human electroencephalography study, we investigated how behavior is influenced by neural oscillatory dynamics when the rhythmic fluctuations in the sensory environment take on a naturalistic degree of complexity. Listeners detected near-threshold gaps in auditory stimuli that were simultaneously modulated in frequency (frequency modulation, 3.1 Hz) and amplitude (amplitude modulation, 5.075 Hz); modulation rates and types were chosen to mimic the complex rhythmic structure of natural speech. Neural oscillations were entrained by both the frequency modulation and amplitude modulation in the stimulation. Critically, listeners' target-detection accuracy depended on the specific phase-phase relationship between entrained neural oscillations in both the 3.1-Hz and 5.075-Hz frequency bands, with the best performance occurring when the respective troughs in both neural oscillations coincided. Neural-phase effects were specific to the frequency bands entrained by the rhythmic stimulation. Moreover, the degree of behavioral comodulation by neural phase in both frequency bands exceeded the degree of behavioral modulation by either frequency band alone. Our results elucidate how fluctuating excitability, within and across multiple entrained frequency bands, shapes the effective neural processing of environmental stimuli. More generally, the frequency-specific nature of behavioral comodulation effects suggests that environmental rhythms act to reduce the complexity of high-dimensional neural states.
34
Flaig NK, Large EW. Dynamic musical communication of core affect. Front Psychol 2014; 5:72. [PMID: 24672492] [PMCID: PMC3956121] [DOI: 10.3389/fpsyg.2014.00072]
Abstract
Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified "scene" that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience.
Affiliation(s)
- Nicole K Flaig, Music Dynamics Lab, Department of Psychology, University of Connecticut, Storrs, CT, USA.
- Edward W Large, Music Dynamics Lab, Department of Psychology, University of Connecticut, Storrs, CT, USA.
35
Henry MJ, Obleser J. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex. PLoS One 2013; 8:e78758. [PMID: 24205309] [PMCID: PMC3812144] [DOI: 10.1371/journal.pone.0078758]
Abstract
Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
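The AM/FM distinction at the heart of this abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the study's stimuli: the pure-tone carrier, the 440-Hz carrier frequency, and the modulation depths are assumptions chosen for demonstration (the paper used narrow-band noises), while the 3-Hz modulation rate is the one reported.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz)
t = np.arange(int(fs * 4)) / fs  # 4 s of signal
mod_rate = 3.0                   # modulation rate (Hz), as in the study
fc = 440.0                       # illustrative carrier frequency (Hz)

# AM: sinusoidal 3-Hz envelope around a constant-frequency carrier
am = (1.0 + 0.8 * np.sin(2 * np.pi * mod_rate * t)) * np.sin(2 * np.pi * fc * t)

# FM: constant envelope, instantaneous frequency swinging around fc at 3 Hz
fm = np.sin(2 * np.pi * fc * t + (50.0 / mod_rate) * np.sin(2 * np.pi * mod_rate * t))

# The AM envelope concentrates energy at the modulation rate: rectify,
# remove the mean, and locate the dominant low-frequency spectral peak.
env = np.abs(am)
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
low = freqs < 10.0
peak_hz = freqs[low][np.argmax(spec[low])]
print(peak_hz)  # dominant envelope component at the 3-Hz modulation rate
```

The same spectral-peak logic applied to neural recordings is what yields the 3-Hz (and, for FM, 6-Hz harmonic) response peaks the abstract describes.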
Affiliation(s)
- Molly J. Henry, Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Jonas Obleser, Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.