1. Gallina J, Ronconi L, Marsicano G, Bertini C. Alpha and theta rhythm support perceptual and attentional sampling in vision. Cortex 2024;177:84-99. PMID: 38848652. DOI: 10.1016/j.cortex.2024.04.020.
Abstract
The visual system operates rhythmically, through temporally coordinated perceptual and attentional processes, involving coexisting patterns in the alpha (7-13 Hz) range at ∼10 Hz and in the theta (3-6 Hz) range, respectively. Here we aimed to disambiguate whether variations in task requirements, in terms of attentional demand and side of target presentation, might influence the occurrence of either perceptual or attentional components in behavioral visual performance, also uncovering possible differences in the sampling mechanisms of the two cerebral hemispheres. To this aim, visuospatial performance was densely sampled in two versions of a visual detection task where the side of target presentation was either fixed (Task 1), with participants monitoring a single hemifield, or randomly varying across trials, with participants monitoring both hemifields simultaneously (Task 2). Performance was analyzed through spectral decomposition to reveal behavioral oscillatory patterns. For Task 1, when attentional resources were focused on one hemifield only, the results revealed an oscillatory pattern fluctuating at ∼10 Hz and ∼6-9 Hz for stimuli presented to the left and the right hemifield, respectively, possibly representing a perceptual sampling mechanism with different efficiency within the left and the right hemispheres. For Task 2, when attentional resources were simultaneously deployed to the two hemifields, a ∼5 Hz rhythm emerged both for stimuli presented to the left and the right, reflecting an attentional sampling process equally supported by the two hemispheres. Overall, the results suggest that distinct perceptual and attentional sampling mechanisms operate at different oscillatory frequencies, and that their prevalence and hemispheric lateralization depend on task requirements.
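As an aside on method: behavioral-oscillation studies of this kind typically take the densely sampled accuracy time course across cue-target delays, remove the slow trend, and inspect its amplitude spectrum. The sketch below illustrates that generic pipeline on synthetic data; the sampling step, frequencies and variable names are illustrative assumptions, not values from the study.

```python
# Illustrative sketch (not the authors' code): spectral decomposition of a
# densely sampled behavioral time course, as in behavioral-oscillation studies.
import numpy as np

rng = np.random.default_rng(0)
delays = np.arange(0.1, 1.0, 0.02)          # cue-target delays in 20-ms steps (assumed)
hit_rate = (0.7 + 0.1 * np.sin(2 * np.pi * 10 * delays)
            + 0.05 * rng.standard_normal(delays.size))

# Remove the slow trend, taper the edges, and take the amplitude spectrum.
detrended = hit_rate - np.polyval(np.polyfit(delays, hit_rate, 2), delays)
windowed = detrended * np.hanning(detrended.size)
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(detrended.size, d=delays[1] - delays[0])

print(f"peak behavioral frequency: {freqs[spectrum[1:].argmax() + 1]:.1f} Hz")
```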
Affiliation(s)
- Jessica Gallina: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena, Italy; Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna, Italy
- Luca Ronconi: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Gianluca Marsicano: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena, Italy; Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna, Italy
- Caterina Bertini: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena, Italy; Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna, Italy
2. Gwilliams L, Marantz A, Poeppel D, King JR. Hierarchical dynamic coding coordinates speech comprehension in the brain. bioRxiv 2024:2024.04.19.590280. PMID: 38659750. PMCID: PMC11042271. DOI: 10.1101/2024.04.19.590280.
Abstract
Speech comprehension requires the human brain to transform an acoustic waveform into meaning. To do so, the brain generates a hierarchy of features that converts the sensory input into increasingly abstract language properties. However, little is known about how these hierarchical features are generated and continuously coordinated. Here, we propose that each linguistic feature is dynamically represented in the brain to simultaneously represent successive events. To test this 'Hierarchical Dynamic Coding' (HDC) hypothesis, we use time-resolved decoding of brain activity to track the construction, maintenance, and integration of a comprehensive hierarchy of language features spanning acoustic, phonetic, sub-lexical, lexical, syntactic and semantic representations. For this, we recorded 21 participants with magnetoencephalography (MEG) while they listened to two hours of short stories. Our analyses reveal three main findings. First, the brain incrementally represents and simultaneously maintains successive features. Second, the duration of these representations depends on their level in the language hierarchy. Third, each representation is maintained by a dynamic neural code, which evolves at a speed commensurate with its corresponding linguistic level. This HDC supports the maintenance of information over time while limiting interference between successive features. Overall, HDC reveals how the human brain continuously builds and maintains a language hierarchy during natural speech comprehension, thereby anchoring linguistic theories to their biological implementations.
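Time-resolved decoding of the sort described here generally fits an independent linear classifier at each time sample of the epoched sensor data and scores it with cross-validation. The sketch below shows that generic pattern on simulated data; the array shapes, labels and injected effect are placeholders rather than details of the study.

```python
# Illustrative sketch (not the authors' pipeline): time-resolved decoding,
# fitting one linear classifier per time point of epoched MEG data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 200, 50, 60
X = rng.standard_normal((n_trials, n_sensors, n_times))   # trials x sensors x time
y = rng.integers(0, 2, n_trials)                            # binary feature label (placeholder)
X[y == 1, 0, 20:40] += 1.0                                  # inject a decodable signal

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
score_over_time = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="roc_auc").mean()
    for t in range(n_times)
])
print("peak decoding AUC:", score_over_time.max().round(2),
      "at sample", score_over_time.argmax())
```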
Affiliation(s)
- Laura Gwilliams: Department of Psychology, Stanford University; Department of Psychology, New York University
- Alec Marantz: Department of Psychology, New York University; Department of Linguistics, New York University
- David Poeppel: Department of Psychology, New York University; Ernst Strungman Institute
3. Ronconi L, Balestrieri E, Baldauf D, Melcher D. Distinct Cortical Networks Subserve Spatio-temporal Sampling in Vision through Different Oscillatory Rhythms. J Cogn Neurosci 2024;36:572-589. PMID: 37172123. DOI: 10.1162/jocn_a_02006.
Abstract
Although visual input arrives continuously, sensory information is segmented into (quasi-)discrete events. Here, we investigated the neural correlates of spatiotemporal binding in humans with magnetoencephalography using two tasks where separate flashes were presented on each trial but were perceived, in a bistable way, as either a single or two separate events. The first task (two-flash fusion) involved judging one versus two flashes, whereas the second task (apparent motion: AM) involved judging coherent motion versus two stationary flashes. Results indicate two different functional networks underlying two unique aspects of temporal binding. In two-flash fusion trials, involving an integration window of ∼50 msec, evoked responses differed as a function of perceptual interpretation by ∼25 msec after stimuli offset. Multivariate decoding of subjective perception based on prestimulus oscillatory phase was significant for alpha-band activity in the right middle temporal (V5/MT) area, with the strength of prestimulus connectivity between early visual areas and V5/MT being predictive of performance. In contrast, the longer integration window (∼130 msec) for AM showed evoked field differences only ∼250 msec after stimuli offset. Phase decoding of the perceptual outcome in AM trials was significant for theta-band activity in the right intraparietal sulcus. Prestimulus theta-band connectivity between V5/MT and intraparietal sulcus best predicted AM perceptual outcome. For both tasks, the phase effects found could not be accounted for by concomitant variations in power. These results show a strong relationship between specific spatiotemporal binding windows and specific oscillations, linked to the information flow between different areas of the where and when visual pathways.
Affiliation(s)
- Luca Ronconi: Vita-Salute San Raffaele University, Milan, Italy; IRCCS San Raffaele Scientific Institute, Milan, Italy
- Elio Balestrieri: University of Münster, Germany; Otto Creutzfeld Center for Cognitive and Behavioural Neuroscience, Münster, Germany
- David Melcher: New York University Abu Dhabi, United Arab Emirates; University of Trento, Rovereto, Italy
4. Naghibi N, Jahangiri N, Khosrowabadi R, Eickhoff CR, Eickhoff SB, Coull JT, Tahmasian M. Embodying Time in the Brain: A Multi-Dimensional Neuroimaging Meta-Analysis of 95 Duration Processing Studies. Neuropsychol Rev 2024;34:277-298. PMID: 36857010. PMCID: PMC10920454. DOI: 10.1007/s11065-023-09588-1.
Abstract
Time is an omnipresent aspect of almost everything we experience internally or in the external world. The experience of time occurs through such an extensive set of contextual factors that, after decades of research, a unified understanding of its neural substrates is still elusive. In this study, following the recent best-practice guidelines, we conducted a coordinate-based meta-analysis of 95 carefully selected neuroimaging papers of duration processing. We categorized the included papers into 14 classes of temporal features according to six categorical dimensions. Then, using the activation likelihood estimation (ALE) technique, we investigated the convergent activation patterns of each class with a cluster-level family-wise error correction at p < 0.05. The regions most consistently activated across the various timing contexts were the pre-SMA and bilateral insula, consistent with an embodied theory of timing in which abstract representations of duration are rooted in sensorimotor and interoceptive experience, respectively. Moreover, class-specific patterns of activation could be roughly divided according to whether participants were timing auditory sequential stimuli, which additionally activated the dorsal striatum and SMA-proper, or visual single-interval stimuli, which additionally activated the right middle frontal and inferior parietal cortices. We conclude that temporal cognition is so entangled with our everyday experience that timing stereotypically common combinations of stimulus characteristics reactivates the sensorimotor systems with which they were first experienced.
Affiliation(s)
- Narges Naghibi: Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Nadia Jahangiri: Faculty of Psychology & Education, Allameh Tabataba'i University, Tehran, Iran
- Reza Khosrowabadi: Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran
- Claudia R Eickhoff: Institute of Neuroscience and Medicine Research, Structural and functional organisation of the brain (INM-1), Jülich Research Center, Jülich, Germany; Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, Düsseldorf, Germany
- Simon B Eickhoff: Institute of Neuroscience and Medicine Research, Brain and Behaviour (INM-7), Jülich Research Center, Wilhelm-Johnen-Straße, Jülich, Germany; Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
- Jennifer T Coull: Laboratoire de Neurosciences Cognitives (UMR 7291), Aix-Marseille Université & CNRS, Marseille, France
- Masoud Tahmasian: Institute of Neuroscience and Medicine Research, Brain and Behaviour (INM-7), Jülich Research Center, Wilhelm-Johnen-Straße, Jülich, Germany; Institute for Systems Neuroscience, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
5. Barchet AV, Henry MJ, Pelofi C, Rimmele JM. Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music. Commun Psychol 2024;2:2. PMID: 39242963. PMCID: PMC11332030. DOI: 10.1038/s44271-023-00053-6.
Abstract
Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in the dominant rhythmic structure. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception and a synchronization task involving syllable and piano tone sequences and motor effectors typically associated with speech (whispering) and music (finger-tapping) were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although synchronization performance was generally better at slow rates, the motor effectors exhibited specific rate preferences. Finger-tapping showed an advantage over whispering at slow but not at faster rates, with synchronization being effector-dependent at slow, but highly correlated at faster rates. Perception of speech and music was better at different rates and predicted by a fast general and a slow finger-tapping synchronization component. Our data suggest partially independent rhythmic timing mechanisms for speech and music, possibly related to a differential recruitment of cortical motor circuitry.
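Synchronization performance in paradigms like this is often summarized by the circular consistency of the relative phase between produced events (taps or syllables) and stimulus onsets. A minimal sketch of that generic measure, on invented timings, follows.

```python
# Illustrative sketch: synchronization consistency as the circular mean vector
# length of tap-to-stimulus relative phases (generic measure, simulated data).
import numpy as np

rng = np.random.default_rng(7)
period = 0.5                                     # 2-Hz stimulus (assumed)
stim_onsets = np.arange(0, 30, period)
taps = stim_onsets + rng.normal(0.02, 0.03, stim_onsets.size)   # small lag plus jitter

relative_phase = 2 * np.pi * ((taps - stim_onsets) % period) / period
consistency = np.abs(np.mean(np.exp(1j * relative_phase)))      # 1 = perfect locking
print("synchronization consistency:", round(consistency, 3))
```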
Affiliation(s)
- Alice Vivien Barchet: Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Molly J Henry: Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Claire Pelofi: Music and Audio Research Laboratory, New York University, New York, NY, USA; Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Johanna M Rimmele: Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
6. Assaneo MF, Orpella J. Rhythms in Speech. Adv Exp Med Biol 2024;1455:257-274. PMID: 38918356. DOI: 10.1007/978-3-031-60183-5_14.
Abstract
Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.
Affiliation(s)
- M Florencia Assaneo: Instituto de Neurobiología, Universidad Autónoma de México, Santiago de Querétaro, Mexico
- Joan Orpella: Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
7. Coull JT, Korolczuk I, Morillon B. The Motor of Time: Coupling Action to Temporally Predictable Events Heightens Perception. Adv Exp Med Biol 2024;1455:199-213. PMID: 38918353. DOI: 10.1007/978-3-031-60183-5_11.
Abstract
Timing and motor function share neural circuits and dynamics, which underpin their close and synergistic relationship. For instance, the temporal predictability of a sensory event optimizes motor responses to that event. Knowing when an event is likely to occur lowers response thresholds, leading to faster and more efficient motor behavior, though in situations of response conflict it can induce impulsive and inappropriate responding. In turn, through a process of active sensing, coupling action to temporally predictable sensory input enhances perceptual processing. Action not only hones perception of the event's onset or duration, but also boosts sensory processing of its non-temporal features, such as pitch or shape. The effects of temporal predictability on motor behavior and sensory processing involve motor and left parietal cortices and are mediated by changes in delta and beta oscillations in motor areas of the brain.
Affiliation(s)
- Jennifer T Coull: Centre for Research in Psychology and Neuroscience (UMR 7077), Aix-Marseille Université & CNRS, Marseille, France
- Inga Korolczuk: Department of Pathophysiology, Medical University of Lublin, Lublin, Poland
- Benjamin Morillon: Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
8. Gunasekaran H, Azizi L, van Wassenhove V, Herbst SK. Characterizing endogenous delta oscillations in human MEG. Sci Rep 2023;13:11031. PMID: 37419933. PMCID: PMC10328979. DOI: 10.1038/s41598-023-37514-1.
Abstract
Rhythmic activity in the delta frequency range (0.5-3 Hz) is a prominent feature of brain dynamics. Here, we examined whether spontaneous delta oscillations, as found in invasive recordings in awake animals, can be observed in non-invasive recordings performed in humans with magnetoencephalography (MEG). In humans, delta activity is commonly reported when processing rhythmic sensory inputs, with direct relationships to behaviour. However, rhythmic brain dynamics observed during rhythmic sensory stimulation cannot be interpreted as an endogenous oscillation. To test for endogenous delta oscillations, we analysed human MEG data during rest. For comparison, we additionally analysed two conditions in which participants engaged in spontaneous finger tapping and silent counting, arguing that internally rhythmic behaviours could incite an otherwise silent neural oscillator. A novel set of analysis steps allowed us to show narrow spectral peaks in the delta frequency range at rest, as well as during overt and covert rhythmic activity. Additional analyses in the time domain revealed that only the resting state condition warranted an interpretation of these peaks as endogenously periodic neural dynamics. In sum, this work shows that, using advanced signal processing techniques, it is possible to observe endogenous delta oscillations in non-invasive recordings of human brain dynamics.
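Isolating a narrow delta-band peak from the strong 1/f background of resting M/EEG spectra generally involves estimating the power spectrum, fitting and removing the aperiodic trend, and then searching for local maxima in the band of interest. The sketch below illustrates that generic procedure on simulated data; the durations, filter settings and thresholds are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the authors' analysis): find a narrow delta-band
# spectral peak after removing the aperiodic 1/f trend from a PSD.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 250.0
t = np.arange(0, 300, 1 / fs)                       # 5 minutes of simulated "rest"
rng = np.random.default_rng(2)
signal = np.cumsum(rng.standard_normal(t.size))     # 1/f-like background (random walk)
signal += 5 * np.sin(2 * np.pi * 2.0 * t)           # embedded 2-Hz oscillation

freqs, psd = welch(signal, fs=fs, nperseg=int(20 * fs))   # 0.05-Hz resolution
keep = (freqs > 0.2) & (freqs < 40)
logf, logp = np.log10(freqs[keep]), np.log10(psd[keep])
aperiodic = np.polyval(np.polyfit(logf, logp, 1), logf)   # linear fit in log-log space
residual = logp - aperiodic

peaks, _ = find_peaks(residual, height=0.5)
delta = [f for f in freqs[keep][peaks] if 0.5 <= f <= 3.0]
print("delta-band peaks (Hz):", np.round(delta, 2))
```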
Affiliation(s)
- Harish Gunasekaran: Cognitive Neuroimaging Unit, NeuroSpin, CEA, INSERM, CNRS, Université Paris-Saclay, 91191, Gif/Yvette, France
- Leila Azizi: Cognitive Neuroimaging Unit, NeuroSpin, CEA, INSERM, CNRS, Université Paris-Saclay, 91191, Gif/Yvette, France
- Virginie van Wassenhove: Cognitive Neuroimaging Unit, NeuroSpin, CEA, INSERM, CNRS, Université Paris-Saclay, 91191, Gif/Yvette, France
- Sophie K Herbst: Cognitive Neuroimaging Unit, NeuroSpin, CEA, INSERM, CNRS, Université Paris-Saclay, 91191, Gif/Yvette, France
9. Large EW, Roman I, Kim JC, Cannon J, Pazdera JK, Trainor LJ, Rinzel J, Bose A. Dynamic models for musical rhythm perception and coordination. Front Comput Neurosci 2023;17:1151895. PMID: 37265781. PMCID: PMC10229831. DOI: 10.3389/fncom.2023.1151895.
Abstract
Rhythmicity permeates large parts of human experience. Humans generate various motor and brain rhythms spanning a range of frequencies. We also experience and synchronize to externally imposed rhythmicity, for example from music and song or from the 24-h light-dark cycles of the sun. In the context of music, humans have the ability to perceive, generate, and anticipate rhythmic structures, for example, "the beat." Experimental and behavioral studies offer clues about the biophysical and neural mechanisms that underlie our rhythmic abilities, and about the different brain areas that are involved, but many open questions remain. In this paper, we review several theoretical and computational approaches, each centered at a different level of description, that address specific aspects of musical rhythmic generation, perception, attention, perception-action coordination, and learning. We survey methods and results from applications of dynamical systems theory, neuro-mechanistic modeling, and Bayesian inference. Some frameworks rely on synchronization of intrinsic brain rhythms that span the relevant frequency range; some formulations involve real-time adaptation schemes for error-correction to align the phase and frequency of a dedicated circuit; others involve learning and dynamically adjusting expectations to make rhythm tracking predictions. Each of the approaches, while initially designed to answer specific questions, offers the possibility of being integrated into a larger framework that provides insights into our ability to perceive and generate rhythmic patterns.
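One minimal formal ingredient shared by several of the surveyed modeling frameworks is an internal oscillator that corrects its phase, and possibly its period, toward a periodic stimulus. The sketch below implements a bare-bones error-correction loop of that kind; the gain values and timings are arbitrary illustrative choices, not parameters of any specific published model.

```python
# Illustrative sketch: a minimal phase- and period-correction model of beat
# tracking (generic error-correction scheme, not any specific published model).
import numpy as np

stimulus_period = 0.60          # seconds between beats (assumed)
onsets = np.arange(0, 20, stimulus_period)

alpha, beta = 0.5, 0.1          # phase- and period-correction gains (arbitrary)
next_tap, period = 0.2, 0.5     # start out of phase and too fast
asynchronies = []

for onset in onsets:
    asynchrony = next_tap - onset              # positive = tap lags the beat
    asynchronies.append(asynchrony)
    period -= beta * asynchrony                # adapt the internal period
    next_tap += period - alpha * asynchrony    # schedule next tap with phase correction

print("first asynchronies (s):", np.round(asynchronies[:4], 3))
print("final asynchrony (s):", round(asynchronies[-1], 3))
```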
Affiliation(s)
- Edward W. Large: Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States; Department of Physics, University of Connecticut, Mansfield, CT, United States
- Iran Roman: Music and Audio Research Laboratory, New York University, New York, NY, United States
- Ji Chul Kim: Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Jonathan Cannon: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Jesse K. Pazdera: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Laurel J. Trainor: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- John Rinzel: Center for Neural Science, New York University, New York, NY, United States; Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- Amitabha Bose: Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, NJ, United States
10. Gallina J, Marsicano G, Romei V, Bertini C. Electrophysiological and Behavioral Effects of Alpha-Band Sensory Entrainment: Neural Mechanisms and Clinical Applications. Biomedicines 2023;11:1399. PMID: 37239069. DOI: 10.3390/biomedicines11051399.
Abstract
Alpha-band (7-13 Hz) activity has been linked to visuo-attentional performance in healthy participants and to impaired functionality of the visual system in a variety of clinical populations, including patients with acquired posterior brain lesions and neurodevelopmental and psychiatric disorders. Crucially, several studies suggested that short uni- and multi-sensory rhythmic stimulation (i.e., visual, auditory and audio-visual) administered in the alpha-band effectively induces transient changes in alpha oscillatory activity and improvements in visuo-attentional performance by synchronizing the intrinsic brain oscillations to the external stimulation (neural entrainment). The present review aims to address the current state of the art of alpha-band sensory entrainment, outlining its potential functional effects and current limitations. Indeed, the results of alpha-band entrainment studies are currently mixed, possibly due to the different stimulation modalities, task features and behavioral and physiological measures employed in the various paradigms. Furthermore, it is still unknown whether prolonged alpha-band sensory entrainment might lead to long-lasting effects at a neural and behavioral level. Overall, despite the limitations emerging from the current literature, alpha-band sensory entrainment may represent a promising and valuable tool, inducing functionally relevant changes in oscillatory activity, with potential rehabilitative applications in individuals characterized by impaired alpha activity.
Affiliation(s)
- Jessica Gallina: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47521 Cesena, Italy; Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121 Bologna, Italy
- Gianluca Marsicano: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47521 Cesena, Italy; Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121 Bologna, Italy
- Vincenzo Romei: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47521 Cesena, Italy; Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121 Bologna, Italy
- Caterina Bertini: Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47521 Cesena, Italy; Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121 Bologna, Italy
11. Foldal MD, Leske S, Blenkmann AO, Endestad T, Solbakk AK. Attentional modulation of beta-power aligns with the timing of behaviorally relevant rhythmic sounds. Cereb Cortex 2023;33:1876-1894. PMID: 35639957. PMCID: PMC9977362. DOI: 10.1093/cercor/bhac179.
Abstract
It is largely unknown how attention adapts to the timing of acoustic stimuli. To address this, we investigated how hemispheric lateralization of alpha (7-13 Hz) and beta (14-24 Hz) oscillations, reflecting voluntary allocation of auditory spatial attention, is influenced by the tempo and predictability of sounds. We recorded electroencephalography while healthy adults listened to rhythmic sound streams with different tempos that were presented dichotically to separate ears, thus permitting manipulation of spatial-temporal attention. Participants responded to stimulus-onset-asynchrony (SOA) deviants (-90 ms) for given tones in the attended rhythm. Rhythm predictability was controlled via the probability of SOA deviants per block. First, the results revealed hemispheric lateralization of beta-power according to attention direction, reflected as ipsilateral enhancement and contralateral suppression, which was amplified in high- relative to low-predictability conditions. Second, fluctuations in the time-resolved beta-lateralization aligned more strongly with the attended than the unattended tempo. Finally, a trend-level association was found between the degree of beta-lateralization and improved ability to distinguish between SOA-deviants in the attended versus unattended ear. Unlike in previous studies, we presented continuous rhythms in which task-relevant and task-irrelevant stimuli had different tempi, thereby demonstrating that temporal alignment of beta-lateralization with attended sounds reflects top-down attention to sound timing.
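A hemispheric lateralization index of the kind analyzed here is generically obtained by band-pass filtering the signal, taking the Hilbert envelope as time-resolved power, and contrasting ipsilateral against contralateral channels. The sketch below shows that computation on two synthetic channels; the channel labels, band limits and amplitudes are invented for illustration.

```python
# Illustrative sketch: time-resolved beta-band power and a lateralization
# index (ipsi - contra) / (ipsi + contra) from two synthetic channels.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
ipsi = np.sin(2 * np.pi * 20 * t) * 2 + rng.standard_normal(t.size)    # enhanced beta
contra = np.sin(2 * np.pi * 20 * t) * 1 + rng.standard_normal(t.size)  # suppressed beta

b, a = butter(4, [14, 24], btype="bandpass", fs=fs)
power = {name: np.abs(hilbert(filtfilt(b, a, x))) ** 2
         for name, x in {"ipsi": ipsi, "contra": contra}.items()}

lateralization = (power["ipsi"] - power["contra"]) / (power["ipsi"] + power["contra"])
print("mean beta lateralization index:", lateralization.mean().round(2))
```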
Affiliation(s)
- Maja D Foldal: Department of Psychology, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway
- Sabine Leske: RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway; Department of Musicology, University of Oslo, Sem Sælands vei 2, 0371 Oslo, Norway
- Alejandro O Blenkmann: Department of Psychology, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway
- Tor Endestad: Department of Psychology, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway; Department of Neuropsychology, Helgeland Hospital, Skjervengan 17, 8657 Mosjøen, Norway
- Anne-Kristin Solbakk: Department of Psychology, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway; RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Forskningsveien 3A, 0373 Oslo, Norway; Department of Neuropsychology, Helgeland Hospital, Skjervengan 17, 8657 Mosjøen, Norway; Department of Neurosurgery, Oslo University Hospital, Sognsvannsveien 20, 0372 Oslo, Norway
12. Ronconi L, Vitale A, Federici A, Mazzoni N, Battaglini L, Molteni M, Casartelli L. Neural dynamics driving audio-visual integration in autism. Cereb Cortex 2023;33:543-556. PMID: 35266994. DOI: 10.1093/cercor/bhac083.
Abstract
Audio-visual (AV) integration plays a crucial role in supporting social functions and communication in autism spectrum disorder (ASD). However, behavioral findings remain mixed and, importantly, little is known about the underlying neurophysiological bases. Studies in neurotypical adults indicate that oscillatory brain activity in different frequencies subserves AV integration, pointing to a central role of (i) individual alpha frequency (IAF), which would determine the width of the cross-modal binding window; (ii) pre-/peri-stimulus theta oscillations, which would reflect the expectation of AV co-occurrence; (iii) post-stimulus oscillatory phase reset, which would temporally align the different unisensory signals. Here, we investigate the neural correlates of AV integration in children with ASD and typically developing (TD) peers, measuring electroencephalography during resting state and in an AV integration paradigm. As for neurotypical adults, AV integration dynamics in TD children could be predicted by the IAF measured at rest and by a modulation of anticipatory theta oscillations at single-trial level. Conversely, in ASD participants, AV integration/segregation was driven exclusively by the neural processing of the auditory stimulus and the consequent auditory-induced phase reset in visual regions, suggesting that a disproportionate elaboration of the auditory input could be the main factor characterizing atypical AV integration in autism.
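The individual alpha frequency (IAF) mentioned above is commonly estimated as the frequency of the spectral peak within the alpha band of resting-state EEG. A minimal sketch with simulated data follows; the sampling rate, recording length and peak frequency are assumptions.

```python
# Illustrative sketch: estimate the individual alpha frequency (IAF) as the
# peak of the resting-state power spectrum within 7-13 Hz.
import numpy as np
from scipy.signal import welch

fs = 250.0
t = np.arange(0, 120, 1 / fs)                        # 2 minutes of simulated rest
rng = np.random.default_rng(4)
eeg = 3 * np.sin(2 * np.pi * 9.5 * t) + rng.standard_normal(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # 0.25-Hz resolution
alpha = (freqs >= 7) & (freqs <= 13)
iaf = freqs[alpha][np.argmax(psd[alpha])]
print(f"estimated IAF: {iaf:.2f} Hz")
```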
Affiliation(s)
- Luca Ronconi: School of Psychology, Vita-Salute San Raffaele University, 20132 Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, 20132 Milan, Italy
- Andrea Vitale: Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy
- Alessandra Federici: Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy; Sensory Experience Dependent (SEED) group, IMT School for Advanced Studies Lucca, 55100 Lucca, Italy
- Noemi Mazzoni: Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy; Laboratory for Autism and Neurodevelopmental Disorders, Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, 38068 Rovereto, Italy; Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto, Italy
- Luca Battaglini: Department of General Psychology, University of Padova, 35131 Padova, Italy; Department of Physics and Astronomy "Galileo Galilei", University of Padova, 35131 Padova, Italy
- Massimo Molteni: Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy
- Luca Casartelli: Theoretical and Cognitive Neuroscience Unit, Child Psychopathology Department, Scientific Institute IRCCS Eugenio Medea, 23842 Bosisio Parini, Italy
13. Gugnowska K, Novembre G, Kohler N, Villringer A, Keller PE, Sammler D. Endogenous sources of interbrain synchrony in duetting pianists. Cereb Cortex 2022;32:4110-4127. PMID: 35029645. PMCID: PMC9476614. DOI: 10.1093/cercor/bhab469.
Abstract
When people interact with each other, their brains synchronize. However, it remains unclear whether interbrain synchrony (IBS) is functionally relevant for social interaction or stems from exposure of individual brains to identical sensorimotor information. To disentangle these views, the current dual-EEG study investigated amplitude-based IBS in pianists jointly performing duets containing a silent pause followed by a tempo change. First, we manipulated the similarity of the anticipated tempo change and measured IBS during the pause, hence, capturing the alignment of purely endogenous, temporal plans without sound or movement. Notably, right posterior gamma IBS was higher when partners planned similar tempi, it predicted whether partners' tempi matched after the pause, and it was modulated only in real, not in surrogate pairs. Second, we manipulated the familiarity with the partner's actions and measured IBS during joint performance with sound. Although sensorimotor information was similar across conditions, gamma IBS was higher when partners were unfamiliar with each other's part and had to attend more closely to the sound of the performance. These combined findings demonstrate that IBS is not merely an epiphenomenon of shared sensorimotor information but can also hinge on endogenous, cognitive processes crucial for behavioral synchrony and successful social interaction.
Affiliation(s)
- Katarzyna Gugnowska: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Giacomo Novembre: Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
- Natalie Kohler: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Arno Villringer: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Peter E Keller: Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Daniela Sammler: Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
14. Kachlicka M, Laffere A, Dick F, Tierney A. Slow phase-locked modulations support selective attention to sound. Neuroimage 2022;252:119024. PMID: 35231629. PMCID: PMC9133470. DOI: 10.1016/j.neuroimage.2022.119024.
Abstract
To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams varying across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with rate, while the passive response did not. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally-driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.
Affiliation(s)
- Magdalena Kachlicka: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Aeron Laffere: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Fred Dick: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England; Division of Psychology & Language Sciences, UCL, Gower Street, London WC1E 6BT, England
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
15. Creel SC. Haunting melodies: Specific memories distort beat perception. Cognition 2022;225:105158. PMID: 35568008. DOI: 10.1016/j.cognition.2022.105158.
Abstract
How much does specific previous experience shape immediate perception? Top-down perceptual inference occurs in ambiguous situations. However, similarity-based accounts such as exemplar theory suggest that similar memories resonate with the percept, predicting that detailed previous experiences can shape perception even when bottom-up cues are unambiguous. The current study tests whether specific musical memories influence beat perception only under ambiguity, or more pervasively, that is, even when clear bottom-up beat cues are present. Listeners were exposed to 16 melodies, half in one meter, half in another. Later, each listener's perception of a specific melody's beat pattern was tested when that melody occurred in either its original meter or another meter. Ratings of metrical probes were influenced not only by fit with the current (test) meter, but also by fit with the meter previously experienced with that melody. Findings suggest that perception is routinely influenced by detailed top-down perceptual imagery.
Affiliation(s)
- Sarah C Creel: Department of Cognitive Science, UC San Diego, 9500 Gilman Drive Mail Code 0515, La Jolla, CA 92093, United States of America
16. Kunchulia M, Parkosadze K, Lomidze N, Tatishvili T, Thomaschke R. Children with developmental dyslexia show an increased variable foreperiod effect. J Cogn Psychol 2022. DOI: 10.1080/20445911.2022.2060989.
Affiliation(s)
- Marina Kunchulia: Institute of Cognitive Neurosciences, Free University of Tbilisi, Tbilisi, Georgia; Laboratory of Vision Physiology, Ivane Beritashvili Centre of Experimental Biomedicine, Tbilisi, Georgia
- Khatuna Parkosadze: Institute of Cognitive Neurosciences, Free University of Tbilisi, Tbilisi, Georgia; Laboratory of Vision Physiology, Ivane Beritashvili Centre of Experimental Biomedicine, Tbilisi, Georgia
- Nino Lomidze: Department of Psychology, McLain Association for Children Georgia, Tbilisi, Georgia
- Tamari Tatishvili: Faculty of Psychology and Educational Sciences, Ivane Javakhishvili Tbilisi State University, Tbilisi, Georgia
- Roland Thomaschke: Department of Psychology, Time, Interaction, and Self-determination Group, at the Cognition, Action and Sustainability Unit, University of Freiburg, Freiburg, Germany
17. Monahan PJ, Schertz J, Fu Z, Pérez A. Unified Coding of Spectral and Temporal Phonetic Cues: Electrophysiological Evidence for Abstract Phonological Features. J Cogn Neurosci 2022;34:618-638. DOI: 10.1162/jocn_a_01817.
Abstract
Spoken word recognition models and phonological theory propose that abstract features play a central role in speech processing. It remains unknown, however, whether auditory cortex encodes linguistic features in a manner beyond the phonetic properties of the speech sounds themselves. We took advantage of the fact that English phonology functionally codes stops and fricatives as voiced or voiceless with two distinct phonetic cues: Fricatives use a spectral cue, whereas stops use a temporal cue. Evidence that these cues can be grouped together would indicate the disjunctive coding of distinct phonetic cues into a functionally defined abstract phonological feature. In English, the voicing feature, which distinguishes the consonants [s] and [t] from [z] and [d], respectively, is hypothesized to be specified only for voiceless consonants (e.g., [s t]). Here, participants listened to syllables in a many-to-one oddball design, while their EEG was recorded. In one block, both voiceless stops and fricatives were the standards. In the other block, both voiced stops and fricatives were the standards. A critical design element was the presence of intercategory variation within the standards. Therefore, a many-to-one relationship, which is necessary to elicit an MMN, existed only if the stop and fricative standards were grouped together. In addition to the ERPs, event-related spectral power was also analyzed. Results showed an MMN effect in the voiceless standards block—an asymmetric MMN—in a time window consistent with processing in auditory cortex, as well as increased prestimulus beta-band oscillatory power to voiceless standards. These findings suggest that (i) there is an auditory memory trace of the standards based on the shared (voiceless) feature, which is only functionally defined; (ii) voiced consonants are underspecified; and (iii) features can serve as a basis for predictive processing. Taken together, these results point toward auditory cortex's ability to functionally code distinct phonetic cues together and suggest that abstract features can be used to parse the continuous acoustic signal.
Affiliation(s)
- Zhanao Fu: Cambridge University, United Kingdom
- Alejandro Pérez: University of Toronto Scarborough, Ontario, Canada; Cambridge University, United Kingdom
18. Moon J, Chau T, Orlandi S. A comparison and classification of oscillatory characteristics in speech perception and covert speech. Brain Res 2022;1781:147778. PMID: 35007548. DOI: 10.1016/j.brainres.2022.147778.
Abstract
Covert speech, the mental imagery of speaking, has been studied increasingly to understand and decode thoughts in the context of brain-computer interfaces. In studies of speech comprehension, neural oscillations are thought to play a key role in the temporal encoding of speech. However, little is known about the role of oscillations in covert speech. In this study, we investigated the oscillatory involvements in covert speech and speech perception. Data were collected from 10 participants with 64-channel EEG. Participants heard the words 'blue' and 'orange' and subsequently mentally rehearsed them. First, continuous wavelet transform was performed on epoched signals, and two-tailed t-tests between the two classes were subsequently conducted to determine statistical differences in frequency and time (t-CWT). Features were also extracted using t-CWT and subsequently classified using a support vector machine. θ and γ phase-amplitude coupling (PAC) was also assessed within and between tasks. All binary classifications produced accuracies significantly greater (80-90%) than chance level, supporting the use of t-CWT in determining relative oscillatory involvements. While the perception task dynamically invoked all frequencies with more prominent θ and α activity, the covert task favoured higher frequencies, with significantly higher γ activity than perception. Moreover, the perception condition produced significant θ-γ PAC, corroborating a reported linkage between syllabic and phonemic sampling. Although this coupling was found to be suppressed in the covert condition, we found significant cross-task coupling between perception θ and covert speech γ. Covert speech processing appears to be largely associated with higher frequencies of EEG. Importantly, the significant cross-task coupling between speech perception and covert speech, in the absence of within-task covert speech PAC, supports the notion that the γ- and θ-bands subserve, respectively, shared and unique encoding processes across tasks.
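Phase-amplitude coupling of the kind assessed here is often quantified with a mean-vector-length modulation index: extract low-frequency phase and high-frequency amplitude with the Hilbert transform and measure how unevenly the amplitude is distributed over phase. The sketch below is a generic version of that computation, not the study's pipeline; the filter bands and simulated signal are placeholders.

```python
# Illustrative sketch: theta-gamma phase-amplitude coupling via the
# mean-vector-length modulation index (generic method, simulated data).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
theta_phase_true = 2 * np.pi * 6 * t
signal = np.sin(theta_phase_true)                                             # 6-Hz theta
signal += (1 + np.cos(theta_phase_true)) * np.sin(2 * np.pi * 40 * t) * 0.5   # coupled gamma
signal += 0.5 * rng.standard_normal(t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 8)))        # theta phase
amplitude = np.abs(hilbert(bandpass(signal, 30, 50)))    # gamma envelope

modulation_index = np.abs(np.mean(amplitude * np.exp(1j * phase))) / amplitude.mean()
print("theta-gamma modulation index:", round(modulation_index, 3))
```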
Affiliation(s)
- Jaewoong Moon: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Tom Chau: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Silvia Orlandi: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
19. Holmes E, Parr T, Griffiths TD, Friston KJ. Active inference, selective attention, and the cocktail party problem. Neurosci Biobehav Rev 2021;131:1288-1304. PMID: 34687699. DOI: 10.1016/j.neubiorev.2021.09.038.
Abstract
In this paper, we introduce a new generative model for an active inference account of preparatory and selective attention, in the context of a classic 'cocktail party' paradigm. In this setup, pairs of words are presented simultaneously to the left and right ears and an instructive spatial cue directs attention to the left or right. We use this generative model to test competing hypotheses about the way that human listeners direct preparatory and selective attention. We show that assigning low precision to words at attended (relative to unattended) locations can explain why a listener reports words from a competing sentence. Under this model, temporal changes in sensory precision were not needed to account for faster reaction times with longer cue-target intervals, but were necessary to explain ramping effects on event-related potentials (ERPs), resembling the contingent negative variation (CNV), during the preparatory interval. These simulations reveal that different processes are likely to underlie the improvement in reaction times and the ramping of ERPs that are associated with spatial cueing.
Affiliation(s)
- Emma Holmes: Department of Speech Hearing and Phonetic Sciences, UCL, London, WC1N 1PF, UK; Wellcome Centre for Human Neuroimaging, UCL, London, WC1N 3AR, UK
- Thomas Parr: Wellcome Centre for Human Neuroimaging, UCL, London, WC1N 3AR, UK
- Timothy D Griffiths: Wellcome Centre for Human Neuroimaging, UCL, London, WC1N 3AR, UK; Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Karl J Friston: Wellcome Centre for Human Neuroimaging, UCL, London, WC1N 3AR, UK
20. Differential contributions of synaptic and intrinsic inhibitory currents to speech segmentation via flexible phase-locking in neural oscillators. PLoS Comput Biol 2021;17:e1008783. PMID: 33852573. PMCID: PMC8104450. DOI: 10.1371/journal.pcbi.1008783.
Abstract
Current hypotheses suggest that speech segmentation—the initial division and grouping of the speech stream into candidate phrases, syllables, and phonemes for further linguistic processing—is executed by a hierarchy of oscillators in auditory cortex. Theta (∼3-12 Hz) rhythms play a key role by phase-locking to recurring acoustic features marking syllable boundaries. Reliable synchronization to quasi-rhythmic inputs, whose variable frequency can dip below cortical theta frequencies (down to ∼1 Hz), requires “flexible” theta oscillators whose underlying neuronal mechanisms remain unknown. Using biophysical computational models, we found that the flexibility of phase-locking in neural oscillators depended on the types of hyperpolarizing currents that paced them. Simulated cortical theta oscillators flexibly phase-locked to slow inputs when these inputs caused both (i) spiking and (ii) the subsequent buildup of outward current sufficient to delay further spiking until the next input. The greatest flexibility in phase-locking arose from a synergistic interaction between intrinsic currents that was not replicated by synaptic currents at similar timescales. Flexibility in phase-locking enabled improved entrainment to speech input, optimal at mid-vocalic channels, which in turn supported syllabic-timescale segmentation through identification of vocalic nuclei. Our results suggest that synaptic and intrinsic inhibition contribute to frequency-restricted and -flexible phase-locking in neural oscillators, respectively. Their differential deployment may enable neural oscillators to play diverse roles, from reliable internal clocking to adaptive segmentation of quasi-regular sensory inputs like speech.

Oscillatory activity in auditory cortex is believed to play an important role in auditory and speech processing. One suggested function of these rhythms is to divide the speech stream into candidate phonemes, syllables, words, and phrases, to be matched with learned linguistic templates. This requires brain rhythms to flexibly synchronize with regular acoustic features of the speech stream. How neuronal circuits implement this task remains unknown. In this study, we explored the contribution of inhibitory currents to flexible phase-locking in neuronal theta oscillators, believed to perform initial syllabic segmentation. We found that a combination of specific intrinsic inhibitory currents at multiple timescales, present in a large class of cortical neurons, enabled exceptionally flexible phase-locking, which could be used to precisely segment speech by identifying vowels at mid-syllable. This suggests that the cells exhibiting these currents are a key component in the brain’s auditory and speech processing architecture.
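To make the mechanism concrete in a highly simplified form: a single integrate-and-fire unit with a spike-triggered outward (adaptation) current already shows the qualitative behavior described above, firing once per input pulse even when the input period varies widely. The sketch below is such a toy, not the paper's biophysical model; all parameter values are invented for illustration.

```python
# Illustrative sketch (not the paper's model): a leaky integrate-and-fire unit
# with a spike-triggered adaptation (outward) current that phase-locks 1:1 to
# a quasi-rhythmic pulse train with variable period.
import numpy as np

dt, tau_m, tau_w = 0.001, 0.020, 0.300      # seconds
threshold, spike_jump, pulse_amp = 1.0, 2.0, 5.0

rng = np.random.default_rng(6)
intervals = rng.uniform(0.3, 0.8, 25)        # quasi-rhythmic, ~1.3-3.3 Hz input
onsets = np.cumsum(intervals)
t = np.arange(0, onsets[-1] + 0.5, dt)
drive = np.zeros_like(t)
for onset in onsets:                         # 10-ms input pulses
    drive[(t >= onset) & (t < onset + 0.010)] = pulse_amp

v, w, spikes = 0.0, 0.0, []
for i, ti in enumerate(t):
    v += dt / tau_m * (-v - w + drive[i])    # membrane with adaptation current w
    w += dt / tau_w * (-w)
    if v >= threshold:
        spikes.append(ti)
        v = 0.0
        w += spike_jump                      # outward current builds after each spike

print(f"{len(onsets)} input pulses, {len(spikes)} spikes (1:1 locking expected)")
```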
21. Huang MX, Huang CW, Harrington DL, Nichols S, Robb-Swan A, Angeles-Quinto A, Le L, Rimmele C, Drake A, Song T, Huang JW, Clifford R, Ji Z, Cheng CK, Lerman I, Yurgil KA, Lee RR, Baker DG. Marked Increases in Resting-State MEG Gamma-Band Activity in Combat-Related Mild Traumatic Brain Injury. Cereb Cortex 2021;30:283-295. PMID: 31041986. DOI: 10.1093/cercor/bhz087.
Abstract
Combat-related mild traumatic brain injury (mTBI) is a leading cause of sustained impairments in military service members and veterans. Recent animal studies show that GABA-ergic parvalbumin-positive interneurons are susceptible to brain injury, with damage causing abnormal increases in spontaneous gamma-band (30-80 Hz) activity. We investigated spontaneous gamma activity in individuals with mTBI using high-resolution resting-state magnetoencephalography source imaging. Participants included 25 symptomatic individuals with chronic combat-related blast mTBI and 35 healthy controls with similar combat experiences. Compared with controls, gamma activity was markedly elevated in mTBI participants throughout frontal, parietal, temporal, and occipital cortices, whereas gamma activity was reduced in ventromedial prefrontal cortex. Across groups, greater gamma activity correlated with poorer performances on tests of executive functioning and visuospatial processing. Many neurocognitive associations, however, were partly driven by the higher incidence of mTBI participants with both higher gamma activity and poorer cognition, suggesting that expansive upregulation of gamma has negative repercussions for cognition particularly in mTBI. This is the first human study to demonstrate abnormal resting-state gamma activity in mTBI. These novel findings suggest the possibility that abnormal gamma activities may be a proxy for GABA-ergic interneuron dysfunction and a promising neuroimaging marker of insidious mild head injuries.
Collapse
Affiliation(s)
- Ming-Xiong Huang
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA.,Department of Radiology, University of California, San Diego, CA, USA
| | - Charles W Huang
- Department of Bioengineering, Stanford University, Stanford, CA, USA
| | - Deborah L Harrington
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA.,Department of Radiology, University of California, San Diego, CA, USA
| | - Sharon Nichols
- Department of Neuroscience, University of California, San Diego, CA, USA
| | - Ashley Robb-Swan
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA.,Department of Radiology, University of California, San Diego, CA, USA
| | - Annemarie Angeles-Quinto
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA.,Department of Radiology, University of California, San Diego, CA, USA
| | - Lu Le
- ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, CA, USA
| | - Carl Rimmele
- ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, CA, USA
| | - Angela Drake
- Cedar Sinai Medical Group Chronic Pain Program, Beverly Hills, CA, USA
| | - Tao Song
- Department of Radiology, University of California, San Diego, CA, USA
| | - Jeffrey W Huang
- Department of Computer Science, Columbia University, New York, NY, USA
| | - Royce Clifford
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California, San Diego, CA, USA; VA Center of Excellence for Stress and Mental Health, San Diego, CA, USA
| | - Zhengwei Ji
- Department of Radiology, University of California, San Diego, CA, USA
| | - Chung-Kuan Cheng
- Department of Computer Science and Engineering, University of California, San Diego, CA, USA
| | - Imanuel Lerman
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA
| | - Kate A Yurgil
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; VA Center of Excellence for Stress and Mental Health, San Diego, CA, USA; Department of Psychological Sciences, Loyola University, New Orleans, LA, USA
| | - Roland R Lee
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Radiology, University of California, San Diego, CA, USA
| | - Dewleen G Baker
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California, San Diego, CA, USA; VA Center of Excellence for Stress and Mental Health, San Diego, CA, USA
| |
Collapse
|
22
|
Hajizadeh A, Matysiak A, Brechmann A, König R, May PJC. Why do humans have unique auditory event-related fields? Evidence from computational modeling and MEG experiments. Psychophysiology 2021; 58:e13769. [PMID: 33475173 DOI: 10.1111/psyp.13769] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 12/04/2020] [Accepted: 12/20/2020] [Indexed: 11/28/2022]
Abstract
Auditory event-related fields (ERFs) measured with magnetoencephalography (MEG) are useful for studying the neuronal underpinnings of auditory cognition in human cortex. They have a highly subject-specific morphology, although certain characteristic deflections (e.g., P1m, N1m, and P2m) can be identified in most subjects. Here, we explore the reason for this subject-specificity through a combination of MEG measurements and computational modeling of auditory cortex. We test whether ERF subject-specificity can predominantly be explained in terms of each subject having an individual cortical gross anatomy, which modulates the MEG signal, or whether individual cortical dynamics is also at play. To our knowledge, this is the first time that tools to address this question are being presented. The effects of anatomical and dynamical variation on the MEG signal are simulated in a model describing the core-belt-parabelt structure of the auditory cortex, and with the dynamics based on the leaky-integrator neuron model. The experimental and simulated ERFs are characterized in terms of the N1m amplitude, latency, and width. Also, we examine the waveform grand-averaged across subjects, and the standard deviation of this grand average. The results show that the intersubject variability of the ERF arises out of both the anatomy and the dynamics of auditory cortex being specific to each subject. Moreover, our results suggest that the latency variation of the N1m is largely related to subject-specific dynamics. The findings are discussed in terms of how learning, plasticity, and sound detection are reflected in the auditory ERFs. The notion of the grand-averaged ERF is critically evaluated.
Collapse
Affiliation(s)
- Aida Hajizadeh
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
| | - Artur Matysiak
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
| | - André Brechmann
- Leibniz Institute for Neurobiology, Combinatorial NeuroImaging Core Facility, Magdeburg, Germany
| | - Reinhard König
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
| | - Patrick J C May
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany; Department of Psychology, Lancaster University, Lancaster, UK
| |
Collapse
|
23
|
Beier EJ, Chantavarin S, Rehrig G, Ferreira F, Miller LM. Cortical Tracking of Speech: Toward Collaboration between the Fields of Signal and Sentence Processing. J Cogn Neurosci 2021; 33:574-593. [PMID: 33475452 DOI: 10.1162/jocn_a_01676] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In recent years, a growing number of studies have used cortical tracking methods to investigate auditory language processing. Although most studies that employ cortical tracking stem from the field of auditory signal processing, this approach should also be of interest to psycholinguistics, particularly the subfield of sentence processing, given its potential to provide insight into dynamic language comprehension processes. However, there has been limited collaboration between these fields, which we suggest is partly because of differences in theoretical background and methodological constraints, some mutually exclusive. In this paper, we first review the theories and methodological constraints that have historically been prioritized in each field and provide concrete examples of how some of these constraints may be reconciled. We then elaborate on how further collaboration between the two fields could be mutually beneficial. Specifically, we argue that the use of cortical tracking methods may help resolve long-standing debates in the field of sentence processing that commonly used behavioral and neural measures (e.g., ERPs) have failed to adjudicate. Similarly, signal processing researchers who use cortical tracking may be able to reduce noise in the neural data and broaden the impact of their results by controlling for linguistic features of their stimuli and by using simple comprehension tasks. Overall, we argue that a balance between the methodological constraints of the two fields will lead to an overall improved understanding of language processing as well as greater clarity on what mechanisms cortical tracking of speech reflects. Increased collaboration will help resolve debates in both fields and will lead to new and exciting avenues for research.
Collapse
|
24
|
Schultz BG, Brown RM, Kotz SA. Dynamic acoustic salience evokes motor responses. Cortex 2020; 134:320-332. [PMID: 33340879 DOI: 10.1016/j.cortex.2020.10.019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2019] [Revised: 06/25/2020] [Accepted: 10/08/2020] [Indexed: 11/28/2022]
Abstract
Audio-motor integration is currently viewed as a predictive process in which the brain simulates upcoming sounds based on voluntary actions. This perspective does not consider how our auditory environment may trigger involuntary action in the absence of prediction. We address this issue by examining the relationship between acoustic salience and involuntary motor responses. We investigate how acoustic features in music contribute to the perception of salience, and whether those features trigger involuntary peripheral motor responses. Participants with little-to-no musical training listened to musical excerpts once while remaining still during the recording of their muscle activity with surface electromyography (sEMG), and again while they continuously rated perceived salience within the music using a slider. We show cross-correlations between 1) salience ratings and acoustic features, 2) acoustic features and spontaneous muscle activity, and 3) salience ratings and spontaneous muscle activity. Amplitude, intensity, and spectral centroid were perceived as the most salient features in music, and fluctuations in these features evoked involuntary peripheral muscle responses. Our results suggest an involuntary mechanism for audio-motor integration, which may rely on brainstem-spinal or brainstem-cerebellar-spinal pathways. Based on these results, we argue that a new framework is needed to explain the full range of human sensorimotor capabilities. This goal can be achieved by considering how predictive and reactive audio-motor integration mechanisms could operate independently or interactively to optimize human behavior.
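The cross-correlation step described above can be sketched with standard tools; the example below is illustrative only (the signals, sampling rate, and the imposed 300 ms lag are fabricated) and is not the authors' analysis code.

```python
# Illustrative sketch: cross-correlating a continuous acoustic feature with a
# (synthetic) sEMG envelope to find the lag of the strongest association.
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 50.0                                         # assumed common sampling rate (Hz)
rng = np.random.default_rng(2)
intensity = rng.standard_normal(int(120 * fs))    # stand-in acoustic feature
emg_env = np.roll(intensity, int(0.3 * fs)) + 0.5 * rng.standard_normal(intensity.size)

a = (intensity - intensity.mean()) / intensity.std()
b = (emg_env - emg_env.mean()) / emg_env.std()
xc = correlate(b, a, mode="full") / a.size        # approximate correlation coefficient
lags = correlation_lags(b.size, a.size, mode="full") / fs
best = int(np.argmax(xc))
# With correlate(emg, audio), a positive lag means the EMG envelope follows the audio.
print(f"peak r = {xc[best]:.2f} at lag = {1e3 * lags[best]:.0f} ms")
```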
Collapse
Affiliation(s)
- Benjamin G Schultz
- Basic & Applied NeuroDynamics Laboratory, Faculty of Psychology & Neuroscience, Department of Neuropsychology & Psychopharmacology, Maastricht University, the Netherlands
| | - Rachel M Brown
- Basic & Applied NeuroDynamics Laboratory, Faculty of Psychology & Neuroscience, Department of Neuropsychology & Psychopharmacology, Maastricht University, the Netherlands
| | - Sonja A Kotz
- Basic & Applied NeuroDynamics Laboratory, Faculty of Psychology & Neuroscience, Department of Neuropsychology & Psychopharmacology, Maastricht University, the Netherlands.
| |
Collapse
|
25
|
Hickey P, Merseal H, Patel AD, Race E. Memory in time: Neural tracking of low-frequency rhythm dynamically modulates memory formation. Neuroimage 2020; 213:116693. [DOI: 10.1016/j.neuroimage.2020.116693] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 02/18/2020] [Accepted: 02/26/2020] [Indexed: 12/12/2022] Open
|
26
|
Grasso PA, Gallina J, Bertini C. Shaping the visual system: cortical and subcortical plasticity in the intact and the lesioned brain. Neuropsychologia 2020; 142:107464. [PMID: 32289349 DOI: 10.1016/j.neuropsychologia.2020.107464] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Accepted: 04/08/2020] [Indexed: 02/06/2023]
Abstract
The visual system is endowed with an incredibly complex organization composed of multiple visual pathways affording both hierarchical and parallel processing. Although most visual information is conveyed by the retina to the lateral geniculate nucleus of the thalamus and then to primary visual cortex, a wealth of alternative subcortical pathways is present. This complex organization is experience dependent and retains plastic properties throughout the lifespan, enabling the system to continuously update its functions in response to variable external needs. Changes can be induced by several factors, including learning and experience, but can also be promoted by the use of non-invasive brain stimulation techniques. Furthermore, besides the astonishing ability of our visual system to spontaneously reorganize after injuries, we now know that exposure to specific rehabilitative training can produce not only important functional modifications but also long-lasting changes within cortical and subcortical structures. The present review aims to update and address the current state of the art on these topics, gathering studies that report relevant modifications of visual functioning together with plastic changes within cortical and subcortical structures, both in the healthy and in the lesioned visual system.
Collapse
Affiliation(s)
- Paolo A Grasso
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, 50135, Italy.
| | - Jessica Gallina
- Department of Psychology, University of Bologna, Bologna, 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, 47521, Italy
| | - Caterina Bertini
- Department of Psychology, University of Bologna, Bologna, 40127, Italy; CsrNC, Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Cesena, 47521, Italy
| |
Collapse
|
27
|
Dikker S, Assaneo MF, Gwilliams L, Wang L, Kösem A. Magnetoencephalography and Language. Neuroimaging Clin N Am 2020; 30:229-238. [PMID: 32336409 DOI: 10.1016/j.nic.2020.01.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
This article provides an overview of research that uses magnetoencephalography to understand the brain basis of human language. The cognitive processes and brain networks that have been implicated in written and spoken language comprehension and production are discussed in relation to different methodologies: we review event-related brain responses, research on the coupling of neural oscillations to speech, oscillatory coupling between brain regions (eg, auditory-motor coupling), and neural decoding approaches in naturalistic language comprehension.
Collapse
Affiliation(s)
- Suzanne Dikker
- Department of Psychology, New York University, 6 Washington Place #275, New York, NY 10003, USA.
| | - M Florencia Assaneo
- Department of Psychology, New York University, 6 Washington Place #275, New York, NY 10003, USA
| | - Laura Gwilliams
- Department of Psychology, New York University, 6 Washington Place #275, New York, NY 10003, USA; New York University Abu Dhabi Research Institute, New York University Abu Dhabi, Saadiyat Island, Abu Dhabi, United Arab Emirates
| | - Lin Wang
- Department of Psychiatry, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, 149 Thirteenth Street, #2306, Charlestown, MA 02129, USA
| | - Anne Kösem
- Lyon Neuroscience Research Center (CRNL), CH Le Vinatier Bâtiment 452, 95, BD Pinel, Bron, Lyon 69675, France
| |
Collapse
|
28
|
Laffere A, Dick F, Tierney A. Effects of auditory selective attention on neural phase: individual differences and short-term training. Neuroimage 2020; 213:116717. [PMID: 32165265 DOI: 10.1016/j.neuroimage.2020.116717] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Revised: 03/02/2020] [Accepted: 03/04/2020] [Indexed: 02/06/2023] Open
Abstract
How does the brain follow a sound that is mixed with others in a noisy environment? One possible strategy is to allocate attention to task-relevant time intervals. Prior work has linked auditory selective attention to alignment of neural modulations with stimulus temporal structure. However, since this prior research used relatively easy tasks and focused on analysis of main effects of attention across participants, relatively little is known about the neural foundations of individual differences in auditory selective attention. Here we investigated individual differences in auditory selective attention by asking participants to perform a 1-back task on a target auditory stream while ignoring a distractor auditory stream presented 180° out of phase. Neural entrainment to the attended auditory stream was strongly linked to individual differences in task performance. Some variability in performance was accounted for by degree of musical training, suggesting a link between long-term auditory experience and auditory selective attention. To investigate whether short-term improvements in auditory selective attention are possible, we gave participants 2 h of auditory selective attention training and found improvements in both task performance and enhancements of the effects of attention on neural phase angle. Our results suggest that although there exist large individual differences in auditory selective attention and attentional modulation of neural phase angle, this skill improves after a small amount of targeted training.
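One common way to quantify entrainment to the attended stream is inter-trial phase coherence (ITPC) at the stream's presentation rate. The sketch below is a stand-in illustration (synthetic trials, assumed 1 Hz stream rate and 250 Hz sampling), not this study's pipeline.

```python
# Illustrative sketch: inter-trial phase coherence at an assumed 1 Hz stream rate.
import numpy as np

fs = 250.0
rate = 1.0                                   # assumed attended-stream rate (Hz)
rng = np.random.default_rng(3)
n_trials, n_samples = 40, int(8 * fs)        # 40 synthetic 8 s trials
t = np.arange(n_samples) / fs
trials = (np.cos(2 * np.pi * rate * t + 0.4)          # weak phase-locked component
          + 2.0 * rng.standard_normal((n_trials, n_samples)))

spectra = np.fft.rfft(trials, axis=-1)
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
k = int(np.argmin(np.abs(freqs - rate)))              # FFT bin at the stream rate
itpc = np.abs(np.mean(np.exp(1j * np.angle(spectra[:, k]))))
print(f"ITPC at {freqs[k]:.2f} Hz: {itpc:.3f}")
```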
Collapse
Affiliation(s)
- Aeron Laffere
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
| | - Fred Dick
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK; Division of Psychology & Language Sciences, UCL, Gower Street, London, WC1E 6BT, UK
| | - Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK.
| |
Collapse
|
29
|
Fontes RM, Marinho V, Carvalho V, Rocha K, Magalhães F, Moura I, Ribeiro P, Velasques B, Cagy M, Gupta DS, Bastos VH, Teles AS, Teixeira S. Time estimation exposure modifies cognitive aspects and cortical activity of attention deficit hyperactivity disorder adults. Int J Neurosci 2020; 130:999-1014. [DOI: 10.1080/00207454.2020.1715394] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Affiliation(s)
- Rhailana Medeiros Fontes
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
| | - Victor Marinho
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
- The Northeast Biotechnology Network, Federal University of Piauí, Teresina, Brazil
| | - Valécia Carvalho
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
- The Northeast Biotechnology Network, Federal University of Piauí, Teresina, Brazil
| | - Kaline Rocha
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
- The Northeast Biotechnology Network, Federal University of Piauí, Teresina, Brazil
| | - Francisco Magalhães
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
- The Northeast Biotechnology Network, Federal University of Piauí, Teresina, Brazil
| | - Iris Moura
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
- Masters Programs in Biotechnology, Federal University of Piauí, Parnaíba, Brazil
| | - Pedro Ribeiro
- Brain Mapping and Sensory Motor Integration Laboratory, Institute of Psychiatry, Federal University of Rio De Janeiro, Rio De Janeiro, Brazil
| | - Bruna Velasques
- Brain Mapping and Sensory Motor Integration Laboratory, Institute of Psychiatry, Federal University of Rio De Janeiro, Rio De Janeiro, Brazil
| | - Mauricio Cagy
- Brain Mapping and Sensory Motor Integration Laboratory, Institute of Psychiatry, Federal University of Rio De Janeiro, Rio De Janeiro, Brazil
| | - Daya S. Gupta
- Department of Biology, Camden County College, Blackwood, NJ, USA
| | - Victor Hugo Bastos
- The Northeast Biotechnology Network, Federal University of Piauí, Teresina, Brazil
- Masters Programs in Biotechnology, Federal University of Piauí, Parnaíba, Brazil
- Brain Mapping and Functionality Laboratory, Federal University of Piauí, Parnaíba, Brazil
| | - Ariel Soares Teles
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
- Masters Programs in Biotechnology, Federal University of Piauí, Parnaíba, Brazil
- Federal Institute of Maranhão, Maranhão, Brazil
| | - Silmar Teixeira
- Neuro-Innovation Technology & Brain Mapping Laboratory, Federal University of Piauí, Parnaíba, Brazil
- The Northeast Biotechnology Network, Federal University of Piauí, Teresina, Brazil
- Masters Programs in Biotechnology, Federal University of Piauí, Parnaíba, Brazil
| |
Collapse
|
30
|
Abstract
Many animals can encode temporal intervals and use them to plan their actions, but only humans can flexibly extract a regular beat from complex patterns, such as musical rhythms. Beat-based timing is hypothesized to rely on the integration of sensory information with temporal information encoded in motor regions such as the medial premotor cortex (MPC), but how beat-based timing might be encoded in neuronal populations is mostly unknown. Gámez and colleagues show that the MPC encodes temporal information via a population code visible as circular trajectories in state space; these patterns may represent precursors to more-complex skills such as beat-based timing.
Collapse
Affiliation(s)
- Virginia B. Penhune
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Laboratory for Brain, Music and Sound Research–BRAMS, Montreal, Quebec, Canada
| | - Robert J. Zatorre
- Laboratory for Brain, Music and Sound Research–BRAMS, Montreal, Quebec, Canada
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
| |
Collapse
|
31
|
Maróti E, Honbolygó F, Weiss B. Neural entrainment to the beat in multiple frequency bands in 6-7-year-old children. Int J Psychophysiol 2019; 141:45-55. [PMID: 31078641 DOI: 10.1016/j.ijpsycho.2019.05.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 05/03/2019] [Accepted: 05/08/2019] [Indexed: 11/28/2022]
Abstract
Entrainment to periodic acoustic stimuli has been found to relate to both the auditory and motor cortices, and it could be influenced by the maturity of these brain regions. However, existing research on this topic provides data about different oscillatory brain activities in different age groups with different musical backgrounds. In order to obtain a more coherent picture and examine early manifestations of entrainment, we assessed brain oscillations at multiple time scales (beta: 15-25 Hz, gamma: 28-48 Hz) and in steady-state evoked potentials (SS-EPs) in 6-7-year-old children with no musical background right at the start of primary school, before they learnt to read. Our goal was to exclude the effect of music training and reading, since previous studies have shown that sensorimotor entrainment (movement synchronization to the beat) is related to musical and reading abilities. We found evidence for endogenous anticipatory processing in the gamma band related to meter perception, and stimulus-related frequency-specific responses. However, we did not find evidence for an interaction between auditory and motor networks, which suggests that endogenous mechanisms related to auditory processing may mature earlier than those that underlie motor actions, such as sensorimotor synchronization.
Collapse
Affiliation(s)
- Emese Maróti
- Brain Imaging Centre, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary.
| | - Ferenc Honbolygó
- Brain Imaging Centre, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
| | - Béla Weiss
- Brain Imaging Centre, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
| |
Collapse
|
32
|
Schwartze M, Brown RM, Biau E, Kotz SA. Timing the "magical number seven": Presentation rate and regularity affect verbal working memory performance. INTERNATIONAL JOURNAL OF PSYCHOLOGY 2019; 55:342-346. [PMID: 31062352 PMCID: PMC7317781 DOI: 10.1002/ijop.12588] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Accepted: 04/15/2019] [Indexed: 01/04/2023]
Abstract
The informative value of time and temporal structure often remains neglected in cognitive assessments. However, next to information about stimulus identity we can exploit temporal ordering principles, such as regularity, periodicity, or grouping to generate predictions about the timing of future events. Such predictions may improve cognitive performance by optimising adaptation to dynamic stimuli. Here, we investigated the influence of temporal structure on verbal working memory by assessing immediate recall performance for aurally presented digit sequences (forward digit span) as a function of standard (1000 ms stimulus-onset-asynchronies, SOAs), short (700 ms), long (1300 ms) and mixed (700-1300 ms) stimulus timing during the presentation phase. Participants' digit spans were lower for short and mixed SOA presentation relative to standard SOAs. This confirms an impact of temporal structure on the classic "magical number seven," suggesting that working memory performance can in part be regulated through the systematic application of temporal ordering principles.
Collapse
Affiliation(s)
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
| | - Rachel M. Brown
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
| | - Emmanuel Biau
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
| | - Sonja A. Kotz
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
33
|
Oculomotor inhibition reflects temporal expectations. Neuroimage 2019; 184:279-292. [DOI: 10.1016/j.neuroimage.2018.09.026] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2018] [Revised: 08/07/2018] [Accepted: 09/10/2018] [Indexed: 11/21/2022] Open
|
34
|
Bowers A, Bowers LM, Hudock D, Ramsdell-Hudock HL. Phonological working memory in developmental stuttering: Potential insights from the neurobiology of language and cognition. JOURNAL OF FLUENCY DISORDERS 2018; 58:94-117. [PMID: 30224087 DOI: 10.1016/j.jfludis.2018.08.006] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Revised: 07/30/2018] [Accepted: 08/27/2018] [Indexed: 06/08/2023]
Abstract
The current review examines how neurobiological models of language and cognition could shed light on the role of phonological working memory (PWM) in developmental stuttering (DS). Toward that aim, we review Baddeley's influential multicomponent model of PWM and evidence for load-dependent differences between children and adults who stutter and typically fluent speakers in nonword repetition and dual-task paradigms. We suggest that, while nonword repetition and dual-task findings implicate processes related to PWM, it is unclear from behavioral studies alone what mechanisms are involved. To address how PWM could be related to speech output in DS, a third section reviews neurobiological models of language proposing that PWM is an emergent property of cyclic sensory and motor buffers in the dorsal stream critical for speech production. We propose that anomalous sensorimotor timing could potentially interrupt both fluent speech in DS and the emergent properties of PWM. To further address the role of attention and executive function in PWM and DS, we also review neurobiological models proposing that prefrontal cortex (PFC) and basal ganglia (BG) function to facilitate working memory under distracting conditions and neuroimaging evidence implicating the PFC and BG in stuttering. Finally, we argue that cognitive-behavioral differences in nonword repetition and dual-tasks are consistent with the involvement of neurocognitive networks related to executive function and sensorimotor integration in PWM. We suggest progress in understanding the relationship between stuttering and PWM may be accomplished using high-temporal resolution electromagnetic experimental approaches.
Collapse
Affiliation(s)
- Andrew Bowers
- University of Arkansas, Epley Center for Health Professions, 606 N. Razorback Road, Fayetteville, AR 72701, United States.
| | - Lisa M Bowers
- University of Arkansas, Epley Center for Health Professions, 606 N. Razorback Road, Fayetteville, AR 72701, United States.
| | - Daniel Hudock
- Idaho State University, 650 Memorial Dr. Bldg. 68, Pocatello, ID 83201, United States.
| | | |
Collapse
|
35
|
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Collapse
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
| | - Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
| |
Collapse
|
36
|
Neural Entrainment Determines the Words We Hear. Curr Biol 2018; 28:2867-2875.e3. [PMID: 30197083 DOI: 10.1016/j.cub.2018.07.023] [Citation(s) in RCA: 86] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Revised: 06/25/2018] [Accepted: 07/09/2018] [Indexed: 11/21/2022]
Abstract
Low-frequency neural entrainment to rhythmic input has been hypothesized as a canonical mechanism that shapes sensory perception in time. Neural entrainment is deemed particularly relevant for speech analysis, as it would contribute to the extraction of discrete linguistic elements from continuous acoustic signals. However, its causal influence in speech perception has been difficult to establish. Here, we provide evidence that oscillations build temporal predictions about the duration of speech tokens that affect perception. Using magnetoencephalography (MEG), we studied neural dynamics during listening to sentences that changed in speech rate. We observed neural entrainment to preceding speech rhythms persisting for several cycles after the change in rate. The sustained entrainment was associated with changes in the perceived duration of the last word's vowel, resulting in the perception of words with different meanings. These findings support oscillatory models of speech processing, suggesting that neural oscillations actively shape speech perception.
Collapse
|
37
|
Brajot FX, Lawrence D. Delay-induced low-frequency modulation of the voice during sustained phonation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:282. [PMID: 30075671 DOI: 10.1121/1.5046092] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Accepted: 06/25/2018] [Indexed: 06/08/2023]
Abstract
An important property of negative feedback systems is the tendency to oscillate when feedback is delayed. This paper evaluated this phenomenon in a sustained phonation task, where subjects prolonged a vowel with 0-600 ms delays in auditory feedback. This resulted in a delay-dependent vocal wow: from 0.4 to 1 Hz fluctuations in fundamental frequency and intensity that increased in period and amplitude as the delay increased. A similar modulation in low-frequency oscillations was not observed in the first two formant frequencies, although some subjects did display increased variability. Results suggest that delayed auditory feedback enhances an existing periodic fluctuation in the voice, with a more complex, possibly indirect, influence on supraglottal articulation. These findings have important implications for understanding how speech may be affected by artificially applied or disease-based delays in sensory feedback.
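The low-frequency "vocal wow" described here can be quantified by taking the spectrum of the fundamental-frequency contour; the sketch below is purely illustrative (synthetic F0 contour, assumed 100 Hz contour sampling rate), not the authors' analysis.

```python
# Illustrative sketch: spectrum of a (synthetic) F0 contour to locate slow "wow".
import numpy as np
from scipy.signal import periodogram

fs_f0 = 100.0                                  # assumed F0 contour sampling rate (Hz)
rng = np.random.default_rng(4)
t = np.arange(0.0, 10.0, 1 / fs_f0)            # 10 s sustained phonation
f0 = 120 + 3 * np.sin(2 * np.pi * 0.7 * t) + rng.standard_normal(t.size)

f, pxx = periodogram(f0 - f0.mean(), fs=fs_f0)
wow = (f >= 0.4) & (f <= 1.0)                  # the 0.4-1 Hz range of interest
print(f"peak modulation frequency: {f[wow][np.argmax(pxx[wow])]:.2f} Hz")
```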
Collapse
Affiliation(s)
- François-Xavier Brajot
- Communication Sciences and Disorders, Ohio University, Grover Center W221, Athens, Ohio 45701, USA
| | - Douglas Lawrence
- Electrical Engineering and Computer Science, Ohio University, Stocker Center 347, Athens, Ohio 45701, USA
| |
Collapse
|
38
|
Alexandrou AM, Saarinen T, Kujala J, Salmelin R. Cortical Tracking of Global and Local Variations of Speech Rhythm during Connected Natural Speech Perception. J Cogn Neurosci 2018; 30:1704-1719. [PMID: 29916785 DOI: 10.1162/jocn_a_01295] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
During natural speech perception, listeners must track the global speaking rate, that is, the overall rate of incoming linguistic information, as well as transient, local speaking rate variations occurring within the global speaking rate. Here, we address the hypothesis that this tracking mechanism is achieved through coupling of cortical signals to the amplitude envelope of the perceived acoustic speech signals. Cortical signals were recorded with magnetoencephalography (MEG) while participants perceived spontaneously produced speech stimuli at three global speaking rates (slow, normal/habitual, and fast). Inherently to spontaneously produced speech, these stimuli also featured local variations in speaking rate. The coupling between cortical and acoustic speech signals was evaluated using audio-MEG coherence. Modulations in audio-MEG coherence spatially differentiated between tracking of global speaking rate, highlighting the temporal cortex bilaterally and the right parietal cortex, and sensitivity to local speaking rate variations, emphasizing the left parietal cortex. Cortical tuning to the temporal structure of natural connected speech thus seems to require the joint contribution of both auditory and parietal regions. These findings suggest that cortical tuning to speech rhythm operates on two functionally distinct levels: one encoding the global rhythmic structure of speech and the other associated with online, rapidly evolving temporal predictions. Thus, it may be proposed that speech perception is shaped by evolutionary tuning, a preference for certain speaking rates, and predictive tuning, associated with cortical tracking of the constantly changing rate of linguistic information in a speech stream.
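Audio-MEG coherence of the kind used above can be approximated with standard spectral estimators; the sketch below is a generic illustration (stand-in envelope and MEG channel, assumed 200 Hz sampling), not the authors' pipeline.

```python
# Illustrative sketch: magnitude-squared coherence between a speech amplitude
# envelope and one MEG channel, averaged over a syllable-rate band.
import numpy as np
from scipy.signal import coherence, hilbert

fs = 200.0
rng = np.random.default_rng(5)
envelope = np.abs(hilbert(rng.standard_normal(int(180 * fs))))   # stand-in envelope
meg = 0.3 * envelope + rng.standard_normal(envelope.size)        # stand-in MEG channel

f, cxy = coherence(envelope, meg, fs=fs, nperseg=int(4 * fs))    # 4 s windows
band = (f >= 2) & (f <= 8)                                       # syllable-rate band
print(f"mean 2-8 Hz audio-MEG coherence: {cxy[band].mean():.3f}")
```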
Collapse
|
39
|
Ozernov-Palchik O, Patel AD. Musical rhythm and reading development: does beat processing matter? Ann N Y Acad Sci 2018; 1423:166-175. [PMID: 29781084 DOI: 10.1111/nyas.13853] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2018] [Revised: 04/13/2018] [Accepted: 04/23/2018] [Indexed: 01/24/2023]
Abstract
There is mounting evidence for links between musical rhythm processing and reading-related cognitive skills, such as phonological awareness. This may be because music and speech are rhythmic: both involve processing complex sound sequences with systematic patterns of timing, accent, and grouping. Yet, there is a salient difference between musical and speech rhythm: musical rhythm is often beat-based (based on an underlying grid of equal time intervals), while speech rhythm is not. Thus, the role of beat-based processing in the reading-rhythm relationship is not clear. Is there a distinct relation between beat-based processing mechanisms and reading-related language skills, or is the rhythm-reading link entirely due to shared mechanisms for processing nonbeat-based aspects of temporal structure? We discuss recent evidence for a distinct link between beat-based processing and early reading abilities in young children, and suggest experimental designs that would allow one to further methodically investigate this relationship. We propose that beat-based processing taps into a listener's ability to use rich contextual regularities to form predictions, a skill important for reading development.
Collapse
Affiliation(s)
- Ola Ozernov-Palchik
- Eliot Pearson Department of Child Study and Human Development, Tufts University, Medford, Massachusetts
| | - Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, Massachusetts
- Azrieli Program in Brain, Mind and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario, Canada
| |
Collapse
|
40
|
Trainor LJ, Chang A, Cairney J, Li Y. Is auditory perceptual timing a core deficit of developmental coordination disorder? Ann N Y Acad Sci 2018; 1423:30-39. [PMID: 29741273 PMCID: PMC6099217 DOI: 10.1111/nyas.13701] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2017] [Revised: 02/13/2018] [Accepted: 03/08/2018] [Indexed: 12/03/2022]
Abstract
Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder affecting 5-6% of children that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be core characteristics of DCD. This idea is consistent with evidence from several domains: (1) motor-related brain regions are often involved in auditory timing processes; (2) DCD has high comorbidity with dyslexia and attention deficit hyperactivity disorder, which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at ages 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing processing in DCD that use various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially benefit our understanding of the etiology of DCD, identify early biomarkers of DCD, and be used to develop evidence-based interventions for DCD involving auditory-motor training.
Collapse
Affiliation(s)
- Laurel J. Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- McMaster Institute for Music and the Mind, McMaster University, Hamilton, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
| | - Andrew Chang
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
| | - John Cairney
- Infant and Child Health (INCH) Lab, Department of Family Medicine, McMaster University, Hamilton, Ontario, Canada
- Faculty of Kinesiology and Physical Education, University of Toronto, Toronto, Ontario, Canada
| | - Yao‐Chuen Li
- Infant and Child Health (INCH) Lab, Department of Family Medicine, McMaster University, Hamilton, Ontario, Canada
- Child Health Research Center, Institute of Population Health Sciences, National Health Research Institutes, Miaoli, Taiwan
| |
Collapse
|
41
|
Gompf F, Pflug A, Laufs H, Kell CA. Non-linear Relationship between BOLD Activation and Amplitude of Beta Oscillations in the Supplementary Motor Area during Rhythmic Finger Tapping and Internal Timing. Front Hum Neurosci 2017; 11:582. [PMID: 29249950 PMCID: PMC5714933 DOI: 10.3389/fnhum.2017.00582] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2017] [Accepted: 11/17/2017] [Indexed: 11/13/2022] Open
Abstract
Functional imaging studies using BOLD contrasts have consistently reported activation of the supplementary motor area (SMA) both during motor and internal timing tasks. Opposing findings, however, have been shown for the modulation of beta oscillations in the SMA. While movement suppresses beta oscillations in the SMA, motor and non-motor tasks that rely on internal timing increase the amplitude of beta oscillations in the SMA. These independent observations suggest that the relationship between beta oscillations and BOLD activation is more complex than previously thought. Here we set out to investigate this relationship by examining beta oscillations in the SMA during movement with varying degrees of internal timing demands. In a simultaneous EEG-fMRI experiment, 20 healthy right-handed subjects performed an auditory-paced finger-tapping task. Internal timing was operationalized by including conditions with taps on every fourth auditory beat, which necessitates generation of a slow internal rhythm, while tapping to every auditory beat reflected simple auditory-motor synchronization. In the SMA, BOLD activity increased and power in both the low and the high beta band decreased, as expected, during each condition compared to baseline. Internal timing was associated with a reduced desynchronization of low beta oscillations compared to conditions without internal timing demands. In parallel with this relative beta power increase, internal timing activated the SMA more strongly in terms of BOLD. This documents a task-dependent non-linear relationship between BOLD and beta oscillations in the SMA. We discuss different roles of beta synchronization and desynchronization in active processing within the same cortical region.
Collapse
Affiliation(s)
- Florian Gompf
- Cognitive Neuroscience Group, Department of Neurology, Brain Imaging Center, Goethe University Frankfurt, Frankfurt am Main, Germany
| | - Anja Pflug
- Cognitive Neuroscience Group, Department of Neurology, Brain Imaging Center, Goethe University Frankfurt, Frankfurt am Main, Germany
| | - Helmut Laufs
- Cognitive Neuroscience Group, Department of Neurology, Brain Imaging Center, Goethe University Frankfurt, Frankfurt am Main, Germany; Department of Neurology, University Hospital Schleswig-Holstein, Campus Kiel, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
| | - Christian A Kell
- Cognitive Neuroscience Group, Department of Neurology, Brain Imaging Center, Goethe University Frankfurt, Frankfurt am Main, Germany
| |
Collapse
|
42
|
The Role of Oscillatory Phase in Determining the Temporal Organization of Perception: Evidence from Sensory Entrainment. J Neurosci 2017; 37:10636-10644. [PMID: 28972130 PMCID: PMC5666584 DOI: 10.1523/jneurosci.1704-17.2017] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2017] [Revised: 08/17/2017] [Accepted: 08/21/2017] [Indexed: 11/21/2022] Open
Abstract
Recent behavioral, neuroimaging, and neurophysiological studies have renewed the idea that the information processing within different temporal windows is linked to the phase and/or frequency of the ongoing oscillations, predominantly in the theta/alpha band (∼4–7 and 8–12 Hz, respectively). However, being correlational in nature, this evidence might reflect a nonfunctional byproduct rather than having a causal role. A more direct link can be shown with methods that manipulate oscillatory activity. Here, we used audiovisual entrainment at different frequencies in the prestimulus period of a temporal integration/segregation task. We hypothesized that entrainment would align ongoing oscillations and drive them toward the stimulation frequency. To reveal behavioral oscillations in temporal perception after the entrainment, we sampled the segregation/integration performance densely in time. In Experiment 1, two groups of human participants (both males and females) received stimulation either at the lower or the upper boundary of the alpha band (∼8.5 vs 11.5 Hz). For both entrainment frequencies, we found a phase alignment of the perceptual oscillation across subjects, but with two different power spectra that peaked near the entrainment frequency. These results were confirmed when perceptual oscillations were characterized in the time domain with sinusoidal fittings. In Experiment 2, we replicated the findings in a within-subject design, extending the results for frequencies in the theta (∼6.5 Hz), but not in the beta (∼15 Hz), range. Overall, these findings show that temporal segregation can be modified by sensory entrainment, providing evidence for a critical role of ongoing oscillations in the temporal organization of perception. SIGNIFICANCE STATEMENT The continuous flow of sensory input is not processed in an analog fashion, but rather is grouped by the perceptual system over time. Recent studies pinpointed the phase and/or frequency of the neural oscillations in the theta/alpha band (∼4–12 Hz) as possible mechanisms underlying temporal windows in perception. Here, we combined two innovative methodologies to provide more direct support for this evidence. We used sensory entrainment to align neural oscillations to different frequencies and then characterized the resultant perceptual oscillation with a temporal dense sampling of the integration/segregation performance. Our results provide the first evidence that the frequency of temporal segregation can be modified by sensory entrainment, supporting a critical role of ongoing oscillations in the integration/segregation of information over time.
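For readers unfamiliar with the sinusoidal-fitting step mentioned above, the sketch below shows one way a fixed-frequency sinusoid can be fit to a densely sampled accuracy time course; the data, frequency, and parameters are invented and this is not the authors' code.

```python
# Illustrative sketch: fitting a fixed-frequency sinusoid to behavioral accuracy
# sampled densely across probe delays.
import numpy as np
from scipy.optimize import curve_fit

delays = np.arange(0.0, 0.6, 0.02)                   # probe delays in seconds
true_f = 8.5                                         # assumed frequency (Hz)
rng = np.random.default_rng(6)
accuracy = (0.7 + 0.08 * np.sin(2 * np.pi * true_f * delays + 1.0)
            + 0.02 * rng.standard_normal(delays.size))   # synthetic data

def sinus(t, amp, phase, offset, freq=true_f):
    # Only amp, phase, and offset are fitted; freq is held fixed.
    return offset + amp * np.sin(2 * np.pi * freq * t + phase)

(amp, phase, offset), _ = curve_fit(sinus, delays, accuracy,
                                    p0=[0.05, 0.0, accuracy.mean()])
print(f"fitted amplitude {amp:.3f}, phase {phase:.2f} rad at {true_f} Hz")
```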
Collapse
|
43
|
Temporal expectancies driven by self- and externally generated rhythms. Neuroimage 2017; 156:352-362. [PMID: 28528848 DOI: 10.1016/j.neuroimage.2017.05.042] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Revised: 05/15/2017] [Accepted: 05/17/2017] [Indexed: 11/21/2022] Open
Abstract
The dynamic attending theory proposes that rhythms entrain periodic fluctuations of attention which modulate the gain of sensory input. However, temporal expectancies can also be driven by the mere passage of time (foreperiod effect). It is currently unknown how these two types of temporal expectancy relate to each other, i.e. whether they work in parallel and have distinguishable neural signatures. The current research addresses this issue. Participants either tapped a 1 Hz rhythm (active task) or were passively presented with the same rhythm using tactile stimulators (passive task). Based on this rhythm, an auditory target was then presented early, in synchrony, or late. Behavioural results were in line with the dynamic attending theory as RTs were faster for in- compared to out-of-synchrony targets. Electrophysiological results suggested self-generated and externally induced rhythms to entrain neural oscillations in the delta frequency band. Auditory ERPs showed evidence of two distinct temporal expectancy processes. Both tasks demonstrated a pattern which followed a linear foreperiod effect. In the active task, however, we also observed an ERP effect consistent with the dynamic attending theory. This study shows that temporal expectancies generated by a rhythm and expectancy generated by the mere passage of time can work in parallel and sheds light on how these mechanisms are implemented in the brain.
Collapse
|
44
|
Merchant H, Bartolo R. Primate beta oscillations and rhythmic behaviors. J Neural Transm (Vienna) 2017; 125:461-470. [DOI: 10.1007/s00702-017-1716-9] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2017] [Accepted: 03/19/2017] [Indexed: 11/24/2022]
|
45
|
Chang A, Bosnyak DJ, Trainor LJ. Unpredicted Pitch Modulates Beta Oscillatory Power during Rhythmic Entrainment to a Tone Sequence. Front Psychol 2016; 7:327. [PMID: 27014138 PMCID: PMC4782565 DOI: 10.3389/fpsyg.2016.00327] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2015] [Accepted: 02/21/2016] [Indexed: 11/13/2022] Open
Abstract
Extracting temporal regularities in external stimuli in order to predict upcoming events is an essential aspect of perception. Fluctuations in induced power of beta band (15–25 Hz) oscillations in auditory cortex are involved in predictive timing during rhythmic entrainment, but whether such fluctuations are affected by prediction in the spectral (frequency/pitch) domain remains unclear. We tested whether unpredicted (i.e., unexpected) pitches in a rhythmic tone sequence modulate beta band activity by recording EEG while participants passively listened to isochronous auditory oddball sequences with occasional unpredicted deviant pitches at two different presentation rates. The results showed that the power in low-beta (15–20 Hz) was larger around 200–300 ms following deviant tones compared to standard tones, and this effect was larger when the deviant tones were less predicted. Our results suggest that the induced beta power activities in auditory cortex are consistent with a role in sensory prediction of both “when” (timing) upcoming sounds will occur as well as the prediction precision error of “what” (spectral content in this case). We suggest, further, that both timing and content predictions may co-modulate beta oscillations via attention. These findings extend earlier work on neural oscillations by investigating the functional significance of beta oscillations for sensory prediction. The findings help elucidate the functional significance of beta oscillations in perception.
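Induced beta power of the kind reported here is often computed by band-pass filtering and taking the Hilbert envelope; the sketch below is a generic illustration (synthetic epochs, assumed 500 Hz sampling), not the authors' EEG pipeline.

```python
# Illustrative sketch: induced low-beta (15-20 Hz) power in a post-stimulus window.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
rng = np.random.default_rng(7)
epochs = rng.standard_normal((60, int(1.0 * fs)))     # 60 synthetic 1 s epochs

b, a = butter(4, [15, 20], btype="bandpass", fs=fs)   # 4th-order Butterworth
beta = filtfilt(b, a, epochs, axis=-1)                # zero-phase filtering per epoch
power = np.abs(hilbert(beta, axis=-1)) ** 2           # instantaneous beta power

t = np.arange(epochs.shape[1]) / fs
window = (t >= 0.2) & (t <= 0.3)                      # 200-300 ms post-stimulus
print(f"mean low-beta power, 200-300 ms: {power[:, window].mean():.3f}")
```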
Collapse
Affiliation(s)
- Andrew Chang
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
| | - Dan J Bosnyak
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada
| | - Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| |
Collapse
|
46
|
Scharinger M, Monahan PJ, Idsardi WJ. Linguistic category structure influences early auditory processing: Converging evidence from mismatch responses and cortical oscillations. Neuroimage 2016; 128:293-301. [PMID: 26780574 DOI: 10.1016/j.neuroimage.2016.01.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2015] [Revised: 12/30/2015] [Accepted: 01/02/2016] [Indexed: 10/22/2022] Open
Abstract
While previous research has established that language-specific knowledge influences early auditory processing, it is still controversial as to what aspects of speech sound representations determine early speech perception. Here, we propose that early processing primarily depends on information propagated top-down from abstractly represented speech sound categories. In particular, we assume that mid-vowels (as in 'bet') exert less top-down effects than the high-vowels (as in 'bit') because of their less specific (default) tongue height position as compared to either high- or low-vowels (as in 'bat'). We tested this assumption in a magnetoencephalography (MEG) study where we contrasted mid- and high-vowels, as well as the low- and high-vowels in a passive oddball paradigm. Overall, significant differences between deviants and standards indexed reliable mismatch negativity (MMN) responses between 200 and 300 ms post-stimulus onset. MMN amplitudes differed in the mid/high-vowel contrasts and were significantly reduced when a mid-vowel standard was followed by a high-vowel deviant, extending previous findings. Furthermore, mid-vowel standards showed reduced oscillatory power in the pre-stimulus beta-frequency band (18-26 Hz), compared to high-vowel standards. We take this as converging evidence for linguistic category structure to exert top-down influences on auditory processing. The findings are interpreted within the linguistic model of underspecification and the neuropsychological predictive coding framework.
Collapse
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Linguistics, University of Maryland, College Park, MD, USA; Biological incl. Cognitive Psychology, Institute for Psychology, University of Leipzig, Germany.
| | - Philip J Monahan
- Centre for French and Linguistics, University of Toronto Scarborough, Canada; Department of Linguistics, University of Toronto, Canada
| | - William J Idsardi
- Department of Linguistics, University of Maryland, College Park, MD, USA
| |
Collapse
|
47
|
Zoefel B, VanRullen R. The Role of High-Level Processes for Oscillatory Phase Entrainment to Speech Sound. Front Hum Neurosci 2015; 9:651. [PMID: 26696863 PMCID: PMC4667100 DOI: 10.3389/fnhum.2015.00651] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 11/16/2015] [Indexed: 11/13/2022] Open
Abstract
Constantly bombarded with input, the brain needs to select relevant information while ignoring the irrelevant rest. A powerful tool may be represented by neural oscillations which entrain their high-excitability phase to important input while their low-excitability phase attenuates irrelevant information. Indeed, the alignment between brain oscillations and speech improves intelligibility and helps dissociate speakers during a “cocktail party”. Although well-investigated, the contribution of low- and high-level processes to phase entrainment to speech sound has only recently begun to be understood. Here, we review those findings, and concentrate on three main results: (1) Phase entrainment to speech sound is modulated by attention or predictions, likely supported by top-down signals and indicating higher-level processes involved in the brain’s adjustment to speech. (2) As phase entrainment to speech can be observed without systematic fluctuations in sound amplitude or spectral content, it does not only reflect a passive steady-state “ringing” of the cochlea, but entails a higher-level process. (3) The role of intelligibility for phase entrainment is debated. Recent results suggest that intelligibility modulates the behavioral consequences of entrainment, rather than directly affecting the strength of entrainment in auditory regions. We conclude that phase entrainment to speech reflects a sophisticated mechanism: several high-level processes interact to optimally align neural oscillations with predicted events of high relevance, even when they are hidden in a continuous stream of background noise.
Collapse
Affiliation(s)
- Benedikt Zoefel
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, Toulouse, France
| | - Rufin VanRullen
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, Toulouse, France
| |
Collapse
|
48
|
Gordon RL, Fehd HM, McCandliss BD. Does Music Training Enhance Literacy Skills? A Meta-Analysis. Front Psychol 2015; 6:1777. [PMID: 26648880 PMCID: PMC4664655 DOI: 10.3389/fpsyg.2015.01777] [Citation(s) in RCA: 68] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2015] [Accepted: 11/05/2015] [Indexed: 11/24/2022] Open
Abstract
Children's engagement in music practice is associated with enhancements in literacy-related language skills, as demonstrated by multiple reports of correlation across these two domains. Training studies have tested whether engaging in music training directly transfers benefit to children's literacy skill development. Results of such studies, however, are mixed. Interpretation of these mixed results is made more complex by the fact that a wide range of literacy-related outcome measures are used across these studies. Here, we address these challenges via a meta-analytic approach. A comprehensive literature review of peer-reviewed music training studies was built around key criteria needed to test the direct transfer hypothesis, including: (a) inclusion of music training vs. control groups; (b) inclusion of pre- vs. post-comparison measures, and (c) indication that reading instruction was held constant across groups. Thirteen studies were identified (n = 901). Two classes of outcome measures emerged with sufficient overlap to support meta-analysis: phonological awareness and reading fluency. Hours of training, age, and type of control intervention were examined as potential moderators. Results supported the hypothesis that music training leads to gains in phonological awareness skills. The effect isolated by contrasting gains in music training vs. gains in control was small relative to the large variance in these skills (d = 0.2). Interestingly, analyses revealed that transfer effects for rhyming skills tended to grow stronger with increased hours of training. In contrast, no significant aggregate transfer effect emerged for reading fluency measures, despite some studies reporting large training effects. The potential influence of other study design factors was considered, including intervention design, IQ, and SES. Results are discussed in the context of emerging findings that music training may enhance literacy development via changes in brain mechanisms that support both music and language cognition.
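As a reminder of how standardized mean differences such as the d = 0.2 reported above are typically aggregated, the sketch below shows a minimal inverse-variance (fixed-effect) pooling; all numbers are invented and this is not the meta-analytic model used in the paper.

```python
# Illustrative sketch: inverse-variance pooling of Cohen's d across studies.
import numpy as np

d = np.array([0.35, 0.10, 0.25, -0.05, 0.30])    # hypothetical per-study effect sizes
n1 = np.array([20, 35, 15, 25, 30])              # hypothetical training-group sizes
n2 = np.array([20, 30, 15, 25, 28])              # hypothetical control-group sizes

var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))   # approximate variance of d
w = 1.0 / var_d
d_pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"pooled d = {d_pooled:.2f} "
      f"(95% CI {d_pooled - 1.96 * se:.2f} to {d_pooled + 1.96 * se:.2f})")
```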
Collapse
Affiliation(s)
- Reyna L Gordon
- Music Cognition Lab, Program for Music, Mind and Society, Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Hilda M Fehd
- Institute for Software Integrated Systems, School of Engineering, Vanderbilt University, Nashville, TN, USA
| | - Bruce D McCandliss
- Department of Psychology, Graduate School of Education, Stanford University, Stanford, CA, USA
| |
Collapse
|