1. Lee KM, Kang S, Hong SH, Moon IJ. Effects of Metrical Context on the P1 Component. J Audiol Otol 2024;28:195-202. PMID: 38685834; PMCID: PMC11273186; DOI: 10.7874/jao.2023.00262.
Abstract
BACKGROUND AND OBJECTIVES: The temporal structure of sound, characterized by regular patterns, plays a crucial role in optimizing the processing of auditory information. Meter, a well-organized sequence of evenly spaced beats in music, is hierarchically arranged, with stronger beats occupying higher metrical positions. Meter has been shown to influence behavioral and neural processing, particularly the N1, P2, and mismatch negativity components; however, its effect on the P1 component remains unexplored. This study aimed to investigate the effects of metrical hierarchy on the P1 component and to compare responses between musicians and non-musicians.
SUBJECTS AND METHODS: Thirty participants (15 musicians and 15 non-musicians) were enrolled. Auditory stimuli consisted of a synthesized speech syllable presented together with a repeating series of four tones, establishing a quadruple meter. Electrophysiological recordings were performed to measure the P1 component.
RESULTS: Metrical position had a significant effect on P1 amplitude, with the strongest beat showing the lowest amplitude. This contrasts with previous findings, in which enhanced P1 responses were typically observed at on-the-beat positions. The reduced P1 response on the strong beat can be interpreted within the framework of predictive coding and temporal prediction: higher predictability of pitch changes at the strong beat leads to a reduced P1 response. Furthermore, musicians showed higher P1 amplitudes than non-musicians, suggesting enhanced sensory processing.
CONCLUSIONS: This study demonstrates the effects of metrical hierarchy on the P1 component, enriching our understanding of auditory processing. The results suggest that predictive coding and temporal prediction play important roles in shaping sensory processing, and that musical training may enhance P1 responses.
Affiliation(s)
- Kyung Myun Lee
- School of Digital Humanities & Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Center for Digital Humanities & Computational Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Soojin Kang
- Center for Digital Humanities & Computational Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Korea
- Sung Hwa Hong
- Department of Otorhinolaryngology, Myongji Hospital, Hanyang University Medical Center, Goyang, Korea
- Il Joon Moon
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
2. Jones A, Silas J, Anderson W, Ward EV. Null effects of temporal prediction on recognition memory but evidence for differential neural activity at encoding: a registered report. Cortex 2023;169:130-145. PMID: 37871519; DOI: 10.1016/j.cortex.2023.09.006.
Abstract
Previous research has demonstrated that rhythmic presentation of stimuli during encoding boosts subsequent recognition and is associated with distinct neural activity compared with arrhythmic presentation. However, it is unclear whether the effect is driven by automatic entrainment to rhythm or by non-rhythmic temporal prediction. This registered report presents an electroencephalographic (EEG) study aimed at establishing the cognitive and neural mechanisms of the effect of temporal prediction on recognition. In a blocked design, stimulus onset during encoding was systematically manipulated in four conditions prior to recognition testing: rhythmic fixed (RF), rhythmic variable (RV), arrhythmic fixed (AF), and arrhythmic variable (AV). By orthogonally varying rhythm and temporal position, we were able to assess their independent contributions to recognition enhancement. Our behavioural results did not replicate previous findings of a difference in recognition memory based on temporal predictability at encoding. However, event-related potential (ERP) component analysis did show an early (N1) interaction effect of temporal position and rhythm, and later (N2 and Dm) effects driven by temporal position only. Taken together, we observed effects of temporal prediction at encoding, but these differences did not translate into later memory effects, suggesting that effects of temporal prediction on recognition are less robust than previously thought.
3. Cecchetti G, Tomasini CA, Herff SA, Rohrmeier MA. Interpreting Rhythm as Parsing: Syntactic-Processing Operations Predict the Migration of Visual Flashes as Perceived During Listening to Musical Rhythms. Cogn Sci 2023;47:e13389. PMID: 38038624; DOI: 10.1111/cogs.13389.
Abstract
Music can be interpreted by attributing syntactic relationships to sequential musical events; computationally, such musical interpretation represents a combinatorial task analogous to syntactic processing in language. While this perspective has been addressed primarily in the domain of harmony, we focus here on rhythm in the Western tonal idiom, and we propose for the first time a framework for modeling the moment-by-moment execution of processing operations involved in the interpretation of music. Our approach is based on (1) a music-theoretically motivated grammar formalizing the competence of rhythmic interpretation in terms of three basic types of dependency (preparation, syncopation, and split; Rohrmeier, 2020), and (2) psychologically plausible predictions about the complexity of structural integration and memory storage operations, necessary for parsing hierarchical dependencies, derived from dependency locality theory (Gibson, 2000). With a behavioral experiment, we exemplify an empirical implementation of the proposed theoretical framework. One hundred listeners were asked to reproduce the location of a visual flash presented while they listened to three rhythmic excerpts, each exemplifying a different interpretation under the formal grammar. The hypothesized execution of syntactic-processing operations was a significant predictor of the observed displacement between the reported and objective locations of the flashes. Overall, this study presents a theoretical approach and a first empirical proof of concept for modeling the cognitive process of rhythmic interpretation as a form of syntactic parsing with algorithmic similarities to its linguistic counterpart. Results from this small-scale experiment should not be read as a final test of the theory, but they are consistent with the theoretical predictions after controlling for several possible confounding factors and may form the basis for further large-scale and ecological testing.
Affiliation(s)
- Gabriele Cecchetti
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Cédric A Tomasini
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Steffen A Herff
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
4. Lenc T, Merchant H, Keller PE, Honing H, Varlet M, Nozaradan S. Mapping between sound, brain and behaviour: four-level framework for understanding rhythm processing in humans and non-human primates. Philos Trans R Soc Lond B Biol Sci 2021;376:20200325. PMID: 34420381; PMCID: PMC8380981; DOI: 10.1098/rstb.2020.0325.
Abstract
Humans perceive and spontaneously move to one or several levels of periodic pulses (a meter, for short) when listening to musical rhythm, even when the sensory input does not provide prominent periodic cues to their temporal location. Here, we review a multi-levelled framework for understanding how external rhythmic inputs are mapped onto internally represented metric pulses. This mapping is studied using an approach that quantifies and directly compares representations of metric pulses in signals corresponding to sensory inputs, neural activity and behaviour (typically body movement). Based on this approach, recent empirical evidence can be drawn together into a conceptual framework that unpacks the phenomenon of meter into four levels. Each level highlights specific functional processes that critically enable and shape the mapping from sensory input to internal meter. We discuss the nature, constraints and neural substrates of these processes, starting with fundamental mechanisms investigated in macaque monkeys that enable basic forms of mapping between simple rhythmic stimuli and an internally represented metric pulse. We propose that human evolution has gradually built a robust and flexible system upon these fundamental processes, allowing more complex levels of mapping to emerge in musical behaviours. This approach opens promising avenues to understanding the many facets of rhythmic behaviours across individuals and species. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Tomas Lenc
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
- Hugo Merchant
- Instituto de Neurobiologia, UNAM, Campus Juriquilla, Querétaro 76230, Mexico
- Peter E. Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Henkjan Honing
- Amsterdam Brain and Cognition (ABC), Institute for Logic, Language and Computation (ILLC), University of Amsterdam, Amsterdam 1090 GE, The Netherlands
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- School of Psychology, Western Sydney University, Penrith, New South Wales 2751, Australia
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
5. Arrazola VDC. Deviants Are Detected Faster at the End of Verse-Like Sound Sequences. Front Psychol 2021;12:614872. PMID: 34531777; PMCID: PMC8438167; DOI: 10.3389/fpsyg.2021.614872.
Abstract
Songs and poems from different traditions show a striking formal similarity: lines are flexible at the beginning and get more regular toward the end. This suggests that the free-beginning/strict-end pattern stems from a cognitive bias shared among humans. We propose that this is due to an increased sensitivity to deviants later in the line, resulting from a prediction-driven attention increase disrupted by line breaks. The study tests this hypothesis using an auditory oddball task where drum strokes are presented in sequences of eight, mimicking syllables in song or poem lines. We find that deviant strokes occurring later in the line are detected faster, mirroring the lower occurrence of deviant syllables toward the end of verse lines.
Affiliation(s)
- Varun D C Arrazola
- Leiden University Centre for Linguistics, Leiden, Netherlands
- The Meertens Institute, Amsterdam, Netherlands
6. Kondoh S, Okanoya K, Tachibana RO. Switching perception of musical meters by listening to different acoustic cues of biphasic sound stimulus. PLoS One 2021;16:e0256712. PMID: 34460855; PMCID: PMC8405023; DOI: 10.1371/journal.pone.0256712.
Abstract
Meter is one of the core features of music perception: the cognitive grouping of regular sound sequences, typically into units of 2, 3, or 4 beats. Previous studies have suggested that one can not only passively perceive meter from acoustic cues such as loudness, pitch, and duration of sound elements, but also actively perceive it by paying attention to isochronous sound events without any acoustic cues. Studying the interaction of top-down and bottom-up processing in meter perception helps us understand the cognitive system's ability to perceive the entire structure of music. The present study aimed to demonstrate that meter perception requires a top-down process, which maintains and switches attention between cues, as well as a bottom-up process for discriminating acoustic cues. We created a "biphasic" sound stimulus consisting of successive tone sequences designed to provide cues for both triple and quadruple meters in two different sound attributes: frequency and duration. Participants were asked to focus on either the frequency or the duration of the stimulus and to report how they perceived the meter on a five-point scale (ranging from "strongly triple" to "strongly quadruple"). We found that participants perceived different meters by switching their attention to specific cues. This result adds evidence to the idea that meter perception involves an interaction between top-down and bottom-up processes.
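The attention-switching design described in this abstract can be made concrete with a small sketch: a tone sequence in which a pitch accent recurs every three tones (a triple-meter cue) while a lengthened tone recurs every four tones (a quadruple-meter cue). All frequency and duration values below are illustrative assumptions, not the study's actual stimulus parameters.

```python
# Sketch of a "biphasic" tone sequence: a pitch accent every 3 tones cues a
# triple meter, while a duration accent every 4 tones cues a quadruple meter.
# Parameter values are illustrative, not taken from the study.

def biphasic_sequence(n_tones=24, base_freq=440.0, accent_freq=660.0,
                      base_dur=0.15, accent_dur=0.30):
    """Return (frequency_Hz, duration_s) pairs for each tone."""
    tones = []
    for i in range(n_tones):
        freq = accent_freq if i % 3 == 0 else base_freq  # triple-meter cue
        dur = accent_dur if i % 4 == 0 else base_dur     # quadruple-meter cue
        tones.append((freq, dur))
    return tones

seq = biphasic_sequence()
```

Attending to the pitch accents groups the identical sound stream in threes; attending to the lengthened tones groups it in fours.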
Affiliation(s)
- Sotaro Kondoh
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Kazuo Okanoya
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Center for Evolutionary Cognitive Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- RIKEN Center for Brain Science, Saitama, Japan
- Ryosuke O. Tachibana
- Center for Evolutionary Cognitive Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
7. Zhao TC, Kuhl PK. Neural and physiological relations observed in musical beat and meter processing. Brain Behav 2020;10:e01836. PMID: 32920995; PMCID: PMC7667306; DOI: 10.1002/brb3.1836.
Abstract
INTRODUCTION: Music is ubiquitous and powerful in the world's cultures. Music listening involves abundant information processing (e.g., pitch, rhythm) in the central nervous system and can also induce physiological changes, such as in heart rate and perspiration. Yet previous studies have tended to examine music information processing in the brain separately from physiological changes. In the current study, we focused on the temporal structure of music (i.e., beat and meter) and examined physiology, neural processing, and, most importantly, the relation between the two.
METHODS: Simultaneous MEG and ECG data were collected from a group of adults (N = 15) while they passively listened to duple and triple rhythmic patterns. To characterize physiology, we measured heart rate variability (HRV), indexing parasympathetic nervous system (PSNS) function. To characterize neural processing of beat and meter, we examined neural entrainment and calculated the beat-to-meter ratio to index the relation between beat-level and meter-level entrainment. Specifically, the study investigated three related questions: (a) whether listening to musical rhythms affects HRV; (b) whether the neural beat-to-meter ratio differs between metrical conditions; and (c) whether the neural beat-to-meter ratio is related to HRV.
RESULTS: While at the group level both HRV and neural processing were highly similar across metrical conditions, at the individual level the neural beat-to-meter ratio significantly predicted HRV, establishing a neural-physiological link.
CONCLUSION: This link is discussed under the theoretical "neurovisceral integration model," and it provides important new perspectives for music cognition and auditory neuroscience research.
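A beat-to-meter ratio of the kind this abstract describes can be illustrated with a plain FFT: compare spectral amplitude at the beat frequency with amplitude at the meter frequency of a response envelope. The sketch below uses a synthetic signal and assumed frequencies (2.4 Hz beat, 1.2 Hz duple meter), not the study's MEG data.

```python
import numpy as np

# Synthetic "neural envelope" with energy at an assumed beat frequency
# (2.4 Hz) and at the corresponding duple-meter frequency (1.2 Hz).
fs = 100.0
t = np.arange(0, 20, 1 / fs)                 # 20 s at 100 Hz
beat_f, meter_f = 2.4, 1.2
signal = np.sin(2 * np.pi * beat_f * t) + 0.5 * np.sin(2 * np.pi * meter_f * t)

# Amplitude spectrum; both target frequencies fall on exact FFT bins here.
spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

beat_amp = spectrum[np.argmin(np.abs(freqs - beat_f))]
meter_amp = spectrum[np.argmin(np.abs(freqs - meter_f))]
beat_to_meter = beat_amp / meter_amp
```

For this synthetic signal the ratio is about 2.0; in the study, per-participant ratios of this kind were the quantity related to HRV.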
Affiliation(s)
- T. Christina Zhao
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, USA
- Patricia K. Kuhl
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, USA
8. Mathias B, Zamm A, Gianferrara PG, Ross B, Palmer C. Rhythm Complexity Modulates Behavioral and Neural Dynamics During Auditory–Motor Synchronization. J Cogn Neurosci 2020;32:1864-1880. DOI: 10.1162/jocn_a_01601.
Abstract
We addressed how rhythm complexity influences auditory–motor synchronization in musically trained individuals who perceived and produced complex rhythms while EEG was recorded. Participants first listened to two-part auditory sequences (Listen condition). Each part featured a single pitch presented at a fixed rate; the integer ratio formed between the two rates varied in rhythmic complexity from low (1:1) to moderate (1:2) to high (3:2). One of the two parts occurred at a constant rate across conditions. Then, participants heard the same rhythms as they synchronized their tapping at a fixed rate (Synchronize condition). Finally, they tapped at the same fixed rate (Motor condition). Auditory feedback from their taps was present in all conditions. Behavioral effects of rhythmic complexity were evidenced in all tasks; detection of missing beats (Listen) worsened in the most complex (3:2) rhythm condition, and tap durations (Synchronize) were most variable and least synchronous with stimulus onsets in the 3:2 condition. EEG power spectral density was lowest at the fixed rate during the 3:2 rhythm and greatest during the 1:1 rhythm (Listen and Synchronize). ERP amplitudes corresponding to an N1 time window were smallest for the 3:2 rhythm and greatest for the 1:1 rhythm (Listen). Finally, synchronization accuracy (Synchronize) decreased as amplitudes in the N1 time window became more positive during the high rhythmic complexity condition (3:2). Thus, measures of neural entrainment corresponded to synchronization accuracy, and rhythmic complexity modulated the behavioral and neural measures similarly.
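The integer-ratio manipulation described above can be sketched by generating onset times for the two parts: one part keeps a fixed inter-onset interval across conditions while the other completes a different whole number of onsets within the same cycle. The base interval and cycle count below are illustrative assumptions, not the study's tempi.

```python
# Onset times (seconds) for an n_b:n_a two-part rhythm; part A keeps a fixed
# inter-onset interval across conditions, as in the study's design.
# base_ioi and n_cycles are illustrative values.

def two_part_onsets(n_b, n_a, base_ioi=0.5, n_cycles=4):
    cycle = n_a * base_ioi                       # duration of one full cycle
    a = [i * base_ioi for i in range(n_a * n_cycles + 1)]
    ioi_b = cycle / n_b                          # B fits n_b onsets per cycle
    b = [i * ioi_b for i in range(n_b * n_cycles + 1)]
    return a, b

low_a, low_b = two_part_onsets(1, 1)    # 1:1 — parts coincide on every onset
mod_a, mod_b = two_part_onsets(2, 1)    # 1:2 — one part subdivides the other
a, b = two_part_onsets(3, 2)            # 3:2 — coincide only at cycle starts
```

In the 3:2 condition the parts align only once per cycle, which is the condition where detection, tapping stability, and neural entrainment all suffered.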
Affiliation(s)
- Brian Mathias
- McGill University
- Max Planck Institute for Human Cognitive and Brain Sciences
- Anna Zamm
- McGill University
- Central European University, Budapest, Hungary
9. Meter enhances the subcortical processing of speech sounds at a strong beat. Sci Rep 2020;10:15973. PMID: 32994430; PMCID: PMC7525485; DOI: 10.1038/s41598-020-72714-z.
Abstract
The temporal structure of sound such as in music and speech increases the efficiency of auditory processing by providing listeners with a predictable context. Musical meter is a good example of a sound structure that is temporally organized in a hierarchical manner, with recent studies showing that meter optimizes neural processing, particularly for sounds located at a higher metrical position or strong beat. Whereas enhanced cortical auditory processing at times of high metric strength has been studied, there is to date no direct evidence showing metrical modulation of subcortical processing. In this work, we examined the effect of meter on the subcortical encoding of sounds by measuring human auditory frequency-following responses to speech presented at four different metrical positions. Results show that neural encoding of the fundamental frequency of the vowel was enhanced at the strong beat, and also that the neural consistency of the vowel was the highest at the strong beat. When comparing musicians to non-musicians, musicians were found, at the strong beat, to selectively enhance the behaviorally relevant component of the speech sound, namely the formant frequency of the transient part. Our findings indicate that the meter of sound influences subcortical processing, and this metrical modulation differs depending on musical expertise.
10. Fitzroy AB, Breen M. Metric Structure and Rhyme Predictability Modulate Speech Intensity During Child-Directed and Read-Alone Productions of Children's Literature. Lang Speech 2020;63:292-305. PMID: 31074328; DOI: 10.1177/0023830919843158.
Abstract
Temporal and phonological predictability in children's literature may support early literacy acquisition. Realization of predictive structure in caregiver prosody could guide children's attention during shared reading, thereby supporting reading subskill development. However, little is known about how predictive structure is realized prosodically during child-directed reading. We investigated whether speakers use word intensity to signal predictive metric and rhyme structure in child-directed and read-alone productions of The Cat in the Hat (Dr. Seuss, 1957), by modeling maximum intensity (dB) of monosyllabic words as a function of metric strength, rhyme predictability, and a set of control parameters. In the control model, intensity increased with lower lexical frequency, capitalization, first mention, and likelihood of a syntactic boundary. Metric structure predicted word intensity beyond these control factors in a hierarchical manner: words aligned with beat one in a 6/8 metric structure were produced with highest intensity, words aligned with beat four were produced with intermediate intensity, and words aligned with all other beats were produced with the lowest intensity. Additionally, phonologically predictable rhyme targets were reduced in intensity. The effects of meter and rhyme were not moderated by the presence of a child audience. These results demonstrate that predictability along multiple dimensions is encoded during reading of poetic children's literature, and that metric structure is realized hierarchically in word intensity. Further, the manner by which predictability is encoded in word intensity differs from that previously reported for word duration in this corpus (Breen, 2018), demonstrating that intensity and duration present nonidentical prosodic information channels.
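The modeling approach described above (word intensity as a function of metric strength plus control predictors) can be sketched as an ordinary least-squares fit. The data, the 3-level metric-strength coding, and the effect sizes below are synthetic assumptions, not the corpus values.

```python
import numpy as np

# Illustrative OLS model of word intensity (dB) on metric strength
# (2 = beat one, 1 = beat four, 0 = other beats) plus one control
# predictor (log lexical frequency). Data are synthetic.
rng = np.random.default_rng(0)
n = 200
metric = rng.integers(0, 3, n).astype(float)
log_freq = rng.normal(10, 2, n)
intensity = 60 + 1.5 * metric - 0.8 * log_freq + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), metric, log_freq])
beta, *_ = np.linalg.lstsq(X, intensity, rcond=None)
# beta[1]: dB change per step up the metric hierarchy (positive, as reported);
# beta[2]: intensity reduction with higher lexical frequency.
```

In the study's full model, rhyme predictability and the other controls (capitalization, first mention, syntactic-boundary likelihood) would enter as additional columns of X.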
Affiliation(s)
- Ahren B Fitzroy
- Department of Psychology and Education, Mount Holyoke College; Department of Psychological and Brain Sciences, University of Massachusetts, USA
- Mara Breen
- Department of Psychology and Education, Mount Holyoke College, USA
11. Dauer T, Nerness B, Fujioka T. Predictability of higher-order temporal structure of musical stimuli is associated with auditory evoked response. Int J Psychophysiol 2020;153:53-64. PMID: 32325078; DOI: 10.1016/j.ijpsycho.2020.04.002.
Abstract
Sound predictability resulting from repetitive patterns can be implicitly learned and often neither requires nor captures our conscious attention. Recently, predictive coding theory has been used as a framework to explain how predictable or expected stimuli evoke and gradually attenuate obligatory neural responses over time compared to those elicited by unpredictable events. However, these results were obtained using the repetition of simple auditory objects such as pairs of tones or phonemes. Here we examined whether the same principle would hold for more abstract temporal structures of sounds. If this is the case, we hypothesized that a regular repetition schedule of a set of musical patterns would reduce neural processing over the course of listening compared to stimuli with an irregular repetition schedule (and the same set of musical patterns). Electroencephalography (EEG) was recorded while participants passively listened to 6-8 min stimulus sequences in which five different four-tone patterns with temporally regular or irregular repetition were presented successively in a randomized order. N1 amplitudes in response to the first tone of each musical pattern were significantly less negative at the end of the regular sequence compared to the beginning, while such reduction was absent in the irregular sequence. These results extend previous findings by showing that N1 reflects automatic learning of the predictable higher-order structure of sound sequences, while continuous engagement of preattentive auditory processing is necessary for the unpredictable structure.
Affiliation(s)
- Tysen Dauer
- Department of Music, Stanford University, United States
- Barbara Nerness
- Department of Music, Stanford University, United States; Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, United States
- Takako Fujioka
- Department of Music, Stanford University, United States; Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, United States; Wu Tsai Neurosciences Institute, Stanford University, United States
12. Bouwer FL, Honing H, Slagter HA. Beat-based and Memory-based Temporal Expectations in Rhythm: Similar Perceptual Effects, Different Underlying Mechanisms. J Cogn Neurosci 2020;32:1221-1241. PMID: 31933432; DOI: 10.1162/jocn_a_01529.
Abstract
Predicting the timing of incoming information allows the brain to optimize information processing in dynamic environments. Behaviorally, temporal expectations have been shown to facilitate processing of events at expected time points, such as sounds that coincide with the beat in musical rhythm. Yet, temporal expectations can develop based on different forms of structure in the environment, not just the regularity afforded by a musical beat. Little is still known about how different types of temporal expectations are neurally implemented and affect performance. Here, we orthogonally manipulated the periodicity and predictability of rhythmic sequences to examine the mechanisms underlying beat-based and memory-based temporal expectations, respectively. Behaviorally and using EEG, we looked at the effects of beat-based and memory-based expectations on auditory processing when rhythms were task-relevant or task-irrelevant. At expected time points, both beat-based and memory-based expectations facilitated target detection and led to attenuation of P1 and N1 responses, even when expectations were task-irrelevant (unattended). For beat-based expectations, we additionally found reduced target detection and enhanced N1 responses for events at unexpected time points (e.g., off-beat), regardless of the presence of memory-based expectations or task relevance. This latter finding supports the notion that periodicity selectively induces rhythmic fluctuations in neural excitability and furthermore indicates that, although beat-based and memory-based expectations may similarly affect auditory processing of expected events, their underlying neural mechanisms may be different.
13. Silva S, Castro SL. Structural meter perception is pre-attentive. Neuropsychologia 2019;133:107184. PMID: 31518576; DOI: 10.1016/j.neuropsychologia.2019.107184.
Abstract
A prominent question in timing research is whether meter perception is possible without attention to meter. So far, research has probed attention effects on meter perception with a surface-based approach that may create confounds between meter and rhythm, and not with a structural approach requiring abstraction from surface patterns. The available pattern of findings suggests that different meter dimensions (meter as beat hierarchy vs. meter as regular cycle length) may yield different attention effects: meter as cycle-length regularity may require attention (it is attentive but not pre-attentive), while meter as beat-hierarchy may be pre-attentive. However, it is unknown whether this dissociation prevails under structural meter processing. We examined attention effects on the EEG correlates of structural meter-processing, considering the two dimensions of meter perception: hierarchy and cycle-length. While the results for hierarchy violations were inconclusive, cycle-length violations induced pre-attentive, but not attentive, responses. These pre-attentive responses corresponded to late ERPs (300-600 ms), consistent with deep, structural meter-processing. Our findings highlight the importance of pre-attentive processing in meter perception, and they raise the hypothesis of dissociation between surface- and structure-based meter processing.
Affiliation(s)
- Susana Silva
- Center for Psychology at University of Porto (CPUP), Porto, Portugal
- São Luís Castro
- Center for Psychology at University of Porto (CPUP), Porto, Portugal
14. Lima DDBD, Regaçone SF, Oliveira ACSD, Alcântara YB, Chagas EFB, Frizzo ACF. Analysis of the Effect of Musical Stimulation on Cortical Auditory Evoked Potentials. Int Arch Otorhinolaryngol 2019;23:31-35. PMID: 30647781; PMCID: PMC6331299; DOI: 10.1055/s-0038-1651507.
Abstract
Introduction Cortical auditory evoked potentials (CAEPs) are bioelectric responses elicited by acoustic stimulation, and they assess the functionality of the central auditory system. Objective The objective of the present study was to analyze the effect of musical stimulation on CAEPs. Methods The sample consisted of 42 healthy female subjects, aged between 18 and 24 years, divided into two groups: G1, without musical stimulation prior to the CAEP examination; and G2, with stimulation prior to the examination. In both groups, as a pre-collection procedure, a complete basic audiological evaluation was performed. For the musical stimulation in G2, an MP4 player was programmed to play Pachelbel's "Canon in D Major" for five minutes prior to the CAEP examination. To analyze the effects of group, ear side, and the side-group interaction, a mixed repeated-measures analysis of variance (ANOVA) was performed; Box's M test and Mauchly's sphericity test were also applied. Results Differences were considered statistically significant when the p-value was < 0.05 (5%). There was a statistically significant difference in the P2 component, characterized by a decrease in response amplitude in the left ear in G2, when comparing CAEP responses with and without prior musical stimulation. Conclusion The results of the present study enabled us to conclude that musical stimulation changed the CAEP response.
Affiliations:
- Daiane Damaris Baptista de Lima, Department of Speech Therapy and Audiology, Faculty of Philosophy and Science, Universidade Estadual Paulista (FFC/UNESP), Marília (SP), Brazil.
- Simone Fiuza Regaçone, Department of Speech Therapy and Audiology, Dentistry School of Bauru, Universidade de São Paulo (FOB/USP), Bauru (SP), Brazil.
- Anna Caroline Silva de Oliveira, Department of Speech Therapy and Audiology, Faculty of Philosophy and Science, Universidade Estadual Paulista (FFC/UNESP), Marília (SP), Brazil.
- Yara Bagali Alcântara, Department of Speech Therapy and Audiology, Faculty of Philosophy and Science, Universidade Estadual Paulista (FFC/UNESP), Marília (SP), Brazil.
- Eduardo Federighi Baisi Chagas, Postgraduate Program in Human Development and Technology, Faculty of Philosophy, Sciences and Letters of Rio Claro, Universidade Estadual Paulista, Rio Claro (SP), Brazil; Department of Physical Education, Universidade de Marília, Marília (SP), Brazil.
- Ana Claudia Figueiredo Frizzo, Department of Speech Therapy and Audiology, Faculty of Philosophy and Science, Universidade Estadual Paulista (FFC/UNESP), Marília (SP), Brazil.
15
Kosie JE, Baldwin D. Attention rapidly reorganizes to naturally occurring structure in a novel activity sequence. Cognition 2019; 182:31-44. DOI: 10.1016/j.cognition.2018.09.004.
16
Harding EE, Sammler D, Henry MJ, Large EW, Kotz SA. Cortical tracking of rhythm in music and speech. Neuroimage 2018; 185:96-101. PMID: 30336253; DOI: 10.1016/j.neuroimage.2018.10.037.
Abstract
Neural activity phase-locks to rhythm in both music and speech. However, whether cortical tracking of comparable rhythmic structure is similar across domains has not been tested directly. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded the electroencephalogram (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that more musical training would be associated with stronger phase-locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
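The coherence measure described in this abstract can be illustrated with a short sketch. This is not the study's pipeline: the 2 Hz rhythm, sampling rate, and synthetic "EEG" below are assumptions for demonstration only; `scipy.signal.coherence` computes magnitude-squared coherence between two signals via Welch's method.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)    # 60 s of synthetic data

# Synthetic stimulus amplitude envelope with a 2 Hz rhythmic component
envelope = 1 + np.sin(2 * np.pi * 2.0 * t)

# Synthetic "EEG" that partially tracks the envelope, plus noise
eeg = 0.5 * envelope + rng.normal(scale=1.0, size=t.size)

# Magnitude-squared coherence between EEG and envelope (Welch's method)
f, cxy = coherence(eeg, envelope, fs=fs, nperseg=int(4 * fs))

# Coherence at the stimulus rhythm frequency (2 Hz)
idx = np.argmin(np.abs(f - 2.0))
print(f"coherence at {f[idx]:.2f} Hz: {cxy[idx]:.2f}")
```

Coherence is bounded between 0 and 1; a value near 1 at the rhythm frequency indicates strong phase-locking of the recorded signal to the stimulus envelope at that rate.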
Affiliations:
- Eleanor E Harding, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Daniela Sammler, Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Molly J Henry, Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, Ontario, Canada.
- Edward W Large, Department of Psychology, University of Connecticut, Storrs, Connecticut, USA.
- Sonja A Kotz, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands.
17
Breen M. Effects of metric hierarchy and rhyme predictability on word duration in The Cat in the Hat. Cognition 2018; 174:71-81. PMID: 29425988; DOI: 10.1016/j.cognition.2018.01.014.
Abstract
Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further our understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes.
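The core modeling idea in this abstract (regressing word duration on lexical predictors plus a metric-grid-height term) can be sketched minimally. The study used richer models with more predictors; here all data are synthetic and the coefficients, predictor set, and effect sizes are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic predictors for one-syllable words (illustrative, not the corpus)
n_phonemes = rng.integers(2, 6, size=n)       # segment count (2-5)
log_freq = rng.normal(0.0, 1.0, size=n)       # lexical frequency (log, z-scored)
metric_height = rng.integers(1, 6, size=n)    # metric grid height (1-5)

# Generate durations (ms): longer with more phonemes and higher metric
# position, shorter for more frequent words, plus noise
duration = (150 + 20 * n_phonemes - 10 * log_freq
            + 15 * metric_height + rng.normal(0, 10, size=n))

# Ordinary least squares fit (np.linalg.lstsq, SVD-based)
X = np.column_stack([np.ones(n), n_phonemes, log_freq, metric_height])
beta, *_ = np.linalg.lstsq(X, duration, rcond=None)
print(dict(zip(["intercept", "phonemes", "freq", "height"], beta.round(1))))
```

With enough data, the fitted coefficients recover the generating effects; in the actual study, a positive and reliable coefficient on the metric-height term is what "metric grid height improved model fit" amounts to.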
Affiliations:
- Mara Breen, Department of Psychology and Education, Mount Holyoke College, South Hadley, MA, USA.
18
Neural processing of musical meter in musicians and non-musicians. Neuropsychologia 2017; 106:289-297. DOI: 10.1016/j.neuropsychologia.2017.10.007.
19
Woodruff Carr K, Fitzroy AB, Tierney A, White-Schwoch T, Kraus N. Incorporation of feedback during beat synchronization is an index of neural maturation and reading skills. Brain and Language 2017; 164:43-52. PMID: 27701006; DOI: 10.1016/j.bandl.2016.09.005.
Abstract
Speech communication involves integration and coordination of sensory perception and motor production, requiring precise temporal coupling. Beat synchronization, the coordination of movement with a pacing sound, can be used as an index of this sensorimotor timing. We assessed adolescents' synchronization and capacity to correct asynchronies when given online visual feedback. Variability of synchronization while receiving feedback predicted phonological memory and reading sub-skills, as well as maturation of cortical auditory processing; less variable synchronization in the presence of feedback tracked with maturation of cortical processing of sound onsets and resting gamma activity. We suggest the ability to incorporate feedback during synchronization is an index of intentional, multimodal timing-based integration in the maturing adolescent brain. Precision of temporal coding across modalities is important for speech processing and literacy skills that rely on dynamic interactions with sound. Synchronization employing feedback may prove useful as a remedial strategy for individuals who struggle with timing-based language learning impairments.
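Synchronization variability of the kind used as a predictor in this abstract is simply the dispersion of tap-to-beat asynchronies. A minimal sketch on synthetic tap times follows; the pacing rate, the small anticipatory bias (taps typically precede the beat), and the noise level are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
ioi = 0.6                               # pacing inter-onset interval in s (assumed)
pacer = np.arange(0, 30, ioi)           # metronome onsets over 30 s

# Synthetic tap times: each pacing onset plus motor noise and a small
# anticipation bias (negative mean asynchrony)
taps = pacer + rng.normal(loc=-0.02, scale=0.03, size=pacer.size)

# Asynchrony = tap time minus the corresponding pacing onset
asynchrony = taps - pacer

# Synchronization variability: SD of asynchronies (lower = more stable timing)
print(f"mean asynchrony: {asynchrony.mean() * 1000:.1f} ms, "
      f"variability (SD): {asynchrony.std() * 1000:.1f} ms")
```

The SD (not the mean) is the variability measure: two participants can share the same average anticipation yet differ greatly in how consistently they time each tap.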
Affiliations:
- Kali Woodruff Carr, Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
- Ahren B Fitzroy, Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
- Adam Tierney, Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
- Travis White-Schwoch, Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
- Nina Kraus, Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Department of Communication Sciences, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA; Department of Neurobiology & Physiology, Northwestern University, 2205 Tech Drive, Evanston, IL 60208, USA; Department of Otolaryngology, Northwestern University, 675 North St Clair, Chicago, IL, USA.