1
Zheng Y, Gao P, Li X. The modulating effect of musical expertise on lexical-semantic prediction in speech-in-noise comprehension: Evidence from an EEG study. Psychophysiology 2023; 60:e14371. [PMID: 37350401] [DOI: 10.1111/psyp.14371]
Abstract
Musical expertise has been proposed to facilitate speech perception and comprehension in noisy environments. This study further examined the open question of whether musical expertise modulates high-level lexical-semantic prediction to aid online speech comprehension in noisy backgrounds. Musicians and nonmusicians listened to semantically strongly/weakly constraining sentences during EEG recording. At verbs prior to target nouns, both groups showed a positivity-ERP effect (Strong vs. Weak) associated with the predictability of incoming nouns; this correlation effect was stronger in musicians than in nonmusicians. After the target nouns appeared, both groups showed an N400 reduction effect (Strong vs. Weak) associated with noun predictability, but musicians exhibited an earlier onset latency and stronger effect size of this correlation effect than nonmusicians. To determine whether musical expertise enhances anticipatory semantic processing in general, the same group of participants participated in a control reading comprehension experiment. The results showed that, compared with nonmusicians, musicians demonstrated more delayed ERP correlation effects of noun predictability at words preceding the target nouns; musicians also exhibited more delayed and reduced N400 decrease effects correlated with noun predictability at the target nouns. Taken together, these results suggest that musical expertise enhances lexical-semantic predictive processing in speech-in-noise comprehension. This musical-expertise effect may be related to the strengthened hierarchical speech processing in particular.
Affiliation(s)
- Yuanyi Zheng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Panke Gao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoqing Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
2
Ishida K, Nittono H. Relationship between early neural responses to syntactic and acoustic irregularities in music. Eur J Neurosci 2022; 56:6201-6214. [PMID: 36310105] [DOI: 10.1111/ejn.15856]
Abstract
Humans can detect various anomalies in a sound sequence without attending to each dimension explicitly. Event-related potentials (ERPs) have been used to examine the processes of auditory deviance detection. Previous research has shown that music-syntactic anomalies elicit early right anterior negativity (ERAN), whereas more general acoustic irregularities elicit mismatch negativity (MMN). Although these ERP components occur in a similar latency range with a similar scalp topography, the relationship between the detection processes they reflect remains unclear. This study compared these components by manipulating music-syntactic (chord progression) and acoustic (intensity) irregularities orthogonally in two experiments. Non-musicians (Experiment 1: N = 39; Experiment 2: N = 24) were asked to listen to chord sequences, each consisting of 5 four-voice chords, as they watched a silent video clip. Standard, harmonic-deviant, intensity-deviant and double-deviant chords occurred at the final position in each sequence. Deviant stimuli were presented infrequently (p = .10) in Experiment 1 and equiprobably (p = .25) in Experiment 2. Regardless of deviance probability, both harmonic and intensity deviants elicited similar negativities, which were indistinguishable in terms of latency or scalp distribution. When the two deviant types occurred simultaneously, the negativity increased in an additive manner; that is, the amplitude of the double-deviant ERP was as large as the sum of the single-deviant ERPs. These findings suggest that the detection of music-syntactic and acoustic irregularities works independently, based on different regularity representations.
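The additivity result above can be illustrated with a minimal analysis sketch: compute deviant-minus-standard difference waves and compare the double-deviant effect with the sum of the two single-deviant effects in an early latency window. This is not the authors' pipeline; the file names, sampling rate, and 100-200 ms window below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical grand-average waveforms (channels x time), one per condition,
# sampled at 500 Hz with a 100 ms pre-stimulus baseline (placeholder values).
sfreq, t_min = 500, -0.1
conditions = ["standard", "harmonic", "intensity", "double"]
erp = {c: np.load(f"grand_avg_{c}.npy") for c in conditions}  # placeholder files

def mean_amplitude(wave, start, stop):
    """Mean amplitude in a latency window given in seconds."""
    i0 = int(round((start - t_min) * sfreq))
    i1 = int(round((stop - t_min) * sfreq))
    return wave[:, i0:i1].mean()

# Deviant-minus-standard difference waves isolate each deviance effect.
diff = {c: erp[c] - erp["standard"] for c in ("harmonic", "intensity", "double")}

# Additivity: the double-deviant effect should approximate the summed single effects.
win = (0.10, 0.20)  # assumed early-negativity window, 100-200 ms post-onset
observed = mean_amplitude(diff["double"], *win)
predicted = mean_amplitude(diff["harmonic"] + diff["intensity"], *win)
print(f"observed double-deviant effect: {observed:.2f} µV")
print(f"sum of single-deviant effects:  {predicted:.2f} µV")
```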
Affiliation(s)
- Kai Ishida
- Graduate School of Human Sciences, Osaka University, Osaka, Japan
- Hiroshi Nittono
- Graduate School of Human Sciences, Osaka University, Osaka, Japan
3
Zhou T, Li Y, Liu H, Zhou S, Wang T. N400 Indexing the Motion Concept Shared by Music and Words. Front Psychol 2022; 13:888226. [PMID: 35837648] [PMCID: PMC9275656] [DOI: 10.3389/fpsyg.2022.888226]
Abstract
Two event-related potential (ERP) studies investigated how verbs and nouns are processed under different music priming conditions, in order to reveal whether an embodied motion concept can be activated and evoked across categories. Study 1 (Tasks 1 and 2) tested the processing of verbs (action verbs vs. state verbs) primed by two music types, with tempo changes (accelerating music vs. decelerating music) and without tempo changes (fast music vs. slow music), while Study 2 (Tasks 3 and 4) tested the processing of nouns (animate nouns vs. inanimate nouns) under the same priming conditions as Study 1. During the experiments, participants heard a piece of music and then judged whether an ensuing word (verb or noun) was semantically congruent with the motion concept conveyed by the music. The results show that in the priming condition of music with tempo changes, state verbs and inanimate nouns elicited larger N400 amplitudes than action verbs and animate nouns, respectively, in the anterior and anterior-to-central regions, whereas in the priming condition of music without tempo changes, action verbs unexpectedly elicited larger N400 amplitudes than state verbs and the two categories of nouns showed no N400 difference. The interactions between music and words were significant only in Tasks 1, 2, and 3. Taken together, the results demonstrate, first, that music with and without tempo changes primes verbs and nouns in different fashions; second, that action verbs and animate nouns are easier to process than state verbs and inanimate nouns when primed by music with tempo changes, owing to the shared motion concept across categories; and third, that bodily experience differentiates music and words in how the motion concept is encoded and decoded, but that the motion concept conveyed by the two categories can be subtly extracted on a metaphorical basis, as indexed by the N400 component. These studies reveal that music tempos can prime different word classes, favoring the notion that an embodied motion concept exists across domains and adding evidence to the hypothesis that music and language share neural mechanisms of meaning processing.
4
Susino M, Schubert E. Musical emotions in the absence of music: A cross-cultural investigation of emotion communication in music by extra-musical cues. PLoS One 2020; 15:e0241196. [PMID: 33206664] [PMCID: PMC7673536] [DOI: 10.1371/journal.pone.0241196]
Abstract
Research on music and emotion has long acknowledged the importance of extra-musical cues, yet has been unable to measure their effect on emotion communication in music. The aim of this research was to understand how extra-musical cues affect emotion responses to music in two distinguishable cultures. Australian and Cuban participants (N = 276) were instructed to name an emotion in response to written lyric excerpts from eight distinct music genres, using genre labels as cues. Lyrics were presented primed with a genre label (the original label or a false, lured label) or unprimed. For some genres, emotion responses to the same lyrics changed according to the primed genre label. We explain these results as emotion expectations induced by extra-musical cues. This suggests that prior knowledge elicited by lyrics and music genre labels can affect the emotions that music communicates, independent of the contribution made by psychoacoustic features. For example, a lyric excerpt believed to belong to the Heavy Metal genre triggered high-valence/high-arousal emotions compared with the same excerpt primed as Japanese Gagaku, without any music being played. The present study provides novel empirical evidence of extra-musical effects on emotion and music, and supports this interpretation from a multi-genre, cross-cultural perspective. Further findings relating to fandom also supported the emotion-expectation account: participants with high levels of fandom for a genre reported a wider range of emotions in response to lyrics labelled as belonging to that genre than participants with lower levels of fandom. Both within- and across-culture differences were observed, and the importance of a culture effect is discussed.
Affiliation(s)
- Marco Susino
- Assemblage Centre for Creative Arts, College of Humanities, Arts and Social Sciences, Flinders University, Adelaide, Australia
- Emery Schubert
- Empirical Musicology Group, School of the Arts and Media, University of New South Wales, Sydney, Australia
5
Calma-Roddin N, Drury JE. Music, Language, and The N400: ERP Interference Patterns Across Cognitive Domains. Sci Rep 2020; 10:11222. [PMID: 32641708] [PMCID: PMC7343814] [DOI: 10.1038/s41598-020-66732-0]
Abstract
Studies of the relationship of language and music have suggested these two systems may share processing resources involved in the computation/maintenance of abstract hierarchical structure (syntax). One type of evidence comes from ERP interference studies involving concurrent language/music processing showing interaction effects when both processing streams are simultaneously perturbed by violations (e.g., syntactically incorrect words paired with incongruent completion of a chord progression). Here, we employ this interference methodology to target the mechanisms supporting long term memory (LTM) access/retrieval in language and music. We used melody stimuli from previous work showing out-of-key or unexpected notes may elicit a musical analogue of language N400 effects, but only for familiar melodies, and not for unfamiliar ones. Target notes in these melodies were time-locked to visually presented target words in sentence contexts manipulating lexical/conceptual semantic congruity. Our study succeeded in eliciting expected N400 responses from each cognitive domain independently. Among several new findings we argue to be of interest, these data demonstrate that: (i) language N400 effects are delayed in onset by concurrent music processing only when melodies are familiar, and (ii) double violations with familiar melodies (but not with unfamiliar ones) yield a sub-additive N400 response. In addition: (iii) early negativities (RAN effects), which previous work has connected to musical syntax, along with the music N400, were together delayed in onset for familiar melodies relative to the timing of these effects reported in the previous music-only study using these same stimuli, and (iv) double violation cases involving unfamiliar/novel melodies also delayed the RAN effect onset. These patterns constitute the first demonstration of N400 interference effects across these domains and together contribute previously undocumented types of interactions to the available pool of findings relevant to understanding whether language and music may rely on shared underlying mechanisms.
Affiliation(s)
- Nicole Calma-Roddin
- Department of Behavioral Sciences, New York Institute of Technology, Old Westbury, New York, USA
- Department of Psychology, Stony Brook University, New York, USA
- John E Drury
- School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
6
Proverbio AM, Camporeale E, Brusa A. Multimodal Recognition of Emotions in Music and Facial Expressions. Front Hum Neurosci 2020; 14:32. [PMID: 32116613] [PMCID: PMC7027335] [DOI: 10.3389/fnhum.2020.00032]
Abstract
The aim of the study was to investigate the neural processing of congruent vs. incongruent affective audiovisual information (facial expressions and music) by means of ERP (event-related potential) recordings. Stimuli were 200 infant faces displaying happiness, relaxation, sadness, or distress, and 32 piano musical pieces conveying the same emotional states (as specifically assessed). Music and faces were presented simultaneously, paired so that they were emotionally congruent in half of the cases and incongruent in the other half. Twenty subjects were told to pay attention and respond to infrequent targets (adult neutral faces) while their EEG was recorded from 128 channels. The face-related N170 (160-180 ms) component was the earliest response affected by the emotional content of faces (particularly by distress), while visual P300 (250-450 ms) and auditory N400 (350-550 ms) responses were specifically modulated by the emotional content of both facial expressions and musical pieces. Face/music emotional incongruence elicited a wide N400 negativity, indicating the detection of a mismatch in the expressed emotion. A swLORETA inverse solution applied to the N400 (difference wave: incongruent minus congruent) showed the crucial role of the inferior and superior temporal gyri in the multimodal representation of emotional information extracted from faces and music. Furthermore, the prefrontal cortex (superior and medial, BA 10) was also strongly active, possibly supporting working memory. The data hint at a common system for representing emotional information derived from social cognition and music processing, including the uncus and cuneus.
7
Rossi S, Gugler MF, Rungger M, Galvan O, Zorowka PG, Seebacher J. How the Brain Understands Spoken and Sung Sentences. Brain Sci 2020; 10:E36. [PMID: 31936356] [PMCID: PMC7017195] [DOI: 10.3390/brainsci10010036]
Abstract
The present study investigates whether meaning is similarly extracted from spoken and sung sentences. For this purpose, subjects listened to semantically correct and incorrect sentences while performing a correctness judgement task. In order to examine underlying neural mechanisms, a multi-methodological approach was chosen combining two neuroscientific methods with behavioral data. In particular, fast dynamic changes reflected in the semantically associated N400 component of the electroencephalography (EEG) were simultaneously assessed with the topographically more fine-grained vascular signals acquired by the functional near-infrared spectroscopy (fNIRS). EEG results revealed a larger N400 for incorrect compared to correct sentences in both spoken and sung sentences. However, the N400 was delayed for sung sentences, potentially due to the longer sentence duration. fNIRS results revealed larger activations for spoken compared to sung sentences irrespective of semantic correctness at predominantly left-hemispheric areas, potentially suggesting a greater familiarity with spoken material. Furthermore, the fNIRS revealed a widespread activation for correct compared to incorrect sentences irrespective of modality, potentially indicating a successful processing of sentence meaning. The combined results indicate similar semantic processing in speech and song.
Affiliation(s)
- Sonja Rossi
- ICONE-Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Manfred F Gugler
- Department for Medical Psychology, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Markus Rungger
- Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Oliver Galvan
- Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Patrick G Zorowka
- Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Josef Seebacher
- Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
8
Tanaka S, Kirino E. Increased Functional Connectivity of the Angular Gyrus During Imagined Music Performance. Front Hum Neurosci 2019; 13:92. [PMID: 30936827] [PMCID: PMC6431621] [DOI: 10.3389/fnhum.2019.00092]
Abstract
The angular gyrus (AG) is a hub of several networks that are involved in various functions, including attention, self-processing, semantic information processing, emotion regulation, and mentalizing. Since these functions are required in music performance, it is likely that the AG plays a role in music performance. Considering that these functions emerge as network properties, this study analyzed the functional connectivity of the AG during the imagined music performance task and the resting condition. Our hypothesis was that the functional connectivity of the AG is modulated by imagined music performance. In the resting condition, the AG had connections with the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and precuneus as well as the superior and inferior frontal gyri and with the temporal cortex. Compared with the resting condition, imagined music performance increased the functional connectivity of the AG with the superior frontal gyrus (SFG), mPFC, precuneus, PCC, hippocampal/parahippocampal gyrus (H/PHG), and amygdala. The anterior cingulate cortex (ACC) and superior temporal gyrus (STG) were newly engaged or added to the AG network during the task. In contrast, the supplementary motor area (SMA), sensorimotor areas, and occipital regions, which were anti-correlated with the AG in the resting condition, were disengaged during the task. These results lead to the conclusion that the functional connectivity of the AG is modulated by imagined music performance, which suggests that the AG plays a role in imagined music performance.
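Seed-based functional connectivity of the kind reported here can be sketched as correlations between the AG time series and other regions, compared between the task and resting conditions. The ROI list, file names, and single-subject scope below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Pearson correlation between a seed time series (T,) and each ROI column (T x R)."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    rois = (roi_ts - roi_ts.mean(axis=0)) / roi_ts.std(axis=0)
    return (rois * seed[:, None]).mean(axis=0)

# Hypothetical preprocessed BOLD time series for one subject (time x ROI).
rest_ts = np.load("rest_roi_timeseries.npy")     # placeholder file
task_ts = np.load("imagery_roi_timeseries.npy")  # placeholder file
roi_names = ["AG", "SFG", "mPFC", "precuneus", "PCC", "SMA"]  # illustrative ROI set
ag = roi_names.index("AG")

r_rest = seed_connectivity(rest_ts[:, ag], rest_ts)
r_task = seed_connectivity(task_ts[:, ag], task_ts)

# Fisher z-transform so the task-minus-rest change is comparable across subjects.
dz = np.arctanh(r_task) - np.arctanh(r_rest)
for name, d in zip(roi_names, dz):
    if name == "AG":
        continue  # skip the seed's trivial self-correlation
    print(f"AG-{name}: task-rest change in connectivity, dz = {d:+.2f}")
```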
Affiliation(s)
- Shoji Tanaka
- Department of Information and Communication Sciences, Sophia University, Tokyo, Japan
- Eiji Kirino
- Department of Psychiatry, School of Medicine, Juntendo University, Tokyo, Japan; Juntendo Shizuoka Hospital, Shizuoka, Japan
9
Hides L, Dingle G, Quinn C, Stoyanov SR, Zelenko O, Tjondronegoro D, Johnson D, Cockshaw W, Kavanagh DJ. Efficacy and Outcomes of a Music-Based Emotion Regulation Mobile App in Distressed Young People: Randomized Controlled Trial. JMIR Mhealth Uhealth 2019; 7:e11482. [PMID: 30664457] [PMCID: PMC6352004] [DOI: 10.2196/11482]
Abstract
Background: Emotion dysregulation increases the risk of depression, anxiety, and substance use disorders. Music can help regulate emotions, and mobile phones provide constant access to it. The Music eScape mobile app teaches young people how to identify and manage emotions using music. Objective: This study aimed to examine the effects of using Music eScape on emotion regulation, distress, and well-being at 1, 2, 3, and 6 months. Moderators of outcomes and user ratings of app quality were also examined. Methods: A randomized controlled trial compared immediate versus 1-month delayed access to Music eScape in 169 young people (aged 16 to 25 years) with at least mild levels of mental distress (Kessler 10 score >17). Results: No significant differences between immediate and delayed groups on emotion regulation, distress, or well-being were found at 1 month. Both groups achieved significant improvements in 5 of the 6 emotion regulation skills, mental distress, and well-being at 2, 3, and 6 months. Unhealthy music use moderated improvements on 3 emotion regulation skills. Users gave the app a high mean quality rating (mean 3.8 [SD 0.6]) out of 5. Conclusions: Music eScape has the potential to provide a highly accessible way of improving young people's emotion regulation skills, but further testing is required to determine its efficacy. Targeting unhealthy music use in distressed young people may improve their emotion regulation skills. Trial Registration: Australian New Zealand Clinical Trials Registry ACTRN12615000051549; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=365974
Affiliation(s)
- Leanne Hides
- School of Psychology, The University of Queensland, Brisbane, Australia; School of Psychology & Counselling, Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia; Centre for Children's Health Research, Queensland University of Technology, Brisbane, Australia
- Genevieve Dingle
- School of Psychology & Counselling, Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia
- Catherine Quinn
- School of Psychology & Counselling, Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia
- Stoyan R Stoyanov
- School of Psychology, The University of Queensland, Brisbane, Australia; School of Psychology & Counselling, Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia; Centre for Children's Health Research, Queensland University of Technology, Brisbane, Australia
- Oksana Zelenko
- Creative Industries Faculty, Queensland University of Technology, Brisbane, Australia
- Dian Tjondronegoro
- School of Business and Tourism, Southern Cross University, Gold Coast, Australia
- Daniel Johnson
- School of Business and Tourism, Southern Cross University, Gold Coast, Australia
- Wendell Cockshaw
- School of Psychology & Counselling, Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia; Centre for Children's Health Research, Queensland University of Technology, Brisbane, Australia
- David J Kavanagh
- School of Psychology & Counselling, Institute of Health and Biomedical Innovation, Queensland University of Technology, Brisbane, Australia; Centre for Children's Health Research, Queensland University of Technology, Brisbane, Australia
10
Belyk M, Johnson JF, Kotz SA. Poor neuro-motor tuning of the human larynx: a comparison of sung and whistled pitch imitation. R Soc Open Sci 2018; 5:171544. [PMID: 29765635] [PMCID: PMC5936900] [DOI: 10.1098/rsos.171544]
Abstract
Vocal imitation is a hallmark of human communication that underlies the capacity to learn to speak and sing. Even so, poor vocal imitation abilities are surprisingly common in the general population and even expert vocalists cannot match the precision of a musical instrument. Although humans have evolved a greater degree of control over the laryngeal muscles that govern voice production, this ability may be underdeveloped compared with control over the articulatory muscles, such as the tongue and lips, volitional control of which emerged earlier in primate evolution. Human participants imitated simple melodies by either singing (i.e. producing pitch with the larynx) or whistling (i.e. producing pitch with the lips and tongue). Sung notes were systematically biased towards each individual's habitual pitch, which we hypothesize may act to conserve muscular effort. Furthermore, while participants who sung more precisely also whistled more precisely, sung imitations were less precise than whistled imitations. The laryngeal muscles that control voice production are under less precise control than the oral muscles that are involved in whistling. This imprecision may be due to the relatively recent evolution of volitional laryngeal-motor control in humans, which may be tuned just well enough for the coarse modulation of vocal-pitch in speech.
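The sung-versus-whistled precision comparison rests on expressing produced pitch relative to the target on a cent scale. A minimal sketch follows, with made-up per-note frequencies and an assumed habitual-pitch value; none of these numbers come from the study.

```python
import numpy as np

def cents(f_produced, f_target):
    """Signed pitch error in cents (100 cents = 1 semitone)."""
    return 1200.0 * np.log2(np.asarray(f_produced) / np.asarray(f_target))

# Made-up per-note fundamental frequencies (Hz) for one participant.
target_hz = np.array([220.0, 246.9, 261.6, 293.7])
sung_hz = np.array([214.0, 239.0, 252.0, 285.0])      # drifts toward the habitual pitch
whistled_hz = np.array([221.0, 247.5, 260.5, 294.5])  # closer to the targets
habitual_hz = 210.0  # assumed habitual (modal) pitch of this participant

sung_err = cents(sung_hz, target_hz)
whistled_err = cents(whistled_hz, target_hz)
print("imprecision (mean |error| in cents): sung",
      round(np.abs(sung_err).mean(), 1), "vs whistled", round(np.abs(whistled_err).mean(), 1))

# Bias toward habitual pitch: do errors point from the target toward the habitual pitch?
toward_habitual = np.sign(cents(habitual_hz, target_hz))
print("proportion of sung errors toward habitual pitch:",
      np.mean(np.sign(sung_err) == toward_habitual))
```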
Affiliation(s)
- Michel Belyk
- Bloorview Research Institute, 150 Kilgour Road, Toronto, Canada M4G 1R8
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Joseph F. Johnson
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Sonja A. Kotz
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Department of Neuropsychology, Max Planck Institute for Human and Cognitive Sciences, Leipzig, Germany
11
Lumaca M, Baggio G. Cultural Transmission and Evolution of Melodic Structures in Multi-generational Signaling Games. Artif Life 2017; 23:406-423. [PMID: 28786724] [DOI: 10.1162/artl_a_00238]
Abstract
It has been proposed that languages evolve by adapting to the perceptual and cognitive constraints of the human brain, developing, in the course of cultural transmission, structural regularities that maximize or optimize learnability and ease of processing. To what extent would perceptual and cognitive constraints similarly affect the evolution of musical systems? We conducted an experiment on the cultural evolution of artificial melodic systems, using multi-generational signaling games as a laboratory model of cultural transmission. Signaling systems, using five-tone sequences as signals, and basic and compound emotions as meanings, were transmitted from senders to receivers along diffusion chains in which the receiver in each game became the sender in the next game. During transmission, structural regularities accumulated in the signaling systems, following principles of proximity, symmetry, and good continuation. Although the compositionality of signaling systems did not increase significantly across generations, we did observe a significant increase in similarity among signals from the same set. We suggest that our experiment tapped into the cognitive and perceptual constraints operative in the cultural evolution of musical systems, which may differ from the mechanisms at play in language evolution and change.
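The transmission protocol (each game's receiver becomes the next game's sender) can be illustrated with a toy chain in which a mapping from emotion meanings to five-tone signals is passed down with occasional learning errors. This is purely schematic: the meanings, tone inventory, and noise model are invented for illustration and do not reproduce the signaling-game task.

```python
import random

MEANINGS = ["joy", "sadness", "fear", "nostalgia"]  # illustrative basic/compound emotions
TONES = list(range(5))                              # a five-tone inventory

def random_signal():
    return tuple(random.choice(TONES) for _ in range(5))

def transmit(system, noise=0.2):
    """The receiver reproduces the sender's system, occasionally mislearning one tone."""
    learned = {}
    for meaning, signal in system.items():
        signal = list(signal)
        if random.random() < noise:
            signal[random.randrange(5)] = random.choice(TONES)
        learned[meaning] = tuple(signal)
    return learned

# Seed generation: an arbitrary mapping from meanings to five-tone signals.
system = {m: random_signal() for m in MEANINGS}

# Diffusion chain: each game's receiver becomes the sender of the next game.
for generation in range(1, 9):
    system = transmit(system)
    print(f"generation {generation}: {system}")
```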
Affiliation(s)
- Massimo Lumaca
- SISSA International School for Advanced Studies
- Aarhus University
- Giosuè Baggio
- SISSA International School for Advanced Studies
- Norwegian University of Science and Technology
12
Abstract
Direct stimulation of the auditory nerve via a Cochlear Implant (CI) enables profoundly hearing-impaired people to perceive sounds. Many CI users find language comprehension satisfactory, but music perception is generally considered difficult. However, music contains different dimensions which might be accessible in different ways. We aimed to highlight three main dimensions of music processing in CI users which rely on different processing mechanisms: (1) musical discrimination abilities, (2) access to meaning in music, and (3) subjective music appreciation. All three dimensions were investigated in two CI user groups (post- and prelingually deafened CI users, all implanted as adults) and a matched normal hearing control group. The meaning of music was studied by using event-related potentials (with the N400 component as marker) during a music-word priming task while music appreciation was gathered by a questionnaire. The results reveal a double dissociation between the three dimensions of music processing. Despite impaired discrimination abilities of both CI user groups compared to the control group, appreciation was reduced only in postlingual CI users. While musical meaning processing was restorable in postlingual CI users, as shown by a N400 effect, data of prelingual CI users lack the N400 effect and indicate previous dysfunctional concept building.
13
Särkämö T, Altenmüller E, Rodríguez-Fornells A, Peretz I. Editorial: Music, Brain, and Rehabilitation: Emerging Therapeutic Applications and Potential Neural Mechanisms. Front Hum Neurosci 2016; 10:103. [PMID: 27014034] [PMCID: PMC4783433] [DOI: 10.3389/fnhum.2016.00103]
Affiliation(s)
- Teppo Särkämö
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Eckart Altenmüller
- Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media Hanover, Hanover, Germany
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Unit, Bellvitge Research Biomedical Institute, Barcelona, Spain; Department of Basic Psychology, University of Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Isabelle Peretz
- International Laboratory for Brain, Music, and Sound Research and Centre for Research on Brain, Language and Music, Montréal, QC, Canada; Department of Psychology, Université de Montréal, Montréal, QC, Canada
14
Music for a Brighter World: Brightness Judgment Bias by Musical Emotion. PLoS One 2016; 11:e0148959. [PMID: 26863420] [PMCID: PMC4749205] [DOI: 10.1371/journal.pone.0148959]
Abstract
A prevalent conceptual metaphor is the association of the concepts of good and evil with brightness and darkness, respectively. Music cognition, like metaphor, is possibly embodied, yet no study has addressed the question whether musical emotion can modulate brightness judgment in a metaphor consistent fashion. In three separate experiments, participants judged the brightness of a grey square that was presented after a short excerpt of emotional music. The results of Experiment 1 showed that short musical excerpts are effective emotional primes that cross-modally influence brightness judgment of visual stimuli. Grey squares were consistently judged as brighter after listening to music with a positive valence, as compared to music with a negative valence. The results of Experiment 2 revealed that the bias in brightness judgment does not require an active evaluation of the emotional content of the music. By applying a different experimental procedure in Experiment 3, we showed that this brightness judgment bias is indeed a robust effect. Altogether, our findings demonstrate a powerful role of musical emotion in biasing brightness judgment and that this bias is aligned with the metaphor viewpoint.
15
Cai L, Huang P, Luo Q, Huang H, Mo L. Iconic Meaning in Music: An Event-Related Potential Study. PLoS One 2015; 10:e0132169. [PMID: 26161561] [PMCID: PMC4498930] [DOI: 10.1371/journal.pone.0132169]
Abstract
Although there has been extensive research on the processing of the emotional meaning of music, little is known about other aspects of listeners' experience of music. The present study investigated the neural correlates of the iconic meaning of music. Event-related potentials (ERPs) were recorded while a group of 20 music majors and a group of 20 non-music majors performed a lexical decision task in the context of implicit musical iconic-meaning priming. ERP analysis revealed a significant N400 effect of congruency in the 260-510 ms time window following the onset of the target word only in the group of music majors. Time-course analysis using 50 ms windows indicated significant N400 effects within both the 410-460 ms and 460-510 ms windows for music majors, whereas only a partial N400 effect in the 410-460 ms window was observed for non-music majors. There was also a trend for the N400 effects in the music-major group to be stronger than those in the non-major group in the 310-360 ms and 410-460 ms sub-windows. Especially in the 410-460 ms sub-window, the topographical map of the difference waveforms between congruent and incongruent conditions revealed a different N400 distribution between groups: the effect was concentrated in bilateral frontal areas for music majors, but in central-parietal areas for non-music majors. These results imply probable differences in the neural mechanisms underlying automatic iconic-meaning priming by music. Our findings suggest that processing of the iconic meaning of music can be accomplished automatically and that musical training may facilitate its understanding.
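The 50 ms time-course analysis amounts to averaging amplitude in consecutive sub-windows and taking the incongruent-minus-congruent difference. A minimal sketch, assuming condition-average arrays, a 500 Hz sampling rate, and a 200 ms baseline; all of these values are placeholders rather than the study's parameters.

```python
import numpy as np

sfreq, t_min = 500, -0.2  # assumed sampling rate (Hz) and epoch start (s)

def window_mean(erp, start_ms, stop_ms):
    """Mean amplitude of a (channels x time) condition average in a millisecond window."""
    i0 = int(round((start_ms / 1000 - t_min) * sfreq))
    i1 = int(round((stop_ms / 1000 - t_min) * sfreq))
    return erp[:, i0:i1].mean()

# Hypothetical condition averages for one group (channels x time).
congruent = np.load("majors_congruent.npy")      # placeholder file
incongruent = np.load("majors_incongruent.npy")  # placeholder file

# Scan 260-510 ms in consecutive 50 ms sub-windows; the N400 effect is the extra
# negativity for incongruent relative to congruent target words.
for start in range(260, 510, 50):
    effect = window_mean(incongruent, start, start + 50) - window_mean(congruent, start, start + 50)
    print(f"{start}-{start + 50} ms: congruency effect = {effect:+.2f} µV")
```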
Affiliation(s)
- Liman Cai
- Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, China
- College of Education Science, South China Normal University, Guangzhou 510631, China
- Ping Huang
- Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, China
- Qiuling Luo
- Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, China
- Hong Huang
- Department of Music, Xinghai Conservatory of Music, Guangzhou 510500, China
- Lei Mo
- Center for Studies of Psychological Application, South China Normal University, Guangzhou 510631, China
16
Asano R, Boeckx C. Syntax in language and music: what is the right level of comparison? Front Psychol 2015; 6:942. [PMID: 26191034] [PMCID: PMC4488597] [DOI: 10.3389/fpsyg.2015.00942]
Abstract
It is often claimed that music and language share a process of hierarchical structure building, a mental “syntax.” Although several lines of research point to commonalities, and possibly a shared syntactic component, differences between “language syntax” and “music syntax” can also be found at several levels: conveyed meaning, and the atoms of combination, for example. To bring music and language closer to one another, some researchers have suggested a comparison between music and phonology (“phonological syntax”), but here too, one quickly arrives at a situation of intriguing similarities and obvious differences. In this paper, we suggest that a fruitful comparison between the two domains could benefit from taking the grammar of action into account. In particular, we suggest that what is called “syntax” can be investigated in terms of goal of action, action planning, motor control, and sensory-motor integration. At this level of comparison, we suggest that some of the differences between language and music could be explained in terms of different goals reflected in the hierarchical structures of action planning: the hierarchical structures of music arise to achieve goals with a strong relation to the affective-gestural system encoding tension-relaxation patterns as well as socio-intentional system, whereas hierarchical structures in language are embedded in a conceptual system that gives rise to compositional meaning. Similarities between music and language are most clear in the way several hierarchical plans for executing action are processed in time and sequentially integrated to achieve various goals.
Affiliation(s)
- Rie Asano
- Department of Systematic Musicology, Institute of Musicology, University of Cologne, Cologne, Germany
- Cedric Boeckx
- Catalan Institute for Research and Advanced Studies, Barcelona, Spain; Department of General Linguistics, Universitat de Barcelona, Barcelona, Spain
17
Perlovsky L. Origin of music and embodied cognition. Front Psychol 2015; 6:538. [PMID: 25972830] [PMCID: PMC4411987] [DOI: 10.3389/fpsyg.2015.00538]
Affiliation(s)
- Leonid Perlovsky
- Department of Psychology, Northeastern University, Boston, MA, USA
18
Rohrmeier M, Zuidema W, Wiggins GA, Scharff C. Principles of structure building in music, language and animal song. Philos Trans R Soc Lond B Biol Sci 2015; 370:20140097. [PMID: 25646520] [PMCID: PMC4321138] [DOI: 10.1098/rstb.2014.0097]
Abstract
Human language, music and a variety of animal vocalizations constitute ways of sonic communication that exhibit remarkable structural complexity. While the complexities of language and possible parallels in animal communication have been discussed intensively, reflections on the complexity of music and animal song, and their comparisons, are underrepresented. In some ways, music and animal songs are more comparable to each other than to language, as propositional semantics cannot be used as an indicator of communicative success or well-formedness, and notions of grammaticality are less easily defined. This review brings together accounts of the principles of structure building in music and animal song. It relates them to corresponding models in formal language theory, the extended Chomsky hierarchy (CH), and their probabilistic counterparts. We further discuss common misunderstandings and shortcomings concerning the CH and suggest ways to move beyond it. We discuss language, music and animal song in the context of their function and motivation, and further integrate problems and issues that are less commonly addressed in the context of language, including continuous event spaces, features of sound and timbre, representation of temporality, and interactions of multiple parallel feature streams. We discuss these aspects in the light of recent theoretical, cognitive, neuroscientific and modelling research in the domains of music, language and animal song.
Affiliation(s)
- Martin Rohrmeier
- Institut für Kunst- und Musikwissenschaft, Technische Universität Dresden, August-Bebel-Straße 20, 01219 Dresden, Germany
- Willem Zuidema
- ILLC, University of Amsterdam, PO Box 94242, 1090 CE Amsterdam, The Netherlands
- Geraint A Wiggins
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London E1 4FZ, UK
- Constance Scharff
- Animal Behavior, Freie Universität Berlin, Takustraße 6, 14195 Berlin, Germany
19
Clark CN, Downey LE, Warren JD. Brain disorders and the biological role of music. Soc Cogn Affect Neurosci 2015; 10:444-52. [PMID: 24847111] [PMCID: PMC4350491] [DOI: 10.1093/scan/nsu079]
Abstract
Despite its evident universality and high social value, the ultimate biological role of music and its connection to brain disorders remain poorly understood. Recent findings from basic neuroscience have shed fresh light on these old problems. New insights provided by clinical neuroscience concerning the effects of brain disorders promise to be particularly valuable in uncovering the underlying cognitive and neural architecture of music and for assessing candidate accounts of the biological role of music. Here we advance a new model of the biological role of music in human evolution and the link to brain disorders, drawing on diverse lines of evidence derived from comparative ethology, cognitive neuropsychology and neuroimaging studies in the normal and the disordered brain. We propose that music evolved from the call signals of our hominid ancestors as a means mentally to rehearse and predict potentially costly, affectively laden social routines in surrogate, coded, low-cost form: essentially, a mechanism for transforming emotional mental states efficiently and adaptively into social signals. This biological role of music has its legacy today in the disordered processing of music and mental states that characterizes certain developmental and acquired clinical syndromes of brain network disintegration.
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
- Laura E Downey
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
- Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
20
Featherstone CR, Morrison CM, Waterman MG, MacGregor LJ. Semantics, syntax or neither? A case for resolution in the interpretation of N500 and P600 responses to harmonic incongruities. PLoS One 2013; 8:e76600. [PMID: 24223704] [PMCID: PMC3818369] [DOI: 10.1371/journal.pone.0076600]
Abstract
The processing of notes and chords which are harmonically incongruous with their context has been shown to elicit two distinct late ERP effects. These effects strongly resemble two effects associated with the processing of linguistic incongruities: a P600, resembling a typical response to syntactic incongruities in language, and an N500, evocative of the N400, which is typically elicited in response to semantic incongruities in language. Despite the robustness of these two patterns in the musical incongruity literature, no consensus has yet been reached as to the reasons for the existence of two distinct responses to harmonic incongruities. This study was the first to use behavioural and ERP data to test two possible explanations for the existence of these two patterns: the musicianship of listeners, and the resolved or unresolved nature of the harmonic incongruities. Results showed that harmonically incongruous notes and chords elicited a late positivity similar to the P600 when they were embedded within sequences which started and ended in the same key (harmonically resolved). The notes and chords which indicated that there would be no return to the original key (leaving the piece harmonically unresolved) were associated with a further P600 in musicians, but with a negativity resembling the N500 in non-musicians. We suggest that the late positivity reflects the conscious perception of a specific element as being incongruous with its context and the efforts of musicians to integrate the harmonic incongruity into its local context as a result of their analytic listening style, while the late negativity reflects the detection of the absence of resolution in non-musicians as a result of their holistic listening style.
Affiliation(s)
- Cara R Featherstone
- Institute of Psychological Sciences, University of Leeds, Leeds, United Kingdom
21
Perrachione TK, Fedorenko EG, Vinke L, Gibson E, Dilley LC. Evidence for shared cognitive processing of pitch in music and language. PLoS One 2013; 8:e73372. [PMID: 23977386] [PMCID: PMC3744486] [DOI: 10.1371/journal.pone.0073372]
Abstract
Language and music epitomize the complex representational and computational capacities of the human mind. Strikingly similar in their structural and expressive features, a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct--either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, conveying pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, that is consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.
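"Accounting for performance in all control conditions" can be read as a partial correlation: residualize the language and music pitch scores on the control scores, then correlate the residuals. The file layout and column order below are assumptions for illustration, not the study's materials.

```python
import numpy as np

def residualize(y, X):
    """Residuals of y after least-squares regression on X (with an intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Hypothetical per-participant accuracy scores: columns = language, music, three controls.
scores = np.load("pitch_scores.npy")  # placeholder file
language, music, controls = scores[:, 0], scores[:, 1], scores[:, 2:]

r_raw = np.corrcoef(language, music)[0, 1]
r_partial = np.corrcoef(residualize(language, controls),
                        residualize(music, controls))[0, 1]
print(f"language-music pitch perception: r = {r_raw:.2f}, partial r = {r_partial:.2f}")
```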
Affiliation(s)
- Tyler K. Perrachione
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Evelina G. Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Louis Vinke
- Department of Psychology, Bowling Green State University, Bowling Green, Ohio, United States of America
- Edward Gibson
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Laura C. Dilley
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, Michigan, United States of America
22
Koelsch S. Toward a neural basis of music perception - a review and updated model. Front Psychol 2011; 2:110. [PMID: 21713060] [PMCID: PMC3114071] [DOI: 10.3389/fpsyg.2011.00110]
Abstract
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch and Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located.
Affiliation(s)
- Stefan Koelsch
- Cluster of Excellence "Languages of Emotion", Freie Universität Berlin, Berlin, Germany
23
Signification and significance: Music, brain, and culture: Comment on "Towards a neural basis of processing musical semantics" by S. Koelsch. Phys Life Rev 2011; 8:122-4; discussion 125-8. [PMID: 21636333] [DOI: 10.1016/j.plrev.2011.05.007]
24
Besson M, Frey A, Aramaki M. Is the distinction between intra- and extra-musical meaning implemented in the brain? Comment on "Towards a neural basis of processing musical semantics" by Stefan Koelsch. Phys Life Rev 2011; 8:112-3; discussion 125-8. [PMID: 21622035] [DOI: 10.1016/j.plrev.2011.05.006]
Affiliation(s)
- Mireille Besson
- CNRS & Université de la Méditerranée, Institut de Neurosciences Cognitives de la Méditerranée, 31 Chemin Joseph Aiguier, 13402 Marseille Cedex 20, France
25
Davies S. Questioning the distinction between intra- and extra-musical meaning: Comment on "Towards a neural basis for processing musical semantics" by Stefan Koelsch. Phys Life Rev 2011; 8:114-5; discussion 125-8. [PMID: 21621491] [DOI: 10.1016/j.plrev.2011.05.005]
Affiliation(s)
- Stephen Davies
- Department of Philosophy, University of Auckland, Private Bag 92019, Auckland Mail Centre, Auckland 1142, New Zealand
26
Multiple varieties of musical meaning: Comment on "Towards a neural basis of processing musical semantics" by Stefan Koelsch. Phys Life Rev 2011; 8:108-9; discussion 125-8. [PMID: 21616730] [DOI: 10.1016/j.plrev.2011.05.004]
27
Slevc LR, Patel AD. Meaning in music and language: Three key differences: Comment on "Towards a neural basis of processing musical semantics" by Stefan Koelsch. Phys Life Rev 2011; 8:110-1; discussion 125-8. [PMID: 21570367] [DOI: 10.1016/j.plrev.2011.05.003]
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, MD 20742, USA