1
Haiduk F, Zatorre RJ, Benjamin L, Morillon B, Albouy P. Spectrotemporal cues and attention jointly modulate fMRI network topology for sentence and melody perception. Sci Rep 2024; 14:5501. PMID: 38448636; PMCID: PMC10917817; DOI: 10.1038/s41598-024-56139-6.
Abstract
Speech and music are two fundamental modes of human communication. Lateralisation of key processes underlying their perception has been related both to the distinct sensitivity to low-level spectrotemporal acoustic features and to top-down attention. However, the interplay between bottom-up and top-down processes needs to be clarified. In the present study, we investigated the contribution of acoustics and attention to melodies or sentences to lateralisation in fMRI functional network topology. We used sung speech stimuli selectively filtered in temporal or spectral modulation domains with crossed and balanced verbal and melodic content. Perception of speech decreased with degradation of temporal information, whereas perception of melodies decreased with spectral degradation. Applying graph theoretical metrics on fMRI connectivity matrices, we found that local clustering, reflecting functional specialisation, linearly increased when spectral or temporal cues crucial for the task goal were incrementally degraded. These effects occurred in a bilateral fronto-temporo-parietal network for processing temporally degraded sentences and in right auditory regions for processing spectrally degraded melodies. In contrast, global topology remained stable across conditions. These findings suggest that lateralisation for speech and music partially depends on an interplay of acoustic cues and task goals under increased attentional demands.
Affiliation(s)
- Felix Haiduk
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- Department of General Psychology, University of Padua, Padua, Italy
- Robert J Zatorre
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - CRBLM, Montreal, QC, Canada
- Lucas Benjamin
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, CEA, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France
- Benjamin Morillon
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Philippe Albouy
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - CRBLM, Montreal, QC, Canada
- CERVO Brain Research Centre, School of Psychology, Laval University, Quebec, QC, Canada
2
Durojaye C, Fink L, Roeske T, Wald-Fuhrmann M, Larrouy-Maestri P. Perception of Nigerian Dùndún Talking Drum Performances as Speech-Like vs. Music-Like: The Role of Familiarity and Acoustic Cues. Front Psychol 2021; 12:652673. PMID: 34093341; PMCID: PMC8173200; DOI: 10.3389/fpsyg.2021.652673.
Abstract
It seems trivial to identify sound sequences as music or speech, particularly when the sequences come from different sound sources, such as an orchestra and a human voice. Can we also easily distinguish these categories when the sequences come from the same sound source? And if so, on the basis of which acoustic features? We investigated these questions by examining listeners' classification of sound sequences performed by an instrument intertwining both speech and music: the dùndún talking drum. The dùndún is commonly used in south-west Nigeria as a musical instrument but is also well suited to linguistic use, in what has been described as a speech surrogate in Africa. One hundred seven participants from diverse geographical locations (15 different mother tongues represented) took part in an online experiment. Fifty-one participants reported being familiar with the dùndún talking drum, 55% of those being speakers of Yorùbá. During the experiment, participants listened to 30 dùndún samples about 7 s long, performed either as music or as Yorùbá speech surrogate (n = 15 each) by a professional musician, and were asked to classify each sample as music or speech-like. The classification task revealed the ability of the listeners to identify the samples as intended by the performer, particularly when they were familiar with the dùndún, though even unfamiliar participants performed above chance. A logistic regression predicting participants' classification of the samples from several acoustic features confirmed the perceptual relevance of intensity, pitch, timbre, and timing measures and their interaction with listener familiarity. In all, this study provides empirical evidence supporting the discriminating role of acoustic features and the modulatory role of familiarity in teasing apart speech and music.
Affiliation(s)
- Cecilia Durojaye
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology, Arizona State University, Tempe, AZ, United States
- Lauren Fink
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck-NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany
- Tina Roeske
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Melanie Wald-Fuhrmann
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck-NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany
- Pauline Larrouy-Maestri
- Max Planck-NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany
- Neuroscience Department, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
3
Sadakata M, Weidema JL, Honing H. Parallel pitch processing in speech and melody: A study of the interference of musical melody on lexical pitch perception in speakers of Mandarin. PLoS One 2020; 15:e0229109. PMID: 32130244; PMCID: PMC7055904; DOI: 10.1371/journal.pone.0229109.
Abstract
Music and language have long been considered two distinct cognitive faculties governed by domain-specific cognitive and neural mechanisms. Recent work on the domain-specificity of pitch processing, however, suggests that pitch processing in both domains is governed by shared neural mechanisms. The current study explored the domain-specificity of pitch processing by simultaneously presenting pitch contours in speech and music to speakers of a tonal language and measuring behavioural responses and event-related potentials (ERPs). Native speakers of Mandarin were exposed to concurrent pitch contours in melody and speech. Contours in the melodies emulated those in speech and were either congruent or incongruent with the pitch contour of the lexical tone (i.e., rising or falling). Component magnitudes of the N2b and N400 were used as indices of lexical processing. We found that the N2b was modulated by melodic pitch: incongruent items evoked significantly stronger amplitudes. There was a trend for the N400 to be modulated in the same way. Interestingly, these effects were present only on rising tones. The amplitude and time course of the N2b and N400 suggest that melodic pitch contours interfere with both early and late stages of phonological and semantic processing.
Affiliation(s)
- Makiko Sadakata
- Institute for Logic, Language and Computation, Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
- Joey L. Weidema
- Institute for Logic, Language and Computation, Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Henkjan Honing
- Institute for Logic, Language and Computation, Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
4
Dawson C, Tervaniemi M, Aalto D. Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers. PLoS One 2018; 13:e0190793. PMID: 29300756; PMCID: PMC5754139; DOI: 10.1371/journal.pone.0190793.
Abstract
Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin-speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration, as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level, or in subcortical frequency representation of a nonnative tone contour, for f0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers.
Affiliation(s)
- Caitlin Dawson
- Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Mari Tervaniemi
- CICERO Learning Network, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
- Daniel Aalto
- Communication Sciences and Disorders, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, Canada
- Institute for Reconstructive Sciences in Medicine, Misericordia Community Hospital, Edmonton, Canada
5
Ravignani A, Honing H, Kotz SA. Editorial: The Evolution of Rhythm Cognition: Timing in Music and Speech. Front Hum Neurosci 2017; 11:303. PMID: 28659775; PMCID: PMC5468413; DOI: 10.3389/fnhum.2017.00303.
Affiliation(s)
- Andrea Ravignani
- Veterinary and Research Department, Sealcentre Pieterburen, Pieterburen, Netherlands
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Henkjan Honing
- Music Cognition Group, Amsterdam Brain and Cognition, Institute for Logic, Language, and Computation, University of Amsterdam, Amsterdam, Netherlands
- Sonja A Kotz
- Basic and Applied NeuroDynamics Lab, Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
6
Bidelman GM, Walker BS. Attentional modulation and domain-specificity underlying the neural organization of auditory categorical perception. Eur J Neurosci 2017; 45:690-699. DOI: 10.1111/ejn.13526.
Affiliation(s)
- Gavin M. Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- School of Communication Sciences & Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA
- Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
- Breya S. Walker
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Department of Psychology, University of Memphis, Memphis, TN, USA