1. Tervaniemi M. The neuroscience of music – towards ecological validity. Trends Neurosci 2023;46:355-364. PMID: 37012175. DOI: 10.1016/j.tins.2023.03.001
Abstract
Studies in the neuroscience of music gained momentum in the 1990s as an integrated part of the well-controlled experimental research tradition. However, during the past two decades, these studies have moved toward more naturalistic, ecologically valid paradigms. Here, I introduce this move in three frameworks: (i) sound stimulation and empirical paradigms, (ii) study participants, and (iii) methods and contexts of data acquisition. I wish to provide a narrative historical overview of the development of the field and, in parallel, to stimulate innovative thinking to further advance the ecological validity of the studies without overlooking experimental rigor.
Affiliation(s)
- Mari Tervaniemi
- Centre of Excellence in Music, Mind, Body, and Brain, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland.
2. Moerel M, Yacoub E, Gulban OF, Lage-Castellanos A, De Martino F. Using high spatial resolution fMRI to understand representation in the auditory network. Prog Neurobiol 2021;207:101887. PMID: 32745500. PMCID: PMC7854960. DOI: 10.1016/j.pneurobio.2020.101887
Abstract
Following rapid methodological advances, ultra-high field (UHF) functional and anatomical magnetic resonance imaging (MRI) has been repeatedly and successfully used to investigate the human auditory system in recent years. Here, we review this work and argue that UHF MRI is uniquely suited to shed light on how sounds are represented throughout the network of auditory brain regions. That is, the gain in spatial resolution provided at UHF can be used to study the functional role of the small subcortical auditory processing stages and the details of cortical processing. Further, by combining high spatial resolution with the versatility of MRI contrasts, UHF MRI has the potential to localize the primary auditory cortex in individual hemispheres. This is a prerequisite for studying how sound representation in higher-level auditory cortex evolves from that in early (primary) auditory cortex. Finally, access to independent signals across auditory cortical depths, as afforded by UHF, may reveal the computations that underlie the emergence of an abstract, categorical sound representation from low-level acoustic feature processing. Efforts on these research topics are underway. Here we discuss the promises as well as the challenges of studying these questions with UHF MRI, and provide a future outlook.
Affiliation(s)
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands.
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA.
- Omer Faruk Gulban
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Brain Innovation B.V., Maastricht, the Netherlands.
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Department of NeuroInformatics, Cuban Center for Neuroscience, Cuba.
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA.
3. Raharjo I, Kothare H, Nagarajan SS, Houde JF. Speech compensation responses and sensorimotor adaptation to formant feedback perturbations. J Acoust Soc Am 2021;149:1147. PMID: 33639824. PMCID: PMC7892200. DOI: 10.1121/10.0003440
Abstract
Control of speech formants is important for the production of distinguishable speech sounds and is achieved with both feedback and learned feedforward control. However, it is unclear whether the learning of feedforward control involves the mechanisms of feedback control. Speakers have been shown to compensate for unpredictable transient mid-utterance perturbations of pitch and loudness feedback, demonstrating online feedback control of these speech features. To determine whether similar feedback control mechanisms exist in the production of formants, responses to unpredictable vowel formant feedback perturbations were examined. Results showed similar within-trial compensatory responses to formant perturbations presented at utterance onset and mid-utterance. The relationship between online feedback compensation to unpredictable formant perturbations and sensorimotor adaptation to consistent formant perturbations was then examined. Within-trial online compensation responses were not correlated with across-trial sensorimotor adaptation. A detailed analysis of within-trial time course dynamics across trials during sensorimotor adaptation revealed that across-trial adaptation responses did not result from incorporation of the within-trial compensation response. These findings suggest that online feedback compensation and sensorimotor adaptation are governed by distinct neural mechanisms, with important implications for how feedback and feedforward control are implemented in models of speech motor control.
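The kind of analysis this abstract describes can be sketched with synthetic data (this is a minimal illustration, not the authors' pipeline; the function and variable names here are hypothetical): a within-trial compensation response is the produced-formant deviation that opposes the feedback shift, expressed as a fraction of the perturbation, and a per-speaker correlation can then test whether compensation and adaptation covary.

```python
import numpy as np

rng = np.random.default_rng(0)

def compensation_response(produced_f1, baseline_f1, shift_hz):
    """Within-trial compensation: mean produced-F1 change opposing the
    feedback shift, as a fraction of the applied perturbation."""
    deviation = produced_f1 - baseline_f1       # Hz, per time point
    # Compensation opposes the shift, so flip its sign relative to it
    return -np.mean(deviation) / shift_hz

# Synthetic trial: feedback F1 shifted up by +100 Hz; the speaker
# partially lowers produced F1 (roughly 25% compensation plus noise).
baseline = 700.0
shift = 100.0
produced = baseline - 25.0 + rng.normal(0.0, 2.0, size=50)
c = compensation_response(produced, baseline, shift)

# Across-trial adaptation can be summarized per speaker analogously
# (e.g., mean produced-F1 change over late adaptation trials); a simple
# Pearson correlation then asks whether the two measures are related.
comp = rng.normal(0.25, 0.05, size=20)    # per-speaker compensation
adapt = rng.normal(0.40, 0.10, size=20)   # per-speaker adaptation
r = np.corrcoef(comp, adapt)[0, 1]
```

A near-zero `r` over real speakers would correspond to the paper's finding that the two responses are uncorrelated, consistent with distinct underlying mechanisms.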
Affiliation(s)
- Inez Raharjo
- University of California, Berkeley and University of California, San Francisco, Graduate Program in Bioengineering
- Hardik Kothare
- University of California, Berkeley and University of California, San Francisco, Graduate Program in Bioengineering
- Srikantan S Nagarajan
- Biomagnetic Imaging Laboratory, Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California 94143, USA
- John F Houde
- Speech Neuroscience Laboratory, Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, California 94143, USA
4. Lecaignard F, Bertrand O, Caclin A, Mattout J. Empirical Bayes evaluation of fused EEG-MEG source reconstruction: application to auditory mismatch evoked responses. Neuroimage 2020;226:117468. PMID: 33075561. DOI: 10.1016/j.neuroimage.2020.117468
Abstract
We here turn the general and theoretical question of the complementarity of EEG and MEG for source reconstruction into a practical empirical one. Specifically, we address the challenge of evaluating multimodal data fusion on real data. For this purpose, we build on the flexibility of Parametric Empirical Bayes, namely for EEG-MEG data fusion, group-level inference, and formal hypothesis testing. The proposed approach follows a two-step procedure: first, unimodal or multimodal inference is used to derive a cortical solution at the group level; second, this solution serves as a prior model for single-subject inference based on either unimodal or multimodal data. Interestingly, for inference based on the same data (EEG, MEG, or both), one can then formally compare, as alternative hypotheses, the relative plausibility of the two unimodal group priors and the multimodal one. Using auditory data, we show that this approach enables us to draw important conclusions, namely on (i) the superiority of multimodal inference, (ii) the greater spatial sensitivity of MEG compared with EEG, (iii) the ability of EEG data alone to source-reconstruct temporal lobe activity, and (iv) the usefulness of EEG in improving MEG-based source reconstruction. Importantly, we largely reproduce these findings across two different experimental conditions. We focused on mismatch negativity (MMN) responses, whose generators have been extensively investigated with little homogeneity in the reported results. Our multimodal inference at the group level revealed spatio-temporal activity within the supratemporal plane with a precision that, to our knowledge, has never before been achieved with non-invasive recordings.
Affiliation(s)
- Françoise Lecaignard
- Lyon Neuroscience Research Center, CRNL; INSERM, U1028; CNRS, UMR5292; Brain Dynamics and Cognition Team, Lyon, F-69000, France; University Lyon 1, Lyon, F-69000, France.
- Olivier Bertrand
- Lyon Neuroscience Research Center, CRNL; INSERM, U1028; CNRS, UMR5292; Brain Dynamics and Cognition Team, Lyon, F-69000, France; University Lyon 1, Lyon, F-69000, France.
- Anne Caclin
- Lyon Neuroscience Research Center, CRNL; INSERM, U1028; CNRS, UMR5292; Brain Dynamics and Cognition Team, Lyon, F-69000, France; University Lyon 1, Lyon, F-69000, France.
- Jérémie Mattout
- Lyon Neuroscience Research Center, CRNL; INSERM, U1028; CNRS, UMR5292; Brain Dynamics and Cognition Team, Lyon, F-69000, France; University Lyon 1, Lyon, F-69000, France.
5. Niesen M, Vander Ghinst M, Bourguignon M, Wens V, Bertels J, Goldman S, Choufani G, Hassid S, De Tiège X. Tracking the effects of top-down attention on word discrimination using frequency-tagged neuromagnetic responses. J Cogn Neurosci 2020;32:877-888. PMID: 31933439. DOI: 10.1162/jocn_a_01522
Abstract
Discrimination of words from nonspeech sounds is essential in communication. Still, how selective attention can influence this early step of speech processing remains elusive. To answer that question, brain activity was recorded with magnetoencephalography in 12 healthy adults while they listened to two sequences of auditory stimuli presented at 2.17 Hz, consisting of successions of one randomized word (tagging frequency = 0.54 Hz) and three acoustically matched nonverbal stimuli. Participants were instructed to focus their attention on the occurrence of a predefined word in the verbal attention condition and on a nonverbal stimulus in the nonverbal attention condition. Steady-state neuromagnetic responses were identified with spectral analysis at sensor and source levels. Significant sensor responses peaked at 0.54 and 2.17 Hz in both conditions. Sources at 0.54 Hz were reconstructed in supratemporal auditory cortex, left superior temporal gyrus (STG), left middle temporal gyrus, and left inferior frontal gyrus. Sources at 2.17 Hz were reconstructed in supratemporal auditory cortex and STG. Crucially, source strength in the left STG at 0.54 Hz was significantly higher in verbal attention than in nonverbal attention condition. This study demonstrates speech-sensitive responses at primary auditory and speech-related neocortical areas. Critically, it highlights that, during word discrimination, top-down attention modulates activity within the left STG. This area therefore appears to play a crucial role in selective verbal attentional processes for this early step of speech processing.
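The frequency-tagging logic of this design can be sketched with a toy signal (all parameters and names here are illustrative, not the study's actual analysis): responses locked to the stimulus stream appear as narrow spectral peaks at the base stimulation rate (2.17 Hz) and at the word-tagging rate (0.54 Hz), well above the noise floor.

```python
import numpy as np

fs = 100.0                      # sampling rate (Hz)
duration = 200.0                # long window -> 0.005 Hz resolution
t = np.arange(0.0, duration, 1.0 / fs)
rng = np.random.default_rng(1)

# Toy sensor signal: steady-state responses at the stimulus rate
# (2.17 Hz) and the word-tagging rate (0.54 Hz), buried in noise.
signal = (0.8 * np.sin(2 * np.pi * 2.17 * t)
          + 0.5 * np.sin(2 * np.pi * 0.54 * t)
          + rng.normal(0.0, 1.0, t.size))

# Amplitude spectrum; both tag frequencies fall on exact FFT bins
# because the recording length is a multiple of their periods.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def peak_amplitude(f_target, half_bw=0.02):
    """Largest amplitude in a narrow band around the target frequency."""
    band = (freqs >= f_target - half_bw) & (freqs <= f_target + half_bw)
    return spectrum[band].max()

word_peak = peak_amplitude(0.54)    # word-discrimination response
stim_peak = peak_amplitude(2.17)    # base auditory response
noise_floor = np.median(spectrum[(freqs > 3.0) & (freqs < 10.0)])
```

In the study itself, the attentional effect was tested by comparing the 0.54 Hz source strength between the verbal and nonverbal attention conditions; the sketch only shows how tagged responses separate from noise in the spectrum.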
6. Manca AD, Grimaldi M. Vowels and consonants in the brain: evidence from magnetoencephalographic studies on the N1m in normal-hearing listeners. Front Psychol 2016;7:1413. PMID: 27713712. PMCID: PMC5031792. DOI: 10.3389/fpsyg.2016.01413
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves a mapping from continuous acoustic waveforms onto the discrete phonological units used to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons involved in the N1m construct a sensory memory of the stimulus through spatially and temporally distributed activation patterns within the auditory cortex. Indeed, localization of auditory field maps in animals and humans suggests two levels of sound coding: a tonotopy dimension for spectral properties and a tonochrony dimension for temporal properties of sounds. When the stimulus is a complex speech sound, tonotopy and tonochrony data may provide important information for assessing whether speech sound parsing and decoding are generated by pure bottom-up reflection of acoustic differences or are additionally affected by top-down processes related to phonological categories. Hints supporting pure bottom-up processing coexist with hints supporting top-down abstract phoneme representation. At present, N1m data (amplitude, latency, source generators, and hemispheric distribution) are limited and do not resolve the issue; the nature of these limitations is discussed. Moreover, neurophysiological studies on animals and neuroimaging studies on humans are taken into consideration. We also compare the N1m findings with investigations of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that the N1 seems more sensitive than the N1m for capturing lateralization and hierarchical processes, although the data are very preliminary. Finally, we suggest that MEG data should be integrated with EEG data in light of the neural oscillations framework, and we raise some concerns that should be addressed by future investigations if we want to closely align language research with issues at the core of functional brain mechanisms.
Affiliation(s)
- Anna Dora Manca
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
- Mirko Grimaldi
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
7. Benítez-Burraco A, Murphy E. The oscillopathic nature of language deficits in autism: from genes to language evolution. Front Hum Neurosci 2016;10:120. PMID: 27047363. PMCID: PMC4796018. DOI: 10.3389/fnhum.2016.00120
Abstract
Autism spectrum disorders (ASD) are pervasive neurodevelopmental disorders involving a number of deficits in linguistic cognition. The gap between the genetics and the pathophysiology of ASD remains open, in particular regarding its distinctive linguistic profile. The goal of this article is to attempt to bridge this gap, focusing on how the autistic brain processes language, particularly through the perspective of brain rhythms. Because of the phenomenon of pleiotropy, which may take some decades to overcome, we believe that studies of brain rhythms, which do not face problems of this scale, may constitute a more tractable route to interpreting language deficits in ASD and, eventually, other neurocognitive disorders. Building on recent attempts to link neural oscillations to certain computational primitives of language, we show that interpreting language deficits in ASD as oscillopathic traits is a potentially fruitful way to construct successful endophenotypes of this condition. Additionally, we show that candidate genes for ASD are overrepresented among the genes that played a role in the evolution of language; these genes include (and are related to) genes involved in brain rhythmicity. We hope that steps of this kind will also lead to a better understanding of the comorbidity, heterogeneity, and variability of ASD, and may help improve treatment of the affected populations.
Affiliation(s)
- Elliot Murphy
- Division of Psychology and Language Sciences, University College London, London, UK