1
Correa JP. Cross-Modal Musical Expectancy in Complex Sound Music: A Grounded Theory. J Cogn 2023; 6:33. PMID: 37426063; PMCID: PMC10327858; DOI: 10.5334/joc.281.
Abstract
Expectancy is a core mechanism for constructing affective and cognitive experiences of music. However, research on musical expectations has been largely founded upon the perception of tonal music, so it remains to be determined how this mechanism explains the cognition of sound-based acoustic and electroacoustic music, such as complex sound music (CSM). Additionally, the dominant methodologies have consisted of well-controlled experimental designs with low ecological validity that have overlooked the listening experience as described by the listeners themselves. This paper presents results concerning musical expectancy from a qualitative research project that investigated the listening experiences of 15 participants accustomed to CSM listening. Corbin and Strauss's (2015) grounded theory was used to triangulate data from interviews with musical analyses of the pieces the participants chose to describe their listening experiences. Cross-modal musical expectancy (CMME) emerged from the data as a subcategory that explained prediction through the interaction of multimodal elements beyond just the acoustic properties of music. The results led to the hypothesis that multimodal information coming from sounds, performance gestures, and indexical, iconic, and conceptual associations re-enacts cross-modal schemata and episodic memories in which real and imagined sounds, objects, actions, and narratives interrelate to give rise to CMME processes. This construct emphasises the effect of CSM's subversive acoustic features and performance practices on the listening experience. Further, it reveals the multiplicity of factors involved in musical expectancy, such as cultural values, subjective musical and non-musical experiences, music structure, listening situation, and psychological mechanisms. Following these ideas, CMME is conceived as a grounded cognition process.
2
Giordano BL, Esposito M, Valente G, Formisano E. Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds. Nat Neurosci 2023; 26:664-672. PMID: 36928634; PMCID: PMC10076214; DOI: 10.1038/s41593-023-01285-9.
Abstract
Recognizing sounds implicates the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploited a model comparison framework and contrasted the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similarly to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
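The model-comparison logic described in this abstract — scoring competing feature models on how well they predict brain responses — can be illustrated with a minimal sketch. This is not the authors' pipeline: the "acoustic" and "DNN" feature sets, the single simulated voxel, and the `ridge_cv_r2` helper are all invented for the example, and cross-validated ridge regression stands in for whatever encoding model a given study uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 60 natural sounds, two candidate feature
# models, and one voxel's fMRI response per sound.
n_sounds = 60
acoustic = rng.normal(size=(n_sounds, 12))   # e.g. spectrotemporal features
dnn = rng.normal(size=(n_sounds, 20))        # e.g. a sound-to-event DNN layer

# The simulated voxel is built from the DNN features, so the DNN model
# should win the comparison on held-out sounds.
w_true = rng.normal(size=20)
voxel = dnn @ w_true + rng.normal(scale=0.5, size=n_sounds)

def ridge_cv_r2(X, y, alpha=1.0, n_folds=5):
    """Cross-validated R^2 of a ridge model predicting y from X."""
    folds = np.array_split(np.arange(len(y)), n_folds)
    ss_res, ss_tot = 0.0, 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
        w = np.linalg.solve(X[train].T @ X[train] + alpha * np.eye(X.shape[1]),
                            X[train].T @ y[train])
        ss_res += np.sum((y[test] - X[test] @ w) ** 2)
        ss_tot += np.sum((y[test] - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("acoustic model R^2:", round(ridge_cv_r2(acoustic, voxel), 3))
print("DNN model R^2:     ", round(ridge_cv_r2(dnn, voxel), 3))
```

Because the ground truth was generated from the DNN features, the DNN model's held-out R^2 comes out far higher, mirroring the kind of evidence the study reports for STG responses.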
Affiliation(s)
- Bruno L Giordano
- Institut de Neurosciences de La Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France.
- Michele Esposito
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Faculty of Science and Engineering, Maastricht University, Maastricht, the Netherlands; Brightlands Institute for Smart Society (BISS), Maastricht University, Maastricht, the Netherlands.
4
Ogg M, Moraczewski D, Kuchinsky SE, Slevc LR. Separable neural representations of sound sources: Speaker identity and musical timbre. Neuroimage 2019; 191:116-126. PMID: 30731247; DOI: 10.1016/j.neuroimage.2019.01.075.
Abstract
Human listeners can quickly and easily recognize different sound sources (objects and events) in their environment. Understanding how this impressive ability is accomplished can improve signal processing and machine intelligence applications along with assistive listening technologies. However, it is not clear how the brain represents the many sounds that humans can recognize (such as speech and music) at the level of individual sources, categories and acoustic features. To examine the cortical organization of these representations, we used patterns of fMRI responses to decode 1) four individual speakers and instruments from one another (separately, within each category), 2) the superordinate category labels associated with each stimulus (speech or instrument), and 3) a set of simple synthesized sounds that could be differentiated entirely on their acoustic features. Data were collected using an interleaved silent steady state sequence to increase the temporal signal-to-noise ratio, and mitigate issues with auditory stimulus presentation in fMRI. Largely separable clusters of voxels in the temporal lobes supported the decoding of individual speakers and instruments from other stimuli in the same category. Decoding the superordinate category of each sound was more accurate and involved a larger portion of the temporal lobes. However, these clusters all overlapped with areas that could decode simple, acoustically separable stimuli. Thus, individual sound sources from different sound categories are represented in separate regions of the temporal lobes that are situated within regions implicated in more general acoustic processes. These results bridge an important gap in our understanding of cortical representations of sounds and their acoustics.
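The pattern-decoding approach this abstract relies on can be sketched in a heavily simplified form: train a classifier on multi-voxel response patterns and test whether held-out trials' categories can be predicted above chance. The data, the two-category structure, and the `loo_nearest_centroid` helper below are hypothetical stand-ins, not the study's actual analysis (which used real fMRI data and more stimulus conditions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 40 trials x 50 voxels, two sound categories
# (0 = "speech", 1 = "instrument") with distinct mean activation patterns.
n_per_class, n_vox = 20, 50
mu = rng.normal(size=(2, n_vox))            # category-specific mean patterns
X = np.vstack([mu[c] + rng.normal(scale=1.5, size=(n_per_class, n_vox))
               for c in (0, 1)])
y = np.repeat([0, 1], n_per_class)

def loo_nearest_centroid(X, y):
    """Leave-one-out accuracy of a nearest-centroid pattern classifier."""
    hits = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i       # hold out trial i
        cents = [X[keep & (y == c)].mean(axis=0) for c in (0, 1)]
        d = [np.linalg.norm(X[i] - m) for m in cents]
        hits += int(np.argmin(d) == y[i])
    return hits / len(y)

acc = loo_nearest_centroid(X, y)
print("decoding accuracy:", acc)  # well above the 0.5 chance level here
```

If category information is present in the voxel patterns, accuracy exceeds chance; in real analyses the same logic is applied within searchlights or regions of interest to localize where each distinction is decodable.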
Affiliation(s)
- Mattson Ogg
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Department of Psychology, University of Maryland, College Park, MD, 20742, USA.
- Dustin Moraczewski
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Department of Psychology, University of Maryland, College Park, MD, 20742, USA
- Stefanie E Kuchinsky
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Center for Advanced Study of Language, University of Maryland, College Park, MD, 20742, USA; Maryland Neuroimaging Center, University of Maryland, College Park, MD, 20742, USA
- L Robert Slevc
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, 20742, USA; Department of Psychology, University of Maryland, College Park, MD, 20742, USA
5
Xu Y, Wang X, Wang X, Men W, Gao JH, Bi Y. Doctor, Teacher, and Stethoscope: Neural Representation of Different Types of Semantic Relations. J Neurosci 2018; 38:3303-3317. PMID: 29476016; PMCID: PMC6596060; DOI: 10.1523/JNEUROSCI.2562-17.2018.
Abstract
Concepts can be related in many ways. They can belong to the same taxonomic category (e.g., "doctor" and "teacher," both in the category of people) or be associated with the same event context (e.g., "doctor" and "stethoscope," both associated with medical scenarios). How are these two major types of semantic relations coded in the brain? We constructed stimuli from three taxonomic categories (people, manmade objects, and locations) and three thematic categories (school, medicine, and sports) and investigated the neural representations of these two dimensions using representational similarity analyses in human participants (10 men and nine women). In specific regions of interest, the left anterior temporal lobe (ATL) and the left temporoparietal junction (TPJ), we found that, whereas both areas had significant effects of taxonomic information, the taxonomic relations had stronger effects in the ATL than in the TPJ ("doctor" and "teacher" closer in ATL neural activity), with the reverse being true for thematic relations ("doctor" and "stethoscope" closer in TPJ neural activity). A whole-brain searchlight analysis revealed that widely distributed regions, mainly in the left hemisphere, represented the taxonomic dimension. Interestingly, the significant effects of the thematic relations were only observed after the taxonomic differences were controlled for in the left TPJ, the right superior lateral occipital cortex, and other frontal, temporal, and parietal regions. In summary, taxonomic grouping is a primary organizational dimension across distributed brain regions, with thematic grouping further embedded within such taxonomic structures.

SIGNIFICANCE STATEMENT: How are concepts organized in the brain? It is well established that concepts belonging to the same taxonomic categories (e.g., "doctor" and "teacher") share neural representations in specific brain regions. How concepts are associated in other manners (e.g., "doctor" and "stethoscope," which are thematically related) remains poorly understood. We used representational similarity analyses to unravel the neural representations of these different types of semantic relations by testing the same set of words that could be differently grouped by taxonomic categories or by thematic categories. We found that widely distributed brain areas primarily represented taxonomic categories, with the thematic categories further embedded within the taxonomic structure.
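The representational similarity analysis (RSA) logic in this abstract — correlating a neural dissimilarity matrix with taxonomic and thematic model matrices — can be illustrated with a toy simulation. The 3 × 3 stimulus design mirrors the paper's, but the simulated "ATL-like" patterns, the noise level, and the helper functions are invented for the sketch, and Pearson correlation on the RDM lower triangles stands in for the paper's exact comparison statistic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Nine hypothetical word stimuli crossing 3 taxonomic categories
# (people, objects, locations) with 3 thematic categories
# (school, medicine, sports).
tax = np.repeat([0, 1, 2], 3)   # taxonomic label per item
thm = np.tile([0, 1, 2], 3)     # thematic label per item

def model_rdm(labels):
    """Model dissimilarity: 0 if two items share the label, 1 otherwise."""
    return (labels[:, None] != labels[None, :]).astype(float)

# Simulated "ATL-like" voxel patterns, driven by taxonomic category.
patterns = (rng.normal(size=(3, 30))[tax]
            + rng.normal(scale=0.4, size=(9, 30)))
neural_rdm = 1.0 - np.corrcoef(patterns)   # correlation distance

def rdm_similarity(a, b):
    """Pearson correlation between the lower triangles of two RDMs."""
    idx = np.tril_indices(a.shape[0], k=-1)
    return np.corrcoef(a[idx], b[idx])[0, 1]

print("fit to taxonomic model:",
      round(rdm_similarity(neural_rdm, model_rdm(tax)), 3))
print("fit to thematic model: ",
      round(rdm_similarity(neural_rdm, model_rdm(thm)), 3))
```

Because the simulated patterns cluster by taxonomic category, the taxonomic model RDM fits the neural RDM far better than the orthogonal thematic model, which is the kind of region-wise contrast the study reports for ATL versus TPJ.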
Affiliation(s)
- Yangwen Xu
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China, 100875
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China, 100875
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China, 100875
- Xiaosha Wang
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China, 100875
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China, 100875
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China, 100875
- Xiaoying Wang
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China, 100875
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China, 100875
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China, 100875
- Weiwei Men
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China, 100871
- Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China, 100871
- Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China, 100871
- Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China, 100871
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, China, 100871
- Yanchao Bi
- National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China, 100875
- IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China, 100875
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China, 100875