1. Kumar U, Pandey HR, Dhanik K, Padakannaya P. Neural correlates of auditory comprehension and integration of Sanskrit verse: a functional MRI study. Brain Struct Funct 2025; 230:28. [PMID: 39786634] [DOI: 10.1007/s00429-025-02892-x]
Abstract
In this investigation, we delve into the neural underpinnings of auditory processing of Sanskrit verse comprehension, an area not previously explored by neuroscientific research. Our study examines a diverse group of 44 bilingual individuals, including both proficient and non-proficient Sanskrit speakers, to uncover the intricate neural patterns involved in processing verses of this ancient language. Employing an integrated neuroimaging approach that combines functional connectivity-multivariate pattern analysis (fc-MVPA), voxel-based univariate analysis, seed-based connectivity analysis, and the use of sparse fMRI techniques to minimize the interference of scanner noise, we highlight the brain's adaptability and ability to integrate multiple types of information. Our findings from fc-MVPA reveal distinct connectivity patterns in proficient Sanskrit speakers, particularly involving the bilateral inferior temporal, left middle temporal, bilateral orbitofrontal, and bilateral occipital pole. Voxel-based univariate analysis showed significant activation in the right middle frontal gyrus, bilateral caudate nuclei, bilateral middle occipital gyri, left lingual gyrus, bilateral inferior parietal lobules, and bilateral inferior frontal gyri. Seed-based connectivity analysis further emphasizes the interconnected nature of the neural networks involved in language processing, demonstrating how these regions collaborate to support complex linguistic tasks. This research reveals how the brain processes the complex syntactic and semantic elements of Sanskrit verse. Findings indicate that proficient speakers effectively navigate intricate syntactic structures and semantic associations, engaging multiple brain regions in coordination. By examining the cognitive mechanisms underlying Sanskrit verse comprehension, which shares rhythmic and structural features with music and poetry, this study highlights the neural connections between language, culture, and cognition.
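As a rough illustration of the seed-based connectivity step mentioned above, the sketch below correlates a seed region's mean time course with every voxel in a preprocessed BOLD matrix. The array shapes, seed definition, and variable names are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of seed-based functional connectivity, assuming `bold` is a
# preprocessed array of shape (n_timepoints, n_voxels) and `seed_voxels` lists
# voxel indices of the seed region (both hypothetical placeholders).
import numpy as np

def seed_connectivity(bold: np.ndarray, seed_voxels: list[int]) -> np.ndarray:
    """Correlate the mean seed time course with every voxel's time course."""
    seed_ts = bold[:, seed_voxels].mean(axis=1)            # average seed signal
    seed_ts = (seed_ts - seed_ts.mean()) / seed_ts.std()   # standardize seed
    z = (bold - bold.mean(axis=0)) / bold.std(axis=0)      # z-score each voxel
    r = z.T @ seed_ts / len(seed_ts)                       # Pearson r per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))     # Fisher z for group stats
```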
Affiliation(s)
- Uttam Kumar
- Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh, 226014, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Himanshu Raj Pandey
- Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh, 226014, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
- Kalpana Dhanik
- Centre of Bio-Medical Research, Sanjay Gandhi Postgraduate Institute of Medical Sciences Campus, Lucknow, Uttar Pradesh, 226014, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, India
2. Igamberdiev AU. Reflexive neural circuits and the origin of language and music codes. Biosystems 2024; 246:105346. [PMID: 39349135] [DOI: 10.1016/j.biosystems.2024.105346]
Abstract
Conscious activity is grounded in reflexive self-awareness in sense perception, through which the codes signifying sensory perceptual events operate and constrain human behavior. These codes grow via the creative generation of hypertextual statements. We apply the model of Vladimir Lefebvre (Lefebvre, V.A., 1987, J. Soc. Biol. Struct. 10, 129-175) to reveal the underlying structures on which the perception and creative development of language and music codes are based. According to this model, the reflexive structure of the conscious subject is grounded in three thermodynamic cycles: the second cycle controls the basic functional cycle, producing an internal action that is in turn perceived and evaluated by the third cycle. Within this arrangement, generative language structures are formed and the frequencies of sounds that form musical phrases and patterns are selected. We discuss the participation of certain neural brain structures and the establishment of reflexive neural circuits in the ad hoc transformation of perceptual signals, and show the similarities between the processes of perception and those of biological self-maintenance and morphogenesis. We trace the peculiarities of the temporal encoding of emotions in music and musical creativity, as well as the principles of sharing musical information between performing and perceiving individuals.
Affiliation(s)
- Abir U Igamberdiev
- Department of Biology, Memorial University of Newfoundland, St. John's, NL A1C 5S7, Canada
3. Sun M, Xing W, Yu W, Slevc LR, Li W. ERP evidence for cross-domain prosodic priming from music to speech. Brain Lang 2024; 254:105439. [PMID: 38945108] [DOI: 10.1016/j.bandl.2024.105439]
Abstract
Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries and we asked participants to judge whether the prime and target have the same structure. Within musical phrases, prosodic boundaries elicited reduced N1 and enhanced P2 components (relative to the no-boundary condition) and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS compared to non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.
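As a rough illustration of the ERP logic described above, the sketch below averages single-trial epochs per condition and takes a boundary-minus-no-boundary difference in a late time window, the kind of quantification used for CPS-like effects. The array shapes, time window, and sampling rate are assumptions, not the study's parameters.

```python
# Hedged sketch: quantifying a CPS-like boundary effect from epoched EEG.
# `epochs_boundary` / `epochs_no_boundary` are assumed arrays of shape
# (n_trials, n_channels, n_samples), baseline-corrected, with epochs starting
# at `tmin` seconds relative to the boundary position.
import numpy as np

def mean_amplitude(epochs: np.ndarray, sfreq: float, tmin: float,
                   t_start: float, t_end: float) -> np.ndarray:
    """Trial-average ERP, then mean amplitude per channel in [t_start, t_end]."""
    erp = epochs.mean(axis=0)                       # (n_channels, n_samples)
    i0 = int(round((t_start - tmin) * sfreq))
    i1 = int(round((t_end - tmin) * sfreq))
    return erp[:, i0:i1].mean(axis=1)

# Example with hypothetical data: boundary minus no-boundary difference in a
# 500-800 ms window, 500 Hz sampling, -200 ms baseline.
# cps = (mean_amplitude(epochs_boundary, 500.0, -0.2, 0.5, 0.8)
#        - mean_amplitude(epochs_no_boundary, 500.0, -0.2, 0.5, 0.8))
```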
Affiliation(s)
- Mingjiang Sun
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Weijing Xing
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenjing Yu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, MD, USA
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
4. Mizrahi T, Axelrod V. Naturalistic auditory stimuli with fNIRS prefrontal cortex imaging: A potential paradigm for disorder of consciousness diagnostics (a study with healthy participants). Neuropsychologia 2023; 187:108604. [PMID: 37271305] [DOI: 10.1016/j.neuropsychologia.2023.108604]
Abstract
Disorder of consciousness (DOC) is a devastating condition due to brain damage. A patient in this condition is non-responsive, but nevertheless might be conscious at least at some level. Determining the conscious level of DOC patients is important for both medical and ethical reasons, but reliably achieving this has been a major challenge. Naturalistic stimuli in combination with neuroimaging have been proposed as a promising approach for DOC patient diagnosis. Capitalizing on and extending this proposal, the goal of the present study conducted with healthy participants was to develop a new paradigm with naturalistic auditory stimuli and functional near-infrared spectroscopy (fNIRS) - an approach that can be used at the bedside. Twenty-four healthy participants passively listened to 9 min of auditory story, scrambled auditory story, classical music, and scrambled classical music segments while their prefrontal cortex activity was recorded using fNIRS. We found much higher intersubject correlation (ISC) during story compared to scrambled story conditions both at the group level and in the majority of individual subjects, suggesting that fNIRS imaging of the prefrontal cortex might be a sensitive method to capture neural changes associated with narrative comprehension. In contrast, the ISC during the classical music segment did not differ reliably from scrambled classical music and was also much lower than the story condition. Our main result is that naturalistic auditory stories with fNIRS might be used in a clinical setup to identify high-level processing and potential consciousness in DOC patients.
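A minimal sketch of the leave-one-out intersubject correlation (ISC) measure described above, assuming one preprocessed fNIRS channel per subject; the array shapes and names are illustrative and not taken from the study's code.

```python
import numpy as np

def leave_one_out_isc(signals: np.ndarray) -> np.ndarray:
    """ISC per subject: correlate each subject's time course with the mean
    time course of all remaining subjects (signals: n_subjects x n_timepoints)."""
    n = signals.shape[0]
    isc = np.empty(n)
    for i in range(n):
        others = np.delete(signals, i, axis=0).mean(axis=0)   # mean of the rest
        isc[i] = np.corrcoef(signals[i], others)[0, 1]
    return isc

# Hypothetical usage: compare ISC during story vs. scrambled-story segments.
# isc_story = leave_one_out_isc(story_data)
# isc_scrambled = leave_one_out_isc(scrambled_data)
```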
Affiliation(s)
- Tamar Mizrahi
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel; Head Injuries Rehabilitation Department, Sheba Medical Center, Ramat Gan, Israel
- Vadim Axelrod
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
5. Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. [PMID: 37005063] [PMCID: PMC10505454] [DOI: 10.1093/cercor/bhad087]
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Affiliation(s)
- Xuanyi Chen
- Department of Cognitive Sciences, Rice University, TX 77005, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rachel Ryskin
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Samuel Norman-Haignere
- Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
- Olessia Jouravlev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rosemary Varley
- Psychology & Language Sciences, UCL, London, WC1N 1PF, United Kingdom
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
6. Cecchetti G, Herff SA, Rohrmeier MA. Musical Garden Paths: Evidence for Syntactic Revision Beyond the Linguistic Domain. Cogn Sci 2022; 46:e13165. [PMID: 35738498] [PMCID: PMC9286404] [DOI: 10.1111/cogs.13165]
Abstract
While theoretical and empirical insights suggest that the capacity to represent and process complex syntax is crucial in language as well as other domains, it is still unclear whether specific parsing mechanisms are also shared across domains. Focusing on the musical domain, we developed a novel behavioral paradigm to investigate whether a phenomenon of syntactic revision occurs in the processing of tonal melodies under analogous conditions as in language. We present the first proof-of-existence for syntactic revision in a set of tonally ambiguous melodies, supporting the relevance of syntactic representations and parsing with language-like characteristics in a nonlinguistic domain. Furthermore, we find no evidence for a modulatory effect of musical training, suggesting that a general cognitive capacity, rather than explicit knowledge and strategies, may underlie the observed phenomenon in music.
Affiliation(s)
- Gabriele Cecchetti
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Steffen A Herff
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
7. Asano R, Boeckx C, Fujita K. Moving beyond domain-specific vs. domain-general options in cognitive neuroscience. Cortex 2022; 154:259-268. [DOI: 10.1016/j.cortex.2022.05.004]
8. Chiappetta B, Patel AD, Thompson CK. Musical and linguistic syntactic processing in agrammatic aphasia: An ERP study. J Neurolinguistics 2022; 62:101043. [PMID: 35002061] [PMCID: PMC8740885] [DOI: 10.1016/j.jneuroling.2021.101043]
Abstract
Language and music rely on complex sequences organized according to syntactic principles that are implicitly understood by enculturated listeners. Across both domains, syntactic processing involves predicting and integrating incoming elements into higher-order structures. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003), musical and linguistic syntactic processing rely on shared resources for integrating incoming elements (e.g., chords, words) into unfolding sequences. One prediction of the SSIRH is that people with agrammatic aphasia (whose deficits are due to syntactic integration problems) should present with deficits in processing musical syntax. We report the first neural study to test this prediction: event-related potentials (ERPs) were measured in response to musical and linguistic syntactic violations in a group of people with agrammatic aphasia (n=7) compared to a group of healthy controls (n=14) using an acceptability judgement task. The groups were matched with respect to age, education, and extent of musical training. Violations were based on morpho-syntactic relations in sentences and harmonic relations in chord sequences. Both groups presented with a significant P600 response to syntactic violations across both domains. The aphasic participants presented with a reduced-amplitude posterior P600 compared to the healthy adults in response to linguistic, but not musical, violations. Participants with aphasia did however present with larger frontal positivities in response to violations in both domains. Intriguingly, extent of musical training was associated with larger posterior P600 responses to syntactic violations of language and music in both groups. Overall, these findings are not consistent with the predictions of the SSIRH, and instead suggest that linguistic, but not musical, syntactic processing may be selectively impaired in stroke-induced agrammatic aphasia. However, the findings also suggest a relationship between musical training and linguistic syntactic processing, which may have clinical implications for people with aphasia, and motivates more research on the relationship between these two domains.
Affiliation(s)
- Brianne Chiappetta
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, Canada
- Cynthia K. Thompson
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Northwestern University, Chicago, IL, USA
- Department of Neurology, Northwestern University, Chicago, IL, USA
9. Rodriguez AH, Zallek SN, Xu M, Aldag J, Russell-Chapin L, Mattei TA, Litofsky NS. Neurophysiological effects of various music genres on electroencephalographic (EEG) cerebral cortex activity. J Psychedelic Stud 2021. [DOI: 10.1556/2054.2019.027]
Abstract
Background
Music has been associated with therapeutic properties for thousands of years across a vast number of diverse regions and cultures. This study expands upon our current understanding of music’s influence on human neurophysiology by investigating the effects of various music genres on cerebral cortex activity using electroencephalography (EEG).
Methods
A randomized, controlled study design was used. EEG data were recorded from 23 healthy adults, ages 19–28, while they listened to a music sequence consisting of five randomized songs and two controls. The five music genres studied were: Classical, Tribal Downtempo, Psychedelic Trance (Psytrance), Goa Trance, and Subject Choice.
Results
Controls were associated with lower percentages of beta frequencies and higher percentages of alpha frequencies than the music genres. Psytrance was associated with higher percentages of theta and delta frequencies than the other music genres and controls. The lowest percentages of beta frequencies and highest percentages of alpha frequencies occurred in the occipital and parietal regions. The highest percentages of theta and delta frequencies occurred in the frontal and temporal regions. Subjects with prior music training exhibited increased percentages of delta frequencies in the frontal region. Subject gender and music preference did not have a significant influence on frequency band percentages.
Conclusions
Findings from this study support those of previous music therapy studies and provide novel insights regarding music’s influence on human neurophysiology. These findings also support the hypothesis that music may promote changes in cerebral cortex activity that have similarities to non-rapid eye movement (NREM) sleep, while the listener remains awake.
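As a rough illustration of the frequency-band percentages reported above, the sketch below computes relative band power for one EEG channel using Welch's power spectral density from SciPy. The sampling rate, band edges, and 1-30 Hz normalization range are common conventions assumed here, not necessarily the settings used in this study.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_percentages(eeg: np.ndarray, sfreq: float = 256.0) -> dict[str, float]:
    """Relative power (%) per band for a single-channel EEG time series."""
    freqs, psd = welch(eeg, fs=sfreq, nperseg=int(2 * sfreq))   # Welch PSD
    broadband = (freqs >= 1) & (freqs <= 30)
    total = np.trapz(psd[broadband], freqs[broadband])          # 1-30 Hz power
    return {name: 100.0 * np.trapz(psd[(freqs >= lo) & (freqs <= hi)],
                                   freqs[(freqs >= lo) & (freqs <= hi)]) / total
            for name, (lo, hi) in BANDS.items()}
```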
Affiliation(s)
- Sarah Nath Zallek
- Department of Neurology, University of Illinois College of Medicine, Peoria, IL, USA
- Michael Xu
- Department of Neurology, University of Illinois College of Medicine, Peoria, IL, USA
- Jean Aldag
- James Scholar Research Program, University of Illinois College of Medicine, Peoria, IL, USA
- Lori Russell-Chapin
- Center for Collaborative Brain Research, Bradley University, Peoria, IL, USA
- Tobias A. Mattei
- Division of Neurological Surgery, Saint Louis University School of Medicine, St. Louis, MO, USA
- N. Scott Litofsky
- Division of Neurological Surgery, University of Missouri School of Medicine, Columbia, MO, USA
10
Abstract
Credible signaling may have provided a selection pressure for producing and discriminating increasingly elaborate proto-musical signals. But, why evolve them to have hierarchical structure? We argue that the hierarchality of tonality and meter is a byproduct of domain-general mechanisms evolved for reasons other than credible signaling.
11. Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. [PMID: 34311153] [DOI: 10.1016/j.cognition.2021.104847]
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferably engages the abstract rule-based control circuit, musical syntax rather employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
12. Li CW, Guo FY, Tsai CG. Predictive processing, cognitive control, and tonality stability of music: An fMRI study of chromatic harmony. Brain Cogn 2021; 151:105751. [PMID: 33991840] [DOI: 10.1016/j.bandc.2021.105751]
Abstract
The present study aimed at identifying the brain regions which preferentially responded to music with medium degrees of key stability. There were three types of auditory stimuli. Diatonic music based strictly on major and minor scales has the highest key stability, whereas atonal music has the lowest key stability. Between these two extremes, chromatic music is characterized by sophisticated uses of out-of-key notes, which challenge the internal model of musical pitch and lead to higher precision-weighted prediction error compared to diatonic and atonal music. The brain activity of 29 adults with excellent relative pitch was measured with functional magnetic resonance imaging while they listened to diatonic music, chromatic music, and atonal random note sequences. Several frontoparietal regions showed significantly greater response to chromatic music than to diatonic music and atonal sequences, including the pre-supplementary motor area (extending into the dorsal anterior cingulate cortex), dorsolateral prefrontal cortex, rostrolateral prefrontal cortex, intraparietal sulcus, and precuneus. We suggest that these frontoparietal regions may support working memory processes, hierarchical sequencing, and conflict resolution of remotely related harmonic elements during the predictive processing of chromatic music. This finding suggested a possible correlation between precision-weighted prediction error and the frontoparietal regions implicated in cognitive control.
Affiliation(s)
- Chia-Wei Li
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Fong-Yi Guo
- Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei, Taiwan
- Chen-Gia Tsai
- Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
13
Abstract
Some researchers theorize that musicians' greater language ability is mediated by greater working memory because music and language share the same processing resources. Prior work using working memory sentence-processing dual-task paradigms has shown that holding verbal information (e.g., words) in working memory interferes with sentence processing. In contrast, visuospatial stimuli are processed in a different working memory store and should not interfere with sentence processing. We tested whether music interferes with sentence processing in the same way, or whether it leaves sentence processing unaffected, as visuospatial stimuli do. We also compared musicians to nonmusicians to investigate whether musical training improves verbal working memory. Findings revealed that musical stimuli produced working memory interference similar to that of linguistic stimuli, but visuospatial stimuli did not, suggesting that music and language rely on similar working memory resources (i.e., verbal skills) that are distinct from visuospatial skills. Musicians performed more accurately on the working memory tasks, particularly for the verbal and musical working memory stimuli, supporting an association between musicianship and greater verbal working memory capacity. Future research is necessary to evaluate the role of music training as a cognitive intervention or educational strategy to enhance reading fluency.
14. Distractor probabilities modulate flanker task performance. Atten Percept Psychophys 2020; 83:866-881. [PMID: 33135099] [DOI: 10.3758/s13414-020-02151-7]
Abstract
Expectations about upcoming events help humans to effectively filter out potential distractors and respond more efficiently to task-relevant inputs. While previous work has emphasized the role of expectations about task-relevant inputs, less is known about the role that expectations play in suppressing specific distractors. To address this question, we manipulated the probabilities of different flanker configurations in the Eriksen flanker task. Across four studies, we found robust evidence for sensitivity to the probability of flankers, with an approximately logarithmic relationship between the likelihood of a particular flanker configuration and the accuracy of subjects' responses. Subjects were also sensitive to the length of runs of repeated targets, but minimally sensitive to the length of runs of repeated flankers. Two studies used chevron stimuli, and two used letters (confirming that results generalize with greater dissimilarity between stimuli). Expanding the set of stimuli (thus reducing the dominance of any one exemplar) eliminated the effect. Our findings suggest that expectations about distractors form in response to statistical regularities at multiple timescales, and that their effects are strongest when stimuli are geometrically similar and subjects are able to respond to trials quickly. Unexpected distractors could disrupt performance, most likely via a form of attentional capture. This work demonstrates how expectations can influence attention in complex cognitive settings, and illuminates the multiple, nested factors that contribute to these effects.
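To make the "approximately logarithmic" relationship concrete, the sketch below regresses per-configuration accuracy on the log of configuration probability. The numbers are illustrative placeholders, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-configuration values (placeholders, not the study's data).
flanker_prob = np.array([0.05, 0.10, 0.15, 0.30, 0.40])  # configuration probability
accuracy = np.array([0.82, 0.86, 0.88, 0.92, 0.94])      # mean response accuracy

# Fit accuracy as a linear function of log probability.
slope, intercept, r, p, se = stats.linregress(np.log(flanker_prob), accuracy)
print(f"accuracy ~ {intercept:.2f} + {slope:.3f} * log(p)   (r^2 = {r**2:.2f})")
```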
15. Calma-Roddin N, Drury JE. Music, Language, and The N400: ERP Interference Patterns Across Cognitive Domains. Sci Rep 2020; 10:11222. [PMID: 32641708] [PMCID: PMC7343814] [DOI: 10.1038/s41598-020-66732-0]
Abstract
Studies of the relationship of language and music have suggested these two systems may share processing resources involved in the computation/maintenance of abstract hierarchical structure (syntax). One type of evidence comes from ERP interference studies involving concurrent language/music processing showing interaction effects when both processing streams are simultaneously perturbed by violations (e.g., syntactically incorrect words paired with incongruent completion of a chord progression). Here, we employ this interference methodology to target the mechanisms supporting long term memory (LTM) access/retrieval in language and music. We used melody stimuli from previous work showing out-of-key or unexpected notes may elicit a musical analogue of language N400 effects, but only for familiar melodies, and not for unfamiliar ones. Target notes in these melodies were time-locked to visually presented target words in sentence contexts manipulating lexical/conceptual semantic congruity. Our study succeeded in eliciting expected N400 responses from each cognitive domain independently. Among several new findings we argue to be of interest, these data demonstrate that: (i) language N400 effects are delayed in onset by concurrent music processing only when melodies are familiar, and (ii) double violations with familiar melodies (but not with unfamiliar ones) yield a sub-additive N400 response. In addition: (iii) early negativities (RAN effects), which previous work has connected to musical syntax, along with the music N400, were together delayed in onset for familiar melodies relative to the timing of these effects reported in the previous music-only study using these same stimuli, and (iv) double violation cases involving unfamiliar/novel melodies also delayed the RAN effect onset. These patterns constitute the first demonstration of N400 interference effects across these domains and together contribute previously undocumented types of interactions to the available pool of findings relevant to understanding whether language and music may rely on shared underlying mechanisms.
Affiliation(s)
- Nicole Calma-Roddin
- Department of Behavioral Sciences, New York Institute of Technology, Old Westbury, New York, USA
- Department of Psychology, Stony Brook University, New York, USA
- John E Drury
- School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
16. Lee DJ, Jung H, Loui P. Attention Modulates Electrophysiological Responses to Simultaneous Music and Language Syntax Processing. Brain Sci 2019; 9:E305. [PMID: 31683961] [PMCID: PMC6895977] [DOI: 10.3390/brainsci9110305]
Abstract
Music and language are hypothesized to engage the same neural resources, particularly at the level of syntax processing. Recent reports suggest that attention modulates the shared processing of music and language, but the time course of the effects of attention on music and language syntax processing is still unclear. In this EEG study we vary top-down attention to language and music, while manipulating the syntactic structure of simultaneously presented musical chord progressions and garden-path sentences in a modified rapid serial visual presentation paradigm. The Early Right Anterior Negativity (ERAN) was observed in response to both attended and unattended musical syntax violations. In contrast, an N400 was only observed in response to attended linguistic syntax violations, and a P3/P600 only in response to attended musical syntax violations. Results suggest that early processing of musical syntax, as indexed by the ERAN, is relatively automatic; however, top-down allocation of attention changes the processing of syntax in both music and language at later stages of cognitive processing.
Affiliation(s)
- Daniel J Lee
- Department of Psychology, Wesleyan University, Middletown, CT 06459, USA
- Harim Jung
- Department of Psychology, Wesleyan University, Middletown, CT 06459, USA
- Psyche Loui
- Department of Psychology, Wesleyan University, Middletown, CT 06459, USA
- Department of Music, Northeastern University, Boston, MA 02115, USA
17. Individual differences in musical training and executive functions: A latent variable approach. Mem Cognit 2019; 46:1076-1092. [PMID: 29752659] [DOI: 10.3758/s13421-018-0822-8]
Abstract
Learning and performing music draw on a host of cognitive abilities, and previous research has postulated that musicians might have advantages in related cognitive processes. One such aspect of cognition that may be related to musical training is executive functions (EFs), a set of top-down processes that regulate behavior and cognition according to task demands. Previous studies investigating the link between musical training and EFs have yielded mixed results and are difficult to compare. In part, this is because most studies have looked at only one specific cognitive process, and even studies looking at the same process have used different experimental tasks. Furthermore, most correlational studies have used different "musician" and "non-musician" categorizations for their comparisons, so generalizing the findings is difficult. The present study provides a more comprehensive assessment of how individual differences in musical training relate to latent measures of three separable aspects of EFs. We administered a well-validated EF battery containing multiple tasks tapping the EF components of inhibition, shifting, and working memory updating (Friedman et al. in Journal of Experimental Psychology: General, 137, 201-225, 2008), as well as a comprehensive, continuous measure of musical training and sophistication (Müllensiefen et al., in PLoS ONE, 9, e89642, 2014). Musical training correlated with some individual EF tasks involving inhibition and working memory updating, but not with individual tasks involving shifting. However, musical training only predicted the latent variable of working memory updating, but not the latent variables of inhibition or shifting after controlling for IQ, socioeconomic status, and handedness. Although these data are correlational, they nonetheless suggest that musical experience places particularly strong demands specifically on working memory updating processes.
18. Hernandez-Ruiz E. How is music processed? Tentative answers from cognitive neuroscience. Nord J Music Ther 2019. [DOI: 10.1080/08098131.2019.1587785]
Affiliation(s)
- Eugenia Hernandez-Ruiz
- Department of Music Education and Music Therapy, School of Music, Arizona State University, Tempe, AZ, USA
19. Bowmer A, Mason K, Knight J, Welch G. Investigating the Impact of a Musical Intervention on Preschool Children's Executive Function. Front Psychol 2018; 9:2389. [PMID: 30618906] [PMCID: PMC6307457] [DOI: 10.3389/fpsyg.2018.02389]
Abstract
The impact of music interventions on the cognitive skills of young children has become the focus of a growing number of research studies in recent years. This study investigated the effect of weekly musicianship training on the executive function abilities of 3-to-4-year-old children at a London, United Kingdom preschool, using a two-phase experimental design. In Phase 1, 14 children (Group A) took part in eight weekly musicianship classes, provided by a specialist music teacher, while 25 children (Groups B and C combined) engaged in nursery free play. Results of this Phase showed Group A to have improved on two measures relating to planning and inhibition skills. During Phase 2, Group A continued with music classes, while Group B began music classes for the first time and Group C took part in an art intervention. Repeated measures ANOVA found no significant difference in performance improvement between the three participant groups during phase 2; however, the performance difference between groups was nearing significance for the peg tapping task (p = 0.06). The findings from this study contribute to current debates about the potential cognitive benefit of musical interventions, including important issues regarding intervention duration, experimental design, target age groups, executive function testing, and task novelty.
Affiliation(s)
- Alice Bowmer
- UCL Institute of Education, University College London, London, United Kingdom
- Kathryn Mason
- UCL Institute of Education, University College London, London, United Kingdom
- Graham Welch
- UCL Institute of Education, University College London, London, United Kingdom
20. Fiveash A, McArthur G, Thompson WF. Syntactic and non-syntactic sources of interference by music on language processing. Sci Rep 2018; 8:17918. [PMID: 30559400] [PMCID: PMC6297162] [DOI: 10.1038/s41598-018-36076-x]
Abstract
Music and language are complex hierarchical systems in which individual elements are systematically combined to form larger, syntactic structures. Suggestions that music and language share syntactic processing resources have relied on evidence that syntactic violations in music interfere with syntactic processing in language. However, syntactic violations may affect auditory processing in non-syntactic ways, accounting for reported interference effects. To investigate the factors contributing to interference effects, we assessed recall of visually presented sentences and word-lists when accompanied by background auditory stimuli differing in syntactic structure and auditory distraction: melodies without violations, scrambled melodies, melodies that alternate in timbre, and environmental sounds. In Experiment 1, one-timbre melodies interfered with sentence recall, and increasing both syntactic complexity and distraction by scrambling melodies increased this interference. In contrast, three-timbre melodies reduced interference on sentence recall, presumably because alternating instruments interrupted auditory streaming, reducing pressure on long-distance syntactic structure building. Experiment 2 confirmed that participants were better at discriminating syntactically coherent one-timbre melodies than three-timbre melodies. Together, these results illustrate that syntactic processing and auditory streaming interact to influence sentence recall, providing implications for theories of shared syntactic processing and auditory distraction.
Affiliation(s)
- Anna Fiveash
- Department of Psychology, Macquarie University, Sydney, Australia
- Lyon Neuroscience Research Centre, Auditory Cognition and Psychoacoustics Team and Dynamique Du Langage Laboratory, INSERM, U1028, CNRS, UMR5292, Lyon, France
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
- Genevieve McArthur
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
- Department of Cognitive Science, Macquarie University, Sydney, Australia
- William Forde Thompson
- Department of Psychology, Macquarie University, Sydney, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
21. Ma X, Ding N, Tao Y, Yang YF. Syntactic complexity and musical proficiency modulate neural processing of non-native music. Neuropsychologia 2018; 121:164-174. [PMID: 30359654] [DOI: 10.1016/j.neuropsychologia.2018.10.005]
Abstract
In music, chords are organized into hierarchical structures on the basis of musical syntax, and the syntax of Western music can be implicitly acquired by listeners growing up in a Western musical culture. Here, we investigated whether Western musical syntax of different complexities can be implicitly acquired by non-native listeners growing up in China. This study used electroencephalography (EEG) to measure neural responses to musical sequences that followed either a simple rule, i.e., a finite state grammar (FSG), or a complex rule, i.e., a phrase structure grammar (PSG). We tested three groups of Chinese listeners who varied in their proficiency and experience in Western music. Only the high-proficiency group had received formal Western musical training, whereas the low- and moderate-proficiency groups varied in their degree of exposure to Western music. The results showed that in the FSG condition, the event-related potentials (ERPs) evoked by regular and irregular final chords were not significantly different in the low-proficiency group. In contrast, in the moderate- and high-proficiency groups, the irregular final chords evoked an ERAN-N5 biphasic response. In the PSG condition, however, only the high-proficiency group showed an ERAN-N5 biphasic response to irregular final chords. This study provides evidence that although simple structures of Western music, such as FSG, can be acquired through long-term implicit learning, the acquisition of more complex structures, such as PSG, merely from exposure to Western music may not be as easy.
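To make the contrast between the two rule types concrete, the sketch below generates sequences from a toy finite-state grammar (local transitions only) and a toy phrase-structure rule with center-embedded, nested dependencies. The symbols are abstract placeholders, not the chord materials used in the study.

```python
import random

# Toy finite-state grammar: each symbol depends only on the previous state.
FSG = {"START": ["A"], "A": ["B"], "B": ["C", "END"], "C": ["B"]}

def generate_fsg(max_len: int = 8) -> list[str]:
    """Walk the transition table until END, emitting one symbol per step."""
    seq, state = [], "START"
    while len(seq) < max_len:
        state = random.choice(FSG[state])
        if state == "END":
            break
        seq.append(state)
    return seq

def generate_psg(depth: int = 3) -> list[str]:
    """Toy phrase-structure rule S -> a S b: openers are closed in reverse
    order, producing nested long-distance dependencies that a finite-state
    grammar cannot express."""
    return [f"a{i}" for i in range(depth)] + [f"b{i}" for i in reversed(range(depth))]

print(generate_fsg())   # e.g. ['A', 'B', 'C', 'B']
print(generate_psg())   # ['a0', 'a1', 'a2', 'b2', 'b1', 'b0']
```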
Affiliation(s)
- Xie Ma
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China; College of Educational Science and Management, Yunnan Normal University, Kunming, China; Key Laboratory of Educational Informatization for Nationalities, Yunnan Normal University, Kunming, China
- Nai Ding
- College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China; State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China
- Yun Tao
- College of Educational Science and Management, Yunnan Normal University, Kunming, China; Key Laboratory of Educational Informatization for Nationalities, Yunnan Normal University, Kunming, China
- Yu Fang Yang
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China
22. Sun Y, Lu X, Ho HT, Johnson BW, Sammler D, Thompson WF. Syntactic processing in music and language: Parallel abnormalities observed in congenital amusia. Neuroimage Clin 2018; 19:640-651. [PMID: 30013922] [PMCID: PMC6022360] [DOI: 10.1016/j.nicl.2018.05.032]
Abstract
Evidence is accumulating that similar cognitive resources are engaged to process syntactic structure in music and language. Congenital amusia – a neurodevelopmental disorder that primarily affects music perception, including musical syntax – provides a special opportunity to understand the nature of this overlap. Using electroencephalography (EEG), we investigated whether individuals with congenital amusia have parallel deficits in processing language syntax in comparison to control participants. Twelve amusic participants (eight females) and twelve control participants (eight females) were presented with melodies in one session, and spoken sentences in another session, both of which included syntactically congruent and incongruent stimuli. They were asked to complete a music-related and a language-related task that were irrelevant to the syntactic incongruities. Our results show that amusic participants exhibit impairments in the early stages of both music- and language-syntactic processing. Specifically, we found that two event-related potential (ERP) components – namely the Early Right Anterior Negativity (ERAN) and the Left Anterior Negativity (LAN), associated with music- and language-syntactic processing respectively – were absent in the amusia group. However, at later processing stages, amusics showed similar brain responses as controls to syntactic incongruities in both music and language. This was reflected in a normal N5 in response to melodies and a normal P600 to spoken sentences. Notably, amusics' parallel music- and language-syntactic impairments were not accompanied by deficits in semantic processing (indexed by a normal N400 in response to semantic incongruities). Together, our findings provide further evidence for shared music and language syntactic processing, particularly at early stages of processing.
Highlights:
- Amusics displayed abnormal brain responses to music-syntactic irregularities.
- They also exhibited abnormal brain responses to language-syntactic irregularities.
- These impairments affect an early stage of syntactic processing, not a later stage.
- Music and language involve similar cognitive mechanisms for processing syntax.
Affiliation(s)
- Yanan Sun
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Xuejing Lu
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia; CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Hao Tam Ho
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa 56126, Italy; School of Psychology, University of Sydney, New South Wales 2006, Australia
- Blake W Johnson
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- William Forde Thompson
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia
23. D'Souza AA, Moradzadeh L, Wiseheart M. Musical training, bilingualism, and executive function: working memory and inhibitory control. Cogn Res Princ Implic 2018; 3:11. [PMID: 29670934] [PMCID: PMC5893660] [DOI: 10.1186/s41235-018-0095-6]
Abstract
The current study investigated whether long-term experience in music or a second language is associated with enhanced cognitive functioning. Early studies suggested the possibility of a cognitive advantage from musical training and bilingualism, but these findings have often failed to replicate in recent work. Further, each form of expertise has been investigated independently, leaving it unclear whether any benefits are specifically caused by each skill or are a result of skill learning in general. To assess whether cognitive benefits from training exist, and how unique they are to each training domain, the current study compared musicians and bilinguals to each other, as well as to individuals who had expertise in both skills or in neither. Young adults (n = 153) were categorized into one of four groups: monolingual musician; bilingual musician; bilingual non-musician; and monolingual non-musician. Multiple tasks per cognitive ability were used to examine the coherency of any training effects. Results revealed that musically trained individuals, but not bilinguals, had enhanced working memory. Neither skill was associated with enhanced inhibitory control. The findings confirm previous associations between musicianship and improved cognition and extend existing evidence to show that benefits are narrower than expected but can be uniquely attributed to music compared to another specialized auditory skill domain. The null bilingual effect, despite a music effect in the same group of individuals, challenges the proposition that young adults are at a performance ceiling and adds to increasing evidence on the lack of a bilingual advantage on cognition.
Affiliation(s)
- Annalise A D'Souza
- Department of Psychology, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada; LaMarsh Centre for Child and Youth Research, York University, Toronto, ON, Canada
- Linda Moradzadeh
- Department of Psychology, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada; LaMarsh Centre for Child and Youth Research, York University, Toronto, ON, Canada
- Melody Wiseheart
- Department of Psychology, York University, 4700 Keele Street, Toronto, ON M3J 1P3, Canada; LaMarsh Centre for Child and Youth Research, York University, Toronto, ON, Canada
24. The right inferior frontal gyrus processes nested non-local dependencies in music. Sci Rep 2018; 8:3822. [PMID: 29491454] [PMCID: PMC5830458] [DOI: 10.1038/s41598-018-22144-9]
Abstract
Complex auditory sequences known as music have often been described as hierarchically structured. This permits the existence of non-local dependencies, which relate elements of a sequence beyond their temporal sequential order. Previous studies in music have reported differential activity in the inferior frontal gyrus (IFG) when comparing regular and irregular chord transitions based on theories of Western tonal harmony. However, it is unclear whether the observed activity reflects the interpretation of hierarchical structure, as the effects are confounded by local irregularity. Using functional magnetic resonance imaging (fMRI), we found that violations of non-local dependencies in nested sequences of three-tone musical motifs elicited increased activity in the right IFG in musicians. This contrasts with similar studies in language, which typically report the left IFG in processing grammatical syntax. Effects of increasing auditory working memory demands were moreover reflected in distributed activity in frontal and parietal regions. Our study therefore demonstrates the role of the right IFG in processing non-local dependencies in music, and suggests that hierarchical processing in different cognitive domains relies on similar mechanisms that are subserved by domain-selective neuronal subpopulations.
25. Roncaglia-Denissen MP, Bouwer FL, Honing H. Decision Making Strategy and the Simultaneous Processing of Syntactic Dependencies in Language and Music. Front Psychol 2018; 9:38. [PMID: 29441035] [PMCID: PMC5797648] [DOI: 10.3389/fpsyg.2018.00038]
Abstract
Despite differences in their function and domain-specific elements, syntactic processing in music and language is believed to share cognitive resources. This study aims to investigate whether the simultaneous processing of language and music relies on a common syntactic processor or on more general attentional resources. To investigate this question, we tested musicians and non-musicians using visually presented sentences and aurally presented melodies containing local and long-distance syntactic dependencies. Accuracy rates and reaction times of participants' responses were collected. In both sentences and melodies, unexpected syntactic anomalies were introduced. This is the first study to address the processing of local and long-distance dependencies in language and music combined while reducing the effect of sensory memory. Participants were instructed to focus on language (language session), music (music session), or both (dual session). In the language session, musicians and non-musicians performed comparably in terms of accuracy rates and reaction times. As expected, group differences appeared in the music session, with musicians being more accurate in their responses than non-musicians and only the latter showing an interaction between the accuracy rates for music and language syntax. In the dual session, musicians were overall more accurate than non-musicians. However, both groups showed comparable behavior by displaying an interaction between the accuracy rates for language and music syntax responses. In our study, accuracy rates seem to better capture the interaction between language and music syntax, and this interaction seems to indicate the use of distinct but interacting mechanisms as part of a decision-making strategy. The interaction also appears to be sensitive to increases in attentional load and to domain proficiency. Our study contributes to the long-standing debate about the commonalities between language and music by providing evidence for their interaction at a more domain-general level.
Affiliation(s)
- M. P. Roncaglia-Denissen
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Fleur L. Bouwer
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Henkjan Honing
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
|
26
|
Slater J, Ashley R, Tierney A, Kraus N. Got Rhythm? Better Inhibitory Control Is Linked with More Consistent Drumming and Enhanced Neural Tracking of the Musical Beat in Adult Percussionists and Nonpercussionists. J Cogn Neurosci 2017; 30:14-24. [PMID: 28949825 DOI: 10.1162/jocn_a_01189] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Musical rhythm engages motor and reward circuitry that is important for cognitive control, and there is evidence for enhanced inhibitory control in musicians. We recently revealed an inhibitory control advantage in percussionists compared with vocalists, highlighting the potential importance of rhythmic expertise in mediating this advantage. Previous research has shown that better inhibitory control is associated with less variable performance in simple sensorimotor synchronization tasks; however, this relationship has not been examined through the lens of rhythmic expertise. We hypothesize that the development of rhythm skills strengthens inhibitory control in two ways: by fine-tuning motor networks through the precise coordination of movements "in time" and by activating reward-based mechanisms, such as predictive processing and conflict monitoring, which are involved in tracking temporal structure in music. Here, we assess adult percussionists and nonpercussionists on inhibitory control, selective attention, basic drumming skills (self-paced, paced, and continuation drumming), and cortical evoked responses to an auditory stimulus presented on versus off the beat of music. Consistent with our hypotheses, we find that better inhibitory control is correlated with more consistent drumming and enhanced neural tracking of the musical beat. Drumming variability and the neural index of beat alignment each contribute unique predictive power to a regression model, explaining 57% of variance in inhibitory control. These outcomes present the first evidence that enhanced inhibitory control in musicians may be mediated by rhythmic expertise and provide a foundation for future research investigating the potential for rhythm-based training to strengthen cognitive function.
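The regression result reported above (two predictors jointly explaining 57% of the variance in inhibitory control, each contributing unique predictive power) can be illustrated with a small sketch on simulated data. This is not the authors' analysis code; the variable names and simulated effect sizes are ours, and the unique contribution of each predictor is estimated simply by comparing the full model's R^2 against that of a reduced model.

```python
# Minimal sketch (simulated data, not the authors' code): unique variance
# contributed by two predictors in an ordinary least-squares regression.
# Variable names (drum_var, beat_align, inhibition) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 60
drum_var = rng.normal(size=n)      # drumming variability (simulated)
beat_align = rng.normal(size=n)    # neural beat-alignment index (simulated)
inhibition = 0.5 * drum_var + 0.4 * beat_align + rng.normal(scale=0.7, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit, with an intercept column added to X."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(np.column_stack([drum_var, beat_align]), inhibition)
r2_drum_only = r_squared(drum_var[:, None], inhibition)
r2_beat_only = r_squared(beat_align[:, None], inhibition)

print(f"full model R^2:           {r2_full:.2f}")
print(f"unique to beat alignment: {r2_full - r2_drum_only:.2f}")
print(f"unique to drumming var.:  {r2_full - r2_beat_only:.2f}")
```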
|
27
|
Grossberg S. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Netw 2016; 87:38-95. [PMID: 28088645 DOI: 10.1016/j.neunet.2016.11.003] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Revised: 10/21/2016] [Accepted: 11/20/2016] [Indexed: 10/20/2022]
Abstract
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA; Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA 02215, USA.
|
28
|
Slevc LR, Faroqi-Shah Y, Saxena S, Okada BM. Preserved processing of musical structure in a person with agrammatic aphasia. Neurocase 2016; 22:505-511. [PMID: 27112951 DOI: 10.1080/13554794.2016.1177090] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Evidence for shared processing of structure (or syntax) in language and in music conflicts with neuropsychological dissociations between the two. However, while harmonic structural processing can be impaired in patients with spared linguistic syntactic abilities (Peretz, I. (1993). Auditory atonalia for melodies. Cognitive Neuropsychology, 10, 21-56. doi:10.1080/02643299308253455), evidence for the opposite dissociation (preserved harmonic processing despite agrammatism) is largely lacking. Here, we report one such case: HV, a former musician with Broca's aphasia and agrammatic speech, was impaired in making linguistic, but not musical, acceptability judgments. Similarly, she showed no sensitivity to linguistic structure, but normal sensitivity to musical structure, in implicit priming tasks. To our knowledge, this is the first non-anecdotal report of a patient with agrammatic aphasia demonstrating preserved harmonic processing abilities, supporting claims that aspects of musical and linguistic structure rely on distinct neural mechanisms.
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, Maryland, USA
- Yasmeen Faroqi-Shah
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Sadhvi Saxena
- Department of Psychology, University of Maryland, College Park, Maryland, USA
- Brooke M Okada
- Department of Psychology, University of Maryland, College Park, Maryland, USA
|
29
|
Slevc LR, Davey NS, Buschkuehl M, Jaeggi SM. Tuning the mind: Exploring the connections between musical ability and executive functions. Cognition 2016; 152:199-211. [PMID: 27107499 DOI: 10.1016/j.cognition.2016.03.017] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2015] [Revised: 02/11/2016] [Accepted: 03/23/2016] [Indexed: 01/13/2023]
Abstract
A growing body of research suggests that musical experience and ability are related to a variety of cognitive abilities, including executive functioning (EF). However, it is not yet clear if these relationships are limited to specific components of EF, limited to auditory tasks, or reflect very general cognitive advantages. This study investigated the existence and generality of the relationship between musical ability and EFs by evaluating the musical experience and ability of a large group of participants and investigating whether this predicts individual differences on three different components of EF - inhibition, updating, and switching - in both auditory and visual modalities. Musical ability predicted better performance on both auditory and visual updating tasks, even when controlling for a variety of potential confounds (age, handedness, bilingualism, and socio-economic status). However, musical ability was not clearly related to inhibitory control and was unrelated to switching performance. These data thus show that cognitive advantages associated with musical ability are not limited to auditory processes, but are limited to specific aspects of EF. This supports a process-specific (but modality-general) relationship between musical ability and non-musical aspects of cognition.
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, MD 20742, USA.
- Nicholas S Davey
- Department of Psychology, University of Maryland, College Park, MD 20742, USA
|
30
|
Benz S, Sellaro R, Hommel B, Colzato LS. Music Makes the World Go Round: The Impact of Musical Training on Non-musical Cognitive Functions-A Review. Front Psychol 2016; 6:2023. [PMID: 26779111 PMCID: PMC4703819 DOI: 10.3389/fpsyg.2015.02023] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2015] [Accepted: 12/17/2015] [Indexed: 01/08/2023] Open
Abstract
Musical training is becoming increasingly popular as a topic for scientific research. Here we review the available studies investigating whether, and to what degree, musical experience generalizes to cognitive functions unrelated to musical abilities in healthy humans. In general, musical training appears to be associated with enhancing effects on various cognitive functions, ranging from executive control to creativity, even if these effects are sometimes restricted to the auditory domain. We conclude that musical engagement may be a useful form of cognitive training to promote cognitive enhancement, but more research using longitudinal designs and taking individual differences into account is necessary to determine the actual benefits.
Affiliation(s)
- Sarah Benz
- Institute of Experimental Psychology, Heinrich-Heine University Düsseldorf, Germany
- Roberta Sellaro
- Cognitive Psychology Unit and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Bernhard Hommel
- Cognitive Psychology Unit and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Lorenza S Colzato
- Cognitive Psychology Unit and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
|
31
|
Van de Cavey J, Hartsuiker RJ. Is there a domain-general cognitive structuring system? Evidence from structural priming across music, math, action descriptions, and language. Cognition 2016; 146:172-84. [DOI: 10.1016/j.cognition.2015.09.013] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2014] [Revised: 09/14/2015] [Accepted: 09/16/2015] [Indexed: 11/30/2022]
|
32
|
Heffner CC, Slevc LR. Prosodic Structure as a Parallel to Musical Structure. Front Psychol 2015; 6:1962. [PMID: 26733930 PMCID: PMC4687474 DOI: 10.3389/fpsyg.2015.01962] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2015] [Accepted: 12/07/2015] [Indexed: 11/13/2022] Open
Abstract
What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.
Affiliation(s)
- Christopher C. Heffner
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Department of Linguistics, University of Maryland, College Park, MD, USA
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- L. Robert Slevc
- Program in Neuroscience and Cognitive Science, University of Maryland, College Park, MD, USA
- Department of Psychology, University of Maryland, College Park, MD, USA
|
33
|
Jung H, Sontag S, Park YS, Loui P. Rhythmic Effects of Syntax Processing in Music and Language. Front Psychol 2015; 6:1762. [PMID: 26635672 PMCID: PMC4655243 DOI: 10.3389/fpsyg.2015.01762] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2015] [Accepted: 11/03/2015] [Indexed: 11/15/2022] Open
Abstract
Music and language are human cognitive and neural functions that share many structural similarities. Past theories posit a sharing of neural resources between syntax processing in music and language (Patel, 2003), and a dynamic attention network that governs general temporal processing (Large and Jones, 1999). Both make predictions about music and language processing over time. Experiment 1 of this study investigates the relationship between rhythmic expectancy and musical and linguistic syntax in a reading time paradigm. Stimuli (adapted from Slevc et al., 2009) were sentences broken down into segments; each sentence segment was paired with a musical chord and presented at a fixed inter-onset interval. Linguistic syntax violations appeared in a garden-path design. During the critical region of the garden-path sentence, i.e., the particular segment in which the syntactic unexpectedness was processed, expectancy violations for language, music, and rhythm were each independently manipulated: musical expectation was manipulated by presenting out-of-key chords and rhythmic expectancy was manipulated by perturbing the fixed inter-onset interval such that the sentence segments and musical chords appeared either early or late. Reading times were recorded for each sentence segment and compared for linguistic, musical, and rhythmic expectancy. Results showed main effects of rhythmic expectancy and linguistic syntax expectancy on reading time. There was also an effect of rhythm on the interaction between musical and linguistic syntax: effects of violations in musical and linguistic syntax showed significant interaction only during rhythmically expected trials. To test the effects of our experimental design on rhythmic and linguistic expectancies, independently of musical syntax, Experiment 2 used the same experimental paradigm, but the musical factor was eliminated—linguistic stimuli were simply presented silently, and rhythmic expectancy was manipulated at the critical region. Experiment 2 replicated effects of rhythm and language, without an interaction. Together, results suggest that the interaction of music and language syntax processing depends on rhythmic expectancy, and support a merging of theories of music and language syntax processing with dynamic models of attentional entrainment.
Affiliation(s)
- Harim Jung
- Music, Imaging, and Neural Dynamics Lab, Psychology and Neuroscience and Behavior, Wesleyan University, Middletown, CT, USA
- Samuel Sontag
- Music, Imaging, and Neural Dynamics Lab, Psychology and Neuroscience and Behavior, Wesleyan University, Middletown, CT, USA
- YeBin S Park
- Music, Imaging, and Neural Dynamics Lab, Psychology and Neuroscience and Behavior, Wesleyan University, Middletown, CT, USA
- Psyche Loui
- Music, Imaging, and Neural Dynamics Lab, Psychology and Neuroscience and Behavior, Wesleyan University, Middletown, CT, USA
|
34
|
Kunert R, Willems RM, Casasanto D, Patel AD, Hagoort P. Music and Language Syntax Interact in Broca's Area: An fMRI Study. PLoS One 2015; 10:e0141069. [PMID: 26536026 PMCID: PMC4633113 DOI: 10.1371/journal.pone.0141069] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2014] [Accepted: 09/17/2015] [Indexed: 12/31/2022] Open
Abstract
Instrumental music and language are both syntactic systems, employing complex, hierarchically structured sequences built using implicit structural norms. This organization allows listeners to understand the role of individual words or tones in the context of an unfolding sentence or melody. Previous studies suggest that the brain mechanisms of syntactic processing may be partly shared between music and language. However, functional neuroimaging evidence for anatomical overlap of brain activity involved in linguistic and musical syntactic processing has been lacking. In the present study we used functional magnetic resonance imaging (fMRI) in conjunction with an interference paradigm based on sung sentences. We show that the processing demands of musical syntax (harmony) and language syntax interact in Broca's area in the left inferior frontal gyrus (without leading to music and language main effects). A language main effect in Broca's area only emerged in the complex music harmony condition, suggesting that (with our stimuli and tasks) a language effect only becomes visible under conditions of increased demands on shared neural resources. In contrast to previous studies, our design allows us to rule out that the observed neural interaction is due to (1) general attention mechanisms, as a psychoacoustic auditory anomaly behaved unlike the harmonic manipulation, or (2) error processing, as the language and the music stimuli contained no structural errors. The current results thus suggest that two different cognitive domains, music and language, might draw on the same high-level syntactic integration resources in Broca's area.
Affiliation(s)
- Richard Kunert
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Roel M. Willems
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
- Daniel Casasanto
- Psychology Department, University of Chicago, Chicago, Illinois, United States of America
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behavior, Nijmegen, The Netherlands
|
35
|
LaCroix AN, Diaz AF, Rogalsky C. The relationship between the neural computations for speech and music perception is context-dependent: an activation likelihood estimate study. Front Psychol 2015; 6:1138. [PMID: 26321976 PMCID: PMC4531212 DOI: 10.3389/fpsyg.2015.01138] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Accepted: 07/22/2015] [Indexed: 11/30/2022] Open
Abstract
The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis study used activation likelihood estimate analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks, and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and compared each music condition with passive listening. We found that listening to music and to speech preferentially activate distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques, particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.
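The core computation in an activation likelihood estimation (ALE) analysis is straightforward to sketch: each reported peak coordinate is modelled as a three-dimensional Gaussian probability blob, the blobs are combined into one modelled-activation map per study, and the per-study maps are combined voxel-wise into an ALE map that is then tested against a null distribution. The toy example below is our own illustration on a coarse grid with made-up coordinates, not the pipeline used in the study, and it omits the permutation-based inference step.

```python
# Toy sketch of the activation likelihood estimation (ALE) idea (illustration
# only, not the study's pipeline): peaks -> Gaussian blobs -> per-study
# modelled-activation maps -> voxel-wise ALE map.
import numpy as np

GRID = 20      # coarse 20 x 20 x 20 voxel grid (illustrative)
SIGMA = 1.5    # kernel width in voxels (illustrative)

# Reported peak coordinates per study (made-up voxel coordinates).
studies = [
    [(5, 5, 5), (12, 8, 7)],
    [(6, 5, 4)],
    [(13, 9, 6), (6, 6, 5)],
]

coords = np.stack(np.meshgrid(*[np.arange(GRID)] * 3, indexing="ij"), axis=-1)

def modelled_activation(peaks):
    """Per-study map: chance that at least one of the study's peaks lies in each voxel."""
    p_none = np.ones((GRID,) * 3)
    for peak in peaks:
        d2 = np.sum((coords - np.array(peak)) ** 2, axis=-1)
        blob = np.exp(-d2 / (2 * SIGMA ** 2))   # unnormalized Gaussian kernel
        p_none *= 1 - blob
    return 1 - p_none

# ALE map: voxel-wise union of the per-study maps.
ale = 1 - np.prod([1 - modelled_activation(s) for s in studies], axis=0)
peak_voxel = np.unravel_index(ale.argmax(), ale.shape)
print("peak ALE value:", round(float(ale.max()), 3), "at voxel", peak_voxel)
```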
Affiliation(s)
- Arianna N LaCroix
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
- Alvaro F Diaz
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
- Corianne Rogalsky
- Communication Neuroimaging and Neuroscience Laboratory, Department of Speech and Hearing Science, Arizona State University, Tempe, AZ, USA
|
36
|
Kunert R, Slevc LR. A Commentary on: "Neural overlap in processing music and speech". Front Hum Neurosci 2015; 9:330. [PMID: 26089792 PMCID: PMC4452821 DOI: 10.3389/fnhum.2015.00330] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2015] [Accepted: 05/22/2015] [Indexed: 11/24/2022] Open
Affiliation(s)
- Richard Kunert
- Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Neurobiology of Language, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- L Robert Slevc
- Language and Music Cognition Lab, Department of Psychology, University of Maryland, College Park, MD, USA
|