1. Bellmann OT, Asano R. Neural correlates of musical timbre: an ALE meta-analysis of neuroimaging data. Front Neurosci 2024; 18:1373232. PMID: 38952924; PMCID: PMC11215185; DOI: 10.3389/fnins.2024.1373232.
Abstract
Timbre is a central aspect of music: it allows listeners to identify musical sounds and conveys musical emotion, but it also supports the recognition of actions and serves as an important structuring property of music. The former functions are known to be implemented in a ventral auditory stream during musical timbre processing. The latter functions are commonly attributed to areas in a dorsal auditory processing stream in other musical domains, but the dorsal stream's involvement in musical timbre processing has so far been unknown. To investigate whether musical timbre processing involves both dorsal and ventral auditory pathways, we carried out an activation likelihood estimation (ALE) meta-analysis of 18 experiments from 17 published neuroimaging studies on musical timbre perception. We identified consistent activations in Brodmann areas (BA) 41, 42, and 22 in the bilateral transverse temporal gyri, the posterior superior temporal gyri, and the planum temporale; in BA 40 of the bilateral inferior parietal lobe; in BA 13 in the bilateral posterior insula; and in BA 13 and 22 in the right anterior insula and superior temporal gyrus. The vast majority of the identified regions are associated with the dorsal and ventral auditory processing streams. We therefore propose framing the processing of musical timbre in a dual-stream model. Moreover, the regions activated in timbre processing overlap with brain regions involved in processing several other fundamental aspects of music, indicating a possible shared neural basis for musical timbre and other musical domains.
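For readers unfamiliar with ALE, the core computation can be sketched in a few lines: each experiment's reported peak coordinates are blurred into a Gaussian "modeled activation" map, and the maps are combined voxelwise as a union of probabilities. The grid size, kernel width, peak height, and coordinates below are all illustrative, not the parameters of the cited study; real ALE derives the kernel width from each study's sample size and assesses cluster significance against a null distribution.

```python
import numpy as np

# Toy voxel grid standing in for brain space (sizes are illustrative).
shape = (20, 20, 20)
grid = np.stack(np.meshgrid(*map(np.arange, shape), indexing="ij"), axis=-1)

def modeled_activation(foci, sigma=2.0, peak=0.5):
    """Modeled activation (MA) map for one experiment: each reported focus
    becomes a 3D Gaussian activation probability; the voxelwise maximum keeps
    one experiment from dominating through many nearby foci."""
    gs = [peak * np.exp(-((grid - np.asarray(f)) ** 2).sum(-1) / (2 * sigma**2))
          for f in foci]
    return np.max(gs, axis=0)

# Hypothetical peak coordinates (voxel units) from three experiments;
# three peaks converge near (10, 10, 10), one peak is isolated at (15, 15, 15).
experiments = [[(10, 10, 10), (5, 5, 5)],
               [(10, 11, 10)],
               [(9, 10, 11), (15, 15, 15)]]

ma = np.stack([modeled_activation(f) for f in experiments])
# ALE score: probability that at least one experiment activates the voxel,
# so voxels where experiments converge score higher than isolated peaks.
ale = 1.0 - np.prod(1.0 - ma, axis=0)
```

Thresholding `ale` against a permutation-based null (foci scattered at random) is what yields the "consistent activations" reported in the abstract.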
Affiliation(s)
- Rie Asano: Systematic Musicology, Institute for Musicology, University of Cologne, Cologne, Germany
2. Zhan X, Lang J, Yang LZ, Li H. Modeling the association between functional connectivity and lateralization with the activity flow framework. Brain Res 2024; 1830:148831. PMID: 38412885; DOI: 10.1016/j.brainres.2024.148831.
Abstract
The human brain is both localized and distributed. On the one hand, each cognitive function tends to involve one hemisphere more than the other, a principle known as lateralization. On the other hand, interactions among brain regions in the form of functional connectivity (FC) are indispensable for intact function. Recent years have seen growing interest in the association between lateralization and FC. However, FC metrics vary from spurious correlations to causal associations. If lateralization reflects both local processing and causal network interactions, more causally valid FC metrics should predict the lateralization index (LI) better than FC based on simple correlations. The present study directly investigates this hypothesis within the activity flow framework by comparing the association between lateralization and four brain connectivity metrics: correlation-based FC, multiple-regression FC, partial-correlation FC, and combinedFC. We propose two modeling approaches: a one-step approach, which models the relationship between LI and FC directly, and a two-step approach, which predicts brain activation and then calculates the LI. Our results indicate that multiple-regression FC, partial-correlation FC, and combinedFC significantly improved model prediction compared to correlation-based FC, a result consistent across a spatial working memory task (typically right-lateralized) and a language task (typically left-lateralized). The one-step and two-step approaches yielded similar conclusions. In addition, the finding was replicated in a clinical sample of schizophrenia (SZ), bipolar disorder (BP), and attention-deficit/hyperactivity disorder (ADHD). The present study suggests that causal interactions among brain regions help shape the lateralization pattern.
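The FC metrics compared above differ mainly in whether they condition on the other regions. A minimal numpy sketch of the first three, on synthetic timeseries, shows the key difference (combinedFC, which merges partial-correlation and correlation evidence, is omitted; the data, sizes, and dependency structure are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 500, 6                      # timepoints, regions (illustrative sizes)
X = rng.standard_normal((T, n))
X[:, 1] += 0.5 * X[:, 0]           # inject a direct dependency 0 -> 1
X[:, 2] += 0.5 * X[:, 1]           # and 1 -> 2 (so 0-2 correlate only indirectly)

# 1) Correlation-based FC: pairwise Pearson correlations.
corr_fc = np.corrcoef(X, rowvar=False)

# 2) Multiple-regression FC: regress each region on all the others;
#    row j holds the betas predicting region j from the rest.
mreg_fc = np.zeros((n, n))
for j in range(n):
    others = np.delete(np.arange(n), j)
    beta, *_ = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)
    mreg_fc[j, others] = beta

# 3) Partial-correlation FC: from the precision (inverse covariance) matrix.
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
pcorr_fc = -P / np.outer(d, d)
np.fill_diagonal(pcorr_fc, 1.0)

# The indirect 0-2 link is sizeable under plain correlation but is
# suppressed (typically near zero) once region 1 is conditioned on.
print(corr_fc[0, 2], pcorr_fc[0, 2])
```

The study's hypothesis is exactly this contrast: metrics that remove such indirect links should carry more causally relevant information and hence predict the lateralization index better.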
Affiliation(s)
- Xue Zhan: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, PR China; University of Science and Technology of China, Hefei 230026, PR China
- Jinwei Lang: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, PR China; University of Science and Technology of China, Hefei 230026, PR China
- Li-Zhuang Yang: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, PR China; Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei 230031, PR China
- Hai Li: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, PR China; Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei 230031, PR China
3. Gerrits R. Variability in Hemispheric Functional Segregation Phenotypes: A Review and General Mechanistic Model. Neuropsychol Rev 2024; 34:27-40. PMID: 36576683; DOI: 10.1007/s11065-022-09575-y.
Abstract
Many functions of the human brain are organized asymmetrically and are subject to strong population biases. Some tasks, like speaking and making complex hand movements, exhibit left-hemispheric dominance, whereas others, such as spatial processing and recognizing faces, favor the right hemisphere. While this pattern of biases implies the existence of a stereotypical way of distributing functions between the hemispheres, an ever-increasing body of evidence indicates that not everyone follows this pattern of hemispheric functional segregation. On the contrary, the review conducted in this article shows that departures from the standard hemispheric division of labor are routinely observed and take many distinct forms, each with a different prevalence rate. One of the key challenges in human neuroscience is to model this variability. By integrating well-established and recently emerged ideas about the mechanisms underlying functional lateralization, the current article proposes a general mechanistic model that explains the observed distribution of segregation phenotypes and generates new testable hypotheses.
Affiliation(s)
- Robin Gerrits: Department of Experimental Psychology, Ghent University, Ghent, Belgium
4. Olson HA, Chen EM, Lydic KO, Saxe RR. Left-Hemisphere Cortical Language Regions Respond Equally to Observed Dialogue and Monologue. Neurobiol Lang 2023; 4:575-610. PMID: 38144236; PMCID: PMC10745132; DOI: 10.1162/nol_a_00123.
Abstract
Much of the language we encounter in everyday life comes in the form of conversation, yet most research on the neural basis of language comprehension has used input from only one speaker at a time. Here, twenty adults were scanned with functional magnetic resonance imaging while passively observing audiovisual conversations. In a block-design task, participants watched 20-second videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than to incomprehensible speech but did not respond differently to dialogue than to monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing, in which one puppet's speech was comprehensible while the other's was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory-of-mind regions and right-hemisphere homologues of language regions responded more to dialogue than to monologue in the first task, and in the second task, activity in some of these regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
5. Martin KC, Seydell-Greenwald A, Turkeltaub PE, Chambers CE, Giannetti M, Dromerick AW, Carpenter JL, Berl MM, Gaillard WD, Newport EL. One right can make a left: sentence processing in the right hemisphere after perinatal stroke. Cereb Cortex 2023; 33:11257-11268. PMID: 37859521; PMCID: PMC10690853; DOI: 10.1093/cercor/bhad362.
Abstract
When brain regions that are critical for a cognitive function in adulthood are irreversibly damaged at birth, what patterns of plasticity support the successful development of that function in an alternative location? Here we investigate the consistency of language organization in the right hemisphere (RH) after a left hemisphere (LH) perinatal stroke. We analyzed fMRI data collected during an auditory sentence comprehension task from 14 people with large cortical LH perinatal arterial ischemic strokes (LHPS participants) and 11 healthy sibling controls, using a "top voxel" approach that allowed us to compare the same number of active voxels across participants and, for controls, across hemispheres. We found that (1) LHPS participants consistently recruited RH areas that were a mirror image of typical LH areas, and (2) the RH areas recruited in LHPS participants aligned better with the strongly activated LH areas of the typically developed brains of control participants (when flipped images were compared) than with their weakly activated RH areas. Our findings suggest that the successful development of language processing in the RH after an LH perinatal stroke may depend in part on recruiting an arrangement of frontotemporal areas reflective of the typical dominant LH.
Affiliation(s)
- Kelly C Martin: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States
- Anna Seydell-Greenwald: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States; MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Peter E Turkeltaub: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States; MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Catherine E Chambers: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States; MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Margot Giannetti: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States; MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Alexander W Dromerick: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States; MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
- Jessica L Carpenter: Division of Pediatric Neurology, Departments of Pediatrics and Neurology, University of Maryland School of Medicine, Baltimore, MD 21201, United States
- Madison M Berl: Children's National Hospital and Center for Neuroscience, Washington, DC 20010, United States
- William D Gaillard: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States; Children's National Hospital and Center for Neuroscience, Washington, DC 20010, United States
- Elissa L Newport: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Georgetown University, Washington, DC 20057, United States; MedStar National Rehabilitation Hospital, Washington, DC 20010, United States
6. Seydell-Greenwald A, Wang X, Newport EL, Bi Y, Striem-Amit E. Spoken language processing activates the primary visual cortex. PLoS One 2023; 18:e0289671. PMID: 37566582; PMCID: PMC10420367; DOI: 10.1371/journal.pone.0289671.
Abstract
Primary visual cortex (V1) is generally thought of as a low-level sensory area that primarily processes basic visual features. Although there is evidence for multisensory effects on its activity, these are typically found for the processing of simple sounds and their properties, for example, spatially or temporally congruent simple sounds. However, in congenitally blind individuals, V1 is involved in language processing, with no evidence of major changes in anatomical connectivity that could explain this seemingly drastic functional change. This is at odds with current accounts of neural plasticity, which emphasize the role of connectivity and conserved function in determining a neural tissue's role even after atypical early experiences. To reconcile this apparently unprecedented functional reorganization with known limitations on plasticity, we tested whether V1's multisensory roles include responses to spoken language in sighted individuals. Using fMRI, we found that V1 in normally sighted individuals was indeed activated by comprehensible spoken sentences as compared to an incomprehensible reversed-speech control condition, and more strongly so in the left than in the right hemisphere. Activation in V1 for language was also significant and comparable for abstract and concrete words, suggesting it was not driven by visual imagery. Finally, this activation did not stem from increased attention to the auditory onset of words, nor was it correlated with attentional arousal ratings, making a general attention account unlikely. Together, these findings suggest that V1 responds to spoken language even in sighted individuals, reflecting the binding of multisensory high-level signals, potentially to predict visual input. This capability might be the basis for the strong V1 language activation observed in people born blind, reaffirming the notion that plasticity is guided by pre-existing connectivity and abilities in the typically developed brain.
Affiliation(s)
- Anna Seydell-Greenwald: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Xiaoying Wang: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Elissa L. Newport: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Yanchao Bi: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ella Striem-Amit: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America; Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
7. Ma Y, Yu K, Yin S, Li L, Li P, Wang R. Attention Modulates the Role of Speakers' Voice Identity and Linguistic Information in Spoken Word Processing: Evidence From Event-Related Potentials. J Speech Lang Hear Res 2023; 66:1678-1693. PMID: 37071787; DOI: 10.1044/2023_jslhr-22-00420.
Abstract
Purpose: The human voice usually carries two types of information: linguistic information and speaker-identity information. However, whether and how linguistic information interacts with identity information remains controversial. This study explored the processing of identity and linguistic information during spoken word processing, considering the modulating role of attention.
Method: We conducted two event-related potential (ERP) experiments. Different speakers (self, friend, and unfamiliar speakers) and emotional words (positive, negative, and neutral words) were used to manipulate identity and linguistic information. With this manipulation, Experiment 1 examined identity and linguistic information processing with a word decision task requiring participants' explicit attention to linguistic information. Experiment 2 further investigated the issue with a passive oddball paradigm requiring little attention to either the identity or the linguistic information.
Results: Experiment 1 revealed an interaction among speaker, word type, and hemisphere in N400 amplitudes but not in N100 or P200, suggesting that identity information interacted with linguistic information at a later stage of spoken word processing. The mismatch negativity results of Experiment 2 showed no significant interaction between speaker and word pair, indicating that identity and linguistic information were processed independently.
Conclusions: Identity information can interact with linguistic information during spoken word processing, but the interaction is modulated by task demands on attention. We propose an attention-modulated account of the mechanism underlying identity and linguistic information processing and discuss the implications of our findings in light of integration and independence theories.
Affiliation(s)
- Yunxiao Ma: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Keke Yu: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Shuqi Yin: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
- Li Li: The Key Laboratory of Chinese Learning and International Promotion, and College of International Culture, South China Normal University, Guangzhou, China
- Ping Li: Department of Chinese and Bilingual Studies, Faculty of Humanities, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Ruiming Wang: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, Ministry of Education, & Center for Studies of Psychological Application, School of Psychology, South China Normal University, Guangzhou, China
8. Gong B, Li N, Li Q, Yan X, Chen J, Li L, Wu X, Wu C. The Mandarin Chinese auditory emotions stimulus database: A validated set of Chinese pseudo-sentences. Behav Res Methods 2023; 55:1441-1459. PMID: 35641682; DOI: 10.3758/s13428-022-01868-7.
Abstract
Emotional prosody is fully embedded in language and can be influenced by the linguistic properties of a specific language. Given the limitations of existing Chinese auditory stimulus databases, we developed and validated a database of emotional auditory stimuli composed of Chinese pseudo-sentences recorded in Mandarin Chinese by six professional actors. Emotional expressions included happiness, sadness, anger, fear, disgust, pleasant surprise, and neutrality. All emotional categories were vocalized in two sentence patterns, declarative and interrogative, and all emotional pseudo-sentences except neutral ones were vocalized at two levels of emotional intensity: normal and strong. Each recording was validated with 40 native Chinese listeners in terms of recognition accuracy of the intended emotion portrayal; 4361 pseudo-sentence stimuli were ultimately included in the database. Validation of the database using a forced-choice recognition paradigm revealed high rates of emotion recognition accuracy. Detailed acoustic attributes of the vocalizations are provided and related to the emotion recognition rates. This corpus could be a valuable resource for researchers and clinicians exploring the behavioral and neural mechanisms of emotion processing in the general population and of emotional disturbances in neurological, psychiatric, and developmental disorders. The Mandarin Chinese auditory emotion stimulus database is available at the Open Science Framework (https://osf.io/sfbm6/?view_only=e22a521e2a7d44c6b3343e11b88f39e3).
Affiliation(s)
- Bingyan Gong: School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
- Na Li: Theatre Pedagogy Department, Central Academy of Drama, Beijing, 100710, China
- Qiuhong Li: School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
- Xinyuan Yan: School of Computing, University of Utah, Salt Lake City, UT, USA
- Jing Chen: Department of Machine Intelligence, Peking University, 5 Yiheyuan Road, Haidian District, Beijing, 100871, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Liang Li: School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- Xihong Wu: Department of Machine Intelligence, Peking University, 5 Yiheyuan Road, Haidian District, Beijing, 100871, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Chao Wu: School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
9. Karimi-Boroujeni M, Dajani HR, Giguère C. Perception of Prosody in Hearing-Impaired Individuals and Users of Hearing Assistive Devices: An Overview of Recent Advances. J Speech Lang Hear Res 2023; 66:775-789. PMID: 36652704; DOI: 10.1044/2022_jslhr-22-00125.
Abstract
Purpose: Prosody perception is an essential component of speech communication and social interaction, conveying both linguistic and emotional information. Given the importance of the auditory system in processing prosody-related acoustic features, this review article examines the effects of hearing impairment on prosody perception in children and adults and assesses how well hearing assistive devices restore prosodic perception.
Method: Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts to determine the effects of hearing loss, and of interacting factors such as age and cognitive resources, on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss.
Results: The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging evidence points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody.
Conclusions: The existing literature is incomplete in several respects, including the lack of consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions for future research toward a better understanding of prosody processing in people with hearing impairment, which may help health care professionals and designers of assistive technology develop innovative diagnostic and rehabilitation tools.
Supplemental Material: https://doi.org/10.23641/asha.21809772
Affiliation(s)
- Hilmi R Dajani: School of Electrical Engineering and Computer Science, University of Ottawa, Ontario, Canada
- Christian Giguère: School of Rehabilitation Sciences, University of Ottawa, Ontario, Canada
10. Viacheslav I, Vartanov A, Bueva A, Bronov O. The emotional component of inner speech: A pilot exploratory fMRI study. Brain Cogn 2023; 165:105939. PMID: 36549191; DOI: 10.1016/j.bandc.2022.105939.
Abstract
Inner speech is one of the most important human cognitive processes. Nevertheless, many aspects of inner speech, particularly its emotional characteristics, remain poorly understood. The main objectives of our study were to identify the neural substrate of the emotional (prosodic) dimension of inner speech and the brain structures that control the suppression of expression in inner speech. To this end, a pilot exploratory fMRI study was carried out with 33 participants. The subjects listened to pre-recorded phrases or individual words pronounced with different emotional connotations and then repeated them internally, either with the same emotion or with suppressed expression (neutrally). The results show that inner speech carries an emotional component that is encoded by structures similar to those involved in overt speech. The caudate nuclei were also shown to play a unique role in suppressing expression in inner speech.
Affiliation(s)
- Oleg Bronov: Federal State Budgetary Institution "National Medical and Surgical Center named after N.I. Pirogov", Russia
11. Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. PMID: 35296892; PMCID: PMC9890475; DOI: 10.1093/cercor/bhac095.
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
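The multivariate pattern analysis used above can be sketched in scikit-learn: a cross-validated classifier is trained to decode emotion category from trial-wise voxel patterns, and above-chance accuracy is taken as evidence that the region carries prosody information. Everything below (the synthetic data, region size, and classifier choice) is illustrative rather than the study's actual pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_emotions = 120, 50, 3    # illustrative sizes

# Synthetic voxel patterns: each emotion category has a weak mean pattern
# buried in trial-by-trial noise, mimicking an informative region.
labels = np.repeat(np.arange(n_emotions), n_trials // n_emotions)
prototypes = rng.standard_normal((n_emotions, n_voxels))
X = 0.8 * prototypes[labels] + rng.standard_normal((n_trials, n_voxels))

# Cross-validated decoding accuracy; chance is 1/3 for three categories.
acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
print(acc)   # well above 1/3 for this easy synthetic signal
```

In the study's analysis, per-participant accuracies of this kind (computed in STS subdivisions) were then correlated with standardized social communication scores.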
Affiliation(s)
- Simon Leipold: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon: Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA; Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA; Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
12. Newport EL, Seydell-Greenwald A, Landau B, Turkeltaub PE, Chambers CE, Martin KC, Rennert R, Giannetti M, Dromerick AW, Ichord RN, Carpenter JL, Berl MM, Gaillard WD. Language and developmental plasticity after perinatal stroke. Proc Natl Acad Sci U S A 2022; 119:e2207293119. PMID: 36215488; PMCID: PMC9586296; DOI: 10.1073/pnas.2207293119.
Abstract
The mature human brain is lateralized for language, with the left hemisphere (LH) primarily responsible for sentence processing and the right hemisphere (RH) primarily responsible for processing suprasegmental aspects of language such as vocal emotion. However, it has long been hypothesized that in early life there is plasticity for language, allowing young children to acquire language in other cortical regions when LH areas are damaged. If true, what are the constraints on functional reorganization? Which areas of the brain can acquire language, and what happens to the functions these regions ordinarily perform? We address these questions by examining long-term outcomes in adolescents and young adults who, as infants, suffered a perinatal arterial ischemic stroke to the LH areas ordinarily subserving sentence processing, and we compared them with their healthy age-matched siblings. All participants were tested on a battery of behavioral and functional imaging tasks. While stroke participants were impaired in some nonlinguistic cognitive abilities, their processing of sentences and of vocal emotion was normal and equal to that of their healthy siblings. In almost all stroke participants, both abilities had developed in the healthy RH. Our results provide insights into the remarkable ability of the young brain to reorganize language. Reorganization is highly constrained, with sentence processing almost always in the RH frontotemporal regions homotopic to its usual LH location in the healthy brain. This activation is somewhat segregated from RH emotion processing, suggesting that the two functions perform best when each has its own neural territory.
Affiliation(s)
- Elissa L. Newport (corresponding author): Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Anna Seydell-Greenwald: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Barbara Landau: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010; Johns Hopkins University, Baltimore, MD 21218
- Peter E. Turkeltaub: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Catherine E. Chambers: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Kelly C. Martin: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Rebecca Rennert: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Margot Giannetti: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Alexander W. Dromerick: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010
- Rebecca N. Ichord: Perelman School of Medicine at the University of Pennsylvania and Children's Hospital of Philadelphia, Philadelphia, PA 19104
- Madison M. Berl: Children's National Hospital and Center for Neuroscience, Washington, DC 20010
- William D. Gaillard: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC 20057; MedStar National Rehabilitation Hospital, Washington, DC 20010; Children's National Hospital and Center for Neuroscience, Washington, DC 20010
13. Lipkin B, Tuckute G, Affourtit J, Small H, Mineroff Z, Kean H, Jouravlev O, Rakocevic L, Pritchett B, Siegelman M, Hoeflin C, Pongos A, Blank IA, Struhl MK, Ivanova A, Shannon S, Sathe A, Hoffmann M, Nieto-Castañón A, Fedorenko E. Probabilistic atlas for the language network based on precision fMRI data from >800 individuals. Sci Data 2022; 9:529. PMID: 36038572; PMCID: PMC9424256; DOI: 10.1038/s41597-022-01645-3.
Abstract
Two analytic traditions characterize fMRI language research. One relies on averaging activations across individuals. This approach has limitations: because of inter-individual variability in the locations of language areas, any given voxel/vertex in a common brain space is part of the language network in some individuals but may belong to a distinct network in others. An alternative approach relies on identifying language areas in each individual using a functional 'localizer'. Because of its greater sensitivity, functional resolution, and interpretability, functional localization is gaining popularity, but it is not always feasible and cannot be applied retroactively to past studies. To bridge these disjoint approaches, we created a probabilistic functional atlas using fMRI data for an extensively validated language localizer in 806 individuals. This atlas enables estimating the probability that any given location in a common space belongs to the language network, and thus can help interpret group-level activation peaks and lesion locations, or select voxels/electrodes for analysis. More meaningful comparisons of findings across studies should increase robustness and replicability in language research.
Measurement(s): Brain activity measurement
Technology Type(s): fMRI
Sample Characteristic - Organism: Homo sapiens
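The atlas construction described above reduces, at its core, to a per-voxel proportion across individually localized networks. A minimal sketch (illustrative only; it assumes per-subject binary network masks already registered to a common space, and all names and data are invented for the example):

```python
import numpy as np

# Toy stand-in for 5 subjects' binary language-network masks, each
# defined over the same 10 voxels of a common space
# (1 = voxel is in that subject's individually localized network).
rng = np.random.default_rng(0)
subject_masks = rng.integers(0, 2, size=(5, 10))

# Probabilistic atlas: for each voxel, the fraction of subjects whose
# individual language network includes that location.
atlas = subject_masks.mean(axis=0)

# Interpreting a group-level peak, lesion, or electrode location then
# amounts to reading off its probability of falling in the network.
peak_voxel = 3
probability = atlas[peak_voxel]
```

A real atlas additionally involves smoothing, thresholding of individual activation maps, and surface- or volume-based registration, all omitted here.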
Affiliation(s)
- Benjamin Lipkin: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Greta Tuckute: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Josef Affourtit: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Hannah Small: Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Zachary Mineroff: Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Hope Kean: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Olessia Jouravlev: Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Lara Rakocevic: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Brianna Pritchett: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Caitlyn Hoeflin: Harris School of Public Policy, University of Chicago, Chicago, IL, USA
- Alvincé Pongos: Department of Bioengineering, University of California, Berkeley, CA, USA
- Idan A Blank: Department of Psychology, University of California, Los Angeles, CA, USA
- Melissa Kline Struhl: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Anna Ivanova: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Steven Shannon: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Aalok Sathe: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Malte Hoffmann: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Cambridge, MA, USA
- Alfonso Nieto-Castañón: McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, USA
- Evelina Fedorenko: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Speech, Hearing, Bioscience, and Technology, Harvard University, Cambridge, MA, USA
14. Ferrara K, Seydell-Greenwald A, Chambers CE, Newport EL, Landau B. Developmental changes in neural lateralization for visual-spatial function: Evidence from a line-bisection task. Dev Sci 2021; 25:e13217. PMID: 34913543; DOI: 10.1111/desc.13217.
Abstract
Studies of hemispheric specialization have traditionally cast the left hemisphere as specialized for language and the right hemisphere for spatial function. Much of the supporting evidence for this separation of function comes from studies of healthy adults and of those who have sustained lesions to the right or left hemisphere. However, we know little about the developmental origins of lateralization. Recent evidence suggests that the young brain represents language bilaterally, with 4- to 6-year-olds activating the left-hemisphere regions known to support language in adults as well as homotopic regions in the right hemisphere. This bilateral pattern changes over development, converging on left-hemispheric activation in late childhood. In the present study, we ask whether this same developmental trajectory is observed in a spatial task that is strongly right-lateralized in adults: the line bisection (or "Landmark") task. We examined fMRI activation in children ages 5-11 years as they judged which end of a bisected vertical line was longer. We found that young children showed bilateral activation, recruiting the same areas of the right hemisphere as shown in adults, as well as the homotopic regions in the left hemisphere. By age 10, activation was right-lateralized. This strongly resembles the developmental trajectory for language, moving from bilateral to lateralized activation. We discuss potential underlying mechanisms and suggest that understanding the development of lateralization for a range of cognitive functions can play a crucial role in understanding general principles of how and why the brain comes to lateralize certain functions.
Affiliation(s)
- Katrina Ferrara: Center for Brain Plasticity and Recovery, Georgetown University, Washington, District of Columbia, USA; Intellectual and Developmental Disabilities Research Center, Children's National Health System, Washington, District of Columbia, USA
- Anna Seydell-Greenwald: Center for Brain Plasticity and Recovery, Georgetown University, Washington, District of Columbia, USA
- Catherine E Chambers: Center for Brain Plasticity and Recovery, Georgetown University, Washington, District of Columbia, USA
- Elissa L Newport: Center for Brain Plasticity and Recovery, Georgetown University, Washington, District of Columbia, USA
- Barbara Landau: Center for Brain Plasticity and Recovery, Georgetown University, Washington, District of Columbia, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, USA
15. Valeriani D, Simonyan K. The dynamic connectome of speech control. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200256. PMID: 34482717; DOI: 10.1098/rstb.2020.0256.
Abstract
Speech production relies on the orchestrated control of multiple brain regions, but the specific directional influences within these networks remain poorly understood. We used regression dynamic causal modelling to infer whole-brain directed (effective) connectivity from functional magnetic resonance imaging data of 36 healthy individuals during the production of meaningful English sentences and meaningless syllables. We found that the two dynamic connectomes have distinct architectures that depend on the complexity of the production task. Sentence production was regulated by a dynamic neural network whose most influential nodes were centred around superior and inferior parietal areas and influenced whole-brain network activity via long-ranging coupling with primary sensorimotor, prefrontal, temporal and insular regions. By contrast, syllable production was controlled by a more compressed, cost-efficient network structure, involving sensorimotor cortico-subcortical integration via superior parietal and cerebellar network hubs. These data demonstrate the mechanisms by which the neural network reorganizes the connectivity of its influential regions, from supporting the fundamental aspects of simple syllabic vocal motor output to the multimodal information processing of speech motor output. This article is part of the theme issue 'Vocal learning in animals and humans'.
Affiliation(s)
- Davide Valeriani: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, 243 Charles Street, Boston, MA 02114, USA
- Kristina Simonyan: Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, 243 Charles Street, Boston, MA 02114, USA; Department of Neurology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
16. Sihvonen AJ, Sammler D, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, Särkämö T. Right ventral stream damage underlies both poststroke aprosodia and amusia. Eur J Neurol 2021; 29:873-882. PMID: 34661326; DOI: 10.1111/ene.15148.
Abstract
BACKGROUND AND PURPOSE This study was undertaken to determine and compare the lesion patterns and structural dysconnectivity underlying poststroke aprosodia and amusia, using a data-driven multimodal neuroimaging approach. METHODS Thirty-nine patients with right or left hemisphere stroke were enrolled in a cohort study and tested for linguistic and affective prosody perception and musical pitch and rhythm perception at the subacute and 3-month poststroke stages. Participants listened to words spoken with different prosodic stress that changed their meaning, and to words spoken with six different emotions, and chose which meaning or emotion was expressed. In the music tasks, participants judged pairs of short melodies as the same or different in terms of pitch or rhythm. Structural magnetic resonance imaging data were acquired at both stages, and machine learning-based lesion-symptom mapping and deterministic tractography were used to identify the lesion patterns and damaged white matter pathways giving rise to aprosodia and amusia. RESULTS Aprosodia and amusia were strongly correlated behaviorally and were associated with similar lesion patterns in right frontoinsular and striatal areas. In multiple regression models, reduced fractional anisotropy and lower tract volume of the right inferior fronto-occipital fasciculus were the strongest predictors of both disorders over time. CONCLUSIONS These results highlight a common origin of aprosodia and amusia, both arising from damage and disconnection of the right ventral auditory stream, which integrates rhythmic-melodic acoustic information in prosody and music. Comorbidity of these disabilities may worsen the prognosis and affect rehabilitation success.
Affiliation(s)
- Aleksi J Sihvonen: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre for Clinical Research, University of Queensland, Brisbane, Queensland, Australia
- Daniela Sammler: Research Group "Neurocognition of Music and Language", Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Pablo Ripollés: Department of Psychology, New York University, New York, New York, USA
- Vera Leo: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Antoni Rodríguez-Fornells: Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Spain; Department of Cognition, Development, and Education Psychology, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, Barcelona, Spain
- Seppo Soinila: Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
17. Brodrick BB, Adler-Neal AL, Palka JM, Mishra V, Aslan S, McAdams CJ. Structural brain differences in recovering and weight-recovered adult outpatient women with anorexia nervosa. J Eat Disord 2021; 9:108. PMID: 34479625; PMCID: PMC8414694; DOI: 10.1186/s40337-021-00466-w.
Abstract
BACKGROUND Anorexia nervosa is a complex psychiatric illness that combines severely low body weight with cognitive distortions and altered eating behaviors. Brain structures, including cortical thicknesses in many regions, are reduced in underweight patients who are acutely ill with anorexia nervosa. However, few studies have examined adult outpatients in the process of recovering from anorexia nervosa. Evaluating neurobiological problems at different physiological stages of anorexia nervosa may facilitate our understanding of the recovery process. METHODS Magnetic resonance imaging (MRI) images from 37 partially weight-restored women with anorexia nervosa (pwAN), 32 women with a history of anorexia nervosa maintaining weight restoration (wrAN), and 41 healthy control (HC) women were analyzed using FreeSurfer. Group differences in brain structure, including cortical thickness, areas, and volumes, were compared using a series of factorial F-tests, including age as a covariate and correcting for multiple comparisons with the False Discovery Rate method. RESULTS The pwAN and wrAN cohorts differed from each other in body mass index, eating disorder symptoms, and social problem-solving orientations, but not in depression or self-esteem. Relative to the HC cohort, eight cortical regions were thinner in the pwAN cohort; these regions were predominantly right-sided, in the cingulate and frontal lobe. One of these regions, the right pars orbitalis, was also thinner in the wrAN cohort. One region, the right parahippocampal gyrus, was thicker in the pwAN cohort, and one volume, the right cerebellar white matter, was reduced in the pwAN cohort. There were no differences in global white matter, gray matter, or subcortical volumes across the cohorts. CONCLUSIONS Many regional structural differences were observed in the pwAN cohort, with minimal differences in the wrAN cohort. These data support a treatment focus on achieving and sustaining full weight restoration to mitigate possible neurobiological sequelae of AN. In addition, the regions showing cortical thinning are similar to structural changes reported elsewhere for suicide attempts, anxiety disorders, and autism spectrum disorder. Understanding how brain structure and function relate to clinical symptoms expressed during recovery from AN is needed.
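The False Discovery Rate correction mentioned above usually refers to the Benjamini-Hochberg step-up procedure. A generic numpy sketch (not the authors' actual FreeSurfer pipeline; the p-values below are invented to show the mechanics):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha,
    # then reject the k smallest p-values.
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Toy p-values, e.g., from several regional cortical-thickness tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
significant = benjamini_hochberg(pvals, alpha=0.05)
```

Unlike a Bonferroni correction, this controls the expected proportion of false positives among the rejected tests rather than the probability of any false positive.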
Affiliation(s)
- Brooks B Brodrick: Department of Psychiatry, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Suite BL6.110, Dallas, TX, 75390-9070, USA; Department of Internal Medicine, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX, 75390-9070, USA
- Adrienne L Adler-Neal: Department of Psychiatry, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Suite BL6.110, Dallas, TX, 75390-9070, USA
- Jayme M Palka: Department of Psychiatry, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Suite BL6.110, Dallas, TX, 75390-9070, USA
- Sina Aslan: Department of Psychiatry, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Suite BL6.110, Dallas, TX, 75390-9070, USA; Advance MRI LLC, Frisco, TX, 75034, USA
- Carrie J McAdams: Department of Psychiatry, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Suite BL6.110, Dallas, TX, 75390-9070, USA
18. Durfee AZ, Sheppard SM, Blake ML, Hillis AE. Lesion loci of impaired affective prosody: A systematic review of evidence from stroke. Brain Cogn 2021; 152:105759. PMID: 34118500; PMCID: PMC8324538; DOI: 10.1016/j.bandc.2021.105759.
Abstract
Affective prosody, or the changes in rate, rhythm, pitch, and loudness that convey emotion, has long been implicated as a function of the right hemisphere (RH), yet there is a dearth of literature identifying the specific neural regions associated with its processing. The current systematic review aimed to evaluate the evidence on affective prosody localization in the RH. One hundred and ninety articles from 1970 to February 2020 investigating affective prosody comprehension and production in patients with focal brain damage were identified via database searches. Eleven articles met inclusion criteria, passed quality reviews, and were analyzed for affective prosody localization. Acute, subacute, and chronic lesions demonstrated similar profile characteristics. Localized right antero-superior (i.e., dorsal stream) regions contributed to affective prosody production impairments, whereas damage to more postero-lateral (i.e., ventral stream) regions resulted in affective prosody comprehension deficits. This review provides support that distinct RH regions are vital for affective prosody comprehension and production, aligning with literature reporting RH activation for affective prosody processing in healthy adults as well. The impact of study design on resulting interpretations is discussed.
Affiliation(s)
- Alexandra Zezinka Durfee: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Shannon M Sheppard: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Communication Sciences and Disorders, Chapman University Crean College of Health and Behavioral Sciences, Irvine, CA 92618, United States
- Margaret L Blake: Department of Communication Sciences and Disorders, University of Houston College of Liberal Arts and Social Sciences, Houston, TX 77204, United States
- Argye E Hillis: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, United States
19. Gergely A, Tóth K, Faragó T, Topál J. Is it all about the pitch? Acoustic determinants of dog-directed speech preference in domestic dogs, Canis familiaris. Anim Behav 2021. DOI: 10.1016/j.anbehav.2021.04.008.
20. Sheppard SM, Meier EL, Zezinka Durfee A, Walker A, Shea J, Hillis AE. Characterizing subtypes and neural correlates of receptive aprosodia in acute right hemisphere stroke. Cortex 2021; 141:36-54. PMID: 34029857; DOI: 10.1016/j.cortex.2021.04.003.
Abstract
INTRODUCTION Speakers naturally produce prosodic variations depending on their emotional state. Receptive prosody has several processing stages. We aimed to conduct lesion-symptom mapping to determine whether damage (core infarct or hypoperfusion) to specific brain areas was associated with receptive aprosodia or with impairment at different processing stages in individuals with acute right hemisphere stroke. We also aimed to determine whether different subtypes of receptive aprosodia exist that are characterized by distinctive behavioral performance patterns. METHODS Twenty patients with receptive aprosodia following right hemisphere ischemic stroke were enrolled within five days of stroke; clinical imaging was acquired. Participants completed tests of receptive emotional prosody and tests of each stage of prosodic processing (Stage 1: acoustic analysis; Stage 2: analyzing abstract representations of acoustic characteristics that convey emotion; Stage 3: semantic processing). Emotional facial recognition was also assessed. LASSO regression was used to identify predictors of performance on each behavioral task. Predictors entered into each model included 14 right hemisphere regions, hypoperfusion in four vascular territories as measured using FLAIR hyperintense vessel ratings, lesion volume, age, and education. A k-medoid cluster analysis was used to identify different subtypes of receptive aprosodia based on performance on the behavioral tasks. RESULTS Impaired receptive emotional prosody and impaired emotional facial expression recognition were both predicted by greater percent damage to the caudate. The k-medoid cluster analysis identified three different subtypes of aprosodia. One group was primarily impaired on Stage 1 processing and primarily had frontotemporal lesions. The second group had a domain-general emotion recognition impairment and maximal lesion overlap in subcortical areas. Finally, the third group was characterized by a Stage 2 processing deficit and had lesion overlap in posterior regions. CONCLUSIONS Subcortical structures, particularly the caudate, play an important role in emotional prosody comprehension. Receptive aprosodia can result from impairments at different processing stages.
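K-medoid clustering of the kind used to derive these subtypes can be illustrated with a minimal alternating scheme (an illustrative sketch only, not the authors' code; a real analysis would use a dedicated PAM implementation on the study's behavioral measures, and the toy data here are invented):

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Minimal alternating k-medoids on Euclidean distances.

    Returns (medoid_indices, labels). Unlike k-means, cluster centers
    are constrained to be actual data points (medoids), which makes
    the result robust to outliers and easy to interpret as prototypes.
    """
    rng = np.random.default_rng(seed)
    # Pairwise Euclidean distance matrix between all points.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest medoid.
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                # Update step: the new medoid minimizes total
                # within-cluster distance.
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# Toy behavioral profiles: two well-separated groups of "patients".
X = np.array([[0.0, 0.1], [0.1, 0.0], [0.2, 0.1],
              [5.0, 5.1], [5.1, 5.0], [5.2, 4.9]])
medoids, labels = k_medoids(X, k=2)
```

Each resulting medoid is a real patient whose behavioral profile typifies the subtype.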
Affiliation(s)
- Shannon M Sheppard: Department of Communication Sciences & Disorders, Chapman University, Irvine, CA, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Erin L Meier: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Alex Walker: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jennifer Shea: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Argye E Hillis: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
21. O'Connell K, Marsh AA, Edwards DF, Dromerick AW, Seydell-Greenwald A. Emotion recognition impairments and social well-being following right-hemisphere stroke. Neuropsychol Rehabil 2021; 32:1337-1355. PMID: 33615994; PMCID: PMC8379297; DOI: 10.1080/09602011.2021.1888756.
Abstract
Accurately recognizing and responding to the emotions of others is essential for social communication and helps build the strong relationships that are particularly important for stroke survivors. Emotion recognition typically engages cortical areas that are predominantly right-lateralized, including the superior temporal and inferior frontal gyri, regions frequently impacted by right-hemisphere stroke. Since prior work already links right-hemisphere stroke to deficits in emotion recognition, this research aims to extend these findings by determining whether impaired emotion recognition after right-hemisphere stroke is associated with worse social well-being outcomes. Eighteen right-hemisphere stroke patients (≥6 months post-stroke) and 21 neurologically healthy controls completed a multimodal emotion recognition test (Geneva Emotion Recognition Test - Short) and reported their engagement in social and non-social activities and their levels of social support. Right-hemisphere stroke was associated with worse emotion recognition accuracy, though not all patients exhibited impairment. In line with hypotheses, emotion recognition impairments were associated with greater loss of social activities after stroke, an effect that could not be attributed to stroke severity or loss of non-social activities. Impairments were also linked to reduced patient-reported social support. These results implicate emotion recognition difficulties as a potential antecedent of social withdrawal after stroke and warrant future research testing emotion recognition training post-stroke.
Affiliation(s)
- Katherine O'Connell: Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, USA
- Abigail A Marsh: Department of Psychology, Georgetown University, Washington, DC, USA
- Dorothy Farrar Edwards: Department of Kinesiology and Medicine, University of Wisconsin-Madison, Madison, WI, USA
- Alexander W Dromerick: MedStar National Rehabilitation Hospital, Washington, DC, USA; Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
- Anna Seydell-Greenwald: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
|
22
|
Weed E, Fusaroli R. Acoustic Measures of Prosody in Right-Hemisphere Damage: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2020; 63:1762-1775. [PMID: 32432947 DOI: 10.1044/2020_jslhr-19-00241] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose The aim of the study was to use systematic review and meta-analysis to quantitatively assess the currently available acoustic evidence for prosodic production impairments resulting from right-hemisphere damage (RHD), as well as to develop methodological recommendations for future studies. Method We systematically reviewed papers reporting acoustic features of prosodic production in RHD in order to identify shortcomings in the literature and make recommendations for future studies. We extracted standardized mean differences from 16 papers and estimated aggregated effect sizes using hierarchical Bayesian regression models. Results Speakers with RHD did show reduced fundamental frequency variation, but this trait was shared with left-hemisphere damage. RHD was also associated with increased pause duration. No meta-analytic evidence for an effect of prosody type (emotional vs. linguistic) was found. Conclusions Taken together, the currently available acoustic data show only a weak specific effect of RHD on prosody production. However, the results are not definitive, as more reliable analyses are hindered by small sample sizes, lack of detail on lesion location, and divergent measurement techniques. To overcome these issues, we recommend cumulative science practices (e.g., open data and code sharing), more nuanced speech signal processing techniques, and the integration of acoustic measures with perceptual judgments to more effectively investigate prosody in RHD.
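The meta-analytic procedure this abstract describes — extracting a standardized mean difference (SMD) per study and aggregating across studies — can be sketched in simplified form. Note the assumptions: the paper used hierarchical Bayesian regression models, whereas the sketch below substitutes the classical DerSimonian-Laird random-effects estimator purely to illustrate the pooling idea; all function names and numbers are hypothetical, not taken from the study.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) and its sampling variance.
    Illustrative formulas only; not the paper's Bayesian model."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Small-sample bias correction (Hedges' J)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    g = j * d
    # Approximate sampling variance of d, corrected for g
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return g, j**2 * var_d

def random_effects_pool(effects):
    """DerSimonian-Laird random-effects pooled estimate.
    effects: list of (g, variance) pairs, one per study."""
    w = [1 / v for _, v in effects]
    fixed = sum(wi * g for (g, _), wi in zip(effects, w)) / sum(w)
    # Cochran's Q statistic measures between-study heterogeneity
    q = sum(wi * (g - fixed)**2 for (g, _), wi in zip(effects, w))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    # Re-weight each study by total (within + between) variance
    w_re = [1 / (v + tau2) for _, v in effects]
    pooled = sum(wi * g for (g, _), wi in zip(effects, w_re)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se
```

A hierarchical Bayesian model, as used in the paper, would instead place priors on the study-level effects and the between-study variance and estimate them jointly, but the inverse-variance weighting logic above conveys the same core intuition: studies with less sampling error contribute more to the pooled SMD.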
Affiliation(s)
- Ethan Weed, School of Communication and Culture, Aarhus University, Denmark
|