1
Chen Y, Wang T, Ding H. Effect of Age and Gender on Categorical Perception of Vocal Emotion Under Tonal Language Background. J Speech Lang Hear Res 2024:1-17. [PMID: 39418571] [DOI: 10.1044/2024_jslhr-23-00716]
Abstract
PURPOSE Categorical perception (CP) manifests in various aspects of human cognition. While there is mounting evidence for CP in facial emotions, CP in vocal emotions remains understudied. The current study tested whether individuals with a tonal language background perceive vocal emotions categorically and examined how factors such as gender and age influence the plasticity of these perceptual categories. METHOD This study examined the identification and discrimination performance of 24 Mandarin-speaking children (14 boys and 10 girls) and 32 adults (16 males and 16 females) who were presented with three vocal emotion continua. Speech stimuli in each continuum consisted of 11 resynthesized Mandarin disyllabic words. RESULTS CP phenomena were detected when Mandarin participants perceived vocal emotions. We further found modulating effects of age and gender on vocal emotion categorization. CONCLUSIONS Our results demonstrate for the first time that Mandarin speakers use a categorical strategy when perceiving vocal emotions. Furthermore, our findings reveal that the ability to categorize vocal emotions follows a prolonged course of development and that maturation patterns differ across genders. This study opens a promising line of research for investigating how sensory features are mapped to higher-order perception and has implications for our understanding of clinical populations characterized by altered emotional processing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.27204057.
Affiliation(s)
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Ting Wang
- School of Foreign Languages, Tongji University, Shanghai, China
- Center for Speech and Language Processing, Tongji University, Shanghai, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
2
Morningstar M, Billetdeaux KA, Mattson WI, Gilbert AC, Nelson EE, Hoskinson KR. Neural response to vocal emotional intensity in youth. Cogn Affect Behav Neurosci 2024. [PMID: 39300012] [DOI: 10.3758/s13415-024-01224-6]
Abstract
Previous research has identified regions of the brain that are sensitive to emotional intensity in faces, with some evidence for developmental differences in this pattern of response. However, comparable understanding of how the brain tracks linear variations in emotional prosody is limited, especially in youth samples. The current study used novel stimuli (morphing emotional prosody from neutral to anger/happiness in linear increments) to investigate whether neural response to vocal emotion was parametrically modulated by emotional intensity and whether there were age-related changes in this effect. Participants aged 8-21 years (n = 56, 52% female) completed a vocal emotion recognition task, in which they identified the intended emotion in morphed recordings of vocal prosody, while undergoing functional magnetic resonance imaging. Parametric analyses of whole-brain response to morphed stimuli found that activation in the bilateral superior temporal gyrus (STG) scaled to emotional intensity in angry (but not happy) voices. Multivariate region-of-interest analyses revealed the same pattern in the right amygdala. Sensitivity to emotional intensity did not vary by participants' age. These findings provide evidence for the linear parameterization of emotional intensity in angry vocal prosody within the bilateral STG and right amygdala. Although findings should be replicated, the current results also suggest that this pattern of neural sensitivity may not be subject to strong developmental influences.
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3L3, Canada.
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada.
- K A Billetdeaux
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- W I Mattson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- A C Gilbert
- School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- Centre for Research on Brain, Language, and Music, Montreal, Canada
- E E Nelson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
- K R Hoskinson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
3
Morningstar M, Hughes C, French RC, Grannis C, Mattson WI, Nelson EE. Functional connectivity during facial and vocal emotion recognition: Preliminary evidence for dissociations in developmental change by nonverbal modality. Neuropsychologia 2024; 202:108946. [PMID: 38945440] [DOI: 10.1016/j.neuropsychologia.2024.108946]
Abstract
The developmental trajectory of emotion recognition (ER) skills is thought to vary by nonverbal modality, with vocal ER maturing later than facial ER. To investigate potential neural mechanisms contributing to this behavioural dissociation, the current study examined whether youth's neural functional connectivity during vocal and facial ER tasks showed differential developmental change across time. Youth ages 8-19 (n = 41) completed facial and vocal ER tasks while undergoing functional magnetic resonance imaging at two timepoints (1 year apart; n = 36 for behavioural data, n = 28 for neural data). Partial least squares analyses revealed that functional connectivity during ER is distinguishable both by modality (with different patterns of connectivity for facial vs. vocal ER) and across time, with changes in connectivity being particularly pronounced for vocal ER. ER accuracy was greater for faces than voices and was positively associated with age; although task performance did not change appreciably across the 1-year period, changes in latent functional connectivity patterns across time predicted participants' ER accuracy at Time 2. Taken together, these results suggest that vocal and facial ER are supported by distinguishable neural correlates that may undergo different developmental trajectories. Our findings also provide preliminary evidence that changes in network integration may support the development of ER skills in childhood and adolescence.
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, Canada; Centre for Neuroscience Studies, Queen's University, Canada.
- C Hughes
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Canada
- R C French
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA
- C Grannis
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- W I Mattson
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- E E Nelson
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, Ohio State University Wexner College of Medicine, Columbus, OH, USA
4
Pasquinelli R, Tessier AM, Karas Z, Hu X, Kovelman I. The Development of Left Hemisphere Lateralization for Sentence-Level Prosodic Processing. J Speech Lang Hear Res 2023; 66:1365-1377. [PMID: 36944046] [PMCID: PMC10187959] [DOI: 10.1044/2022_jslhr-22-00103]
Abstract
PURPOSE The fine-tuning of linguistic prosody in later childhood is poorly understood, and its neurological processing is even less well studied. In particular, it is unknown whether grammatical processing of prosody is left- or right-lateralized in childhood versus adulthood and how phonological working memory might modulate such lateralization. Furthermore, it is virtually unknown how prosody develops neurologically among children with cochlear implants (CIs). METHOD Normal-hearing (NH) children ages 6-12 years and NH adults ages 18-28 years completed a functional near-infrared spectroscopy neuroimaging task, during which they heard sentence pairs and judged whether the sentences did or did not differ in their overall prosody (declarative, question, with or without narrow focus). Children also completed standard measures of expressive and receptive language. RESULTS Age group differences emerged: children exhibited stronger bilateral temporoparietal activity but reduced left frontal activation. Furthermore, children's performance on a nonword repetition test was significantly associated with activation in the left inferior frontal gyrus, an area that was generally more activated in adults than in children. CONCLUSIONS The prosody-related findings are generally consistent with prior neurodevelopmental work on sentence comprehension, especially studies involving syntax and semantics, which have also noted a developmental shift from bilateral temporal to left inferior frontal regions typically associated with increased sensitivity to sentence structure. The findings thus inform theoretical perspectives on brain and language development and have implications for studying the effects of CIs on neurodevelopmental processes for sentence prosody. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.22255996.
Affiliation(s)
- Rennie Pasquinelli
- Department of Psychology, University of Michigan, Ann Arbor
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD
- Zachary Karas
- Department of Psychology, University of Michigan, Ann Arbor
- Xiaosu Hu
- Department of Psychology, University of Michigan, Ann Arbor
5
Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. [PMID: 35296892] [PMCID: PMC9890475] [DOI: 10.1093/cercor/bhac095]
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
6
Morningstar M, Grannis C, Mattson WI, Nelson EE. Functional patterns of neural activation during vocal emotion recognition in youth with and without refractory epilepsy. Neuroimage Clin 2022; 34:102966. [PMID: 35182929] [PMCID: PMC8859003] [DOI: 10.1016/j.nicl.2022.102966]
Abstract
Epilepsy has been associated with deficits in the social cognitive ability to decode others' nonverbal cues to infer their emotional intent (emotion recognition). Studies have begun to identify potential neural correlates of these deficits, but have focused primarily on one type of nonverbal cue (facial expressions) to the detriment of other crucial social signals that inform the tenor of social interactions (e.g., tone of voice). Less is known about how individuals with epilepsy process these forms of social stimuli, with a particular gap in knowledge about representation of vocal cues in the developing brain. The current study compared vocal emotion recognition skills and functional patterns of neural activation to emotional voices in youth with and without refractory focal epilepsy. We made novel use of inter-subject pattern analysis to determine brain areas in which activation to emotional voices was predictive of epilepsy status. Results indicated that youth with epilepsy were comparatively less able to infer emotional intent in vocal expressions than their typically developing peers. Activation to vocal emotional expressions in regions of the mentalizing and/or default mode network (e.g., right temporo-parietal junction, right hippocampus, right medial prefrontal cortex, among others) differentiated youth with and without epilepsy. These results are consistent with emerging evidence that pediatric epilepsy is associated with altered function in neural networks subserving social cognitive abilities. Our results contribute to ongoing efforts to understand the neural markers of social cognitive deficits in pediatric epilepsy, in order to better tailor and funnel interventions to this group of youth at risk for poor social outcomes.
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, Kingston, ON, Canada; Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States.
- C Grannis
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
- W I Mattson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States
- E E Nelson
- Center for Biobehavioral Health, The Research Institute at Nationwide Children's Hospital, Columbus, OH, United States; Department of Pediatrics, The Ohio State University College of Medicine, Columbus, OH, United States
7
Morningstar M, Mattson WI, Nelson EE. Longitudinal Change in Neural Response to Vocal Emotion in Adolescence. Soc Cogn Affect Neurosci 2022; 17:890-903. [PMID: 35323933] [PMCID: PMC9527472] [DOI: 10.1093/scan/nsac021]
Abstract
Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth’s neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants’ age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.
Affiliation(s)
- Michele Morningstar
- Correspondence should be addressed to Michele Morningstar, Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON K7L 3L3, Canada.
- Whitney I Mattson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Eric E Nelson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH 43205, USA
8
Morningstar M, Mattson WI, Singer S, Venticinque JS, Nelson EE. Children and adolescents' neural response to emotional faces and voices: Age-related changes in common regions of activation. Soc Neurosci 2020; 15:613-629. [PMID: 33017278] [DOI: 10.1080/17470919.2020.1832572]
Abstract
The perception of facial and vocal emotional expressions engages overlapping regions of the brain. However, at a behavioral level, the ability to recognize the intended emotion in both types of nonverbal cues follows a divergent developmental trajectory throughout childhood and adolescence. The current study a) identified regions of common neural activation to facial and vocal stimuli in 8- to 19-year-old typically-developing adolescents, and b) examined age-related changes in blood-oxygen-level dependent (BOLD) response within these areas. Both modalities elicited activation in an overlapping network of subcortical regions (insula, thalamus, dorsal striatum), visual-motor association areas, prefrontal regions (inferior frontal cortex, dorsomedial prefrontal cortex), and the right superior temporal gyrus. Within these regions, increased age was associated with greater frontal activation to voices, but not faces. Results suggest that processing facial and vocal stimuli elicits activation in common areas of the brain in adolescents, but that age-related changes in response within these regions may vary by modality.
Affiliation(s)
- M Morningstar
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, The Ohio State University, Columbus, OH, USA; Department of Psychology, Queen's University, Kingston, ON, Canada
- W I Mattson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- S Singer
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- J S Venticinque
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- E E Nelson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, The Ohio State University, Columbus, OH, USA