1
Hashimoto RI, Okada R, Aoki R, Nakamura M, Ohta H, Itahashi T. Functional alterations of lateral temporal cortex for processing voice prosody in adults with autism spectrum disorder. Cereb Cortex 2024; 34:bhae363. PMID: 39270675. DOI: 10.1093/cercor/bhae363.
Abstract
The human auditory system includes discrete cortical patches and selective regions for processing voice information, including emotional prosody. Although behavioral evidence indicates that individuals with autism spectrum disorder (ASD) have difficulty recognizing emotional prosody, it remains unclear whether and how localized voice patches (VPs) and other voice-sensitive regions are functionally altered when processing prosody. This fMRI study investigated neural responses to prosodic voices in 25 adult males with ASD and 33 controls, using voices of anger, sadness, and happiness with varying degrees of emotion. We used a functional region-of-interest analysis with an independent voice localizer to identify multiple VPs from the combined ASD and control data. We observed a general response reduction to prosodic voices in two specific VPs: the left posterior temporal VP (TVP) and the right middle TVP. Reduced cortical responses in the right middle TVP were consistently correlated with the severity of autistic symptoms for all examined emotional prosodies. Moreover, representational similarity analysis revealed a reduced effect of emotional intensity on multivoxel activation patterns in the left anterior superior temporal cortex for sad prosody only. These results indicate reduced response magnitudes to voice prosodies in specific TVPs and altered emotion intensity-dependent multivoxel activation patterns in adults with ASD, potentially underlying their socio-communicative difficulties.
Affiliation(s)
- Ryu-Ichiro Hashimoto
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan
- Rieko Okada
- Faculty of Intercultural Japanese Studies, Otemae University, 6-42 Ochayasho-cho, Nishinomiya-shi, Hyogo 662-8552, Japan
- Ryuta Aoki
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, 54 Shogoin-Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Motoaki Nakamura
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Haruhisa Ohta
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Takashi Itahashi
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
2
Ong JH, Zhao C, Bacon A, Leung FYN, Veic A, Wang L, Jiang C, Liu F. The Relationship Between Autism and Pitch Perception is Modulated by Cognitive Abilities. J Autism Dev Disord 2024; 54:3400-3411. PMID: 37642868. PMCID: PMC11362365. DOI: 10.1007/s10803-023-06075-7.
Abstract
Previous studies have reported mixed findings on autistic individuals' pitch perception relative to that of neurotypical (NT) individuals. We investigated whether this may be partly due to individual differences in cognitive abilities by comparing performance on various pitch perception tasks in a large sample (n = 164) of autistic and NT children and adults. Our findings revealed that: (i) autistic individuals showed either similar or worse performance than NT individuals on the pitch tasks; (ii) cognitive abilities were associated with performance on some pitch tasks; and (iii) cognitive abilities modulated the relationship between autism diagnosis and pitch perception on some tasks. Our findings highlight the importance of taking an individual-differences approach to understanding the strengths and weaknesses of pitch processing in autism.
Affiliation(s)
- Jia Hoong Ong
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Department of Psychology, School of Social Sciences, Nottingham Trent University, Nottingham, UK
- Chen Zhao
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Alex Bacon
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Anamarija Veic
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Li Wang
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
3
Liu M, Teng X, Jiang J. Instrumental music training relates to intensity assessment but not emotional prosody recognition in Mandarin. PLoS One 2024; 19:e0309432. PMID: 39213300. PMCID: PMC11364251. DOI: 10.1371/journal.pone.0309432.
Abstract
Building on research demonstrating the benefits of music training for emotional prosody recognition in nontonal languages, this study delves into its unexplored influence on tonal languages. In tonal languages, the acoustic similarity between lexical tones and music, along with the dual role of pitch in conveying lexical and affective meanings, creates a unique interplay. We evaluated 72 participants, half of whom had extensive instrumental music training, with the other half serving as demographically matched controls. All participants completed an online test consisting of 210 Chinese pseudosentences, each designed to express one of five emotions: happiness, sadness, fear, anger, or neutrality. Our statistical analyses, which included effect size estimates and Bayes factors, revealed that the music and nonmusic groups exhibited similar abilities in identifying the emotional prosody of the various emotions. However, the music group gave higher intensity ratings to emotional prosodies of happiness, fear, and anger than the nonmusic group did. These findings suggest that while instrumental music training is not related to emotional prosody recognition, it does appear to be related to perceived emotional intensity. This dissociation between emotion recognition and intensity evaluation adds a new piece to the puzzle of the complex relationship between music training and emotion perception in tonal languages.
Affiliation(s)
- Mengting Liu
- Department of Art, Harbin Conservatory of Music, Harbin, China
- Xiangbin Teng
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Jun Jiang
- Music College, Shanghai Normal University, Shanghai, China
4
Day TC, Malik I, Boateng S, Hauschild KM, Lerner MD. Vocal Emotion Recognition in Autism: Behavioral Performance and Event-Related Potential (ERP) Response. J Autism Dev Disord 2024; 54:1235-1248. PMID: 36694007. DOI: 10.1007/s10803-023-05898-8.
Abstract
Autistic youth display difficulties in emotion recognition, yet little research has examined behavioral and neural indices of vocal emotion recognition (VER). The current study examines behavioral and event-related potential (N100, P200, late positive potential [LPP]) indices of VER in autistic and non-autistic youth. Participants (N = 164) completed an emotion recognition task, the Diagnostic Analysis of Nonverbal Accuracy (DANVA-2), which included VER, during EEG recording. The LPP amplitude was larger in response to high-intensity VER, and social cognition predicted VER errors. Verbal IQ, not autism diagnosis, was related to VER errors. An interaction between VER intensity and social communication impairments revealed that these impairments were related to larger LPP amplitudes during low-intensity VER. Taken together, differences in VER may be due to higher-order cognitive processes rather than basic, early perception (N100, P200), and verbal cognitive abilities may underlie behavioral, yet occlude neural, differences in VER processing.
Affiliation(s)
- Talena C Day
- Psychology Department, Stony Brook University, Psychology B-354, Stony Brook, NY 11794-2500, USA
- Isha Malik
- Psychology Department, Stony Brook University, Psychology B-354, Stony Brook, NY 11794-2500, USA
- Sydney Boateng
- Psychology Department, Stony Brook University, Psychology B-354, Stony Brook, NY 11794-2500, USA
- Matthew D Lerner
- Psychology Department, Stony Brook University, Psychology B-354, Stony Brook, NY 11794-2500, USA
5
Holmberg J, Linander I, Södersten M, Karlsson F. Exploring Motives and Perceived Barriers for Voice Modification: The Views of Transgender and Gender-Diverse Voice Clients. J Speech Lang Hear Res 2023:1-14. PMID: 37263019. DOI: 10.1044/2023_jslhr-23-00042.
Abstract
PURPOSE To date, transgender and gender-diverse voice clients' perceptions and individual goals have been missing from discussions and research on gender-affirming voice therapy. Little is therefore known about clients' expectations of therapy outcomes and how these are met by treatments developed from views of vocal gender as perceived by cisgender persons. This study aimed to explore clients' individual motives and perceived barriers to undertaking gender-affirming voice therapy. METHOD Individual, semistructured interviews with 15 transgender and gender-diverse voice clients considering voice therapy were conducted and explored using qualitative content analysis. RESULTS Three themes were identified in the analysis of the participants' narratives. The first theme, "the incongruent voice setting the rules," focuses on the contribution of the voice to the experienced gender dysphoria. The second theme, "to reach a voice of my own choice," centers on the personal gains anticipated from a modified voice. The third theme, "a voice out of reach," relates to worries and restricting factors that may prevent clients from reaching their goals for voice modification. CONCLUSIONS The interviews clearly indicate a need for person-centered voice therapy that starts from the individual's expressed motives for modifying the voice while also acknowledging anticipated difficulties related to voice modification. We recommend that these themes form the basis of the pretherapy joint discussion between the voice client and the speech-language pathologist to ensure therapy goals that are realistic and relevant to the client.
Affiliation(s)
- Jenny Holmberg
- Department of Clinical Sciences, Umeå University, Sweden
- Umeå Centre for Gender Studies, Umeå University, Sweden
- Ida Linander
- Umeå Centre for Gender Studies, Umeå University, Sweden
- Department of Epidemiology and Global Health, Umeå University, Sweden
- Maria Södersten
- Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
- Medical Unit Speech-Language Pathology, Karolinska University Hospital, Stockholm, Sweden
6
Uscătescu LC, Kronbichler M, Said-Yürekli S, Kronbichler L, Calhoun V, Corbera S, Bell M, Pelphrey K, Pearlson G, Assaf M. Intrinsic neural timescales in autism spectrum disorder and schizophrenia: a replication and direct comparison study. Schizophrenia (Heidelb) 2023; 9:18. PMID: 36997542. PMCID: PMC10063601. DOI: 10.1038/s41537-023-00344-1.
Abstract
Intrinsic neural timescales (INT) reflect the duration for which brain areas store information. A posterior-anterior hierarchy of increasingly longer INT has been revealed in typically developed (TD) individuals as well as in persons diagnosed with autism spectrum disorder (ASD) or schizophrenia (SZ), though INT are, overall, shorter in both patient groups. In the present study, we aimed to replicate previously reported group differences by comparing the INT of TD individuals to those of ASD and SZ groups. We partially replicated the previously reported results, showing reduced INT in the left lateral occipital gyrus and the right postcentral gyrus in SZ compared to TD. We also directly compared the INT of the two patient groups and found that these same two areas showed significantly reduced INT in SZ compared to ASD. Previously reported correlations between INT and symptom severity were not replicated in the current project. Our findings serve to circumscribe the brain areas that may play a determining role in the observed sensory peculiarities in ASD and SZ.
Affiliation(s)
- Martin Kronbichler
- Centre for Cognitive Neuroscience & Department of Psychology, Paris-Lodron University of Salzburg, Salzburg, Austria
- Neuroscience Institute, Christian-Doppler Medical University Hospital, Paracelsus Medical University, Salzburg, Austria
- Sarah Said-Yürekli
- Centre for Cognitive Neuroscience & Department of Psychology, Paris-Lodron University of Salzburg, Salzburg, Austria
- Neuroscience Institute, Christian-Doppler Medical University Hospital, Paracelsus Medical University, Salzburg, Austria
- Lisa Kronbichler
- Centre for Cognitive Neuroscience & Department of Psychology, Paris-Lodron University of Salzburg, Salzburg, Austria
- Neuroscience Institute, Christian-Doppler Medical University Hospital, Paracelsus Medical University, Salzburg, Austria
- Department of Psychiatry, Psychotherapy & Psychosomatics, Christian-Doppler University Hospital, Paracelsus Medical University, Salzburg, Austria
- Vince Calhoun
- Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, USA
- Silvia Corbera
- Department of Psychological Science, Central Connecticut State University, New Britain, CT, USA
- Morris Bell
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA
- Kevin Pelphrey
- Department of Neurology, University of Virginia, Charlottesville, VA, USA
- Godfrey Pearlson
- Olin Neuropsychiatry Research Center, Institute of Living, Hartford, CT, USA
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA
- Michal Assaf
- Olin Neuropsychiatry Research Center, Institute of Living, Hartford, CT, USA
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA
7
Gonçalves AM, Monteiro P. Autism Spectrum Disorder and auditory sensory alterations: a systematic review on the integrity of cognitive and neuronal functions related to auditory processing. J Neural Transm (Vienna) 2023; 130:325-408. PMID: 36914900. PMCID: PMC10033482. DOI: 10.1007/s00702-023-02595-9.
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition with a wide spectrum of symptoms, mainly characterized by social, communication, and cognitive impairments. The latest diagnostic criteria, according to the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, 2013), now include sensory issues among the four restricted/repetitive behavior features, defined as "hyper- or hypo-reactivity to sensory input or unusual interest in sensory aspects of the environment". Here, we review auditory sensory alterations in patients with ASD. Considering the updated diagnostic criteria for ASD, we examined research evidence (2015-2022) on the integrity of cognitive function in auditory-related tasks, the integrity of the peripheral auditory system, and the integrity of the central nervous system in patients diagnosed with ASD. Taking into account the different approaches and experimental study designs, we reappraise current knowledge on auditory sensory alterations and reflect on how these might be linked to behavioral symptomatology in ASD.
Affiliation(s)
- Ana Margarida Gonçalves
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal
- ICVS/3B's-PT Government Associate Laboratory, 4710-057, Braga/Guimarães, Portugal
- Patricia Monteiro
- Life and Health Sciences Research Institute, School of Medicine, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal
- ICVS/3B's-PT Government Associate Laboratory, 4710-057, Braga/Guimarães, Portugal
- Experimental Biology Unit, Department of Biomedicine, Faculty of Medicine, University of Porto, Porto, Portugal
8
Leung FYN, Stojanovik V, Micai M, Jiang C, Liu F. Emotion recognition in autism spectrum disorder across age groups: A cross-sectional investigation of various visual and auditory communicative domains. Autism Res 2023; 16:783-801. PMID: 36727629. DOI: 10.1002/aur.2896.
Abstract
Previous research on emotion processing in autism spectrum disorder (ASD) has predominantly focused on human faces and speech prosody, with little attention paid to other domains such as nonhuman faces and music. In addition, emotion processing in different domains was often examined in separate studies, making it challenging to evaluate whether emotion recognition difficulties in ASD generalize across domains and age cohorts. The present study investigated: (i) the recognition of basic emotions (angry, scared, happy, and sad) across four domains (human faces, face-like objects, speech prosody, and song) in 38 autistic and 38 neurotypical (NT) children, adolescents, and adults in a forced-choice labeling task, and (ii) the impact of pitch and visual processing profiles on this ability. Results showed similar recognition accuracy between the ASD and NT groups across age groups for all domains and emotion types, although processing speed was slower in the ASD group than in the NT group. Age-related differences were seen in both groups, varying by emotion, domain, and performance index. Visual processing style was associated with facial emotion recognition speed, and pitch perception ability with auditory emotion recognition, in the NT group but not in the ASD group. These findings suggest that autistic individuals may employ different emotion processing strategies compared to NT individuals, and that emotion recognition difficulties manifested as slower response times may result from a generalized, rather than a domain-specific, underlying mechanism that governs emotion recognition processes across domains in ASD.
Affiliation(s)
- Florence Y N Leung
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Department of Psychology, University of Bath, Bath, UK
- Vesna Stojanovik
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Martina Micai
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
9
Singing ability is related to vocal emotion recognition: Evidence for shared sensorimotor processing across speech and music. Atten Percept Psychophys 2023; 85:234-243. PMID: 36380148. DOI: 10.3758/s13414-022-02613-0.
Abstract
The ability to recognize emotion in speech is a critical skill for social communication. Motivated by previous work showing that vocal emotion recognition accuracy varies with musical ability, the current study addressed this relationship using a behavioral measure of musical ability (i.e., singing) that relies on the same effector system used for vocal prosody production. Participants completed a musical production task that involved singing four-note novel melodies. To measure pitch perception, we used a simple pitch discrimination task in which participants indicated whether a target pitch was higher or lower than a comparison pitch. We also used self-report measures to assess language and musical background. We report that singing ability, but not self-reported musical experience or pitch discrimination ability, was a unique predictor of vocal emotion recognition accuracy. These results support a relationship between processes involved in vocal production and vocal perception, and suggest that sensorimotor processing of the vocal system is recruited for processing vocal prosody.
10
Chen Y, Tang E, Ding H, Zhang Y. Auditory Pitch Perception in Autism Spectrum Disorder: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2022; 65:4866-4886. PMID: 36450443. DOI: 10.1044/2022_jslhr-22-00254.
Abstract
PURPOSE Pitch plays an important role in the auditory perception of music and language. This study provides a systematic review with meta-analysis to investigate whether individuals with autism spectrum disorder (ASD) have enhanced pitch processing ability and to identify potential factors associated with processing differences between ASD and neurotypical individuals. METHOD We conducted a systematic search of six major electronic databases, focusing on studies that used nonspeech stimuli, to provide a qualitative and quantitative assessment of existing studies on pitch perception in autism. We identified potential participant- and methodology-related moderators and conducted meta-regression analyses using mixed-effects models. RESULTS On the basis of 22 studies with a total of 464 participants with ASD, we obtained a small-to-medium positive effect size (g = 0.26) in support of enhanced pitch perception in ASD. Moreover, the mean age and nonverbal IQ of participants significantly moderated the between-studies heterogeneity. CONCLUSIONS Our study provides the first meta-analysis of auditory pitch perception in ASD and demonstrates the existence of different developmental trajectories between autistic and neurotypical individuals. In addition to age, nonverbal ability is a significant contributor to the lower-level/local processing bias in ASD. We highlight the need for further investigation of pitch perception in ASD under challenging listening conditions. Future neurophysiological and brain imaging studies with longitudinal designs are also needed to better understand the underlying neural mechanisms of atypical pitch processing in ASD and to help guide auditory-based interventions for improving language and social functioning. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21614271.
Affiliation(s)
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Enze Tang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis
11
Haigh SM, Brosseau P, Eack SM, Leitman DI, Salisbury DF, Behrmann M. Hyper-Sensitivity to Pitch and Poorer Prosody Processing in Adults With Autism: An ERP Study. Front Psychiatry 2022; 13:844830. PMID: 35693971. PMCID: PMC9174755. DOI: 10.3389/fpsyt.2022.844830.
Abstract
Individuals with autism typically experience a range of symptoms, including abnormal sensory sensitivities. However, there are conflicting reports on the sensory profiles that characterize the sensory experience in autism, which often depend on the type of stimulus. Here, we examine early auditory processing of simple changes in pitch and later auditory processing of more complex emotional utterances. We measured electroencephalography in 24 adults with autism and 28 controls. First, tones (1046.5 Hz/C6, 1108.7 Hz/C#6, or 1244.5 Hz/D#6) were repeated three or nine times before the pitch changed. Second, utterances of delight or frustration were repeated three or six times before the emotion changed. In response to the simple pitched tones, the autism group exhibited larger mismatch negativity (MMN) after nine standards compared to controls and produced greater trial-to-trial variability (TTV). In response to the prosodic utterances, the autism group showed smaller P3 responses than controls when delight changed to frustration. There was no significant correlation between ERPs to pitch and ERPs to prosody. Together, this suggests that early auditory processing is hyper-sensitive in autism, whereas later processing of prosodic information is hypo-sensitive. The impact these different sensory profiles have on perceptual experience in autism may be key to identifying behavioral treatments to reduce symptoms.
Affiliation(s)
- Sarah M. Haigh
- Department of Psychology and Institute for Neuroscience, University of Nevada, Reno, NV, United States
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Pat Brosseau
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Shaun M. Eack
- School of Social Work, University of Pittsburgh, Pittsburgh, PA, United States
- David I. Leitman
- Division of Translational Research, National Institute of Mental Health, Bethesda, MD, United States
- Dean F. Salisbury
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, United States
- Marlene Behrmann
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, United States
12
Schelinski S, Tabas A, von Kriegstein K. Altered processing of communication signals in the subcortical auditory sensory pathway in autism. Hum Brain Mapp 2022; 43:1955-1972. PMID: 35037743. PMCID: PMC8933247. DOI: 10.1002/hbm.25766.
Abstract
Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have been mainly explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception and recognising speech-in-noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full-scale IQ) in the central midbrain structure of the auditory pathway (inferior colliculus [IC]). The right IC responded less in the ASD group than in the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD group than in the control group when passively listening to vocal in contrast to non-vocal sounds. Within the control group, the left and right IC responded more when recognising speech-in-noise than when recognising speech without additional noise. In the ASD group, this was only the case in the left, but not the right, IC. The results show that communication signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. The results highlight the importance of considering sensory processing alterations in explaining communication difficulties, which are at the core of ASD.
Affiliation(s)
- Stefanie Schelinski
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Alejandro Tabas
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
13
Zhang M, Chen Y, Lin Y, Ding H, Zhang Y. Multichannel Perception of Emotion in Speech, Voice, Facial Expression, and Gesture in Individuals With Autism: A Scoping Review. J Speech Lang Hear Res 2022; 65:1435-1449. PMID: 35316079. DOI: 10.1044/2022_jslhr-21-00438.
Abstract
PURPOSE Numerous studies have identified deficits in unichannel emotion perception and multisensory integration in individuals with autism spectrum disorder (ASD). However, only limited research is available on multichannel emotion perception in ASD. The purpose of this review was to seek conceptual clarification, identify knowledge gaps, and suggest directions for future research. METHOD We conducted a scoping review of the literature published between 1989 and 2021, following the 2005 framework of Arksey and O'Malley. Data relating to study characteristics, task characteristics, participant information, and key findings on multichannel processing of emotion in ASD were extracted for the review. RESULTS Discrepancies were identified regarding multichannel emotion perception deficits, which are related to participant age, developmental level, and task demand. Findings are largely consistent regarding the facilitation and compensation of congruent multichannel emotional cues and the interference and disruption of incongruent signals. Unlike controls, individuals with ASD demonstrate an overreliance on semantics rather than prosody to decode multichannel emotion. CONCLUSIONS The existing literature on multichannel emotion perception in ASD is limited, dispersed, and disassociated, focusing on a variety of topics with a wide range of methodologies. Further research is necessary to quantitatively examine the impact of methodological choice on performance outcomes. An integrated framework of emotion, language, and cognition is needed to examine the mutual influences between emotion and language as well as the cross-linguistic and cross-cultural differences. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.19386176.
Affiliation(s)
- Minyue Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Twin Cities, Minneapolis
14
Leung FYN, Sin J, Dawson C, Ong JH, Zhao C, Veić A, Liu F. Emotion recognition across visual and auditory modalities in autism spectrum disorder: A systematic review and meta-analysis. Dev Rev 2022. [DOI: 10.1016/j.dr.2021.101000]
15
Duville MM, Alonso-Valerdi LM, Ibarra-Zarate DI. Electroencephalographic Correlate of Mexican Spanish Emotional Speech Processing in Autism Spectrum Disorder: To a Social Story and Robot-Based Intervention. Front Hum Neurosci 2021; 15:626146. [PMID: 33716696] [PMCID: PMC7952538] [DOI: 10.3389/fnhum.2021.626146]
Abstract
Socio-emotional impairments are key symptoms of Autism Spectrum Disorders. This work proposes to analyze the neuronal activity related to the discrimination of emotional prosodies in autistic children (aged 9 to 11 years) as follows. First, a database of single words uttered in Mexican Spanish by males, females, and children will be created. Then, optimal acoustic features for emotion characterization will be extracted, followed by a cubic-kernel support vector machine (SVM) classifier to validate the speech corpus. As a result, human-specific acoustic properties of emotional voice signals will be identified. Second, those identified acoustic properties will be modified to synthesize the recorded human emotional voices. Third, both human and synthesized utterances will be used to study the electroencephalographic correlate of affective prosody processing in typically developed and autistic children. Finally, on the basis of the outcomes, synthesized voice-enhanced environments will be created to develop an intervention based on a social robot and Social Story™ for autistic children to improve discrimination of affective prosodies. This protocol has been registered at BioMed Central under the following number: ISRCTN18117434.
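The corpus-validation step described in this protocol amounts to cross-validating a cubic-kernel SVM on extracted acoustic features. A minimal sketch of that step, using random stand-in data in place of the real corpus features and labels, and scikit-learn's `SVC` standing in for whatever implementation the authors use:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in acoustic feature matrix: 120 utterances x 12 features
# (e.g. pitch, intensity, formant statistics); the real features would
# come from the recorded Mexican Spanish corpus.
X = rng.normal(size=(120, 12))
y = rng.integers(0, 4, size=120)  # four stand-in emotion labels

# Cubic polynomial kernel = the "cubic kernel function SVM" of the abstract.
clf = SVC(kernel="poly", degree=3)

# 5-fold cross-validated accuracy; with random labels this hovers near chance.
scores = cross_val_score(clf, X, y, cv=5)
print(round(float(scores.mean()), 2))
```

With real corpus features, mean accuracy well above chance would support the validity of the recorded emotional utterances.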
Affiliation(s)
- Mathilde Marie Duville
- Neuroengineering and Neuroacoustics Research Group, Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
- Luz Maria Alonso-Valerdi
- Neuroengineering and Neuroacoustics Research Group, Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
- David I Ibarra-Zarate
- Neuroengineering and Neuroacoustics Research Group, Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
16
Weiss MW, Sharda M, Lense M, Hyde KL, Trehub SE. Enhanced Memory for Vocal Melodies in Autism Spectrum Disorder and Williams Syndrome. Autism Res 2021; 14:1127-1133. [PMID: 33398938] [DOI: 10.1002/aur.2462]
Abstract
Adults and children with typical development (TD) remember vocal melodies (without lyrics) better than instrumental melodies, which is attributed to the biological and social significance of human vocalizations. Here we asked whether children with autism spectrum disorder (ASD), who have persistent difficulties with communication and social interaction, and adolescents and adults with Williams syndrome (WS), who are highly sociable, even indiscriminately friendly, exhibit a memory advantage for vocal melodies like that observed in individuals with TD. We tested 26 children with ASD, 26 adolescents and adults with WS of similar mental age, and 26 children with TD on their memory for vocal and instrumental (piano, marimba) melodies. After exposing them to 12 unfamiliar folk melodies with different timbres, we required them to indicate whether each of 24 melodies (half heard previously) was old (heard before) or new (not heard before) during an unexpected recognition test. Although the groups successfully distinguished the old from the new melodies, they differed in overall memory. Nevertheless, they exhibited a comparable advantage for vocal melodies. In short, individuals with ASD and WS show enhanced processing of socially significant auditory signals in the context of music. LAY SUMMARY: Typically developing children and adults remember vocal melodies better than instrumental melodies. In this study, we found that children with Autistic Spectrum Disorder, who have severe social processing deficits, and children and adults with Williams syndrome, who are highly sociable, exhibit comparable memory advantages for vocal melodies. The results have implications for musical interventions with these populations.
Affiliation(s)
- Michael W Weiss
- International Laboratory for Brain, Music, and Sound Research, Montreal, Quebec, Canada
- Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Megha Sharda
- International Laboratory for Brain, Music, and Sound Research, Montreal, Quebec, Canada
- Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Miriam Lense
- Department of Otolaryngology - Head and Neck Surgery, Vanderbilt University Medical Center; Vanderbilt Kennedy Center; Vanderbilt Brain Institute, Nashville, Tennessee, USA
- Krista L Hyde
- International Laboratory for Brain, Music, and Sound Research, Montreal, Quebec, Canada
- Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Sandra E Trehub
- Department of Psychology, University of Toronto Mississauga, Mississauga, Ontario, Canada
17
Brief Report: Speech-in-Noise Recognition and the Relation to Vocal Pitch Perception in Adults with Autism Spectrum Disorder and Typical Development. J Autism Dev Disord 2020; 50:356-363. [PMID: 31583624] [DOI: 10.1007/s10803-019-04244-1]
Abstract
We tested the ability to recognise speech-in-noise and its relation to the ability to discriminate vocal pitch in adults with high-functioning autism spectrum disorder (ASD) and typically developed adults (matched pairwise on age, sex, and IQ). Typically developed individuals understood speech in higher noise levels as compared to the ASD group. Within the control group but not within the ASD group, better speech-in-noise recognition abilities were significantly correlated with better vocal pitch discrimination abilities. Our results show that speech-in-noise recognition is restricted in people with ASD. We speculate that perceptual impairments such as difficulties in vocal pitch perception might be relevant in explaining these difficulties in ASD.
18
Abstract
We propose a novel feedforward neural network (FFNN)-based speech emotion recognition system built on three layers: A base layer where a set of speech features are evaluated and classified; a middle layer where a speech matrix is built based on the classification scores computed in the base layer; a top layer where an FFNN- and a rule-based classifier are used to analyze the speech matrix and output the predicted emotion. The system offers 80.75% accuracy for predicting the six basic emotions and surpasses other state-of-the-art methods when tested on emotion-stimulated utterances. The method is robust and the fastest in the literature, computing a stable prediction in less than 78 s and proving attractive for replacing questionnaire-based methods and for real-time use. A set of correlations between several speech features (intensity contour, speech rate, pause rate, and short-time energy) and the evaluated emotions is determined, which enhances previous similar studies that have not analyzed these speech features. Using these correlations to improve the system leads to a 6% increase in accuracy. The proposed system can be used to improve human–computer interfaces, in computer-mediated education systems, for accident prevention, and for predicting mental disorders and physical diseases.
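The top layer described in this abstract is a feedforward network mapping speech-derived features (intensity contour, speech rate, pause rate, short-time energy) to the six basic emotions. A minimal, hypothetical NumPy sketch of such a feedforward stage, with random weights and stand-in features rather than the authors' actual trained architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TinyFFNN:
    """One-hidden-layer feedforward net: acoustic feature vector in,
    probability distribution over six basic emotions out."""
    def __init__(self, n_in=8, n_hidden=16, n_out=6):
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)      # hidden layer
        return softmax(h @ self.W2 + self.b2)   # emotion probabilities

# Stand-in feature vector; real inputs would be the classification scores
# assembled into the paper's "speech matrix" by the lower layers.
x = rng.normal(size=(1, 8))
probs = TinyFFNN().forward(x)
print(probs.shape)  # (1, 6)
```

In the paper's pipeline this network would be trained on labeled emotional utterances and combined with a rule-based classifier; the sketch only shows the forward-pass shape of the top layer.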