1
Hashimoto RI, Okada R, Aoki R, Nakamura M, Ohta H, Itahashi T. Functional alterations of lateral temporal cortex for processing voice prosody in adults with autism spectrum disorder. Cereb Cortex 2024; 34:bhae363. PMID: 39270675; DOI: 10.1093/cercor/bhae363.
Abstract
The human auditory system includes discrete cortical patches and selective regions for processing voice information, including emotional prosody. Although behavioral evidence indicates that individuals with autism spectrum disorder (ASD) have difficulties in recognizing emotional prosody, it remains understudied whether and how localized voice patches (VPs) and other voice-sensitive regions are functionally altered in processing prosody. This fMRI study investigated neural responses to prosodic voices in 25 adult males with ASD and 33 controls, using voices of anger, sadness, and happiness with varying degrees of emotion. We used a functional region-of-interest analysis with an independent voice localizer to identify multiple VPs from the combined ASD and control data. We observed a general response reduction to prosodic voices in two specific VPs: the left posterior temporal VP (TVP) and the right middle TVP. Reduced cortical responses in the right middle TVP were consistently correlated with the severity of autistic symptoms for all examined emotional prosodies. Moreover, representational similarity analysis revealed a reduced effect of emotional intensity on multivoxel activation patterns in the left anterior superior temporal cortex for sad prosody only. These results indicate reduced response magnitudes to voice prosodies in specific TVPs and altered emotion intensity-dependent multivoxel activation patterns in adults with ASD, potentially underlying their socio-communicative difficulties.
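For readers unfamiliar with the method, representational similarity analysis (RSA) of the kind mentioned above boils down to correlating a neural representational dissimilarity matrix (RDM) with a model RDM. A minimal sketch with placeholder data, not the authors' actual pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns = rng.normal(size=(6, 200))            # placeholder: 6 conditions x 200 voxels
intensity = np.array([1., 2., 3., 1., 2., 3.])  # hypothetical intensity label per condition

neural_rdm = pdist(patterns, metric="correlation")         # pairwise 1 - r between patterns
model_rdm = pdist(intensity[:, None], metric="euclidean")  # pairwise intensity differences

rho, p = spearmanr(neural_rdm, model_rdm)  # RSA statistic: rank correlation of the two RDMs
print(f"neural-model RDM correlation: rho = {rho:.2f} (p = {p:.3f})")
```

A reduced intensity effect, as reported for sad prosody, would show up as a weaker neural-model RDM correlation in the ASD group.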
Affiliation(s)
- Ryu-Ichiro Hashimoto: Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan; Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan
- Rieko Okada: Faculty of Intercultural Japanese Studies, Otemae University, 6-42 Ochayasho-cho, Nishinomiya-shi, Hyogo 662-8552, Japan
- Ryuta Aoki: Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan; Human Brain Research Center, Graduate School of Medicine, Kyoto University, 54 Shogoin-Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Motoaki Nakamura: Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Haruhisa Ohta: Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Takashi Itahashi: Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
2
Janes A, McClay E, Gurm M, Boucher TQ, Yeung HH, Iarocci G, Scheerer NE. Predicting Social Competence in Autistic and Non-Autistic Children: Effects of Prosody and the Amount of Speech Input. J Autism Dev Disord 2024. PMID: 38703251; DOI: 10.1007/s10803-024-06363-w.
Abstract
PURPOSE: Autistic individuals often face challenges perceiving and expressing emotions, potentially stemming from differences in speech prosody. Here we explore how autism diagnosis (between groups) and measures of social competence (within groups) relate, first, to children's speech characteristics (both prosodic features and amount of spontaneous speech), and second, to these same two factors in mothers' speech to their children. METHODS: Autistic (n = 21) and non-autistic (n = 18) children, aged 7-12 years, participated in a Lego-building task with their mothers while conversational speech was recorded. Mean F0, pitch range, pitch variability, and amount of spontaneous speech were calculated for each child and their mother. RESULTS: The results indicated no differences in speech characteristics between autistic and non-autistic children, or between their mothers, suggesting that conversational context may have large effects on whether differences between autistic and non-autistic populations are found. However, variability in social competence within the group of non-autistic children (but not within the autistic group) predicted children's mean F0, pitch range, and pitch variability. The amount of spontaneous speech produced by mothers (but not their prosody) predicted their autistic children's social competence, which may suggest a heightened impact of scaffolding for mothers of autistic children. CONCLUSION: Together, the results suggest complex interactions between context, social competence, and adaptive parenting strategies in driving prosodic differences in children's speech.
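The three prosodic measures used here (mean F0, pitch range, pitch variability) can all be derived from a frame-wise F0 track. A minimal sketch using librosa's pYIN tracker; the file name and pitch bounds are illustrative assumptions, and the authors' own tooling may differ:

```python
import numpy as np
import librosa

y, sr = librosa.load("child_speech.wav", sr=None, mono=True)  # hypothetical recording

# Frame-wise F0 via pYIN; bounds are a rough assumption for a child's voice.
f0, voiced_flag, _ = librosa.pyin(y, fmin=65.0, fmax=800.0, sr=sr)
f0 = f0[voiced_flag & ~np.isnan(f0)]  # keep voiced frames with a defined F0

mean_f0 = f0.mean()                # mean F0 (Hz)
pitch_range = f0.max() - f0.min()  # pitch range (Hz)
pitch_sd = f0.std()                # pitch variability (SD of F0, Hz)
print(f"mean F0 {mean_f0:.1f} Hz, range {pitch_range:.1f} Hz, SD {pitch_sd:.1f} Hz")
```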
Affiliation(s)
- Alyssa Janes: Graduate Program in Health and Rehabilitation Sciences, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada; School of Communication Sciences and Disorders, Western University, 1151 Richmond Street, London, ON, N6A 3K7, Canada
- Elise McClay: Department of Linguistics, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Mandeep Gurm: Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Troy Q Boucher: Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- H Henny Yeung: Department of Linguistics, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Grace Iarocci: Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC, V5A 1S6, Canada
- Nichole E Scheerer: Psychology Department, Wilfrid Laurier University, 75 University Ave W, Waterloo, ON, N2L 3C5, Canada
3
Ochi K, Kojima M, Ono N, Kuroda M, Owada K, Sagayama S, Yamasue H. Objective assessment of autism spectrum disorder based on performance in structured interpersonal acting-out tasks with prosodic stability and variability. Autism Res 2024; 17:395-409. PMID: 38151701; DOI: 10.1002/aur.3080.
Abstract
In this study, we sought to objectively and quantitatively characterize the prosodic features of autism spectrum disorder (ASD) via a newly developed structured speech experiment. Male adults with high-functioning ASD and age- and intelligence-matched men with typical development (TD) were asked to read 29 brief scripts aloud in response to preceding auditory stimuli. To investigate whether (1) highly structured acting-out tasks can uncover prosodic differences between individuals with ASD and those with TD, and (2) prosodic stability and flexibility can be used for objective automatic assessment of ASD, we compared prosodic features such as fundamental frequency, intensity, and mora duration. The results indicate that individuals with ASD exhibit stable pitch registers or volume levels in some affective vocal-expression scenarios, such as those involving anger or sadness, compared with those with TD. However, unstable prosody was observed in some timing-control or emphasis tasks in the participants with ASD. Automatic classification of the ASD and TD groups using a support vector machine (SVM) with speech features achieved an accuracy of 90.4%. A machine learning-based assessment of the degree of ASD core symptoms using support vector regression (SVR) also performed well. These results may inform the development of a new easy-to-use assessment tool for ASD core symptoms using recorded audio signals.
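The classification step follows a standard pattern: one prosodic feature vector per speaker, an SVM, and cross-validated evaluation. A minimal scikit-learn sketch with random placeholder data; the study's actual features, kernel, and validation scheme are not reproduced here:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))  # placeholder: 40 speakers x 12 prosodic features
y = np.repeat([0, 1], 20)      # placeholder labels: 0 = TD, 1 = ASD

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.1%}")
```

Swapping SVC for sklearn.svm.SVR over a continuous symptom score gives the regression analogue (SVR) reported in the abstract.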
Affiliation(s)
- Keiko Ochi: Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Masaki Kojima: Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Nobutaka Ono: Graduate School of Systems Design, Tokyo Metropolitan University, Tokyo, Japan
- Miho Kuroda: Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Keiho Owada: Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hidenori Yamasue: Graduate School of Medicine, University of Tokyo, Tokyo, Japan; Department of Psychiatry, Hamamatsu University School of Medicine, Hamamatsu City, Japan
4
Ma W, Xu L, Zhang H, Zhang S. Can Natural Speech Prosody Distinguish Autism Spectrum Disorders? A Meta-Analysis. Behav Sci (Basel) 2024; 14:90. PMID: 38392443; PMCID: PMC10886261; DOI: 10.3390/bs14020090.
Abstract
Natural speech plays a pivotal role in communication and interactions between human beings. The prosody of natural speech, with its high ecological validity and sensitivity, has been acoustically analyzed and, more recently, used in machine learning to identify individuals with autism spectrum disorders (ASD). In this meta-analysis, we evaluated the findings of empirical studies on acoustic analysis and machine learning techniques to provide statistical evidence for adopting natural speech prosody in ASD detection. Using a random-effects model, we observed moderate-to-large pooled effect sizes for pitch-related parameters in distinguishing individuals with ASD from their typically developing (TD) counterparts. Specifically, the standardized mean difference (SMD) values for pitch mean, pitch range, pitch standard deviation, and pitch variability were 0.3528, 0.6744, 0.5735, and 0.5137, respectively. However, the differences between the two groups in temporal features may be unreliable, as the SMD values for duration and speech rate were only 0.0738 and -0.0547. Moderator analysis indicated that task type was unlikely to influence the final results, whereas age group moderated the pooled pitch range differences. Furthermore, our analysis of multivariate machine learning studies showed promising accuracy for ASD identification, with averaged sensitivity and specificity of 75.51% and 80.31%, respectively. In conclusion, these findings shed light on the efficacy of natural prosody in identifying ASD and offer insights for future investigations in this line of research.
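The pooled SMDs reported here come from a random-effects model; the standard DerSimonian-Laird estimator can be written in a few lines. A sketch with made-up per-study effects and variances, not the meta-analysis's actual data:

```python
import numpy as np

d = np.array([0.55, 0.30, 0.80, 0.45])  # per-study SMDs (made up)
v = np.array([0.04, 0.06, 0.09, 0.05])  # per-study variances (made up)

w = 1.0 / v                             # fixed-effect weights
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)      # Cochran's Q
k = len(d)
# DerSimonian-Laird between-study variance estimate.
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1.0 / (v + tau2)               # random-effects weights
pooled = np.sum(w_star * d) / np.sum(w_star)
se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled SMD = {pooled:.3f}, 95% CI [{pooled - 1.96*se:.3f}, {pooled + 1.96*se:.3f}]")
```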
Affiliation(s)
- Wen Ma: School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Lele Xu: School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Hao Zhang: School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Shurui Zhang: School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
5
Plank IS, Koehler JC, Nelson AM, Koutsouleris N, Falter-Wagner CM. Automated extraction of speech and turn-taking parameters in autism allows for diagnostic classification using a multivariable prediction model. Front Psychiatry 2023; 14:1257569. PMID: 38025455; PMCID: PMC10658003; DOI: 10.3389/fpsyt.2023.1257569.
Abstract
Autism spectrum disorder (ASD) is diagnosed on the basis of speech and communication differences, amongst other symptoms. Since conversations are essential for building connections with others, it is important to understand the exact nature of differences between autistic and non-autistic verbal behaviour and to evaluate the potential of these differences for diagnostics. In this study, we recorded dyadic conversations and used automated extraction of speech and interactional turn-taking features from 54 non-autistic and 26 autistic participants. The extracted speech and turn-taking parameters showed high potential as a diagnostic marker. A linear support vector machine was able to predict the dyad type with 76.2% balanced accuracy (sensitivity: 73.8%, specificity: 78.6%), suggesting that digitally assisted diagnostics could significantly enhance the current clinical diagnostic process thanks to their objectivity and scalability. In group comparisons at the individual and dyadic level, we found that autistic interaction partners talked more slowly and more monotonously than non-autistic interaction partners, and that mixed dyads consisting of an autistic and a non-autistic participant had longer periods of silence, while the intensity (i.e., loudness) of their speech was more synchronous.
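Turn-taking parameters of the kind extracted here, such as between-speaker gaps and silences, can be computed directly from diarized turn timestamps. A rough sketch over an invented segment list; the study's automated pipeline and feature set are richer:

```python
# Each segment: (speaker, start_s, end_s); values invented for illustration.
segments = [("A", 0.0, 2.1), ("B", 2.9, 5.0), ("A", 5.2, 7.8), ("B", 8.9, 10.0)]

# Gap from the end of one turn to the start of the next speaker's turn
# (a negative value would indicate overlapping speech).
gaps = [nxt[1] - cur[2]
        for cur, nxt in zip(segments, segments[1:])
        if nxt[0] != cur[0]]

mean_gap = sum(gaps) / len(gaps)
total_silence = sum(g for g in gaps if g > 0)
print(f"mean between-speaker gap {mean_gap:.2f} s; total silence {total_silence:.2f} s")
```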
Affiliation(s)
- I. S. Plank: Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- J. C. Koehler: Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- A. M. Nelson: Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
- N. Koutsouleris: Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany; Max Planck Institute of Psychiatry, Munich, Germany; Institute of Psychiatry, Psychology and Neuroscience, King's College, London, United Kingdom
- C. M. Falter-Wagner: Department of Psychiatry and Psychotherapy, LMU University Hospital, LMU Munich, Munich, Germany
6
Yu L, Huang D, Wang S, Zhang Y. Reduced Neural Specialization for Word-level Linguistic Prosody in Children with Autism. J Autism Dev Disord 2023; 53:4351-4367. PMID: 36038793; DOI: 10.1007/s10803-022-05720-x.
Abstract
Children with autism often show atypical brain lateralization for speech and language processing; however, it is unclear which linguistic component contributes to this phenomenon. Here we measured event-related potential (ERP) responses in 21 school-age autistic children and 25 age-matched neurotypical (NT) peers while they listened to word-level prosodic stimuli. We found that both groups displayed larger late negative response (LNR) amplitudes to native prosody than to nonnative prosody; however, unlike the NT group, which exhibited a left-lateralized LNR distinction for prosodic phonology, the autism group showed no evidence of LNR lateralization. Moreover, in both groups, the LNR effects were present only for prosodic phonology and not for phoneme-free prosodic acoustics. These results extend the findings of inadequate neural specialization for language in autism to sub-lexical prosodic structures.
Affiliation(s)
- Luodi Yu: Center for Autism Research, School of Education, Guangzhou University, Wenyi Bldg, Guangzhou, China; Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Dan Huang: Guangzhou Rehabilitation & Research Center for Children with ASD, Guangzhou Cana School, Guangzhou, China
- Suiping Wang: Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
7
Gibson MT, Schmidt-Kassow M, Paulmann S. How neurotypical listeners recognize emotions expressed through vocal cues by speakers with high-functioning autism. PLoS One 2023; 18:e0293233. PMID: 37874793; PMCID: PMC10597502; DOI: 10.1371/journal.pone.0293233.
Abstract
We investigated how neurotypical (NT) listeners perceive the emotional tone of voice in sentences spoken by individuals with high-functioning autism spectrum disorders (ASD) and by NT speakers. The investigation included both male and female speakers from both groups. In Study 1, NT listeners were asked to identify the emotional prosody (anger, fear, happiness, surprise, or neutral) conveyed by the speakers. Results revealed that emotional expressions produced by male ASD speakers were generally recognized less accurately than those of male NT speakers. In contrast, emotions expressed by female ASD speakers were categorized more accurately than those of female NT speakers, except when expressing fear. This suggests that female ASD speakers may not express emotional prosody in the same way as their male counterparts. In Study 2, a subset of the produced materials was rated for valence, voice modulation, and voice control to supplement the Study 1 results: female ASD speakers sounded less negative when expressing fear compared to female NT speakers, and male ASD speakers were perceived as less positive than NT speakers when expressing happiness. Voice modulation also differed between groups, with ASD speakers tending to follow different display rules for both positive emotions tested (happiness and surprise). Finally, male ASD speakers were rated as using voice cues less appropriately than NT male speakers, an effect less pronounced for female ASD speakers. Together, the results imply that difficulties in social interactions among individuals with high-functioning ASD could be due to non-prototypical voice use by male ASD speakers, and they emphasize that female individuals do not show the same effects.
Affiliation(s)
- Mindy T. Gibson: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Maren Schmidt-Kassow: Department of Psychiatry, University Hospital, Goethe University Frankfurt, Frankfurt, Germany
- Silke Paulmann: Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
8
Lau JC, Losh M, Speights M. Differences in speech articulatory timing and associations with pragmatic language ability in autism. Res Autism Spectr Disord 2023; 102:102118. PMID: 37484484; PMCID: PMC10358876; DOI: 10.1016/j.rasd.2023.102118.
Abstract
Background: Speech articulation difficulties have not traditionally been considered a feature of Autism Spectrum Disorder (ASD). In contrast, speech prosodic differences have been widely reported in ASD and may even be expressed in subtle form among clinically unaffected first-degree relatives, representing the expression of underlying genetic liability. Some evidence has challenged this traditional dichotomy, suggesting that differences in speech articulatory mechanisms may be evident in ASD and potentially related to perceived prosodic differences. Clinical measurement of articulatory skill has traditionally been phoneme-based rather than based on acoustic measurement of motor control. Subtle differences in articulatory/motor control, prosodic characteristics (acoustic), and pragmatic language ability (linguistic) may each contribute to differences perceived by listeners, but their interrelationship is unclear. In this study, we examined the articulatory aspects of this relationship in speech samples from individuals with ASD and their parents during narration. Method: Using Speechmark® analysis, we examined articulatory landmarks, fine-grained representations of articulatory timing as series of laryngeal and vocal-tract gestures pertaining to prosodic elements crucial for conveying pragmatic information. Results: Results revealed articulatory timing differences in individuals with ASD but not in their parents, suggesting that, although potentially not influenced by broader genetic liability to ASD, subtle articulatory differences may indeed be evident in ASD, as the recent literature indicates. A follow-up path analysis detected associations between articulatory timing differences and prosody and, subsequently, pragmatic language ability. Conclusion: Together, the results suggest a complex relationship in which subtle differences in articulatory timing may result in atypical acoustic signals and serve as a distal mechanistic contributor to pragmatic language ability in ASD.
Affiliation(s)
- Joseph C.Y. Lau: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Molly Losh: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Marisha Speights: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
9
Gong B, Li N, Li Q, Yan X, Chen J, Li L, Wu X, Wu C. The Mandarin Chinese auditory emotions stimulus database: A validated set of Chinese pseudo-sentences. Behav Res Methods 2023; 55:1441-1459. PMID: 35641682; DOI: 10.3758/s13428-022-01868-7.
Abstract
Emotional prosody is fully embedded in language and can be influenced by the linguistic properties of a specific language. Considering the limitations of existing Chinese auditory stimulus databases, we developed and validated a database of emotional auditory stimuli composed of Chinese pseudo-sentences recorded by six professional actors in Mandarin Chinese. Emotional expressions included happiness, sadness, anger, fear, disgust, pleasant surprise, and neutrality. All emotional categories were vocalized in two sentence patterns, declarative and interrogative. In addition, all emotional pseudo-sentences, except for neutral ones, were vocalized at two levels of emotional intensity: normal and strong. Each recording was validated with 40 native Chinese listeners in terms of the recognition accuracy of the intended emotion portrayal; in the end, 4361 pseudo-sentence stimuli were included in the database. Validation of the database using a forced-choice recognition paradigm revealed high rates of emotion recognition accuracy. Detailed acoustic attributes of the vocalizations are provided and linked to the emotion recognition rates. This corpus could be a valuable resource for researchers and clinicians exploring the behavioral and neural mechanisms underlying emotion processing in the general population and emotional disturbances in neurological, psychiatric, and developmental disorders. The Mandarin Chinese auditory emotion stimulus database is available at the Open Science Framework (https://osf.io/sfbm6/?view_only=e22a521e2a7d44c6b3343e11b88f39e3).
Affiliation(s)
- Bingyan Gong: School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
- Na Li: Theatre Pedagogy Department, Central Academy of Drama, Beijing, 100710, China
- Qiuhong Li: School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
- Xinyuan Yan: School of Computing, University of Utah, Salt Lake City, UT, USA
- Jing Chen: Department of Machine Intelligence, Peking University, 5 Yiheyuan Road, Haidian District, Beijing, 100871, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Liang Li: School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- Xihong Wu: Department of Machine Intelligence, Peking University, 5 Yiheyuan Road, Haidian District, Beijing, 100871, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Chao Wu: School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
10
Patel SP, Landau E, Martin GE, Rayburn C, Elahi S, Fragnito G, Losh M. A profile of prosodic speech differences in individuals with autism spectrum disorder and first-degree relatives. J Commun Disord 2023; 102:106313. PMID: 36804204; PMCID: PMC10395513; DOI: 10.1016/j.jcomdis.2023.106313.
Abstract
BACKGROUND: Impairments in prosody (e.g., intonation, stress) are among the most notable communication characteristics of individuals with autism spectrum disorder (ASD) and can significantly impact communicative interactions. Evidence suggests that differences in prosody may be evident among first-degree relatives of autistic individuals, indicating that genetic liability to ASD is expressed through prosodic variation, along with subclinical traits referred to as the broad autism phenotype (BAP). This study aimed to further characterize prosodic profiles associated with ASD and the BAP to better understand the clinical and etiologic significance of prosodic differences. METHOD: Autistic individuals, their parents, and respective control groups completed the Profiling Elements of Prosody in Speech-Communication (PEPS-C), an assessment of receptive and expressive prosody. Responses to expressive subtests were further examined using acoustic analyses. Relationships between PEPS-C performance, acoustic measurements, and pragmatic language ability in conversation were assessed to understand how differences in prosody might contribute to broader ASD-related pragmatic profiles. RESULTS: In ASD, receptive prosody deficits were observed in contrastive stress. With regard to expressive prosody, both the ASD and ASD Parent groups exhibited reduced accuracy in imitation, lexical stress, and contrastive stress expression compared to respective control groups, though no acoustic differences were noted. In the ASD and Control groups, lower accuracy across several PEPS-C subtests and acoustic measurements related to increased pragmatic language violations. In parents, acoustic measurements were tied to broader pragmatic language and personality traits of the BAP. CONCLUSION: Overlapping areas of expressive prosody differences were identified in ASD and parents, providing evidence that prosody is an important language-related ability that may be impacted by genetic risk of ASD.
Affiliation(s)
- Shivani P Patel: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL 60208, USA
- Emily Landau: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL 60208, USA
- Gary E Martin: Department of Communication Sciences and Disorders, St. John's University, Staten Island, New York, USA
- Claire Rayburn: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL 60208, USA
- Saadia Elahi: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL 60208, USA
- Gabrielle Fragnito: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL 60208, USA
- Molly Losh: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL 60208, USA
11
Leung FYN, Stojanovik V, Micai M, Jiang C, Liu F. Emotion recognition in autism spectrum disorder across age groups: A cross-sectional investigation of various visual and auditory communicative domains. Autism Res 2023; 16:783-801. PMID: 36727629; DOI: 10.1002/aur.2896.
Abstract
Previous research on emotion processing in autism spectrum disorder (ASD) has predominantly focused on human faces and speech prosody, with little attention paid to other domains such as nonhuman faces and music. In addition, emotion processing in different domains has often been examined in separate studies, making it challenging to evaluate whether emotion recognition difficulties in ASD generalize across domains and age cohorts. The present study investigated: (i) the recognition of basic emotions (angry, scared, happy, and sad) across four domains (human faces, face-like objects, speech prosody, and song) in 38 autistic and 38 neurotypical (NT) children, adolescents, and adults in a forced-choice labeling task, and (ii) the impact of pitch and visual processing profiles on this ability. Results showed similar recognition accuracy between the ASD and NT groups across age groups for all domains and emotion types, although processing speed was slower in the ASD group than in the NT group. Age-related differences were seen in both groups, varying by emotion, domain, and performance index. Visual processing style was associated with facial emotion recognition speed, and pitch perception ability with auditory emotion recognition, in the NT group but not in the ASD group. These findings suggest that autistic individuals may employ different emotion processing strategies compared to NT individuals, and that emotion recognition difficulties manifested as slower response times may result from a generalized, rather than domain-specific, underlying mechanism governing emotion recognition across domains in ASD.
Affiliation(s)
- Florence Y N Leung: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK; Department of Psychology, University of Bath, Bath, UK
- Vesna Stojanovik: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Martina Micai: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Cunmei Jiang: Music College, Shanghai Normal University, Shanghai, China
- Fang Liu: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
12
Walter A, Martz E, Weibel S, Weiner L. Tackling emotional processing in adults with attention deficit hyperactivity disorder and attention deficit hyperactivity disorder + autism spectrum disorder using emotional and action verbal fluency tasks. Front Psychiatry 2023; 14:1098210. PMID: 36816409; PMCID: PMC9928945; DOI: 10.3389/fpsyt.2023.1098210.
Abstract
Introduction: Attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) are two neurodevelopmental conditions with neuropsychological, social, emotional, and psychopathological similarities. Both are characterized by executive dysfunction, emotion dysregulation (ED), and psychiatric comorbidities. By focusing on emotions and embodied cognition, this study aims to improve the understanding of overlapping symptoms between ADHD and ASD through the use of verbal fluency tasks. Methods: Fifty-two adults with ADHD, 13 adults with ADHD + ASD, and 24 neurotypical (NT) participants were recruited. A neuropsychological evaluation, including several verbal fluency conditions (e.g., emotional and action), was administered. Participants also completed several self-report questionnaires, including scales measuring symptoms of ED. Results: Compared to NT controls, adults with ADHD + ASD produced fewer anger-related emotions. Symptoms of emotion dysregulation were associated with an increased number of action verbs and emotions produced in ADHD. Discussion: The association between the affective language of adults with ADHD and symptoms of emotion dysregulation may reflect their social maladjustment. Moreover, the co-occurrence of ADHD and ASD may be associated with more severe affective dysfunction.
Affiliation(s)
- Amélia Walter: Institut des Neurosciences Cellulaires et Intégratives, Centre National de la Recherche Scientifique (UPR 3212), Strasbourg University, Strasbourg, France
- Emilie Martz: Institut National de la Santé et de la Recherche Médicale U1114, Strasbourg, France
- Sébastien Weibel: Institut National de la Santé et de la Recherche Médicale U1114, Strasbourg, France; Department of Psychiatry, University Hospital of Strasbourg, Strasbourg, France
- Luisa Weiner: Department of Psychiatry, University Hospital of Strasbourg, Strasbourg, France; Laboratoire de Psychologie des Cognitions, University of Strasbourg, Strasbourg, France
13
Patel SP, Cole J, Lau JCY, Fragnito G, Losh M. Verbal entrainment in autism spectrum disorder and first-degree relatives. Sci Rep 2022; 12:11496. PMID: 35798758; PMCID: PMC9262979; DOI: 10.1038/s41598-022-12945-4.
Abstract
Entrainment, the unconscious process leading to coordination between communication partners, is an important dynamic human behavior that helps us connect with one another. Difficulty developing and sustaining social connections is a hallmark of autism spectrum disorder (ASD). Subtle differences in social behaviors have also been noted in first-degree relatives of autistic individuals and may express underlying genetic liability to ASD. An in-depth examination of verbal entrainment was conducted to examine disruptions to entrainment as a contributing factor to the language phenotype in ASD. Results revealed distinct patterns of prosodic and lexical entrainment in individuals with ASD. Notably, subtler differences in prosodic and syntactic entrainment were identified in parents of autistic individuals. Findings point towards entrainment, particularly prosodic entrainment, as a key process linked to social communication difficulties in ASD and reflective of genetic liability to ASD.
Affiliation(s)
- Shivani P Patel: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Jennifer Cole: Department of Linguistics, Northwestern University, Evanston, IL, USA
- Joseph C Y Lau: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Gabrielle Fragnito: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Molly Losh: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
14
Lau JCY, Patel S, Kang X, Nayar K, Martin GE, Choy J, Wong PCM, Losh M. Cross-linguistic patterns of speech prosodic differences in autism: A machine learning study. PLoS One 2022; 17:e0269637. PMID: 35675372; PMCID: PMC9176813; DOI: 10.1371/journal.pone.0269637.
Abstract
Differences in speech prosody are a widely observed feature of Autism Spectrum Disorder (ASD). However, it is unclear how prosodic differences in ASD manifest across languages that demonstrate cross-linguistic variability in prosody. Using a supervised machine-learning analytic approach, we examined acoustic features relevant to rhythmic and intonational aspects of prosody, derived from narrative samples elicited in English and Cantonese, two typologically and prosodically distinct languages. Our models revealed successful classification of ASD diagnosis using rhythm-relevant features within and across both languages. Classification with intonation-relevant features was significant for English but not for Cantonese. Results highlight differences in rhythm as a key prosodic feature impacted in ASD, and also demonstrate important variability in other prosodic properties that appear to be modulated by language-specific differences, such as intonation.
Affiliation(s)
- Joseph C. Y. Lau: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
- Shivani Patel: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
- Xin Kang: Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong S.A.R., China; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong S.A.R., China; Research Centre for Language, Cognition and Language Application, Chongqing University, Chongqing, China; School of Foreign Languages and Cultures, Chongqing University, Chongqing, China
- Kritika Nayar: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
- Gary E. Martin: Department of Communication Sciences and Disorders, St. John's University, Staten Island, New York, United States of America
- Jason Choy: Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong S.A.R., China
- Patrick C. M. Wong: Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong S.A.R., China; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong S.A.R., China
- Molly Losh: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, United States of America
15
Sinvani RT, Sapir S. Sentence vs. Word Perception by Young Healthy Females: Toward a Better Understanding of Emotion in Spoken Language. Front Glob Womens Health 2022; 3:829114. PMID: 35692948; PMCID: PMC9174644; DOI: 10.3389/fgwh.2022.829114.
Abstract
Expression and perception of emotion in the voice are fundamental to basic mental health stability. Since findings vary across languages, studies should be guided by the relationship between speech complexity and emotion perception. The aim of our study was therefore to analyze the effect of stimulus type, word vs. sentence, on the accuracy of identifying four categories of emotion: anger, sadness, happiness, and neutrality. To this end, a total of 2,235 audio clips were presented to 49 female native Hebrew speakers, aged 20-30 years (M = 23.7; SD = 2.13). Participants were asked to judge audio utterances according to one of the four emotional categories. The simulated voice samples consisted of words and meaningful sentences produced by 15 healthy young female native Hebrew speakers. Stimulus type (word vs. sentence) had not previously been established as a factor in vocal emotion recognition; however, introducing a variety of speech utterances revealed differences in perception: anger was identified more accurately from single words (χ2 = 10.21, p < 0.01) than from sentences, while sadness was identified more accurately from sentences (χ2 = 3.83, p = 0.05). Our findings contribute to a better understanding of how speech type shapes emotion perception, as a component of mental health.
Affiliation(s)
- Rachel-Tzofia Sinvani (correspondence): School of Occupational Therapy, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Shimon Sapir: Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
16
Leung FYN, Sin J, Dawson C, Ong JH, Zhao C, Veić A, Liu F. Emotion recognition across visual and auditory modalities in autism spectrum disorder: A systematic review and meta-analysis. Dev Rev 2022. DOI: 10.1016/j.dr.2021.101000.
17
Asghari SZ, Farashi S, Bashirian S, Jenabi E. Distinctive prosodic features of people with autism spectrum disorder: a systematic review and meta-analysis study. Sci Rep 2021; 11:23093. PMID: 34845298; PMCID: PMC8630064; DOI: 10.1038/s41598-021-02487-6.
Abstract
In this systematic review, we analyzed and evaluated the findings of studies on prosodic features of the vocal productions of people with autism spectrum disorder (ASD) in order to identify the statistically significant, most consistently confirmed, and reliable prosodic differences distinguishing people with ASD from typically developing (TD) individuals. Using suitable keywords, three major databases, Web of Science, PubMed, and Scopus, were searched. Results for prosodic features such as mean pitch, pitch range and variability, speech rate, intensity, and voice duration were extracted from eligible studies. The pooled standardized mean difference (SMD) between ASD and control groups was extracted or calculated. Between-study heterogeneity was evaluated using the I² statistic and Cochran's Q test. Furthermore, publication bias was assessed using funnel plots, and its significance was evaluated using Egger's and Begg's tests. Thirty-nine eligible studies were retrieved (including 910 and 850 participants in the ASD and control groups, respectively). This systematic review and meta-analysis showed that the ASD group had a significantly larger mean pitch (SMD = -0.4, 95% CI [-0.70, -0.10]), larger pitch range (SMD = -0.78, 95% CI [-1.34, -0.21]), longer voice duration (SMD = -0.43, 95% CI [-0.72, -0.15]), and larger pitch variability (SMD = -0.46, 95% CI [-0.84, -0.08]) compared with the TD control group. However, no significant group differences were found in pitch standard deviation, voice intensity, or speech rate. Chronological age of participants and voice elicitation tasks were two sources of between-study heterogeneity. Furthermore, no publication bias was observed (p > 0.05). Mean pitch, pitch range, pitch variability, and voice duration were recognized as the prosodic features reliably distinguishing people with ASD from TD individuals.
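The heterogeneity statistics used above (Cochran's Q and the I² statistic) follow directly from per-study effects and variances. A sketch with placeholder numbers, complementing the pooling sketch under entry 4:

```python
import numpy as np
from scipy import stats

d = np.array([-0.2, -0.5, -0.7, -0.3, -0.6])  # placeholder per-study SMDs
v = np.array([0.05, 0.07, 0.04, 0.06, 0.08])  # placeholder per-study variances

w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
Q = np.sum(w * (d - d_fixed) ** 2)   # Cochran's Q
df = len(d) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0  # percent of variability due to heterogeneity
p = stats.chi2.sf(Q, df)             # Q is chi-square distributed under homogeneity
print(f"Q = {Q:.2f} (p = {p:.3f}), I2 = {I2:.1f}%")
```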
Affiliation(s)
- Sajjad Farashi: Autism Spectrum Disorders Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
- Saeid Bashirian: Department of Public Health, School of Health, Hamadan University of Medical Sciences, Hamadan, Iran
- Ensiyeh Jenabi: Autism Spectrum Disorders Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
18
Duville MM, Alonso-Valerdi LM, Ibarra-Zarate DI. The Mexican Emotional Speech Database (MESD): elaboration and assessment based on machine learning. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1644-1647. PMID: 34891601; DOI: 10.1109/embc46164.2021.9629934.
Abstract
The Mexican Emotional Speech Database (MESD) is presented along with an evaluation of its reliability based on machine learning analysis. The database contains 864 voice recordings with six different prosodies: anger, disgust, fear, happiness, neutral, and sadness. Furthermore, three voice categories are included: female adult, male adult, and child. The following emotion recognition accuracies were reached: 89.4%, 93.9%, and 83.3% for female, male, and child voices, respectively. Clinical relevance: The Mexican Emotional Speech Database is a contribution to healthcare-related emotional speech data and can be used to support objective diagnosis and disease characterization.
19
Gong B, Li Q, Zhao Y, Wu C. Auditory emotion recognition deficits in schizophrenia: A systematic review and meta-analysis. Asian J Psychiatr 2021; 65:102820. PMID: 34482183; DOI: 10.1016/j.ajp.2021.102820.
Abstract
BACKGROUND: Auditory emotion recognition (AER) deficits refer to abnormal identification and interpretation of the tonal or prosodic features that transmit emotional information in sounds or speech. Evidence suggests that AER deficits are related to the pathology of schizophrenia. However, the effect size of the deficit in recognizing specific emotional categories in schizophrenia, and its association with psychotic symptoms, had never been evaluated through a meta-analysis. METHODS: A systematic search for literature published in English or Chinese until November 30, 2020 was conducted in PubMed, Embase, Web of Science, PsycINFO, China National Knowledge Infrastructure (CNKI), WanFang, and Weipu databases. AER differences between patients and healthy controls (HCs) were assessed by standardized mean differences (SMDs). Subgroup analyses were conducted for the type of emotional stimuli and for the diagnosis of schizophrenia or schizoaffective disorder (Sch/SchA). Meta-regression analyses were performed to assess the influence of patients' age, sex, illness duration, antipsychotic dose, and positive and negative symptoms on the study SMDs. RESULTS: Eighteen studies containing 615 patients with psychosis (Sch/SchA) and 488 HCs were included in the meta-analysis. Patients exhibited moderate deficits in recognizing neutral, happy, sad, angry, fearful, disgusted, and surprised emotion. Neither the semantic information in the auditory stimuli nor the diagnostic subtype affected AER deficits in schizophrenia. Deficits in sadness, anger, and disgust AER were each positively associated with negative symptoms in schizophrenia. CONCLUSIONS: Patients with schizophrenia have moderate AER deficits, which were associated with negative symptoms. Rehabilitation focusing on improving AER abilities may help improve negative symptoms and the long-term prognosis of schizophrenia.
Affiliation(s)
- Bingyan Gong: Peking University School of Nursing, Beijing 100191, China
- Qiuhong Li: Peking University School of Nursing, Beijing 100191, China
- Yiran Zhao: Peking University School of Nursing, Beijing 100191, China
- Chao Wu: Peking University School of Nursing, Beijing 100191, China
20
Williams GL, Wharton T, Jagoe C. Mutual (Mis)understanding: Reframing Autistic Pragmatic "Impairments" Using Relevance Theory. Front Psychol 2021; 12:616664. PMID: 33995177; PMCID: PMC8117104; DOI: 10.3389/fpsyg.2021.616664.
Abstract
A central diagnostic and anecdotal feature of autism is difficulty with social communication. We take the position that communication is a two-way, intersubjective phenomenon, as described by the double empathy problem, and offer up relevance theory (a cognitive account of utterance interpretation) as a means of explaining such communication difficulties. Based on a set of proposed heuristics for successful and rapid interpretation of intended meaning, relevance theory positions communication as contingent on shared, and importantly mutually recognized, "relevance." Given that autistic and non-autistic people may have sometimes markedly different embodied experiences of the world, we argue that what is most salient to each interlocutor may be mismatched. Relevance theory would predict that where this salient information is not (mutually) recognized or adjusted for, mutual understanding may be more effortful to achieve. This paper presents the findings from a small-scale, linguistic ethnographic study of autistic communication featuring eight core autistic participants. Each core autistic participant engaged in three naturalistic conversations around the topic of loneliness: (1) with a familiar, chosen conversation partner; (2) with a non-autistic stranger; and (3) with an autistic stranger. Relevance theory is utilized as a frame for the linguistic analysis of the interactions. Mutual understanding was unexpectedly high across all types of conversation pairings. In conversations involving two autistic participants, flow, rapport, and intersubjective attunement were significantly increased, and in three instances autistic interlocutors appeared to experience improvements in their individual communicative competence contrasted with their other conversations. The findings have the potential to guide future thinking about how, in practical terms, communication between autistic and non-autistic people in both personal and public settings might be improved.
Affiliation(s)
- Gemma L. Williams: School of Humanities, University of Brighton, Brighton, United Kingdom
- Tim Wharton: School of Humanities, University of Brighton, Brighton, United Kingdom
- Caroline Jagoe: School of Linguistic, Speech and Communication Sciences, Trinity College Dublin, University of Dublin, Dublin, Ireland
21
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, Houy-Durand E, Gomot M. Brain correlates of emotional prosodic change detection in autism spectrum disorder. Neuroimage Clin 2020; 28:102512. PMID: 33395999; PMCID: PMC8481911; DOI: 10.1016/j.nicl.2020.102512.
Abstract
Highlights
- We used an oddball paradigm with vocal stimuli to record hemodynamic responses.
- Brain processing of vocal change relies on the STG, insula, and lingual area.
- Activity of the change-processing network can be modulated by saliency and emotion.
- Brain processing of vocal deviancy/novelty appears typical in adults with autism.
Autism Spectrum Disorder (ASD) is currently diagnosed by the joint presence of social impairments and restrictive, repetitive patterns of behavior. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies have investigated only one dimension to understand the underlying physiopathology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with autism spectrum disorder during an oddball paradigm that allowed us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relies on activation of the supratemporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in the processing of deviancy in vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was found for vocal stimuli processing or for deviancy/novelty processing, regardless of emotional content. The findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with those cues. This might indicate impairments at later processing stages, or simply show that alterations present in childhood have repercussions at adult age.
Affiliation(s)
- Agathe Saby: Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Emmanuelle Houy-Durand: UMR 1253 iBrain, Inserm, Université de Tours, Tours, France; Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Marie Gomot: UMR 1253 iBrain, Inserm, Université de Tours, Tours, France
22
Lehnert-LeHouillier H, Terrazas S, Sandoval S. Prosodic Entrainment in Conversations of Verbal Children and Teens on the Autism Spectrum. Front Psychol 2020; 11:582221. PMID: 33132991; PMCID: PMC7578392; DOI: 10.3389/fpsyg.2020.582221.
Abstract
Unusual speech prosody has long been recognized as a characteristic feature of the speech of individuals diagnosed with autism spectrum disorders (ASD). However, research to determine the exact nature of this difference in speech prosody is still ongoing. Many individuals with verbal autism perform well on tasks testing speech prosody; nonetheless, their expressive prosody is judged to be unusual by others. We propose that one aspect of this perceived difference may be a deficit in the ability to entrain, that is, to become more similar to one's conversation partner in prosodic features over the course of a conversation. To investigate this hypothesis, 24 children and teens between the ages of 9 and 15 years participated in our study. Twelve of the participants had previously been diagnosed with ASD, and the other 12 were matched to the ASD participants in age, gender, and non-verbal IQ scores. All participants completed a goal-directed conversation task, which was subsequently analyzed acoustically. Our results suggest (1) that youth diagnosed with ASD entrain less to their conversation partners than their neurotypical peers; in fact, children and teens diagnosed with ASD tend to dis-entrain from their conversation partners, while their neurotypical peers tend to converge toward their conversation partners' prosodic features. (2) Although age interacts differently with prosodic entrainment in youth with and without ASD, this difference is attributable to the entrainment behavior of the conversation partners rather than to the participants with ASD. (3) Better language skill is negatively correlated with prosodic entrainment for youth both with and without ASD. The observed differences in prosodic entrainment in children and teens with ASD may not only contribute to the perceived unusual prosody in youth with ASD but are also likely indicative of their difficulties in social communication, which constitutes a core challenge for individuals with ASD.
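One common way to operationalize the prosodic entrainment studied here is convergence: does the partners' difference in a prosodic feature shrink from the first to the second half of the conversation? A toy sketch with invented turn-level mean-F0 values; the study's actual metric may differ:

```python
import numpy as np

f0_a = np.array([210., 215., 208., 203., 199., 196.])  # speaker A, mean F0 per turn (invented)
f0_b = np.array([180., 182., 186., 190., 193., 195.])  # speaker B, mean F0 per turn (invented)

half = len(f0_a) // 2
diff_early = abs(f0_a[:half].mean() - f0_b[:half].mean())
diff_late = abs(f0_a[half:].mean() - f0_b[half:].mean())

# Positive values indicate convergence (entrainment); negative, dis-entrainment.
convergence = diff_early - diff_late
print(f"early gap {diff_early:.1f} Hz, late gap {diff_late:.1f} Hz, convergence {convergence:.1f} Hz")
```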
Affiliation(s)
- Susana Terrazas
- Klipsch School of Electrical and Computer Engineering, New Mexico State University, Las Cruces, NM, United States
- Steven Sandoval
- Klipsch School of Electrical and Computer Engineering, New Mexico State University, Las Cruces, NM, United States
|
23
|
Morrison KE, DeBrabander KM, Jones DR, Faso DJ, Ackerman RA, Sasson NJ. Outcomes of real-world social interaction for autistic adults paired with autistic compared to typically developing partners. Autism 2019; 24:1067-1080. [PMID: 31823656] [DOI: 10.1177/1362361319892701]
Abstract
Differences in social communication and interaction styles between autistic and typically developing adults have been studied in isolation, not in the context of real-world social interaction. The current study addresses this "blind spot" by examining whether the quality of real-world social interaction for autistic adults differs when they interact with typically developing partners as opposed to autistic partners. Participants (67 autism spectrum disorder, 58 typically developing) were assigned to one of three dyadic partnerships (autism-autism: n = 22; typically developing-typically developing: n = 23; autism-typically developing: n = 25; 55 complete dyads, 15 partial dyads) in which they completed a 5-min unstructured conversation with an unfamiliar person and then assessed the quality of the interaction and their impressions of their partner. Although autistic adults were rated as more awkward, less attractive, and less socially warm than typically developing adults by both typically developing and autistic partners, only typically developing adults expressed greater interest in future interactions with typically developing relative to autistic partners. In contrast, autistic participants trended toward an interaction preference for other autistic adults and reported disclosing more about themselves to autistic than to typically developing partners. These results suggest that social affiliation may increase for autistic adults when partnered with other autistic people, and they support reframing social interaction difficulties in autism as a relational rather than an individual impairment.
|
24
|
Sorensen T, Zane E, Feng T, Narayanan S, Grossman R. Cross-Modal Coordination of Face-Directed Gaze and Emotional Speech Production in School-Aged Children and Adolescents with ASD. Sci Rep 2019; 9:18301. [PMID: 31797950] [PMCID: PMC6892887] [DOI: 10.1038/s41598-019-54587-z]
Abstract
Autism spectrum disorder involves persistent difficulties in social communication. Although these difficulties affect both verbal and nonverbal communication, there are no quantitative behavioral studies to date investigating the cross-modal coordination of verbal and nonverbal communication in autism. The objective of the present study was to characterize the dynamic relation between speech production and facial expression in children with autism and to establish how face-directed gaze modulates this cross-modal coordination. In a dynamic mimicry task, experiment participants watched and repeated neutral and emotional spoken sentences with accompanying facial expressions. Analysis of audio and motion capture data quantified cross-modal coordination between simultaneous speech production and facial expression. Whereas neurotypical children produced emotional sentences with strong cross-modal coordination and produced neutral sentences with weak cross-modal coordination, autistic children produced similar levels of cross-modal coordination for both neutral and emotional sentences. An eyetracking analysis revealed that cross-modal coordination of speech production and facial expression was greater when the neurotypical child spent more time looking at the face, but weaker when the autistic child spent more time looking at the face. In sum, social communication difficulties in autism spectrum disorder may involve deficits in cross-modal coordination. This finding may inform how autistic individuals are perceived in their daily conversations.
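One plausible way to quantify the cross-modal coordination this abstract describes, offered here as a hedged sketch rather than the authors' estimator, is a windowed correlation between the speech amplitude envelope and a frame-aligned facial-motion trace. Both signals below are simulated, and numpy is assumed.

import numpy as np

def crossmodal_coordination(envelope, face_motion, win=50):
    # Mean windowed Pearson correlation between two aligned 1-D signals
    rs = []
    for start in range(0, len(envelope) - win, win):
        e = envelope[start:start + win]
        f = face_motion[start:start + win]
        rs.append(np.corrcoef(e, f)[0, 1])
    return float(np.mean(rs))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1000)
envelope = np.abs(np.sin(np.pi * t)) + 0.1 * rng.normal(size=t.size)
face = 0.8 * envelope + 0.5 * rng.normal(size=t.size)  # partially coupled
print(crossmodal_coordination(envelope, face))  # nearer 1 = tighter coupling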
Affiliation(s)
- Tanner Sorensen
- Signal Analysis and Interpretation Laboratory, Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, 90089, USA
- Emily Zane
- Department of Communication Sciences and Disorders, Emerson College, Boston, MA, 02116, USA
- Tiantian Feng
- Signal Analysis and Interpretation Laboratory, Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, 90089, USA
- Shrikanth Narayanan
- Signal Analysis and Interpretation Laboratory, Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, 90089, USA
- Ruth Grossman
- Department of Communication Sciences and Disorders, Emerson College, Boston, MA, 02116, USA
|
25
|
Interoceptive awareness mitigates deficits in emotional prosody recognition in Autism. Biol Psychol 2019; 146:107711. [DOI: 10.1016/j.biopsycho.2019.05.011]
|
26
|
Grossman RB, Mertens J, Zane E. Perceptions of self and other: Social judgments and gaze patterns to videos of adolescents with and without autism spectrum disorder. Autism 2019; 23:846-857. [PMID: 30014714] [PMCID: PMC6403013] [DOI: 10.1177/1362361318788071]
Abstract
Neurotypical adults often form negative first impressions of individuals with autism spectrum disorder (ASD) and are less interested in engaging with them socially. In contrast, individuals with ASD actively seek out the company of others who share their diagnosis. It is not clear, however, whether individuals with ASD form more positive first impressions of autistic peers when the diagnosis is not explicitly shared. We asked adolescents with and without ASD to watch brief video clips of adolescents with and without ASD and to answer questions about their impressions of the individuals in the videos. Questions concerned participants' perceptions of the social skills of the individuals in the videos, as well as their own willingness to interact with those individuals. We also measured gaze patterns to the faces, eyes, and mouths of the adolescents in the video stimuli. Both participant groups spent less time gazing at videos of autistic adolescents than at videos of neurotypical adolescents. Regardless of diagnostic group, all participants judged the autistic adolescents in the videos more negatively than the neurotypical adolescents. These data indicate that, without being explicitly informed of a shared diagnosis, adolescents with ASD form first impressions of autistic adolescents that are as negative as, or more negative than, those formed by neurotypical peers.
Affiliation(s)
- Ruth B Grossman
- Emerson College, USA
- University of Massachusetts Medical School, USA
|
27
|
Morrison KE, DeBrabander KM, Faso DJ, Sasson NJ. Variability in first impressions of autistic adults made by neurotypical raters is driven more by characteristics of the rater than by characteristics of autistic adults. Autism 2019; 23:1817-1829. [PMID: 30848682] [DOI: 10.1177/1362361318824104]
Abstract
Previous work indicates that first impressions of autistic adults are more favorable when neurotypical raters know their clinical diagnosis and have a good understanding of autism, suggesting that the social experiences of autistic adults are affected by the knowledge and beliefs of the neurotypical individuals they encounter. Here, we examine these patterns in more detail by assessing variability in first-impression ratings of autistic adults (N = 20) by neurotypical raters (N = 505). Variability in ratings was driven more by characteristics of the raters than by those of the autistic adults, particularly for items related to "intentions to interact." Specifically, variability in raters' stigma toward autism and in their autism knowledge contributed to first-impression ratings. Only ratings of "awkwardness" were driven more by characteristics of the autistic adults than by characteristics of the raters. Furthermore, although first impressions of autistic adults generally improved when raters were informed of their autism status, providing a diagnosis worsened the impressions made by neurotypical raters with high stigma toward autism. Variations in how the diagnosis was labeled (e.g. "autistic" vs "has autism") did not affect results. These findings indicate a large role of neurotypical perceptions and biases in shaping the social experiences of autistic adults, which may be improved by reducing stigma and increasing acceptance.
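The headline result, that rating variance owes more to raters than to the autistic adults being rated, can be illustrated with a toy variance decomposition. The Python sketch below simulates a fully crossed raters-by-targets design at the study's sample sizes; the effect sizes and the simple decomposition via marginal means are illustrative assumptions, not the authors' statistical model.

import numpy as np

rng = np.random.default_rng(2)
n_raters, n_targets = 505, 20
rater_eff = rng.normal(0.0, 1.0, size=(n_raters, 1))    # large rater differences
target_eff = rng.normal(0.0, 0.4, size=(1, n_targets))  # smaller target differences
ratings = 4.0 + rater_eff + target_eff + rng.normal(0.0, 0.5, (n_raters, n_targets))

# Variance of per-rater mean ratings (dominated by rater effects) versus
# variance of per-target mean ratings (dominated by target effects)
var_raters = ratings.mean(axis=1).var(ddof=1)
var_targets = ratings.mean(axis=0).var(ddof=1)
print(f"between-rater ~ {var_raters:.2f}, between-target ~ {var_targets:.2f}")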
|
28
|
Nicolaidis C, Milton D, Sasson NJ, Sheppard E(L), Yergeau M. An Expert Discussion on Autism and Empathy. Autism Adulthood 2019; 1:4-11. [PMID: 36600690] [PMCID: PMC8992804] [DOI: 10.1089/aut.2018.29000.cjn]
Affiliation(s)
- Damian Milton
- Intellectual and Developmental Disabilities, University of Kent, United Kingdom
- Noah J. Sasson
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, Texas
- Melanie Yergeau
- College of Literature, Science, and the Arts, University of Michigan, Ann Arbor, Michigan
|
29
|
Denmark T, Atkinson J, Campbell R, Swettenham J. Signing with the Face: Emotional Expression in Narrative Production in Deaf Children with Autism Spectrum Disorder. J Autism Dev Disord 2019; 49:294-306. [PMID: 30267252] [PMCID: PMC6331500] [DOI: 10.1007/s10803-018-3756-x]
Abstract
This study examined facial expressions produced during a British Sign Language (BSL) narrative task (Herman et al., International Journal of Language and Communication Disorders 49(3):343-353, 2014) by typically developing deaf children and deaf children with autism spectrum disorder (ASD). The children produced BSL versions of a video story in which two children are seen to enact a language-free scenario where one tricks the other. This task encourages the elicitation of facial acts signalling intention and emotion, since the protagonists showed a range of such expressions during the events portrayed. Results showed that typically developing deaf children produced facial expressions that closely aligned with native adult signers' BSL narrative versions of the task. Children with ASD produced fewer of the targeted expressions and showed qualitative differences in the facial actions they produced.
Affiliation(s)
- Tanya Denmark
- Division of Psychology and Language Science, Department of Language and Cognition, University College London, London, UK
- Division of Psychology and Language Science, Deafness, Cognition and Language Research Centre, University College London, 2 Wakefield Street, Chandler House, London, WC1N 9PF, UK
- Joanna Atkinson
- Division of Psychology and Language Science, Deafness, Cognition and Language Research Centre, University College London, 2 Wakefield Street, Chandler House, London, WC1N 9PF, UK
- Ruth Campbell
- Division of Psychology and Language Science, Deafness, Cognition and Language Research Centre, University College London, 2 Wakefield Street, Chandler House, London, WC1N 9PF, UK
- John Swettenham
- Division of Psychology and Language Science, Department of Language and Cognition, University College London, London, UK
|
30
|
Mencattini A, Mosciano F, Comes MC, Di Gregorio T, Raguso G, Daprati E, Ringeval F, Schuller B, Di Natale C, Martinelli E. An emotional modulation model as signature for the identification of children developmental disorders. Sci Rep 2018; 8:14487. [PMID: 30262838] [PMCID: PMC6160482] [DOI: 10.1038/s41598-018-32454-7]
Abstract
In recent years, applications like Apple's Siri or Microsoft's Cortana have created the illusion that one can actually "chat" with a machine. However, truly natural human-machine interaction remains far off, as none of these tools can empathize. This gap has raised increasing interest in speech emotion recognition systems, which aim to detect the emotional state of the speaker. Such a capability is relevant to a broad range of domains, from human-machine interfaces to diagnostics. With this in mind, in the present work we explored the possibility of applying a precision approach to the development of a statistical learning algorithm for classifying speech samples produced by children with developmental disorders (DD) and typically developing (TD) children. Under the assumption that acoustic features of vocal production cannot be used efficiently as a direct marker of DD, we propose applying the Emotional Modulation function (EMF) concept, rather than running analyses on acoustic features per se, to identify the different classes. The novel paradigm was applied to the French Child Pathological & Emotional Speech Database, obtaining a final accuracy of 0.79, with maximum performance reached in recognizing language impairment (0.92) and autism disorder (0.82).
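A hedged sketch of the general pipeline such work implies: summarize each speech sample as a feature vector and cross-validate a classifier over the diagnostic groups. The Emotional Modulation function itself is not reproduced here; generic acoustic summary features, simulated data, and a scikit-learn SVM stand in for it.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 12))    # 100 samples x 12 features (f0 stats, energy, ...)
y = rng.integers(0, 2, size=100)  # 0 = TD, 1 = DD (illustrative labels)
X[y == 1] += 0.8                  # inject a separable group difference

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # the paper reports 0.79 overall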
Affiliation(s)
- Arianna Mencattini
- Department of Electronic Engineering, University of Rome Tor Vergata, via del Politecnico 1, 00133, Roma, Italy
- Francesco Mosciano
- Department of Electronic Engineering, University of Rome Tor Vergata, via del Politecnico 1, 00133, Roma, Italy
- Maria Colomba Comes
- Department of Electronic Engineering, University of Rome Tor Vergata, via del Politecnico 1, 00133, Roma, Italy
- Tania Di Gregorio
- Faculty of Science MM.FF.NN., University of Bari Aldo Moro, University Campus Ernesto Quagliariello, Via Edoardo Orabona 4, 70126, Bari, Italy
- Grazia Raguso
- Faculty of Science MM.FF.NN., University of Bari Aldo Moro, University Campus Ernesto Quagliariello, Via Edoardo Orabona 4, 70126, Bari, Italy
- Elena Daprati
- Department of Systems Medicine, CBMS, University of Rome Tor Vergata, via Montpellier 1, 00133, Roma, Italy
- Fabien Ringeval
- Laboratoire d'Informatique de Grenoble, Université Grenoble Alpes, 38401, St Martin d'Hères, France
- Björn Schuller
- GLAM - Group on Language, Audio & Music, Imperial College London, SW7 2AZ, London, UK
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, 86159, Augsburg, Germany
- Eugenio Martinelli
- Department of Electronic Engineering, University of Rome Tor Vergata, via del Politecnico 1, 00133, Roma, Italy
|