1
Assaneo MF. Reply to: The timing of speech-to-speech synchronization is governed by the P-center. Commun Biol 2025; 8:231. PMID: 39948423; PMCID: PMC11825936; DOI: 10.1038/s42003-025-07546-6.
Affiliation(s)
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico.
2
Zhu M, Chen F, Chen W, Zhang Y. The Impact of Executive Functions and Musicality on Speech Auditory-Motor Synchronization in Adults Who Stutter. J Speech Lang Hear Res 2025; 68:54-68. PMID: 39680799; DOI: 10.1044/2024_jslhr-24-00141.
Abstract
PURPOSE: Stuttering is a neurodevelopmental disorder that disrupts the timing and rhythmic flow of speech production. Growing evidence indicates that abnormal interactions between the auditory and motor cortices contribute to the development of stuttering. The present study investigated speech auditory-motor synchronization in adults who stutter, and the factors that influence it, in comparison with fluent speakers. METHOD: Sixteen Mandarin-speaking adults who stutter and 19 fluent controls, matched for age, gender, and years of musical training, participated in the study. Their ability to synchronize vocal speech production with accelerating auditory sequences was assessed using the spontaneous speech-to-speech synchronization test (SSS test). Additionally, all participants completed a series of standardized behavioral tests evaluating musicality and executive functions. RESULTS: Stutterers achieved significantly lower phase-locking values in the SSS test than nonstuttering controls, indicating a potential rhythmic processing deficit in developmental stuttering. Moreover, the strength of speech auditory-motor synchronization in stutterers was significantly associated with their performance on tasks such as digit span and nonword repetition, further emphasizing the link between rhythmic processing and working memory. CONCLUSIONS: This study provides compelling evidence for a speech rhythmic deficit in individuals who stutter by incorporating auditory-motor processes. The findings offer insights into the intricate relationship between language and the brain and point to the potential benefits of cognitive training for speech intervention in individuals who stutter. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.27984362
Affiliation(s)
- Min Zhu
- School of Foreign Languages, Hunan University, Changsha, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Weiping Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, The University of Minnesota, Twin Cities
3
Plastira MN, Michaelides MP, Avraamides MN. Music and speech time perception of musically trained individuals: The effects of audio type, duration of musical training, and rhythm perception. Q J Exp Psychol (Hove) 2024; 77:1835-1845. PMID: 37742039; PMCID: PMC11373153; DOI: 10.1177/17470218231205857.
Abstract
The perception of time is a subjective experience influenced by factors such as individual psychology, external stimuli, and personal experience. It is often assessed with the reproduction task, in which individuals estimate and reproduce the duration of specific time intervals. In the current study, we examined the ability of 97 musically trained participants to reproduce the durations of temporal intervals filled with music or speech stimuli. The results revealed a consistent pattern of underestimated durations, and an association between the duration of musical training and the accuracy of reproducing both music and speech tracks. In addition, speech tracks were overall reproduced more accurately, and as longer, than music tracks. Structural models suggested the presence of two highly correlated dimensions of time perception for speech and music stimuli that were related to the duration of musical training but not to self-reported rhythm perception. The possible effects of arousal and pleasantness of stimuli on time perception are discussed within the framework of an internal clock model.
Affiliation(s)
- Miria N Plastira
- Department of Psychology, University of Cyprus, Nicosia, Cyprus
- CYENS Centre of Excellence, Nicosia, Cyprus
- Marios N Avraamides
- Department of Psychology, University of Cyprus, Nicosia, Cyprus
- CYENS Centre of Excellence, Nicosia, Cyprus
4
Zhu M, Chen F, Shi C, Zhang Y. Amplitude envelope onset characteristics modulate phase locking for speech auditory-motor synchronization. Psychon Bull Rev 2024; 31:1661-1669. PMID: 38227125; DOI: 10.3758/s13423-023-02446-4.
Abstract
The spontaneous speech-to-speech synchronization (SSS) test has been shown to be an effective behavioral method for estimating cortical speech auditory-motor coupling strength through the phase-locking value (PLV) between auditory input and motor output. This study investigated how variations in the amplitude envelope onset of the auditory speech signal influence speech auditory-motor synchronization. Sixty Mandarin-speaking adults listened to a stream of randomly presented syllables at an increasing speed while concurrently whispering in synchrony with the rhythm of the auditory stimuli, whose onset consistency was manipulated across aspirated, unaspirated, and mixed conditions. The participants' PLVs for the three conditions in the SSS test were derived and compared. Results showed that syllable rise time affected speech auditory-motor synchronization in a bifurcated fashion: PLVs were significantly higher in the temporally more consistent conditions (aspirated or unaspirated) than in the less consistent condition (mixed) for high synchronizers, whereas low synchronizers tended to be immune to onset consistency. Overall, these results demonstrate how syllable onset consistency in the rise time of the amplitude envelope can modulate the strength of speech auditory-motor coupling. This study supports the application of the SSS test to examine individual differences in the integration of the perception and production systems, with implications for individuals with speech and language disorders who have difficulty processing speech onset characteristics such as rise time.
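For readers unfamiliar with the PLV referenced above: in its standard form it is the length of the mean resultant vector of the phase differences between two signals. A minimal sketch in Python of this generic computation, assuming Hilbert-transform phase extraction from pre-computed, equal-length amplitude envelopes (an illustration of the textbook formula, not the authors' exact analysis pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(env_stimulus: np.ndarray, env_speech: np.ndarray) -> float:
    """PLV = |mean(exp(i * (phi1 - phi2)))| over time.

    phi1 and phi2 are the instantaneous phases of the two amplitude
    envelopes, taken as the angle of their analytic (Hilbert) signals.
    Returns a value in [0, 1]; higher means stronger synchronization.
    Assumes both envelopes are equally sampled and of equal length.
    """
    phi1 = np.angle(hilbert(env_stimulus))
    phi2 = np.angle(hilbert(env_speech))
    return float(np.abs(np.mean(np.exp(1j * (phi1 - phi2)))))
```

In practice, envelopes are typically band-pass filtered around the syllabic rate (roughly 4-5 Hz) before phase extraction; that preprocessing step is omitted here.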
Affiliation(s)
- Min Zhu
- School of Foreign Languages, Hunan University, Changsha, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Chenxin Shi
- School of Foreign Languages, Hunan University, Changsha, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, The University of Minnesota, Twin Cities, MN, USA
5
Chang A, Teng X, Assaneo MF, Poeppel D. The human auditory system uses amplitude modulation to distinguish music from speech. PLoS Biol 2024; 22:e3002631. PMID: 38805517; PMCID: PMC11132470; DOI: 10.1371/journal.pbio.3002631.
Abstract
Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music from speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized, ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was used consistently by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgments, regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as low level as AM to distinguish music from speech, a simple principle that motivates both neurophysiological and evolutionary experiments and speculation.
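To make the central measure concrete, here is a minimal sketch of estimating a signal's peak AM frequency from its amplitude envelope. This is a generic envelope-spectrum approach, not the authors' synthesis or analysis pipeline; the function name and the 32 Hz search cap are illustrative assumptions:

```python
import numpy as np
from scipy.signal import hilbert, welch

def peak_am_frequency(audio: np.ndarray, fs: float, fmax: float = 32.0) -> float:
    """Dominant amplitude-modulation rate of an audio signal, in Hz.

    1. Extract the amplitude envelope as the magnitude of the analytic signal.
    2. Remove the DC offset so the spectral peak reflects modulation, not level.
    3. Return the strongest frequency in the envelope's power spectrum below fmax.
    """
    envelope = np.abs(hilbert(audio))
    envelope = envelope - envelope.mean()
    freqs, psd = welch(envelope, fs=fs, nperseg=min(len(envelope), int(fs * 4)))
    band = (freqs > 0) & (freqs <= fmax)
    return float(freqs[band][np.argmax(psd[band])])
```

Under this kind of measure, conversational speech typically peaks around 4-5 Hz while much music peaks lower, which is the contrast the study exploits.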
Affiliation(s)
- Andrew Chang
- Department of Psychology, New York University, New York, New York, United States of America
- Xiangbin Teng
- Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
- M. Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
- Ernst Strüngmann Institute for Neuroscience, Frankfurt am Main, Germany
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, United States of America
- Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
6
Gómez Varela I, Orpella J, Poeppel D, Ripollés P, Assaneo MF. Syllabic rhythm and prior linguistic knowledge interact with individual differences to modulate phonological statistical learning. Cognition 2024; 245:105737. PMID: 38342068; DOI: 10.1016/j.cognition.2024.105737.
Abstract
Phonological statistical learning, our ability to extract meaningful regularities from spoken language, is considered critical in the early stages of language acquisition, in particular for helping to identify discrete words in continuous speech. Most phonological statistical learning studies use an experimental task introduced by Saffran et al. (1996), in which the syllables forming the words to be learned are presented continuously and isochronously. This raises the question of the extent to which this purportedly powerful learning mechanism is robust to the kinds of rhythmic variability that characterize natural speech. Here, we tested participants with arrhythmic, semi-rhythmic, and isochronous speech during learning. In addition, we investigated how input rhythmicity interacts with two other factors previously shown to modulate learning: prior knowledge (syllable order plausibility with respect to participants' first language) and learners' speech auditory-motor synchronization ability. We show that words are extracted by all learners even when the speech input is completely arrhythmic. Interestingly, high auditory-motor synchronization ability increases statistical learning when the speech input is temporally more predictable, but only when prior knowledge can also be used. This suggests an additional learning mechanism based on predictions not only about when upcoming speech will occur but also about what it will be.
Affiliation(s)
- Ireri Gómez Varela
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico
- Joan Orpella
- Department of Psychology, New York University, New York, NY, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany; Center for Language, Music and Emotion (CLaME), New York University, New York, NY, USA; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Pablo Ripollés
- Department of Psychology, New York University, New York, NY, USA; Center for Language, Music and Emotion (CLaME), New York University, New York, NY, USA; Music and Audio Research Lab (MARL), New York University, New York, NY, USA; Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico
7
Lee HH, Groves K, Ripollés P, Carrasco M. Audiovisual integration in the McGurk effect is impervious to music training. Sci Rep 2024; 14:3262. PMID: 38332159; PMCID: PMC10853564; DOI: 10.1038/s41598-024-53593-0.
Abstract
The McGurk effect refers to an audiovisual speech illusion in which discrepant auditory and visual syllables produce a fused percept between the visual and auditory components. However, little is known about how individual differences contribute to the McGurk effect. Here, we examined whether music training experience, which involves audiovisual integration, can modulate the McGurk effect. Seventy-three participants completed the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire to evaluate their music expertise on a continuous scale. The Gold-MSI considers participants' daily-life exposure to music learning experiences (formal and informal), instead of merely classifying people into groups according to how many years they have been trained in music. Participants were instructed to report, via a 3-alternative forced-choice task, "what a person said": /Ba/, /Ga/, or /Da/. The experiment consisted of 96 audiovisual congruent trials and 96 audiovisual incongruent (McGurk) trials. We observed no significant correlations between susceptibility to the McGurk effect and the subscales of the Gold-MSI (active engagement, perceptual abilities, music training, singing abilities, emotion) or the general musical sophistication composite score. Together, these findings suggest that music training experience does not modulate audiovisual integration in speech as reflected by the McGurk effect.
Affiliation(s)
- Hsing-Hao Lee
- Department of Psychology, New York University, New York, USA
- Karleigh Groves
- Department of Psychology, New York University, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, USA
- Music and Audio Research Lab (MARL), New York University, New York, USA
- Pablo Ripollés
- Department of Psychology, New York University, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, USA
- Music and Audio Research Lab (MARL), New York University, New York, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, USA
- Center for Neural Science, New York University, New York, USA
8
Barchet AV, Henry MJ, Pelofi C, Rimmele JM. Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music. Commun Psychol 2024; 2:2. PMID: 39242963; PMCID: PMC11332030; DOI: 10.1038/s44271-023-00053-6.
Abstract
Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in their dominant rhythmic structure. We investigated the influence of different motor effectors on rate-specific processing in both domains. A perception task and a synchronization task, involving syllable and piano tone sequences and motor effectors typically associated with speech (whispering) and music (finger-tapping), were tested at slow (~2 Hz) and fast (~4.5 Hz) rates. Although synchronization performance was generally better at slow rates, the motor effectors exhibited specific rate preferences: finger-tapping outperformed whispering at slow but not at faster rates, with synchronization being effector-dependent at slow rates but highly correlated across effectors at faster rates. Perception of speech and music was better at different rates and was predicted by a fast general synchronization component and a slow finger-tapping synchronization component. Our data suggest partially independent rhythmic timing mechanisms for speech and music, possibly related to differential recruitment of cortical motor circuitry.
Affiliation(s)
- Alice Vivien Barchet
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Molly J Henry
- Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Claire Pelofi
- Music and Audio Research Laboratory, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Johanna M Rimmele
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
9
Sjuls GS, Vulchanova MD, Assaneo MF. Replication of population-level differences in auditory-motor synchronization ability in a Norwegian-speaking population. Commun Psychol 2023; 1:47. PMID: 39242904; PMCID: PMC11332004; DOI: 10.1038/s44271-023-00049-2.
Abstract
The Speech-to-Speech Synchronization test is a powerful tool for assessing individuals' auditory-motor synchronization ability, namely the ability to synchronize one's own utterances to the rhythm of an external speech signal. Recent studies using the test have revealed that participants fall into two distinct groups, high synchronizers and low synchronizers, with significant differences in their neural (structural and functional) underpinnings and in their outcomes on several behavioral tasks. It is therefore critical to assess whether this population-level distribution (two groups rather than a normal distribution) holds across populations of speakers. Here we demonstrate that the previous results replicate in a Norwegian-speaking population, indicating that the test generalizes beyond the previously tested populations of native English and German speakers.
Affiliation(s)
- Guro S Sjuls
- Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Department of Language and Literature, Trondheim, Norway
- Mila D Vulchanova
- Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Department of Language and Literature, Trondheim, Norway
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Santiago de Querétaro, Mexico
10
Mares C, Echavarría Solana R, Assaneo MF. Auditory-motor synchronization varies among individuals and is critically shaped by acoustic features. Commun Biol 2023; 6:658. PMID: 37344562; DOI: 10.1038/s42003-023-04976-y.
Abstract
The ability to synchronize body movements with quasi-regular auditory stimuli represents a fundamental human trait at the core of speech and music. Despite the long history of research on this ability, little attention has been paid to how the acoustic features of the stimuli and individual differences can modulate auditory-motor synchrony. Here, by exploring auditory-motor synchronization abilities across different effectors and types of stimuli, we reveal that this capability is more restricted than previously assumed. While the general population can synchronize to sequences composed of repetitions of the same acoustic unit, synchrony in a subgroup of participants is impaired when the unit's identity varies across the sequence. In addition, synchronization in this group can be temporarily restored by priming with a facilitator stimulus. Auditory-motor integration is stable across effectors, supporting the hypothesis of a central clock mechanism subserving the different articulators but critically shaped by the acoustic features of the stimulus and by individual abilities.
Affiliation(s)
- Cecilia Mares
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico
- M Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Juriquilla, Querétaro, Mexico
11
Lubinus C, Keitel A, Obleser J, Poeppel D, Rimmele JM. Explaining flexible continuous speech comprehension from individual motor rhythms. Proc Biol Sci 2023; 290:20222410. PMID: 36855868; PMCID: PMC9975658; DOI: 10.1098/rspb.2022.2410.
Abstract
When speech is too fast, tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Speech comprehension is thus limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength partly shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of those systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (digit span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which individual flexibility of not only the motor system but also auditory-motor synchronization may play a modulatory role.
Affiliation(s)
- Christina Lubinus
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Anne Keitel
- Psychology, University of Dundee, Dundee DD1 4HN, UK
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience (in Cooperation with Max Planck Society), Frankfurt am Main, Germany
- Johanna M. Rimmele
- Department of Neuroscience and Department of Cognitive Neuropsychology, Max-Planck-Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
12
Chen Y, Tang E, Ding H, Zhang Y. Auditory Pitch Perception in Autism Spectrum Disorder: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2022; 65:4866-4886. PMID: 36450443; DOI: 10.1044/2022_jslhr-22-00254.
Abstract
PURPOSE: Pitch plays an important role in the auditory perception of music and language. This study provides a systematic review with meta-analysis to investigate whether individuals with autism spectrum disorder (ASD) have enhanced pitch processing ability and to identify the potential factors associated with processing differences between individuals with ASD and neurotypicals. METHOD: We conducted a systematic search of six major electronic databases, focusing on studies that used nonspeech stimuli, to provide a qualitative and quantitative assessment of existing studies on pitch perception in autism. We identified potential participant- and methodology-related moderators and conducted meta-regression analyses using mixed-effects models. RESULTS: On the basis of 22 studies with a total of 464 participants with ASD, we obtained a small-to-medium positive effect size (g = 0.26) in support of enhanced pitch perception in ASD. Moreover, the mean age and nonverbal IQ of participants significantly moderated the between-studies heterogeneity. CONCLUSIONS: Our study provides the first meta-analysis of auditory pitch perception in ASD and demonstrates the existence of different developmental trajectories between autistic individuals and neurotypicals. In addition to age, nonverbal ability was found to be a significant contributor to the lower-level/local processing bias in ASD. We highlight the need for further investigation of pitch perception in ASD under challenging listening conditions. Future neurophysiological and brain imaging studies with a longitudinal design are also needed to better understand the underlying neural mechanisms of atypical pitch processing in ASD and to help guide auditory-based interventions for improving language and social functioning. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21614271
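For reference, the effect size reported above (g = 0.26) is Hedges' g, a standardized mean difference with a small-sample bias correction. A minimal sketch of the textbook computation from group summary statistics (illustrative only; the meta-analysis itself pools such per-study estimates with mixed-effects weights):

```python
import numpy as np

def hedges_g(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Standardized mean difference (Cohen's d) corrected for
    small-sample bias with the factor J = 1 - 3 / (4 * df - 1)."""
    df = n1 + n2 - 2
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (mean1 - mean2) / pooled_sd
    j = 1.0 - 3.0 / (4.0 * df - 1.0)  # Hedges' correction factor
    return j * d
```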
Affiliation(s)
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Enze Tang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis
13
Assaneo MF, Ripollés P, Tichenor SE, Yaruss JS, Jackson ES. The Relationship Between Auditory-Motor Integration, Interoceptive Awareness, and Self-Reported Stuttering Severity. Front Integr Neurosci 2022; 16:869571. PMID: 35600224; PMCID: PMC9120354; DOI: 10.3389/fnint.2022.869571.
Abstract
Stuttering is a neurodevelopmental speech disorder associated with motor timing that differs from that of non-stutterers. While neurodevelopmental disorders impacted by timing are associated with compromised auditory-motor integration and interoception, the interplay between these abilities and stuttering remains unexplored. Here, we studied the relationships between speech auditory-motor synchronization (a proxy for auditory-motor integration), interoceptive awareness, and self-reported stuttering severity using remotely delivered assessments. Results indicate that, in general, stutterers and non-stutterers exhibit similar auditory-motor integration and interoceptive abilities. However, while speech auditory-motor synchrony (i.e., integration) and interoceptive awareness were not related to each other, speech synchrony was inversely related to speakers' ratings of their stuttering severity as perceived by others, and interoceptive awareness was inversely related to self-reported stuttering impact. These findings support claims that stuttering is a heterogeneous, multi-faceted disorder: the uncorrelated auditory-motor integration and interoception measurements predicted different aspects of stuttering, suggesting two unrelated sources of timing differences associated with the disorder.
Affiliation(s)
- M. Florencia Assaneo
- Institute of Neurobiology, National Autonomous University of Mexico, Querétaro, Mexico
- Pablo Ripollés
- Department of Psychology, New York University, New York, NY, United States
- Music and Audio Research Lab, New York University, New York, NY, United States
- Center for Music, Language and Emotion, New York University, New York, NY, United States
- Seth E. Tichenor
- Department of Speech-Language Pathology, Duquesne University, Pittsburgh, PA, United States
- J. Scott Yaruss
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, United States
- Eric S. Jackson
- Department of Communicative Sciences and Disorders, New York University, New York, NY, United States