1
Regener P, Heffer N, Love SA, Petrini K, Pollick F. Differences in audiovisual temporal processing in autistic adults are specific to simultaneity judgments. Autism Res 2024;17:1041-1052. PMID: 38661256. DOI: 10.1002/aur.3134.
Abstract
Research has shown that children on the autism spectrum and adults with high levels of autistic traits are less sensitive to audiovisual asynchrony than their neurotypical peers. However, this evidence has been limited to simultaneity judgments (SJ), which require participants to consider the timing of two cues together. Given evidence of partly divergent perceptual and neural mechanisms involved in making temporal order judgments (TOJ) and SJ, and given that SJ require a more global type of processing, which may be impaired in autistic individuals, here we ask whether the observed differences in audiovisual temporal processing are task and stimulus specific. We examined the ability to detect audiovisual asynchrony in a group of 26 autistic adult males and a group of age- and IQ-matched neurotypical males. Participants were presented with beep-flash, point-light drumming, and face-voice displays with varying degrees of asynchrony and were asked to make SJ and TOJ. The results indicated that autistic participants were less able to detect audiovisual asynchrony than the control group, but this effect was specific to SJ and to more complex social stimuli (e.g., face-voice) with stronger semantic correspondence between the cues, requiring a more global type of processing. This indicates that audiovisual temporal processing is not generally different in autistic individuals and that a similar level of performance can be achieved by using a more local type of processing, informing multisensory integration theory as well as multisensory training aimed at aiding perceptual abilities in this population.
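The SJ/TOJ distinction above is usually quantified by fitting psychometric functions to the two response types: a Gaussian over stimulus onset asynchrony (SOA) for simultaneity judgments, and a cumulative Gaussian for temporal order judgments. A minimal sketch in Python, using invented illustrative data (not values from this study):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOAs in ms; negative = auditory stimulus leads the visual one.
# All response proportions below are made up for illustration.
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])

# SJ: proportion of "simultaneous" responses peaks near 0 ms -> Gaussian.
p_sync = np.array([0.05, 0.15, 0.55, 0.90, 0.95, 0.85, 0.50, 0.20, 0.05])

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gaussian, soa, p_sync, p0=[1.0, 0.0, 150.0])

# TOJ: proportion of "visual first" responses rises with SOA -> cumulative Gaussian.
p_vfirst = np.array([0.02, 0.05, 0.15, 0.35, 0.50, 0.70, 0.88, 0.95, 0.98])

def cum_gaussian(x, pss, jnd):
    return norm.cdf(x, loc=pss, scale=jnd)

(pss, jnd), _ = curve_fit(cum_gaussian, soa, p_vfirst, p0=[0.0, 100.0])

# mu/pss estimate the point of subjective simultaneity; sigma/jnd index sensitivity.
print(f"SJ: peak {mu:.0f} ms, width {sigma:.0f} ms; TOJ: PSS {pss:.0f} ms, JND {jnd:.0f} ms")
```

Lower sigma/JND values indicate higher sensitivity to asynchrony, which is the quantity compared between autistic and neurotypical groups in studies of this kind.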
Affiliation(s)
- Paula Regener
- Norwich Medical School, University of East Anglia, Norwich, UK
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Naomi Heffer
- School of Sciences, Bath Spa University, Bath, UK
- Department of Psychology, University of Bath, Bath, UK
- Scott A Love
- INRAE, CNRS, Université de Tours, PRC, Nouzilly, France
- Karin Petrini
- Department of Psychology, University of Bath, Bath, UK
- The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, UK
- Frank Pollick
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
2
Chen L. Synesthetic Correspondence: An Overview. Adv Exp Med Biol 2024;1437:101-119. PMID: 38270856. DOI: 10.1007/978-981-99-7611-9_7.
Abstract
Intramodal and cross-modal perceptual grouping, based on the spatial proximity and temporal closeness of multiple sensory stimuli, is an operational principle by which the brain builds a coherent and meaningful representation of a multisensory event or object. To investigate cross-modal perceptual grouping, researchers have employed paradigms such as spatial/temporal ventriloquism and cross-modal dynamic capture, revealing both the conditional constraints on, and the functional facilitation of, correspondences between sensory properties, supported by behavioral evidence, computational frameworks, and brain oscillation patterns. Synesthetic correspondence, a special type of cross-modal correspondence, can shape the efficiency and effect size of cross-modal interaction. For example, pitch and loudness in the auditory dimension, paired with size and brightness in the visual dimension, can modulate the strength of cross-modal temporal capture. This review summarizes the behavioral, psychophysical, and neurophysiological evidence on cross-modal perceptual grouping and synesthetic correspondence. Finally, it discusses potential applications (such as artificial synesthesia devices), how synesthetic correspondence interfaces with semantics (sensory linguistics), and promising research questions in the field.
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China.
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China.
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China.
- National Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China.
- National Engineering Laboratory for Big Data Analysis and Applications, Peking University, Beijing, China.
3
Pulliam G, Feldman JI, Woynaroski TG. Audiovisual multisensory integration in individuals with reading and language impairments: A systematic review and meta-analysis. Neurosci Biobehav Rev 2023;149:105130. PMID: 36933815. PMCID: PMC10243286. DOI: 10.1016/j.neubiorev.2023.105130.
Abstract
Differences in sensory function have been documented for a number of neurodevelopmental conditions, including reading and language impairments. Prior studies have measured audiovisual multisensory integration (i.e., the ability to combine inputs from the auditory and visual modalities) in these populations. The present study sought to systematically review and quantitatively synthesize the extant literature on audiovisual multisensory integration in individuals with reading and language impairments. A comprehensive search strategy yielded 56 reports, of which 38 were used to extract 109 group-difference and 68 correlational effect sizes. Overall, individuals with reading and language impairments differed from comparison groups on audiovisual integration. There was a nonsignificant trend towards moderation according to sample type (i.e., reading versus language), and there was publication/small-study bias for this model. Overall, there was a small but nonsignificant correlation between metrics of audiovisual integration and reading or language ability; this model was not moderated by sample or study characteristics, nor was there evidence of publication/small-study bias. Limitations and future directions for primary and meta-analytic research are discussed.
Affiliation(s)
- Grace Pulliam
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN 37232, USA
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN 37232, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN 37232, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; John A. Burns School of Medicine, University of Hawaii at Manoa, HI, USA
4
Wu H, Lu H, Lin Q, Zhang Y, Liu Q. Reduced audiovisual temporal sensitivity in Chinese children with dyslexia. Front Psychol 2023;14:1126720. PMID: 37151347. PMCID: PMC10157467. DOI: 10.3389/fpsyg.2023.1126720.
Abstract
Background: Temporal processing deficits for audiovisual cross-modal stimuli could affect the speed and accuracy of children's decoding. Aim: To investigate the characteristics of audiovisual temporal sensitivity (ATS) in Chinese children with and without developmental dyslexia, and its impact on reading ability. Methods: Audiovisual simultaneity judgment and temporal order judgment tasks were used to investigate the ATS of 106 Chinese children (53 with dyslexia) aged 8 to 12 and 37 adults without a history of dyslexia. The predictive effect of children's audiovisual temporal binding window on their reading ability, and the effect of the extra cognitive processing required by the temporal order judgment task on participants' ATS, were also investigated. Outcomes and results: With increasing inter-stimulus intervals, the percentage of synchronous responses declined more rapidly in adults than in children. Adults and typically developing children had significantly narrower temporal binding windows than children with dyslexia. The width of the visual-leading (visual-before-auditory) temporal binding window had a marginally significant predictive effect on children's reading fluency. Compared with the simultaneity judgment task, the extra cognitive processing required by the temporal order judgment task affected children's ATS. Conclusion and implications: The ATS of 8- to 12-year-old Chinese children is immature. Chinese children with dyslexia have lower ATS than their peers.
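The temporal binding window (TBW) discussed above is commonly derived from simultaneity-judgment data by fitting the proportion of "synchronous" responses across SOAs and reading off the SOA range where the fitted curve exceeds a criterion; the visual-leading side of the window can then be examined separately. A sketch with invented data (not this study's):

```python
import numpy as np
from scipy.optimize import curve_fit

# SOAs in ms; negative = auditory stimulus first. Proportions are illustrative.
soa = np.linspace(-500, 500, 11)
p_sync = np.array([0.04, 0.10, 0.30, 0.65, 0.92, 0.97, 0.90, 0.72, 0.45, 0.18, 0.06])

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gaussian, soa, p_sync, p0=[1.0, 0.0, 200.0])

# TBW: SOA range where the fitted curve exceeds a 75%-of-peak criterion.
crit = 0.75
half = sigma * np.sqrt(-2 * np.log(crit))
audio_lead = mu - half   # auditory-leading edge of the window
visual_lead = mu + half  # visual-leading edge (the side compared across groups above)
print(f"TBW ~ [{audio_lead:.0f}, {visual_lead:.0f}] ms, width {2 * half:.0f} ms")
```

A wider window (larger sigma, hence larger width) corresponds to lower audiovisual temporal sensitivity, which is the pattern the study reports for children with dyslexia relative to their peers.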
Affiliation(s)
- Huiduo Wu
- College of Child Development and Education, Zhejiang Normal University, Hangzhou, China
- Haidan Lu
- Faculty of Education, East China Normal University, Shanghai, China
- Qing Lin
- Department of Preschool Education, China Women’s University, Beijing, China
- Yuhong Zhang
- The College of Education Science, Xinjiang Normal University, Urumqi, China
- Qiaoyun Liu
- Faculty of Education, East China Normal University, Shanghai, China
- Correspondence: Qiaoyun Liu
5
Meilleur A, Foster NEV, Coll SM, Brambati SM, Hyde KL. Unisensory and multisensory temporal processing in autism and dyslexia: A systematic review and meta-analysis. Neurosci Biobehav Rev 2020;116:44-63. PMID: 32544540. DOI: 10.1016/j.neubiorev.2020.06.013.
Abstract
This study presents a comprehensive systematic review and meta-analysis of temporal processing in autism spectrum disorder (ASD) and developmental dyslexia (DD), two neurodevelopmental disorders in which temporal processing deficits have been highly researched. The results provide strong evidence for impairments in temporal processing in both ASD (g = 0.48) and DD (g = 0.82), as measured by judgments of temporal order and simultaneity. In individual analyses, multisensory temporal processing was impaired for both ASD and DD, and unisensory auditory, visual and tactile processing were all impaired in DD. In ASD, speech stimuli showed moderate impairment effect sizes, whereas nonspeech stimuli showed small effects. Greater reading and spelling skills in DD were associated with greater temporal precision. Temporal deficits did not show changes with age in either disorder. In addition to more clearly defining temporal impairments in ASD and DD, the results highlight common and distinct patterns of temporal processing between these disorders. Deficits are discussed in relation to existing theoretical models, and recommendations are made for future research.
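Summary effects such as the g = 0.48 (ASD) and g = 0.82 (DD) reported above come from pooling per-study standardized mean differences; a common approach is a DerSimonian-Laird random-effects model. A sketch using invented study-level values (not those from this meta-analysis):

```python
import numpy as np

# Hypothetical per-study Hedges' g values and their sampling variances.
g = np.array([0.30, 0.55, 0.80, 0.45, 0.62])
v = np.array([0.05, 0.08, 0.04, 0.10, 0.06])

# Fixed-effect weights and Cochran's Q heterogeneity statistic.
w = 1 / v
fixed = np.sum(w * g) / np.sum(w)
q = np.sum(w * (g - fixed) ** 2)
df = len(g) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled effect, and 95% confidence interval.
w_re = 1 / (v + tau2)
pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled g = {pooled:.2f} (95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```

When the between-study variance estimate is zero, the random-effects result collapses to the fixed-effect pooled value; larger tau-squared widens the confidence interval, reflecting heterogeneity across studies.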
Affiliation(s)
- Alexa Meilleur
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada.
- Nicholas E V Foster
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Sarah-Maude Coll
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Simona M Brambati
- Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, 4545 Chemin Queen Mary, Montréal, QC, H3W 1W4, Canada
- Krista L Hyde
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
6
Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020;140:107396. PMID: 32087206. DOI: 10.1016/j.neuropsychologia.2020.107396.
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the Temporal Binding Window (TBW). The width of the TBW, a proxy of audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to the changing and multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and rebounding in size again in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly flexible via perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
7
Wan Y, Chen L. Temporal Reference, Attentional Modulation, and Crossmodal Assimilation. Front Comput Neurosci 2018;12:39. PMID: 29922143. PMCID: PMC5996128. DOI: 10.3389/fncom.2018.00039.
Abstract
The crossmodal assimilation effect refers to the phenomenon whereby the ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception of subsequent task-relevant target events (such as a visual interval) in another sensory modality. In the current experiments, using the visual Ternus display, we examined the role of temporal reference, operationalized as the time information accumulated before the onset of the target event, as well as attentional modulation, in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean of the auditory inter-intervals, and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence bias the percept of visual motion: longer mean (or last) intervals elicited more reports of group motion, whereas shorter mean (or last) auditory intervals gave rise to a more dominant percept of element motion. Importantly, observers showed dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated from the preceding sound sequence by a long gap interval, the assimilation effect of the ensemble mean was reduced. Our findings suggest that crossmodal assimilation relies on a suitable temporal reference at the adaptation level, and reveal a general temporal perceptual grouping principle underlying complex audiovisual interactions in everyday dynamic situations.
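The "ensemble (geometric) mean" of the auditory inter-intervals mentioned above is straightforward to compute; the interval values below are invented for illustration:

```python
import math

# Hypothetical auditory inter-intervals (ms) preceding the visual Ternus target.
intervals_ms = [80, 120, 150, 200, 100]

# Arithmetic mean versus the geometric (ensemble) mean used in the study.
arith_mean = sum(intervals_ms) / len(intervals_ms)
geo_mean = math.exp(sum(math.log(i) for i in intervals_ms) / len(intervals_ms))

# The last interval in the sequence also biases the percept, per the abstract.
last = intervals_ms[-1]
print(f"arithmetic {arith_mean:.0f} ms, geometric {geo_mean:.0f} ms, last {last} ms")
```

The geometric mean is never larger than the arithmetic mean and is less distorted by occasional long intervals, which is one reason it is often used as the ensemble statistic for duration sequences.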
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
8
Francisco AA, Groen MA, Jesse A, McQueen JM. Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. Learn Individ Differ 2017. DOI: 10.1016/j.lindif.2017.01.003.
9
Guo L, Bao M, Guan L, Chen L. Cognitive Styles Differentiate Crossmodal Correspondences Between Pitch Glide and Visual Apparent Motion. Multisens Res 2017;30:363-385. PMID: 31287072. DOI: 10.1163/22134808-00002556.
Abstract
Crossmodal correspondences are the automatic associations that most people make between different basic sensory stimulus attributes, dimensions, or features. For instance, people often show a systematic tendency to associate moving objects with changing pitches. Cognitive styles are defined as an individual's consistent approach to thinking about, perceiving, and remembering information, and they reflect qualitative rather than quantitative differences between individuals in their thinking processes. Here we asked whether cognitive styles play a role in modulating crossmodal interaction. We used the visual Ternus display, since it elicits two distinct apparent motion percepts: element motion (with a shorter interval between the two Ternus frames) and group motion (with a longer interval between the two frames). We examined the audiovisual correspondences between the visual Ternus movement directions (upward or downward) and the changing pitches of concurrent glides (ascending or descending frequency). Moreover, we measured each participant's cognitive style with the Embedded Figures Test. The results showed that congruent correspondence between pitch-ascending (descending) glides and upward (downward) visual motion directions led to a more dominant percept of 'element motion', and this effect was observed mainly in the field-independent group. Importantly, field-independent participants demonstrated high efficiency in identifying the properties of audiovisual events and applying crossmodal correspondence in crossmodal interaction. The results suggest that cognitive styles can differentiate crossmodal correspondences in crossmodal interaction.
Collapse
Affiliation(s)
- Lu Guo
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Ming Bao
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Luyang Guan
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; Key Laboratory of Machine Perception, Peking University, Beijing 100871, China