1. Poole D, Gowen E, Poliakoff E, Lambrechts A, Jones LA. When 2 become 1: Autistic simultaneity judgements about asynchronous audiovisual speech. Q J Exp Psychol (Hove) 2024;77:1865-1882. [PMID: 37593957] [DOI: 10.1177/17470218231197518]
Abstract
It has been proposed that autistic people experience a temporal distortion whereby the temporal binding window of multisensory integration is extended. Research to date has focused on autistic children, so whether these differences persist into adulthood remains unknown. In addition, the possibility that the previous observations arose from between-group differences in response bias, rather than perceptual differences, has not been addressed. Participants completed simultaneity judgements of audiovisual speech stimuli across a range of stimulus-onset asynchronies. Response times and accuracy data were fitted to a drift-diffusion model so that the drift rate (a measure of processing efficiency) and starting point (response bias) could be estimated. In Experiment 1, we tested a sample of non-autistic adults who completed the Autism Quotient questionnaire. Autism Quotient score was not correlated with either drift rate or response bias, nor were there between-group differences when splitting the sample at the first and third quantiles of scores. In Experiment 2, we compared the performance of autistic adults with that of a group of non-autistic adults. There were no between-group differences in either drift rate or starting point. The results of this study do not support the previous suggestion that autistic people have an extended temporal binding window for audiovisual speech. In addition, exploratory analysis revealed that operationalising the temporal binding window in different ways influenced whether a group difference was observed, which is an important consideration for future work.
Affiliation(s)
- Daniel Poole
- School of Health Sciences, The University of Manchester, Manchester, UK
- Department of Psychology, University of Sheffield, Sheffield, UK
- Emma Gowen
- School of Health Sciences, The University of Manchester, Manchester, UK
- Ellen Poliakoff
- School of Health Sciences, The University of Manchester, Manchester, UK
- Anna Lambrechts
- Autism Research Group, City, University of London, London, UK
- Luke A Jones
- School of Health Sciences, The University of Manchester, Manchester, UK
2. Zhao J. Memory, attention and creativity as cognitive processes in musical performance: A case study of students and professionals among non-musicians and musicians. Atten Percept Psychophys 2024. [PMID: 39174815] [DOI: 10.3758/s13414-024-02944-0]
Abstract
This research examines how cognitive processes such as memory, attention, and creativity differ between students and professionals, and between musicians and non-musicians, in the context of musical performance. The purpose of the study was to evaluate and compare the roles of memory, attention, and creativity as cognitive processes in musical performance, focusing on the differences between non-musicians and musicians. The sample comprised 400 individuals, students and professionals, specialising in music or economics. The research instruments were the Wechsler Memory Scale, the Conners Performance Test, and the Torrance Tests of Creative Thinking. Music students possessed better-developed auditory and short-term memory, while professional musicians had better auditory, visual working, and short-term memory. Analysis of attention revealed that music students scored better than non-musicians on all four aspects: inattention, impulsivity, sustained attention, and vigilance. For professionals, the key aspects were impulsivity and sustained attention, with musicians showing better results. Creative thinking was the only factor on which the differences were statistically significant across all five scales, and the findings indicated that creativity was better developed among musicians. This study provides an in-depth analysis and adds new knowledge to the existing literature and empirical data on the cognitive processes associated with musical performance, focusing on memory, attention, and creativity. By examining the differences between non-musicians and musicians, as well as students and professionals, the study provides insight into how musical performance can be used to develop these cognitive processes.
Affiliation(s)
- Jingtao Zhao
- Mykola Lysenko Lviv National Academy of Music, Lviv Vocal Room, Ostapa Nyzhankivskoho srt., 5, Lviv, 79000, Ukraine.
3. Cavicchioli M, Santoni A, Chiappetta F, Deodato M, Di Dona G, Scalabrini A, Galli F, Ronconi L. Psychological dissociation and temporal integration/segregation across the senses: An experimental study. Conscious Cogn 2024;124:103731. [PMID: 39096823] [DOI: 10.1016/j.concog.2024.103731]
Abstract
No studies have experimentally tested how the temporal integration/segregation of sensory inputs might be linked to the emergence of dissociative experiences and alterations of emotional functioning. Thirty-six participants completed three sensory integration tasks. Psychometric thresholds were estimated as indexes of temporal integration/segregation processes. We collected self-report measures of pre-task trait levels of dissociation, as well as pre- to post-task changes in both dissociation and emotionality. An independent sample of 21 subjects completed a control experiment administering the Attention Network Test (ANT). Results showed: (i) a significant increase in dissociative experiences after completion of the sensory integration tasks, but not after the ANT; (ii) that subjective thresholds predicted the emergence of dissociative states; and (iii) that temporal integration efforts affected positive emotionality, which was explained by the extent of task-dependent dissociative states. The present findings reveal that dissociation could be understood in terms of an imbalance between "hyper-segregation" and "hyper-integration" processes.
Affiliation(s)
- Marco Cavicchioli
- Department of Dynamic and Clinical Psychology, and Health Studies, Faculty of Medicine and Psychology, SAPIENZA University of Rome, Italy; Faculty of Psychology, Sigmund Freud University, Ripa di Porta Ticinese 77, Milan, Italy.
- Alessia Santoni
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy
- Michele Deodato
- Psychology Program, Division of Science, New York University Abu Dhabi, United Arab Emirates
- Giuseppe Di Dona
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy
- Andrea Scalabrini
- Department of Human and Social Science, University of Bergamo, Mental Health, Bergamo, Italy
- Federica Galli
- Department of Dynamic and Clinical Psychology, and Health Studies, Faculty of Medicine and Psychology, SAPIENZA University of Rome, Italy
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy
4. Vogler NW, Chen R, Virkler A, Tu VY, Gottfried JA, Geffen MN. Direct piriform-to-auditory cortical projections shape auditory-olfactory integration. bioRxiv [Preprint] 2024:2024.07.11.602976. [PMID: 39071445] [PMCID: PMC11275881] [DOI: 10.1101/2024.07.11.602976]
Abstract
In a real-world environment, the brain must integrate information from multiple sensory modalities, including the auditory and olfactory systems. However, little is known about the neuronal circuits governing how odors influence and modulate sound processing. Here, we investigated the mechanisms underlying auditory-olfactory integration using anatomical, electrophysiological, and optogenetic approaches, focusing on the auditory cortex as a key locus for cross-modal integration. First, retrograde and anterograde viral tracing strategies revealed a direct projection from the piriform cortex to the auditory cortex. Next, using in vivo electrophysiological recordings of neuronal activity in the auditory cortex of awake mice, we found that odor stimuli modulate auditory cortical responses to sound. Finally, we used in vivo optogenetic manipulations during electrophysiology to demonstrate that olfactory modulation in auditory cortex, specifically, odor-driven enhancement of sound responses, depends on direct input from the piriform cortex. Together, our results identify a novel cortical circuit shaping olfactory modulation in the auditory cortex, shedding new light on the neuronal mechanisms underlying auditory-olfactory integration.
Affiliation(s)
- Nathan W. Vogler
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Ruoyi Chen
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Alister Virkler
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania
- Violet Y. Tu
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
- Jay A. Gottfried
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania
- Maria N. Geffen
- Department of Otorhinolaryngology, Perelman School of Medicine, University of Pennsylvania
5. Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024;162:105711. [PMID: 38729280] [DOI: 10.1016/j.neubiorev.2024.105711]
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities. However, its developmental trajectory is still little understood. This systematic review investigates the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW) - the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches of the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was performed independently by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy.
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
6. Sasaoka T, Hirose K, Maekawa T, Inui T, Yamawaki S. The anterior cingulate cortex is involved in intero-exteroceptive integration for spatial image transformation of the self-body. Neuroimage 2024;293:120634. [PMID: 38705431] [DOI: 10.1016/j.neuroimage.2024.120634]
Abstract
Spatial image transformation of the self-body is a fundamental function of visual perspective-taking. Recent research underscores the significance of intero-exteroceptive information integration in constructing representations of our embodied self. This raises the intriguing hypothesis that interoceptive processing might be involved in the spatial image transformation of the self-body. To test this hypothesis, the present study used functional magnetic resonance imaging to measure brain activity during an arm laterality judgment (ALJ) task, in which participants discerned whether the outstretched arm of a human figure, viewed from the front or back, was the right or the left. Reaction times in the ALJ task were longer when the stimulus was presented at orientations of 0°, 90°, and 270° relative to the upright orientation, and when the front view rather than the back view was presented. Reflecting the increased reaction times, increased brain activity was observed in a cluster centered on the dorsal anterior cingulate cortex (ACC), suggesting that this activation reflects the involvement of an embodied simulation in ALJ. Furthermore, this cluster overlapped, within the pregenual ACC, with regions where the difference in activation between the front and back views correlated positively with participants' interoceptive sensitivity, as assessed through the heartbeat discrimination task. These results suggest that the ACC plays an important role in integrating intero-exteroceptive cues to spatially transform the image of our self-body.
Affiliation(s)
- Takafumi Sasaoka
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan.
- Kenji Hirose
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan; Center for Human Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Kita 12, Nishi 7, Kita-ku, Sapporo, Hokkaido 060-0812, Japan
- Toru Maekawa
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan
- Toshio Inui
- Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Shigeto Yamawaki
- Center for Brain, Mind, and KANSEI Sciences Research, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima, Hiroshima 734-8551, Japan
7. Yan D, Seki A. The Role of Letter-Speech Sound Integration in Native and Second Language Reading: A Study in Native Japanese Readers Learning English. J Cogn Neurosci 2024;36:1123-1140. [PMID: 38437176] [DOI: 10.1162/jocn_a_02137]
Abstract
The automatic activation of letter-speech sound (L-SS) associations is a vital step in typical reading acquisition. However, the contribution of L-SS integration to nonalphabetic native and alphabetic second language (L2) reading remains unclear. This study explored whether L-SS integration plays a similar role in a nonalphabetic language as in alphabetic languages, and its contribution to L2 reading among native Japanese-speaking adults with varying English proficiency. A priming paradigm in Japanese and English was administered, presenting visual letters or symbols followed by auditory sounds. We compared behavioral and event-related responses elicited by congruent letter-sound pairs, incongruent pairs, and a baseline condition (symbol-sound pairs). The behavioral experiment revealed shorter RTs in the congruent condition for both the Japanese and English tasks, suggesting a facilitation effect of congruency. The ERP results showed an increased early N1 response to Japanese congruent pairs compared with the corresponding incongruent stimuli at the left frontotemporal electrodes. Interestingly, advanced English learners exhibited greater activity in bilateral but predominantly right-lateralized frontotemporal regions for the congruent condition within the N1 time window. Moreover, an enhanced P2 response to congruent pairs was observed in intermediate English learners. These findings indicate that, despite deviations from native language processing, advanced learners may successfully integrate letters and sounds during English reading, whereas intermediate learners may encounter difficulty achieving L-SS integration when reading L2. Furthermore, our results suggest that L2 proficiency may affect the level of automaticity in L-SS integration, with the right P2 congruency effect playing a compensatory role for intermediate learners.
Affiliation(s)
- Dongyang Yan
- Faculty of Education, Hokkaido University, Japan
- Ayumi Seki
- Faculty of Education, Hokkaido University, Japan
8. Appel M, Hasin D, Farah R, Horowitz-Kraus T. Greater utilization of executive functions networks when listening to stories with visual stimulation is related to lower reading abilities in children. Brain Cogn 2024;177:106161. [PMID: 38696928] [DOI: 10.1016/j.bandc.2024.106161]
Abstract
Narrative comprehension relies on basic sensory processing abilities, such as visual and auditory processing, with recent evidence that it also engages executive functions (EF), which are likewise engaged during reading. EF has previously been described as a "supporter" that engages the auditory and visual modalities in different cognitive tasks, and this process appears less efficient among those with reading difficulties in the absence of a visual stimulus (i.e., while listening to stories). The current study aims to fill the gap concerning how much these neural circuits are relied upon when visual aids (pictures) accompany story listening, in relation to reading skills. Functional MRI data were collected from 44 Hebrew-speaking children aged 8-12 years while they listened to stories with versus without visual stimuli (i.e., pictures). Functional connectivity of networks supporting reading was defined in each condition and compared between conditions against behavioral reading measures. Lower reading skills were related to greater functional connectivity between EF networks (default mode and memory networks), and between the auditory and memory networks, for stories with versus without visual stimulation. A greater difference in functional connectivity between the conditions was related to lower reading scores. We conclude that lower reading skills in children may be related to a need for greater scaffolding, i.e., visual stimulation such as pictures depicting the narrative while listening to stories, which may guide future intervention approaches.
Affiliation(s)
- Michal Appel
- Department of Biomedical Engineering, Technion - IIT, Haifa, Israel
- Daria Hasin
- Department of Biomedical Engineering, Technion - IIT, Haifa, Israel
- Rola Farah
- Department of Biomedical Engineering, Technion - IIT, Haifa, Israel; Educational Neuroimaging Group, Faculty of Education in Science and Technology, Technion - IIT, Haifa, Israel
- Tzipi Horowitz-Kraus
- Department of Biomedical Engineering, Technion - IIT, Haifa, Israel; Educational Neuroimaging Group, Faculty of Education in Science and Technology, Technion - IIT, Haifa, Israel
9. Noguchi Y. Audio-Visual Fission Illusion and Individual Alpha Frequency: Perspective on Buergers and Noppeney (2022). J Cogn Neurosci 2024;36:700-705. [PMID: 36951569] [DOI: 10.1162/jocn_a_01987]
Abstract
Integrating visual and auditory information is an important ability in various cognitive processes, although its neural mechanisms remain unclear. Several studies have indicated a close relationship between an individual's temporal binding window (TBW) for audio-visual interaction and their alpha rhythm in the brain (individual alpha frequency, or IAF). A recent study by Buergers and Noppeney [Buergers, S., & Noppeney, U. The role of alpha oscillations in temporal binding within and across the senses. Nature Human Behaviour, 6, 732-742, 2022], however, challenged this view using a new approach to analyzing behavioral data. Following the same procedures as Buergers and Noppeney, here I analyzed the data of my previous study and examined the relationship between TBW and IAF. In contrast to Buergers and Noppeney, a significant correlation was found between occipital IAF and a new behavioral measure of TBW. Some possible causes of these opposing results, such as variability in the "alpha band" across studies and large inter-individual differences in the magnitude of the fission illusion, are discussed.
10. Yang H, Cai B, Tan W, Luo L, Zhang Z. Pitch Improvement in Attentional Blink: A Study across Audiovisual Asymmetries. Behav Sci (Basel) 2024;14:145. [PMID: 38392498] [PMCID: PMC10885858] [DOI: 10.3390/bs14020145]
Abstract
Attentional blink (AB) is a phenomenon in which perception of a second target is impaired when it appears within 200-500 ms after the first target. Sound affects the AB and is accompanied by an asymmetry during audiovisual integration, but it is not known whether this is related to the tonal representation of sound. The aim of the present study was to investigate the effect of audiovisual asymmetry on the attentional blink, and whether the presentation of pitch improves the ability to detect a target during an AB accompanied by audiovisual asymmetry. The results showed that as the lag increased, participants' target recognition improved, and pitch produced further improvements. These improvements exhibited a significant asymmetry across the audiovisual channels. Our findings could contribute to better utilization of audiovisual integration resources to mitigate attentional lapses and declines in auditory recognition, which could be useful in areas such as driving and education.
Affiliation(s)
- Haoping Yang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Biye Cai
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Wenjie Tan
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Department of Physical Education, South China University of Technology, Guangzhou 518100, China
- Li Luo
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Zonghao Zhang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
11. Wang L, Lin L, Ren J. The characteristics of audiovisual temporal integration in streaming-bouncing bistable motion perception: considering both implicit and explicit processing perspectives. Cereb Cortex 2023;33:11541-11555. [PMID: 37874024] [DOI: 10.1093/cercor/bhad388]
Abstract
This study explored the behavioral and neural characteristics of audiovisual temporal integration in motion perception from both implicit and explicit perspectives. The streaming-bouncing bistable paradigm (SB task) was employed to investigate implicit temporal integration, while the corresponding simultaneity judgment task (SJ task) was used to examine explicit temporal integration. The behavioral results revealed a negative correlation between implicit and explicit temporal processing. In the ERP results of both tasks, three neural phases (PD100, ND180, and PD290) in the fronto-central region were identified as reflecting integration effects, and the auditory-evoked multisensory N1 component may serve as a primary component responsible for cross-modal temporal processing. However, there were significant differences between the VA ERPs in the SB and SJ tasks, and the influence of speed on the implicit and explicit integration effects also varied. These results, building on the validation of previous temporal renormalization theory, suggest that implicit and explicit temporal integration operate under distinct processing modes within a shared neural network. This underscores the brain's flexibility and adaptability in cross-modal temporal processing.
Affiliation(s)
- Luning Wang
- School of Psychology, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
- Liyue Lin
- School of Psychology, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
- Jie Ren
- China Table Tennis College, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
12. O'Donohue M, Lacherez P, Yamamoto N. Audiovisual spatial ventriloquism is reduced in musicians. Hear Res 2023;440:108918. [PMID: 37992516] [DOI: 10.1016/j.heares.2023.108918]
Abstract
There is great scientific and public interest in claims that musical training improves general cognitive and perceptual abilities. While this is controversial, recent and rather convincing evidence suggests that musical training refines the temporal integration of auditory and visual stimuli at a general level. We investigated whether musical training also affects integration in the spatial domain, via an auditory localisation experiment that measured ventriloquism (where localisation is biased towards visual stimuli on audiovisual trials) and recalibration (a unimodal localisation aftereffect). While musicians (n = 22) and non-musicians (n = 22) did not have significantly different unimodal precision or accuracy, musicians were significantly less susceptible than non-musicians to ventriloquism, with large effect sizes. We replicated these results in another experiment with an independent sample of 24 musicians and 21 non-musicians. Across both experiments, spatial recalibration did not significantly differ between the groups even though musicians resisted ventriloquism. Our results suggest that the multisensory expertise afforded by musical training refines spatial integration, a process that underpins multisensory perception.
Affiliation(s)
- Matthew O'Donohue
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia.
- Philippe Lacherez
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Naohide Yamamoto
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia; Queensland University of Technology (QUT), Centre for Vision and Eye Research, Kelvin Grove, QLD 4059, Australia
13. Feldman JI, Dunham K, DiCarlo GE, Cassidy M, Liu Y, Suzman E, Williams ZJ, Pulliam G, Kaiser S, Wallace MT, Woynaroski TG. A Randomized Controlled Trial for Audiovisual Multisensory Perception in Autistic Youth. J Autism Dev Disord 2023;53:4318-4335. [PMID: 36028729] [PMCID: PMC9417081] [DOI: 10.1007/s10803-022-05709-6]
Abstract
Differences in audiovisual integration are commonly observed in autism. Temporal binding windows (TBWs) of audiovisual speech can be trained (i.e., narrowed) in non-autistic adults; this study evaluated a computer-based perceptual training in autistic youth and assessed whether treatment outcomes varied according to individual characteristics. Thirty autistic youth aged 8-21 were randomly assigned to a brief perceptual training (n = 15) or a control condition (n = 15). At post-test, the perceptual training group did not differ, on average, from the control group on TBWs for trained and untrained stimuli or on perception of the McGurk illusion. The training benefited youth with higher language and nonverbal IQ scores, but it widened TBWs in youth with co-occurring cognitive and language impairments.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA.
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA.
- Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Gabriella E DiCarlo
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Mass General Brigham Neurology Residency Program, Harvard Medical School, Boston, MA, USA
- Medical Scientist Training Program, Vanderbilt University, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- National Institutes of Health, Bethesda, MD, USA
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, USA
- Evan Suzman
- Master's Program in Biomedical Science, Vanderbilt University, Nashville, TN, USA
- Southwestern School of Medicine, University of Texas, Dallas, TX, USA
- Zachary J Williams
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Medical Scientist Training Program, Vanderbilt University, Nashville, TN, USA
- Grace Pulliam
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Sophia Kaiser
- Cognitive Studies Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
14
|
Al-youzbaki MU, Schormans AL, Allman BL. Past and present experience shifts audiovisual temporal perception in rats. Front Behav Neurosci 2023; 17:1287587. [PMID: 37908200 PMCID: PMC10613659 DOI: 10.3389/fnbeh.2023.1287587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2023] [Accepted: 09/25/2023] [Indexed: 11/02/2023] Open
Abstract
Our brains have a propensity to integrate closely timed auditory and visual stimuli into a unified percept, a phenomenon that is highly malleable based on prior sensory experiences and is known to be altered in clinical populations. While the neural correlates of audiovisual temporal perception have been investigated using neuroimaging and electroencephalography techniques in humans, animal research will be required to uncover the underlying cellular and molecular mechanisms. Prior to conducting such mechanistic studies, it is important to first confirm the translational potential of any prospective animal model. Thus, in the present study, we conducted a series of experiments to determine whether rats show the hallmarks of audiovisual temporal perception observed in neurotypical humans, and whether the rat behavioral paradigms could reveal when they experienced perceptual disruptions akin to those observed in neurodevelopmental disorders. After training rats to perform a temporal order judgment (TOJ) or synchrony judgment (SJ) task, we found that the rats' perception was malleable based on their past and present sensory experiences. More specifically, passive exposure to asynchronous audiovisual stimulation in the minutes prior to behavioral testing caused the rats' perception to shift predictably in the direction of the leading stimulus, findings which represent the first time that this form of audiovisual perceptual malleability has been reported in non-human subjects. Furthermore, rats performing the TOJ task also showed evidence of rapid recalibration, in which their audiovisual temporal perception on the current trial was predictably influenced by the timing lag between the auditory and visual stimuli on the preceding trial. Finally, by either manipulating experimental testing parameters or altering the rats' neurochemistry with a systemic injection of MK-801, we showed that the TOJ and SJ tasks could identify when the rats had difficulty judging the timing of audiovisual stimuli. These findings confirm that the behavioral paradigms are indeed suitable for future testing of rats with perceptual disruptions in audiovisual processing. Overall, our collective results highlight that rats represent an excellent animal model for studying the cellular and molecular mechanisms underlying the acuity and malleability of audiovisual temporal perception, as they showcase the perceptual hallmarks commonly observed in humans.
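The TOJ measures described above are conventionally extracted by fitting a cumulative Gaussian psychometric function to the proportion of "visual first" responses across stimulus-onset asynchronies (SOAs). A minimal sketch with invented data; the probit-regression shortcut and all numbers here are illustrative, not the study's actual fitting procedure:

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()

# Hypothetical TOJ data: proportion of "visual first" responses at each
# stimulus-onset asynchrony (negative = auditory leading, in ms).
soa = np.array([-300, -200, -100, -50, 50, 100, 200, 300], dtype=float)
p_visual_first = np.array([0.05, 0.10, 0.25, 0.40, 0.70, 0.80, 0.90, 0.97])

# Probit-transform the proportions and fit a line: under a cumulative-Gaussian
# psychometric function, z(p) = (SOA - PSS) / sigma.
z = np.array([nd.inv_cdf(p) for p in p_visual_first])
slope, intercept = np.polyfit(soa, z, 1)
sigma = 1.0 / slope            # temporal sensitivity (ms)
pss = -intercept / slope       # point of subjective simultaneity (ms)
jnd = sigma * nd.inv_cdf(0.75) # 75% discrimination threshold convention

print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

A shifted PSS after passive asynchronous exposure, or a larger JND after MK-801, is the kind of change these parameters would register.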
|
15
|
Jiang Z, An X, Liu S, Yin E, Yan Y, Ming D. Neural oscillations reflect the individual differences in the temporal perception of audiovisual speech. Cereb Cortex 2023; 33:10575-10583. [PMID: 37727958 DOI: 10.1093/cercor/bhad304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2023] [Revised: 08/01/2023] [Accepted: 08/02/2023] [Indexed: 09/21/2023] Open
Abstract
Multisensory integration occurs within a limited time interval between multimodal stimuli. Multisensory temporal perception varies widely among individuals and involves perceptual synchrony and temporal sensitivity processes. Previous studies explored the neural mechanisms of individual differences for beep-flash stimuli, whereas there has been no such study for speech. In this study, 28 subjects (16 male) performed an audiovisual speech /ba/ simultaneity judgment task while their electroencephalography was recorded. We examined the relationship between prestimulus neural oscillations (i.e., the pre-pronunciation movement-related oscillations) and temporal perception. Perceptual synchrony was quantified using the Point of Subjective Simultaneity and temporal sensitivity using the Temporal Binding Window. Our results revealed dissociated neural mechanisms for individual differences in the Temporal Binding Window and the Point of Subjective Simultaneity. Frontocentral delta power, reflecting top-down attentional control, was positively related to the magnitude of individual auditory-leading Temporal Binding Windows (LTBWs), whereas parieto-occipital theta power, indexing bottom-up visual temporal attention specific to speech, was negatively associated with the magnitude of individual visual-leading Temporal Binding Windows (RTBWs). In addition, increased left frontal and bilateral temporoparietal-occipital alpha power, reflecting general attentional states, was associated with increased Points of Subjective Simultaneity. Strengthening attentional abilities might improve the audiovisual temporal perception of speech and, in turn, speech integration.
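The two individual-difference measures named here are typically read off a curve fitted to the proportion of "simultaneous" responses: the Point of Subjective Simultaneity is the curve's peak and the Temporal Binding Window its width above a criterion. A minimal sketch with invented data, using a Gaussian synchrony curve, which is one common choice but not necessarily the authors' exact model:

```python
import numpy as np

# Hypothetical SJ data: proportion of "simultaneous" reports per SOA (ms);
# negative SOAs = auditory leading, positive = visual leading.
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.40, 0.15])

# Fit a Gaussian synchrony curve via a quadratic fit to log-proportions:
# log p = log(amp) - (SOA - PSS)^2 / (2 sigma^2).
a, b, c = np.polyfit(soa, np.log(p_sync), 2)  # a*x^2 + b*x + c
sigma = np.sqrt(-1.0 / (2.0 * a))
pss = -b / (2.0 * a)                          # peak of the curve
amp = np.exp(c - b ** 2 / (4.0 * a))          # peak "simultaneous" rate

# One common operationalisation: the TBW is the SOA range over which the
# fitted curve exceeds a fixed criterion, here 50% "simultaneous" reports.
crit = 0.5
tbw = 2.0 * sigma * np.sqrt(2.0 * np.log(amp / crit))
print(f"PSS = {pss:.0f} ms, TBW = {tbw:.0f} ms")
```

With an asymmetric curve, the auditory-leading and visual-leading halves of the window can also be reported separately, as in the abstract's LTBW/RTBW split.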
Affiliation(s)
- Zeliang Jiang
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Xingwei An
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Shuang Liu
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
- Erwei Yin
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
  - Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, China
  - Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, China
- Ye Yan
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
  - Defense Innovation Institute, Academy of Military Sciences (AMS), 100071 Beijing, China
  - Tianjin Artificial Intelligence Innovation Center (TAIIC), 300457 Tianjin, China
- Dong Ming
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, 300072 Tianjin, China
|
16
|
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. [PMID: 37545304 PMCID: PMC10404931 DOI: 10.1098/rstb.2022.0342] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Accepted: 06/29/2023] [Indexed: 08/08/2023] Open
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes on object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell, E. McKenna, M. A. Seveso, I. Devine, F. Alahmad, R. J. Hirst, and A. O'Dowd (all authors)
  - School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
|
17
|
Cai XL, Pu CC, Zhou SZ, Wang Y, Huang J, Lui SSY, Møller A, Cheung EFC, Madsen KH, Xue R, Yu X, Chan RCK. Anterior cingulate glutamate levels associate with functional activation and connectivity during sensory integration in schizophrenia: a multimodal 1H-MRS and fMRI study. Psychol Med 2023; 53:4904-4914. [PMID: 35791929 DOI: 10.1017/s0033291722001817] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
BACKGROUND Glutamatergic dysfunction has been implicated in sensory integration deficits in schizophrenia, yet how glutamatergic function contributes to behavioural impairments and neural activity during sensory integration remains unknown. METHODS Fifty schizophrenia patients and 43 healthy controls completed behavioural assessments of sensory integration and underwent magnetic resonance spectroscopy (MRS) to measure anterior cingulate cortex (ACC) glutamate levels. The correlation between glutamate levels and behavioural sensory integration deficits was examined in each group. A subsample of 20 pairs of patients and controls further completed an audiovisual sensory integration functional magnetic resonance imaging (fMRI) task. Blood Oxygenation Level Dependent (BOLD) activation and task-dependent functional connectivity (FC) were assessed from the fMRI data. Full factorial analyses were performed to examine Group-by-Glutamate Level interaction effects on fMRI measurements (group differences in the correlation between glutamate levels and fMRI measurements) and the correlation between glutamate levels and fMRI measurements within each group. RESULTS We found that schizophrenia patients exhibited impaired sensory integration, which was positively correlated with ACC glutamate levels. Multimodal analyses showed significant Group-by-Glutamate Level interaction effects on BOLD activation as well as task-dependent FC in a 'cortico-subcortical-cortical' network (including medial frontal gyrus, precuneus, ACC, middle cingulate gyrus, thalamus and caudate), with positive correlations in patients and negative correlations in controls. CONCLUSIONS Our findings indicate that ACC glutamate influences neural activity in a large-scale network during sensory integration, but with opposite directionality in schizophrenia patients and healthy people. This implicates a crucial role for the glutamatergic system in sensory integration processing in schizophrenia.
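The Group-by-Glutamate Level interaction described here, with positive glutamate-BOLD correlations in patients and negative ones in controls, corresponds to the interaction term of a linear model. An illustrative sketch on simulated data; all values are invented and this is not the study's actual neuroimaging pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Group-by-Glutamate interaction: the slope relating glutamate
# level to a BOLD measure is positive in patients, negative in controls.
n = 40
glu = rng.normal(10.0, 1.5, size=2 * n)   # ACC glutamate levels (a.u.)
group = np.repeat([1.0, 0.0], n)          # 1 = patient, 0 = control
true_slope = np.where(group == 1, 0.6, -0.6)
bold = true_slope * (glu - 10.0) + rng.normal(0.0, 0.5, size=2 * n)

# Design matrix with an interaction term; its weight captures the group
# difference in the glutamate-BOLD relationship.
X = np.column_stack([np.ones_like(glu), group, glu, group * glu])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)

control_slope = beta[2]            # glutamate slope in controls
patient_slope = beta[2] + beta[3]  # glutamate slope in patients
print(f"control slope = {control_slope:.2f}, patient slope = {patient_slope:.2f}")
```

The opposite signs of the two recovered slopes are exactly the "positive in patients, negative in controls" pattern the abstract reports.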
Affiliation(s)
- Xin-Lu Cai
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
  - Sino-Danish Centre for Education and Research, Beijing, China
- Cheng-Cheng Pu
  - Peking University Sixth Hospital, Peking University Institute of Mental Health, Beijing, China
  - NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, China
- Shu-Zhe Zhou
  - Peking University Sixth Hospital, Peking University Institute of Mental Health, Beijing, China
  - NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, China
- Yi Wang
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jia Huang
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Simon S Y Lui
  - Department of Psychiatry, School of Clinical Medicine, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Arne Møller
  - Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
  - Sino-Danish Centre for Education and Research, Beijing, China
  - Centre of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
  - Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Aarhus, Denmark
- Eric F C Cheung
  - Castle Peak Hospital, Hong Kong Special Administrative Region, China
- Kristoffer H Madsen
  - Sino-Danish Centre for Education and Research, Beijing, China
  - Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Amager and Hvidovre, Denmark
  - Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark
- Rong Xue
  - Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
  - Sino-Danish Centre for Education and Research, Beijing, China
  - State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
  - Beijing Institute for Brain Disorders, Beijing, China
- Xin Yu
  - Peking University Sixth Hospital, Peking University Institute of Mental Health, Beijing, China
  - NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, China
- Raymond C K Chan
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
  - Sino-Danish Centre for Education and Research, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
  - Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong Special Administrative Region, China
|
18
|
Zhou HY, Zhang YJ, Hu HX, Yan YJ, Wang LL, Lui SSY, Chan RCK. Neural correlates of audiovisual speech synchrony perception and its relationship with autistic traits. Psych J 2023; 12:514-523. [PMID: 36517928 DOI: 10.1002/pchj.624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Accepted: 11/10/2022] [Indexed: 08/12/2023]
Abstract
The anterior insula (AI) plays a central role in coordinating attention and integrating information from multiple sensory modalities. AI dysfunction may contribute to both sensory and social impairments in autism spectrum disorder (ASD). Little is known regarding the brain mechanisms that guide multisensory integration, and how such neural activity might be affected by autistic-like symptoms in the general population. In this study, 72 healthy young adults performed an audiovisual speech synchrony judgment (SJ) task during fMRI scanning. We aimed to investigate SJ-related brain activations and connectivity, with a focus on the AI. Compared with synchronous speech, asynchrony perception triggered stronger activations in the bilateral AI and other frontal-cingulate-parietal regions. In contrast, synchrony perception resulted in greater involvement of the primary auditory and visual areas, indicating multisensory validation and fusion. Moreover, the AI demonstrated a stronger connection with the anterior cingulate cortex (ACC) in the audiovisual asynchronous (vs. synchronous) condition. To facilitate asynchrony detection, the AI may integrate auditory and visual speech stimuli and generate a control signal to the ACC that further supports conflict resolution and response selection. Correlation analysis, however, suggested that audiovisual synchrony perception and its related AI activation and connectivity did not vary significantly with level of autistic traits. These findings provide novel evidence for the neural mechanisms underlying multisensory temporal processing in healthy people. Future research should examine whether these findings extend to individuals with ASD.
Affiliation(s)
- Han-Yu Zhou
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yi-Jing Zhang
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Hui-Xin Hu
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yong-Jie Yan
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
  - Sino-Danish Centre for Education and Research, Beijing, China
- Ling-Ling Wang
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Simon S Y Lui
  - Department of Psychiatry, School of Clinical Medicine, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Raymond C K Chan
  - Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
|
19
|
Ji H, Yu X, Xiao Z, Zhu H, Liu P, Lin H, Chen R, Hong Q. Features of Cognitive Ability and Central Auditory Processing of Preschool Children With Minimal and Mild Hearing Loss. J Speech Lang Hear Res 2023; 66:1867-1888. [PMID: 37116308 DOI: 10.1044/2023_jslhr-22-00395] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
OBJECTIVE This study aimed to investigate the current status of cognitive development and central auditory processing development in preschool children with minimal and mild hearing loss (MMHL) in Nanjing, China. METHOD We recruited 34 children with MMHL and 45 children with normal hearing (NH). They completed a series of tests, including cognitive tests (i.e., the Wechsler Preschool and Primary Scale of Intelligence and the Continuous Performance Test), behavioral auditory tests (a speech-in-noise [SIN] test and a frequency pattern test), and objective electrophysiological audiometry (speech-evoked auditory brainstem response and cortical auditory evoked potential). In addition, teacher evaluations, demographic information, and questionnaires completed by parents were collected. RESULTS Regarding cognitive ability, statistical differences in the verbal comprehension index, full-scale intelligence quotient, and rate of abnormal attention test scores were found between the MMHL group and the NH group. The children with MMHL performed more poorly on the SIN test than the children with NH. As for the auditory electrophysiology of the two groups, the latency and amplitude of some waves of the speech-evoked auditory brainstem response and cortical auditory evoked potential differed statistically between groups. We also explored the relationship between key indicators of auditory processing and of cognitive development. CONCLUSIONS Children with MMHL are already at increased developmental risk as early as preschool. They are more likely to have problems with attention and verbal comprehension than children with NH, and this gap is not compensated for by increasing age during the preschool years. The results suggest a possible relationship between the risk of cognitive deficit and divergent auditory processing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.22670473.
Affiliation(s)
- Hui Ji
  - Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
- Xinyue Yu
  - School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Zhenglu Xiao
  - School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Huiqin Zhu
  - School of Pediatrics, Nanjing Medical University, Jiangsu, China
- Panting Liu
  - Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
- Huanxi Lin
  - School of Nursing, Nanjing Medical University, Jiangsu, China
- Renjie Chen
  - The Second Affiliated Hospital of Nanjing Medical University, Jiangsu, China
- Qin Hong
  - Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Jiangsu, China
|
20
|
Long-term Tai Chi training reduces the fusion illusion in older adults. Exp Brain Res 2023; 241:517-526. [PMID: 36611123 DOI: 10.1007/s00221-023-06544-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 01/01/2023] [Indexed: 01/09/2023]
Abstract
The sound-induced flash illusion (SiFI) is an auditory-dominated audiovisual integration phenomenon that can be used as a reliable indicator of audiovisual integration. Although previous studies have found that Tai Chi exercise promotes cognitive processing such as executive function, its effect on early perceptual processing has yet to be investigated. This study used the classic SiFI paradigm to investigate the effects of long-term Tai Chi exercise on multisensory integration in older adults, comparing older adults with long-term Tai Chi experience to those with long-term walking experience. The results showed that the accuracy of the Tai Chi group was higher than that of the control group under the fusion illusion condition, mainly due to increased perceptual sensitivity to flashes. However, there was no significant difference between the two groups in the fission illusion. These results indicate that the fission and fusion illusions were affected differently by Tai Chi exercise, a difference attributable to the association between participants' flash discriminability and the two illusions. The present study provides preliminary evidence that long-term Tai Chi exercise improves older adults' multisensory integration, which occurs in early perceptual processing.
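The "perceptual sensitivity to flashes" invoked above is commonly quantified as d′ computed from hit and false-alarm rates on flash-discrimination trials. A minimal sketch with invented counts (not data from the study):

```python
from statistics import NormalDist

nd = NormalDist()

# Hypothetical flash-discrimination counts for one participant:
# "two flashes" responses on genuine two-flash trials (hits) and on
# one-flash trials (false alarms).
hits, misses = 78, 22
false_alarms, rejections = 15, 85

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + rejections)

# Signal-detection sensitivity: distance between the z-scored rates.
d_prime = nd.inv_cdf(hit_rate) - nd.inv_cdf(fa_rate)
print(f"d' = {d_prime:.2f}")
```

On this measure, the higher fusion-condition accuracy in the Tai Chi group would appear as a larger d′ for flashes rather than a shift in response bias.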
|
21
|
Chan AS, Ding Z, Lee TL, Sze SL, Cheung MC. Temporal processing deficit in children and adolescents with autism spectrum disorder: An online assessment. Digit Health 2023; 9:20552076231171500. [PMID: 37124327 PMCID: PMC10134192 DOI: 10.1177/20552076231171500] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 04/06/2023] [Indexed: 05/02/2023] Open
Abstract
Objective Sensory deficits are considered one of the core features of children and adolescents with autism spectrum disorder (ASD). The present study aimed to examine the temporal processing of simple and more complex auditory inputs in children and adolescents with ASD using an online assessment that can be conducted remotely. Methods One hundred fifty-eight children and adolescents aged 5-17 years participated in this study, including 79 ASD participants and 79 typically developing (TD) participants. The online assessment consisted of two temporal-order judgment tasks that required repeating the sequence of two pure tones or consonant-vowel (CV) syllabic pairs at varying interstimulus intervals (ISIs). Results Significantly lower accuracy rates were found in ASD than in TD participants in the pure-tone and CV conditions at both short and long ISIs. In addition, ASD participants (M = 245.97 ms) showed a significantly higher passing threshold than TD participants (M = 178.84 ms) in the CV task. Receiver operating characteristic analysis found that the age × ISI passing-threshold composite yielded a sensitivity of 74.7% and a specificity of 50.6% at a cutoff point of -0.307 in differentiating ASD from TD participants. Conclusion In sum, children and adolescents with ASD showed impaired temporal processing of both simple and more complex auditory stimuli, and the online assessment appears sensitive in differentiating individuals with ASD from those with TD.
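The reported sensitivity and specificity follow directly from applying the cutoff as a classification rule to the composite scores. A toy sketch with simulated scores; the score distributions are invented and only the cutoff of -0.307 comes from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical composite scores (lower = more ASD-like), one per participant.
asd_scores = rng.normal(-0.6, 1.0, size=79)  # ASD group
td_scores = rng.normal(0.4, 1.0, size=79)    # TD group
cutoff = -0.307

# Classify "ASD" when the composite falls at or below the cutoff.
sensitivity = np.mean(asd_scores <= cutoff)  # true-positive rate in ASD group
specificity = np.mean(td_scores > cutoff)    # true-negative rate in TD group
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

Sweeping the cutoff over its range and plotting sensitivity against 1 - specificity traces the ROC curve from which such a cutoff is chosen.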
Affiliation(s)
- Agnes S. Chan
  - Neuropsychology Laboratory, Department of Psychology, The Chinese University of Hong Kong, Hong Kong, China
  - Research Centre for Neuropsychological Well-Being, The Chinese University of Hong Kong, Hong Kong, China
  - Correspondence: Agnes S. Chan, Neuropsychology Lab, Department of Psychology, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
- Zihan Ding
  - Neuropsychology Laboratory, Department of Psychology, The Chinese University of Hong Kong, Hong Kong, China
- Tsz-lok Lee
  - Neuropsychology Laboratory, Department of Psychology, The Chinese University of Hong Kong, Hong Kong, China
- Sophia L. Sze
  - Neuropsychology Laboratory, Department of Psychology, The Chinese University of Hong Kong, Hong Kong, China
  - Research Centre for Neuropsychological Well-Being, The Chinese University of Hong Kong, Hong Kong, China
- Mei-Chun Cheung
  - Department of Social Work, The Chinese University of Hong Kong, Hong Kong, China
|
22
|
Yang W, Li S, Guo A, Li Z, Yang X, Ren Y, Yang J, Wu J, Zhang Z. Auditory attentional load modulates the temporal dynamics of audiovisual integration in older adults: An ERPs study. Front Aging Neurosci 2022; 14:1007954. [PMID: 36325188 PMCID: PMC9618958 DOI: 10.3389/fnagi.2022.1007954] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Accepted: 09/23/2022] [Indexed: 12/02/2022] Open
Abstract
As older adults experience declines in perceptual ability, audiovisual integration becomes an important source of perceptual support. Attending to one or more auditory stimuli while performing other tasks is a common challenge for older adults in everyday life, so it is necessary to probe the effects of auditory attentional load on audiovisual integration in this group. The present study used event-related potentials (ERPs) and a dual-task paradigm [Go/No-go task + rapid serial auditory presentation (RSAP) task] to investigate the temporal dynamics of audiovisual integration. Behavioral results showed that both older and younger adults responded faster and with higher accuracy to audiovisual stimuli than to either visual or auditory stimuli alone. ERPs revealed weaker audiovisual integration under the no-auditory-attentional-load condition at the earlier processing stages and, conversely, stronger integration in the late stages. Moreover, audiovisual integration was greater in older adults than in younger adults in the following time intervals: 60–90, 140–210, and 430–530 ms. Notably, it was only under the low-load condition, in the 140–210 ms interval, that the audiovisual integration of older adults was significantly greater than that of younger adults. These results delineate the temporal dynamics of the interaction between auditory attentional load and audiovisual integration in aging, suggesting that modulating auditory attentional load affects audiovisual integration and enhances it in older adults.
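ERP audiovisual integration of the kind measured here is commonly quantified with the additive criterion: integration is inferred where the response to combined AV stimulation departs from the sum of the unisensory responses within a time window. A toy sketch with synthetic waveforms; the component shapes, amplitudes, and latencies are invented:

```python
import numpy as np

t = np.arange(0, 600, 2)  # time after stimulus onset, in ms

def toy_erp(peak_ms, amp):
    # Toy ERP component: a single Gaussian deflection (illustration only).
    return amp * np.exp(-((t - peak_ms) ** 2) / (2 * 40.0 ** 2))

erp_a = toy_erp(180, 4.0)   # auditory-only response (µV)
erp_v = toy_erp(170, 3.0)   # visual-only response (µV)
erp_av = toy_erp(175, 6.0)  # audiovisual response, sub-additive here

# Additive criterion: AV - (A + V); nonzero values index integration.
diff_wave = erp_av - (erp_a + erp_v)
window = (t >= 140) & (t <= 210)  # one interval reported in the study
mean_effect = diff_wave[window].mean()
print(f"mean AV - (A+V) in 140-210 ms: {mean_effect:.2f} µV")
```

Comparing such window means across groups and load conditions is how age and attentional-load differences in integration are typically tested.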
Affiliation(s)
- Weiping Yang
  - Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
  - Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
  - Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ao Guo
  - Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Zimo Li
  - Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
  - Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren (corresponding author)
  - Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jiajia Yang
  - Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
  - Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
  - Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhilin Zhang (corresponding author)
  - Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
23
Du YC, Li YZ, Qin L, Bi HY. The influence of temporal asynchrony on character-speech integration in Chinese children with and without dyslexia: An ERP study. Brain Lang 2022; 233:105175. [PMID: 36029751] [DOI: 10.1016/j.bandl.2022.105175] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0]
Abstract
Dyslexic readers have been reported to show abnormal temporal acuity and deficient multisensory integration. Here, we investigated the influence of temporal intervals on Chinese character-speech integration in children with and without dyslexia. Visual characters were presented either synchronously with the onset of speech sounds (AV0) or 300 ms before the speech sound (AV300). Event-related potentials (ERPs) evoked by the congruent condition (speech sounds presented with congruent Chinese characters) and by the baseline condition (speech sounds presented with Korean characters) were compared. Typically developing (TD) children exhibited a congruency effect in the AV0 condition, whereas dyslexic children exhibited a congruency effect in the AV300 condition. Moreover, the congruency effect in TD children was driven by enhanced neural activation for congruent trials, whereas in dyslexic children it was driven by neural suppression for baseline trials. These results suggest that different underlying mechanisms support character-speech integration in typical and dyslexic children.
Affiliation(s)
- Ying-Chun Du: CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Yi-Zhen Li: CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Li Qin: CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Hong-Yan Bi: CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
24
Audiovisual temporal processing in adult patients with first-episode schizophrenia and high-functioning autism. Schizophrenia 2022; 8:75. [PMID: 36138029] [PMCID: PMC9500036] [DOI: 10.1038/s41537-022-00284-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Schizophrenia and autism spectrum disorder (ASD) are both neurodevelopmental disorders with altered sensory processing. A widened temporal binding window (TBW) signifies reduced sensitivity to stimulus asynchrony and may be a feature shared by schizophrenia and ASD, yet few studies have directly compared audiovisual temporal processing in the two disorders. We recruited 43 adult patients with first-episode schizophrenia (FES), 35 adult patients with high-functioning ASD of average intelligence and verbal fluency, and 48 controls. We employed two unisensory temporal order judgement (TOJ) tasks within the visual or auditory modality, and two audiovisual simultaneity judgement (SJ) tasks with flash-beeps and videos of syllable utterances as stimuli. Participants with FES exhibited a widened TBW affecting both speech and non-speech processing, which was not attributable to altered unisensory acuity because their visual and auditory TOJ thresholds were normal. Adults with ASD, however, exhibited intact unisensory and audiovisual temporal processing. Lower non-verbal IQ was correlated with larger TBW width across the three groups. Taken together with earlier evidence from chronic samples, our findings indicate that a widened TBW is associated with schizophrenia regardless of illness stage, whereas the altered audiovisual temporal processing reported in ASD may ameliorate by adulthood.
25
Musical training refines audiovisual integration but does not influence temporal recalibration. Sci Rep 2022; 12:15292. [PMID: 36097277] [PMCID: PMC9468170] [DOI: 10.1038/s41598-022-19665-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5]
Abstract
When the brain is exposed to a temporal asynchrony between the senses, it will shift its perception of simultaneity towards the previously experienced asynchrony (temporal recalibration). It is unknown whether recalibration depends on how accurately an individual integrates multisensory cues or on experiences they have had over their lifespan. Hence, we assessed whether musical training modulated audiovisual temporal recalibration. Musicians (n = 20) and non-musicians (n = 18) made simultaneity judgements to flash-tone stimuli before and after adaptation to asynchronous (± 200 ms) flash-tone stimuli. We analysed these judgements via an observer model that described the left and right boundaries of the temporal integration window (decisional criteria) and the amount of sensory noise that affected these judgements. Musicians’ boundaries were narrower (closer to true simultaneity) than non-musicians’, indicating stricter criteria for temporal integration, and they also exhibited enhanced sensory precision. However, while both musicians and non-musicians experienced cumulative and rapid recalibration, these recalibration effects did not differ between the groups. Unexpectedly, cumulative recalibration was caused by auditory-leading but not visual-leading adaptation. Overall, these findings suggest that the precision with which observers perceptually integrate audiovisual temporal cues does not predict their susceptibility to recalibration.
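The observer model described above can be caricatured in a few lines: a trial is judged "simultaneous" when a noisy internal estimate of the audiovisual asynchrony falls between a left and a right decision boundary. The Monte-Carlo sketch below uses illustrative parameter values, not the authors' fitted estimates; narrower boundaries and lower sensory noise (as reported for musicians) produce fewer "simultaneous" responses at a large asynchrony.

```python
import numpy as np

def p_simultaneous(soa, low, high, noise_sd, n=100_000, seed=0):
    """Monte-Carlo probability of a 'simultaneous' judgement at one SOA:
    the noisy internal SOA estimate must land between the two criteria."""
    rng = np.random.default_rng(seed)
    est = soa + rng.normal(0, noise_sd, n)  # noisy internal SOA estimate
    return np.mean((est > low) & (est < high))

# Hypothetical parameters: stricter criteria and less sensory noise for
# musicians, wider criteria and more noise for non-musicians.
musician = p_simultaneous(150, low=-80, high=100, noise_sd=40)
non_musician = p_simultaneous(150, low=-160, high=180, noise_sd=60)
print(musician < non_musician)
```

At a 150 ms asynchrony the simulated "musician" observer rarely reports simultaneity while the "non-musician" often does, mirroring the narrower integration window the study reports.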
26
Amadeo MB, Esposito D, Escelsior A, Campus C, Inuggi A, Pereira Da Silva B, Serafini G, Amore M, Gori M. Time in schizophrenia: a link between psychopathology, psychophysics and technology. Transl Psychiatry 2022; 12:331. [PMID: 35961974] [PMCID: PMC9374791] [DOI: 10.1038/s41398-022-02101-x] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5]
Abstract
It has been widely demonstrated that time processing is altered in patients with schizophrenia. This perspective review delves into this temporal deficit and highlights its link to low-level sensory alterations, which are often overlooked in rehabilitation protocols for psychosis. If temporal impairment at the sensory level is inherent to the disease, however, new interventions should target this dimension. Beyond more traditional types of intervention, we review the most recent digital technologies for rehabilitation and the most promising ones for sensory training. The overall aim is to synthesise the existing literature on time in schizophrenia, linking psychopathology, psychophysics, and technology to guide future developments.
Affiliation(s)
- Maria Bianca Amadeo: U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; Applied Neurosciences for Technological Advances in Rehabilitation Systems (ANTARES) Joint Lab: Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), Section of Psychiatry, University of Genoa - Clinica Psichiatrica ed SPDC-Italian Institute of Technology (IIT); Largo Rosanna Benzi, 10 - 16132, Genoa, (GE), Italy
- Davide Esposito: U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering, Università degli Studi di Genova, Genoa, Italy
- Andrea Escelsior: ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy; IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Claudio Campus: U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy
- Alberto Inuggi: ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy
- Beatriz Pereira Da Silva: U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy
- Gianluca Serafini: ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy; IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Mario Amore: ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy; IRCCS Ospedale Policlinico San Martino, Genoa, Italy
- Monica Gori: U-VIP Unit for Visually Impaired People, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; ANTARES Joint Lab, University of Genoa - IIT, Genoa, Italy
27
De Winne J, Devos P, Leman M, Botteldooren D. With No Attention Specifically Directed to It, Rhythmic Sound Does Not Automatically Facilitate Visual Task Performance. Front Psychol 2022; 13:894366. [PMID: 35756201] [PMCID: PMC9226390] [DOI: 10.3389/fpsyg.2022.894366] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
In an era in which humans and machines, whether powered by artificial intelligence or not, increasingly work together, it is of interest to understand human processing of multi-sensory stimuli in relation to attention and working memory. This paper explores whether and when supporting visual information with rhythmic auditory stimuli can optimize multi-sensory information processing, which in turn can make interactions between humans, or between machines and humans, more engaging, rewarding and activating. For this purpose, a novel working memory paradigm was developed in which participants are presented with a series of five target digits randomly interchanged with five distractor digits; their goal is to remember the target digits and recall them orally. Depending on the condition, support is provided by audio and/or rhythm. Sound was expected to improve performance, with a different effect for rhythmic than for non-rhythmic sound, and some variability was expected across participants. The experimental data were therefore analysed both with classical statistics and with predictive models that estimate outcomes from a range of input variables related to the experiment and the participant. The effect of auditory support was confirmed, but no difference was observed between rhythmic and non-rhythmic sounds. Overall performance was indeed affected by individual differences, such as visual dominance or perceived task difficulty. Surprisingly, a musical education did not significantly affect performance and even tended toward a negative effect. Future work should also record brain activation data, for example by means of electroencephalography (EEG), to better understand the underlying attentional processes.
Affiliation(s)
- Jorg De Winne: Department of Information Technology, WAVES, Ghent University, Ghent, Belgium; Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
- Paul Devos: Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
- Marc Leman: Department of Art, Music and Theater Studies, Institute for Psychoacoustics and Electronic Music (IPEM), Ghent University, Ghent, Belgium
- Dick Botteldooren: Department of Information Technology, WAVES, Ghent University, Ghent, Belgium
28
Zhou HY, Yang HX, Wei Z, Wan GB, Lui SSY, Chan RCK. Audiovisual synchrony detection for fluent speech in early childhood: An eye-tracking study. Psych J 2022; 11:409-418. [PMID: 35350086] [DOI: 10.1002/pchj.538] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beeps and single syllables. However, little is known about how children perceive synchrony in natural, continuous speech. This study investigated young children's gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker's lip movements matched the voices; the other either led or lagged behind the soundtrack by 600 ms. Children aged 3-6 years (n = 94, 52.13% male) showed an overall preference for the synchronous speaker, with no age-related change in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivity to AL versus VL asynchrony is already established in early childhood. When further examining gaze patterns on the dynamic faces, we found that focusing more attention on the mouth region was an adaptive strategy for reading visual speech signals and was thus associated with longer viewing of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was correlated with worse performance in speech synchrony processing. These findings extend previous research by charting the development of speech synchrony perception in young children, and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.
Affiliation(s)
- Han-Yu Zhou: Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Han-Xue Yang: Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Zhen Wei: Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Guo-Bin Wan: Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Simon S Y Lui: Department of Psychiatry, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Raymond C K Chan: Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
29
Noguchi Y. Individual differences in beta frequency correlate with the audio-visual fusion illusion. Psychophysiology 2022; 59:e14041. [PMID: 35274314] [DOI: 10.1111/psyp.14041] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0]
Abstract
Presenting one flash with two beeps induces a perception of two flashes (audio-visual [AV] fission illusion), while presenting two flashes with one beep induces a perception of one flash (fusion illusion). Although previous studies showed a relationship between the frequency of the alpha rhythm (alpha cycle) and one's susceptibility to the fission illusion, the relationship between neural oscillations and the fusion illusion is unknown. Using electroencephalography, here I investigated the frequency of oscillatory signals in the pre-stimulus period and found a significant correlation between the beta rhythm and the fusion illusion; specifically, participants with a lower beta frequency showed a larger fusion illusion. These data indicate two separate time windows of AV integration in the human brain, one defined by the alpha cycle (fission) and another defined by the beta cycle (fusion).
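An individual oscillatory frequency of the kind correlated here is commonly estimated as the peak of the power spectrum within the band of interest (roughly 13-30 Hz for beta) in the pre-stimulus period. Below is a minimal sketch on a simulated signal; the function name, window length and band limits are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.signal import welch

def beta_peak_frequency(signal, fs, band=(13.0, 30.0)):
    """Return the frequency of maximal power within the beta band,
    using Welch's power spectral density estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

fs = 500  # sampling rate in Hz (hypothetical)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# a 20 Hz beta oscillation embedded in broadband noise
eeg = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
print(beta_peak_frequency(eeg, fs))
```

On real data one would average such estimates over channels and pre-stimulus epochs before correlating the individual beta frequency with illusion susceptibility.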
Affiliation(s)
- Yasuki Noguchi: Department of Psychology, Graduate School of Humanities, Kobe University, Kobe, Japan
30
Marsicano G, Cerpelloni F, Melcher D, Ronconi L. Lower multisensory temporal acuity in individuals with high schizotypal traits: a web-based study. Sci Rep 2022; 12:2782. [PMID: 35177673] [PMCID: PMC8854550] [DOI: 10.1038/s41598-022-06503-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Natural events are often multisensory, requiring the brain to combine information from the same spatial location and timing, across different senses. The importance of temporal coincidence has led to the introduction of the temporal binding window (TBW) construct, defined as the time range within which multisensory inputs are highly likely to be perceptually bound into a single entity. Anomalies in TBWs have been linked to confused perceptual experiences and inaccurate filtering of sensory inputs coming from different environmental sources. Indeed, larger TBWs have been associated with disorders such as schizophrenia and autism and are also correlated to a higher level of subclinical traits of these conditions in the general population. Here, we tested the feasibility of using a web-based version of a classic audio-visual simultaneity judgment (SJ) task with simple flash-beep stimuli in order to measure multisensory temporal acuity and its relationship with schizotypal traits as measured in the general population. Results show that: (i) the response distribution obtained in the web-based SJ task was strongly similar to those reported by studies carried out in controlled laboratory settings, and (ii) lower multisensory temporal acuity was associated with higher schizotypal traits in the “cognitive-perceptual” domains. Our findings reveal the possibility of adequately using a web-based audio-visual SJ task outside a controlled laboratory setting, available to a more diverse and representative pool of participants. These results provide additional evidence for a close relationship between lower multisensory acuity and the expression of schizotypal traits in the general population.
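A common way to quantify multisensory temporal acuity from such an SJ task is to fit a Gaussian to the proportion of "simultaneous" responses across SOAs: the fitted mean gives the point of subjective simultaneity (PSS) and the fitted width yields a TBW estimate. The sketch below uses fabricated, noiseless data and is a generic illustration, not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_gaussian(soa, amp, pss, sigma):
    """Proportion of 'simultaneous' responses modelled as a Gaussian of SOA."""
    return amp * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

# Fabricated synthetic observer: peak 0.95, PSS at +30 ms, sigma 110 ms
soas = np.array([-300, -240, -180, -120, -60, 0, 60, 120, 180, 240, 300])
p_simultaneous = sj_gaussian(soas, 0.95, 30.0, 110.0)

params, _ = curve_fit(sj_gaussian, soas, p_simultaneous, p0=[1.0, 0.0, 100.0])
amp, pss, sigma = params
tbw_width = 2.355 * abs(sigma)  # full width at half maximum, in ms
print(round(pss), round(tbw_width))
```

A larger fitted width (lower temporal acuity) is the quantity that the study relates to schizotypal traits; with noisy web-collected responses one would add lapse parameters and bound the fit.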
Affiliation(s)
- Gianluca Marsicano: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Filippo Cerpelloni: Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven, Belgium; Institute of Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), University of Louvain (UCLouvain), Leuven, Belgium
- David Melcher: Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Luca Ronconi: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
31
Wang L, Lin L, Sun Y, Hou S, Ren J. The effect of movement speed on audiovisual temporal integration in streaming-bouncing illusion. Exp Brain Res 2022; 240:1139-1149. [PMID: 35147722] [DOI: 10.1007/s00221-022-06312-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5]
Abstract
Motion perception in real situations is often driven by multisensory information. Speed is an essential characteristic of moving objects, yet it is currently unclear whether speed affects audiovisual temporal integration in motion perception. This study therefore used a streaming-bouncing task (a bistable motion percept; SB task) combined with a simultaneity judgment task (SJ task) to explore the effect of speed on audiovisual temporal integration from implicit and explicit perspectives. The experiment used a within-subjects design with two speed conditions (fast/slow), eleven audiovisual conditions [stimulus onset asynchrony (SOA): 0 ms/±60 ms/±120 ms/±180 ms/±240 ms/±300 ms], and a visual-only condition. Thirty subjects completed the SB task and the SJ task in succession. The results showed that: (1) the optimal times for inducing the "bouncing" illusion and the maximum audiovisual bounce-inducing effect (ABE) magnitude occurred much earlier than the optimal time for audiovisual synchrony; (2) speed, as a bottom-up factor, affected the proportion of "bouncing" percepts in the SB illusion but not the ABE magnitude; (3) speed also affected audiovisual temporal integration in motion perception, chiefly in that the point of subjective simultaneity (PSS) was earlier in the fast than in the slow condition in the SJ task; and (4) performance on the SB and SJ tasks was unrelated. In conclusion, the time at which audiovisual integration was maximal differed from the optimal time for synchrony perception; moreover, speed affected audiovisual temporal integration in motion perception, but only in explicit temporal tasks.
Affiliation(s)
- Luning Wang: School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Liyue Lin: School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Yujia Sun: China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China
- Shuang Hou: School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Jie Ren: China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China
32
Montag M, Paschall C, Ojemann J, Rao R, Herron J. A Platform for Virtual Reality Task Design with Intracranial Electrodes. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6659-6662. [PMID: 34892635] [DOI: 10.1109/embc46164.2021.9630231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Research with human intracranial electrodes has traditionally been constrained by the limitations of the inpatient clinical setting. Immersive virtual reality (VR), however, can transcend that setting and enable novel task designs with precise control over visual and auditory stimuli. This control over visual and auditory feedback makes VR an exciting platform for new inpatient human electrocorticography (ECoG) and stereo-electroencephalography (sEEG) research. Integrating intracranial electrode recording and stimulation with VR task dynamics required foundational systems engineering. In this work, we present a custom API that bridges Unity, the leading VR game development engine, and Synapse, the proprietary software of the Tucker Davis Technologies (TDT) neural recording and stimulation platform. To demonstrate the functionality and efficiency of our API, we developed a closed-loop brain-computer interface (BCI) task in which filtered neural signals controlled the movement of a virtual object and virtual object dynamics triggered neural stimulation. This closed-loop VR-BCI task confirmed the utility, safety, and efficacy of our API and its readiness for human task deployment.
33
Ball F, Nentwich A, Noesselt T. Cross-modal perceptual enhancement of unisensory targets is uni-directional and does not affect temporal expectations. Vision Res 2021; 190:107962. [PMID: 34757275] [DOI: 10.1016/j.visres.2021.107962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Temporal structures in the environment can shape temporal expectations (TE), and previous studies demonstrated that TEs interact with multisensory interplay (MSI) when multisensory stimuli are presented synchronously. Here, we tested whether other types of MSI - evoked by asynchronous yet temporally flanking irrelevant stimuli - result in similar performance patterns. To this end, we presented sequences of 12 stimuli (10 Hz) which consisted of auditory (A), visual (V) or alternating auditory-visual stimuli (e.g. A-V-A-V-...) with either auditory or visual targets (Exp. 1). Participants discriminated target frequencies (auditory pitch or visual spatial frequency) embedded in these sequences. To test effects of TE, the proportion of early and late temporal target positions was manipulated run-wise. Performance for unisensory targets was affected by temporally flanking distractors, with auditory temporal flankers selectively improving visual target perception (Exp. 1). However, no effect of temporal expectation was observed. Control experiments (Exp. 2-3) tested whether this lack of TE effect was due to the higher presentation frequency in Exp. 1 relative to previous experiments. Importantly, even at higher stimulation frequencies, redundant multisensory targets (Exp. 2-3) reliably modulated TEs. Together, our results indicate that visual target detection was enhanced by MSI. However, this cross-modal enhancement - in contrast to the redundant target effect - was still insufficient to generate TEs. We posit that unisensory target representations were either unstable or insufficient for the generation of TEs, while less demanding MSI still occurred, highlighting the need for robust stimulus representations when generating temporal expectations.
Collapse
Affiliation(s)
- Felix Ball
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany.
| | - Annika Nentwich
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany
| | - Toemme Noesselt
- Department of Biological Psychology, Faculty of Natural Science, Otto-von-Guericke-University Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University Magdeburg, Germany
34
Heins N, Pomp J, Kluger DS, Vinbrüx S, Trempler I, Kohler A, Kornysheva K, Zentgraf K, Raab M, Schubotz RI. Surmising synchrony of sound and sight: Factors explaining variance of audiovisual integration in hurdling, tap dancing and drumming. PLoS One 2021; 16:e0253130. [PMID: 34293800] [PMCID: PMC8298114] [DOI: 10.1371/journal.pone.0253130]
Abstract
Auditory and visual percepts are integrated even when they are not perfectly temporally aligned, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three different whole-body actions with natural action-induced sounds: hurdling, tap dancing and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed by a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected auditory and visual signals to be integrated over a wider temporal window for actions creating sounds intentionally (tap dancing) than for actions creating sounds incidentally (hurdling). While percentages of perceived synchrony differed in the expected way, we identified two further factors, high event density and low rhythmicity, that also induced higher synchrony ratings. Therefore, in Study 2 we systematically varied event density and rhythmicity, this time using drumming stimuli to exert full control over these variables, with the same simultaneity judgment task. Results suggest that high event density leads to a bias to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker, effect when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body action, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected, that is, the synchrony of sound and sight, was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchrony judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
Affiliation(s)
- Nina Heins
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Jennifer Pomp
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Daniel S. Kluger
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Institute for Biomagnetism and Biosignal Analysis, University Hospital Muenster, Muenster, Germany
- Stefan Vinbrüx
- Institute of Sport and Exercise Sciences, Human Performance and Training, University of Muenster, Muenster, Germany
- Ima Trempler
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Axel Kohler
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Katja Kornysheva
- School of Psychology and Bangor Neuroimaging Unit, Bangor University, Wales, United Kingdom
- Karen Zentgraf
- Department of Movement Science and Training in Sports, Institute of Sport Sciences, Goethe University Frankfurt, Frankfurt, Germany
- Markus Raab
- Institute of Psychology, German Sport University Cologne, Cologne, Germany
- School of Applied Sciences, London South Bank University, London, United Kingdom
- Ricarda I. Schubotz
- Department of Psychology, University of Muenster, Muenster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
35
Wiring of higher-order cortical areas: Spatiotemporal development of cortical hierarchy. Semin Cell Dev Biol 2021; 118:35-49. [PMID: 34034988] [DOI: 10.1016/j.semcdb.2021.05.010]
Abstract
A hierarchical development of cortical areas was suggested over a century ago, but the diversity and complexity of cortical hierarchy properties have so far prevented a formal demonstration. The aim of this review is to clarify the similarities and differences in the developmental processes underlying the formation of primary and higher-order cortical areas. We start by recapitulating the historical and recent advances underlying the biological principle of cortical hierarchy in adults. We then revisit the arguments for a hierarchical maturation of cortical areas, and further integrate the principles of cortical area specification during embryonic and postnatal development. We highlight how the dramatic expansion in cortical size might have contributed to the increased number of association areas sustaining cognitive complexification in evolution. Finally, we summarize the recent observations of an alteration of cortical hierarchy in neuropsychiatric disorders and discuss their potential developmental origins.
36
Kaya U, Kafaligonul H. Audiovisual interactions in speeded discrimination of a visual event. Psychophysiology 2021; 58:e13777. [PMID: 33483971] [DOI: 10.1111/psyp.13777]
Abstract
The integration of information from different senses is central to our perception of the external world. Audiovisual interactions have been particularly well studied in this context, and various illusions have been developed to demonstrate strong influences of these interactions on the final percept. Using audiovisual paradigms, previous studies have shown that even task-irrelevant information provided by a secondary modality can change the detection and discrimination of a primary target. These modulations have been found to depend significantly on the relative timing between auditory and visual stimuli. Although these interactions in time have been commonly reported, we still have a limited understanding of the relationship between the modulations of event-related potentials (ERPs) and final behavioral performance. Here, we aimed to shed light on this important issue by using a speeded discrimination paradigm combined with electroencephalography (EEG). During the experimental sessions, the timing between an auditory click and a visual flash was varied over a wide range of stimulus onset asynchronies and observers were engaged in speeded discrimination of flash location. Behavioral reaction times were significantly changed by click timing. Furthermore, the modulations of evoked activities over medial parietal/parieto-occipital electrodes were associated with this effect. These modulations were within the 126-176 ms time range and, more importantly, they were also correlated with the changes in reaction times. These results provide an important functional link between audiovisual interactions at early stages of sensory processing and reaction times. Together with previous research, they further suggest that early crossmodal interactions play a critical role in perceptual performance.
Affiliation(s)
- Utku Kaya
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Informatics Institute, Middle East Technical University, Ankara, Turkey; Department of Anesthesiology, University of Michigan, Ann Arbor, MI, USA
- Hulusi Kafaligonul
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Turkey; Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, Ankara, Turkey
37
Lalonde K, Werner LA. Development of the Mechanisms Underlying Audiovisual Speech Perception Benefit. Brain Sci 2021; 11:49. [PMID: 33466253] [PMCID: PMC7824772] [DOI: 10.3390/brainsci11010049]
Abstract
The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical access. This paper reviews evidence regarding infants' and children's use of temporal and phonetic mechanisms in audiovisual speech perception benefit. The ability to use temporal cues for audiovisual speech perception benefit emerges in infancy. Although infants are sensitive to the correspondence between auditory and visual phonetic cues, the ability to use this correspondence for audiovisual benefit may not emerge until age four. A more cohesive account of the development of audiovisual speech perception may follow from a more thorough understanding of the development of sensitivity to and use of various temporal and phonetic cues.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE 68131, USA
- Lynne A. Werner
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA
38
Zhou HY, Wang YM, Zhang RT, Cheung EFC, Pantelis C, Chan RCK. Neural Correlates of Audiovisual Temporal Binding Window in Individuals With Schizotypal and Autistic Traits: Evidence From Resting-State Functional Connectivity. Autism Res 2020; 14:668-680. [PMID: 33314710] [DOI: 10.1002/aur.2456]
Abstract
Temporal proximity is an important cue for multisensory integration. Previous evidence indicates that individuals with autism and schizophrenia are more likely to integrate multisensory inputs over a longer temporal binding window (TBW). However, whether such deficits in audiovisual temporal integration extend to subclinical populations with high schizotypal and autistic traits is unclear. Using audiovisual simultaneity judgment (SJ) tasks for nonspeech and speech stimuli, our results suggested that the width of the audiovisual TBW was not significantly correlated with self-reported schizotypal and autistic traits in a group of young adults. Functional magnetic resonance imaging (fMRI) resting-state activity was also acquired to explore the neural correlates underlying inter-individual variability in TBW width. Across the entire sample, stronger resting-state functional connectivity (rsFC) between the left superior temporal cortex and the left precuneus, and weaker rsFC between the left cerebellum and the right dorsal lateral prefrontal cortex, were correlated with a narrower TBW for speech stimuli. Meanwhile, stronger rsFC between the left anterior superior temporal gyrus and the right inferior temporal gyrus was correlated with a wider audiovisual TBW for non-speech stimuli. The TBW-related rsFC was not affected by levels of subclinical traits. In conclusion, this study indicates that audiovisual temporal processing may not be affected by autistic and schizotypal traits, and that rsFC between brain regions responding to multisensory information and timing may account for the inter-individual difference in TBW width. LAY SUMMARY: Individuals with ASD and schizophrenia are more likely to perceive asynchronous auditory and visual events as occurring simultaneously even if they are well separated in time. We investigated whether similar difficulties in audiovisual temporal processing were present in subclinical populations with high autistic and schizotypal traits.
We found that the ability to detect audiovisual asynchrony was not affected by different levels of autistic and schizotypal traits. We also found that connectivity of some brain regions engaging in multisensory and timing tasks might explain an individual's tendency to bind multisensory information within a wide or narrow time window. Autism Res 2021, 14: 668-680. © 2020 International Society for Autism Research and Wiley Periodicals LLC.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yong-Ming Wang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Rui-Ting Zhang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Eric F C Cheung
- Castle Peak Hospital, Hong Kong Special Administrative Region, China
- Christos Pantelis
- Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne & Melbourne Health, Carlton South, Victoria, Australia; Florey Institute for Neurosciences and Mental Health, Parkville, Victoria, Australia
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China