1. Chow HM, Ma YK, Tseng CH. Social and communicative not a prerequisite: Preverbal infants learn an abstract rule only from congruent audiovisual dynamic pitch-height patterns. J Exp Child Psychol 2024; 248:106046. [PMID: 39241321 DOI: 10.1016/j.jecp.2024.106046]
Abstract
Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence applies to non-social and non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: visually upward (or downward) moving objects paired with a tone of congruent (ascending) or incongruent (descending) pitch. East Asian infants (8-10 months old) from a metropolitan area in Asia demonstrated successful abstract rule learning in the congruent audiovisual condition but weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. This result confirms that preverbal infants are ready to use non-social, non-communicative information to serve cognitive functions such as rule extraction in a multisensory context.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi 980-0812, Japan
2. Çetinçelik M, Jordan-Barros A, Rowland CF, Snijders TM. The effect of visual speech cues on neural tracking of speech in 10-month-old infants. Eur J Neurosci 2024; 60:5381-5399. [PMID: 39188179 DOI: 10.1111/ejn.16492]
Abstract
While infants' sensitivity to visual speech cues and the benefit of these cues have been well established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1-1.75 Hz and 2.5-3.5 Hz, respectively, in our stimuli). First, overall SBC was compared to surrogate data; then, differences in SBC between the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.
Affiliation(s)
- Melis Çetinçelik
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University, Tilburg, The Netherlands
- Antonia Jordan-Barros
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, London, UK
- Experimental Psychology, University College London, London, UK
- Caroline F Rowland
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Tineke M Snijders
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Cognitive Neuropsychology Department, Tilburg University, Tilburg, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
3. Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. [PMID: 38729280 DOI: 10.1016/j.neubiorev.2024.105711]
Abstract
Sensory integration is increasingly acknowledged as being crucial for the development of cognitive and social abilities. However, its developmental trajectory is still poorly understood. This systematic review delves into the topic by investigating the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW) - the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was independently performed by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy.
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
4. Jertberg RM, Begeer S, Geurts HM, Chakrabarti B, Van der Burg E. Age, not autism, influences multisensory integration of speech stimuli among adults in a McGurk/MacDonald paradigm. Eur J Neurosci 2024; 59:2979-2994. [PMID: 38570828 DOI: 10.1111/ejn.16319]
Abstract
Differences between autistic and non-autistic individuals in perception of the temporal relationships between sights and sounds are theorized to underlie difficulties in integrating relevant sensory information. These, in turn, are thought to contribute to problems with speech perception and higher level social behaviour. However, the literature establishing this connection often involves limited sample sizes and focuses almost entirely on children. To determine whether these differences persist into adulthood, we compared 496 autistic and 373 non-autistic adults (aged 17 to 75 years). Participants completed an online version of the McGurk/MacDonald paradigm, a multisensory illusion indicative of the ability to integrate audiovisual speech stimuli. Audiovisual asynchrony was manipulated, and participants responded both to the syllable they perceived (revealing their susceptibility to the illusion) and to whether or not the audio and video were synchronized (allowing insight into temporal processing). In contrast with prior research with smaller, younger samples, we detected no evidence of impaired temporal or multisensory processing in autistic adults. Instead, we found that in both groups, multisensory integration correlated strongly with age. This contradicts prior presumptions that differences in multisensory perception persist and even increase in magnitude over the lifespan of autistic individuals. It also suggests that the compensatory role multisensory integration may play as the individual senses decline with age is intact. These findings challenge existing theories and provide an optimistic perspective on autistic development. They also underline the importance of expanding autism research to better reflect the age range of the autistic population.
Affiliation(s)
- Robert M Jertberg
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, The Netherlands and Amsterdam Public Health Research Institute, Amsterdam, Netherlands
- Sander Begeer
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, The Netherlands and Amsterdam Public Health Research Institute, Amsterdam, Netherlands
- Hilde M Geurts
- Dutch Autism and ADHD Research Center (d'Arc), Brain & Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, The Netherlands
- Leo Kannerhuis (Youz/Parnassiagroup), Den Haag, The Netherlands
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- India Autism Center, Kolkata, India
- Department of Psychology, Ashoka University, Sonipat, India
- Erik Van der Burg
- Dutch Autism and ADHD Research Center (d'Arc), Brain & Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, The Netherlands
5. Weng Y, Rong Y, Peng G. The development of audiovisual speech perception in Mandarin-speaking children: Evidence from the McGurk paradigm. Child Dev 2024; 95:750-765. [PMID: 37843038 DOI: 10.1111/cdev.14022]
Abstract
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study of Mandarin-speaking 3- to 4-year-olds, 5- to 6-year-olds, 7- to 8-year-olds, and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. For the identification of congruent stimuli, 3- to 4-year-olds underperformed the older groups, whose performances were comparable. For the perception of incongruent stimuli, a developmental shift was observed: 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses to incongruent stimuli than the older groups. With increasing auditory noise, the difference between children and adults widened in identifying congruent stimuli but narrowed in perceiving incongruent ones. The findings regarding noise effects agree with the statistically optimal hypothesis.
Affiliation(s)
- Yi Weng
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Yicheng Rong
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Gang Peng
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China
6. Liu S, Li X, Sun R. The effect of masks on infants' ability to fast-map and generalize new words. J Child Lang 2024:1-19. [PMID: 38189211 DOI: 10.1017/s0305000923000697]
Abstract
Young children today are exposed to masks on a regular basis. However, there is limited empirical evidence on how masks may affect word learning. This study explored the effect of masks on infants' abilities to fast-map and generalize new words. Seventy-two Chinese infants (43 males, Mage = 18.26 months) were taught two novel word-object pairs by a speaker with or without a mask. They then heard the words and had to visually identify the correct objects and also generalize the words to a different speaker and to objects from the same category. Eye-tracking results indicate that infants looked longer at the target regardless of whether the speaker wore a mask. They also looked longer at the speaker's eyes than at the mouth only when words were taught through a mask. Thus, fast-mapping and generalization occur in both masked and unmasked conditions, as infants can flexibly access different visual cues during word learning.
Affiliation(s)
- Siying Liu
- Institute of Linguistics, Shanghai International Studies University, Shanghai, China
- Xun Li
- Institute of Linguistics, Shanghai International Studies University, Shanghai, China
- Renji Sun
- East China University of Political Science and Law, China
7. Olson HA, Chen EM, Lydic KO, Saxe RR. Left-Hemisphere Cortical Language Regions Respond Equally to Observed Dialogue and Monologue. Neurobiol Lang (Camb) 2023; 4:575-610. [PMID: 38144236 PMCID: PMC10745132 DOI: 10.1162/nol_a_00123]
Abstract
Much of the language we encounter in our everyday lives comes in the form of conversation, yet the majority of research on the neural basis of language comprehension has used input from only one speaker at a time. Twenty adults were scanned while passively observing audiovisual conversations using functional magnetic resonance imaging. In a block-design task, participants watched 20 s videos of puppets speaking either to another puppet (the dialogue condition) or directly to the viewer (the monologue condition), while the audio was either comprehensible (played forward) or incomprehensible (played backward). Individually functionally localized left-hemisphere language regions responded more to comprehensible than incomprehensible speech but did not respond differently to dialogue than monologue. In a second task, participants watched videos (1-3 min each) of two puppets conversing with each other, in which one puppet was comprehensible while the other's speech was reversed. All participants saw the same visual input but were randomly assigned which character's speech was comprehensible. In left-hemisphere cortical language regions, the time course of activity was correlated only among participants who heard the same character speaking comprehensibly, despite identical visual input across all participants. For comparison, some individually localized theory of mind regions and right-hemisphere homologues of language regions responded more to dialogue than monologue in the first task, and in the second task, activity in some regions was correlated across all participants regardless of which character was speaking comprehensibly. Together, these results suggest that canonical left-hemisphere cortical language regions are not sensitive to differences between observed dialogue and monologue.
8. Thompson E, Feldman JI, Valle A, Davis H, Keceli-Kaysili B, Dunham K, Woynaroski T, Tharpe AM, Picou EM. A Comparison of Listening Skills of Autistic and Non-Autistic Youth While Using and Not Using Remote Microphone Systems. J Speech Lang Hear Res 2023; 66:4618-4634. [PMID: 37870877 PMCID: PMC10721240 DOI: 10.1044/2023_jslhr-22-00720]
Abstract
Objectives: The purposes of this study were to compare (a) listening-in-noise accuracy and effort and (b) remote microphone (RM) system benefits between autistic and non-autistic youth. Design: Groups of autistic and non-autistic youth that were matched on chronological age and biological sex completed listening-in-noise testing when wearing and not wearing an RM system. Listening-in-noise accuracy and listening effort were evaluated simultaneously using a dual-task paradigm for stimuli varying in type (syllables, words, sentences, and passages). Several putative moderators of RM system effects on outcomes of interest were also evaluated. Results: Autistic youth outperformed non-autistic youth in some conditions on listening-in-noise accuracy; listening effort between the two groups was not significantly different. RM system use resulted in listening-in-noise accuracy improvements that were not significantly different across groups, and the accuracy benefits were all large in magnitude. RM system use did not have an effect on listening effort for either group. None of the putative moderators yielded effects of the RM system on listening-in-noise accuracy or effort for non-autistic youth that were significant and interpretable, indicating that RM system benefits did not vary according to any of the participant characteristics assessed. Conclusions: Contrary to expectations, autistic youth did not demonstrate listening-in-noise deficits compared to non-autistic youth. Both autistic and non-autistic youth appear to experience RM system benefits marked by large gains in listening-in-noise performance. Thus, the use of this technology in educational and other noisy settings where speech perception needs enhancement might be beneficial for both groups of children.
Affiliation(s)
- Emily Thompson
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Jacob I. Feldman
- Frist Center for Autism and Innovation, Nashville, TN
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Annalise Valle
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Hilary Davis
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Bahar Keceli-Kaysili
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Kacie Dunham
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Vanderbilt Brain Institute, Nashville, TN
- Tiffany Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Frist Center for Autism and Innovation, Nashville, TN
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- Anne Marie Tharpe
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- Erin M. Picou
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
9. Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. [PMID: 37545304 PMCID: PMC10404931 DOI: 10.1098/rstb.2022.0342]
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception: multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
- A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
10. Suri KN, Whedon M, Lewis M. Perception of audio-visual synchrony in infants at elevated likelihood of developing autism spectrum disorder. Eur J Pediatr 2023; 182:2105-2117. [PMID: 36820895 DOI: 10.1007/s00431-023-04871-y]
Abstract
The inability to perceive audio-visual speech as a unified event may contribute to social impairments and language deficits in children with autism spectrum disorder (ASD). In this study, we examined and compared two groups of infants on their sensitivity to audio-visual asynchrony for a social event (speaking face) and a non-social event (bouncing ball) and assessed the relations between multisensory integration and language production. Infants at elevated likelihood of developing ASD were less sensitive to audio-visual synchrony for the social event than infants without elevated likelihood. Among infants without elevated likelihood, greater sensitivity to audio-visual synchrony for the social event was associated with a larger productive vocabulary. Conclusion: Findings suggest that early deficits in multisensory integration may impair language development among infants with elevated likelihood of developing ASD. What is known: Perceptual integration of auditory and visual cues within speech is important for language development. Prior work suggests that children with ASD are less sensitive to the temporal synchrony within audio-visual speech. What is new: In this study, infants at elevated likelihood of developing ASD showed a larger temporal binding window for a dynamic social event (speaking face) than typically developing infants, suggesting less efficient multisensory integration.
Affiliation(s)
- Kirin N Suri
- Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, 89 French Street, New Brunswick, NJ, 08901, USA
- Children's Health at Hackensack Meridian, Hackensack, NJ, 07601, USA
- Margaret Whedon
- Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, 89 French Street, New Brunswick, NJ, 08901, USA
- Michael Lewis
- Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, 89 French Street, New Brunswick, NJ, 08901, USA
11. Wu H, Lu H, Lin Q, Zhang Y, Liu Q. Reduced audiovisual temporal sensitivity in Chinese children with dyslexia. Front Psychol 2023; 14:1126720. [PMID: 37151347 PMCID: PMC10157467 DOI: 10.3389/fpsyg.2023.1126720]
Abstract
Background: Temporal processing deficits regarding audiovisual cross-modal stimuli could affect children's speed and accuracy of decoding. Aim: To investigate the characteristics of audiovisual temporal sensitivity (ATS) in Chinese children with and without developmental dyslexia, and its impact on reading ability. Methods: Audiovisual simultaneity judgment and temporal order judgment tasks were performed to investigate the ATS of 106 Chinese children (53 with dyslexia) aged 8 to 12 and 37 adults without a history of dyslexia. The predictive effect of children's audiovisual temporal binding window on their reading ability, and the effects of the extra cognitive processing required by the temporal order judgment task on participants' ATS, were also investigated. Outcomes and results: With increasing inter-stimulus intervals, the percentage of synchronous responses declined more rapidly in adults than in children. Adults and typically developing children had significantly narrower temporal binding windows than children with dyslexia. The size of the visual-leading temporal binding window (visual stimulus preceding auditory stimulus) had a marginally significant predictive effect on children's reading fluency. Compared with the simultaneity judgment task, the extra cognitive processing required by the temporal order judgment task affected children's ATS. Conclusion and implications: The ATS of 8- to 12-year-old Chinese children is immature, and Chinese children with dyslexia have lower ATS than their peers.
Affiliation(s)
- Huiduo Wu
- College of Child Development and Education, Zhejiang Normal University, Hangzhou, China
- Haidan Lu
- Faculty of Education, East China Normal University, Shanghai, China
- Qing Lin
- Department of Preschool Education, China Women’s University, Beijing, China
- Yuhong Zhang
- The College of Education Science, Xinjiang Normal University, Urumqi, China
- Qiaoyun Liu
- Faculty of Education, East China Normal University, Shanghai, China
- Correspondence: Qiaoyun Liu
12. Keenaghan S, Polaskova M, Thurlbeck S, Kentridge RW, Cowie D. Alice in Wonderland: The effects of body size and movement on children's size perception and body representation in virtual reality. J Exp Child Psychol 2022; 224:105518. [PMID: 35964343 DOI: 10.1016/j.jecp.2022.105518]
Abstract
Previous work shows that in adults, illusory embodiment of a virtual avatar can be induced using congruent visuomotor cues. Furthermore, embodying different-sized avatars influences adults' perception of their environment's size. This study (N = 92) investigated whether children are also susceptible to such embodiment and size illusions. Adults and 5-year-old children viewed a first-person perspective of different-sized avatars moving either congruently or incongruently with their own body. Participants rated their feelings of embodiment over the avatar and also estimated the sizes of their body and objects in the environment. Unlike adults, children embodied the avatar regardless of visuomotor congruency. Both adults and children freely embodied different-sized avatars, and this affected their size perception in the surrounding virtual environment; they felt that objects were larger in a small body and vice versa in a large body. In addition, children felt that their body had grown in the large body condition. These findings have important implications for both our theoretical understanding of own-body representation, and our knowledge of perception in virtual environments.
Affiliation(s)
- Marie Polaskova
- Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Simon Thurlbeck
- Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Robert W Kentridge
- Department of Psychology, University of Durham, Durham DH1 3LE, UK; Azrieli Program in Mind, Brain & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario M5G 1M1, Canada
- Dorothy Cowie
- Department of Psychology, University of Durham, Durham DH1 3LE, UK
13. Yates TS, Skalaban LJ, Ellis CT, Bracher AJ, Baldassano C, Turk-Browne NB. Neural event segmentation of continuous experience in human infants. Proc Natl Acad Sci U S A 2022; 119:e2200257119. [PMID: 36252007 PMCID: PMC9618143 DOI: 10.1073/pnas.2200257119]
Abstract
How infants experience the world is fundamental to understanding their cognition and development. A key principle of adult experience is that, despite receiving continuous sensory input, we perceive this input as discrete events. Here we investigate such event segmentation in infants and how it differs from that of adults. Research on event cognition in infants often uses simplified tasks in which (adult) experimenters help solve the segmentation problem for infants by defining event boundaries or presenting discrete actions/vignettes. This presupposes which events infants experience and leaves open questions about the principles governing infant segmentation. We take a different, data-driven approach by studying infant event segmentation of continuous input. We collected whole-brain functional MRI (fMRI) data from awake infants (and adults, for comparison) watching a cartoon and used a hidden Markov model to identify event states in the brain. We quantified the existence, timescale, and organization of multiple-event representations across brain regions. The adult brain exhibited a known hierarchical gradient of event timescales, from shorter events in early visual regions to longer events in later visual and associative regions. In contrast, the infant brain represented only longer events, even in early visual regions, with no timescale hierarchy. The boundaries defining these infant events only partially overlapped with boundaries defined from adult brain activity and behavioral judgments. These findings suggest that events are organized differently in infants, with longer timescales and more stable neural patterns, even in sensory regions. This may indicate greater temporal integration and reduced temporal precision during dynamic, naturalistic perception.
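The data-driven segmentation idea can be illustrated with a toy model. The sketch below is not the authors' hidden Markov model implementation; it is a simplified dynamic-programming analogue (all names hypothetical) that partitions a timepoint-by-feature activity matrix into K contiguous "events" by minimizing within-event variance, which captures the core intuition of finding stretches of stable neural patterns separated by boundaries.

```python
def segment_cost(X):
    """cost[i][j]: sum of squared deviations from the segment mean for
    timepoints i..j-1. X is a list of feature vectors, one per timepoint."""
    T, V = len(X), len(X[0])
    cost = [[0.0] * (T + 1) for _ in range(T + 1)]
    for i in range(T):
        csum = [0.0] * V   # running per-feature sum over the segment
        csq = 0.0          # running sum of squared norms
        for j in range(i + 1, T + 1):
            row = X[j - 1]
            for v in range(V):
                csum[v] += row[v]
            csq += sum(x * x for x in row)
            n = j - i
            cost[i][j] = csq - sum(s * s for s in csum) / n
    return cost

def find_events(X, K):
    """Partition T timepoints into K contiguous events minimizing total
    within-event variance, via dynamic programming. Returns the K-1
    boundary indices (a new event starts at each returned index)."""
    T = len(X)
    cost = segment_cost(X)
    INF = float("inf")
    dp = [[INF] * (T + 1) for _ in range(K + 1)]
    back = [[0] * (T + 1) for _ in range(K + 1)]
    dp[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, T + 1):
            for i in range(k - 1, j):
                c = dp[k - 1][i] + cost[i][j]
                if c < dp[k][j]:
                    dp[k][j] = c
                    back[k][j] = i
    bounds, j = [], T
    for k in range (K, 0, -1):
        j = back[k][j]
        bounds.append(j)
    return sorted(bounds)[1:]  # drop the leading 0
```

On synthetic data with two stable activity patterns, `find_events(X, 2)` recovers the boundary where the pattern switches; fitting different K values is analogous to testing which event timescale best describes a brain region.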
Affiliation(s)
- Cameron T. Ellis
- Department of Psychology, Stanford University, Stanford, CA 94305
- Angelika J. Bracher
- International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, 04303 Leipzig, Germany
- Department of Child and Adolescent Psychiatry, Psychotherapy, and Psychosomatics, University of Leipzig, 04103 Leipzig, Germany
- Nicholas B. Turk-Browne
- Department of Psychology, Yale University, New Haven, CT 06520
- Wu Tsai Institute, Yale University, New Haven, CT 06510
14
The multisensory cocktail party problem in children: Synchrony-based segregation of multiple talking faces improves in early childhood. Cognition 2022; 228:105226. [PMID: 35882100] [DOI: 10.1016/j.cognition.2022.105226]
Abstract
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3-7-year-old children. Children either saw four identical or four different faces articulating temporally jittered versions of the same utterance and heard the audible version of the same utterance either synchronized with one of the talkers or desynchronized with all of them. Eye tracking revealed that selective attention to the temporally synchronized talking face increased while attention to the desynchronized faces decreased with age and that attention to the talkers' mouth primarily drove responsiveness. These findings demonstrate that the temporal synchrony statistics inherent in fluent AV speech assume an increasingly greater role in perceptual segregation of the multisensory clutter created by multiple talking faces in early childhood.
15
Feldman JI, Conrad JG, Kuang W, Tu A, Liu Y, Simon DM, Wallace MT, Woynaroski TG. Relations Between the McGurk Effect, Social and Communication Skill, and Autistic Features in Children with and without Autism. J Autism Dev Disord 2022; 52:1920-1928. [PMID: 34101080] [PMCID: PMC8842559] [DOI: 10.1007/s10803-021-05074-w]
Abstract
Children with autism show alterations in multisensory integration that have been theoretically and empirically linked with the core and related features of autism. It is unclear, however, to what extent multisensory integration maps onto features of autism within children with and without autism. This study, thus, evaluates relations between audiovisual integration and core and related autism features across children with and without autism. Thirty-six children reported perceptions of the McGurk illusion during a psychophysical task. Parents reported on participants' autistic features. Increased report of illusory percepts tended to covary with reduced autistic features and greater communication skill. Some relations, though, were moderated by group. This work suggests that associations between multisensory integration and higher-order skills are present, but in some instances vary according to diagnostic group.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA.
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA.
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- Julie G Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- College of Medicine, University of Illinois, Chicago, IL, USA
- Wayne Kuang
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, Pomona, CA, USA
- Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, USA
- David M Simon
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
16
Verhaar E, Medendorp WP, Hunnius S, Stapel JC. Bayesian causal inference in visuotactile integration in children and adults. Dev Sci 2022; 25:e13184. [PMID: 34698430] [PMCID: PMC9285718] [DOI: 10.1111/desc.13184]
Abstract
If cues from different sensory modalities share the same cause, their information can be integrated to improve perceptual precision. While it is well established that adults exploit sensory redundancy by integrating cues in a Bayes optimal fashion, whether children under 8 years of age combine sensory information in a similar fashion is still under debate. If children differ from adults in the way they infer causality between cues, this may explain mixed findings on the development of cue integration in earlier studies. Here we investigated the role of causal inference in the development of cue integration, by means of a visuotactile localization task. Young children (6-8 years), older children (9.5-12.5 years) and adults had to localize a tactile stimulus, which was presented to the forearm simultaneously with a visual stimulus at either the same or a different location. In all age groups, responses were systematically biased toward the position of the visual stimulus, but relatively more so when the distance between the visual and tactile stimulus was small rather than large. This pattern of results was better captured by a Bayesian causal inference model than by alternative models of forced fusion or full segregation of the two stimuli. Our results suggest that already from a young age the brain implicitly infers the probability that a tactile and a visual cue share the same cause and uses this probability as a weighting factor in visuotactile localization.
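The causal inference model this abstract compares against forced fusion and full segregation can be sketched in a few lines. The following is an illustrative implementation of the standard Bayesian causal inference formulation (after Körding et al., 2007), not the authors' fitted model; the parameter values and function names are hypothetical.

```python
import math

def normpdf(x, mean, var):
    """Gaussian density with the given mean and variance."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def bci_tactile_estimate(x_v, x_t, sigma_v, sigma_t, sigma_p, p_common):
    """Model-averaged estimate of tactile location under Bayesian causal
    inference. x_v, x_t: noisy visual and tactile measurements;
    sigma_v, sigma_t: sensory noise SDs; sigma_p: SD of a zero-mean spatial
    prior; p_common: prior probability that both cues share one cause."""
    var_v, var_t, var_p = sigma_v ** 2, sigma_t ** 2, sigma_p ** 2
    # Likelihood of (x_v, x_t) given a common cause: source integrated out.
    denom = var_v * var_t + var_v * var_p + var_t * var_p
    like_c1 = math.exp(-0.5 * ((x_v - x_t) ** 2 * var_p
                               + x_v ** 2 * var_t
                               + x_t ** 2 * var_v) / denom) \
        / (2 * math.pi * math.sqrt(denom))
    # Likelihood given independent causes: two independent marginals.
    like_c2 = normpdf(x_v, 0.0, var_v + var_p) * normpdf(x_t, 0.0, var_t + var_p)
    # Posterior probability of a common cause.
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Reliability-weighted fusion vs. tactile-only localization.
    s_fused = (x_v / var_v + x_t / var_t) / (1 / var_v + 1 / var_t + 1 / var_p)
    s_tactile = (x_t / var_t) / (1 / var_t + 1 / var_p)
    # Model averaging: weight the two estimates by the causal posterior.
    return post_c1 * s_fused + (1 - post_c1) * s_tactile, post_c1
```

With nearby cues the model infers a common cause and pulls the tactile estimate toward the visual one; with distant cues the inferred probability of a common cause, and hence the visual bias, shrinks. This is exactly the distance-dependent bias pattern the study reports across age groups.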
Affiliation(s)
- Erik Verhaar
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Sabine Hunnius
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
17
Zhou HY, Yang HX, Wei Z, Wan GB, Lui SSY, Chan RCK. Audiovisual synchrony detection for fluent speech in early childhood: An eye-tracking study. Psych J 2022; 11:409-418. [PMID: 35350086] [DOI: 10.1002/pchj.538]
Abstract
During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beep pairs and single syllables. However, little is known about how children perceive synchrony in natural, continuous speech. This study investigated young children's gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker's lip movements matched the voices; the other either led or lagged the soundtrack by 600 ms. Children aged 3-6 years (n = 94, 52.13% males) showed an overall preference for the synchronous speaker, with no age-related changes in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivities to AL versus VL asynchrony are already established in early childhood. When further examining gaze patterns on the dynamic faces, we found that focusing more attention on the mouth region was an adaptive strategy for reading visual speech signals and was thus associated with longer viewing of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was correlated with worse performance in speech synchrony processing. These findings extend previous research by showing the development of speech synchrony perception in young children and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Han-Xue Yang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Zhen Wei
- Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Guo-Bin Wan
- Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Simon S Y Lui
- Department of Psychiatry, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
18
Pattamadilok C, Sato M. How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing. Brain Lang 2022; 225:105058. [PMID: 34929531] [DOI: 10.1016/j.bandl.2021.105058]
Abstract
Both visual articulatory gestures and orthography provide information about the phonological content of speech. This EEG study investigated the integration of speech with these two visual inputs. A comparison of skilled readers' brain responses elicited by a spoken word presented alone versus synchronously with a static image of a viseme or a grapheme of the spoken word's onset showed that, while neither visual input induced audiovisual integration on the N1 acoustic component, both led to supra-additive integration on P2, with stronger integration between speech and graphemes at left-anterior electrodes. This pattern persisted in the P350 time-window and generalized to all electrodes. The finding suggests a strong impact of spelling knowledge on phonetic processing and lexical access. It also indirectly indicates that the dynamic and predictive information present in natural lip movements, but not in static visemes, is particularly critical to the contribution of visual articulatory gestures to speech processing.
Affiliation(s)
- Marc Sato
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
19
Addabbo M, Colombo L, Picciolini O, Tagliabue P, Turati C. Newborns' ability to match non-speech audio-visual information in the absence of temporal synchrony. Eur J Dev Psychol 2021. [DOI: 10.1080/17405629.2021.1931105]
Affiliation(s)
- Margaret Addabbo
- Department of Psychology, University of Milan-Bicocca, Milano, Italy
- Lorenzo Colombo
- Neonatal Intensive Care Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Italy
- Odoardo Picciolini
- Pediatric Physical Medicine & Rehabilitation Unit, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Italy
- Paolo Tagliabue
- Neonatology and Intensive Care Unit, MBBM Foundation, San Gerardo Hospital, Monza, Italy
- Chiara Turati
- Department of Psychology, University of Milan-Bicocca, Milano, Italy
20
Singh L, Tan A, Quinn PC. Infants recognize words spoken through opaque masks but not through clear masks. Dev Sci 2021; 24:e13117. [PMID: 33942441] [PMCID: PMC8236912] [DOI: 10.1111/desc.13117]
Abstract
COVID-19 has modified numerous aspects of children's social environments. Many children are now spoken to through a mask. There is little empirical evidence attesting to the effects of masked language input on language processing. In addition, not much is known about the effects of clear masks (i.e., transparent face shields) versus opaque masks on language comprehension in children. In the current study, 2-year-old infants were tested on their ability to recognize familiar spoken words in three conditions: words presented with no mask, words presented through a clear mask, and words presented through an opaque mask. Infants were able to recognize familiar words presented without a mask and when hearing words through opaque masks, but not when hearing words through clear masks. Findings suggest that the ability of infants to recover spoken language input through masks varies depending on the surface properties of the mask.
Affiliation(s)
- Leher Singh
- Department of Psychology, National University of Singapore, Singapore
- Agnes Tan
- Department of Psychology, National University of Singapore, Singapore
- Paul C Quinn
- Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
21
Lewkowicz DJ, Schmuckler M, Agrawal V. The multisensory cocktail party problem in adults: Perceptual segregation of talking faces on the basis of audiovisual temporal synchrony. Cognition 2021; 214:104743. [PMID: 33940250] [DOI: 10.1016/j.cognition.2021.104743]
Abstract
Social interactions often involve a cluttered multisensory scene consisting of multiple talking faces. We investigated whether audiovisual temporal synchrony can facilitate perceptual segregation of talking faces. Participants either saw four identical or four different talking faces producing temporally jittered versions of the same visible speech utterance and heard the audible version of the same speech utterance. The audible utterance was either synchronized with the visible utterance produced by one of the talking faces or not synchronized with any of them. Eye tracking indicated that participants exhibited a marked preference for the synchronized talking face, that they gazed more at the mouth than the eyes overall, that they gazed more at the eyes of an audiovisually synchronized than a desynchronized talking face, and that they gazed more at the mouth when all talking faces were audiovisually desynchronized. These findings demonstrate that audiovisual temporal synchrony plays a major role in perceptual segregation of multisensory clutter and that adults rely on differential scanning strategies of a talker's eyes and mouth to discover sources of multisensory coherence.
Affiliation(s)
- David J Lewkowicz
- Haskins Laboratories, New Haven, CT, USA; Yale Child Study Center, New Haven, CT, USA.
- Mark Schmuckler
- Department of Psychology, University of Toronto at Scarborough, Toronto, Canada
22
Pedale T, Mastroberardino S, Capurso M, Bremner AJ, Spence C, Santangelo V. Crossmodal spatial distraction across the lifespan. Cognition 2021; 210:104617. [PMID: 33556891] [DOI: 10.1016/j.cognition.2021.104617]
Abstract
The ability to resist distracting stimuli whilst voluntarily focusing on a task is fundamental to our everyday cognitive functioning. Here, we investigated how this ability develops, and thereafter declines, across the lifespan using a single task/experiment. Young children (5-7 years), older children (10-11 years), young adults (20-27 years), and older adults (62-86 years) were presented with complex visual scenes. Endogenous (voluntary) attention was engaged by having the participants search for a visual target presented on either the left or right side of the display. The onset of the visual scenes was preceded - at stimulus onset asynchronies (SOAs) of 50, 200, or 500 ms - by a task-irrelevant sound (an exogenous crossmodal spatial distractor) delivered either on the same or opposite side as the visual target, or simultaneously on both sides (cued, uncued, or neutral trials, respectively). Age-related differences were revealed, especially in the extreme age-groups, which showed a greater impact of crossmodal spatial distractors. Young children were highly susceptible to exogenous spatial distraction at the shortest SOA (50 ms), whereas older adults were distracted at all SOAs, showing significant exogenous capture effects during the visual search task. By contrast, older children and young adults' search performance was not significantly affected by crossmodal spatial distraction. Overall, these findings present a detailed picture of the developmental trajectory of endogenous resistance to crossmodal spatial distraction from childhood to old age and demonstrate a different efficiency in coping with distraction across the four age-groups studied.
Affiliation(s)
- Tiziana Pedale
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy
- Michele Capurso
- Department of Philosophy, Social Sciences & Education, University of Perugia, Italy
- Charles Spence
- Department of Experimental Psychology, Oxford University, UK
- Valerio Santangelo
- Neuroimaging Laboratory, IRCCS Santa Lucia Foundation, Rome, Italy; Department of Philosophy, Social Sciences & Education, University of Perugia, Italy.
23
Lalonde K, Werner LA. Development of the Mechanisms Underlying Audiovisual Speech Perception Benefit. Brain Sci 2021; 11:49. [PMID: 33466253] [PMCID: PMC7824772] [DOI: 10.3390/brainsci11010049]
Abstract
The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical access. This paper reviews evidence regarding infants' and children's use of temporal and phonetic mechanisms in audiovisual speech perception benefit. The ability to use temporal cues for audiovisual speech perception benefit emerges in infancy. Although infants are sensitive to the correspondence between auditory and visual phonetic cues, the ability to use this correspondence for audiovisual benefit may not emerge until age four. A more cohesive account of the development of audiovisual speech perception may follow from a more thorough understanding of the development of sensitivity to and use of various temporal and phonetic cues.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE 68131, USA
- Lynne A. Werner
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA
24
Badde S, Ley P, Rajendran SS, Shareef I, Kekunnaya R, Röder B. Sensory experience during early sensitive periods shapes cross-modal temporal biases. eLife 2020; 9:61238. [PMID: 32840213] [PMCID: PMC7476755] [DOI: 10.7554/elife.61238]
Abstract
Typical human perception features stable biases such as perceiving visual events as later than synchronous auditory events. The origin of such perceptual biases is unknown. To investigate the role of early sensory experience, we tested whether a congenital, transient loss of pattern vision, caused by bilateral dense cataracts, has sustained effects on audio-visual and tactile-visual temporal biases and resolution. Participants judged the temporal order of successively presented, spatially separated events within and across modalities. Individuals with reversed congenital cataracts showed a bias towards perceiving visual stimuli as occurring earlier than auditory (Expt. 1) and tactile (Expt. 2) stimuli. This finding stood in stark contrast to normally sighted controls and sight-recovery individuals who had developed cataracts later in childhood: both groups exhibited the typical bias of perceiving vision as delayed compared to audition. These findings provide strong evidence that cross-modal temporal biases depend on sensory experience during an early sensitive period.
Affiliation(s)
- Stephanie Badde
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Department of Psychology and Center of Neural Science, New York University, New York, United States
- Pia Ley
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Siddhart S Rajendran
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, India
- Idris Shareef
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, India
- Ramesh Kekunnaya
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany; Child Sight Institute, Jasti V Ramanamma Children's Eye Care Center, LV Prasad Eye Institute, Hyderabad, India
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
25
Dunham K, Feldman JI, Liu Y, Cassidy M, Conrad JG, Santapuram P, Suzman E, Tu A, Butera I, Simon DM, Broderick N, Wallace MT, Lewkowicz D, Woynaroski TG. Stability of Variables Derived From Measures of Multisensory Function in Children With Autism Spectrum Disorder. Am J Intellect Dev Disabil 2020; 125:287-303. [PMID: 32609807] [PMCID: PMC8903073] [DOI: 10.1352/1944-7558-125.4.287]
Abstract
Children with autism spectrum disorder (ASD) display differences in multisensory function as quantified by several different measures. This study estimated the stability of variables derived from commonly used measures of multisensory function in school-aged children with ASD. Participants completed a simultaneity judgment task for audiovisual speech, tasks designed to elicit the McGurk effect, listening-in-noise tasks, electroencephalographic recordings, and eye-tracking tasks. Results indicate that the stability of indices derived from tasks tapping multisensory processing is variable. These findings have important implications for measurement in future research. Averaging scores across repeated observations will often be required to obtain acceptably stable estimates and, thus, to increase the likelihood of detecting effects of interest related to multisensory processing in children with ASD.
Affiliation(s)
- Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Jacob I. Feldman
- Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Julie G. Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: College of Medicine, University of Illinois, Chicago, IL, USA
- Pooja Santapuram
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: School of Medicine, Vanderbilt University, Nashville, TN, USA
- Evan Suzman
- Department of Biomedical Sciences, Vanderbilt University, Nashville, TN, USA
- Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Iliza Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- David M. Simon
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Present Address: axialHealthcare, Nashville, TN, USA
- Neill Broderick
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- David Lewkowicz
- Department of Communication Sciences & Disorders, Northeastern University, Boston, MA, USA
- Tiffany G. Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
26
Meilleur A, Foster NEV, Coll SM, Brambati SM, Hyde KL. Unisensory and multisensory temporal processing in autism and dyslexia: A systematic review and meta-analysis. Neurosci Biobehav Rev 2020; 116:44-63. [PMID: 32544540] [DOI: 10.1016/j.neubiorev.2020.06.013]
Abstract
This study presents a comprehensive systematic review and meta-analysis of temporal processing in autism spectrum disorder (ASD) and developmental dyslexia (DD), two neurodevelopmental disorders in which temporal processing deficits have been highly researched. The results provide strong evidence for impairments in temporal processing in both ASD (g = 0.48) and DD (g = 0.82), as measured by judgments of temporal order and simultaneity. In individual analyses, multisensory temporal processing was impaired for both ASD and DD, and unisensory auditory, visual and tactile processing were all impaired in DD. In ASD, speech stimuli showed moderate impairment effect sizes, whereas nonspeech stimuli showed small effects. Greater reading and spelling skills in DD were associated with greater temporal precision. Temporal deficits did not show changes with age in either disorder. In addition to more clearly defining temporal impairments in ASD and DD, the results highlight common and distinct patterns of temporal processing between these disorders. Deficits are discussed in relation to existing theoretical models, and recommendations are made for future research.
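The pooled effect sizes reported here (g = 0.48 for ASD, g = 0.82 for DD) are Hedges' g values. For reference, a minimal sketch of how g is computed from two group summaries; this is illustrative only, not the authors' meta-analytic pipeline, and the function name is hypothetical.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference between two groups, with the
    small-sample bias correction that distinguishes Hedges' g from Cohen's d."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df)
    d = (m1 - m2) / s_pooled      # Cohen's d
    j = 1 - 3 / (4 * df - 1)      # Hedges' small-sample correction factor
    return j * d
```

Because the correction factor j is below 1, g is slightly smaller than d for small samples, which matters when pooling the modest group sizes typical of ASD and DD studies.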
Affiliation(s)
- Alexa Meilleur
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada.
- Nicholas E V Foster
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Sarah-Maude Coll
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Simona M Brambati
- Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada; Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, 4545 Chemin Queen Mary, Montréal, QC, H3W 1W4, Canada
- Krista L Hyde
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, University of Montreal, Marie-Victorin Building, 90 Avenue Vincent D'Indy, Montréal, QC, H2V 2S9, Canada; Department of Psychology, University of Montréal, Marie-Victorin Building, 90 avenue Vincent-d'Indy, Suite D-418, Montréal, QC, H3C 3J7, Canada; Centre for Research on Brain, Language and Music, Faculty of Medicine, McGill University, Rabinovitch house, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
27
Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020; 140:107396. [PMID: 32087206 DOI: 10.1016/j.neuropsychologia.2020.107396] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 02/14/2020] [Accepted: 02/15/2020] [Indexed: 12/21/2022]
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the Temporal Binding Window (TBW). The width of the TBW, a proxy of audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to the changing and multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and rebounding in size again in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly flexible via perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
28
Zhou HY, Shi LJ, Yang HX, Cheung EFC, Chan RCK. Audiovisual temporal integration and rapid temporal recalibration in adolescents and adults: Age-related changes and its correlation with autistic traits. Autism Res 2019; 13:615-626. [PMID: 31808321 DOI: 10.1002/aur.2249] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 11/19/2019] [Indexed: 12/26/2022]
Abstract
Temporal structure is a key factor in determining the relatedness of multisensory stimuli. Stimuli that are close in time are more likely to be integrated into a unified perceptual representation. To investigate the age-related developmental differences in audiovisual temporal integration and rapid temporal recalibration, we administered simultaneity judgment (SJ) tasks to a group of adolescents (11-14 years) and young adults (18-28 years). No age-related changes were found in the width of the temporal binding window within which participants are highly likely to combine multisensory stimuli. The main distinction between adolescents and adults was audiovisual temporal recalibration. Although participants of both age groups could rapidly recalibrate based on the previous trial for speech stimuli (i.e., syllable utterances), only adults but not adolescents showed short-term recalibration for simple and non-speech stimuli. In both adolescents and adults, no significant correlation was found between audiovisual temporal integration ability and autistic or schizotypal traits. These findings provide new information on the developmental trajectory of basic multisensory function and may have implications for neurodevelopmental disorders (e.g., autism) with altered audiovisual temporal integration. Autism Res 2020, 13: 615-626. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Utilizing temporal cues to integrate and separate audiovisual information is a fundamental ability underlying higher order social communicative functions. This study examines the developmental changes of the ability to detect audiovisual asynchrony and rapidly adjust sensory decisions based on previous sensory input. In healthy adolescents and young adults, the correlation between autistic traits and audiovisual integration ability failed to reach a significant level. Therefore, more research is needed to examine whether impairment in basic sensory functions is correlated with broader autism phenotype in nonclinical populations. These results may help us understand altered multisensory integration in people with autism.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Li-Juan Shi
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; School of Education, Hunan University of Science and Technology, Xiangtan, China
- Han-Xue Yang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Eric F C Cheung
- Castle Peak Hospital, Hong Kong Special Administrative Region, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
29
Cutts SA, Fragaszy DM, Mangalam M. Consistent inter-individual differences in susceptibility to bodily illusions. Conscious Cogn 2019; 76:102826. [PMID: 31670011 DOI: 10.1016/j.concog.2019.102826] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2019] [Revised: 08/26/2019] [Accepted: 09/27/2019] [Indexed: 12/17/2022]
Abstract
Illusory senses of ownership and agency (that the hand or effector that we see belongs to us and moves at our will, respectively) support the embodiment of prosthetic limbs, tele-operated surgical devices, and human-machine interfaces. We exposed forty-eight individuals to four different procedures known to elicit illusory ownership or agency over a fake visible rubber hand or finger. The illusory ownership or agency arising from the hand correlated with that of the finger. For both body parts, sensory stimulation across different modalities (visual with tactile or visual with kinesthetic) produced illusions of similar strength. However, the strengths of the illusions of ownership and agency were unrelated within individuals, supporting the proposal that distinct neuropsychological processes underlie these two senses. Developing training programs to enhance susceptibility to illusions of agency or ownership for people with lower natural susceptibility could broaden the usefulness of the above technologies.
Affiliation(s)
- Sarah A Cutts
- Department of Psychology, University of Georgia, Athens, GA 30602, USA
- Madhur Mangalam
- Department of Physical Therapy, Movement and Rehabilitation Sciences, Northeastern University, Boston, MA 02115, USA.
30
van Laarhoven T, Stekelenburg JJ, Vroomen J. Increased sub-clinical levels of autistic traits are associated with reduced multisensory integration of audiovisual speech. Sci Rep 2019; 9:9535. [PMID: 31267024 PMCID: PMC6606565 DOI: 10.1038/s41598-019-46084-0] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2019] [Accepted: 06/20/2019] [Indexed: 12/21/2022] Open
Abstract
Recent studies suggest that sub-clinical levels of autistic symptoms may be related to reduced processing of artificial audiovisual stimuli. It is unclear whether these findings extend to more natural stimuli such as audiovisual speech. The current study examined the relationship between autistic traits measured by the Autism-Spectrum Quotient and audiovisual speech processing in a large non-clinical population using a battery of experimental tasks assessing audiovisual perceptual binding, visual enhancement of speech embedded in noise and audiovisual temporal processing. Several associations were found between autistic traits and audiovisual speech processing. Increased autistic-like imagination was related to reduced perceptual binding measured by the McGurk illusion. Increased overall autistic symptomatology was associated with reduced visual enhancement of speech intelligibility in noise. Participants reporting increased levels of rigid and restricted behaviour were more likely to bind audiovisual speech stimuli over longer temporal intervals, while an increased tendency to focus on local aspects of sensory inputs was related to a narrower temporal binding window. These findings demonstrate that increased levels of autistic traits may be related to alterations in audiovisual speech processing, and are consistent with the notion of a spectrum of autistic traits that extends to the general population.
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands.
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands
31
Developmental changes in the perception of audiotactile simultaneity. J Exp Child Psychol 2019; 183:208-221. [DOI: 10.1016/j.jecp.2019.02.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 01/29/2019] [Accepted: 02/13/2019] [Indexed: 11/23/2022]
32
Hirst RJ, Kicks EC, Allen HA, Cragg L. Cross-modal interference-control is reduced in childhood but maintained in aging: A cohort study of stimulus- and response-interference in cross-modal and unimodal Stroop tasks. J Exp Psychol Hum Percept Perform 2019; 45:553-572. [PMID: 30945905 PMCID: PMC6484713 DOI: 10.1037/xhp0000608] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Interference-control is the ability to exclude distractions and focus on a specific task or stimulus. However, it is currently unclear whether the same interference-control mechanisms underlie the ability to ignore unimodal and cross-modal distractions. In 2 experiments we assessed whether unimodal and cross-modal interference follow similar trajectories in development and aging and occur at similar processing levels. In Experiment 1, 42 children (6-11 years), 31 younger adults (18-25 years) and 32 older adults (60-84 years) identified color rectangles with either written (unimodal) or spoken (cross-modal) distractor-words. Stimuli could be congruent, incongruent but mapped to the same response (stimulus-incongruent), or incongruent and mapped to different responses (response-incongruent); thus, separating interference occurring at early (sensory) and late (response) processing levels. Unimodal interference was worst in childhood and old age; however, older adults maintained the ability to ignore cross-modal distraction. Unimodal but not cross-modal response-interference also reduced accuracy. In Experiment 2 we compared the effect of audition on vision and vice versa in 52 children (6-11 years), 30 young adults (22-33 years) and 30 older adults (60-84 years). As in Experiment 1, older adults maintained the ability to ignore cross-modal distraction arising from either modality, and neither type of cross-modal distraction limited accuracy in adults. However, cross-modal distraction still reduced accuracy in children, and children were more slowed by stimulus-interference compared with adults. We conclude that unimodal and cross-modal interference follow different life span trajectories, and differences in stimulus- and response-interference may increase cross-modal distractibility in childhood. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Affiliation(s)
- Ella C Kicks
- School of Psychology and Neuroscience, University of St. Andrews
- Lucy Cragg
- School of Psychology, University of Nottingham
33
Bahrick LE, Soska KC, Todd JT. Assessing individual differences in the speed and accuracy of intersensory processing in young children: The intersensory processing efficiency protocol. Dev Psychol 2018; 54:2226-2239. [PMID: 30346188 PMCID: PMC6261800 DOI: 10.1037/dev0000575] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Detecting intersensory redundancy guides cognitive, social, and language development. Yet, researchers lack fine-grained, individual difference measures needed for studying how early intersensory skills lead to later outcomes. The intersensory processing efficiency protocol (IPEP) addresses this need. Across a number of brief trials, participants must find a sound-synchronized visual target event (social, nonsocial) amid five visual distractor events, simulating the "noisiness" of natural environments. Sixty-four 3- to 5-year-old children were tested using remote eye-tracking. Children showed intersensory processing by attending to the sound-synchronous event more frequently and longer than in a silent visual control, and more frequently than expected by chance. The IPEP provides a fine-grained, nonverbal method for characterizing individual differences in intersensory processing appropriate for infants and children. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Affiliation(s)
- Kasey C Soska
- Department of Psychology, Florida International University
34
Bahrick LE, Todd JT, Soska KC. The Multisensory Attention Assessment Protocol (MAAP): Characterizing individual differences in multisensory attention skills in infants and children and relations with language and cognition. Dev Psychol 2018; 54:2207-2225. [PMID: 30359058 PMCID: PMC6263835 DOI: 10.1037/dev0000594] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Multisensory attention skills provide a crucial foundation for early cognitive, social, and language development, yet there are no fine-grained, individual difference measures of these skills appropriate for preverbal children. The Multisensory Attention Assessment Protocol (MAAP) fills this need. In a single video-based protocol requiring no language skills, the MAAP assesses individual differences in three fundamental building blocks of attention to multisensory events-the duration of attention maintenance, the accuracy of intersensory (audiovisual) matching, and the speed of shifting-for both social and nonsocial events, in the context of high and low competing visual stimulation. In Experiment 1, 2- to 5-year-old children (N = 36) received the MAAP and assessments of language and cognitive functioning. In Experiment 2 the procedure was streamlined and presented to 12-month-olds (N = 48). Both infants and children showed high levels of attention maintenance to social and nonsocial events, impaired attention maintenance and speed of shifting when competing stimulation was high, and significant intersensory matching. Children showed longer maintenance, faster shifting, and less impairment from competing stimulation than infants. In 2- to 5-year-old children, duration and accuracy were intercorrelated, showed increases with age, and predicted cognitive and language functioning. The MAAP opens the door to assessing developmental pathways between early attention patterns to audiovisual events and language, cognitive, and social development. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
35
Feldman JI, Dunham K, Cassidy M, Wallace MT, Liu Y, Woynaroski TG. Audiovisual multisensory integration in individuals with autism spectrum disorder: A systematic review and meta-analysis. Neurosci Biobehav Rev 2018; 95:220-234. [PMID: 30287245 PMCID: PMC6291229 DOI: 10.1016/j.neubiorev.2018.09.020] [Citation(s) in RCA: 87] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 09/10/2018] [Accepted: 09/25/2018] [Indexed: 02/04/2023]
Abstract
An ever-growing literature has aimed to determine how individuals with autism spectrum disorder (ASD) differ from their typically developing (TD) peers on measures of multisensory integration (MSI) and to ascertain the degree to which differences in MSI are associated with the broad range of symptoms associated with ASD. Findings, however, have been highly variable across the studies carried out to date. The present work systematically reviews and quantitatively synthesizes the large literature on audiovisual MSI in individuals with ASD to evaluate the cumulative evidence for (a) group differences between individuals with ASD and TD peers, (b) correlations between MSI and autism symptoms in individuals with ASD and (c) study level factors that may moderate findings (i.e., explain differential effects) observed across studies. To identify eligible studies, a comprehensive search strategy was employed using the ProQuest search engine, PubMed database, forwards and backwards citation searches, direct author contact, and hand-searching of select conference proceedings. A significant between-group difference in MSI was evident in the literature, with individuals with ASD demonstrating worse audiovisual integration on average across studies compared to TD controls. This effect was moderated by mean participant age, such that between-group differences were more pronounced in younger samples. The mean correlation between MSI and autism and related symptomatology was also significant, indicating that increased audiovisual integration in individuals with ASD is associated with better language/communication abilities and/or reduced autism symptom severity in the extant literature. This effect was moderated by whether the stimuli were linguistic versus non-linguistic in nature, such that correlation magnitudes tended to be significantly greater when linguistic stimuli were utilized in the measure of MSI. Limitations and future directions for primary and meta-analytic research are discussed.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
- Kacie Dunham
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Pharmacology, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, USA; Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, 37232, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, USA; Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, 37232, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
36
Schormans AL, Allman BL. Behavioral Plasticity of Audiovisual Perception: Rapid Recalibration of Temporal Sensitivity but Not Perceptual Binding Following Adult-Onset Hearing Loss. Front Behav Neurosci 2018; 12:256. [PMID: 30429780 PMCID: PMC6220077 DOI: 10.3389/fnbeh.2018.00256] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2018] [Accepted: 10/11/2018] [Indexed: 11/13/2022] Open
Abstract
The ability to accurately integrate or bind stimuli from more than one sensory modality is highly dependent on the features of the stimuli, such as their intensity and relative timing. Previous studies have demonstrated that the ability to perceptually bind stimuli is impaired in various clinical conditions such as autism, dyslexia, schizophrenia, as well as aging. However, it remains unknown if adult-onset hearing loss, separate from aging, influences audiovisual temporal acuity. In the present study, rats were trained using appetitive operant conditioning to perform an audiovisual temporal order judgment (TOJ) task or synchrony judgment (SJ) task in order to investigate the nature and extent that audiovisual temporal acuity is affected by adult-onset hearing loss, with a specific focus on the time-course of perceptual changes following loud noise exposure. In our first series of experiments, we found that audiovisual temporal acuity in normal-hearing rats was influenced by sound intensity, such that when a quieter sound was presented, the rats were biased to perceive the audiovisual stimuli as asynchronous (SJ task), or as though the visual stimulus was presented first (TOJ task). Psychophysical testing demonstrated that noise-induced hearing loss did not alter the rats' temporal sensitivity 2-3 weeks post-noise exposure, despite rats showing an initial difficulty in differentiating the temporal order of audiovisual stimuli. Furthermore, consistent with normal-hearing rats, the timing at which the stimuli were perceived as simultaneous (i.e., the point of subjective simultaneity, PSS) remained sensitive to sound intensity following hearing loss. Contrary to the TOJ task, hearing loss resulted in persistent impairments in asynchrony detection during the SJ task, such that a greater proportion of trials were now perceived as synchronous. Moreover, psychophysical testing found that noise-exposed rats had altered audiovisual synchrony perception, consistent with impaired audiovisual perceptual binding (e.g., an increase in the temporal window of integration on the right side of simultaneity; right temporal binding window (TBW)). Ultimately, our collective results show for the first time that adult-onset hearing loss leads to behavioral plasticity of audiovisual perception, characterized by a rapid recalibration of temporal sensitivity but a persistent impairment in the perceptual binding of audiovisual stimuli.
Affiliation(s)
- Ashley L Schormans
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
- Brian L Allman
- Department of Anatomy and Cell Biology, Schulich School of Medicine and Dentistry, University of Western Ontario, London, ON, Canada
37
Wan Y, Chen L. Temporal Reference, Attentional Modulation, and Crossmodal Assimilation. Front Comput Neurosci 2018; 12:39. [PMID: 29922143 PMCID: PMC5996128 DOI: 10.3389/fncom.2018.00039] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Accepted: 05/16/2018] [Indexed: 11/18/2022] Open
Abstract
The crossmodal assimilation effect refers to the prominent phenomenon by which an ensemble mean extracted from a sequence of task-irrelevant distractor events, such as auditory intervals, assimilates/biases the perception of subsequent task-relevant target events (such as visual intervals) in another sensory modality. In the current experiments, using the visual Ternus display, we examined the roles of temporal reference, operationalized as the time information accumulated before the onset of the target event, and of attentional modulation in crossmodal temporal interaction. Specifically, we examined how the global time interval, the mean of the auditory inter-intervals, and the last interval in the auditory sequence assimilate and bias the subsequent percept of visual Ternus motion (element motion vs. group motion). We demonstrated that both the ensemble (geometric) mean and the last interval in the auditory sequence bias the percept of visual motion: a longer mean (or last) interval elicited more reports of group motion, whereas a shorter mean (or last) auditory interval gave rise to a more dominant percept of element motion. Importantly, observers showed dynamic adaptation to the temporal reference of crossmodal assimilation: when the target visual Ternus stimuli were separated from the preceding sound sequence by a long gap interval, the assimilation effect of the ensemble mean was reduced. Our findings suggest that crossmodal assimilation relies on a suitable temporal reference at the adaptation level and reveal a general temporal perceptual grouping principle underlying complex audiovisual interactions in everyday dynamic situations.
Affiliation(s)
- Lihan Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
38
Comparisons of visual attention in school-age children with cochlear implants versus hearing peers and normative data. Hear Res 2018; 359:91-100. [PMID: 29370963 DOI: 10.1016/j.heares.2018.01.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/21/2017] [Revised: 12/26/2017] [Accepted: 01/04/2018] [Indexed: 11/20/2022]
Abstract
OBJECTIVE Previous research has found that preschoolers with hearing loss have worse visual attention and elevated rates of behavior problems when compared to typically hearing peers (Barker et al., 2009). However, little is known about these deficits in school-age children with cochlear implants (CIs). We evaluated visual selective attention in school-age children with CIs and hearing peers and examined the link between visual attention and behavior problems. METHOD Data were drawn from the Childhood Development after Cochlear Implantation (CDaCI) study, the largest longitudinal, multi-site study of children with CIs. Visual attention was measured using d prime (d') on a continuous performance test (the Gordon CPT), which requires participants to watch a stream of digits and press a button after seeing the target (a 9 following a 1). The CPT captures the probability of a hit (pressing the button for a target) vs. a false alarm (pressing the button for a non-target) while accounting for chance responding. In addition, predictors of visual attention were examined, including IQ (using Processing Speed and Perceptual Reasoning on the WISC-IV), age at implantation, gender, and device management. Externalizing problems were assessed using parent report on the BASC-2. Data were taken from the 60-month post-implantation visit. RESULTS Children with CIs (n = 106) showed significantly worse visual selective attention than hearing peers. The difference in d' was driven by higher rates of false alarms. In the CI group, the Processing Speed Index on the WISC was correlated with total omissions, total commissions, and d'. Within the CI group, d' significantly predicted parent-reported externalizing behavior problems, a finding driven primarily by elevated Hyperactivity scores. CONCLUSION Children with CIs continue to display deficits in visual attention when compared to their hearing peers. Despite improvements in oral language, these attention problems have critical implications for academic performance and social competence. Cochlear implant teams currently do not focus on these other dimensions of development and thus may not be positioned to address them. Assessment of attention and behavior should be incorporated into routine annual visits soon after implant surgery, and remediation of these deficits should be included in early intervention programs.
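The d' computation described in the METHOD above is standard signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate, so a liberal response bias (many false alarms) lowers sensitivity even when hits stay high. A minimal Python sketch, with the caveat that the rate-clamping rule is one common convention and not necessarily the CDaCI study's exact procedure:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate) for a go/no-go CPT.

    Rates are clamped away from 0 and 1 (by half a trial) so the
    inverse normal CDF stays finite on perfect performance.
    """
    n_targets = hits + misses
    n_lures = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_targets, 0.5 / n_targets),
                   1 - 0.5 / n_targets)
    fa_rate = min(max(false_alarms / n_lures, 0.5 / n_lures),
                  1 - 0.5 / n_lures)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, two children with identical hit rates of 0.9 but false-alarm rates of 0.1 versus 0.3 come out at d' of roughly 2.56 versus 1.81, which matches the pattern the study reports: the group difference in d' was driven by false alarms, not misses.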
39
Kaganovich N. Sensitivity to Audiovisual Temporal Asynchrony in Children With a History of Specific Language Impairment and Their Peers With Typical Development: A Replication and Follow-Up Study. J Speech Lang Hear Res 2017; 60:2259-2270. [PMID: 28715546 PMCID: PMC5829802 DOI: 10.1044/2017_jslhr-l-16-0327] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Received: 08/10/2016] [Revised: 01/26/2017] [Accepted: 04/05/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE Earlier, my colleagues and I showed that children with a history of specific language impairment (H-SLI) are significantly less able to detect audiovisual asynchrony compared with children with typical development (TD; Kaganovich & Schumaker, 2014). Here, I first replicate this finding in a new group of children with H-SLI and TD and then examine the relationship among audiovisual function, attention skills, and language in a combined pool of children. METHOD The stimuli were a pure tone and an explosion-shaped figure. Stimulus onset asynchrony (SOA) varied from 0 to 500 ms. Children pressed one button for perceived synchrony and another for asynchrony. I measured the number of synchronous perceptions at each SOA and calculated children's temporal binding windows. I then conducted multiple regressions to determine whether audiovisual processing and attention can predict language skills. RESULTS As in the earlier study, children with H-SLI perceived asynchrony significantly less frequently than children with TD at SOAs of 400-500 ms. Their temporal binding windows were also larger. Temporal precision and attention predicted 23%-37% of children's language ability. CONCLUSIONS Audiovisual temporal processing is impaired in children with H-SLI. The degree of this impairment is a predictor of language skills. Once understood, the mechanisms underlying this deficit may become a new focus for language remediation.
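The temporal binding window calculated above is commonly summarized as the width of the SOA range over which "synchronous" responses stay at or above a criterion (often 50%). The sketch below recovers that width by linear interpolation between sampled SOAs; it is an illustrative simplification under assumed conventions (negative SOAs for auditory-lead trials, positive for visual-lead), not the study's fitting procedure, which would more typically fit a psychometric curve to each side of the window:

```python
def _crossing(x0, y0, x1, y1, c):
    # SOA at which the segment (x0, y0)-(x1, y1) crosses proportion c
    return x0 + (c - y0) * (x1 - x0) / (y1 - y0)

def binding_window(soas, p_sync, criterion=0.5):
    """Width (ms) of the SOA range where the proportion of
    'synchronous' judgments is at or above the criterion."""
    pts = sorted(zip(soas, p_sync))
    left = right = None
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if y0 < criterion <= y1:      # rising edge: auditory-lead boundary
            left = _crossing(x0, y0, x1, y1, criterion)
        if y0 >= criterion > y1:      # falling edge: visual-lead boundary
            right = _crossing(x0, y0, x1, y1, criterion)
    if left is None or right is None:
        raise ValueError("responses never cross the criterion on both sides")
    return right - left
```

On toy data such as SOAs of (-400, -200, 0, 200, 400) ms with synchronous-response proportions (0.1, 0.4, 0.95, 0.6, 0.2), the window comes out to roughly 414 ms. A larger value, as reported for the H-SLI group, means asynchrony goes unnoticed over a wider range of SOAs.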
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
- Department of Psychological Sciences, Purdue University, West Lafayette, IN
40
Smith E, Zhang S, Bennetto L. Temporal synchrony and audiovisual integration of speech and object stimuli in autism. Res Autism Spectr Disord 2017; 39:11-19. [PMID: 30220908 PMCID: PMC6135104 DOI: 10.1016/j.rasd.2017.04.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Indexed: 05/08/2023]
Abstract
BACKGROUND Individuals with Autism Spectrum Disorders (ASD) have been shown to have multisensory integration deficits, which may lead to problems perceiving complex, multisensory environments. For example, understanding audiovisual speech requires integration of visual information from the lips and face with auditory information from the voice, and audiovisual speech integration deficits can lead to impaired understanding and comprehension. While there is strong evidence for an audiovisual speech integration impairment in ASD, it is unclear whether this impairment is due to low-level perceptual processes that affect all types of audiovisual integration or whether it is specific to speech processing. METHOD Here, we measure audiovisual integration of basic speech (i.e., consonant-vowel utterances) and object stimuli (i.e., a bouncing ball) in adolescents with ASD and well-matched controls. We calculate a temporal window of integration (TWI) from each individual's ability to identify which of two videos (one temporally aligned and one misaligned) matched the auditory stimulus. The TWI measures tolerance for temporal asynchrony between the auditory and visual streams and is an important feature of audiovisual perception. RESULTS While controls showed similar tolerance of asynchrony for the simple speech and object stimuli, individuals with ASD did not. Specifically, individuals with ASD showed less tolerance of asynchrony for speech stimuli compared to object stimuli. In individuals with ASD, decreased tolerance for asynchrony in speech stimuli was associated with higher ratings of autism symptom severity. CONCLUSIONS These results suggest that audiovisual perception in ASD may vary for speech and object stimuli beyond what can be accounted for by stimulus complexity.
Affiliation(s)
- Elizabeth Smith
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY USA
- Shouling Zhang
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY USA
- Loisa Bennetto
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY USA
41
Sartorato F, Przybylowski L, Sarko DK. Improving therapeutic outcomes in autism spectrum disorders: Enhancing social communication and sensory processing through the use of interactive robots. J Psychiatr Res 2017; 90:1-11. [PMID: 28213292 DOI: 10.1016/j.jpsychires.2017.02.004] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Received: 11/16/2016] [Accepted: 02/03/2017] [Indexed: 11/20/2022]
Abstract
For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD.
Affiliation(s)
- Felippe Sartorato
- Osteopathic Medical Student (OMS-IV), Edward Via College of Osteopathic Medicine (VCOM), Spartanburg, SC, USA
- Leon Przybylowski
- Osteopathic Medical Student (OMS-IV), Edward Via College of Osteopathic Medicine (VCOM), Spartanburg, SC, USA
- Diana K Sarko
- Department of Anatomy, Southern Illinois University School of Medicine, Carbondale, IL, USA; Department of Psychology, Southern Illinois University School of Medicine, Carbondale, IL, USA.
42
Alterations in audiovisual simultaneity perception in amblyopia. PLoS One 2017; 12:e0179516. [PMID: 28598996 PMCID: PMC5466335 DOI: 10.1371/journal.pone.0179516] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Received: 02/03/2017] [Accepted: 05/30/2017] [Indexed: 11/19/2022]
Abstract
Amblyopia is a developmental visual impairment that is increasingly recognized to affect higher-level perceptual and multisensory processes. To further investigate the audiovisual (AV) perceptual impairments associated with this condition, we characterized the temporal interval in which asynchronous auditory and visual stimuli are perceived as simultaneous 50% of the time (i.e., the AV simultaneity window). Adults with unilateral amblyopia (n = 17) and visually normal controls (n = 17) judged the simultaneity of a flash and a click presented with both eyes viewing. The stimulus onset asynchrony (SOA) varied from 0 ms to 450 ms for auditory-lead and visual-lead conditions. A subset of participants with amblyopia (n = 6) was tested monocularly. Compared to the control group, the auditory-lead side of the AV simultaneity window was widened by 48 ms (36%; p = 0.002), whereas the visual-lead side was widened by 86 ms (37%; p = 0.02). The overall mean window width was 500 ms, compared to 366 ms among controls (37% wider; p = 0.002). Among participants with amblyopia, the simultaneity window parameters were unchanged by viewing condition, but subgroup analysis revealed differential effects on the parameters by amblyopia severity, etiology, and foveal suppression status. Possible mechanisms to explain these findings include visual temporal uncertainty, interocular perceptual latency asymmetry, and disruption of normal developmental tuning of sensitivity to audiovisual asynchrony.
43
Stevenson RA, Baum SH, Krueger J, Newhouse PA, Wallace MT. Links between temporal acuity and multisensory integration across life span. J Exp Psychol Hum Percept Perform 2017; 44:106-116. [PMID: 28447850 DOI: 10.1037/xhp0000424] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Indexed: 11/08/2022]
Abstract
The temporal relationship between individual pieces of information from the different sensory modalities is one of the stronger cues to integrate such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Of importance, throughout development, temporal acuity of simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. However, in the aging population, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, temporal acuity was not able to predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it is unable to account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, Brain and Mind Institute, University of Western Ontario
- Sarah H Baum
- Department of Psychology, University of Washington
- Paul A Newhouse
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center
44
Bremner AJ. Multisensory Development: Calibrating a Coherent Sensory Milieu in Early Life. Curr Biol 2017; 27:R305-R307. [DOI: 10.1016/j.cub.2017.02.055] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Indexed: 10/19/2022]
45
Bremner AJ, Spence C. The Development of Tactile Perception. Adv Child Dev Behav 2017; 52:227-268. [PMID: 28215286 DOI: 10.1016/bs.acdb.2016.12.002] [Citation(s) in RCA: 52] [Impact Index Per Article: 7.4] [Indexed: 11/26/2022]
Abstract
Touch is the first of our senses to develop, providing us with the sensory scaffold on which we come to perceive our own bodies and our sense of self. Touch also provides us with direct access to the external world of physical objects, via haptic exploration. Furthermore, a recent area of interest in tactile research across studies of developing children and adults is its social function, mediating interpersonal bonding. Although there are a range of demonstrations of early competence with touch, particularly in the domain of haptics, the review presented here indicates that many of the tactile perceptual skills that we take for granted as adults (e.g., perceiving touches in the external world as well as on the body) take some time to develop in the first months of postnatal life, likely as a result of an extended process of connection with the other sense modalities that provide new kinds of information from birth (e.g., vision and audition). Here, we argue that because touch is of such fundamental importance across a wide range of social and cognitive domains, it should be placed much more centrally in the study of early perceptual development than it currently is.
Affiliation(s)
- A J Bremner
- Goldsmiths, University of London, London, United Kingdom.
- C Spence
- University of Oxford, Oxford, United Kingdom
46
Noel JP, De Niear M, Van der Burg E, Wallace MT. Audiovisual Simultaneity Judgment and Rapid Recalibration throughout the Lifespan. PLoS One 2016; 11:e0161698. [PMID: 27551918 PMCID: PMC4994953 DOI: 10.1371/journal.pone.0161698] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.0] [Received: 03/06/2016] [Accepted: 08/10/2016] [Indexed: 11/18/2022]
Abstract
Multisensory interactions convey a well-established array of perceptual and behavioral benefits. One of the key features of multisensory interactions is the temporal structure of the stimuli combined. In an effort to better characterize how temporal factors influence multisensory interactions across the lifespan, we examined audiovisual simultaneity judgment and the degree of rapid recalibration to paired audiovisual stimuli (Flash-Beep and Speech) in a sample of 220 participants ranging from 7 to 86 years of age. Results demonstrate a surprisingly protracted developmental time-course for both audiovisual simultaneity judgment and rapid recalibration, with neither reaching maturity until well into adolescence. Interestingly, correlational analyses revealed that audiovisual simultaneity judgments (i.e., the size of the audiovisual temporal window of simultaneity) and rapid recalibration significantly co-varied as a function of age. Together, our results represent the most complete description of age-related changes in audiovisual simultaneity judgments to date, as well as being the first to describe changes in the degree of rapid recalibration as a function of age. We propose that the developmental time-course of rapid recalibration scaffolds the maturation of more durable audiovisual temporal representations.
Affiliation(s)
- Jean-Paul Noel
- Neuroscience Graduate Program, Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Matthew De Niear
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Medical Scientist Training Program, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Erik Van der Burg
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- School of Psychology, University of Sydney, Sydney, Australia
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University Medical School, Vanderbilt University, Nashville, TN, 37235, United States of America
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, 37235, United States of America
- Department of Psychology, Vanderbilt University, Nashville, TN, 37235, United States of America
47
Hillock-Dunn A, Grantham DW, Wallace MT. The temporal binding window for audiovisual speech: Children are like little adults. Neuropsychologia 2016; 88:74-82. [DOI: 10.1016/j.neuropsychologia.2016.02.017] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Received: 07/13/2015] [Revised: 12/23/2015] [Accepted: 02/22/2016] [Indexed: 10/22/2022]
48
Chen YC, Shore DI, Lewis TL, Maurer D. The development of the perception of audiovisual simultaneity. J Exp Child Psychol 2016; 146:17-33. [DOI: 10.1016/j.jecp.2016.01.010] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Received: 06/03/2015] [Revised: 01/09/2016] [Accepted: 01/12/2016] [Indexed: 10/22/2022]
49
Lalonde K, Holt RF. Audiovisual speech perception development at varying levels of perceptual processing. J Acoust Soc Am 2016; 139:1713. [PMID: 27106318 PMCID: PMC4826374 DOI: 10.1121/1.4945590] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Received: 02/17/2015] [Revised: 01/04/2016] [Accepted: 03/25/2016] [Indexed: 06/05/2023]
Abstract
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.
Affiliation(s)
- Kaylah Lalonde
- Department of Speech and Hearing Sciences, Indiana University, 200 South Jordan Avenue, Bloomington, Indiana 47405, USA
| | - Rachael Frush Holt
- Department of Speech and Hearing Science, Ohio State University, 110 Pressey Hall, 1070 Carmack Road, Columbus, Ohio 43210, USA
50
Abstract
Temporal proximity is one of the key factors determining whether events in different modalities are integrated into a unified percept. Sensitivity to audiovisual temporal asynchrony has been studied in adults in great detail. However, how such sensitivity matures during childhood is poorly understood. We examined perception of audiovisual temporal asynchrony in 7- to 8-year-olds, 10- to 11-year-olds, and adults by using a simultaneity judgment task (SJT). Additionally, we evaluated whether nonverbal intelligence, verbal ability, attention skills, or age influenced children's performance. On each trial, participants saw an explosion-shaped figure and heard a 2-kHz pure tone. These occurred at the following stimulus onset asynchronies (SOAs): 0, 100, 200, 300, 400, and 500 ms. In half of all trials, the visual stimulus appeared first (VA condition), and in the other half, the auditory stimulus appeared first (AV condition). Both groups of children were significantly more likely than adults to perceive asynchronous events as synchronous at all SOAs exceeding 100 ms, in both VA and AV conditions. Furthermore, only adults exhibited a significant shortening of reaction time (RT) at long SOAs compared to medium SOAs. Sensitivities to the VA and AV temporal asynchronies showed different developmental trajectories, with 10- to 11-year-olds outperforming 7- to 8-year-olds at the 300- to 500-ms SOAs, but only in the AV condition. Lastly, age was the only predictor of children's performance on the SJT. These results provide an important baseline against which children with developmental disorders associated with impaired audiovisual temporal function, such as autism, specific language impairment, and dyslexia, may be compared.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive West Lafayette, IN 47907-2038
- Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2038